I trained the model and got a good validation WER.
However, I got very poor decoded results when I tried to load that checkpoint and ran it in --test mode.
Do you have any suggestion on this?
There is a problem with the saved checkpoint. I'm not sure what happened or how it happened.
As far as I can tell, the weights in latest.pth did not actually come from the last epoch.
I retrained the model and never found this problem again.
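For anyone hitting the same symptom, one way to rule out a stale checkpoint is to compare the weights on disk against the in-memory model right after saving. A minimal sketch (assuming a PyTorch checkpoint that stores the state dict either directly or under a `"model"` key; adjust the key for your repo's save format):

```python
import torch

def state_dicts_match(model, ckpt_path, atol=0.0):
    """Return True if the checkpoint on disk matches the model's current weights.

    Assumes the file holds a state dict directly, or a dict with a
    'model' key (both layouts are common; this is an assumption, not
    the repo's documented format).
    """
    ckpt = torch.load(ckpt_path, map_location="cpu")
    saved = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt
    current = model.state_dict()
    if saved.keys() != current.keys():
        return False
    return all(
        torch.allclose(current[k].cpu(), saved[k], atol=atol)
        for k in current
    )
```

Calling this immediately after the training loop writes latest.pth should return True; if it returns False, the save path is being overwritten or the wrong object is being serialized.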
Hi All,
Thanks for the author's work!
I recently trained the models on LibriSpeech train-clean-100, and the WER was very low during training:
INFO] Saved checkpoint (step = 80.0K, wer = 0.30) and status @ ckpt/Librispeech_subwords_1000/best_att.pth
However, when I tested best_att.pth on test-clean or dev-clean, the performance was very poor: WERs were around 80%.
@kouohhashi Has the problem of high test WER (about 80%) been resolved? @burin-n After you retrained the model, did you get good performance, i.e., low WER?
Are there any updates on the WER for this repo, or any solutions to this problem?
If this one doesn't work out, do you have any recommendations for a simple ASR package that does work?