
validation wer of training mode and test mode are different. #64

Open

burin-n opened this issue Oct 16, 2020 · 3 comments

burin-n commented Oct 16, 2020

I trained the model and got a good validation WER.
However, I got very poor decoded results when I loaded that checkpoint and ran it in --test mode.

Do you have any suggestions on this?

@kouohhashi

I have a similar experience. Have you found a solution?


burin-n commented Nov 21, 2020

There was a problem with the saved checkpoint. I'm not sure what happened or how it happened.
As far as I could tell, the weights in latest.pth did not actually come from the last epoch.
I retrained the model and never saw this problem again.
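
A minimal sketch of how one might sanity-check what a checkpoint actually contains, assuming it is a standard PyTorch checkpoint dict (the key names "global_step" and "model" below are assumptions; print ckpt.keys() to see what this repo actually saves):

```python
# Sanity-check a saved checkpoint, assuming it is a standard PyTorch dict.
# The key names "global_step" and "model" are assumptions -- inspect
# ckpt.keys() first to see what this repo actually stores.
import torch

ckpt = torch.load("ckpt/latest.pth", map_location="cpu")
print(list(ckpt.keys()))  # discover the actual layout first

# If a step counter is stored, compare it against the training log to
# confirm the file really came from the last epoch.
if "global_step" in ckpt:
    print("checkpoint step:", ckpt["global_step"])

# To verify the weights on disk match the model that was just trained,
# compare a few tensors against the live model's state_dict, e.g.:
# for name, saved in ckpt["model"].items():
#     assert torch.equal(saved, model.state_dict()[name].cpu()), name
```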


WAIML commented Dec 6, 2024

Hi All,
Thanks to the author for this work!
I recently trained the models with LibriSpeech train-clean-100, and the WER was very low during training:
[INFO] Saved checkpoint (step = 80.0K, wer = 0.30) and status @ ckpt/Librispeech_subwords_1000/best_att.pth
However, when I tested with best_att.pth on test-clean or dev-clean, the performance was very poor: WERs were about 80%.

@kouohhashi Has the problem of the high test-set WER (about 80%) been resolved?
@burin-n After you retrained the model, did you get good performance, i.e., a low WER?

Are there any updates on the WER for this repo, or any solutions to this problem?
If this one does not work, do you have any recommendations for a simple ASR package that works?

Thank you very much for your helpful information!

Regards,
Willy
