Hi XR1988, I haven't worked on this since last year, but from what I remember I got OK results on small datasets. I would try training on full LibriSpeech first to confirm that your environment is working correctly, and then move on from there.
I'm having the same problem with both wav2vec-U 1.0 and 2.0. Even after training for over 50,000 steps, the UER and WER remain stuck around 90, and the output lengths on the validation set keep getting shorter.
If you are working on the 'inter' project, it may be that the 'inter' component is no longer supported. I've been cloning everything directly onto a cloud server, but that may not be necessary. It's likely only there to support pure CPU training (in practice, you can use both CPU and GPU by changing the configuration settings).
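For what it's worth, a minimal sketch of switching between CPU and GPU via standard fairseq hydra overrides, assuming the GAN training is launched with `fairseq-hydra-train` and the `config/gan`/`w2vu` config from the wav2vec-U README (the `task.data` path below is a placeholder):

```sh
# Sketch: force pure CPU training using fairseq's stock common.cpu flag.
fairseq-hydra-train --config-dir config/gan --config-name w2vu \
  task.data=/path/to/prepared/audio common.cpu=true

# Or run on a single GPU by limiting the distributed world size.
fairseq-hydra-train --config-dir config/gan --config-name w2vu \
  task.data=/path/to/prepared/audio \
  distributed_training.distributed_world_size=1
```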
🐛 Bug
https://github.com/facebookresearch/fairseq/blob/main/examples/wav2vec/unsupervised/scripts/prepare_audio_v2.sh
I believe prepare_audio_v2.sh is missing a line that learns the k-means model. This would go between the dump_mfcc and dump_km_label steps (see the sketch below).
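A minimal sketch of what the missing step might look like, assuming the HuBERT simple_kmeans helper scripts bundled with fairseq and placeholder variables (`$FAIRSEQ_ROOT`, `$tgt_dir`, `$train_split`, `$nshard`, `$dim`) analogous to those used elsewhere in prepare_audio_v2.sh; the exact paths and cluster count would need to match the rest of the script:

```sh
# Fit a k-means model on the dumped MFCC features so that dump_km_label.py
# has a model to assign cluster labels with. --percent -1 uses all data.
python "$FAIRSEQ_ROOT/examples/hubert/simple_kmeans/learn_kmeans.py" \
  "$tgt_dir/mfcc" "$train_split" "$nshard" \
  "$tgt_dir/mfcc/cls$dim" "$dim" --percent -1
```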