loss: nan, acc: 0.0 #125
Comments
Did the loss suddenly blow up at some step?
Turning off fp16 fixed it for me.
Try adding more data.
Hi, I ran into this problem too, but `use_fp16` in `TrainConfig` defaults to False. Is that what you mean by turning off fp16?
See #128.
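As background for the fp16 suggestion above (a minimal numeric sketch, not the project's actual training path): float16 has a maximum finite value of about 65504, so large activations or loss values overflow to inf, and subsequent arithmetic on inf (for example inf - inf) yields NaN, which then propagates into the loss. Formats with float32's exponent range, such as bfloat16, avoid this particular overflow.

```python
import numpy as np

# fp16's largest finite value is ~65504; anything bigger overflows to inf.
big = np.float16(1e5)
print(np.isinf(big))          # True

# Once an inf appears, common ops turn it into NaN, e.g. inf - inf:
print(np.isnan(big - big))    # True

# A format with fp32's exponent range represents the same magnitude finitely
# (numpy has no bfloat16, so float32 stands in here; bf16 behaves the same
# for this range).
print(np.isfinite(np.float32(1e5)))  # True
```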
System Info
none
Information
🐛 Describe the bug
Dear developers: I'm running the script https://github.com/X-LANCE/SLAM-LLM/blob/main/examples/asr_librispeech/scripts/finetune_whisper_large_linear_vicuna_7b.sh, but unlike the original setup, I replaced the LLM with Qwen2-1.5B, and training fails as shown below.
loss: nan, acc: 0.0
How can I continue the experiment? @ddlBoJack
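If the run is hitting an fp16 overflow, the workaround suggested in the comments is to disable fp16. A hypothetical sketch, assuming the SLAM-LLM scripts forward hydra-style overrides to the training entry point (the exact key path and override syntax are assumptions; verify them against the script and `TrainConfig` before use):

```shell
# Hypothetical override: check the actual config key in TrainConfig and in
# finetune_whisper_large_linear_vicuna_7b.sh before running.
bash examples/asr_librispeech/scripts/finetune_whisper_large_linear_vicuna_7b.sh \
    ++train_config.use_fp16=false
```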
Error logs
1
Expected behavior
1