./download_models.sh and run_experiments.py: Torch invalid memory size - maybe an overflow? #47
Comments
I can solve this by updating pytorch-pretrained-bert to transformers, but that leads to some import errors, for example with allennlp. So I also updated allennlp, and that worked until one tries to run the experiments. Using transformers instead of pytorch-pretrained-bert produces many exceptions in the code due to slightly different syntax and so on, so it's really an overhead. If somebody knows how to get LAMA working with the old pytorch-pretrained-bert package, let me know. I even tried to change the CUDA version, but still got the overflow error from above.
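For what it's worth, a compatibility shim can soften the migration in the places where only the import path changed between the two packages. This is just a sketch: it assumes the classes are exported under the same names by both libraries (true for e.g. `BertTokenizer` and `BertForMaskedLM`); calls whose signatures actually differ between the libraries still need the manual fixes described above.

```python
# Try the newer `transformers` package first, then fall back to the legacy
# `pytorch-pretrained-bert` package if that is the one installed.
try:
    from transformers import BertTokenizer, BertForMaskedLM
    BACKEND = "transformers"
except ImportError:
    try:
        from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM
        BACKEND = "pytorch-pretrained-bert"
    except ImportError:
        BertTokenizer = BertForMaskedLM = None
        BACKEND = None  # neither package is available in this environment

print("BERT backend:", BACKEND)
```

This only papers over the renamed imports; it does not resolve the syntax differences in the rest of the LAMA code.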
Okay.
Hi! @blrtvs
@Zjh-819 great! Thanks, I will try it. It would be awesome if it works :)
Worked for me. Good job!
Hi,
when I run ./download_models.sh, I get the following exception:
I tried different (newer) versions of torch, but that led to the exact same dimension error that JXZe reports in issue #32:
But in #32 there is no recommendation how to fix this dimension error.
All the packages from requirements.txt are installed correctly, except that I have overrides==3.1.0 instead of overrides==6.1.0: the import "from allennlp.modules.elmo import _ElmoBiLm" in elmo_connector.py only worked after downgrading to 3.1.0. I also tried to skip the vocab-building part and downloaded the provided common_vocab.txt files from the README, but the same "Torch: invalid memory size -- maybe an overflow?" error occurs when running run_experiments.py.
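In case it helps others reproduce my setup: a small check of the installed overrides version can be done with the standard library (a sketch assuming Python >= 3.8, where importlib.metadata is available):

```python
# Report which `overrides` version is installed; in my environment the
# allennlp `_ElmoBiLm` import only worked with 3.1.0, not with 6.1.0.
import importlib.metadata as md

try:
    version = md.version("overrides")
except md.PackageNotFoundError:
    version = None  # package not installed at all

print("overrides version:", version)
if version is not None and not version.startswith("3."):
    print("warning: the _ElmoBiLm import may fail; try overrides==3.1.0")
```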
Does anybody have an idea how to fix this?