[Bug Report] Load model problem #800
Do you have any way to get access to Hugging Face? This is a known issue: we currently download the config from Hugging Face even when a config is passed in (#754). It will be patched at some point, but the easiest workaround today is to make sure you have access to Hugging Face. If that is not possible for you, let me know.
I am trying to load CodeLlama and getting the same error. I have logged in using the Hugging Face CLI and have access to the model, since I can use it normally without TransformerLens. Any idea what is happening? Code:
Error:
Sorry, I can't load models online from Hugging Face due to environment constraints. I want to load the model from a local copy. Is there a good solution? @bryce13950
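One possible workaround is to force the Hugging Face libraries into offline mode and hand TransformerLens an already-loaded local model. This is a sketch, not a confirmed fix from this thread: the `hf_model`/`tokenizer` arguments to `HookedTransformer.from_pretrained` and the local checkpoint path are assumptions, and as noted above, some versions may still attempt a config download (#754).

```python
import os

# Sketch: force Hugging Face libraries into offline mode so no network
# calls are attempted. These environment variables must be set before
# transformers / transformer_lens are imported.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

# With a local checkpoint directory (path below is hypothetical), the model
# can then be loaded from disk and passed to TransformerLens:
#
# from transformers import AutoModelForCausalLM, AutoTokenizer
# from transformer_lens import HookedTransformer
#
# local_path = "/path/to/local/gpt2-xl"  # hypothetical local directory
# hf_model = AutoModelForCausalLM.from_pretrained(local_path)
# tokenizer = AutoTokenizer.from_pretrained(local_path)
# model = HookedTransformer.from_pretrained(
#     "gpt2-xl", hf_model=hf_model, tokenizer=tokenizer
# )

print(os.environ["HF_HUB_OFFLINE"])  # → 1
```

Whether this is sufficient depends on the version of TransformerLens in use; until #754 is patched, the library may still try to fetch the config from the Hub even when the model is supplied locally.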
Hello, I have run into a strange phenomenon that puzzles me.
I use the following code to load the GPT-2 XL model locally. It runs and loads normally in one Jupyter file, but when I use another script to load the model, it keeps trying to download it from the Hugging Face website, and my machine can't connect to Hugging Face.
Both Jupyter files use the same conda environment, and the running results are as follows:
The failing file reports the following error:
The code to load the model is as follows: