Ollama models not generating output and not invoking ollama models #1039
Comments
Same issue |
same |
same |
Any news? Same error here. |
I have tried:
As Ollama uses 11434, I have exposed 0.0.0.0:11434 with my server IP:11435. I have been trying for the past 4 weeks now. |
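A minimal sketch of that kind of exposure, assuming a non-Docker Ollama install (the 11435 mapping is only the example from the comment above, and <server-ip> is a placeholder):

```bash
# Bind Ollama to all interfaces instead of just 127.0.0.1
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# From the machine running bolt.diy, confirm the API is reachable
# (use 11435 only if you remapped the port on the server)
curl http://<server-ip>:11435/api/tags
```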
I've got it to work with Ollama (phi4) by (1) enabling both the Ollama and Google providers in settings, (2) selecting Google as the provider and Gemini 2.0 Flash in the model list, (3) adding my Google AI Studio API key, (4) restarting bolt.diy through npm run dev, and (5) disabling the Google provider again. I'm using Ollama in Docker and bolt.diy without Docker on Linux Mint 21.3. |
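A rough sketch of that setup; the docker run flags follow the standard ollama/ollama image usage, and phi4 is simply the model named above:

```bash
# Ollama in a container, with the default API port published to the host
docker run -d --name ollama -v ollama:/root/.ollama -p 11434:11434 ollama/ollama
docker exec -it ollama ollama pull phi4

# bolt.diy outside Docker, restarted after toggling providers in settings
npm run dev
```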
DEBUG api.chat Total message length: 2, words
This is the new error that I am getting. I will try running Ollama in Docker this week. |
I've had the same issue with Qwen and Ollama in bolt.diy since 0.0.1. It does work, but there are some nuances that need fixing in each iteration. I've attached a screenshot showing this working on 0.0.5.
|
@jyotisah00 did the comments above also fix this for you, so the issue can be closed? |
No, the comments are not helpful. I have done all of those things but am still not able to get an output. I tried last night with the new version; it says internal server error. I will post the output today. |
@jyotisah00 ok, thanks. I think it would be better to discuss/investigate this in the community, as this is most likely not a bug or feature request; it's more of a configuration/hosting thing. https://thinktank.ottomator.ai/c/bolt-diy/bolt-diy-issues-and-troubleshooting/22 Let me know if this works for you; then I would close this issue and we can investigate there (many more users are active in the community). |
Yes, that would be helpful. I will ask the community. |
Confirmation that Ollama is serving requests from the local LAN:
Running bolt.diy locally without Docker
Bolt GUI on localhost
Bolt settings: everything is disabled except Ollama, and the env file has been set.
I have been trying for the last few days to make it work and have tried multiple different fixes. In some cases the Ollama models get invoked but produce no output.
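For context, a minimal sketch of the checks and env entry meant here; the OLLAMA_API_BASE_URL name is assumed from bolt.diy's .env.example, and the model-list endpoint is Ollama's standard API:

```bash
# Confirm Ollama answers from the machine running bolt.diy
curl http://localhost:11434/api/tags

# In .env.local, point bolt.diy at Ollama
# (variable name assumed from bolt.diy's .env.example; adjust if it differs)
# OLLAMA_API_BASE_URL=http://localhost:11434

# Restart the dev server so the env change is picked up
npm run dev
```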
Link to the Bolt URL that caused the error
http://localhost:5173/
Steps to reproduce
Run the application locally, link your Ollama models, and try a prompt.
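As a sketch of that reproduction path (the model name is only an example):

```bash
# Pull a model for Ollama to serve (example model)
ollama pull qwen2.5-coder

# Start bolt.diy, then open http://localhost:5173/,
# select the Ollama provider and the pulled model, and send a prompt
npm run dev
```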
Expected behavior
The Ollama models should get invoked and Bolt should provide an answer to the prompt.
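To separate Ollama problems from bolt.diy problems, a direct generation call against Ollama's API can confirm the model itself produces output (the model name is only an example):

```bash
# Ask Ollama directly, bypassing bolt.diy entirely
curl http://localhost:11434/api/generate \
  -d '{"model": "qwen2.5-coder", "prompt": "Say hello", "stream": false}'
```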
Screen Recording / Screenshot
No response
Platform
Provider Used
No response
Model Used
No response
Additional context
No response