Ollama models not generating output and not invoking ollama models #1039

Closed
jyotisah00 opened this issue Jan 7, 2025 · 12 comments
Labels
question Further information is requested

Comments

@jyotisah00

jyotisah00 commented Jan 7, 2025

Confirmation that Ollama is serving requests from the local LAN:


Running Bolt.diy locally w/o docker


Bolt GUI on localhost


Bolt Settings

Every provider is disabled except Ollama, and the .env file has been set.

I have been trying for the last few days to make this work and have tried multiple different fixes. In some cases the Ollama models get invoked, but there is no output.
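
A quick way to narrow this down is to ask Ollama for its model list directly via GET /api/tags, which is the endpoint the dynamic model list is normally fetched from. Below is a minimal sketch (not part of the original report), assuming Node 18+ with global fetch; the fallback base URL is only an example and should match the OLLAMA_API_BASE_URL from the .env file.

// check-ollama-tags.ts: sanity-check that the configured base URL reaches Ollama.
// Assumes Node 18+ (global fetch); the fallback URL is only an example.
const baseUrl = process.env.OLLAMA_API_BASE_URL ?? "http://127.0.0.1:11434";

async function listModels(): Promise<void> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) throw new Error(`Ollama responded with HTTP ${res.status}`);
  const data = (await res.json()) as { models: { name: string }[] };
  console.log("Models visible from this host:", data.models.map((m) => m.name));
}

listModels().catch((err) => console.error("Cannot reach Ollama:", err));

If this lists the models but Bolt still produces no output, the failure is likely in the generate/chat call rather than in model discovery.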

Link to the Bolt URL that caused the error

http://localhost:5173/

Steps to reproduce

Run the application locally, link your Ollama models, and try a prompt.

Expected behavior

The Ollama models should get invoked and Bolt should provide an answer to the prompt.

Screen Recording / Screenshot

No response

Platform

  • OS: Windows
  • Browser: Edge, Chrome Canary, Firefox
  • Version: latest stable

Provider Used

No response

Model Used

No response

Additional context

No response

@MadisLemsalu

Same issue

@mjtechguy

same

@snapiz

snapiz commented Jan 11, 2025

same

@shmox75

shmox75 commented Jan 11, 2025

Any news? Same error here.

@jyotisah00
Author

jyotisah00 commented Jan 11, 2025

I have tried

  1. Ollama on server 1 + Bolt entirely on local Windows w/o Docker = failed (model list shows)
  2. Ollama on server 1 + Bolt entirely on local Windows in Docker = failed (model list shows)
  3. Bolt on server 1 w/o Docker + Ollama on server 2 + GUI on Windows = failed (model list doesn't show)
  4. Bolt on server 1 in Docker + Ollama on server 2 + GUI on Windows = failed (model list doesn't show)
  5. Bolt w/o Docker + Ollama (both on the same server) = failed (model list doesn't show)
  6. Bolt in Docker + Ollama (both on the same server) = failed (model list doesn't show)

As Ollama uses port 11434, I have exposed 0.0.0.0:11434 and mapped it to my server IP on port 11435 (a quick check for this is sketched at the end of this comment).

I have been trying for the past 4 weeks now.
Right now I am exploring Bolt with the OpenAI API and it works fine; locally it does not work at all.
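
One way to check whether the 11434 → 11435 remap actually forwards to Ollama is to request /api/tags on both addresses and compare. A minimal sketch, assuming Node 18+; both URLs are placeholders matching the setup described above.

// compare-ollama-urls.ts: confirm the port remap forwards to the same Ollama instance.
// Both URLs are placeholders taken from the setup described above.
const urls = ["http://127.0.0.1:11434", "http://<server-ip>:11435"];

async function main(): Promise<void> {
  for (const base of urls) {
    try {
      const res = await fetch(`${base}/api/tags`);
      const data = (await res.json()) as { models: { name: string }[] };
      console.log(base, "->", data.models.map((m) => m.name).join(", "));
    } catch (err) {
      console.error(base, "-> unreachable:", err);
    }
  }
}

main();

If only the 11434 address answers, Bolt is pointed at a port nothing is forwarding to, which would explain an empty model list.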

@anammari

I've got it to work with Ollama (phi4) by (1) enabling both the Ollama and Google providers in settings, (2) selecting Google as the provider and Gemini 2.0 Flash from the model list, (3) adding my Google AI Studio API key, (4) restarting bolt.diy via npm run dev, and (5) disabling the Google provider.

GUI in Chrome Canary:

bolt.diy in terminal:

I'm using Ollama in Docker and bolt.diy w/o docker on Linux Mint 21.3

@jyotisah00
Author

DEBUG api.chat Total message length: 2, words
INFO LLMManager Getting dynamic models for Ollama
INFO LLMManager Got 4 dynamic models for Ollama
INFO stream-text Sending llm call to Ollama with model llama2:latest
DEBUG Ollama Base Url used: http://192.168.0.153:11435
ERROR api.chat AI_RetryError: Failed after 3 attempts. Last error: Internal Server Error

This is the new error that I am getting. I will try running Ollama in Docker this week.
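
To see where that "Internal Server Error" comes from, one option is to replay the call outside Bolt, straight against the base URL and model shown in the log above; on failure Ollama normally returns a JSON error body with the actual cause (for example a model-load or memory problem). A sketch assuming Node 18+:

// replay-generate.ts: send the same model straight to Ollama to see the raw error body.
// Base URL and model name are taken from the log above.
const baseUrl = "http://192.168.0.153:11435";

async function main(): Promise<void> {
  const res = await fetch(`${baseUrl}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama2:latest", prompt: "Say hello", stream: false }),
  });
  console.log(res.status, await res.text()); // on failure Ollama usually answers {"error": "..."}
}

main().catch(console.error);

If this call also returns a 500, the problem is on the Ollama side rather than in bolt.diy.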

@elysi

elysi commented Jan 14, 2025

I've had the same issue with QWEN and Ollama with Bolt.diy since 0.0.1. It does work, but there are some nuances that need fixing in each iteration. I've attached a screenshot showing this working on 0.0.5.

  1. Do not set the Ollama Base URL in the Provider section via the Bolt WebGUI. (If you have, the step below overwrites it, so don't feel you need to remove that configuration.)
  2. Set the Base URL via the .env.local file in the root of the bolt.diy folder: run mv .env.example .env.local, then edit the file with vi/nano and set the URL there.
  3. Do not use localhost or your machine's LAN IP address (private addresses starting with 192.168.x, 10.x, or 172.16-31.x). There are known issues with those addresses and IPv6; set your OLLAMA_API_BASE_URL to http://127.0.0.1:11434.
  4. Set your DEFAULT_NUM_CTX to 32768 (see the sketch after this list for what these two settings look like at the Ollama API level).
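
Roughly speaking, OLLAMA_API_BASE_URL and DEFAULT_NUM_CTX end up as the base URL and the num_ctx option on the chat request sent to Ollama; how bolt.diy wires them internally may differ. The sketch below (Node 18+, model name is just an example) shows the equivalent /api/chat call so you can confirm the larger context works outside Bolt.

// chat-with-num-ctx.ts: rough Ollama-level equivalent of OLLAMA_API_BASE_URL + DEFAULT_NUM_CTX.
const baseUrl = "http://127.0.0.1:11434"; // OLLAMA_API_BASE_URL (step 3)
const numCtx = 32768;                     // DEFAULT_NUM_CTX (step 4)

async function main(): Promise<void> {
  const res = await fetch(`${baseUrl}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "qwen2.5-coder:7b", // example model; use a name from `ollama list`
      messages: [{ role: "user", content: "Reply with OK" }],
      stream: false,
      options: { num_ctx: numCtx }, // larger context so Bolt's long system prompts fit
    }),
  });
  console.log(res.status, await res.text());
}

main().catch(console.error);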


@leex279
Collaborator

leex279 commented Jan 16, 2025

@jyotisah00 did the comments above fix this for you as well, so the issue can be closed?

@leex279 leex279 added the question Further information is requested label Jan 16, 2025
@jyotisah00
Author

No, the comments are not helpful. I have done all of those things but am still not able to get any output. I tried last night with the new version; it says internal server error. I will post the output today.

@leex279
Collaborator

leex279 commented Jan 17, 2025

@jyotisah00 OK, thanks. I think it would be better to discuss/investigate this in the community, as this is most likely not a bug or missing feature; it's more of a configuration/hosting matter.

https://thinktank.ottomator.ai/c/bolt-diy/bolt-diy-issues-and-troubleshooting/22

Let me know if that works for you; then I would close this issue and we can investigate there (there are also many more users active in the community).

@jyotisah00
Author

jyotisah00 commented Jan 17, 2025 via email

@leex279 leex279 closed this as completed Jan 17, 2025