

bug: Chat Hangs Indefinitely When Using Any Filter #100

Open
Highsight opened this issue Jun 19, 2024 · 1 comment

Comments

@Highsight

Whenever I attempt to add any sort of filter to any model in OpenWebUI, chatting with the model results in an indefinite hang. Pipelines and manifolds do not appear to have this issue, only filters. The hang lasts until I shut down the Pipelines Docker container, at which point the model responds without the filter info. I have tried this with multiple filters, including detoxify_filter_pipeline.py, llm_translate_filter_pipeline.py, and home_assistant_filter.py.

I am running pipelines:latest-cuda and open-webui:dev-cuda. I have also tried open-webui:cuda and pipelines:latest with no difference. My machine uses an NVIDIA RTX 2060. I am using the llama3:8b model.

For the Home Assistant Filter I have the following values set:

  • Pipelines: *
  • Priority: 0
  • Openai Api Base Url: http://host.docker.internal:9099
  • Openai Api Key: 0p3n-w3bu!
  • Task Model: llama3:8b
  • Template:
    Use the following context as your learned knowledge, inside <context></context> XML tags.
    <context>
    {{CONTEXT}}
    </context>
    When answering the user:
    - If you don't know, just say that you don't know.
    - If you are not sure, ask for clarification.
    Avoid mentioning that you obtained the information from the context.
    Answer according to the language of the user's question.
  • Home Assistant Url: <Private>
  • Home Assistant Token: <Private>

I'd appreciate any assistance in figuring this out; the Filter technology seems very interesting and I'd like to get to know it better.

@Highsight Highsight changed the title Chat Hangs Indefinitely When Using Any Filter bug: Chat Hangs Indefinitely When Using Any Filter Jun 19, 2024
@pressdarling

  • Openai Api Base Url: http://host.docker.internal:9099
  • Openai Api Key: 0p3n-w3bu!
  • Task Model: llama3:8b

@Highsight I'm not an expert at this but if you've defined the filters' OpenAI API URL as the Pipelines base URL, then you're creating a loop. You're not running llama3:8b within Pipelines!
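The loop can be sketched like this (hypothetical, simplified stand-ins, not the actual Pipelines code): the filter's "task model" request goes back to the Pipelines server, which runs the filter again before it can answer, so no request ever completes.

```python
# Toy model of the suspected feedback loop. All names here are illustrative
# stand-ins, not real Pipelines APIs.

def pipelines_server(message, depth=0, max_depth=5):
    """Stand-in for the Pipelines server: runs the filter, then the model."""
    if depth > max_depth:
        raise RecursionError("request never completes - the chat hangs")
    filtered = filter_outlet(message, depth)
    return f"model reply to: {filtered}"

def filter_outlet(message, depth):
    """Stand-in for a filter whose Openai Api Base Url points back at
    Pipelines itself (http://host.docker.internal:9099) instead of at the
    server that actually hosts llama3:8b."""
    # The filter calls its "task model", but that URL is Pipelines, so the
    # call re-enters pipelines_server and the cycle repeats.
    return pipelines_server(f"[task-model pass] {message}", depth + 1)

try:
    pipelines_server("hello")
except RecursionError as e:
    print(e)
```

In the real setup there is no recursion limit, which is why the chat hangs instead of erroring out.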

Try adding another OpenAI API URL/key pair, and set the second OpenAI Base URL to the value of OLLAMA_BASE_URL, e.g. http://host.docker.internal:11434, http://ollama:11434, or http://localhost:11434, depending on how and where you're hosting Ollama (see the troubleshooting guide). This new pair is separate from both the Pipelines connection (which you have there now) and the Ollama one.
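To find out which of those candidate base URLs is right for your setup, a quick probe of Ollama's /api/tags endpoint from the same network as the filter can help (a hedged sketch; the URL list is illustrative and should match wherever you actually run Ollama):

```python
# Probe candidate Ollama base URLs to see which one answers.
import urllib.request
import urllib.error

CANDIDATES = [
    "http://host.docker.internal:11434",  # Ollama on the Docker host
    "http://ollama:11434",                # Ollama as a sibling compose service
    "http://localhost:11434",             # Ollama in the same container/host
]

def reachable(base_url, timeout=3):
    """Return True if an Ollama server answers at base_url."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False

for url in CANDIDATES:
    print(url, "->", "ok" if reachable(url) else "no response")
```

Whichever URL reports ok is the one to use for the filter's second OpenAI Base URL.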

While I don't know if this will actually fix it - I've had more pressing things to do than play with the pipeline filter configurations - it should at least stop the whole thing from hanging. I think there are enough breadcrumbs in the code of those files if you want to figure out a more robust solution.
