More of a question than an issue, as nothing about this is really mentioned in the docs.

Version: 0.24.0.CR1

With the following settings (sketched below), I was hoping to be able to switch between ollama and openai with the same build, depending on whether I'm running locally or in a deployed environment. However, even when the `M1_CHAT_MODEL_PROVIDER=openai` variable is passed to the container (a Docker container, that is), the app keeps trying to use ollama.

Did I miss something, or is the model provider constrained to be set at build time? If that is the case, what is the recommended way to approach this?
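For reference, the configuration in question looks roughly like this. This is an illustrative sketch only: it assumes quarkus-langchain4j's named-model property scheme, with `m1` as the model name that the env variable maps onto, and the provider-specific property names are assumptions as well.

```properties
# application.properties (illustrative sketch)
# "m1" is a named model configuration; the env var M1_CHAT_MODEL_PROVIDER
# is meant to override the default provider at container start
quarkus.langchain4j.m1.chat-model.provider=${M1_CHAT_MODEL_PROVIDER:ollama}

# provider-specific settings for the same named model (property names assumed)
quarkus.langchain4j.openai.m1.api-key=${OPENAI_API_KEY:dummy}
quarkus.langchain4j.ollama.m1.chat-model.model-id=llama3.1
```

If the provider is indeed resolved at build time (as the behaviour suggests), a value supplied only when the container starts cannot take effect.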
Thanks @geoand, what is the recommendation on how to approach this? Can I somehow register all of them with different names and choose one based on another config property?
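Something along these lines is what I have in mind. It is a rough sketch, not a verified approach: it assumes the `@ModelName` qualifier from quarkus-langchain4j, two named model configurations `local` (ollama) and `hosted` (openai) both baked in at build time, and a hypothetical runtime property `app.chat-provider` doing the selection.

```java
import dev.langchain4j.model.chat.ChatLanguageModel;
import io.quarkiverse.langchain4j.ModelName;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class ChatModelSelector {

    // both models are registered at build time, each under its own name
    @Inject
    @ModelName("local")   // e.g. configured with provider=ollama
    ChatLanguageModel local;

    @Inject
    @ModelName("hosted")  // e.g. configured with provider=openai
    ChatLanguageModel hosted;

    // hypothetical runtime property that picks between the two beans
    @ConfigProperty(name = "app.chat-provider", defaultValue = "local")
    String provider;

    public ChatLanguageModel chatModel() {
        return "hosted".equals(provider) ? hosted : local;
    }
}
```

Both providers would then be part of the same build, and only the choice between the two injected beans would happen at runtime.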