How to use Mistral API for model mistral-medium #836
Comments
So, I tried pip install --upgrade litellm, which I believe solved the issue with the API call, but then I got a new error.
You need to run:
I tried your suggestion, but it failed. I have already deleted the API key shown here.
It should be:
Same error
So I tried changing the API base URL to 'https://api.mistral.ai/v1/chat/completions', but got the same error. Then I checked the litellm version, because Open Interpreter uses it to make the API call.
So I went to the litellm GitHub repo to check the latest version.
Basically, the litellm in this conda env (version 0.13.2) does not yet support the Mistral API; the first version that does is 1.15. Sure, I could pip install --upgrade litellm in this conda env, but I figure I would just get the error from my second post again.
So yeah...
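For reference, here is a minimal sketch (mine, not from the comments) of what the call should look like once a litellm version with Mistral support (1.15+, as noted above) is installed; the key and prompt are placeholders:

```python
# Sketch only: assumes litellm >= 1.15, where the "mistral/" provider prefix is recognized.
import os
import litellm

os.environ["MISTRAL_API_KEY"] = "sk-..."  # placeholder; use your real key

response = litellm.completion(
    model="mistral/mistral-medium",  # the prefix tells litellm which provider to route to
    messages=[{"role": "user", "content": "Say hello in one word."}],
    max_tokens=100,
    # api_base should normally not be needed; litellm already targets https://api.mistral.ai/v1
)
print(response.choices[0].message.content)
```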
Thank you so much @Squallpka1 and @Notnaton for finding this problem. This has been fixed. Let me know if it's still happening with this command, and if so we can reopen this issue:
Describe the bug
(oi) C:\Users\Rui Leonhart>interpreter --context_window 4000 --max_tokens 100 --model mistral/mistral-medium -y
Provider List: https://docs.litellm.ai/docs/providers
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
Traceback (most recent call last):
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\main.py", line 300, in completion
model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(model=model, custom_llm_provider=custom_llm_provider, api_base=api_base)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 1821, in get_llm_provider
raise e
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 1818, in get_llm_provider
raise ValueError(f"LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in
completion(model='huggingface/{model}',..)
Learn more: https://docs.litellm.ai/docs/providers")ValueError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in
completion(model='huggingface/mistral/mistral-medium',..)
Learn more: https://docs.litellm.ai/docs/providersDuring handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line 88, in run_code
File "C:\Anaconda\anaconda3\envs\oi\Scripts\interpreter.exe_main.py", line 7, in
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 21, in start_terminal_interface
start_terminal_interface(self)
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 304, in start_terminal_interface
interpreter.chat()
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 77, in chat
for _ in self._streaming_chat(message=message, display=display):
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 92, in _streaming_chat
yield from terminal_interface(self, message)
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\terminal_interface\terminal_interface.py", line 115, in terminal_interface
for chunk in interpreter.chat(message, display=False, stream=True):
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 113, in _streaming_chat
yield from self._respond()
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 148, in _respond
yield from respond(self)
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\respond.py", line 49, in respond
for chunk in interpreter._llm(messages_for_llm):
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\llm\convert_to_coding_llm.py", line 65, in coding_llm
for chunk in text_llm(messages):
^^^^^^^^^^^^^^^^^^
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\llm\setup_text_llm.py", line 154, in base_llm
return litellm.completion(**params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 962, in wrapper
raise e
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 899, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\timeout.py", line 53, in wrapper
result = future.result(timeout=local_timeout_duration)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Anaconda\anaconda3\envs\oi\Lib\concurrent\futures_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Anaconda\anaconda3\envs\oi\Lib\concurrent\futures_base.py", line 401, in __get_result
raise self._exception
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\timeout.py", line 42, in async_func
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\main.py", line 1403, in completion
raise exception_type(
^^^^^^^^^^^^^^^
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 3574, in exception_type
raise e
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 3556, in exception_type
raise APIError(status_code=500, message=str(original_exception), llm_provider=custom_llm_provider, model=model)
litellm.exceptions.APIError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in
completion(model='huggingface/mistral/mistral-medium',..)
Learn more: https://docs.litellm.ai/docs/providers
Reproduce
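The Reproduce field was left empty; a minimal sketch of how the underlying litellm error could be reproduced outside Open Interpreter, assuming litellm 0.13.x is the installed version as described in the comments above, might look like this:

```python
# Hypothetical reproduction, assuming the old litellm 0.13.x (without a "mistral" provider)
# is installed in the active environment.
from importlib.metadata import version
import litellm

print("litellm version:", version("litellm"))  # e.g. 0.13.2 in the failing conda env

try:
    litellm.completion(
        model="mistral/mistral-medium",
        messages=[{"role": "user", "content": "hi"}],
    )
except Exception as exc:
    # On 0.13.x this should surface the same "LLM Provider NOT provided" error
    # shown in the traceback above.
    print(type(exc).__name__, exc)
```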
Expected behavior
I just want to try the Mistral model because its API calls are a cheaper option. Perhaps the way I input things is wrong, but I believe this is a problem on the litellm side.
Screenshots
No response
Open Interpreter version
0.1.17
Python version
Python 3.11.4
Operating System name and version
Windows 10
Additional context
No response