
How to use Mistral API for model mistral-medium #836

Closed
Squallpka1 opened this issue Dec 16, 2023 · 6 comments
Labels
Bug Something isn't working

Comments

@Squallpka1

Describe the bug

(oi) C:\Users\Rui Leonhart>interpreter --context_window 4000 --max_tokens 100 --model mistral/mistral-medium -y

In my C: drive, i have image "ComfyUI_00032_.png". I want the first half of the image.

Provider List: https://docs.litellm.ai/docs/providers

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

    Python Version: 3.11.4
    Pip Version: 23.3.1
    Open-interpreter Version: cmd:Interpreter, pkg: 0.1.17
    OS Version and Architecture: Windows-10-10.0.19045-SP0
    CPU Info: AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD
    RAM Info: 15.95 GB, used: 7.54, free: 8.40


    Interpreter Info
    Vision: False
    Model: mistral/mistral-medium
    Function calling: False
    Context window: 4000
    Max tokens: 100

    Auto run: True
    API base: None
    Local: False

    Curl output: Not local

Traceback (most recent call last):
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\main.py", line 300, in completion
model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(model=model, custom_llm_provider=custom_llm_provider, api_base=api_base)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 1821, in get_llm_provider
raise e
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 1818, in get_llm_provider
raise ValueError(f"LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/{model}',..) Learn more: https://docs.litellm.ai/docs/providers")
ValueError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/mistral/mistral-medium',..) Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line 88, in run_code
File "C:\Anaconda\anaconda3\envs\oi\Scripts\interpreter.exe_main
.py", line 7, in
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 21, in start_terminal_interface
start_terminal_interface(self)
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 304, in start_terminal_interface
interpreter.chat()
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 77, in chat
for _ in self._streaming_chat(message=message, display=display):
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 92, in _streaming_chat
yield from terminal_interface(self, message)
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\terminal_interface\terminal_interface.py", line 115, in terminal_interface
for chunk in interpreter.chat(message, display=False, stream=True):
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 113, in _streaming_chat
yield from self._respond()
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 148, in _respond
yield from respond(self)
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\respond.py", line 49, in respond
for chunk in interpreter._llm(messages_for_llm):
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\llm\convert_to_coding_llm.py", line 65, in coding_llm
for chunk in text_llm(messages):
^^^^^^^^^^^^^^^^^^
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\llm\setup_text_llm.py", line 154, in base_llm
return litellm.completion(**params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 962, in wrapper
raise e
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 899, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\timeout.py", line 53, in wrapper
result = future.result(timeout=local_timeout_duration)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Anaconda\anaconda3\envs\oi\Lib\concurrent\futures_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Anaconda\anaconda3\envs\oi\Lib\concurrent\futures_base.py", line 401, in __get_result
raise self._exception
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\timeout.py", line 42, in async_func
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\main.py", line 1403, in completion
raise exception_type(
^^^^^^^^^^^^^^^
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 3574, in exception_type
raise e
File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 3556, in exception_type
raise APIError(status_code=500, message=str(original_exception), llm_provider=custom_llm_provider, model=model)
litellm.exceptions.APIError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/mistral/mistral-medium',..) Learn more: https://docs.litellm.ai/docs/providers

Reproduce

  1. conda activate the env.
  2. Run `interpreter --context_window 4000 --max_tokens 100 --model mistral/mistral-medium -y` in the terminal.
  3. Type what I want it to do.
  4. The error above is raised (see the LiteLLM sketch after this list for an isolated check).
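
To check whether this is purely a LiteLLM problem, a rough sketch like the one below calls LiteLLM directly with the same model string (this assumes MISTRAL_API_KEY is exported, which is what LiteLLM reads for the Mistral provider, and a LiteLLM version new enough to know the mistral/ prefix):

    # Rough sketch: call LiteLLM directly with the same "provider/model" string
    # Open Interpreter passes through. Assumes litellm >= 1.x (Mistral support)
    # and that MISTRAL_API_KEY is set in the environment.
    import litellm

    response = litellm.completion(
        model="mistral/mistral-medium",                 # provider prefix + model name
        messages=[{"role": "user", "content": "hi"}],
        max_tokens=100,
    )
    print(response.choices[0].message.content)

If this raises the same "LLM Provider NOT provided" error, the problem is on the LiteLLM side (version too old); if it succeeds, the problem is in how Open Interpreter builds the call.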

Expected behavior

I just want to try a Mistral model because its API calls are cheaper. Perhaps the way I'm passing things in is wrong, but I believe this is on the LiteLLM side of the problem.

Screenshots

No response

Open Interpreter version

0.1.17

Python version

Python 3.11.4

Operating System name and version

Windows 10

Additional context

No response

@Squallpka1 Squallpka1 added the Bug Something isn't working label Dec 16, 2023
@Squallpka1
Author

So I tried `pip install --upgrade litellm`, which I believe solved the issue with the API call, but then I got a new error.

(oi) C:\Users\Rui Leonhart>interpreter --context_window 4000 --max_tokens 100 --model mistral/mistral-medium -y
> In my C: drive, i have image "ComfyUI_00032_.png". I want the first half of the image.

  Plan:

   1 Identify the image dimensions using Python's Pillow library.
   2 Crop the image to obtain the first half.
   3 Save the cropped image as a new file.

  First, let's import the necessary libraries and check the image dimensions.


        Python Version: 3.11.4
        Pip Version: 23.3.1
        Open-interpreter Version: cmd:Interpreter, pkg: 0.1.17
        OS Version and Architecture: Windows-10-10.0.19045-SP0
        CPU Info: AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD
        RAM Info: 15.95 GB, used: 7.10, free: 8.85


        Interpreter Info
        Vision: False
        Model: mistral/mistral-medium
        Function calling: False
        Context window: 4000
        Max tokens: 100

        Auto run: True
        API base: None
        Local: False

        Curl output: Not local


Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Anaconda\anaconda3\envs\oi\Scripts\interpreter.exe\__main__.py", line 7, in <module>
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 21, in start_terminal_interface
    start_terminal_interface(self)
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 304, in
start_terminal_interface
    interpreter.chat()
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 77, in chat
    for _ in self._streaming_chat(message=message, display=display):
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 92, in _streaming_chat
    yield from terminal_interface(self, message)
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\terminal_interface\terminal_interface.py", line 115, in
terminal_interface
    for chunk in interpreter.chat(message, display=False, stream=True):
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 113, in _streaming_chat
    yield from self._respond()
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 148, in _respond
    yield from respond(self)
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\respond.py", line 49, in respond
    for chunk in interpreter._llm(messages_for_llm):
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\llm\convert_to_coding_llm.py", line 75, in coding_llm
    accumulated_block += content
TypeError: can only concatenate str (not "NoneType") to str

@Notnaton
Collaborator

You need to run:
pip install open-interpreter --force-reinstall
interpreter --api_base https://api.mistral.ai/v1/ --model <modelname> --api_key <key>
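
The same settings can also be applied from Python instead of CLI flags, roughly like the sketch below (the attribute names are assumed from the 0.1.x-style module-level interpreter object, so double-check them against the docs for your version; <modelname> and <key> are placeholders):

    # Rough Python-side equivalent of the flags above (attribute names assumed
    # from the 0.1.x-style interpreter module; verify against the docs).
    import interpreter

    interpreter.api_base = "https://api.mistral.ai/v1/"   # --api_base
    interpreter.model = "<modelname>"                      # --model
    interpreter.api_key = "<key>"                          # --api_key
    interpreter.context_window = 4000                      # optional, avoids the context warning
    interpreter.max_tokens = 100                           # optional

    interpreter.chat()                                     # starts the interactive session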

@Squallpka1
Author

> You need to run:
> pip install open-interpreter --force-reinstall
> interpreter --api_base https://api.mistral.ai/v1/ --model <modelname> --api_key <key>

(oi) C:\Users\Rui Leonhart>interpreter --api_base https://api.mistral.ai/v1/ --model mistral-medium --api_key 

▌ Model set to mistral-medium

Open Interpreter will require approval before running code.

Use interpreter -y to bypass this.

Press CTRL-C to exit.

> hi

We were unable to determine the context window of this model. Defaulting to 3000.
If your model can handle more, run interpreter --context_window {token limit} or interpreter.context_window = {token limit}.
Also, please set max_tokens: interpreter --max_tokens {max tokens per response} or interpreter.max_tokens = {max tokens per response}


Provider List: https://docs.litellm.ai/docs/providers


Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.


        Python Version: 3.11.4
        Pip Version: 23.3.1
        Open-interpreter Version: cmd:Interpreter, pkg: 0.1.17
        OS Version and Architecture: Windows-10-10.0.19045-SP0
        CPU Info: AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD
        RAM Info: 15.95 GB, used: 6.74, free: 9.21


        Interpreter Info
        Vision: False
        Model: mistral-medium
        Function calling: False
        Context window: None
        Max tokens: None

        Auto run: False
        API base: https://api.mistral.ai/v1/
        Local: False

        Curl output: Not local


Traceback (most recent call last):
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\main.py", line 300, in completion
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(model=model, custom_llm_provider=custom_llm_provider, api_base=api_base)
                                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 1821, in get_llm_provider
    raise e
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 1818, in get_llm_provider
    raise ValueError(f"LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/{model}',..)` Learn more: https://docs.litellm.ai/docs/providers")
ValueError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/mistral-medium',..)` Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Anaconda\anaconda3\envs\oi\Scripts\interpreter.exe\__main__.py", line 7, in <module>
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 21, in start_terminal_interface
    start_terminal_interface(self)
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 304, in start_terminal_interface
    interpreter.chat()
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 77, in chat
    for _ in self._streaming_chat(message=message, display=display):
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 92, in _streaming_chat
    yield from terminal_interface(self, message)
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\terminal_interface\terminal_interface.py", line 115, in terminal_interface
    for chunk in interpreter.chat(message, display=False, stream=True):
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 113, in _streaming_chat
    yield from self._respond()
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 148, in _respond
    yield from respond(self)
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\respond.py", line 49, in respond
    for chunk in interpreter._llm(messages_for_llm):
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\llm\convert_to_coding_llm.py", line 65, in coding_llm
    for chunk in text_llm(messages):
                 ^^^^^^^^^^^^^^^^^^
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\llm\setup_text_llm.py", line 154, in base_llm
    return litellm.completion(**params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 962, in wrapper
    raise e
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 899, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\timeout.py", line 53, in wrapper
    result = future.result(timeout=local_timeout_duration)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Anaconda\anaconda3\envs\oi\Lib\concurrent\futures\_base.py", line 456, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "C:\Anaconda\anaconda3\envs\oi\Lib\concurrent\futures\_base.py", line 401, in __get_result
    raise self._exception
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\timeout.py", line 42, in async_func
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\main.py", line 1403, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 3574, in exception_type
    raise e
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 3556, in exception_type
    raise APIError(status_code=500, message=str(original_exception), llm_provider=custom_llm_provider, model=model)
litellm.exceptions.APIError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/mistral-medium',..)` Learn more: https://docs.litellm.ai/docs/providers

I did your suggestion but it fails. (I already deleted the API key here.)

@Notnaton
Collaborator

It should be `--model mistral/<mistral-model>`.

@Squallpka1
Author

Same error

(base) C:\Users\Rui Leonhart>conda activate oi

(oi) C:\Users\Rui Leonhart>interpreter --api_base https://api.mistral.ai/v1/ --model mistral/mistral-medium --api_key 

▌ Model set to mistral/mistral-medium

Open Interpreter will require approval before running code.

Use interpreter -y to bypass this.

Press CTRL-C to exit.

> hi

We were unable to determine the context window of this model. Defaulting to 3000.
If your model can handle more, run interpreter --context_window {token limit} or interpreter.context_window = {token limit}.
Also, please set max_tokens: interpreter --max_tokens {max tokens per response} or interpreter.max_tokens = {max tokens per response}


Provider List: https://docs.litellm.ai/docs/providers


Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.


        Python Version: 3.11.4
        Pip Version: 23.3.1
        Open-interpreter Version: cmd:Interpreter, pkg: 0.1.17
        OS Version and Architecture: Windows-10-10.0.19045-SP0
        CPU Info: AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD
        RAM Info: 15.95 GB, used: 5.97, free: 9.98


        Interpreter Info
        Vision: False
        Model: mistral/mistral-medium
        Function calling: False
        Context window: None
        Max tokens: None

        Auto run: False
        API base: https://api.mistral.ai/v1/
        Local: False

        Curl output: Not local


Traceback (most recent call last):
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\main.py", line 300, in completion
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(model=model, custom_llm_provider=custom_llm_provider, api_base=api_base)
                                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 1821, in get_llm_provider
    raise e
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 1818, in get_llm_provider
    raise ValueError(f"LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/{model}',..)` Learn more: https://docs.litellm.ai/docs/providers")
ValueError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/mistral/mistral-medium',..)` Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Anaconda\anaconda3\envs\oi\Scripts\interpreter.exe\__main__.py", line 7, in <module>
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 21, in start_terminal_interface
    start_terminal_interface(self)
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 304, in start_terminal_interface
    interpreter.chat()
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 77, in chat
    for _ in self._streaming_chat(message=message, display=display):
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 92, in _streaming_chat
    yield from terminal_interface(self, message)
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\terminal_interface\terminal_interface.py", line 115, in terminal_interface
    for chunk in interpreter.chat(message, display=False, stream=True):
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 113, in _streaming_chat
    yield from self._respond()
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 148, in _respond
    yield from respond(self)
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\respond.py", line 49, in respond
    for chunk in interpreter._llm(messages_for_llm):
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\llm\convert_to_coding_llm.py", line 65, in coding_llm
    for chunk in text_llm(messages):
                 ^^^^^^^^^^^^^^^^^^
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\llm\setup_text_llm.py", line 154, in base_llm
    return litellm.completion(**params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 962, in wrapper
    raise e
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 899, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\timeout.py", line 53, in wrapper
    result = future.result(timeout=local_timeout_duration)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Anaconda\anaconda3\envs\oi\Lib\concurrent\futures\_base.py", line 456, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "C:\Anaconda\anaconda3\envs\oi\Lib\concurrent\futures\_base.py", line 401, in __get_result
    raise self._exception
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\timeout.py", line 42, in async_func
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\main.py", line 1403, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 3574, in exception_type
    raise e
  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\litellm\utils.py", line 3556, in exception_type
    raise APIError(status_code=500, message=str(original_exception), llm_provider=custom_llm_provider, model=model)
litellm.exceptions.APIError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/mistral/mistral-medium',..)` Learn more: https://docs.litellm.ai/docs/providers

So I tried changing the API base URL to 'https://api.mistral.ai/v1/chat/completions' but got the same error.

So I checked the litellm version, since open-interpreter uses it to call the API.

(oi) C:\Users\Rui Leonhart>pip show litellm
Name: litellm
Version: 0.13.2
Summary: Library to easily interface with LLM API providers
Home-page:
Author: BerriAI
Author-email:
License: MIT
Location: C:\Anaconda\anaconda3\envs\oi\Lib\site-packages
Requires: appdirs, certifi, click, importlib-metadata, jinja2, openai, python-dotenv, tiktoken, tokenizers
Required-by: open-interpreter

So I went to the litellm GitHub to see the latest version.

v1.15.0 (Latest), released yesterday by ishaan-jaff: https://github.com/BerriAI/litellm/releases/latest
(21 commits to main since this release; https://github.com/BerriAI/litellm/commit/8522bb60f392b322cf228782298e05fac7302b1a)

What's Changed
- LiteLLM Proxy now maps exceptions for 100+ LLMs to the OpenAI format: https://docs.litellm.ai/docs/proxy/quick_start
- Log all LLM Input/Output to DynamoDB: set litellm.success_callback = ["dynamodb"] https://docs.litellm.ai/docs/proxy/logging#logging-proxy-inputoutput---dynamodb
- Support for the Mistral AI API and Gemini Pro

So, basically, the LiteLLM in this conda env (version 0.13.2) does not yet support the Mistral API; the version that supports it is 1.15.

Sure, if I --upgrade LiteLLM in this conda env, I figure I will get the same error as in my 2nd post:

  File "C:\Anaconda\anaconda3\envs\oi\Lib\site-packages\interpreter\core\llm\convert_to_coding_llm.py", line 75, in coding_llm
    accumulated_block += content
TypeError: can only concatenate str (not "NoneType") to str
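
If I read that TypeError right, the newer OpenAI-style streaming format can send delta chunks whose content is None (role-only or final chunks), so blindly doing accumulated_block += content breaks. A guard like the sketch below would avoid it (just an illustration, not the actual convert_to_coding_llm.py code):

    # Illustration only, not the real convert_to_coding_llm.py: coerce None
    # delta content to "" before concatenating streamed text.
    def accumulate_stream(chunks):
        accumulated_block = ""
        for chunk in chunks:
            delta = chunk["choices"][0].get("delta", {})
            content = delta.get("content") or ""    # None -> ""
            accumulated_block += content
        return accumulated_block

    # Dict-shaped chunks like the ones a streaming completion yields:
    chunks = [
        {"choices": [{"delta": {"role": "assistant", "content": None}}]},
        {"choices": [{"delta": {"content": "Hello"}}]},
        {"choices": [{"delta": {}}]},
    ]
    print(accumulate_stream(chunks))  # -> Hello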

So yeah...
I don't really need to use Mistral; I just want to see what options I have other than the OpenAI API.

@KillianLucas
Collaborator

Thank you so much @Squallpka1 and @Notnaton for finding this problem.

This has been fixed as of 0.2.0, released a few minutes ago. I believe it was due to the new OpenAI API (which LiteLLM simulates for Mistral) changing its format, which we've adopted in 0.2.0.

Let me know if it's still happening with this command, and if so we can reopen this issue:

interpreter --model mistral/mistral-medium
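
For the Python API, the 0.2.x equivalent should look roughly like the sketch below; the interpreter.llm grouping and the MISTRAL_API_KEY variable are assumptions here, so check the current docs if anything differs:

    # Rough 0.2.x-style sketch; the interpreter.llm attribute grouping and the
    # MISTRAL_API_KEY variable are assumptions, verify against the current docs.
    import os
    from interpreter import interpreter

    os.environ["MISTRAL_API_KEY"] = "<key>"          # or export it in your shell
    interpreter.llm.model = "mistral/mistral-medium"
    interpreter.llm.context_window = 4000            # optional
    interpreter.llm.max_tokens = 100                 # optional

    interpreter.chat("hi")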
