
Updated litellm and openai #793

Closed
wants to merge 8 commits into from

Conversation

@Notnaton (Collaborator) commented Nov 26, 2023

Describe the changes you have made:

Updated LiteLLM and OpenAI
Needed to add a check if content is None.
Only tested with LM Studio
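
A minimal sketch of the kind of None check described above (illustrative only; the function and variable names are not from the actual diff):

def append_delta_content(accumulated: str, delta_content) -> str:
    # Newer openai/litellm versions emit None for empty stream chunks,
    # so skip those instead of concatenating None onto the string.
    if delta_content is None:
        return accumulated
    return accumulated + delta_content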

Reference any relevant issue (Fixes #000)

#765

  • I have performed a self-review of my code:

I have tested the code on the following OS:

  • Windows
  • MacOS
  • Linux

AI Language Model (if applicable)

  • GPT4
  • GPT3
  • Llama 7B
  • Llama 13B
  • Llama 34B
  • Huggingface model (Please specify which one)

@Notnaton (Collaborator, Author) commented Nov 26, 2023

This might fix this: v0.1.15...v0.1.16#diff-759d9f02bb48ecd82e5b24e5725270c6932e76159967c46ee8742470424dce52

This needs more testing with the OpenAI API and others.
We also need to check the code for things that have changed:
https://docs.litellm.ai/docs/migration

What changed?

  • Requires openai>=1.0.0
  • openai.InvalidRequestError → openai.BadRequestError
  • openai.ServiceUnavailableError → openai.APIStatusError
  • NEW litellm client that allows users to pass an api_key: litellm.Litellm(api_key="sk-123")
  • Response objects now inherit from BaseModel (previously OpenAIObject)
  • NEW default exception: APIConnectionError (previously APIError)
  • litellm.get_max_tokens() now returns an int, not a dict

max_tokens = litellm.get_max_tokens("gpt-3.5-turbo") # returns an int not a dict 
assert max_tokens==4097
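
A hedged sketch of adapting exception handling to the renamed exceptions listed above (assumes openai>=1.0.0; the wrapper function and error messages are illustrative, not part of this PR):

import litellm
import openai

def safe_completion(messages):
    # openai.InvalidRequestError was renamed to openai.BadRequestError,
    # and APIConnectionError is now the default exception litellm raises.
    try:
        return litellm.completion(model="gpt-3.5-turbo", messages=messages)
    except openai.BadRequestError as e:
        # e.g. invalid parameters such as an oversized max_tokens value
        raise ValueError(f"Request rejected by the API: {e}") from e
    except openai.APIConnectionError as e:
        raise RuntimeError(f"Could not reach the API: {e}") from e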

Streaming: OpenAI chunks now return None for empty stream chunks. Here is how to process stream chunks that contain content:

response = litellm.completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")

@CyanideByte (Contributor) commented Nov 27, 2023

This still causes issues with gpt-4 for me.

(oi) C:\oi>interpreter

▌ Model set to gpt-4-1106-preview

Open Interpreter will require approval before running code.

Use interpreter -y to bypass this.

Press CTRL-C to exit.

hi

    Python Version: 3.11.5
    Pip Version: 23.3
    Open-interpreter Version: cmd:Interpreter, pkg: 0.1.16
    OS Version and Architecture: Windows-10-10.0.19045-SP0
    CPU Info: Intel64 Family 6 Model 94 Stepping 3, GenuineIntel
    RAM Info: 31.95 GB, used: 9.88, free: 22.07


    Interpreter Info
    Vision: False
    Model: gpt-4-1106-preview
    Function calling: True
    Context window: 128000
    Max tokens: 4096

    Auto run: False
    API base: None
    Local: False

    Curl output: Not local

Traceback (most recent call last):
File "C:\Users\Cyanide\miniconda3\envs\oi\Lib\site-packages\pydantic\main.py", line 753, in getattr
return pydantic_extra[item]
~~~~~~~~~~~~~~^^^^^^
KeyError: 'items'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line 88, in run_code
File "C:\Users\Cyanide\miniconda3\envs\oi\Scripts\interpreter.exe_main
.py", line 7, in
File "C:\Users\Cyanide\miniconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 21, in start_terminal_interface
start_terminal_interface(self)
File "C:\Users\Cyanide\miniconda3\envs\oi\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 305, in start_terminal_interface
interpreter.chat()
File "C:\Users\Cyanide\miniconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 77, in chat
for _ in self._streaming_chat(message=message, display=display):
File "C:\Users\Cyanide\miniconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 92, in _streaming_chat
yield from terminal_interface(self, message)
File "C:\Users\Cyanide\miniconda3\envs\oi\Lib\site-packages\interpreter\terminal_interface\terminal_interface.py", line 115, in terminal_interface
for chunk in interpreter.chat(message, display=False, stream=True):
File "C:\Users\Cyanide\miniconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 113, in _streaming_chat
yield from self._respond()
File "C:\Users\Cyanide\miniconda3\envs\oi\Lib\site-packages\interpreter\core\core.py", line 148, in _respond
yield from respond(self)
File "C:\Users\Cyanide\miniconda3\envs\oi\Lib\site-packages\interpreter\core\respond.py", line 49, in respond
for chunk in interpreter._llm(messages_for_llm):
File "C:\Users\Cyanide\miniconda3\envs\oi\Lib\site-packages\interpreter\core\llm\setup_openai_coding_llm.py", line 135, in coding_llm
accumulated_deltas = merge_deltas(accumulated_deltas, delta)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Cyanide\miniconda3\envs\oi\Lib\site-packages\interpreter\core\utils\merge_deltas.py", line 11, in merge_deltas
for key, value in delta.items():
^^^^^^^^^^^
File "C:\Users\Cyanide\miniconda3\envs\oi\Lib\site-packages\pydantic\main.py", line 755, in getattr
raise AttributeError(f'{type(self).name!r} object has no attribute {item!r}') from exc
AttributeError: 'Delta' object has no attribute 'items'

(oi) C:\oi>
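
The traceback above suggests merge_deltas received a pydantic Delta object rather than a dict. A hedged sketch of one possible workaround (not necessarily the fix that was merged): convert the delta to a plain dict before merging and drop None values.

def delta_to_dict(delta) -> dict:
    # openai>=1.0 returns pydantic models for stream deltas, and they have no
    # .items() method; convert to a plain dict so merge_deltas keeps working.
    if hasattr(delta, "model_dump"):
        delta = delta.model_dump()
    # Drop None values so empty stream chunks don't overwrite earlier content.
    return {k: v for k, v in delta.items() if v is not None}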

@Notnaton (Collaborator, Author) commented:

@CyanideByte thanks, I will check it out after work 👍

@catmeowjiao commented:

@Notnaton You need to modify all the code that uses the OpenAI module; I will submit a pull request to solve this problem.

@Notnaton (Collaborator, Author) commented Dec 1, 2023

Excellent @catmeowjiao

@ishaan-jaff (Contributor) commented:

Hi @Notnaton, let me know if you run into any issues.

@Notnaton Notnaton linked an issue Dec 3, 2023 that may be closed by this pull request
@catmeowjiao commented:

I'm sorry, I've been very busy lately, so the pull request may be delayed. Please forgive me!

@Notnaton Notnaton linked an issue Dec 5, 2023 that may be closed by this pull request
@Notnaton (Collaborator, Author) commented Dec 6, 2023

I'll close this one and make a new PR, because this one is out of date. I'm also having some issues with updating; there have been many changes to the structure.

@Notnaton Notnaton closed this Dec 6, 2023
@vijaykramesh commented:

Any update on this, @Notnaton? I was looking into embedding open-interpreter as a griptape tool, but the version conflicts make it difficult...

@Notnaton Notnaton deleted the update-litellm-and-openai branch December 18, 2023 21:45
@Notnaton (Collaborator, Author) commented Dec 18, 2023

I'm going to look into it again, but it might be after Xmas/New Year's. A lot of people are running into problems with outdated openai/litellm packages.
Maybe tomorrow, no promises @vijaykramesh

@vijaykramesh commented:

Haha, I thought I was going insane: I had just set a remote to your fork and was trying to check out your branch, and suddenly it was gone. Then I refreshed the UI and it was a 404 too! Fast response 🥇

No worries, look forward to checking back

Development

Successfully merging this pull request may close these issues.

  • Context window
  • Can not set max_tokens while use gpt-3.5-turbo model
  • update litellm dependency