
How to configure the openai API proxy endpoint? #823

Closed
xjspace opened this issue Dec 6, 2023 · 4 comments

xjspace commented Dec 6, 2023

Is your feature request related to a problem? Please describe.

Hi, how do I set an API proxy endpoint instead of the official OpenAI API address?
Could I place it in .env? If so, what's the ENV name?

Describe the solution you'd like

How to configure the openai API proxy endpoint?

Describe alternatives you've considered

No response

Additional context

No response

Notnaton (Collaborator) commented Dec 6, 2023

You can use the `--api_base https://host.com/v1` flag.

Or you can edit the config: run `interpreter --config` and add `api_base: "https://host.com/v1"`.

@xjspace let me know if this solves your problem.
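For anyone driving Open Interpreter from Python rather than the CLI, the same setting is exposed as an attribute. A minimal sketch, assuming the 0.1.x-era API where `api_base` sits directly on the `interpreter` module (newer versions moved it under `interpreter.llm`); the proxy URL and key are placeholders:

```python
import interpreter

# Route requests to an OpenAI-compatible proxy instead of the
# official endpoint (placeholder URL and key, not real values).
interpreter.api_base = "https://host.com/v1"
interpreter.api_key = "sk-..."  # whatever key your proxy expects

interpreter.chat("Print hello world")
```

Since requests are routed through litellm, the standard `OPENAI_API_BASE` environment variable may also be picked up (which would cover the .env question above), but the `--api_base` flag and the config entry are the documented routes here.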

RisingVoicesBk commented

@Notnaton, strangely, when I did this, openai still appears in the model name when running code, even though I have been trying to run Code Llama through Hugging Face. This is the line I'm referring to: `Model: openai/huggingface/codellama/CodeLlama-34b-Instruct-hf`

Interpreter Info

    Vision: False
    Model: openai/huggingface/codellama/CodeLlama-34b-Instruct-hf
    Function calling: None
    Context window: 3000
    Max tokens: 400

    Auto run: False
    API base: https://api-inference.huggingface.co/models/codellama/CodeLlama-34b-Instruct-hf
    Offline: False

Notnaton (Collaborator) commented

This is because we add it so that litellm uses the OpenAI format to communicate with the endpoint.
There is a change coming up to stop doing this, possibly in the next update.
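In other words, the `openai/` prefix is a routing hint telling litellm which wire format to speak; it is not part of the real model name. A minimal sketch of the roughly equivalent direct litellm call, reusing the model name and API base from the info dump above (illustrative only, not the exact call Open Interpreter makes, and whether the endpoint actually accepts OpenAI-format requests is the separate problem the upcoming change addresses):

```python
import litellm

# The "openai/" prefix makes litellm send OpenAI-style
# chat-completion requests to whatever api_base is given.
response = litellm.completion(
    model="openai/huggingface/codellama/CodeLlama-34b-Instruct-hf",
    api_base="https://api-inference.huggingface.co/models/codellama/CodeLlama-34b-Instruct-hf",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```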

Notnaton (Collaborator) commented

#955
