
Access to on-premises LLM #544

Open
jmgabriel opened this issue Jun 5, 2024 · 1 comment

Comments

@jmgabriel

Expected Behavior

I would like to bind LoLLMs to an LLM that is deployed on-premises in our organization.

Current Behavior

It seems there is no binding that allows setting an endpoint, an engine, a version, a token, etc.

Apologies if I missed anything in the docs.

@ParisNeo
Owner

Hi. Of course this is possible.

I have done this multiple times in multiple places.

You have many options.
For example, you can install a vLLM server on a machine in your organization, then, in LoLLMs, select the vLLM binding. After installing it, you can set the link to your server as well as parameters such as the API key, if you have one. You will then be able to access your served models from LoLLMs on your client PC.
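As a minimal sketch of that setup, a client on your network could query such a vLLM server through its OpenAI-compatible API; the host, port, model name, and key below are hypothetical placeholders, not values from this thread:

```python
# Minimal sketch: query an on-premises vLLM server through its
# OpenAI-compatible /v1/chat/completions endpoint.
# Host, port, model name, and API key are placeholders; substitute
# the values of your own deployment.
import requests

BASE_URL = "http://llm.example.internal:8000/v1"  # hypothetical on-prem endpoint
API_KEY = "sk-local-key"                          # only needed if your server enforces a key

response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "mistralai/Mistral-7B-Instruct-v0.2",  # whichever model vLLM serves
        "messages": [{"role": "user", "content": "Hello from the intranet!"}],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```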

You can do the same using Ollama as the server.
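Ollama also exposes an OpenAI-compatible endpoint (by default at http://localhost:11434/v1), so the sketch above should work against an Ollama server by pointing `BASE_URL` there; Ollama requires no API key by default.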

Best regards
