
using custom llm #735

Open
shazil-ahmed opened this issue Aug 15, 2024 · 0 comments
Labels: enhancement (New feature or request)

Comments


shazil-ahmed commented Aug 15, 2024

I have my LLM hosted on an EC2 instance through vLLM, and I'd like to use that LLM when evaluating with UpTrain. I already use this LLM in my RAG service by providing its name and base_url, but I am unable to do the same for UpTrain: despite updating the api_base, requests don't get routed to the LLM on my EC2 instance, and UpTrain instead keeps asking for an OpenAI API key.

Code is below:

from uptrain import EvalLLM, Evals, Settings as uptrain_settings

settings = uptrain_settings(model="TheBloke/Mistral-7B-Instruct-v0.1-AWQ", api_base="http://13.2142.169.431:8000/v1")

eval_llm = EvalLLM(settings=settings)

# responses is a list of dicts with "question", "context" and "response" keys
results = eval_llm.evaluate(
  data=responses,
  checks=[Evals.CONTEXT_RELEVANCE, Evals.FACTUAL_ACCURACY, Evals.RESPONSE_COMPLETENESS]
)
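
For what it's worth, the endpoint itself can be sanity-checked directly with the official openai client (a hypothetical snippet, not part of my service code; the dummy api_key works because vLLM's OpenAI-compatible server does not validate it):

from openai import OpenAI

# Point the client straight at the vLLM server; the key is a placeholder
# since the OpenAI-compatible endpoint ignores it.
client = OpenAI(base_url="http://13.2142.169.431:8000/v1", api_key="EMPTY")
print(client.models.list())  # should list TheBloke/Mistral-7B-Instruct-v0.1-AWQ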

PS:
The api_base above was tweaked slightly before posting here (for security purposes).
I'm providing the exact same api_base and model to the llama_index RAG service and it works perfectly, but the UpTrain eval does not; a sketch of that wiring is below.
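
For comparison, this is roughly how the same endpoint is wired into the llama_index service that does work (a sketch assuming the OpenAILike wrapper; the actual service code differs slightly):

from llama_index.llms.openai_like import OpenAILike

# Same model name and base URL as passed to uptrain above; the api_key is a
# placeholder because the vLLM server ignores it.
llm = OpenAILike(
    model="TheBloke/Mistral-7B-Instruct-v0.1-AWQ",
    api_base="http://13.2142.169.431:8000/v1",
    api_key="EMPTY",
)
print(llm.complete("Hello"))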

shazil-ahmed added the enhancement label Aug 15, 2024