VESSL AI LLMProvider integration #17414
base: main
Conversation
vesslai integration
no need to commit this file
Should write an actual README (see other LLMs; it should show the install and basic usage).
I checked the sources of other llms, and most of them seem to be the same as "python_sources()". What specific details need to be included?
Sorry, I left the comment on the wrong file, this was intended for the README.md file 😅
self.organization_name = organization_name

def serve(
Curious about the decision to do serve and connect outside of the __init__() function? Do your users often switch this after the llm object is created? In most llama-index LLMs, you would just do llm = VesslAILLM(...) and then from there you can directly use it.
It's fine either way tbh, was just curious.
Thank you for the feedback. To use VESSL, authentication through configure is required. I wanted to handle this process during initialization and explicitly separate the serving and connection of the llm_provider afterward. Internally at VESSL, we have discussed this flow, and it seems to be fine.
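For context, the flow being described would look roughly like this. This is a sketch based on the snippets in this PR only; the import path, the constructor kwarg, the model name, and the connect()/complete() calls are assumptions, not confirmed by this PR.

```python
# Sketch of the init -> serve -> connect flow described above.
# Import path, kwarg, model name, and connect() usage are assumptions.
from llama_index.llms.vesslai import VesslAILLM

# __init__ handles VESSL authentication (configure) up front
llm = VesslAILLM(organization_name="my-org")

# Explicitly serve a Hugging Face model on VESSL, then connect to it
llm.serve("mistralai/Mistral-7B-Instruct-v0.3")  # model name is illustrative
llm.connect()  # assumed: attaches the client to the deployed llm_provider

response = llm.complete("Hello, world!")
print(response)
```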
llm = VesslAILLM()

# 1. Serve HF model name
llm.serve(
Thoughts on making serve and connect async? Seems like this could possibly be a blocking operation with wait_for_gateway_enabled?
Let's put example notebooks in docs/docs/examples/llms/
print(f"The service {service_name} is currently rolling out.") | ||
if _request_abort_rollout(service_name): | ||
print("Waiting for the existing rollout to be aborted...") | ||
time.sleep(30) |
Ouch. Another vote to have async, imo.
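For what it's worth, an async version of that wait could look something like this. It is a sketch only: the helper name _request_abort_rollout is taken from the snippet above, and the wrapper function here is hypothetical.

```python
import asyncio


async def _await_abort_rollout(service_name: str) -> None:
    # Same logic as the blocking snippet above, but yields to the event loop
    # instead of freezing it for 30 seconds.
    print(f"The service {service_name} is currently rolling out.")
    if _request_abort_rollout(service_name):  # helper from the PR's snippet
        print("Waiting for the existing rollout to be aborted...")
        await asyncio.sleep(30)
```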
There's quite a lot of code; is any of it testable? (You'd have to mock out API calls, though.)
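A unit test along those lines might mock the VESSL SDK calls, for example as below. The patched module path, the constructor kwarg, and the configure call are hypothetical guesses about this package's layout, just to show the shape of the test.

```python
# Hypothetical test sketch: the patched path and attribute names are
# assumptions about this package's internals, not its confirmed API.
from unittest import mock

from llama_index.llms.vesslai import VesslAILLM  # assumed import path


def test_init_configures_without_network() -> None:
    with mock.patch("llama_index.llms.vesslai.base.vessl") as vessl_sdk:
        llm = VesslAILLM(organization_name="my-org")
        assert llm.organization_name == "my-org"
        # __init__ is expected to run VESSL authentication via configure
        vessl_sdk.configure.assert_called_once()
```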
Description
Fixes # (issue)
New Package?
Did I fill in the tool.llamahub section in the pyproject.toml and provide a detailed README.md for my new integration or package?
Version Bump?
Did I bump the version in the pyproject.toml file of the package I am updating? (Except for the llama-index-core package)
Type of Change
Please delete options that are not relevant.
How Has This Been Tested?
Your pull request will likely not be merged unless it is covered by some form of impactful unit testing.
Suggested Checklist:
make format; make lint to appease the lint gods