
fix lmi/vllm virtual envs, update to vllm 0.7.1 #2703

Merged
merged 1 commit into deepjavalibrary:master on Feb 3, 2025

Conversation

@siddvenk (Contributor) commented Feb 3, 2025

Description

This change updates to vLLM 0.7.1, which involves shuffling some dependencies around and relaxing overly strict dependency version pins.

Additionally, it updates the chat processing for vLLM so that it is functional. There is still a good amount left to implement for chat processing, which I'll take up in a follow-up PR:

  • Use the sampling params provided directly by the vLLM chat request object (its to_sampling_params method); this ensures we use the correct sampling params for chat. A sketch of that mapping follows this list.
  • Validate function calling and tool usage with this update.
  • Allow users to specify an override chat template and a chat format.
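
For illustration, the mapping that to_sampling_params performs is roughly of this shape (a minimal sketch assuming only the public vllm.SamplingParams constructor; the field names and defaults below are placeholders, not LMI's actual implementation):

# Illustrative sketch: map OpenAI-style chat request fields onto
# vllm.SamplingParams, roughly what ChatCompletionRequest.to_sampling_params
# does for us. Defaults here are assumptions, not LMI's or vLLM's exact values.
from vllm import SamplingParams

def chat_payload_to_sampling_params(payload: dict) -> SamplingParams:
    return SamplingParams(
        temperature=payload.get("temperature", 1.0),
        top_p=payload.get("top_p", 1.0),
        max_tokens=payload.get("max_tokens", 16),
        stop=payload.get("stop"),
    )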

I have tested this with (single test for each):

  • HF non rolling batch
  • HF scheduler rolling batch
  • vLLM rolling batch
  • lmi-dist rolling batch

I also added a chat test for Mistral with vLLM.
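
For reference, the chat test exercises a request of roughly this shape (the payload follows the OpenAI chat completions schema; the endpoint, port, and client code below are illustrative assumptions, not the actual test):

# Hypothetical chat completions request against a local djl-serving endpoint.
# Endpoint path/port and generation parameters are placeholders.
import requests

payload = {
    "messages": [
        {"role": "user", "content": "What is Deep Java Library?"},
    ],
    "temperature": 0.7,
    "max_tokens": 128,
}
response = requests.post("http://localhost:8080/invocations", json=payload)
print(response.json())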

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

Checklist:

  • Please add a link to the Integration Tests Executor run with the related tests.
  • Have you manually built the docker image and verified the change?
  • Have you run the related tests? Check how to set up the test environment here; one example would be pytest tests.py -k "TestCorrectnessLmiDist" -m "lmi_dist"
  • Have you added tests that prove your fix is effective or that this feature works?
  • Has code been commented, particularly in hard-to-understand areas?
  • Have you made corresponding changes to the documentation?

Feature/Issue validation/testing

Please describe the unit or integration tests that you ran to verify your changes and summarize the relevant results. Provide instructions so they can be reproduced.
Please also list any relevant details of your test configuration.

  • Test A
    Logs for Test A

  • Test B
    Logs for Test B

@siddvenk siddvenk requested review from zachgk and a team as code owners February 3, 2025 02:20
@@ -21,16 +21,11 @@
resolve_chat_template_content_format)


def is_chat_completions_request(inputs: Dict) -> bool:
@siddvenk (Contributor Author) commented:
deleted because it's not used

@@ -41,12 +36,6 @@ def parse_chat_completions_request_vllm(
"You must enable rolling batch to use the chat completions format."
)

if not is_mistral_tokenizer and not hasattr(tokenizer,
@siddvenk (Contributor Author) commented:
deleted because the vllm utils do this validation for us already
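
For context, the removed guard was roughly a pre-check of this shape (a reconstruction, not the exact original; vLLM's chat utilities now raise an equivalent error themselves when a tokenizer lacks a chat template):

# Rough reconstruction of the removed pre-check, for context only.
# vLLM's chat utilities now perform this validation, so the guard is redundant.
if not is_mistral_tokenizer and not hasattr(tokenizer, "apply_chat_template"):
    raise AttributeError(
        "Chat completions require a tokenizer with a chat template")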

git reset --hard 4b2092c
$venv_pip install .
cd ..
rm -rf AutoFP8
A contributor commented:
Do we not need FP8 installation?

@siddvenk (Contributor Author) replied:

Not anymore! We're using llm-compressor now: #2701
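
For reference, producing an FP8 checkpoint with llm-compressor looks roughly like this (a sketch based on its documented oneshot flow; the model name, output directory, and exact API may vary by release):

# Hedged sketch of FP8-dynamic quantization via llm-compressor, which replaced
# AutoFP8 in these builds (#2701). Names below are placeholders.
from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

recipe = QuantizationModifier(
    targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])
oneshot(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    recipe=recipe,
    output_dir="Meta-Llama-3-8B-Instruct-FP8-Dynamic",
)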

@siddvenk siddvenk merged commit 1d05281 into deepjavalibrary:master Feb 3, 2025
9 checks passed