
Commit

Update README.md
pandyamarut authored Aug 9, 2024
1 parent 967eaba commit 571ef2b
Showing 1 changed file (README.md) with 4 additions and 2 deletions.
@@ -18,8 +18,10 @@ Deploy OpenAI-Compatible Blazing-Fast LLM Endpoints powered by the [vLLM](https:
 ### 1. UI for Deploying vLLM Worker on RunPod console:
 ![Demo of Deploying vLLM Worker on RunPod console with new UI](media/ui_demo.gif)
 
-### 2. Worker vLLM `v1.2.0` with vLLM `0.5.4` now available under `stable` tags
-Update v1.1 is now available, use the image tag `runpod/worker-v1-vllm:stable-cuda12.1.0`.
+### 2. Worker vLLM `v1.2.0` with vLLM `0.5.4` now available under `stable` tags
+**[Note]**: The current stable Docker image still runs vLLM v0.5.3; it will be updated soon.
+
+Update v1.1.0 is now available; use the image tag `runpod/worker-v1-vllm:stable-cuda12.1.0`.
+
 ### 3. OpenAI-Compatible [Embedding Worker](https://github.com/runpod-workers/worker-infinity-embedding) Released
 Deploy your own OpenAI-compatible Serverless Endpoint on RunPod with multiple embedding models and fast inference for RAG and more!
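Since the README advertises OpenAI-compatible endpoints, a minimal sketch of how a deployed worker might be reached could help. This assumes RunPod's documented URL pattern `https://api.runpod.ai/v2/<endpoint_id>/openai/v1`; the endpoint ID, API key, and model name are hypothetical placeholders.

```python
# Minimal sketch: building the OpenAI-compatible base URL for a
# RunPod Serverless endpoint. Endpoint ID and key are placeholders.

def runpod_openai_base_url(endpoint_id: str) -> str:
    """Build the base URL assumed to be exposed for OpenAI-compatible calls."""
    return f"https://api.runpod.ai/v2/{endpoint_id}/openai/v1"

# With the `openai` client installed, usage might look like (not run here):
#
#   from openai import OpenAI
#   client = OpenAI(base_url=runpod_openai_base_url("ENDPOINT_ID"),  # hypothetical ID
#                   api_key="YOUR_RUNPOD_API_KEY")                   # hypothetical key
#   client.chat.completions.create(
#       model="<served model>",
#       messages=[{"role": "user", "content": "Hello"}],
#   )

print(runpod_openai_base_url("abc123"))
# -> https://api.runpod.ai/v2/abc123/openai/v1
```

Because the worker speaks the OpenAI wire protocol, any OpenAI-compatible client should work once pointed at this base URL.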
