Commit 6e084cd

Update latest version (#27)
1 parent 6b4c69e commit 6e084cd

File tree

1 file changed: 5 additions, 5 deletions

README.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -30,8 +30,8 @@

 **LATEST RELEASE: You are currently on the main branch which tracks
 under-development progress towards the next release. The current release branch
-is [r23.11](https://github.com/triton-inference-server/vllm_backend/tree/r23.11)
-and which corresponds to the 23.11 container release on
+is [r23.12](https://github.com/triton-inference-server/vllm_backend/tree/r23.12)
+and which corresponds to the 23.12 container release on
 [NVIDIA GPU Cloud (NGC)](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver).**

 # vLLM Backend
```
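The first hunk retargets the README at the r23.12 release branch, which corresponds to the 23.12 Triton container on NGC. As a hedged illustration, the matching image reference could be composed like so (the `-vllm-python-py3` tag suffix is an assumption based on NGC's published `tritonserver` tags, not something stated in this diff):

```shell
# Compose the NGC image reference matching the r23.12 release branch.
# NOTE: the "-vllm-python-py3" suffix is an assumed tag format, not from the diff.
TRITON_VERSION=23.12
IMAGE="nvcr.io/nvidia/tritonserver:${TRITON_VERSION}-vllm-python-py3"
echo "docker pull ${IMAGE}"
```

Actually pulling the image (`docker pull "$IMAGE"`) requires Docker and network access to `nvcr.io`.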
````diff
@@ -96,9 +96,9 @@ A sample command to build a Triton Server container with all options enabled is
     --endpoint=grpc
     --endpoint=sagemaker
     --endpoint=vertex-ai
-    --upstream-container-version=23.10
-    --backend=python:r23.10
-    --backend=vllm:r23.10
+    --upstream-container-version=23.12
+    --backend=python:r23.12
+    --backend=vllm:r23.12
 ```

 ### Option 3. Add the vLLM Backend to the Default Triton Container
````
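For orientation, the three changed flags in the second hunk belong to the README's sample Triton `build.py` invocation. A minimal sketch of such a command after this commit might look like the following; only the three version flags on the last lines come from the diff, and every other flag is an assumption drawn from Triton's build options, not part of this change:

```shell
# Hypothetical minimal build invocation after this commit.
# Only the final three version flags are taken from the diff shown above.
./build.py -v --enable-gpu \
    --endpoint=http \
    --endpoint=grpc \
    --endpoint=sagemaker \
    --endpoint=vertex-ai \
    --upstream-container-version=23.12 \
    --backend=python:r23.12 \
    --backend=vllm:r23.12
```

The key point of the commit is simply that `--upstream-container-version` and the `python`/`vllm` backend branch tags must move together to the same release (here, 23.12/r23.12).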
