Open
Labels
enhancement (New feature or request)
Description
I've now implemented vLLM, but I can't find any way to control prompt caching across requests, so generation slows down the deeper you get into it.
sglang, with its Radix cache strategy, should actually be PERFECT for our use case.
Referring specifically to the sglang backend section: does this "just work faster"?
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --chat-template llama-2
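For context, here's a minimal client-side sketch (not from the sglang docs, just an illustration) of the access pattern I mean: several requests that share a long common prefix, sent to the server launched above. If RadixAttention works as advertised, the prefill for the shared prefix should only be paid once and reused across requests. This assumes sglang's OpenAI-compatible endpoint on port 30000; the prompts and question list are placeholders.

```python
# Sketch: repeated requests sharing a long prefix, served by the sglang server above.
# RadixAttention is expected to cache the KV state for the shared prefix automatically.
from openai import OpenAI

# sglang exposes an OpenAI-compatible API; the api_key value is unused locally.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

# Long shared context (placeholder) that every request begins with.
shared_prefix = "You are a helpful assistant. Here is the story so far: ..."

for question in ["What happens next?", "Summarize the last scene.", "Who is the narrator?"]:
    response = client.chat.completions.create(
        model="meta-llama/Llama-2-7b-chat-hf",
        messages=[
            {"role": "system", "content": shared_prefix},
            {"role": "user", "content": question},
        ],
        max_tokens=128,
    )
    print(response.choices[0].message.content)
```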