
Pinned

  1. vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs (a minimal usage sketch follows this pinned list)

    Python · 59.6k stars · 10.6k forks

  2. llm-compressor Public

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 2k stars · 249 forks

  3. recipes Public

    Common recipes to run vLLM

    Jupyter Notebook · 153 stars · 53 forks
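
vLLM itself can be driven directly from Python for offline batch inference. The snippet below is a minimal sketch of that workflow using vLLM's public `LLM` and `SamplingParams` API; the model name `facebook/opt-125m` is only an illustrative choice.

```python
from vllm import LLM, SamplingParams

# Prompts completed together in a single offline batch.
prompts = [
    "Hello, my name is",
    "The future of AI is",
]

# Sampling settings applied to every prompt in the batch.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load the model once; vLLM manages KV-cache memory with PagedAttention.
llm = LLM(model="facebook/opt-125m")

# Generate completions for all prompts in one call.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(f"Prompt: {output.prompt!r}")
    print(f"Completion: {output.outputs[0].text!r}")
```

The same engine can also expose an OpenAI-compatible HTTP API for online serving via the `vllm serve` command.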

Repositories

Showing 10 of 24 repositories
  • llm-compressor Public

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (a one-shot quantization sketch follows at the end of this list)

    Python · 2,049 stars · Apache-2.0 license · 249 forks · 59 open issues (12 need help) · 40 open pull requests · Updated Oct 7, 2025
  • speculators Public

    A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM

    Python · 53 stars · Apache-2.0 license · 9 forks · 3 open issues (2 need help) · 18 open pull requests · Updated Oct 7, 2025
  • vllm-spyre Public

    Community-maintained hardware plugin for vLLM on Spyre

    Python · 35 stars · Apache-2.0 license · 26 forks · 6 open issues · 15 open pull requests · Updated Oct 7, 2025
  • vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 59,586 stars · Apache-2.0 license · 10,554 forks · 1,836 open issues (31 need help) · 1,154 open pull requests · Updated Oct 7, 2025
  • vllm-gaudi Public

    Community-maintained hardware plugin for vLLM on Intel Gaudi

    Python · 11 stars · 48 forks · 1 open issue · 44 open pull requests · Updated Oct 7, 2025
  • ci-infra Public

    This repo hosts code for the vLLM CI and performance benchmark infrastructure.

    HCL · 21 stars · 38 forks · 0 open issues · 16 open pull requests · Updated Oct 7, 2025
  • guidellm Public

    Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs

    Python · 613 stars · Apache-2.0 license · 87 forks · 85 open issues (5 need help) · 27 open pull requests · Updated Oct 7, 2025
  • semantic-router Public

    Intelligent Mixture-of-Models Router for Efficient LLM Inference

    Python · 1,610 stars · Apache-2.0 license · 181 forks · 74 open issues (15 need help) · 18 open pull requests · Updated Oct 7, 2025
  • aibrix Public

    Cost-efficient and pluggable Infrastructure components for GenAI inference

    Go · 4,286 stars · Apache-2.0 license · 467 forks · 219 open issues (19 need help) · 27 open pull requests · Updated Oct 7, 2025
  • flash-attention Public (forked from Dao-AILab/flash-attention)

    Fast and memory-efficient exact attention

    Python · 93 stars · BSD-3-Clause license · 2,057 forks · 0 open issues · 15 open pull requests · Updated Oct 7, 2025
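
For llm-compressor, the usual flow is one-shot compression of a Hugging Face model, after which the saved checkpoint is deployed in vLLM. The sketch below illustrates that flow; it follows the `oneshot`/`GPTQModifier` quickstart style from the project's documentation, but the exact import paths and argument names may differ between llm-compressor releases, and the model and dataset choices here are purely illustrative.

```python
# Minimal one-shot quantization sketch for llm-compressor.
# NOTE: import paths and argument names are assumptions based on the project's
# quickstart and may vary between llm-compressor releases.
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

# Recipe: 4-bit weight-only GPTQ quantization, skipping the LM head.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # illustrative model choice
    dataset="open_platypus",                     # illustrative calibration set
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-W4A16",      # checkpoint directory for vLLM
    max_seq_length=2048,
    num_calibration_samples=512,
)
```

The resulting directory is a compressed checkpoint that vLLM can load like any other Hugging Face model.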