Optimum ExecuTorch enables efficient deployment of transformer models using Meta's ExecuTorch framework. It provides:
- Easy conversion of Hugging Face models to ExecuTorch format
- Optimized inference with hardware-specific optimizations
- Seamless integration with Hugging Face Transformers
- Efficient deployment on various devices
Install conda on your machine, then create a virtual environment to manage the dependencies.
```bash
conda create -n optimum-executorch python=3.11
conda activate optimum-executorch
git clone https://github.com/huggingface/optimum-executorch.git
cd optimum-executorch
pip install '.[tests]'
```
- Install from PyPI coming soon...
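After installation, a quick way to confirm the package is importable (a minimal sketch; it only assumes the install above succeeded):

```python
# Sanity check: the main export/inference class should import without errors
from optimum.executorch import ExecuTorchModelForCausalLM

print(ExecuTorchModelForCausalLM.__name__)
```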
You can install `executorch` and `transformers` from source, which gives you access to newly added ExecuTorch-compatible models in `transformers` and new features in `executorch`, as both repos are under rapid development.
Follow these steps manually:
From the root directory where `optimum-executorch` is cloned:
```bash
# Clone the ExecuTorch repository
git clone https://github.com/pytorch/executorch.git
cd executorch
# Checkout the stable branch to ensure stability
git checkout viable/strict
# Install ExecuTorch
python ./install_executorch.py
cd ..
```
From the root directory where `optimum-executorch` is cloned:
```bash
# Clone the Transformers repository
git clone https://github.com/huggingface/transformers.git
cd transformers
# Install Transformers in editable mode
pip install -e .
cd ..
```
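After both source installs, a quick sanity check that the active environment picks up the freshly installed packages (a minimal sketch; the `executorch` version attribute is an assumption and may not exist in every build):

```python
# Confirm that the source-installed packages are importable from the active environment
import executorch
import transformers

print("transformers:", transformers.__version__)
# Not every executorch build exposes __version__, so guard the lookup
print("executorch:", getattr(executorch, "__version__", "installed (no version attribute)"))
```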
There are two ways to use Optimum ExecuTorch:
```python
from optimum.executorch import ExecuTorchModelForCausalLM
from transformers import AutoTokenizer

# Load and export the model on-the-fly
model_id = "HuggingFaceTB/SmolLM2-135M-Instruct"
model = ExecuTorchModelForCausalLM.from_pretrained(
    model_id,
    recipe="xnnpack",
    attn_implementation="custom_sdpa",  # Use custom SDPA implementation for better performance
    **{"qlinear": True},  # Quantize linear layers with 8da4w
)

# Generate text right away
tokenizer = AutoTokenizer.from_pretrained(model_id)
generated_text = model.text_generation(
    tokenizer=tokenizer,
    prompt="Once upon a time",
    max_seq_len=32,
)
print(generated_text)
```
Note: If an ExecuTorch model is already cached on the Hugging Face Hub, the API will automatically skip the export step and load the cached `.pte` file. To test this, replace the `model_id` in the example above with `"executorch-community/SmolLM2-135M"`, where the `.pte` file is pre-cached. Additionally, the `.pte` file can be directly associated with the eager model, as demonstrated in this example.
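For example, loading that pre-exported repository looks the same as the on-the-fly export (a minimal sketch that keeps the `recipe` argument from the quickstart; taking the tokenizer from the original `HuggingFaceTB/SmolLM2-135M` repo is an assumption):

```python
from optimum.executorch import ExecuTorchModelForCausalLM
from transformers import AutoTokenizer

# This repo already ships a pre-exported .pte, so the export step is skipped
model = ExecuTorchModelForCausalLM.from_pretrained("executorch-community/SmolLM2-135M", recipe="xnnpack")

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M")
print(model.text_generation(tokenizer=tokenizer, prompt="Once upon a time", max_seq_len=32))
```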
Use the CLI tool to convert your model to ExecuTorch format:
```bash
optimum-cli export executorch \
    --model "HuggingFaceTB/SmolLM2-135M-Instruct" \
    --task "text-generation" \
    --recipe "xnnpack" \
    --output_dir="hf_smollm2" \
    --use_custom_sdpa \
    --qlinear
```
Explore the various export options by running `optimum-cli export executorch --help`.
Use the exported model for text generation:
```python
from optimum.executorch import ExecuTorchModelForCausalLM
from transformers import AutoTokenizer

# Load the exported model
model = ExecuTorchModelForCausalLM.from_pretrained("./hf_smollm2")

# Initialize tokenizer and generate text
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")
generated_text = model.text_generation(
    tokenizer=tokenizer,
    prompt="Once upon a time",
    max_seq_len=128,
)
print(generated_text)
```
Custom SDPA is supported with Hugging Face Transformers, boosting performance by 3x compared to the default SDPA, based on tests with `HuggingFaceTB/SmolLM2-135M`.
Currently, Optimum-ExecuTorch supports the XNNPACK Backend with custom SDPA for efficient execution on mobile CPUs.
For a comprehensive overview of all backends supported by ExecuTorch, please refer to the ExecuTorch Backend Overview.
We currently support Post-Training Quantization (PTQ) for linear layers using int8 dynamic per-token activations and int4 grouped per-channel weights (aka `8da4w`), as well as int8 channelwise embedding quantization.
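To apply the linear-layer quantization described above from Python, the `qlinear` option from the quickstart is the relevant knob (a minimal sketch; the embedding quantization option is omitted because its exact parameter name is not shown above):

```python
from optimum.executorch import ExecuTorchModelForCausalLM

# Export with int8 dynamic per-token activations and int4 grouped weights (8da4w)
# applied to the linear layers, as described above
model = ExecuTorchModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM2-135M-Instruct",
    recipe="xnnpack",
    attn_implementation="custom_sdpa",
    **{"qlinear": True},
)
```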
Stay tuned as more optimizations and performance enhancements are coming soon!
The following models have been successfully tested with ExecuTorch. For details on the specific optimizations supported and how to use them for each model, please consult their respective test files in the `tests/models/` directory.
We currently support a wide range of popular transformer models, including encoder-only, decoder-only, and encoder-decoder architectures, as well as models specialized for tasks such as text generation, translation, summarization, and mask prediction. These models reflect current trends and popularity across the Hugging Face community; a minimal export sketch for one of them follows the lists below:
- Albert: `albert-base-v2` and its variants
- Bert: Google's `bert-base-uncased` and its variants
- Distilbert: `distilbert-base-uncased` and its variants
- Eurobert: `EuroBERT-210m` and its variants
- Roberta: FacebookAI's `xlm-roberta-base` and its variants
- Gemma: `Gemma-2b` and its variants
- Gemma2: `Gemma-2-2b` and its variants
- Gemma3: `Gemma-3-1b` and its variants (requires installing the latest `transformers` (4.52.0.dev0) from source)
- Llama: `Llama-3.2-1B` and its variants
- Qwen2: `Qwen2.5-0.5B` and its variants
- Qwen3: `Qwen3-0.6B` and its variants
- Olmo: `OLMo-1B-hf` and its variants
- Phi4: `Phi-4-mini-instruct` and its variants
- Smollm: `SmolLM2-135M` and its variants
- T5: Google's `T5` and its variants
- Cvt: Convolutional Vision Transformer
- Deit: Distilled Data-efficient Image Transformer (base-sized)
- Dit: Document Image Transformer (base-sized)
- EfficientNet: EfficientNet (b0-b7 sized)
- Focalnet: FocalNet (tiny-sized)
- Mobilevit: Apple's MobileViT xx-small
- Mobilevit2: Apple's MobileViTv2
- Pvt: Pyramid Vision Transformer (tiny-sized)
- Swin: Swin Transformer (tiny-sized)
- Whisper: OpenAI's `Whisper` and its variants
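As referenced above, the decoder-only models in these lists can be exported and run with the same causal-LM API from the quickstart; for example (a minimal sketch, assuming `Qwen/Qwen2.5-0.5B` exports with the default settings shown earlier):

```python
from optimum.executorch import ExecuTorchModelForCausalLM
from transformers import AutoTokenizer

# Any of the decoder-only models listed above can be swapped in here
model_id = "Qwen/Qwen2.5-0.5B"
model = ExecuTorchModelForCausalLM.from_pretrained(model_id, recipe="xnnpack")

tokenizer = AutoTokenizer.from_pretrained(model_id)
print(model.text_generation(tokenizer=tokenizer, prompt="The capital of France is", max_seq_len=32))
```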
Note: This list is continuously expanding as support for more models is added.
Check our ExecuTorch GitHub repo directly for:
- More backends and performance optimization options
- Deployment guides for Android, iOS, and embedded devices
- Additional examples and benchmarks
We love your input! We want to make contributing to Optimum ExecuTorch as easy and transparent as possible. Check out how you can:
- Report bugs through GitHub Issues

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.