Optimized Qwen2.5-3B using GPTQ, reducing model size from 5.75 GB to 1.93 GB and improving inference speed. Ideal for efficient edge AI deployments.
Updated May 24, 2025 - Python
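The size reduction above (roughly 3x) is consistent with storing weights as 4-bit integers plus per-group scales instead of 16-bit floats. As a toy illustration only (real GPTQ additionally uses Hessian-based error compensation when rounding, which is omitted here), the basic round-to-nearest step with a per-group scale can be sketched as:

```python
# Toy sketch of 4-bit weight quantization, the storage format GPTQ produces.
# Real GPTQ also compensates rounding error using second-order information;
# this only shows per-group round-to-nearest for illustration.

def quantize_group(weights, bits=4):
    """Map a group of float weights to signed ints in [-8, 7] plus one scale."""
    qmax = 2 ** (bits - 1) - 1                      # 7 for signed 4-bit
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize_group(q, scale):
    """Recover approximate float weights from the 4-bit codes."""
    return [v * scale for v in q]

if __name__ == "__main__":
    weights = [0.12, -0.56, 0.33, 0.91, -0.07, 0.44, -0.88, 0.25]
    q, scale = quantize_group(weights)
    restored = dequantize_group(q, scale)
    max_err = max(abs(a - b) for a, b in zip(weights, restored))
    print(q, round(scale, 4), round(max_err, 4))
```

Each 16-bit weight collapses to 4 bits plus a shared scale per group, which is where the roughly 3x on-disk reduction comes from; the reconstruction error per weight is bounded by half the group's scale.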