Labels
bug (Confirmed bugs)
Description
🐛 Bug
To Reproduce
Steps to reproduce the behavior:
- Open an Anaconda Prompt and activate the environment: activate webllm
- set TVM_SOURCE_DIR=C:\Users\Username\mlc-llm\3rdparty\tvm
- set MLC_LLM_SOURCE_DIR=C:\Users\Username\mlc-llm
- call C:\Users\Username\emsdk\emsdk_env.bat
- cd C:\Users\Username\dist\models
- git clone https://huggingface.co/mlc-ai/Qwen3-0.6B-q4f16_1-MLC
- python -m mlc_llm compile dist\models\Qwen3-0.6B-q4f16_1-MLC\mlc-chat-config.json --overrides "context_window_size=4096;prefill_chunk_size=1024" --device webgpu -o dist\libs\Qwen3-0.6B-q4f16_1-webgpu.wasm
Expected behavior
Environment
- Platform (e.g. WebGPU/Vulkan/IOS/Android/CUDA):
- Operating system (e.g. Ubuntu/Windows/MacOS/...):
- Device (e.g. iPhone 12 Pro, PC+RTX 3090, ...)
- How you installed MLC-LLM (conda, source):
- How you installed TVM (pip, source):
- Python version (e.g. 3.10): 3.13
- GPU driver version (if applicable): AMD Radeon 680M (25.8.1) / Nvidia GeForce RTX 4060 Laptop GPU (580.97)
- CUDA/cuDNN version (if applicable): CUDA Version: 13.0
- TVM Hash Tag (applicable if you compile models): GIT_COMMIT_HASH: c6fb2be79f654588fd94727a74b9ca0754f63fa4
- Any other relevant information:
Additional context
Deviation from the official tutorial (https://llm.mlc.ai/docs/install/emcc.html):
Step 3's "./web/prep_emcc_deps.sh" cannot be run on Windows. Instead, install make with choco install make and run make directly inside \mlc-llm\web.
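The Windows workaround described above can be sketched as the following cmd session (a sketch only, not from the official docs; the emsdk and repository paths are the ones used in the repro steps):

```shell
REM Windows (cmd) replacement for ./web/prep_emcc_deps.sh, which assumes
REM a POSIX shell. Requires Chocolatey for the one-time make install.
choco install make

REM Put emcc on PATH for this session.
call C:\Users\Username\emsdk\emsdk_env.bat

REM Build the MLC wasm runtime; output lands in web\dist\wasm.
cd C:\Users\Username\mlc-llm\web
make

REM Build the TVM web runtime the same way; output in web\dist\wasm.
cd C:\Users\Username\mlc-llm\3rdparty\tvm\web
make
```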
Contents of \mlc-llm\web\dist\wasm after the build (size, filename):
2,112 mlc_wasm_runtime.bc
57 mlc_wasm_runtime.d
195 mlc_wasm_runtime.wasm
Similarly, make was used to compile within \mlc-llm\3rdparty\tvm\web.
Contents of \mlc-llm\3rdparty\tvm\web\dist\wasm (size, filename):
80,935 tvmjs_runtime.js
4,577,631 tvmjs_runtime.wasm
173,764 tvmjs_support.bc
4,125 tvmjs_support.d
8,327,796 wasm_runtime.bc
11,530 wasm_runtime.d
233,464 webgpu_runtime.bc
5,047 webgpu_runtime.d
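As a quick sanity check that the emitted .wasm files (e.g. tvmjs_runtime.wasm from the listing above) are structurally valid WebAssembly, the binary preamble can be inspected. This is a generic check, not part of MLC-LLM; the demo file name is made up:

```python
# Every WebAssembly binary starts with the 4-byte magic b"\x00asm"
# followed by a 4-byte little-endian version field (currently 1).
def looks_like_wasm(path):
    with open(path, "rb") as f:
        header = f.read(8)
    return len(header) == 8 and header[:4] == b"\x00asm"

# Demonstration with a minimal stand-in file carrying the wasm preamble;
# in practice, point this at dist\wasm\tvmjs_runtime.wasm etc.
with open("demo.wasm", "wb") as f:
    f.write(b"\x00asm" + (1).to_bytes(4, "little"))

print(looks_like_wasm("demo.wasm"))  # → True
```

A file that fails this check was likely truncated or is not a compiled wasm artifact at all, which helps distinguish a broken build from a runtime loading problem.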