
Commit 98c12cf

[Doc] fix the autoAWQ example (#7937)
1 parent f52a43a commit 98c12cf


docs/source/quantization/auto_awq.rst

Lines changed: 12 additions & 8 deletions
@@ -19,27 +19,31 @@ You can quantize your own models by installing AutoAWQ or picking one of the `40
 
     $ pip install autoawq
 
-After installing AutoAWQ, you are ready to quantize a model. Here is an example of how to quantize Vicuna 7B v1.5:
+After installing AutoAWQ, you are ready to quantize a model. Here is an example of how to quantize `mistralai/Mistral-7B-Instruct-v0.2`:
 
 .. code-block:: python
 
     from awq import AutoAWQForCausalLM
     from transformers import AutoTokenizer
-
-    model_path = 'lmsys/vicuna-7b-v1.5'
-    quant_path = 'vicuna-7b-v1.5-awq'
+
+    model_path = 'mistralai/Mistral-7B-Instruct-v0.2'
+    quant_path = 'mistral-instruct-v0.2-awq'
     quant_config = { "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM" }
-
+
     # Load model
-    model = AutoAWQForCausalLM.from_pretrained(model_path, **{"low_cpu_mem_usage": True})
+    model = AutoAWQForCausalLM.from_pretrained(
+        model_path, **{"low_cpu_mem_usage": True, "use_cache": False}
+    )
     tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
-
+
     # Quantize
     model.quantize(tokenizer, quant_config=quant_config)
-
+
     # Save quantized model
     model.save_quantized(quant_path)
     tokenizer.save_pretrained(quant_path)
+
+    print(f'Model is quantized and saved at "{quant_path}"')
 
 To run an AWQ model with vLLM, you can use `TheBloke/Llama-2-7b-Chat-AWQ <https://huggingface.co/TheBloke/Llama-2-7b-Chat-AWQ>`_ with the following command:
 
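The command referenced by that final context line falls outside this hunk. For orientation only, here is a minimal sketch of running an AWQ checkpoint through vLLM's Python API, assuming the `LLM` class and its `quantization` argument; it is not necessarily the exact snippet in the doc:

.. code-block:: python

    from vllm import LLM, SamplingParams

    # Load the AWQ checkpoint; quantization="awq" selects vLLM's AWQ kernels.
    llm = LLM(model="TheBloke/Llama-2-7b-Chat-AWQ", quantization="awq")

    # Run a short generation to confirm the quantized model loads and answers.
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
    outputs = llm.generate(["What does AWQ quantization do?"], sampling_params)
    print(outputs[0].outputs[0].text)

The same pattern should also work for a locally quantized directory such as the `quant_path` produced above, passed as the `model=` argument.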
