Description
PLEASE READ BEFORE SUBMITTING AN ISSUE
MagicQuill is not a commercial software but a research project. While we strive to improve and maintain it, support is provided on a best-effort basis. Please be patient and respectful in your communications.
To help us respond faster and better, please ensure the following:
- Search Existing Resources: Have you looked through the documentation (e.g., hardware requirements and setup steps) and searched online for potential solutions?
- Avoid Duplication: Check if a similar issue already exists.
If the issue persists, fill out the details below.
Checklist
- I have searched the documentation and FAQs.
- I have searched for similar issues but couldn’t find a solution.
- I have provided clear and detailed information about the issue.
Issue/Feature Request Description
Type of Issue:
- Bug
- Feature Request
- Question
Summary:
I did a fresh install on Linux and ran ./linux_setup.sh. The console output was:
We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set max_memory
in to a higher value to use more memory (at your own risk).
Loading checkpoint shards: 0%| | 0/3 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/chris/ai/images/MagicQuill/gradio_run.py", line 24, in <module>
llavaModel = LLaVAModel()
File "/home/chris/ai/images/MagicQuill/MagicQuill/llava_new.py", line 26, in __init__
self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model(
File "/home/chris/ai/images/MagicQuill/MagicQuill/LLaVA/llava/model/builder.py", line 117, in load_pretrained_model
model = LlavaLlamaForCausalLM.from_pretrained(
File "/home/chris/anaconda3/envs/MagicQuill/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3850, in from_pretrained
) = cls._load_pretrained_model(
File "/home/chris/anaconda3/envs/MagicQuill/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4284, in _load_pretrained_model
new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
File "/home/chris/anaconda3/envs/MagicQuill/lib/python3.10/site-packages/transformers/modeling_utils.py", line 839, in _load_state_dict_into_meta_model
set_module_quantized_tensor_to_device(model, param_name, param_device, value=param)
File "/home/chris/anaconda3/envs/MagicQuill/lib/python3.10/site-packages/transformers/integrations/bitsandbytes.py", line 121, in set_module_quantized_tensor_to_device
new_value = bnb.nn.Params4bit(new_value, requires_grad=False, **kwargs).to(device)
File "/home/chris/anaconda3/envs/MagicQuill/lib/python3.10/site-packages/bitsandbytes/nn/modules.py", line 331, in to
return self._quantize(device)
File "/home/chris/anaconda3/envs/MagicQuill/lib/python3.10/site-packages/bitsandbytes/nn/modules.py", line 296, in _quantize
w_4bit, quant_state = bnb.functional.quantize_4bit(
File "/home/chris/anaconda3/envs/MagicQuill/lib/python3.10/site-packages/bitsandbytes/functional.py", line 1237, in quantize_4bit
lib.cquantize_blockwise_fp16_nf4(*args)
AttributeError: 'NoneType' object has no attribute 'cquantize_blockwise_fp16_nf4'
Error: Failed to run MagicQuill.
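For context on the traceback above: the final AttributeError suggests that bitsandbytes could not load its native CUDA library, leaving its internal library handle as None, so the 4-bit quantization call fails before any model weights are processed. This is a minimal sketch of that failure mode (the `lib = None` stand-in is an assumption about bitsandbytes' internal state, not its actual code):

```python
# Sketch of the failure mode seen in the traceback: when bitsandbytes cannot
# load its compiled CUDA binary, the native-library handle is effectively None,
# and any quantization call raises AttributeError on NoneType.
lib = None  # stand-in for bitsandbytes' unloaded native library (assumption)

try:
    # Mirrors the call site in bitsandbytes/functional.py
    lib.cquantize_blockwise_fp16_nf4()
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'cquantize_blockwise_fp16_nf4'
```

If this diagnosis is right, the fix is usually on the environment side (a CUDA-enabled PyTorch build and a matching bitsandbytes wheel) rather than in MagicQuill itself; recent bitsandbytes versions ship a diagnostic entry point (`python -m bitsandbytes`) that may report what failed to load.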