Description
When I try to run video_chat_with_MOSS, I get this error:
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 322, in run_predict
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1015, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 833, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 2505, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 1005, in run
    result = context.run(func, *args)
  File "/home/Ask-Anything/video_chat_with_MOSS/moss.py", line 44, in run_text
    outputs = self.generator(history, max_new_tokens=1024, num_return_sequences=1, num_beams=1, do_sample=True,
  File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/text_generation.py", line 209, in __call__
    return super().__call__(text_inputs, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py", line 1109, in __call__
    return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
  File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py", line 1116, in run_single
    model_outputs = self.forward(model_inputs, **forward_params)
  File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py", line 1015, in forward
    model_outputs = self._forward(model_inputs, **forward_params)
  File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/text_generation.py", line 251, in _forward
    generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py", line 1477, in generate
    input_ids, model_kwargs = self._expand_inputs_for_generation(
  File "/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py", line 690, in _expand_inputs_for_generation
    input_ids = input_ids.repeat_interleave(expand_size, dim=0)
RuntimeError: repeats can not be negative
```
I added a `print("expand_size:", expand_size)` right before this line, and it prints `expand_size: 1`, so the value is not actually negative as the error message claims.
I don't know what's going on here. How can I fix this? Could it be a gradio version problem?
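For reference, the error itself is easy to reproduce in isolation: `torch.Tensor.repeat_interleave` raises exactly this message whenever its `repeats` argument is negative, while a value of 1 succeeds as a no-op. A minimal sketch, independent of the MOSS/Gradio code (the tensor here is just a stand-in for tokenized input):

```python
import torch

input_ids = torch.tensor([[1, 2, 3]])  # stand-in for a batch of token ids

# expand_size = 1 is a no-op repeat along the batch dimension
expanded = input_ids.repeat_interleave(1, dim=0)
print(expanded.shape)  # torch.Size([1, 3])

# A negative repeat count raises the same RuntimeError seen in the traceback
try:
    input_ids.repeat_interleave(-1, dim=0)
except RuntimeError as err:
    print(err)
```

This at least confirms that whatever value reaches `repeat_interleave` at the moment of the crash must be negative, even if the printed `expand_size` looks fine.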