Having cuda memory error while running Prompt Node with gpu #5186
Answered by @julian-risch on Stack Overflow and Discord:
Hi. I am getting a CUDA out-of-memory error while running this code:
prompt_node = PromptNode(model_name_or_path='google/flan-t5-xl',
                         default_prompt_template=lfqa_prompt,
                         use_gpu=True,
                         max_length=300)
I have tried to work around the CUDA issue. The retriever runs on the GPU without problems; the error only occurs when I use the PromptNode. Any suggestions on how to fix it?
The error is:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 14.85 GiB total capacity; 4.02 GiB already allocated; 17.44 MiB free; 4.02 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
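A back-of-the-envelope check shows why the card runs out of memory. Assuming flan-t5-xl has roughly 2.85 billion parameters (the commonly reported size for the ~3B model class; this figure is an assumption, not from the original post), the fp32 weights alone approach the 14.85 GiB capacity once the ~4 GiB already held by the retriever is subtracted:

```python
# Rough memory estimate for flan-t5-xl weights (parameter count is
# an approximation for the ~3B class; activations and KV cache add more).
PARAMS = 2.85e9
GIB = 1024 ** 3

fp32_gib = PARAMS * 4 / GIB  # 4 bytes per parameter in float32
fp16_gib = PARAMS * 2 / GIB  # 2 bytes per parameter in float16

print(f"fp32 weights: {fp32_gib:.1f} GiB")  # ~10.6 GiB
print(f"fp16 weights: {fp16_gib:.1f} GiB")  # ~5.3 GiB
```

With ~10.6 GiB of fp32 weights plus ~4 GiB already allocated, there is no headroom left for activations on a 14.85 GiB card, which matches the tiny "17.44 MiB free" in the traceback. Half precision would roughly halve the weight footprint.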
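One common mitigation is loading the model in half precision so the weights take roughly half the memory. A minimal sketch, assuming your Haystack version's PromptNode forwards model_kwargs to the underlying Hugging Face model (check the PromptNode docs for your release; lfqa_prompt is the template from the original snippet):

```python
import torch
from haystack.nodes import PromptNode

# Sketch, not a verified fix: pass torch_dtype=float16 through
# model_kwargs to roughly halve the weight memory of flan-t5-xl.
# `lfqa_prompt` is assumed to be defined as in the original snippet.
prompt_node = PromptNode(
    model_name_or_path="google/flan-t5-xl",
    default_prompt_template=lfqa_prompt,
    use_gpu=True,
    max_length=300,
    model_kwargs={"torch_dtype": torch.float16},
)
```

If that is still too tight, switching to a smaller checkpoint such as google/flan-t5-large, or freeing the retriever's GPU memory before instantiating the PromptNode, are other options worth trying.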