
RuntimeError: Could not run 'prepacked::conv2d_clamp_run' with arguments from the 'CUDA' backend. #62


Open
JerryAuas opened this issue Jul 13, 2022 · 2 comments


@JerryAuas

I am using CUDA 10.2 and torch 1.9.0.
When I run it, I get this error. I have tried a few things, but I could not solve it; hopefully you can give me some help @yasenh. Thanks!

@JerryAuas
Author

cuda is available
Class: [ person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush, ]
Run once on empty image
----------New Frame----------
pre-process takes : 1898 ms
terminate called after throwing an instance of 'std::runtime_error'
what(): The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
File "code/torch/models/yolo/___torch_mangle_1108.py", line 16, in forward
_7 = torch.slice(_6, 3, 1, 9223372036854775807, 2)
input = torch.cat([_1, _3, _5, _7], 1)
_8 = ops.prepacked.conv2d_clamp_run(input, CONSTANTS.c0)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
input0 = torch.mul(_8, torch.sigmoid(_8))
_9 = ops.prepacked.conv2d_clamp_run(input0, CONSTANTS.c1)

Traceback of TorchScript, original code (most recent call last):
%weight, %bias, %stride, %padding, %dilation, %groups,
%output_min_max, %output_min_max)
%r = prepacked::conv2d_clamp_run(%input, %packed_weight_bias)
     ~~~~~~~~~ <--- HERE
return (%r)
RuntimeError: Could not run 'prepacked::conv2d_clamp_run' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'prepacked::conv2d_clamp_run' is only available for these backends: [CPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode].

CPU: registered at ../aten/src/ATen/native/xnnpack/RegisterOpContextClass.cpp:84 [kernel]
BackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
AutogradOther: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:35 [backend fallback]
AutogradCPU: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:39 [backend fallback]
AutogradCUDA: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:43 [backend fallback]
AutogradXLA: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:47 [backend fallback]
Tracer: fallthrough registered at ../torch/csrc/jit/frontend/tracer.cpp:999 [backend fallback]
Autocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:250 [backend fallback]
Batched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]
VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]

This is the terminal log.
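
The backend list in the error explains what is going on: prepacked::conv2d_clamp_run is an XNNPACK operator that is only registered for the CPU backend. It is baked into a TorchScript file by the torch.utils.mobile_optimizer.optimize_for_mobile() pass, so a weights file exported that way cannot run on CUDA no matter how the libtorch side is configured. The usual fix is to re-export the TorchScript file from the eager-mode model without the mobile-optimization pass. A minimal re-export sketch, assuming the standard ultralytics/yolov5 hub model (the file name and input size here are illustrative, not from this issue):

```python
import torch

# Load the eager-mode YOLOv5 model; autoshape=False returns the raw
# nn.Module so it can be traced with a plain tensor input.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", autoshape=False)
model.eval()

# Trace on the device you intend to run on. Crucially, do NOT call
# torch.utils.mobile_optimizer.optimize_for_mobile() afterwards -- that
# pass is what rewrites conv2d into the CPU-only prepacked XNNPACK ops.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

example = torch.zeros(1, 3, 640, 640, device=device)
traced = torch.jit.trace(model, example, strict=False)
traced.save("yolov5s_gpu.torchscript.pt")
```

If re-exporting is not an option, the already-exported module should still run as long as both the module and its inputs stay on the CPU, i.e. load it in libtorch without moving it to torch::kCUDA.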

@wujunyi1412

wujunyi1412 commented Jul 21, 2023

I have the same issue and have not solved it.
(Fellow Jiangsu native here!)
