🚀 Feature
We would like Opacus DP-SGD to support the case where the neural network input is a torch sparse COO tensor.
Motivation
Similar to issue #350, there are cases where the input to the neural network is a torch sparse tensor. In our case, the data is exactly a torch sparse COO tensor, and it is impossible to fit its dense version into GPU memory. It would be great if Opacus DP-SGD (the grad sampler, etc.) were compatible with sparse-tensor inputs to the neural network.
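For context, a minimal sketch of the kind of input we mean (the shapes and values here are illustrative, not our real data): a sparse COO tensor stores only the non-zero coordinates and values, while its dense equivalent materializes every zero, which is what exhausts GPU memory at scale.

```python
import torch

# A tiny 4x5 matrix with only 3 non-zero entries, stored in COO format.
indices = torch.tensor([[0, 1, 3],   # row indices of the non-zeros
                        [2, 0, 4]])  # column indices of the non-zeros
values = torch.tensor([1.0, 2.0, 3.0])
x_sparse = torch.sparse_coo_tensor(indices, values, size=(4, 5))

# The dense version materializes all 20 entries, including the 17 zeros.
x_dense = x_sparse.to_dense()

print(x_sparse._nnz())   # 3 stored values
print(x_dense.numel())   # 20 materialized values
```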
Pitch
We would like Opacus to accept a torch sparse COO tensor as the neural network input. Currently, even if I modify `grad_sample_module.py` L62 from `= grad_sample` to `+= grad_sample` to prevent errors, the results are still incorrect: with a fixed seed, the resulting gradients differ between dense and sparse input. The model trains well with the dense input but cannot be trained well with the sparse input. Any suggestion on how to resolve this would be a great help.
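To illustrate what "gradients differ" means, here is a hedged sketch (not our actual model) showing that in plain PyTorch, without Opacus hooks, the weight gradient through a linear map agrees between dense and sparse COO input. This suggests the discrepancy we observe arises in Opacus's per-sample gradient capture rather than in PyTorch's sparse autograd itself:

```python
import torch

torch.manual_seed(0)
# A standalone weight acting as a linear layer: 5 features -> 3 outputs.
weight = torch.randn(5, 3, requires_grad=True)

# The same batch of 2 inputs, once dense and once as a sparse COO tensor.
x_dense = torch.tensor([[0., 1., 0., 2., 0.],
                        [3., 0., 0., 0., 4.]])
x_sparse = x_dense.to_sparse()

# Dense forward/backward.
out = x_dense @ weight
out.sum().backward()
grad_dense = weight.grad.clone()
weight.grad = None

# Sparse forward/backward; torch.sparse.mm supports autograd
# with respect to the dense operand.
out = torch.sparse.mm(x_sparse, weight)
out.sum().backward()
grad_sparse = weight.grad.clone()

print(torch.allclose(grad_dense, grad_sparse))  # the two gradients match
```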
Alternatives
None.
Additional context
None.
Looking forward to hearing back from you, thank you in advance!