
Precaching and FFT planning #32

Open · ChrisRackauckas opened this issue Feb 16, 2022 · 2 comments

@ChrisRackauckas
Member

Similar to the FastDense optimizations in SciML/DiffEqFlux.jl#671, this library could definitely benefit from pre-cached versions of the operations, since the neural networks involved are generally small. In addition, the plan_fft result could be cached and reused across subsequent calls. Given the amount of reuse, direct control of the planning could be helpful:

The flags argument is a bitwise-or of FFTW planner flags, defaulting to FFTW.ESTIMATE. e.g. passing FFTW.MEASURE or FFTW.PATIENT will instead spend several seconds (or more) benchmarking different possible FFT algorithms and picking the fastest one; see the FFTW manual for more information on planner flags. The optional timelimit argument specifies a rough upper bound on the allowed planning time, in seconds. Passing FFTW.MEASURE or FFTW.PATIENT may cause the input array A to be overwritten with zeros during plan creation.
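For reference, a minimal sketch of standard FFTW.jl planning with explicit flags and a time limit (the array size and flag choices below are illustrative, not anything this package currently does):

```julia
using FFTW

x = randn(ComplexF64, 64, 64)

# Spend extra time benchmarking candidate algorithms at planning time, capped
# at roughly 5 seconds; MEASURE may overwrite x with zeros during planning.
p = plan_fft(x; flags = FFTW.MEASURE, timelimit = 5.0)

x = randn(ComplexF64, 64, 64)  # refill, since MEASURE can clobber the planning array
y = p * x                      # reuse the cached plan for every subsequent transform
```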

Note that precaching only removes allocations when there is a single forward pass before the reverse pass. A separate pointer-bumping scheme would be necessary to precache a whole batch of test inputs if multiple batches are used in one loss function.
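As a minimal sketch of the allocation-free single-forward case, assuming one pre-allocated output buffer reused across calls (buffer names are illustrative):

```julia
using FFTW, LinearAlgebra

x = randn(ComplexF64, 64, 64)
p = plan_fft(x)
y = similar(x)   # pre-allocated output buffer, created once

# Apply the cached plan in place; repeated calls on same-sized inputs
# allocate nothing beyond the buffers created above.
mul!(y, p, x)
```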

@pzimbrod
Contributor

I did toy around with FFT plans at an earlier point (#11, #14), but then put it off since it turned out to be too much hassle for me at the time. Does this cover something different?

@ChrisRackauckas
Member Author

Nope, that's the same thing. To avoid those issues, a direct rrule will be needed.
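A minimal sketch of what such an rrule could look like, assuming a complex plan over all dimensions of x; the apply_plan wrapper is hypothetical and not an existing function of this package:

```julia
using FFTW, ChainRulesCore

# Hypothetical wrapper so the cached-plan application has a single entry
# point to attach a rule to.
apply_plan(p, x) = p * x

function ChainRulesCore.rrule(::typeof(apply_plan), p, x)
    y = p * x
    function apply_plan_pullback(ȳ)
        # The adjoint of the unnormalized DFT is the unnormalized backward
        # transform, which FFTW exposes as bfft.
        x̄ = bfft(unthunk(ȳ))
        return NoTangent(), NoTangent(), x̄
    end
    return y, apply_plan_pullback
end
```

With a rule like this, the AD system never differentiates through the plan object itself, which sidesteps the issues hit in #11 and #14.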
