Is your feature request related to a problem? Please describe.
After training a model, running model.predict is really slow: 100-200x slower than I think it should be. Perhaps it is user error and I did not come across a way to make this faster, or maybe it is a bug if inference is supposed to be faster.
Inference for a very simple model takes ~5 ms, but the underlying numpy computations should only take ~20 µs.
Here is a speed test result:
https://github.com/florisvb/Nonlinear_and_Data_Driven_Estimation/blob/main/Lesson_15_SINDY/speed_test.png
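For anyone who doesn't want to run the full notebook, here is a minimal timing sketch along the same lines. The toy data is made up, and the comparison against `feature_library.transform` followed by a matmul is my assumption about what the predict call should reduce to:

```python
import timeit

import numpy as np
import pysindy as ps

# Toy data (just to have a fitted model): sin/cos trajectory.
dt = 0.001
t = np.arange(0, 10, dt)
x = np.column_stack([np.sin(t), np.cos(t)])

model = ps.SINDy(feature_library=ps.PolynomialLibrary(degree=2))
model.fit(x, t=dt)

x0 = x[:1]  # single sample, shape (1, n_states)
xi = model.coefficients()  # (n_states, n_library_terms)
lib = model.feature_library  # already fitted by model.fit

n = 1000
t_predict = timeit.timeit(lambda: model.predict(x0), number=n) / n
# The "raw" equivalent: library features times coefficients.
# (Assumes the library's sklearn-style transform accepts a plain ndarray;
# this may vary between pysindy versions.)
t_numpy = timeit.timeit(lambda: lib.transform(x0) @ xi.T, number=n) / n

print(f"model.predict:       {t_predict * 1e6:.1f} us per call")
print(f"transform + matmul:  {t_numpy * 1e6:.1f} us per call")
```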
Describe the solution you'd like
Two options I can think of:
- Modify the predict method to be faster. E.g., is there stuff that predict is computing that is unnecessary? Is there an issue with the types? Is there a way to get a sparse model so that we're not wasting compute on features that will be zeroed out anyway?
- Export the model as a fast inference model. For example, could the nonzero features and their coefficients be exported to a portable file type and reconstructed as a very fast function? This would be really nice to have regardless. (A rough sketch of what I mean is below.)
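Here is a rough sketch of what I have in mind for option 2. The helper names `export_sparse_model` / `load_fast_predict` are hypothetical (nothing like this exists in pysindy as far as I know), and the term-name parsing assumes PolynomialLibrary-style names such as `'x0 x1^2'`:

```python
import json

import numpy as np

def export_sparse_model(model, path):
    """Save only the nonzero library terms and their coefficients as JSON."""
    xi = model.coefficients()          # (n_states, n_library_terms)
    names = model.get_feature_names()  # e.g. ['1', 'x0', 'x1', 'x0 x1', ...]
    terms = []
    for i, name in enumerate(names):
        col = xi[:, i]
        if np.any(col != 0):  # skip terms zeroed out by the optimizer
            terms.append({"name": name, "coef": col.tolist()})
    with open(path, "w") as f:
        json.dump(terms, f)

def load_fast_predict(path):
    """Rebuild a minimal inference function from the saved sparse terms."""
    with open(path) as f:
        terms = json.load(f)
    coefs = np.array([t["coef"] for t in terms])  # (n_terms, n_states)
    names = [t["name"] for t in terms]

    def predict(x):
        # Evaluate each surviving polynomial term by parsing its name,
        # e.g. 'x0 x1^2' -> x[0] * x[1]**2 (assumes PolynomialLibrary naming).
        feats = np.empty(len(names))
        for j, name in enumerate(names):
            val = 1.0
            if name != "1":
                for factor in name.split(" "):
                    if "^" in factor:
                        var, p = factor.split("^")
                        val *= x[int(var[1:])] ** int(p)
                    else:
                        val *= x[int(factor[1:])]
            feats[j] = val
        return feats @ coefs  # (n_states,) predicted derivatives

    return predict

# Usage:
# export_sparse_model(model, "model.json")
# fast_predict = load_fast_predict("model.json")
# x_dot = fast_predict(x[0])
```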
Describe alternatives you've considered
I could do option 2 with enough time, but it would be nice if there were a pysindy method to save and load a model. Or maybe there is one and I did not find it.
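As a stopgap I can pickle the fitted model (this seems to work since the model is an sklearn-style estimator, though I haven't checked every optimizer), but that still goes through the slow predict path:

```python
import pickle

# Round-trip the fitted model through a file.
with open("sindy_model.pkl", "wb") as f:
    pickle.dump(model, f)

with open("sindy_model.pkl", "rb") as f:
    restored = pickle.load(f)

restored.predict(x0)  # portable, but same slow inference path
```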
Additional context
This is not a minimal example, but it is a fully functional notebook that shows the issue. It will run in Google Colab, but it takes a little time to install everything and download the data used with pysindy:
https://github.com/florisvb/Nonlinear_and_Data_Driven_Estimation/blob/main/Lesson_15_SINDY/A_planar_drone_pysindy_speed_test.ipynb