Developing a system for benchmarking #304

@TomDonoghue

Description

There has been some discussion of developing a more consistent setup / approach for profiling the module / models, and of using this going forward.

Recent optimizations (#299) spurred this discussion, including the following comment (to keep in mind) about further work on benchmarking: #299 (comment)

For some basic record keeping: in terms of model fitting, we currently spend essentially all of our time in curve_fit, with little obvious opportunity to speed things up beyond what was already done in #299. See this comment (#299 (comment)) for some notes on potentially exploring different fitting approaches.
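As a minimal sketch of how the "time spent in curve_fit" observation could be checked and recorded, the following profiles repeated curve_fit calls with cProfile. Note that the `expo` function here is a simple 1/f-like stand-in for illustration, not the actual specparam model function:

```python
import cProfile
import pstats

import numpy as np
from scipy.optimize import curve_fit

def expo(xs, offset, exponent):
    # Hypothetical stand-in: a simple aperiodic-like 1/f form
    return offset - np.log10(xs ** exponent)

# Simulate some noisy data to fit
np.random.seed(0)
xs = np.linspace(1, 50, 500)
ys = expo(xs, 1.0, 1.5) + np.random.normal(0, 0.05, len(xs))

# Profile repeated fits, to see where the time goes
profiler = cProfile.Profile()
profiler.enable()
for _ in range(100):
    popt, _ = curve_fit(expo, xs, ys, p0=[1.0, 1.0])
profiler.disable()

# Print the top cumulative-time entries (curve_fit should dominate)
stats = pstats.Stats(profiler)
stats.sort_stats('cumulative').print_stats(5)
```

The same pattern could be pointed at a full model fit instead of a bare curve_fit call, to confirm where time is spent as the code evolves.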

In terms of some basic benchmarking sims / tests, there are some starting points:
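One hedged sketch of what a basic benchmarking test could look like: timing fits across a range of input sizes with timeit, and collecting the per-fit timings for record keeping. The `expo` function and sizes here are hypothetical placeholders, not part of the module:

```python
import timeit

import numpy as np
from scipy.optimize import curve_fit

def expo(xs, offset, exponent):
    # Hypothetical stand-in for the model function being benchmarked
    return offset - np.log10(xs ** exponent)

def run_fit(n_points):
    # One fit on simulated data of a given size
    xs = np.linspace(1, 50, n_points)
    ys = expo(xs, 1.0, 1.5)
    curve_fit(expo, xs, ys, p0=[1.0, 1.0])

# Average fit time across a few input sizes
results = {}
for n_points in [100, 500, 2500]:
    per_fit = timeit.timeit(lambda: run_fit(n_points), number=20) / 20
    results[n_points] = per_fit
    print(f"{n_points:5d} points: {per_fit * 1e3:.3f} ms per fit")
```

Storing timings like these alongside a version / commit identifier would give a simple baseline to compare against after future optimizations.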

I think the general idea would be to develop anything like this with or after the 2.0 release - so for now we can use this issue for any further discussion, and keep this in mind as we move towards 2.0 and beyond.

Metadata


Labels: >2.0 (Idea for beyond specparam 2.0.)
