Description
This issue relates to step size multiples and observers.
Upon completion of a simulator step, execution calls `observer::simulator_step_complete(..., lastStep, ...)`, where `lastStep` is incremented on every execution step. This creates a slight incorrectness for the end user of a `time_series_observer`.
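
To make the mismatch concrete, here is a minimal sketch, not cse-core's actual stepping loop (`decimation_factor` and the loop structure are illustrative assumptions), of how the global step counter advances on every base step while a decimated simulator is only observed on every Nth step:

```cpp
// Minimal sketch, not cse-core's actual loop: shows why the recorded
// step numbers of a decimated simulator are sparse.
#include <cstdint>
#include <iostream>

using step_number = std::int64_t;

int main()
{
    const int decimation_factor = 2; // this simulator steps every 2nd base step
    step_number lastStep = 0;

    for (int i = 0; i < 10; ++i) {
        ++lastStep; // incremented on every execution step, decimated or not
        if (lastStep % decimation_factor == 0) {
            // Only here does the decimated simulator actually step, so the
            // observer receives simulator_step_complete(..., lastStep, ...)
            // and records a sample tagged with this step number.
            std::cout << "sample at step " << lastStep << '\n';
        }
    }
    // Output: steps 2, 4, 6, 8, 10 -- only half of the 10 step numbers
    // carry a sample.
}
```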
The design intent is to first query the observer for two step numbers:
`time_series_provider::get_step_numbers(simulator_index, duration, gsl::span<step_number>)`
These represent the beginning and the end of the time range the calling code will request values for.
The calling code will then (in cse-server-go's case) allocate arrays for values, step numbers, and time points based on the difference between these two numbers.
When a simulator has a step size decimation factor > 1, this leads to an over-allocation of memory in the calling code: with a decimation factor of N, the observer only records a sample every Nth step, so only about 1/N of the allocated slots (at most half, for any factor above 1) will be filled when subsequently calling `time_series_provider::get_xxx_samples()`.
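
To make the allocation pattern concrete, here is a hedged C++ sketch (C++ for consistency with the identifiers above, even though cse-server-go is Go). The stand-in types and the `get_real_samples` signature are illustrative assumptions, not verified cse-core declarations:

```cpp
// Hedged sketch of the calling pattern; the stand-in types and the
// get_real_samples signature are illustrative assumptions, not verified
// against cse-core's headers.
#include <cstddef>
#include <cstdint>
#include <vector>
#include <gsl/gsl>

using step_number = std::int64_t; // stand-ins for the cse-core types
using simulator_index = int;
using value_reference = std::uint32_t;

struct time_series_provider
{
    virtual void get_step_numbers(
        simulator_index sim, double duration, gsl::span<step_number> steps) = 0;
    virtual std::size_t get_real_samples(
        simulator_index sim, value_reference var, step_number fromStep,
        gsl::span<double> values, gsl::span<step_number> steps) = 0;
    virtual ~time_series_provider() = default;
};

std::size_t fetch_samples(
    time_series_provider& provider,
    simulator_index sim, value_reference var, double duration)
{
    // 1. Query the two step numbers bounding the requested duration.
    step_number range[2];
    provider.get_step_numbers(sim, duration, gsl::span<step_number>(range));

    // 2. Size the buffers from the difference between the two step numbers.
    //    For a simulator with decimation factor N, only ~1/N of the step
    //    numbers in [range[0], range[1]] have a recorded sample, so this
    //    over-allocates.
    const auto capacity = static_cast<std::size_t>(range[1] - range[0] + 1);
    std::vector<double> values(capacity);
    std::vector<step_number> steps(capacity);

    // 3. Fetch; the returned count is the number of slots actually filled.
    return provider.get_real_samples(
        sim, var, range[0],
        gsl::span<double>(values), gsl::span<step_number>(steps));
}
```

With a decimation factor of 2, `capacity` ends up roughly twice the count that `get_real_samples` returns, which is the waste described above.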
Before discussing a fix, is this worth fixing?
TL;DR: wasted memory when extracting time series from slaves with step size multiples.