
Make C3D read time test relative #2221


Open · wants to merge 6 commits into main

Conversation

@aseth1 (Member) commented Jul 17, 2018

Fixes issue #2217

Brief summary of changes

Update to #2218

Testing I've completed

All tests pass locally.

Looking for feedback on...

N/A

CHANGELOG.md (choose one)

  • no need to update


@chrisdembia (Member)


The following tests FAILED:
  2 - testC3DFileAdapter (Failed)
Errors while running CTest
Command exited with code 8

:(

@aseth1 (Member, Author) commented Jul 17, 2018

It is very suspicious that the MaximumLoadTimeInMS = 100ms in the second case. That means the loadTime for the standard file was 2ms, which is 2x faster than my machine. In the first case, 'walking2.c3d' loaded in 401ms and passed, which indicates a loadTime > 8ms (2x slower than my machine). What could be creating such large variability in loadTime?
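For illustration only, a minimal arithmetic check of the relative limit described above. If the limit is a fixed multiple of the benchmark load time, the quoted numbers are consistent with a multiplier of roughly 50 (100 ms / 2 ms, and a 401 ms pass implying a benchmark load of just over 8 ms); the multiplier is inferred here, not taken from the actual test code:

```cpp
#include <iostream>

int main() {
    const double multiplier = 50.0;          // inferred for illustration, not from the test
    const double benchmarkLoadTimeMS = 2.0;  // unusually fast CI read of the benchmark file
    const double maximumLoadTimeMS = multiplier * benchmarkLoadTimeMS;  // = 100 ms
    const double c3dLoadTimeMS = 401.0;      // observed load time of walking2.c3d
    std::cout << "limit " << maximumLoadTimeMS << " ms, actual " << c3dLoadTimeMS
              << " ms -> " << (c3dLoadTimeMS <= maximumLoadTimeMS ? "PASS" : "FAIL")
              << std::endl;
    return 0;
}
```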

@chrisdembia (Member)

> What could be creating such large variability in loadTime?

I don't know, but it seems like this approach won't work.

@aseth1 (Member, Author) commented Jul 17, 2018

The issue seems to stem from std::clock() not necessarily being monotonic and depending on what other demands are on the core (see here). The correct class to use for comparing timings, then, is perhaps std::chrono::steady_clock?
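For reference, a minimal sketch of interval timing with std::chrono::steady_clock; the file-read call is omitted, so this is not the adapter's actual test code:

```cpp
#include <chrono>
#include <iostream>

int main() {
    // steady_clock is monotonic, so the measured interval cannot go backwards
    // if the system clock is adjusted while the test runs.
    const auto start = std::chrono::steady_clock::now();
    // ... perform the file read being timed here (omitted in this sketch) ...
    const auto stop = std::chrono::steady_clock::now();

    const double elapsedMS =
        std::chrono::duration<double, std::milli>(stop - start).count();
    std::cout << "load time: " << elapsedMS << " ms" << std::endl;
    return 0;
}
```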

…notonically increasing instead of std::clock() for timing.
@aseth1 changed the title from "Make c3 d read time test relative" to "Make C3D read time test relative" on Jul 17, 2018
@aseth1 (Member, Author) commented Jul 18, 2018

@chrisdembia and @tkuchida, I believe the combination of using std::chrono::steady_clock and using a size-based rationale for scaling the load time of the benchmark file makes this test more robust across different platforms. However, it is also far less stringent, at least locally, where the MaximumLoadTimeInS is close to 2s and loading the C3D files typically takes less than 50ms.
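A sketch of the size-scaled limit described above, assuming C++17 for std::filesystem. The file names come from the test, but the timing helper, the slack factor, and the exact scaling formula are illustrative assumptions rather than the code in this PR:

```cpp
#include <chrono>
#include <cstdio>
#include <filesystem>

// Sketch: derive the allowed C3D load time from the measured load time of a small
// benchmark file, scaled by the ratio of the two file sizes, plus a slack factor.
int main() {
    namespace fs = std::filesystem;
    using clock = std::chrono::steady_clock;

    const char* benchmarkFile = "test.sto";   // small benchmark file used by the test
    const char* c3dFile = "walking2.c3d";     // C3D file under test

    auto timeLoadSeconds = [](const char* path) {
        (void)path;  // the actual read is omitted in this sketch
        const auto t0 = clock::now();
        // ... call the appropriate FileAdapter read for 'path' here ...
        const auto t1 = clock::now();
        return std::chrono::duration<double>(t1 - t0).count();
    };

    const double benchmarkLoadTimeS = timeLoadSeconds(benchmarkFile);
    const double sizeRatio = static_cast<double>(fs::file_size(c3dFile)) /
                             static_cast<double>(fs::file_size(benchmarkFile));
    const double slack = 2.0;  // illustrative tolerance, not from the PR
    const double maximumLoadTimeS = slack * sizeRatio * benchmarkLoadTimeS;

    const double c3dLoadTimeS = timeLoadSeconds(c3dFile);
    std::printf("limit %.3f s, actual %.3f s\n", maximumLoadTimeS, c3dLoadTimeS);
    // A test condition would assert c3dLoadTimeS <= maximumLoadTimeS here.
    return 0;
}
```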

@chrisdembia (Member)

Did you try running the appveyor/pr test multiple times?

@tkuchida (Member)

> I believe the combination of using std::chrono::steady_clock and using a size-based rationale for scaling the load time of the benchmark file makes this test more robust across different platforms. However, it is also far less stringent, at least locally, where the MaximumLoadTimeInS is close to 2s and loading the C3D files typically takes less than 50ms.

The CPU could be doing other things when loading "test.sto" and/or "walking*.c3d", so I'm not sure wall clock time is the best metric. It also isn't obvious that it would take 1000× longer to load a file that is 1000× larger---wouldn't it depend on, e.g., how fragmented the file is (seek time)? It's difficult to think of a solution without knowing why the CI machine is slow. Perhaps the wall clock speed test should be skipped on CI?

@aseth1 (Member, Author) commented Jul 19, 2018

> I'm not sure wall clock time is the best metric

std::chrono::steady_clock is not wall clock time. It should be OK at measuring intervals.

> It also isn't obvious that it would take 1000× longer to load a file that is 1000× larger---wouldn't it depend on, e.g., how fragmented the file is (seek time)?

Yes it's possible, but the test files are pretty small. I am hoping that because the benchmark test.sto file is tiny it easily fits within one allocation block. Of course, file open time could be the dominant time in this case so that the limit reflects the operation of opening the file 1000 times and has nothing to do with time to read data. This seems to be the case since I can read the 1000x sized file typically within 20x the time to load the benchmark file.

> It's difficult to think of a solution without knowing why the CI machine is slow.

Slow machines would be fine. The paradox I was running into was that CI would occasionally read the benchmark file at blazing speeds (faster than my desktop) such that the relative max load time was unrealistically fast and the test fails.

> Perhaps the wall clock speed test should be skipped on CI?

I could remove the test condition altogether and just have the test report the time. We would then have to manually monitor this time to make sure the load times don't creep or jump up again.
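If the pass/fail condition were removed, the test could simply report the measured time for manual monitoring in CI logs. A minimal sketch, with the read call and reporting style as placeholders rather than the actual test code:

```cpp
#include <chrono>
#include <iostream>

int main() {
    const auto start = std::chrono::steady_clock::now();
    // ... read walking2.c3d with the C3D adapter here (omitted in this sketch) ...
    const auto stop = std::chrono::steady_clock::now();

    // Report only: no threshold check, so CI never fails on timing noise,
    // but the number remains visible in the log for manual monitoring.
    std::cout << "walking2.c3d load time: "
              << std::chrono::duration<double, std::milli>(stop - start).count()
              << " ms" << std::endl;
    return 0;
}
```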

@tkuchida (Member)

> Yes it's possible, but the test files are pretty small. I am hoping that because the benchmark test.sto file is tiny it easily fits within one allocation block. Of course, file open time could be the dominant time in this case so that the limit reflects the operation of opening the file 1000 times and has nothing to do with time to read data. This seems to be the case since I can read the 1000x sized file typically within 20x the time to load the benchmark file.

My point was that I wouldn't expect a file to take 1000x longer to open if it's 1000x larger, not that files will necessarily be fragmented.

> Slow machines would be fine. The paradox I was running into was that CI would occasionally read the benchmark file at blazing speeds (faster than my desktop) such that the relative max load time was unrealistically fast and the test fails.

Yes, I recognize that the absolute speed of the machine is irrelevant: the issue is that files sometimes take more/less time to read than expected, based on linearly extrapolating the read time of a small file.

> I could remove the test condition altogether and just have the test report the time. We would then have to manually monitor this time to make sure the load times don't creep or jump up again.

That seems reasonable. It's also possible to skip tests in the .yml files.

@jimmyDunne (Member)

@aseth1 is this still necessary? With the performance improvements in handling tables, this seems unnecessary. Can I close and delete?
