Existing tests primarily focus on "integration" testing, i.e. running the entire modeling system as an integrated unit and checking whether the models run correctly end-to-end and give the expected results. This is an important set of tests, but it is of limited usefulness for diagnosing problems when they occur: a failing integration test shows that a problem exists, but finding the precise cause can be tricky, especially when the problem is not a model crash but merely a modest change in output results.
Unit tests, on the other hand, test small pieces of functionality. Rather than just asking whether the final results are the same, we ask about each intermediate step. For example, for a choice model we can check each of the following (a sketch of two of these checks follows the list):
- given the input data, is each of the explanatory variables (the lines in the model spec) computed correctly?
- given the configuration, is the set of available alternatives correct?
- given the explanatory variables, is the complete utility value for each alternative calculated correctly?
- given the correct utility values, are the probabilities computed correctly?
- given the probabilities, are we generating a reasonable distribution of choices?
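As a rough illustration, a minimal pytest-style sketch of the third and fourth checks might look like the following. The spec values, coefficients, and the `mnl_probabilities` helper are all hypothetical, not the project's actual code:

```python
import numpy as np


def mnl_probabilities(utilities):
    """Convert a vector of alternative utilities to MNL choice probabilities."""
    exp_u = np.exp(utilities - utilities.max())  # shift to guard against overflow
    return exp_u / exp_u.sum()


def test_utilities_from_spec():
    # Hypothetical spec: utility = coefficients dot variables, per alternative.
    variables = np.array([[1.0, 2.0], [0.5, 1.0], [0.0, 3.0]])  # 3 alts x 2 vars
    coefs = np.array([0.8, -0.3])
    utilities = variables @ coefs
    expected = np.array([1.0 * 0.8 + 2.0 * -0.3,
                         0.5 * 0.8 + 1.0 * -0.3,
                         0.0 * 0.8 + 3.0 * -0.3])
    np.testing.assert_allclose(utilities, expected)


def test_probabilities_from_utilities():
    # Given known-correct utilities, probabilities must follow the MNL formula.
    utilities = np.array([0.2, 0.1, -0.9])
    probs = mnl_probabilities(utilities)
    np.testing.assert_allclose(probs.sum(), 1.0)
    np.testing.assert_allclose(probs, np.exp(utilities) / np.exp(utilities).sum())
```

Note that each test feeds in known-correct inputs for its step, so a failure points directly at the one computation being exercised rather than at anything upstream.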
Setting up these checks is obviously much more work, but once that work is done, we can have much greater confidence that the model will still work correctly when modified, and the testing burden for each modification is lowered. For example, to replace an MNL model with an NL model, we need only write one new test to add to the suite above, as the NL model only changes the fourth check (see the sketch below); there is no need to re-run the entire model, just that one test.
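To make that concrete, here is a hedged sketch of the single replacement test a nested logit would need; the two-level `nl_probabilities` helper, the nesting structure, and the `mu` nest scale parameter are all illustrative assumptions, and every other test in the suite stays untouched:

```python
import numpy as np


def nl_probabilities(utilities, nests, mu):
    """Two-level nested logit: nests maps each alternative index to a nest id,
    and mu is a single nest scale parameter shared across nests."""
    probs = np.zeros_like(utilities)
    # Nest-level logsums: mu * log(sum over members of exp(V / mu)).
    logsums = {n: mu * np.log(np.sum(np.exp(utilities[[i for i, g in enumerate(nests) if g == n]] / mu)))
               for n in sorted(set(nests))}
    denom = np.sum(np.exp(list(logsums.values())))
    for i, n in enumerate(nests):
        members = [j for j, g in enumerate(nests) if g == n]
        p_nest = np.exp(logsums[n]) / denom
        p_within = np.exp(utilities[i] / mu) / np.sum(np.exp(utilities[members] / mu))
        probs[i] = p_nest * p_within
    return probs


def test_nl_probabilities():
    utilities = np.array([0.2, 0.1, -0.9])
    nests = [0, 0, 1]  # hypothetical: alternatives 0 and 1 share a nest
    probs = nl_probabilities(utilities, nests, mu=0.5)
    np.testing.assert_allclose(probs.sum(), 1.0)
    # With mu = 1 the nested logit must collapse to plain MNL.
    mnl = np.exp(utilities) / np.exp(utilities).sum()
    np.testing.assert_allclose(nl_probabilities(utilities, nests, mu=1.0), mnl)
```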