Is your feature request related to a problem? Please describe.
In the current version of test_all_metrics.py we only test positive cases for the metrics and their functionality. It would be nice to have negative test cases as well, to ensure that the metrics raise the right errors for incorrect inputs to the metrics API and other related cases.
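As a sketch of what such negative tests could look like (the `compute_accuracy` metric here is a hypothetical stand-in, not part of the actual metrics API), pytest's `raises` context manager can assert that invalid inputs produce the expected error:

```python
import pytest


def compute_accuracy(predictions, references):
    # Hypothetical metric used only to illustrate input validation.
    if len(predictions) != len(references):
        raise ValueError("predictions and references must have the same length")
    if not predictions:
        raise ValueError("inputs must be non-empty")
    return sum(p == r for p, r in zip(predictions, references)) / len(predictions)


def test_accuracy_rejects_mismatched_lengths():
    # Negative case: length mismatch should raise, not silently compute.
    with pytest.raises(ValueError):
        compute_accuracy([1, 0], [1])


def test_accuracy_rejects_empty_inputs():
    # Negative case: empty inputs should raise a clear error.
    with pytest.raises(ValueError):
        compute_accuracy([], [])
```

The same pattern would apply to each metric in test_all_metrics.py: for every documented error condition, add a test asserting the specific exception type (and ideally the message) rather than only checking the happy path.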