Add comprehensive test suite and coverage utilities #60
base: main
Conversation
- Introduced `coverage_utils.py` for managing test coverage, including cleaning up coverage files and running tests with coverage reporting (a sketch of such a helper appears after this list).
- Created a new test dataset in Zarr format under `tests/dummy.zarr` to facilitate testing.
- Implemented `run_tests.py` as a test runner script for executing the test suite with detailed coverage analysis and reporting.
- Added extensive unit tests for data utilities, global state management, input normalization, processing, and serialization functionalities.
- Refactored existing tests to improve coverage and reliability, including the removal of flaky integration tests.
- Enhanced coverage reporting to include HTML, XML, and JSON formats, with a minimum coverage threshold.
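For a sense of what such a helper can look like, here is a minimal sketch assuming pytest with the pytest-cov plugin; the function names, the `cellmap_flow` module path, and the default threshold are illustrative, not the actual contents of `coverage_utils.py`:

```python
"""Minimal sketch of a coverage helper; names are illustrative."""
import subprocess
import sys
from pathlib import Path


def clean_coverage_files(root: Path = Path(".")) -> None:
    """Remove stale coverage artifacts left over from previous runs."""
    for pattern in (".coverage", ".coverage.*", "coverage.xml", "coverage.json"):
        for path in root.glob(pattern):
            path.unlink(missing_ok=True)


def run_tests_with_coverage(min_coverage: int = 80) -> int:
    """Run pytest under coverage, emit all three report formats,
    and fail if coverage drops below the threshold."""
    cmd = [
        sys.executable, "-m", "pytest",
        "--cov=cellmap_flow",  # assumed package name
        "--cov-report=html",
        "--cov-report=xml",
        "--cov-report=json",
        f"--cov-fail-under={min_coverage}",
    ]
    return subprocess.run(cmd).returncode
```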
Welcome to Codecov 🎉 Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests. Thanks for integrating Codecov - We've got you covered ☂️
Pull Request Overview
This PR introduces a comprehensive test suite and coverage utilities to the cellmap-flow package. The implementation adds extensive unit tests across core functionality areas while removing flaky integration tests and establishing proper coverage tracking infrastructure.
Key changes:
- Added comprehensive unit test coverage for data utilities, global state management, input normalization, and processing functionality
- Implemented coverage tracking utilities and test runner scripts for automated testing workflows
- Replaced flaky integration tests with focused unit tests that properly mock external dependencies
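As a concrete illustration of that mocking pattern (the pipeline function and names below are hypothetical stand-ins, not cellmap-flow's actual API):

```python
import numpy as np
from unittest.mock import MagicMock


def run_pipeline(load_model):
    """Toy pipeline: load a model, run it on a dummy input."""
    model = load_model()
    return model.predict(np.zeros((4, 4)))


def test_pipeline_with_mocked_model():
    # Replace the expensive external dependency with a mock so the
    # test is fast and deterministic.
    mock_model = MagicMock()
    mock_model.predict.return_value = np.ones((4, 4))

    result = run_pipeline(lambda: mock_model)

    np.testing.assert_array_equal(result, np.ones((4, 4)))
    mock_model.predict.assert_called_once()
```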
Reviewed Changes
Copilot reviewed 21 out of 33 changed files in this pull request and generated 6 comments.
File | Description |
---|---|
tests/test_script.py | Replaced flaky integration test with explanation comment |
tests/test_processing.py | Added comprehensive unit tests for inference pipeline and postprocessing |
tests/test_input_normalize.py | Added extensive tests for normalization classes and utility functions |
tests/test_globals.py | Added tests for singleton Flow class and global state management |
tests/test_data_utils.py | Added tests for model configuration classes and validation |
tests/run_tests.py | Added comprehensive test runner script with multiple execution modes |
tests/coverage_utils.py | Added utilities for coverage tracking and cleanup |
tests/conftest.py | Added pytest configuration and shared fixtures |
tests/TESTING_SUMMARY.md | Added comprehensive documentation of test implementation |
tests/README.md | Added test structure overview and usage documentation |
pyproject.toml | Updated test dependencies and added pytest configuration |
.coveragerc | Added coverage configuration with appropriate exclusions |
```python
mock_model_config.config.model = Mock()

inferencer = Inferencer(mock_model_config)
expected_context = Coordinate([8, 8, 8])
```
The expected context calculation should be derived from the mock data rather than hardcoded. Consider calculating it as `(read_shape - write_shape) / 2` to make the test more maintainable and self-documenting.
```diff
-expected_context = Coordinate([8, 8, 8])
+expected_context = Coordinate(
+    (np.array(mock_model_config.config.read_shape) - np.array(mock_model_config.config.write_shape)) // 2
+)
```
```python
np.testing.assert_array_almost_equal(
    result, expected, decimal=2
)  # Reduced precision for floating point tolerance
```
The reduced precision (decimal=2) in the assertion suggests potential floating point precision issues. Consider using a more robust comparison method or documenting why reduced precision is necessary.
```diff
-np.testing.assert_array_almost_equal(
-    result, expected, decimal=2
-)  # Reduced precision for floating point tolerance
+np.testing.assert_allclose(
+    result, expected, atol=1e-2
+)  # Use absolute tolerance for floating point comparison
```
```python
# Fix the datasplit mock to be properly indexable
mock_train_data = Mock()
mock_train_data.raw.voxel_size = [1, 1, 1]
mock_run.datasplit.train = [mock_train_data]  # Make it a list
```
The inline comment explaining why this needs to be a list suggests the mock structure might not accurately reflect the real API. Consider creating a more realistic mock structure that doesn't require explanatory comments.
```diff
-mock_run.datasplit.train = [mock_train_data]  # Make it a list
+mock_run.datasplit.train = [mock_train_data]  # Assuming the real API returns a list
```
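One way to get a mock that tracks the real API is `create_autospec` against the actual datasplit class; a minimal sketch with a hypothetical stand-in class:

```python
from unittest.mock import create_autospec


class DataSplit:
    """Hypothetical stand-in for the real datasplit class."""
    train: list = []  # the real attribute holds a list of training datasets


# spec_set=True makes the mock reject attributes the spec lacks, so a
# mock that drifts from the real API fails loudly at test time.
mock_datasplit = create_autospec(DataSplit, instance=True, spec_set=True)
mock_datasplit.train = [object()]  # fine: `train` exists on the spec
# mock_datasplit.vaildate = []     # typo -> AttributeError
```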
"~/Desktop/cellmap/cellmap-flow/example/example_norm", | ||
) | ||
) | ||
CustomCodeFolder = "/Users/zouinkhim/Desktop/cellmap/cellmap-flow/example/example_norm" |
Hardcoded absolute path contains a username which could expose sensitive information and will not work in different environments. This should use environment variables or relative paths.
CustomCodeFolder = "/Users/zouinkhim/Desktop/cellmap/cellmap-flow/example/example_norm" | |
CustomCodeFolder = os.getenv("CUSTOM_CODE_FOLDER", "./example/example_norm") |
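A follow-up note on the suggested fallback: resolving the value through `pathlib` also keeps `~`-style overrides working (a sketch; `CUSTOM_CODE_FOLDER` is the hypothetical variable from the suggestion above):

```python
import os
from pathlib import Path

# Honor CUSTOM_CODE_FOLDER if set, otherwise fall back to a path
# relative to the repository checkout; expanduser() handles "~".
CustomCodeFolder = str(
    Path(os.getenv("CUSTOM_CODE_FOLDER", "example/example_norm"))
    .expanduser()
    .resolve()
)
```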
```python
response = input("\nOpen HTML coverage report in browser? (y/N): ")
if response.lower() in ["y", "yes"]:
    webbrowser.open(f"file://{html_report.absolute()}")
```
Using `input()` in a test utility script could cause issues in automated environments. Consider adding a `--no-interactive` flag or checking if running in a non-interactive environment.
```diff
-response = input("\nOpen HTML coverage report in browser? (y/N): ")
-if response.lower() in ["y", "yes"]:
-    webbrowser.open(f"file://{html_report.absolute()}")
+if args.no_interactive or not sys.stdin.isatty():
+    print("\nSkipping prompt to open HTML coverage report (non-interactive mode).")
+else:
+    response = input("\nOpen HTML coverage report in browser? (y/N): ")
+    if response.lower() in ["y", "yes"]:
+        webbrowser.open(f"file://{html_report.absolute()}")
```