feat: support codspeed #81

Conversation
Codecov Report

All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```diff
@@           Coverage Diff           @@
##             main      #81   +/-   ##
=======================================
  Coverage   95.12%   95.12%
=======================================
  Files           3        3
  Lines          82       82
=======================================
  Hits           78       78
  Misses          4        4
```
Walkthrough

A CodSpeed benchmarking workflow was added to GitHub Actions, integrating pytest-codspeed for performance measurement.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Developer
    participant GitHub
    participant Runner
    participant CodSpeed
    Developer->>GitHub: Push/PR/Manual Dispatch
    GitHub->>Runner: Trigger CodSpeed workflow
    Runner->>Runner: Checkout code
    Runner->>Runner: Setup Python 3.13
    Runner->>Runner: Install dev dependencies (incl. pytest-codspeed)
    Runner->>CodSpeed: Run CodSpeed Action (with secret token)
    Runner->>Runner: Run pytest on tests/ with CodSpeed integration
    Runner->>CodSpeed: Report benchmark results
```
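With pytest-codspeed, a benchmark is an ordinary test carrying the `benchmark` marker; CodSpeed measures its runtime when pytest is invoked with `--codspeed`. A minimal, illustrative example (not one of the project's actual tests):

```python
import pytest


@pytest.mark.benchmark
def test_sort_benchmark():
    # CodSpeed measures the runtime of the whole test body.
    data = list(range(1000, 0, -1))
    assert sorted(data)[0] == 1
```

Run locally with `pytest tests/ --codspeed`; without the flag, the marked test still runs as a plain pytest test.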
Actionable comments posted: 2
🔭 Outside diff range comments (1)

tests/test_clang_format.py (1)

53-64: Parameter values are ignored – the test never uses `args` / `expected_retval`

The current body hard-codes the CLI flags and asserts on `-1`, so the parametrisation brings no value and the test semantics are wrong.

```diff
-    ret, _ = run_clang_format(["--dry-run", str(test_file)])
-    assert ret == -1  # Dry run should not fail
+    ret, _ = run_clang_format(args + ["--dry-run", str(test_file)])
+    # Dry-run should succeed; use the parametrised expectation
+    assert ret == expected_retval
```
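For illustration, a version of the test that actually exercises its parameters could look like the sketch below. The `run_clang_format` helper here is a hypothetical stand-in for the project's real wrapper, which invokes the clang-format binary and returns a `(retcode, output)` pair:

```python
import pytest


def run_clang_format(argv):
    # Hypothetical stand-in: the project's helper runs the real
    # clang-format binary and returns (retcode, output).
    return 0, ""


@pytest.mark.parametrize(
    ("args", "expected_retval"),
    [
        ([], 0),                 # plain dry run
        (["--style=file"], 0),   # illustrative extra flag
    ],
)
def test_dry_run(args, expected_retval):
    # The parametrised args are actually passed through,
    # and the assertion uses the parametrised expectation.
    ret, _ = run_clang_format(args + ["--dry-run", "sample.c"])
    assert ret == expected_retval
```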
♻️ Duplicate comments (4)

tests/test_clang_tidy.py (1)

36-47: Same remark as above – can be removed once a module-level marker is used.

tests/test_clang_format.py (3)

28-44: Same as previous comment – can be dropped after introducing a module-level marker.

72-87: Decorator duplication – see earlier suggestion about `pytestmark`.

89-104: Decorator duplication – see earlier suggestion about `pytestmark`.
🧹 Nitpick comments (7)

pyproject.toml (1)

52-53: Pin `pytest-codspeed` to a version range for deterministic CI

Leaving the dependency unpinned means CI can suddenly break when a new major (or even minor) release introduces breaking changes. Consider constraining the version, e.g. `pytest-codspeed>=0.4,<1.0`.

```diff
-    "pytest-codspeed",
+    "pytest-codspeed>=0.4,<1.0",
```

tests/test_clang_tidy.py (1)
15-26: Prefer a module-level marker to avoid repetitive decorators

Instead of repeating `@pytest.mark.benchmark` on every test, you can add this once at the top of the module:

```python
import pytest

pytestmark = pytest.mark.benchmark
```

This keeps the decorator stack small and the test body cleaner.
tests/test_util.py (1)

13-14: Use a module-level `pytestmark` to reduce noise

Repeated decorators make the file harder to scan; a single

```python
pytestmark = pytest.mark.benchmark
```

at top level marks every test in the module.
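As a minimal sketch of the module-level marker pattern (the test names here are illustrative, not the project's actual tests):

```python
import pytest

# One assignment marks every test collected from this module,
# replacing a per-test @pytest.mark.benchmark decorator.
pytestmark = pytest.mark.benchmark


def test_concat():
    assert "a" + "b" == "ab"


def test_repeat():
    assert "ab" * 2 == "abab"
```

pytest also accepts a list (`pytestmark = [pytest.mark.benchmark, ...]`) when a module needs several marks.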
tests/test_clang_format.py (1)

7-17: Consider one global benchmark mark instead of per-test decorators

A single `pytestmark = pytest.mark.benchmark` at the top of the file achieves the same effect with less duplication.

.github/workflows/codspeed.yml (3)
17-24: Enable pip caching to speed up installs

`actions/setup-python@v5` supports `cache: pip`; enabling it cuts install time on subsequent runs.

```diff
   - uses: actions/setup-python@v5
     with:
       python-version: "3.13"
+      cache: "pip"
```
23-23: Quote the extras spec to avoid YAML flow-sequence quirks

Quoting makes it unambiguously a string.

```diff
-      - run: pip install -e .[dev]
+      - run: "pip install -e .[dev]"
```
4-8: Consider `pull_request_target` for benchmark reporting

`pull_request_target` runs on the base ref, giving the workflow access to secrets while still checking out the PR's code, which often simplifies benchmark publishing. Note that combining secret access with a checkout of untrusted PR code is a known security risk, so gate it carefully if adopted.
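Pulling the workflow suggestions together, a minimal `codspeed.yml` along these lines could apply. This is a sketch based on the review comments, not the repository's actual file; the secret name `CODSPEED_TOKEN` and the action versions are assumptions:

```yaml
name: CodSpeed

on:
  push:
    branches: [main]
  pull_request:
  workflow_dispatch:

jobs:
  benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.13"
          cache: "pip"
      - run: "pip install -e .[dev]"
      - name: Run benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          token: ${{ secrets.CODSPEED_TOKEN }}
          run: pytest tests/ --codspeed
```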
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (6)

- .github/workflows/codspeed.yml (1 hunks)
- .gitignore (1 hunks)
- pyproject.toml (1 hunks)
- tests/test_clang_format.py (5 hunks)
- tests/test_clang_tidy.py (2 hunks)
- tests/test_util.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Run benchmarks
🔇 Additional comments (1)

.gitignore (1)

21-23: Good call on ignoring CodSpeed artefacts

The new `.codspeed/` entry keeps benchmark results out of VCS – LGTM.
CodSpeed Performance Report

Congrats! CodSpeed is installed 🎉
You will start to see performance impacts in the reports once the benchmarks are run from your default branch.
Summary by CodeRabbit

Chores

- Added the `.codspeed/` directory to `.gitignore` to prevent it from being tracked.

Tests