chore(profiling): [4/n] lock optimization - remove 'is_internal' concept from wrappers (#17859)
Pull request overview

This PR simplifies lock profiling internals by removing `_ProfiledLock.is_internal` and moving stdlib internal-lock suppression to allocation time via `_ALWAYS_EXCLUDED_MODULES`. In the profiling codebase, that shifts stdlib lock handling from "wrap then early-exit" to "don't wrap excluded-module locks at all".

Changes:
- Add `_ALWAYS_EXCLUDED_MODULES` handling in the lock allocator and remove `is_internal` bookkeeping from `_ProfiledLock`.
- Remove asyncio collector-specific internal-module-file configuration that is no longer needed.
- Update typing stubs and add/adjust tests around always-excluded modules and wrapper shape.
Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 4 comments.

| File | Description |
|---|---|
| `ddtrace/profiling/collector/_lock.pyx` | Reworks lock allocation to skip wrapping for always-excluded modules and removes internal-lock wrapper state. |
| `ddtrace/profiling/collector/_lock.pyi` | Syncs stubs with the wrapper/API changes. |
| `ddtrace/profiling/collector/asyncio.py` | Drops obsolete asyncio internal-file detection fields. |
| `tests/profiling/collector/test_threading.py` | Updates threading lock tests for the new native-internal-lock behavior and wrapper slots. |
| `tests/profiling/test_profiling_config.py` | Adds config-level regression tests for always-excluded stdlib modules. |
Merged 08ee721 into vlad/lockprof-fix-threading-mod-cloning-bug
…rs (#17902)

[< Prev PR](#16775)

## Description

Remove the `is_internal` per-lock flag from the `_ProfiledLock` wrapper, since we now have the concept of "excluded modules" (#16922). We can replace it with `_ALWAYS_EXCLUDED_MODULES`, a list of stdlib modules implementing sync and async locks, which will never be profiled. The behavior is effectively the same: no locks backing higher-level primitives (Condition, Semaphore) are profiled, there is no sample double-counting, and we get a perf win by removing now-unnecessary bookkeeping.

### What changed

**Before:** When a stdlib primitive (e.g. `threading.Semaphore`) created an internal lock, `_profiled_allocate_lock` would still wrap it as a `_ProfiledLock` and set `is_internal=True`. The wrapper would exit early on each call to `_acquire`, `_release`, and `_flush_sample`.

**After:** Internal stdlib locks are never wrapped at all, thanks to `_ALWAYS_EXCLUDED_MODULES`: `_profiled_allocate_lock` checks the caller's `__name__` against the excluded modules and returns the **native** lock directly on a match.

### Why this is better

| | Before (`is_internal`) | After (`_ALWAYS_EXCLUDED_MODULES`) |
|---|---|---|
| Wrapper created for internal lock? | Yes | **No** |
| Runtime overhead per acquire/release | `self.is_internal` check on every call | **None** |
| Sample suppression sites | `_acquire`, `_release`, `_flush_sample` | — |
| Internal lock detection | Filename comparison (`realpath` per lock create) | Module name set lookup (O(1), done once at patch) |

## Testing

* CI
* 2 new unit tests

## Risks

n/a

> Replaces #17859 which was accidentally auto-merged with 0 commits.

Co-authored-by: vlad.scherbich <vlad.scherbich@datadoghq.com>