Replies: 3 comments
-
I have a draft: python/cpython#32340
-
If the loss of adaptiveness is a huge deal, maybe there's some sort of reference counting strategy that would work, where the instruction counts how many frames are currently executing the for loop, and so it can specialize if that gets down to zero.
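A toy Python model of that reference-counting idea (the class and method names here are invented for illustration; this is not real CPython code, which would live in the C interpreter loop):

```python
# Hypothetical sketch: each FOR_ITER site tracks how many frames are
# currently inside its loop, and only (re)specializes when that count
# has dropped back to zero, so no in-flight frame sees a type change.

class CountedForIter:
    """Models one FOR_ITER instruction shared by all frames running it."""

    def __init__(self):
        self.active_frames = 0      # frames currently inside this loop
        self.specialized_type = None

    def enter_loop(self, it):
        if self.active_frames == 0:
            # Safe to (re)specialize: no other frame is mid-loop.
            self.specialized_type = type(it)
        self.active_frames += 1

    def exit_loop(self):
        self.active_frames -= 1

    def next_value(self, it):
        if type(it) is self.specialized_type:
            return next(it, None)   # fast path for the specialized type
        return next(it, None)       # generic fallback (a deopt, in effect)
```

Unlike the adapt-once scheme, this re-gains adaptiveness at the cost of maintaining a per-instruction counter on every loop entry and exit.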
-
This seems complex compared to specializing Is
-
FOR_ITER already assumes that the iterator on the stack is safe for `(Py_TYPE(it)->tp_iternext)(it)`, but perhaps a specialization of FOR_ITER could assume the specific type of the iterator. This could be verified by the GET_ITER call, and then the specialized FOR_ITER that follows it wouldn't have to do any checks.

Since multiple frames can be running the same code at overlapping times, the only way I can think of to make sure types don't conflict is to adapt only once, and never adapt again afterward. Something like this:
The downside: it ceases to be adaptive in some situations.
The upside: very little overhead (no caches, no counters, no DEOPT_IF checks). Not much more overhead than today in almost any situation.
An aside: is there a reason why comprehensions do `comprehension_code(iter(x))` rather than doing `comprehension_code(x)` and having the GET_ITER in the comprehension's bytecode? Because if we implemented this idea, comprehensions would have to do their own GET_ITER in order to benefit from specialization.

Thoughts?