Description
Hello, at @vyperlang we're developing a hypothesis-based fuzzer for our compiler backend (the fuzzer is available at vyperlang/vyper#4686). We use a lot of nested @st.composite strategies.
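For illustration, the strategies are roughly of the following shape (a minimal sketch with hypothetical names; the real strategies in vyperlang/vyper#4686 are much larger). The point is that one top-level draw fans out into many nested draws, so a single example consumes a lot of choices:

```python
from hypothesis import strategies as st

# Hypothetical strategy names (expr, stmt, module) standing in for the real ones.
@st.composite
def expr(draw, depth=0):
    # Leaves are literals; inner nodes recurse, so each expression
    # can trigger many nested draws.
    if depth > 3 or draw(st.booleans()):
        return draw(st.integers())
    op = draw(st.sampled_from(["+", "-", "*"]))
    return (op, draw(expr(depth + 1)), draw(expr(depth + 1)))

@st.composite
def stmt(draw):
    return ("assign", draw(st.text(min_size=1, max_size=8)), draw(expr()))

@st.composite
def module(draw):
    # A generated program easily performs hundreds of choices, which is
    # where we run into the max_choices / max_length limits.
    return draw(st.lists(stmt(), min_size=10, max_size=200))
```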
We're hitting frequent max_choices / max_length limits, which decreases throughput. We tried increasing engine.BUFFER_SIZE internally, which helps with the length limit, but the max_choices cap still rejects many examples.
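For reference, the patch we tried is roughly the following. BUFFER_SIZE is a hypothesis internal (in hypothesis.internal.conjecture.engine), so this is version-dependent and may break on upgrade:

```python
# Monkeypatch of a hypothesis internal; not a public API.
from hypothesis.internal.conjecture import engine

# Raise the serialized-length budget. This helps with the max_length
# overruns but does not touch the max_choices cap.
engine.BUFFER_SIZE = engine.BUFFER_SIZE * 16
```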
We're generating complex programs to exercise interactions between different parts of the compiler's optimizer; we'd like to keep that semantic coverage rather than reduce the complexity of the examples.
We'd also like to run the fuzzer continuously and save its progress in a database.
We are ok with limiting the capabilities of the shrinker as the expected number of bugs found is low.
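Concretely, the setup we have in mind looks something like the profile below, using the public settings/database API (names and numbers are placeholders): persist progress in a directory-based database and drop the shrink phase, since we're fine with unshrunk counterexamples.

```python
from hypothesis import Phase, settings
from hypothesis.database import DirectoryBasedExampleDatabase

# Placeholder profile: persistent example database, long runs, no shrinking.
settings.register_profile(
    "fuzz",
    database=DirectoryBasedExampleDatabase(".hypothesis/fuzz-db"),
    max_examples=100_000,
    phases=(Phase.explicit, Phase.reuse, Phase.generate, Phase.target),
    deadline=None,
)
settings.load_profile("fuzz")
```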
system info
python: Python 3.12.9
hypothesis: 6.135.16
questions
Is there a recommended pattern for large, structured fuzzers where a high number of draws is required?
Can we easily patch some of the hypothesis internals to avoid the overruns? Or can they be made configurable?
Is the runtime overhead of increasing BUFFER_SIZE or max_choices roughly linear, or does it scale worse?
Any help or pointers would be appreciated, thank you.