Library Ergonomics #400
Replies: 4 comments 29 replies
-
I made a PR to quickly try out the first idea here: #401. As a consumer of this library who puts a lot of … This is a pretty big change though, so I wouldn't want to do it unilaterally. If I get some time I'd like to do a proper benchmark of this, but some initial tests show no significant performance degradation:
-
I was looking at the dependents of the Rust Z3 bindings (https://crates.io/crates/z3/reverse_dependencies) and, to be fair, not many of them seem to be up to date or well maintained (so not very representative). On one hand, crates like LibAFL might have more complex Z3 usage than ordinary users do (though looking at their code, they do everything in one function, so there are no lifetime parameters involved at all). Looking at biodivine-lib-param-bn, they don't seem to be doing anything too fancy, but they are representative of what you are talking about, where lifetimes just infect a few data structures and functions without much benefit. They do use an iterator pattern for going through different satisfying models (it seems like negating out different combinations of bools and getting new models), which I think will be covered by models reference-counting the context.
-
I'm not sure if Step 2 is more focused on the `thread_local` part or on bifurcating the API. You could do the former without the latter, like you mention.

For `thread_local` versus `Rc`: what does that change for the user? The API has one less parameter with a thread local, and some things that are currently methods on `Context` would need to be stand-alone functions (as in, no …).

For bifurcating the API, I think `ContextBV` could be cleaner, since you would then be hiding away the context methods for auto-complete and such, which seems nice. Though, don't most of the non-C/C++ official language bindings hide away the Z3 context without issue? Maybe it's not worth maintaining two API versions (or a non-trivial macro system).
-
I don't have much of an opinion on multi-threading. For a few reasons, if I want parallelism, I turn my constraints into an smtlib file and pass it off to z3 in its own process, so clearly making some overhead sacrifices. (Note you can also get parallelism within Z3 by setting a parameter.) Either way, a clear example of doing multi-threading with Z3, maybe using rayon or async (I'm not sure which would make more sense), would be cool to see.
-
Over the years there have been a few distinct but related ergonomics issues raised about this library:

- The `'ctx` lifetime can be unwieldy to work with (just my own observation) and tends to infect anything that has to use z3
- `Context` …? (#261)

All three of these prevent users from doing things that "feel like they should work" and lead to a lot of juggling of context references. I propose the following incremental plan to address these:
### Step 1: Eliminate `'ctx` with a reference-counted context handle

`Context` can be refactored as follows (pseudocode follows): …

All `Ast` and `Solver` instances can now own a `ctx`, fully removing the `'ctx` lifetime. This would be a breaking change, obviously.
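One possible shape for this refactor, as a hedged sketch: `ContextInternal` is the only name the post itself mentions; `Bool`, `from_bool`, and the struct fields here are illustrative stand-ins for the crate's actual types, with plain values standing in for the raw Z3 pointers.

```rust
use std::rc::Rc;

// Stand-in for the struct that would own the raw `Z3_context` pointer.
struct ContextInternal {
    #[allow(dead_code)]
    id: usize, // placeholder for the real FFI handle
}

// `Context` becomes a cheap, clonable handle around the shared internals.
#[derive(Clone)]
pub struct Context {
    inner: Rc<ContextInternal>,
}

impl Context {
    pub fn new() -> Self {
        Context { inner: Rc::new(ContextInternal { id: 0 }) }
    }
}

// An AST node now *owns* a `Context` handle instead of borrowing one,
// so it needs no `'ctx` lifetime parameter.
pub struct Bool {
    ctx: Context,
    pub val: bool, // stand-in for the underlying Z3 AST pointer
}

impl Bool {
    pub fn from_bool(ctx: &Context, val: bool) -> Self {
        Bool { ctx: ctx.clone(), val }
    }
}

fn main() {
    let ctx = Context::new();
    let b = Bool::from_bool(&ctx, true);
    drop(ctx); // `b` keeps the context alive through its own Rc handle
    assert!(b.val);
    assert_eq!(Rc::strong_count(&b.ctx.inner), 1);
    println!("ok");
}
```

The point of the sketch is the ownership direction: the value keeps the context alive, not the other way around, which is what lets the `'ctx` parameter disappear from every AST struct.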
I think `ContextHandle` could be left unchanged with this implementation, as it could still use a `'ctx` lifetime referring to the original `Context` instance that spawned it. That is conservative (since the lifetime of each individual `Context` handle is <= that of the `ContextInternal`) and sound.

The main benefit here would be removing the `'ctx` lifetime from all the `Ast` structs. Reads of an `Rc` have ~0 overhead over `&` references. `Clone` and `Drop` have some small overhead due to the refcount, but it should be very minimal, and I think it's worth the tradeoff.

### Step 2: Thread-local default `Context`s

We should do what @waywardmonkeys recommended in #261. I personally would probably lean towards having the "default context" methods get the shorter names. So `BV::from_u64` would use the default thread-local `Context`, and we can introduce `BV::from_u64_with_ctx` that allows providing a `Context`. These could be automatically defined with a macro. Alternatively, continuing our `BV` example, maybe we introduce (also with a macro) a `ContextBV` type that takes a context on all `-> Self` methods and is `Into<BV>`, to avoid cluttering the `BV` type. Or maybe we make an opinionated stance and say "no actual user should be using more than one context per thread".

Not sure what approach makes the most sense here, but I think the ergonomic benefit from this change would be huge. This would also allow for trait implementations like `From<bool> for Bool` and `From<u64> for BV`. With this change, combined with Step 1, this library becomes almost identical to the Python bindings in overall ergonomics.
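A minimal sketch of the thread-local default, using the `BV::from_u64` / `BV::from_u64_with_ctx` split named above; the `Context` body and the `bits` field are placeholders, not the crate's real internals.

```rust
use std::rc::Rc;

thread_local! {
    // One default context per thread, created lazily on first use.
    static DEFAULT_CTX: Context = Context::new();
}

#[derive(Clone)]
pub struct Context(Rc<()>); // placeholder for the real context

impl Context {
    pub fn new() -> Self {
        Context(Rc::new(()))
    }
}

pub struct BV {
    #[allow(dead_code)]
    ctx: Context,
    pub bits: u64, // stand-in for the underlying Z3 AST
}

impl BV {
    // Short name: uses the thread-local default context.
    pub fn from_u64(v: u64) -> Self {
        DEFAULT_CTX.with(|ctx| BV::from_u64_with_ctx(ctx, v))
    }

    // Longer name: caller supplies an explicit context.
    pub fn from_u64_with_ctx(ctx: &Context, v: u64) -> Self {
        BV { ctx: ctx.clone(), bits: v }
    }
}

// A default context is what makes blanket conversions like this possible:
// `From` has no spare parameter to pass a context through.
impl From<u64> for BV {
    fn from(v: u64) -> Self {
        BV::from_u64(v)
    }
}

fn main() {
    let a = BV::from_u64(42);
    let b: BV = 7u64.into();
    assert_eq!(a.bits, 42);
    assert_eq!(b.bits, 7);
    println!("ok");
}
```

Note how the `From<u64> for BV` impl only becomes writable once a context can be found implicitly, which is the trait-implementation benefit mentioned above.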
### Step 3: Opinionated multi-threading

We make the addition I suggested in #100 (comment): `Context` stays `!Send` and `!Sync`. `Context` can still then use an `Rc`. `Context` gets (through traits/macros) a `-Sendable` variant which is `Send` but `!Sync` and `!Clone`. These variants would use a temporary `Context` which is inaccessible outside the type (and thus safely sendable to another thread), and internally use a translate call to copy the original C++ object for the temporary `Context`. The only methods on constructed values of these types are `translate()` and `translate_ctx(ctx)`, which allow for producing the original Ast/Solver struct, with the underlying Z3 object translated into the thread-default `Context` or the specified `Context` respectively.

The main drawback I see with this method is the performance of moves across threads (due to creating contexts and the 2x `translate` call). On my machine it takes a couple of milliseconds to create a `Context`. It would be possible to use this "improperly" and see a big slowdown (e.g. making 10000 `Bool`s in thread A and sending them to thread B makes 10000 temporary `Context`s). However, for the use-cases I anticipate as common, I think this would be minimal overhead (e.g. making 1000 assertions in a solver and sending it to another thread creates 1 temporary `Context`).

This has a few advantages: …
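The `-Sendable` pattern described above can be sketched roughly as follows. `BoolSendable`, `into_sendable`, and the plain `bool` payload are hypothetical illustrations: in the real design the wrapper would own a private temporary `Context` and use `Z3_translate` in both directions, and would be `Send` but `!Sync`/`!Clone`.

```rust
use std::rc::Rc;
use std::thread;

// `Context` stays !Send: the inner Rc makes it so automatically.
#[derive(Clone)]
pub struct Context(Rc<()>); // placeholder for the real context

impl Context {
    pub fn new() -> Self {
        Context(Rc::new(()))
    }
}

pub struct Bool {
    #[allow(dead_code)]
    ctx: Context,
    pub value: bool, // stand-in for the underlying Z3 AST
}

impl Bool {
    pub fn from_bool(ctx: &Context, value: bool) -> Self {
        Bool { ctx: ctx.clone(), value }
    }

    // Hypothetical: in the real design this would create a temporary,
    // inaccessible Context and translate the AST into it.
    pub fn into_sendable(self) -> BoolSendable {
        BoolSendable { value: self.value }
    }
}

// Sendable wrapper: it owns its payload outright (conceptually, via a
// private temporary context), so nothing aliases across threads.
pub struct BoolSendable {
    value: bool,
}

impl BoolSendable {
    // Hypothetical: translate back out into a caller-provided context.
    pub fn translate_ctx(self, ctx: &Context) -> Bool {
        Bool { ctx: ctx.clone(), value: self.value }
    }
}

fn main() {
    let ctx = Context::new();
    let b = Bool::from_bool(&ctx, true);
    let sendable = b.into_sendable(); // safe to move across threads
    let handle = thread::spawn(move || {
        let ctx2 = Context::new(); // the receiving thread's context
        sendable.translate_ctx(&ctx2).value
    });
    assert!(handle.join().unwrap());
    println!("ok");
}
```

The compiler enforces the boundary for free here: moving `b` (which holds an `Rc`-based `Context`) into the spawned closure would fail to compile, while `sendable` crosses without issue.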
Any thoughts about all this @waywardmonkeys @Pat-Lafon?