lightning-liquidity: Introduce `MessageQueueNotifierGuard` type (#3981)
Conversation
🎉 This PR is now ready for review!
Codecov Report

```
@@            Coverage Diff             @@
##             main    #3981      +/-   ##
==========================================
+ Coverage   88.94%   88.96%   +0.01%
==========================================
  Files         174      174
  Lines      124201   124161      -40
==========================================
- Hits       110472   110460      -12
+ Misses      11251    11226      -25
+ Partials    2478     2475       -3
```
Drafting this, as I'll add more changes here.
Force-pushed 399cd56 to fd5f90a.
Undrafting; added more cleanups and took a slightly different approach than originally for the …
Force-pushed 9efe41e to 686c921.
A few small comments, otherwise LGTM.
Force-pushed 686c921 to 3c13e5e.
Fixup 3cfffa8 looks good. Nit: sometimes the notifier is instantiated before input validation, so it can create an unnecessary allocation. We could instantiate the guard once it's certain a message will be queued (this is also happening with …). Feel free to ignore this comment since it has such a tiny impact, but it's still worth mentioning.
Actually, no, we need to make sure that the notifier is always created before the respective locks, as they'll be dropped in reverse order, which is why we acquire the guards first thing in the methods. I even considered requiring to hand in the lock, but that is probably not easily done across implementations. I do however consider introducing a common …
… the notifier allocation itself. But what you are saying makes sense, we need to create it at the top. Thanks for the clarification.
This makes me nervous, as it's super easy to get the order wrong, and difficult to review and test if there is such a case. Not sure if there is a standard for doing something like a `debug_assert` for lock ordering.
The LDK mutexes have a bunch of logic to ensure lock order inversions are caught in testing. The …
please squash :)
Force-pushed 3c13e5e to edad14b.
Squashed without further changes.
Previously, when enqueuing new messages to the `MessageQueue`, we'd directly notify the BP to handle the messages, potentially causing multiple wake-ups in short succession, and risking that we'd reenter with crucial locks still held. Here, we instead introduce a `MessageQueueNotifierGuard` type that parallels our recently-introduced `EventQueueNotifierGuard`, buffers the messages, and will only append them to the message queue and notify the BP when dropped. This will allow us to remove a lot of error-prone boilerplate in the next step.
Now that we have the `MessageQueueNotifierGuard`, we can be sure that we always drop locks before notifying. Hence, we can remove *a lot* of the error-prone boilerplate we previously needed to ensure we'd only enqueue after dropping locks.
Force-pushed edad14b to 309591a.
Actually, force-pushed yet another minor fixup, ensuring we drop the queue lock before waking the BP:

```diff
diff --git a/lightning-liquidity/src/message_queue.rs b/lightning-liquidity/src/message_queue.rs
index fc6639b53..45b3c7f48 100644
--- a/lightning-liquidity/src/message_queue.rs
+++ b/lightning-liquidity/src/message_queue.rs
@@ -55,6 +55,5 @@ impl<'a> Drop for MessageQueueNotifierGuard<'a> {
 	fn drop(&mut self) {
 		if !self.buffer.is_empty() {
-			let mut queue_lock = self.msg_queue.queue.lock().unwrap();
-			queue_lock.append(&mut self.buffer);
+			self.msg_queue.queue.lock().unwrap().append(&mut self.buffer);
 			self.msg_queue.pending_msgs_notifier.notify();
 		}
```
Thanks for the explanation, I had no idea this existed. I wanted to check how this works, so I changed the lock order on two methods from LSPS5 and left the other one unchanged.
These three methods have plenty of coverage. I ran the lightning-liquidity tests with no failures, then ran ./ci/ci-tests.sh (since I don't know exactly how these checks are run) and also got no failures. Not sure if I'm missing something. I can open an issue, since this is maybe unrelated to this PR? @TheBlueMatt
I think there is a bit of confusion here: we have checks for actual double-locks etc. in …
Had the start of this still lying around as a follow-up to #3509 / as part of the liquidity persistence branch. Think it's a nice clean-up to have both queues work the same way, if nothing else.
(cc @martinsaposnic)