feat: add event and function to wait for resumption tickets #2257
Conversation
Force-pushed from 139c2c3 to d9102bc
Force-pushed from 0042ae9 to 7f9c190
quinn-proto/src/connection/mod.rs (outdated):
```rust
/// One or more TLS session resumption tickets have been received
///
/// This event is only emitted on the client, and is emitted at most once per connection.
ResumptionTicketsReceived,
```
This seems sort of specific to the TLS implementation -- our crypto traits are intended to be more general than that. Maybe this should be renamed to `ResumptionEnabled`?

Also, note that `Event` is not `non_exhaustive`, so this constitutes a semver-incompatible change. In general, this feels fairly invasive for what seems like a somewhat niche feature. Maybe there is some other way to structure this?
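For illustration, a minimal sketch (not quinn-proto's actual `Event` definition) of why `#[non_exhaustive]` matters for semver here:

```rust
// Hypothetical enum, for illustration only. With #[non_exhaustive],
// downstream crates are forced to include a wildcard arm when matching,
// so new variants can be added later without a semver-major bump.
// Without the attribute, adding a variant breaks exhaustive matches.
#[non_exhaustive]
pub enum Event {
    Connected,
    ResumptionEnabled,
}

pub fn handle(event: Event) {
    match event {
        Event::Connected => {}
        // in a downstream crate this wildcard arm is mandatory, which
        // is what keeps future variant additions semver-compatible
        _ => {}
    }
}
```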
I'll rename the event 👍

As to other ways to forward the info, I see these options:

- Keep the event, wait until the next major release
- Add a public getter on `proto::Connection` for `resumption_received` or such, and check that in quinn's connection driver `poll` (see the sketch below)
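A rough sketch of that second option; the type, field, and method names here are hypothetical, not quinn's actual API:

```rust
// Hypothetical sketch of the getter option; names are illustrative.
pub struct Connection {
    resumption_tickets_received: u32,
}

impl Connection {
    /// Whether at least one TLS resumption ticket has been received.
    /// quinn's connection driver would check this on each `poll`.
    pub fn resumption_received(&self) -> bool {
        self.resumption_tickets_received > 0
    }
}
```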
WDYT: should I keep the new event, or add a getter that is checked on each poll? Or did you have something else in mind for how to forward the event from `quinn-proto` to `quinn`?
Would like to hear what @Ralith thinks. Personally I still think an event does not make sense for this.
For Quinn to provide an async API, polling would need to happen on the critical path to trigger a `Notify` or similar, which strikes me as almost as invasive. An event is more plumbing, but doesn't add a branch that gets checked on every poll forever. If we're going to have this, I lean slightly towards an event, but I don't feel very strongly.
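To make that trade-off concrete, a rough sketch of what the polling alternative could look like; `DriverState` and `check_resumption` are hypothetical names, not quinn's real internals:

```rust
use tokio::sync::Notify;

// Hypothetical driver-side state for the polling approach.
struct DriverState {
    notified: bool,
    resumption: Notify,
}

impl DriverState {
    // Called from the connection driver's poll loop: this branch runs
    // on every poll for the lifetime of the connection, which is the
    // cost being pointed at above.
    fn check_resumption(&mut self, tickets_received: bool) {
        if !self.notified && tickets_received {
            self.notified = true;
            // wake async waiters of the resumption future
            self.resumption.notify_waiters();
        }
    }
}
```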
```rust
/// Returns the number of TLS1.3 session resumption tickets that were received
///
/// Returns `None` on the server side.
fn resumption_tickets_received(&self) -> Option<u32>;
```
Same here.
@Frando want to continue with this?
Sure. It's ready from my side. There were open questions and a request for further review which I can't seem to address myself. I'm on vacation for the next two weeks, will rebase afterwards.
Force-pushed from b675e2f to a202b28
I rebased and removed the patch for rustls. From my side the PR is ready. @djc wanted @Ralith's opinion on the question whether this should be an event at all. To me an event seems fine for this. The only alternative would be, I think, to check for a field on `proto::Connection` in quinn's connection driver on each poll.
Force-pushed from a202b28 to 5620185
No strong feeling on event vs. polling, and it's an implementation detail we can easily change.
```rust
if !self.resumption_tickets_received && self.crypto.resumption_tickets_received() > Some(0)
```
Is the number of tickets received guaranteed to change at most once in the lifetime of a connection? If not, should we raise an event every time it changes?
```rust
/// Returns the number of TLS1.3 session resumption tickets that were received
///
/// Returns `None` on the server side.
fn resumption_tickets_received(&self) -> Option<u32>;
```
Providing a default impl returning `None` would reduce churn for this niche use case.
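A minimal sketch of that suggestion; the trait shown here is a simplified stand-in for quinn-proto's crypto session trait:

```rust
// Simplified stand-in for the crypto session trait; only the method
// under discussion is shown.
trait Session {
    /// Returns the number of TLS 1.3 session resumption tickets received.
    ///
    /// Defaulting to `None` means existing implementors of the trait
    /// need no changes for this niche feature.
    fn resumption_tickets_received(&self) -> Option<u32> {
        None
    }
}
```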
```rust
let endpoint = endpoint();

let endpoint2 = endpoint.clone();
tokio::spawn(async move {
```
Do we need to join this at the end of the test to ensure any panics are surfaced?
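Something like the following pattern would surface such panics (a sketch; the test bodies here are placeholders):

```rust
#[tokio::test]
async fn panics_in_spawned_tasks_are_surfaced() {
    // Keep the JoinHandle instead of detaching the task...
    let server = tokio::spawn(async move {
        // server side of the test would run here
    });

    // client side of the test would run here

    // ...and await it at the end: a panic inside the task surfaces as a
    // JoinError and fails the test instead of being silently dropped.
    server.await.expect("server task panicked");
}
```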
```rust
///
/// This should only be used on the client side. On the server side, it will
/// always resolve immediately and yield `false`.
pub fn resumption_tickets_received(&self) -> impl Future<Output = bool> + Send + 'static {
```
Is anyone going to ever inspect this `bool`? Seems redundant to checking `side()`.
```rust
///
/// This should only be used on the client side. On the server side, it will
/// always resolve immediately and yield `false`.
pub fn resumption_tickets_received(&self) -> impl Future<Output = bool> + Send + 'static {
```
How is the user meant to actually get the ticket? If the expectation is that they'll still be going through rustls's `ClientSessionStore`, would it be simpler to bypass quinn entirely and use a `ClientSessionStore` that notifies you directly on insert?
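A rough sketch of that alternative, assuming the shape of rustls 0.23's `ClientSessionStore` trait (method names and signatures may differ between rustls versions):

```rust
use std::sync::Arc;

use rustls::client::{
    ClientSessionMemoryCache, ClientSessionStore, Tls12ClientSessionValue, Tls13ClientSessionValue,
};
use rustls::pki_types::ServerName;
use rustls::NamedGroup;
use tokio::sync::Notify;

// Hypothetical wrapper: delegates everything to the in-memory store and
// additionally wakes waiters whenever a TLS 1.3 ticket is inserted.
#[derive(Debug)]
struct NotifyingStore {
    inner: ClientSessionMemoryCache,
    on_ticket: Arc<Notify>,
}

impl ClientSessionStore for NotifyingStore {
    fn set_kx_hint(&self, server_name: ServerName<'static>, group: NamedGroup) {
        self.inner.set_kx_hint(server_name, group)
    }

    fn kx_hint(&self, server_name: &ServerName<'_>) -> Option<NamedGroup> {
        self.inner.kx_hint(server_name)
    }

    fn set_tls12_session(&self, server_name: ServerName<'static>, value: Tls12ClientSessionValue) {
        self.inner.set_tls12_session(server_name, value)
    }

    fn tls12_session(&self, server_name: &ServerName<'_>) -> Option<Tls12ClientSessionValue> {
        self.inner.tls12_session(server_name)
    }

    fn remove_tls12_session(&self, server_name: &ServerName<'static>) {
        self.inner.remove_tls12_session(server_name)
    }

    fn insert_tls13_ticket(&self, server_name: ServerName<'static>, value: Tls13ClientSessionValue) {
        self.inner.insert_tls13_ticket(server_name, value);
        // a fresh resumption ticket arrived; wake anyone waiting on it
        self.on_ticket.notify_waiters();
    }

    fn take_tls13_ticket(
        &self,
        server_name: &ServerName<'static>,
    ) -> Option<Tls13ClientSessionValue> {
        self.inner.take_tls13_ticket(server_name)
    }
}
```

The store would then be installed on the client config (via `Resumption::store` in recent rustls versions) and the `Notify` awaited, with no quinn API change needed.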
This adds a new connection event to quinn-proto: `ResumptionEnabled`. The event is emitted if TLS 1.3 resumption tickets were received; it is emitted at most once, and only on the client side.

In quinn, this adds a new function `resumption_tickets_received`, which returns a future that completes once the connection has received TLS resumption tickets. The future will be pending forever on the server side. On the client side, it completes immediately if tickets were already received, completes once tickets arrive, or stays pending forever if the server does not send any tickets.

This new feature is useful when doing many short-lived 0-RTT connections: it can easily happen that the application-level data exchange finishes in a single round trip (e.g. a single, small request-response), while the resumption tickets from the server only arrive in the next packet. If we close the connection as soon as the application data exchange completes, we have not yet received the new tickets and thus cannot issue further 0-RTT connections once our initial tickets are consumed. With this new API, a client-side 0-RTT connection can wait for `conn.resumption_tickets_received()` before closing the connection (a timeout should also be applied in case the server does not send any tickets). Thus, it can issue further 0-RTT connections without having to fall back to 1-RTT in between.

The PR uses the functionality added in this rustls PR to expose the number of TLS 1.3 resumption tickets that have been received so far.
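For illustration, possible usage of the new API as described above (the one-second timeout is an arbitrary choice for this sketch):

```rust
use std::time::Duration;

// Sketch: wait (bounded by a timeout) for resumption tickets before
// closing a short-lived 0-RTT connection, so the next connection can
// attempt 0-RTT again.
async fn close_after_tickets(conn: quinn::Connection) {
    let tickets = conn.resumption_tickets_received();
    if tokio::time::timeout(Duration::from_secs(1), tickets)
        .await
        .is_err()
    {
        // the server never sent tickets; close anyway and accept a
        // fallback to 1-RTT for the next connection
    }
    conn.close(0u32.into(), b"done");
}
```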