
Conversation

@Frando (Contributor) commented May 30, 2025

This adds a new connection event to quinn-proto: ResumptionEnabled. The event is emitted when TLS 1.3 resumption tickets have been received; it is emitted at most once per connection, and only on the client side.

In quinn, this adds a new function, resumption_tickets_received, which returns a future that completes once the connection has received TLS resumption tickets. On the server side, the future is pending forever. On the client side, it completes immediately if tickets have already been received, completes once tickets arrive, or stays pending forever if the server never sends any.

This is useful when issuing many short-lived 0-RTT connections: the application-level data exchange can easily finish within a single round trip (e.g. a single, small request-response), while the server's resumption tickets only arrive in a later packet. If the connection is closed as soon as the application data exchange completes, the new tickets have not been received yet, so no further 0-RTT connections can be issued once the initial tickets are consumed. With this new API, a client can wait on conn.resumption_tickets_received() before closing a 0-RTT connection (ideally with a timeout, in case the server never sends tickets) and thus keep issuing 0-RTT connections without falling back to 1-RTT in between.
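
A minimal sketch of that pattern, assuming the resumption_tickets_received future added by this PR; the helper name and the one-second timeout are arbitrary choices:

use std::time::Duration;

async fn finish_and_close(conn: quinn::Connection) {
    // ... the application-level request/response has completed at this point ...

    // Wait briefly for the server's TLS 1.3 resumption tickets so that the
    // next connection attempt can use 0-RTT again; give up if none arrive.
    let _ = tokio::time::timeout(
        Duration::from_secs(1),
        conn.resumption_tickets_received(),
    )
    .await;

    conn.close(0u32.into(), b"done");
}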

The PR uses the functionality added in this rustls PR to expose the number of TLS 1.3 resumption tickets received so far.


@Frando force-pushed the feat/ticket-recv-event branch from 139c2c3 to d9102bc on June 2, 2025 at 09:44
@Frando marked this pull request as ready for review on June 2, 2025 at 09:47
@Frando force-pushed the feat/ticket-recv-event branch from 0042ae9 to 7f9c190 on June 2, 2025 at 10:11
/// One or more TLS session resumption tickets have been received
///
/// This event is only emitted on the client, and is emitted at most once per connection.
ResumptionTicketsReceived,
@djc (Member) commented:

This seems sort of specific to the TLS implementation -- our crypto traits are intended to be more general than that. Maybe this should be renamed to ResumptionEnabled?

Also, note that Event is not non_exhaustive, so this constitutes a semver-incompatible change. In general, this feels fairly invasive for what seems like a somewhat niche feature. Maybe there is some other way to structure this?

@Frando (author) replied:

I'll rename the event 👍

As for other ways to forward the info, I see these options:

  • Keep the event and wait until the next major release
  • Add a public getter on proto::Connection, e.g. resumption_received, and check it in quinn's connection driver poll (rough sketch below)
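
A rough sketch of the second option; all field and method names here are hypothetical:

// Inside quinn's connection driver poll, after driving the proto state machine:
if !self.resumption_seen && self.inner.resumption_received() {
    self.resumption_seen = true; // fire at most once per connection
    self.on_resumption.notify_waiters(); // tokio::sync::Notify waking waiting futures
}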

@Frando (author) commented:

WDYT - should I keep the new event, or add a getter that is checked on each poll? Or did you have something else in mind for how to forward the event from quinn-proto to quinn?

@djc (Member) replied:

Would like to hear what @Ralith thinks. Personally I still think an event does not make sense for this.

@Ralith (Collaborator) commented:

For Quinn to provide an async API, polling would need to happen on the critical path to trigger a Notify or similar, which strikes me as almost as invasive. An event is more plumbing, but doesn't add a branch that gets checked on every poll forever. If we're going to have this, I lean slightly towards an event, but I don't feel very strongly.

/// Returns the number of TLS1.3 session resumption tickets that were received
///
/// Returns `None` on the server side.
fn resumption_tickets_received(&self) -> Option<u32>;
@djc (Member) commented:

Same here.

@djc (Member) commented Aug 12, 2025

@Frando want to continue with this?

@Frando (author) commented Aug 12, 2025

> @Frando want to continue with this?

Sure. It's ready from my side. There were open questions and a request for further review which I can't address myself.

I'm on vacation for the next two weeks, will rebase afterwards.

@Frando force-pushed the feat/ticket-recv-event branch 5 times, most recently from b675e2f to a202b28, on August 28, 2025 at 09:02
@Frando (author) commented Aug 28, 2025

I rebased, removed the patch for rustls (the upstream change was released) and renamed the event to ResumptionEnabled as suggested by @djc.

From my side the PR is ready. @djc wanted @Ralith's opinion on whether this should be an event at all. To me an event seems fine. The only alternative I see is to check a field on quinn_proto::Connection on every call to quinn::ConnectionDriver::poll; IMO an event is cleaner than a check on every poll. However, if the latter is preferred, I can change it, of course.

@Frando force-pushed the feat/ticket-recv-event branch from a202b28 to 5620185 on August 28, 2025 at 09:34
@Ralith (Collaborator) left a review:

No strong feeling on event vs. polling, and it's an implementation detail we can easily change.

}
}

if !self.resumption_tickets_received && self.crypto.resumption_tickets_received() > Some(0)
@Ralith (Collaborator) commented:

Is the number of tickets received guaranteed to change at most once in the lifetime of a connection? If not, should we raise an event every time it changes?

/// Returns the number of TLS1.3 session resumption tickets that were received
///
/// Returns `None` on the server side.
fn resumption_tickets_received(&self) -> Option<u32>;
@Ralith (Collaborator) commented:

Providing a default impl returning None would reduce churn for this niche use case.
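
For example, assuming the signature quoted above stays as-is; existing implementations of the trait would then compile unchanged:

/// Returns the number of TLS 1.3 session resumption tickets received so far
///
/// Returns `None` on the server side, and by default for implementations
/// that do not support reporting this.
fn resumption_tickets_received(&self) -> Option<u32> {
    None
}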

let endpoint = endpoint();

let endpoint2 = endpoint.clone();
tokio::spawn(async move {
@Ralith (Collaborator) commented:

Do we need to join this at the end of the test to ensure any panics are surfaced?
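
For reference, the usual tokio idiom (a sketch; the handle name is hypothetical): keep the JoinHandle and await it at the end of the test, so a panic inside the spawned task fails the test instead of being silently dropped:

let server = tokio::spawn(async move {
    // ... server side of the test, using endpoint2 ...
});

// ... client side of the test ...

server.await.unwrap(); // a panic in the task surfaces as a JoinError here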

///
/// This should only be used on the client side. On the server side, it will
/// always resolve immediately and yield `false`.
pub fn resumption_tickets_received(&self) -> impl Future<Output = bool> + Send + 'static {
@Ralith (Collaborator) commented:

Is anyone going to ever inspect this bool? Seems redundant to checking side().

///
/// This should only be used on the client side. On the server side, it will
/// always resolve immediately and yield `false`.
pub fn resumption_tickets_received(&self) -> impl Future<Output = bool> + Send + 'static {
@Ralith (Collaborator) commented:

How is the user meant to actually get the ticket? If the expectation is that they'll still be going through rustls's ClientSessionStore, would it be simpler to bypass quinn entirely and use a ClientSessionStore that notifies you directly on insert?
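
For context, a sketch of that alternative, assuming rustls 0.23 with default features (the TLS 1.2 methods of ClientSessionStore are gated on the tls12 feature): a wrapper that delegates to an inner store and signals a tokio Notify whenever a TLS 1.3 ticket is stored.

use std::sync::Arc;

use rustls::client::{ClientSessionStore, Tls12ClientSessionValue, Tls13ClientSessionValue};
use rustls::pki_types::ServerName;
use rustls::NamedGroup;
use tokio::sync::Notify;

// Wraps any ClientSessionStore (e.g. rustls's in-memory cache) and signals
// `on_ticket` whenever the server stores a TLS 1.3 resumption ticket.
#[derive(Debug)]
struct NotifyingStore {
    inner: Arc<dyn ClientSessionStore>,
    on_ticket: Arc<Notify>,
}

impl ClientSessionStore for NotifyingStore {
    fn insert_tls13_ticket(&self, name: ServerName<'static>, value: Tls13ClientSessionValue) {
        self.inner.insert_tls13_ticket(name, value);
        self.on_ticket.notify_waiters(); // wake tasks waiting for a fresh ticket
    }
    fn take_tls13_ticket(&self, name: &ServerName<'_>) -> Option<Tls13ClientSessionValue> {
        self.inner.take_tls13_ticket(name)
    }
    // the remaining methods just forward to the inner store
    fn set_kx_hint(&self, name: ServerName<'static>, group: NamedGroup) {
        self.inner.set_kx_hint(name, group)
    }
    fn kx_hint(&self, name: &ServerName<'_>) -> Option<NamedGroup> {
        self.inner.kx_hint(name)
    }
    fn set_tls12_session(&self, name: ServerName<'static>, value: Tls12ClientSessionValue) {
        self.inner.set_tls12_session(name, value)
    }
    fn tls12_session(&self, name: &ServerName<'_>) -> Option<Tls12ClientSessionValue> {
        self.inner.tls12_session(name)
    }
    fn remove_tls12_session(&self, name: &ServerName<'_>) {
        self.inner.remove_tls12_session(name)
    }
}

The store would then be installed via the client config's resumption settings (e.g. Resumption::store(Arc::new(store))), and a task could await on_ticket.notified() instead of going through a quinn API.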
