
Conversation

@mathieucarbou
Member

Request Continuation is the ability to pause the processing of a request (the actual sending of the response over the network) so that another task can commit the response to the network later.

This is a commonly supported use case among web servers.

A usage example is described in the following discussion:

#34

Currently, the only supported way is to use chunked encoding and return `RESPONSE_TRY_AGAIN` from the callback until the processing is done elsewhere, then send the response in the chunk buffer once the processing is completed.

@mathieucarbou mathieucarbou self-assigned this Jan 30, 2025
@mathieucarbou mathieucarbou changed the title Request Continuation support feat(http) Request Continuation support Jan 31, 2025
@willmmiles

Oh, I implemented something like this for the WLED fork!

The original goal was to implement a deferred request queue so that low-memory devices wouldn't OOM trying to handle too many parallel operations. I started with a simple queue in the onClient callback and a queue-servicing function to actually handle replies. Later, when I ran into lock contention on a shared resource in a handler function, I added AsyncWebRequest::deferResponse() so a request could be left in the queue and retried later. The queue gets pumped on the completion of any response, in poll() callbacks, and could be explicitly triggered from outside (i.e. when the lock was released). I didn't bother tracking which unblock events mapped to which requests, since the queue was FIFO and it wasn't necessary for our application -- any still-blocked requests would re-block, so the queue processor would move on to the next one.

I don't know if any of that code is directly useful, unfortunately. Our fork is too far behind, and I fear there are too many code size compromises in this version for us to consider switching to it (>10 KB code size increase when we're evaluating PRs by the byte to fit existing partition maps!) -- but if there's anything I can do to help, let me know; and if you find anything there useful, you're welcome to it.

@mathieucarbou
Member Author

mathieucarbou commented Jan 31, 2025

@willmmiles : thanks! I was not planning to go that far for this implementation (I don't think I need to), but I will test the limits. With the throttling that @vortigont added in AsyncTCP I suppose it will be fine... But I also suspect that the max limit will be 16 (the lwip config in esp-idf) - we'll see ;-)
In any case, if you are ever willing to try out these new libs with WLED and report back any issues, that would be helpful!

@mathieucarbou mathieucarbou force-pushed the feat/continuation branch 2 times, most recently from 305b6d1 to c638175 Compare February 1, 2025 16:46
@mathieucarbou mathieucarbou marked this pull request as ready for review February 1, 2025 16:47
@mathieucarbou mathieucarbou force-pushed the feat/continuation branch 3 times, most recently from c558083 to 70aa7fc Compare February 1, 2025 16:52
@mathieucarbou
Member Author

mathieucarbou commented Feb 1, 2025

@mathieucarbou mathieucarbou force-pushed the feat/continuation branch 2 times, most recently from d75e7e6 to 17fa607 Compare February 4, 2025 09:32
@mathieucarbou
Member Author

mathieucarbou commented Feb 4, 2025

@willmmiles : fyi I did some testing with this implementation and the RequestContinuationComplete example.

for i in {1..16}; do curl --connect-timeout 120 http://192.168.4.1/ & done

works perfectly, which is expected (16 concurrent lwip slots)

for i in {1..32}; do curl --connect-timeout 120 http://192.168.4.1/ & done

also works perfectly: we can see the results arriving in batches of 16, but more slowly

for i in {1..48}; do curl --connect-timeout 180 http://192.168.4.1/ & done

same as before: slower, but all requests are served in batches of 16.

So I think this is more than enough.
An app should not have that many paused requests; otherwise this is a use case for SSE or WebSocket.

@mathieucarbou mathieucarbou merged commit c8026aa into main Feb 4, 2025
21 checks passed
@mathieucarbou mathieucarbou deleted the feat/continuation branch February 4, 2025 12:44