feat(http) Request Continuation support #41
Conversation
Oh, I implemented something like this for the WLED fork! The original goal was to implement a deferred request queue so low-memory devices wouldn't OOM trying to handle too many parallel operations. I started with a simple queue in the onClient callback, and a queue-servicing function to actually handle replies. Later, when I ran into lock contention on a shared resource in a handler function, I added

I don't know if any of that code is directly useful, unfortunately. Our fork is too far behind, and I fear there are too many code-size compromises in this version for us to consider switching to this one (>10 kB code size increase, when we're evaluating PRs by the byte to fit in existing partition maps!) -- but if there's anything I can do to help, let me know; and if you find anything there useful, you're welcome to it.
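The deferred-queue idea described above can be modelled in plain C++. This is a hypothetical, simplified sketch (the `Request` struct, `kMaxPending` cap, and function names are illustrative, not the WLED fork's actual code): the accept callback only enqueues work, and a service function drains the queue one request at a time from the main loop, bounding peak memory use.

```cpp
#include <cstdio>
#include <queue>

// Illustrative stand-in for an incoming client request.
struct Request { int id; };

static std::queue<Request> pending;
constexpr size_t kMaxPending = 8;  // assumed cap for low-memory devices

// Called when a client connects: defer instead of handling immediately.
// Returns false to shed load once the queue is full, rather than OOM.
bool onClient(const Request &req) {
  if (pending.size() >= kMaxPending) return false;
  pending.push(req);
  return true;
}

// Called from the main loop: serves at most one deferred request per
// pass, so memory use stays proportional to the queue cap, not to the
// number of parallel connections. Returns the served id, or -1 if idle.
int serviceQueue() {
  if (pending.empty()) return -1;
  Request req = pending.front();
  pending.pop();
  std::printf("replying to request %d\n", req.id);
  return req.id;
}
```

Requests are thus served in arrival order, one per loop iteration, which is the behavior the comment describes for avoiding parallel-operation OOM.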
@willmmiles: thanks! I was not planning to go that far for this implementation (I don't think I need to), but I will test the limits. With the throttling that @vortigont added in AsyncTCP, I suppose it will be fine... But I also suspect that the max limit will be 16 (config from lwip in esp-idf) - we'll see ;-)
@me-no-dev @vortigont: FYI, request continuation is implemented and ready for review! Added an example: https://github.com/ESP32Async/ESPAsyncWebServer/blob/feat/continuation/examples/RequestContinuation/RequestContinuation.ino Ran the perf test and some others too.
@willmmiles: FYI, I did some testing with this implementation:

- … works perfectly, which is expected (16 concurrent lwip slots)
- … also works perfectly: we can see the results arriving in batches of 16, but slower
- … same as before: slower, but all requests are served in batches of 16.

So I think this is more than enough.
**Request Continuation** is the ability to pause the processing of a request (the actual sending over the network) so that another task can commit the response to the network later. This is a commonly supported use case among web servers. A usage example is described in the following discussion: #34 Currently, the only supported way is to use chunked encoding and return `RESPONSE_TRY_AGAIN` from the callback until the processing is done somewhere else, then send the response in the chunk buffer once processing is completed.
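For context, a minimal sketch of the existing chunked-encoding workaround mentioned above, using the stock ESPAsyncWebServer API (`beginChunkedResponse` and `RESPONSE_TRY_AGAIN`). The `/slow` route, the `resultReady` flag, and the `result` string are illustrative assumptions; how the background work gets done is omitted.

```cpp
#include <ESPAsyncWebServer.h>

AsyncWebServer server(80);

// Illustrative shared state: set by some other task when the
// long-running work has produced the response body.
volatile bool resultReady = false;
String result;

void setup() {
  server.on("/slow", HTTP_GET, [](AsyncWebServerRequest *request) {
    AsyncWebServerResponse *response = request->beginChunkedResponse(
        "text/plain",
        [](uint8_t *buffer, size_t maxLen, size_t index) -> size_t {
          if (!resultReady)
            return RESPONSE_TRY_AGAIN;  // not done yet: poll this callback again later
          if (index >= result.length())
            return 0;                   // returning 0 ends the chunked response
          // Copy the next slice of the finished result into the chunk buffer.
          size_t len = min((size_t)maxLen, (size_t)(result.length() - index));
          memcpy(buffer, result.c_str() + index, len);
          return len;
        });
    request->send(response);
  });
  server.begin();
}

void loop() {}
```

The drawback this PR addresses is visible here: the response must be chunk-encoded and the server keeps re-polling the filler callback, rather than simply parking the request until another task commits a normal response.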