Add fiber-based batch loading API #3264
Conversation
I added a really simple benchmark comparing no batching / graphql-batch / graphql-dataloader. (Any suggestions for improving it?) It looks like GraphQL-Dataloader has about half the runtime overhead of GraphQL-Batch.
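For context, a comparison along these lines could look roughly like the sketch below. The three schema constants and the query are hypothetical stand-ins for illustration, not the actual benchmark script in this PR, and it assumes the `benchmark-ips` gem:

```ruby
require "benchmark/ips"

# Hypothetical stand-ins: three schemas that resolve the same data with
# no batching, with GraphQL-Batch loaders, and with the new Dataloader.
QUERY = "{ posts { author { name } comments { author { name } } } }"

Benchmark.ips do |x|
  x.report("no batching")        { NoBatchingSchema.execute(QUERY) }
  x.report("graphql-batch")      { GraphQLBatchSchema.execute(QUERY) }
  x.report("graphql-dataloader") { DataloaderSchema.execute(QUERY) }
  x.compare!
end
```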
As for memory, the biggest impacts are Fiber and Hash allocations -- it looks to me like those two classes make up most of the overhead.
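A rough way to get that kind of breakdown, assuming the `memory_profiler` gem and the same hypothetical `DataloaderSchema` as in the sketch above:

```ruby
require "memory_profiler"

# `DataloaderSchema` is the same hypothetical stand-in as above.
report = MemoryProfiler.report do
  DataloaderSchema.execute("{ posts { author { name } } }")
end

# Prints allocated/retained memory grouped by gem, file, and class --
# this is where Fiber and Hash show up as the biggest contributors.
report.pretty_print
```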
cc @swalkinshaw who expressed interest in seeing a benchmark
I was able to reduce the overhead a bit; now Dataloader's memory footprint is smaller than GraphQL-Batch's for that benchmark.
This could be pretty awesome. @bessey first suggested this on Twitter, and combined with a trampolining-like refactor, it might just work!
TL;DR: Use `Fiber.yield` to halt GraphQL execution in place; allow GraphQL fields to `Fiber.yield`, and then resume them once every branch has reached a halt.

The coolest thing is, if we can make `Interpreter` Fiber-aware, then we lay the groundwork for Ruby 3's Fiber scheduler API, and we'd get parallel IO "for free" (we'd have to implement a scheduler, and somehow implement that baton-passing).

Goals:

- Keep supporting existing lazy resolution (`lazy_resolve`), etc.

If this works, I'll drop #2483
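To make the control flow concrete, here's a minimal, self-contained sketch of the pattern (not the actual `Interpreter` code -- `pending_keys`, `batch_load`, and the loop structure are invented for illustration): each branch of work runs in a Fiber, a field that needs batched data calls `Fiber.yield`, and once every branch has halted, the pending keys are loaded in one batch and the paused fibers are resumed.

```ruby
# Minimal illustration of fiber-based batching. Names like `pending_keys`
# and `batch_load` are invented for this sketch -- they are not
# graphql-ruby internals.
pending_keys = []
results = {}
batch_load = ->(keys) { keys.to_h { |k| [k, "record-#{k}"] } } # stand-in for one batched DB call

# Each "field" runs in its own Fiber and yields when it needs batched data.
waiting_fibers = [1, 2, 3].map do |id|
  Fiber.new do
    pending_keys << id
    Fiber.yield                   # halt this branch in place
    puts "field #{id} resolved #{results[id]}"
  end
end

waiting_fibers.each(&:resume)                 # run every branch until it halts
results.merge!(batch_load.call(pending_keys)) # one batched load for all pending keys
waiting_fibers.each(&:resume)                 # resume each halted branch to finish
```

In a real execution loop, that resume/load cycle would repeat until no fiber yields again, since resumed fields can enqueue further batched loads.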
TODO:
- Does it need a fiber pool? I don't think so -- I think Ruby is pretty thrifty with Fibers. If we need this later, we can replace `waiting_fibers = []` with a more sophisticated blocking queue.
- Handle the case where `Fiber.yield` isn't used: `NullDataloader`, which runs eagerly -- it's the default. (See the sketch after this list.)
- Move `Thread.current[...]` assignments into `context`.
- Try out Ruby 3's scheduler? I'm going to put this off.
- `dataloader.yield` for manual parallelism
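For the eager default, the rough idea is something like the sketch below -- the class name and the `append_job`/`run` method names are assumptions for illustration, not confirmed API from this PR. With no `Fiber.yield` in play, each job just runs immediately and there's nothing left to flush:

```ruby
# Hypothetical sketch of an eager fallback with the same shape of interface
# as the fiber-based dataloader. Method names are assumed, not confirmed.
class EagerDataloader
  def initialize
    @jobs = []
  end

  # No fibers, no yielding: run the work right away.
  def append_job(&block)
    block.call
  end

  # Nothing was deferred, so there's nothing left to run.
  def run
  end
end
```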