
chore: properly await async monitoring requests during shutdown #8896

Merged
nflaig merged 5 commits into unstable from fix-shutdown-sequence
Feb 24, 2026

Conversation

@wemeetagain
Member

@wemeetagain wemeetagain commented Feb 11, 2026

Summary

  • Await backfillSync.close(): Store the internal sync loop promise and await it in close(), ensuring in-flight DB writes complete before shutdown continues. Moved after network.close() so in-flight req/resp calls are terminated first. Edit (nflaig): I removed this change; the code is unused.
  • Await monitoring.close(): After aborting the fetch controller, await the pending HTTP request so it resolves cleanly
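The monitoring change can be sketched as follows. This is a minimal illustration of "abort the fetch controller, then await the pending request"; the class and member names here are hypothetical, not Lodestar's actual API (the real code lives in packages/beacon-node/src/monitoring/service.ts):

```typescript
// Sketch: track the in-flight request so close() can await it after abort.
// All names are illustrative.
class MonitoringServiceSketch {
  private controller = new AbortController();
  private pendingRequest: Promise<void> | null = null;

  send(): void {
    // Remember the in-flight request instead of dropping the promise
    this.pendingRequest = this.doRequest(this.controller.signal);
  }

  async close(): Promise<void> {
    this.controller.abort();
    // Await the aborted request so it settles cleanly rather than
    // rejecting after shutdown as an unhandled promise
    await this.pendingRequest?.catch(() => {});
  }

  private doRequest(signal: AbortSignal): Promise<void> {
    // Stand-in for the monitoring HTTP request
    return new Promise((resolve, reject) => {
      const timer = setTimeout(resolve, 10_000);
      signal.addEventListener("abort", () => {
        clearTimeout(timer);
        reject(new Error("request aborted"));
      });
    });
  }
}
```

Without the final await, the abort would still stop the request, but close() would return while its rejection was still in flight.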

🤖 Generated with Claude Code

@wemeetagain wemeetagain requested a review from a team as a code owner February 11, 2026 20:43
@gemini-code-assist
Contributor

Summary of Changes

Hello @wemeetagain, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refines the shutdown sequence of the beacon node by correctly awaiting asynchronous operations and reordering the abort signal. The changes ensure that all pending database writes and network requests are completed before components fully close, preventing potential data corruption or errors during node shutdown and improving the reliability of the shutdown process.

Highlights

  • Await backfillSync.close(): Stored the internal sync loop promise and awaited it in close(), ensuring in-flight DB writes complete before shutdown continues.
  • Await monitoring.close(): After aborting the fetch controller, awaited the pending HTTP request so it resolves cleanly.
  • Move controller.abort(): Moved controller.abort() to the top of the close sequence, ensuring components receive the abort signal before cleanup begins, preventing new work from being created during shutdown.
  • Remove DELAY_BEFORE_CLOSING_DB_MS: Removed the 500ms sleep, which was a workaround for un-awaited closes; now that async operations are properly awaited, it is no longer needed.
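The backfill pattern in the first highlight is, in essence, "store the loop's promise at start, await it in close()". A minimal sketch follows — all names are hypothetical, not the actual BackfillSync API, and note this particular change was later dropped from the PR as unused:

```typescript
// Sketch: keep a handle on a long-running loop so close() can await
// in-flight work instead of abandoning it. Names are illustrative.
class BackfillSyncSketch {
  private readonly syncPromise: Promise<void>;
  private stopped = false;
  writesCompleted = 0;

  constructor() {
    // Kick off the loop and store its promise instead of dropping it
    this.syncPromise = this.syncLoop();
  }

  private async syncLoop(): Promise<void> {
    while (!this.stopped) {
      // Stand-in for a DB write that must not be cut short mid-flight
      await new Promise<void>((resolve) => setTimeout(resolve, 5));
      this.writesCompleted++;
    }
  }

  async close(): Promise<void> {
    this.stopped = true;
    // The current iteration (and its simulated write) finishes before
    // shutdown continues
    await this.syncPromise;
  }
}
```

This is also why the 500ms sleep became removable: once closes are properly awaited, there is no window of un-awaited work that a fixed delay needs to paper over.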


Changelog
  • packages/beacon-node/src/monitoring/service.ts
    • Updated the close method to be asynchronous and await any pending HTTP requests.
    • Modified the JSDoc for the close method to reflect the new asynchronous behavior.
  • packages/beacon-node/src/node/nodejs.ts
    • Removed the sleep utility import.
    • Deleted the DELAY_BEFORE_CLOSING_DB_MS constant.
    • Reordered the shutdown sequence in BeaconNode.close() to abort the controller earlier.
    • Changed backfillSync?.close() and monitoring.close() to be awaited.
    • Removed the explicit sleep call during shutdown.
  • packages/beacon-node/src/sync/backfill/backfill.ts
    • Introduced a syncPromise property to store the promise of the internal sync loop.
    • Ensured the close method awaits this syncPromise to guarantee completion of in-flight operations.
    • Moved network event listener cleanup to a finally block to ensure it always runs.
    • Updated the JSDoc for the close method.
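The finally-block change described for backfill.ts is a general pattern: remove listeners whether the loop exits normally or throws. A hedged sketch, using Node's EventEmitter as a stand-in for the network event bus (the event and handler names are made up):

```typescript
import {EventEmitter} from "node:events";

// Sketch: guarantee event-listener cleanup with try/finally, regardless
// of how the wrapped work exits.
async function runWithCleanup(network: EventEmitter, work: () => Promise<void>): Promise<void> {
  const onPeer = (): void => {};
  network.on("peerConnected", onPeer);
  try {
    await work();
  } finally {
    // Runs on normal return, throw, or rejection, so the listener
    // never leaks past shutdown
    network.off("peerConnected", onPeer);
  }
}
```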
Activity
  • No human activity has occurred on this pull request yet.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 6e969449ca


Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request provides a solid improvement to the beacon node's shutdown sequence, making it more robust and reliable. By properly awaiting asynchronous close operations in MonitoringService and BackfillSync, and re-ordering the shutdown logic in BeaconNode, potential race conditions are eliminated. The refactoring in BackfillSync to manage its lifecycle and cleanup via a stored promise is a clean solution. Moving the controller.abort() call to the beginning of the shutdown sequence is a great change to prevent new work from being initiated during shutdown. Overall, these are excellent changes that increase the stability of the node.

@github-actions
Contributor

github-actions bot commented Feb 11, 2026

Performance Report

🚀🚀 Significant benchmark improvement detected

Benchmark suite Current: 2d22081 Previous: 95cf2ed Ratio
processSlot - 32 slots 2.2197 ms/op 7.6045 ms/op 0.29
getNextSyncCommitteeIndices 1000 validators 116.08 ms/op 372.99 ms/op 0.31
computeProposers - vc 250000 611.41 us/op 2.3991 ms/op 0.25
nodejs block root to RootHex using toHex 151.91 ns/op 784.40 ns/op 0.19
nodejs byteArrayEquals 1024 bytes 46.052 ns/op 147.57 ns/op 0.31
browser block root to RootHex using toHex 172.50 ns/op 632.68 ns/op 0.27
Full benchmark results
Benchmark suite Current: 2d22081 Previous: 95cf2ed Ratio
getPubkeys - index2pubkey - req 1000 vs - 250000 vc 1.0763 ms/op 1.3158 ms/op 0.82
getPubkeys - validatorsArr - req 1000 vs - 250000 vc 37.938 us/op 40.620 us/op 0.93
BLS verify - blst 813.33 us/op 1.0780 ms/op 0.75
BLS verifyMultipleSignatures 3 - blst 1.1565 ms/op 1.4457 ms/op 0.80
BLS verifyMultipleSignatures 8 - blst 1.6010 ms/op 2.2381 ms/op 0.72
BLS verifyMultipleSignatures 32 - blst 4.7243 ms/op 7.0451 ms/op 0.67
BLS verifyMultipleSignatures 64 - blst 8.7744 ms/op 14.476 ms/op 0.61
BLS verifyMultipleSignatures 128 - blst 16.852 ms/op 25.267 ms/op 0.67
BLS deserializing 10000 signatures 670.16 ms/op 851.22 ms/op 0.79
BLS deserializing 100000 signatures 6.7618 s/op 8.7135 s/op 0.78
BLS verifyMultipleSignatures - same message - 3 - blst 813.19 us/op 980.68 us/op 0.83
BLS verifyMultipleSignatures - same message - 8 - blst 962.20 us/op 1.0719 ms/op 0.90
BLS verifyMultipleSignatures - same message - 32 - blst 1.6687 ms/op 2.0366 ms/op 0.82
BLS verifyMultipleSignatures - same message - 64 - blst 2.5455 ms/op 2.8508 ms/op 0.89
BLS verifyMultipleSignatures - same message - 128 - blst 4.2976 ms/op 4.8453 ms/op 0.89
BLS aggregatePubkeys 32 - blst 19.012 us/op 21.845 us/op 0.87
BLS aggregatePubkeys 128 - blst 67.696 us/op 79.029 us/op 0.86
getSlashingsAndExits - default max 64.655 us/op 98.416 us/op 0.66
getSlashingsAndExits - 2k 313.11 us/op 335.43 us/op 0.93
isKnown best case - 1 super set check 190.00 ns/op 236.00 ns/op 0.81
isKnown normal case - 2 super set checks 188.00 ns/op 231.00 ns/op 0.81
isKnown worse case - 16 super set checks 188.00 ns/op 236.00 ns/op 0.80
validate api signedAggregateAndProof - struct 1.3345 ms/op 2.0519 ms/op 0.65
validate gossip signedAggregateAndProof - struct 1.3495 ms/op 1.7968 ms/op 0.75
batch validate gossip attestation - vc 640000 - chunk 32 116.72 us/op 162.82 us/op 0.72
batch validate gossip attestation - vc 640000 - chunk 64 103.93 us/op 138.82 us/op 0.75
batch validate gossip attestation - vc 640000 - chunk 128 97.512 us/op 123.09 us/op 0.79
batch validate gossip attestation - vc 640000 - chunk 256 93.794 us/op 121.94 us/op 0.77
bytes32 toHexString 347.00 ns/op 459.00 ns/op 0.76
bytes32 Buffer.toString(hex) 235.00 ns/op 347.00 ns/op 0.68
bytes32 Buffer.toString(hex) from Uint8Array 310.00 ns/op 405.00 ns/op 0.77
bytes32 Buffer.toString(hex) + 0x 237.00 ns/op 274.00 ns/op 0.86
Return object 10000 times 0.22970 ns/op 0.34050 ns/op 0.67
Throw Error 10000 times 4.0322 us/op 5.3009 us/op 0.76
toHex 135.08 ns/op 153.69 ns/op 0.88
Buffer.from 133.58 ns/op 153.72 ns/op 0.87
shared Buffer 83.206 ns/op 88.420 ns/op 0.94
fastMsgIdFn sha256 / 200 bytes 1.8200 us/op 1.9250 us/op 0.95
fastMsgIdFn h32 xxhash / 200 bytes 182.00 ns/op 249.00 ns/op 0.73
fastMsgIdFn h64 xxhash / 200 bytes 264.00 ns/op 336.00 ns/op 0.79
fastMsgIdFn sha256 / 1000 bytes 5.9880 us/op 6.9960 us/op 0.86
fastMsgIdFn h32 xxhash / 1000 bytes 283.00 ns/op 419.00 ns/op 0.68
fastMsgIdFn h64 xxhash / 1000 bytes 306.00 ns/op 406.00 ns/op 0.75
fastMsgIdFn sha256 / 10000 bytes 51.213 us/op 59.861 us/op 0.86
fastMsgIdFn h32 xxhash / 10000 bytes 1.3720 us/op 1.5750 us/op 0.87
fastMsgIdFn h64 xxhash / 10000 bytes 935.00 ns/op 1.2590 us/op 0.74
send data - 1000 256B messages 13.608 ms/op 15.827 ms/op 0.86
send data - 1000 512B messages 15.984 ms/op 20.952 ms/op 0.76
send data - 1000 1024B messages 21.436 ms/op 28.344 ms/op 0.76
send data - 1000 1200B messages 20.801 ms/op 27.500 ms/op 0.76
send data - 1000 2048B messages 21.116 ms/op 25.448 ms/op 0.83
send data - 1000 4096B messages 26.709 ms/op 27.344 ms/op 0.98
send data - 1000 16384B messages 109.39 ms/op 118.47 ms/op 0.92
send data - 1000 65536B messages 258.88 ms/op 340.49 ms/op 0.76
enrSubnets - fastDeserialize 64 bits 878.00 ns/op 998.00 ns/op 0.88
enrSubnets - ssz BitVector 64 bits 325.00 ns/op 416.00 ns/op 0.78
enrSubnets - fastDeserialize 4 bits 124.00 ns/op 168.00 ns/op 0.74
enrSubnets - ssz BitVector 4 bits 340.00 ns/op 375.00 ns/op 0.91
prioritizePeers score -10:0 att 32-0.1 sync 2-0 224.39 us/op 276.29 us/op 0.81
prioritizePeers score 0:0 att 32-0.25 sync 2-0.25 261.39 us/op 282.85 us/op 0.92
prioritizePeers score 0:0 att 32-0.5 sync 2-0.5 361.69 us/op 445.94 us/op 0.81
prioritizePeers score 0:0 att 64-0.75 sync 4-0.75 679.78 us/op 785.01 us/op 0.87
prioritizePeers score 0:0 att 64-1 sync 4-1 815.91 us/op 1.1823 ms/op 0.69
array of 16000 items push then shift 1.5740 us/op 1.8006 us/op 0.87
LinkedList of 16000 items push then shift 7.1880 ns/op 7.9120 ns/op 0.91
array of 16000 items push then pop 73.634 ns/op 83.628 ns/op 0.88
LinkedList of 16000 items push then pop 7.0290 ns/op 7.9390 ns/op 0.89
array of 24000 items push then shift 2.3257 us/op 2.5487 us/op 0.91
LinkedList of 24000 items push then shift 7.4540 ns/op 8.3800 ns/op 0.89
array of 24000 items push then pop 103.80 ns/op 115.65 ns/op 0.90
LinkedList of 24000 items push then pop 7.0850 ns/op 7.7660 ns/op 0.91
intersect bitArray bitLen 8 5.5680 ns/op 6.3340 ns/op 0.88
intersect array and set length 8 32.402 ns/op 36.244 ns/op 0.89
intersect bitArray bitLen 128 27.674 ns/op 30.027 ns/op 0.92
intersect array and set length 128 527.52 ns/op 598.77 ns/op 0.88
bitArray.getTrueBitIndexes() bitLen 128 1.0070 us/op 1.2420 us/op 0.81
bitArray.getTrueBitIndexes() bitLen 248 1.7690 us/op 2.3390 us/op 0.76
bitArray.getTrueBitIndexes() bitLen 512 3.6030 us/op 4.5190 us/op 0.80
Full columns - reconstruct all 6 blobs 301.16 us/op 238.19 us/op 1.26
Full columns - reconstruct half of the blobs out of 6 97.958 us/op 114.71 us/op 0.85
Full columns - reconstruct single blob out of 6 40.796 us/op 37.516 us/op 1.09
Half columns - reconstruct all 6 blobs 264.02 ms/op 343.12 ms/op 0.77
Half columns - reconstruct half of the blobs out of 6 131.56 ms/op 175.20 ms/op 0.75
Half columns - reconstruct single blob out of 6 47.764 ms/op 62.046 ms/op 0.77
Full columns - reconstruct all 10 blobs 323.00 us/op 363.92 us/op 0.89
Full columns - reconstruct half of the blobs out of 10 152.38 us/op 171.34 us/op 0.89
Full columns - reconstruct single blob out of 10 42.010 us/op 35.245 us/op 1.19
Half columns - reconstruct all 10 blobs 431.81 ms/op 527.04 ms/op 0.82
Half columns - reconstruct half of the blobs out of 10 222.39 ms/op 262.05 ms/op 0.85
Half columns - reconstruct single blob out of 10 48.035 ms/op 60.601 ms/op 0.79
Full columns - reconstruct all 20 blobs 576.95 us/op 653.74 us/op 0.88
Full columns - reconstruct half of the blobs out of 20 281.30 us/op 308.57 us/op 0.91
Full columns - reconstruct single blob out of 20 29.973 us/op 33.982 us/op 0.88
Half columns - reconstruct all 20 blobs 861.23 ms/op 1.0938 s/op 0.79
Half columns - reconstruct half of the blobs out of 20 433.79 ms/op 565.82 ms/op 0.77
Half columns - reconstruct single blob out of 20 48.404 ms/op 64.048 ms/op 0.76
Set add up to 64 items then delete first 1.9842 us/op 3.1465 us/op 0.63
OrderedSet add up to 64 items then delete first 2.9548 us/op 4.3937 us/op 0.67
Set add up to 64 items then delete last 2.2641 us/op 3.6526 us/op 0.62
OrderedSet add up to 64 items then delete last 3.4271 us/op 4.4455 us/op 0.77
Set add up to 64 items then delete middle 2.2477 us/op 3.1440 us/op 0.71
OrderedSet add up to 64 items then delete middle 4.8444 us/op 5.9140 us/op 0.82
Set add up to 128 items then delete first 4.6190 us/op 6.2529 us/op 0.74
OrderedSet add up to 128 items then delete first 6.6018 us/op 9.5366 us/op 0.69
Set add up to 128 items then delete last 4.5801 us/op 6.0170 us/op 0.76
OrderedSet add up to 128 items then delete last 6.7174 us/op 8.9826 us/op 0.75
Set add up to 128 items then delete middle 4.3696 us/op 6.0829 us/op 0.72
OrderedSet add up to 128 items then delete middle 12.795 us/op 17.445 us/op 0.73
Set add up to 256 items then delete first 9.7453 us/op 12.584 us/op 0.77
OrderedSet add up to 256 items then delete first 13.876 us/op 18.247 us/op 0.76
Set add up to 256 items then delete last 13.981 us/op 10.301 us/op 1.36
OrderedSet add up to 256 items then delete last 13.787 us/op 15.220 us/op 0.91
Set add up to 256 items then delete middle 9.0232 us/op 11.105 us/op 0.81
OrderedSet add up to 256 items then delete middle 39.338 us/op 51.599 us/op 0.76
pass gossip attestations to forkchoice per slot 2.4579 ms/op 3.3910 ms/op 0.72
forkChoice updateHead vc 100000 bc 64 eq 0 484.22 us/op 583.45 us/op 0.83
forkChoice updateHead vc 600000 bc 64 eq 0 2.8919 ms/op 3.4216 ms/op 0.85
forkChoice updateHead vc 1000000 bc 64 eq 0 4.8203 ms/op 5.8147 ms/op 0.83
forkChoice updateHead vc 600000 bc 320 eq 0 2.8924 ms/op 3.4548 ms/op 0.84
forkChoice updateHead vc 600000 bc 1200 eq 0 2.9495 ms/op 3.5746 ms/op 0.83
forkChoice updateHead vc 600000 bc 7200 eq 0 3.2902 ms/op 4.1680 ms/op 0.79
forkChoice updateHead vc 600000 bc 64 eq 1000 3.2947 ms/op 3.8926 ms/op 0.85
forkChoice updateHead vc 600000 bc 64 eq 10000 3.4158 ms/op 4.9023 ms/op 0.70
forkChoice updateHead vc 600000 bc 64 eq 300000 8.0756 ms/op 12.095 ms/op 0.67
computeDeltas 1400000 validators 0% inactive 14.046 ms/op 19.359 ms/op 0.73
computeDeltas 1400000 validators 10% inactive 13.157 ms/op 17.265 ms/op 0.76
computeDeltas 1400000 validators 20% inactive 12.284 ms/op 16.219 ms/op 0.76
computeDeltas 1400000 validators 50% inactive 9.6408 ms/op 12.602 ms/op 0.77
computeDeltas 2100000 validators 0% inactive 21.087 ms/op 27.464 ms/op 0.77
computeDeltas 2100000 validators 10% inactive 19.784 ms/op 26.196 ms/op 0.76
computeDeltas 2100000 validators 20% inactive 18.430 ms/op 23.670 ms/op 0.78
computeDeltas 2100000 validators 50% inactive 14.431 ms/op 18.116 ms/op 0.80
altair processAttestation - 250000 vs - 7PWei normalcase 1.8470 ms/op 2.3881 ms/op 0.77
altair processAttestation - 250000 vs - 7PWei worstcase 2.6592 ms/op 3.5233 ms/op 0.75
altair processAttestation - setStatus - 1/6 committees join 112.76 us/op 134.06 us/op 0.84
altair processAttestation - setStatus - 1/3 committees join 221.32 us/op 287.65 us/op 0.77
altair processAttestation - setStatus - 1/2 committees join 314.40 us/op 392.93 us/op 0.80
altair processAttestation - setStatus - 2/3 committees join 400.25 us/op 527.72 us/op 0.76
altair processAttestation - setStatus - 4/5 committees join 558.34 us/op 781.41 us/op 0.71
altair processAttestation - setStatus - 100% committees join 658.34 us/op 802.64 us/op 0.82
altair processBlock - 250000 vs - 7PWei normalcase 3.3606 ms/op 6.0380 ms/op 0.56
altair processBlock - 250000 vs - 7PWei normalcase hashState 15.366 ms/op 23.588 ms/op 0.65
altair processBlock - 250000 vs - 7PWei worstcase 22.198 ms/op 28.848 ms/op 0.77
altair processBlock - 250000 vs - 7PWei worstcase hashState 51.483 ms/op 69.682 ms/op 0.74
phase0 processBlock - 250000 vs - 7PWei normalcase 1.4606 ms/op 1.9808 ms/op 0.74
phase0 processBlock - 250000 vs - 7PWei worstcase 18.885 ms/op 26.152 ms/op 0.72
altair processEth1Data - 250000 vs - 7PWei normalcase 348.02 us/op 468.99 us/op 0.74
getExpectedWithdrawals 250000 eb:1,eth1:1,we:0,wn:0,smpl:16 6.0940 us/op 6.8000 us/op 0.90
getExpectedWithdrawals 250000 eb:0.95,eth1:0.1,we:0.05,wn:0,smpl:220 36.326 us/op 46.662 us/op 0.78
getExpectedWithdrawals 250000 eb:0.95,eth1:0.3,we:0.05,wn:0,smpl:43 9.9330 us/op 12.463 us/op 0.80
getExpectedWithdrawals 250000 eb:0.95,eth1:0.7,we:0.05,wn:0,smpl:19 6.5750 us/op 9.5020 us/op 0.69
getExpectedWithdrawals 250000 eb:0.1,eth1:0.1,we:0,wn:0,smpl:1021 133.74 us/op 179.71 us/op 0.74
getExpectedWithdrawals 250000 eb:0.03,eth1:0.03,we:0,wn:0,smpl:11778 1.6460 ms/op 2.3461 ms/op 0.70
getExpectedWithdrawals 250000 eb:0.01,eth1:0.01,we:0,wn:0,smpl:16384 2.1568 ms/op 3.3309 ms/op 0.65
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,smpl:16384 2.1983 ms/op 3.0881 ms/op 0.71
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,nocache,smpl:16384 4.2207 ms/op 7.2378 ms/op 0.58
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,smpl:16384 2.4936 ms/op 2.9313 ms/op 0.85
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,nocache,smpl:16384 4.6488 ms/op 6.2422 ms/op 0.74
Tree 40 250000 create 351.22 ms/op 570.91 ms/op 0.62
Tree 40 250000 get(125000) 126.87 ns/op 182.99 ns/op 0.69
Tree 40 250000 set(125000) 1.1846 us/op 1.7482 us/op 0.68
Tree 40 250000 toArray() 13.796 ms/op 19.280 ms/op 0.72
Tree 40 250000 iterate all - toArray() + loop 12.507 ms/op 20.387 ms/op 0.61
Tree 40 250000 iterate all - get(i) 43.249 ms/op 59.890 ms/op 0.72
Array 250000 create 2.3940 ms/op 3.0668 ms/op 0.78
Array 250000 clone - spread 787.87 us/op 981.30 us/op 0.80
Array 250000 get(125000) 0.33600 ns/op 0.44300 ns/op 0.76
Array 250000 set(125000) 0.34600 ns/op 0.44100 ns/op 0.78
Array 250000 iterate all - loop 59.744 us/op 81.854 us/op 0.73
phase0 afterProcessEpoch - 250000 vs - 7PWei 40.474 ms/op 53.823 ms/op 0.75
Array.fill - length 1000000 2.7913 ms/op 4.3182 ms/op 0.65
Array push - length 1000000 9.3466 ms/op 14.291 ms/op 0.65
Array.get 0.21266 ns/op 0.30297 ns/op 0.70
Uint8Array.get 0.21580 ns/op 0.33902 ns/op 0.64
phase0 beforeProcessEpoch - 250000 vs - 7PWei 12.837 ms/op 22.152 ms/op 0.58
altair processEpoch - mainnet_e81889 271.55 ms/op 392.56 ms/op 0.69
mainnet_e81889 - altair beforeProcessEpoch 17.898 ms/op 22.875 ms/op 0.78
mainnet_e81889 - altair processJustificationAndFinalization 6.7210 us/op 9.7540 us/op 0.69
mainnet_e81889 - altair processInactivityUpdates 3.8039 ms/op 8.7585 ms/op 0.43
mainnet_e81889 - altair processRewardsAndPenalties 18.413 ms/op 36.059 ms/op 0.51
mainnet_e81889 - altair processRegistryUpdates 611.00 ns/op 932.00 ns/op 0.66
mainnet_e81889 - altair processSlashings 158.00 ns/op 241.00 ns/op 0.66
mainnet_e81889 - altair processEth1DataReset 155.00 ns/op 240.00 ns/op 0.65
mainnet_e81889 - altair processEffectiveBalanceUpdates 1.5854 ms/op 5.9625 ms/op 0.27
mainnet_e81889 - altair processSlashingsReset 812.00 ns/op 1.1890 us/op 0.68
mainnet_e81889 - altair processRandaoMixesReset 1.1410 us/op 2.1680 us/op 0.53
mainnet_e81889 - altair processHistoricalRootsUpdate 155.00 ns/op 218.00 ns/op 0.71
mainnet_e81889 - altair processParticipationFlagUpdates 497.00 ns/op 653.00 ns/op 0.76
mainnet_e81889 - altair processSyncCommitteeUpdates 123.00 ns/op 189.00 ns/op 0.65
mainnet_e81889 - altair afterProcessEpoch 44.037 ms/op 57.786 ms/op 0.76
capella processEpoch - mainnet_e217614 793.09 ms/op 1.0457 s/op 0.76
mainnet_e217614 - capella beforeProcessEpoch 96.998 ms/op 92.401 ms/op 1.05
mainnet_e217614 - capella processJustificationAndFinalization 6.0550 us/op 7.6270 us/op 0.79
mainnet_e217614 - capella processInactivityUpdates 15.272 ms/op 25.586 ms/op 0.60
mainnet_e217614 - capella processRewardsAndPenalties 100.73 ms/op 137.90 ms/op 0.73
mainnet_e217614 - capella processRegistryUpdates 5.8520 us/op 7.8980 us/op 0.74
mainnet_e217614 - capella processSlashings 156.00 ns/op 264.00 ns/op 0.59
mainnet_e217614 - capella processEth1DataReset 154.00 ns/op 236.00 ns/op 0.65
mainnet_e217614 - capella processEffectiveBalanceUpdates 20.015 ms/op 18.515 ms/op 1.08
mainnet_e217614 - capella processSlashingsReset 762.00 ns/op 1.1740 us/op 0.65
mainnet_e217614 - capella processRandaoMixesReset 1.0540 us/op 1.4850 us/op 0.71
mainnet_e217614 - capella processHistoricalRootsUpdate 157.00 ns/op 234.00 ns/op 0.67
mainnet_e217614 - capella processParticipationFlagUpdates 487.00 ns/op 794.00 ns/op 0.61
mainnet_e217614 - capella afterProcessEpoch 112.70 ms/op 190.28 ms/op 0.59
phase0 processEpoch - mainnet_e58758 232.05 ms/op 386.17 ms/op 0.60
mainnet_e58758 - phase0 beforeProcessEpoch 47.062 ms/op 87.495 ms/op 0.54
mainnet_e58758 - phase0 processJustificationAndFinalization 5.6450 us/op 10.590 us/op 0.53
mainnet_e58758 - phase0 processRewardsAndPenalties 18.488 ms/op 28.697 ms/op 0.64
mainnet_e58758 - phase0 processRegistryUpdates 2.7900 us/op 4.3060 us/op 0.65
mainnet_e58758 - phase0 processSlashings 164.00 ns/op 268.00 ns/op 0.61
mainnet_e58758 - phase0 processEth1DataReset 276.00 ns/op 385.00 ns/op 0.72
mainnet_e58758 - phase0 processEffectiveBalanceUpdates 1.2785 ms/op 2.0868 ms/op 0.61
mainnet_e58758 - phase0 processSlashingsReset 882.00 ns/op 1.5890 us/op 0.56
mainnet_e58758 - phase0 processRandaoMixesReset 1.0670 us/op 3.0350 us/op 0.35
mainnet_e58758 - phase0 processHistoricalRootsUpdate 163.00 ns/op 296.00 ns/op 0.55
mainnet_e58758 - phase0 processParticipationRecordUpdates 803.00 ns/op 1.2590 us/op 0.64
mainnet_e58758 - phase0 afterProcessEpoch 35.003 ms/op 47.764 ms/op 0.73
phase0 processEffectiveBalanceUpdates - 250000 normalcase 1.7648 ms/op 2.5447 ms/op 0.69
phase0 processEffectiveBalanceUpdates - 250000 worstcase 0.5 2.0246 ms/op 3.3555 ms/op 0.60
altair processInactivityUpdates - 250000 normalcase 13.013 ms/op 26.929 ms/op 0.48
altair processInactivityUpdates - 250000 worstcase 12.430 ms/op 33.586 ms/op 0.37
phase0 processRegistryUpdates - 250000 normalcase 4.4620 us/op 8.2650 us/op 0.54
phase0 processRegistryUpdates - 250000 badcase_full_deposits 235.77 us/op 440.43 us/op 0.54
phase0 processRegistryUpdates - 250000 worstcase 0.5 72.060 ms/op 135.02 ms/op 0.53
altair processRewardsAndPenalties - 250000 normalcase 19.694 ms/op 25.643 ms/op 0.77
altair processRewardsAndPenalties - 250000 worstcase 18.412 ms/op 26.075 ms/op 0.71
phase0 getAttestationDeltas - 250000 normalcase 6.9400 ms/op 12.024 ms/op 0.58
phase0 getAttestationDeltas - 250000 worstcase 6.9262 ms/op 13.379 ms/op 0.52
phase0 processSlashings - 250000 worstcase 104.72 us/op 116.26 us/op 0.90
altair processSyncCommitteeUpdates - 250000 11.946 ms/op 16.623 ms/op 0.72
BeaconState.hashTreeRoot - No change 194.00 ns/op 436.00 ns/op 0.44
BeaconState.hashTreeRoot - 1 full validator 78.350 us/op 128.26 us/op 0.61
BeaconState.hashTreeRoot - 32 full validator 991.88 us/op 1.4742 ms/op 0.67
BeaconState.hashTreeRoot - 512 full validator 7.5154 ms/op 14.358 ms/op 0.52
BeaconState.hashTreeRoot - 1 validator.effectiveBalance 103.53 us/op 187.49 us/op 0.55
BeaconState.hashTreeRoot - 32 validator.effectiveBalance 1.8611 ms/op 3.2390 ms/op 0.57
BeaconState.hashTreeRoot - 512 validator.effectiveBalance 16.248 ms/op 34.901 ms/op 0.47
BeaconState.hashTreeRoot - 1 balances 78.084 us/op 178.79 us/op 0.44
BeaconState.hashTreeRoot - 32 balances 785.70 us/op 1.4811 ms/op 0.53
BeaconState.hashTreeRoot - 512 balances 6.3404 ms/op 11.179 ms/op 0.57
BeaconState.hashTreeRoot - 250000 balances 141.34 ms/op 260.30 ms/op 0.54
aggregationBits - 2048 els - zipIndexesInBitList 21.129 us/op 39.825 us/op 0.53
regular array get 100000 times 24.370 us/op 50.246 us/op 0.49
wrappedArray get 100000 times 24.204 us/op 50.465 us/op 0.48
arrayWithProxy get 100000 times 12.959 ms/op 24.547 ms/op 0.53
ssz.Root.equals 23.560 ns/op 41.031 ns/op 0.57
byteArrayEquals 23.199 ns/op 44.813 ns/op 0.52
Buffer.compare 9.9190 ns/op 18.344 ns/op 0.54
processSlot - 1 slots 10.287 us/op 19.817 us/op 0.52
processSlot - 32 slots 2.2197 ms/op 7.6045 ms/op 0.29
getEffectiveBalanceIncrementsZeroInactive - 250000 vs - 7PWei 5.2962 ms/op 8.9931 ms/op 0.59
getCommitteeAssignments - req 1 vs - 250000 vc 1.8624 ms/op 4.2999 ms/op 0.43
getCommitteeAssignments - req 100 vs - 250000 vc 3.7183 ms/op 8.3694 ms/op 0.44
getCommitteeAssignments - req 1000 vs - 250000 vc 3.9418 ms/op 8.4358 ms/op 0.47
findModifiedValidators - 10000 modified validators 573.84 ms/op 1.1179 s/op 0.51
findModifiedValidators - 1000 modified validators 304.79 ms/op 783.85 ms/op 0.39
findModifiedValidators - 100 modified validators 274.31 ms/op 452.62 ms/op 0.61
findModifiedValidators - 10 modified validators 257.90 ms/op 466.00 ms/op 0.55
findModifiedValidators - 1 modified validators 135.62 ms/op 381.56 ms/op 0.36
findModifiedValidators - no difference 162.41 ms/op 357.16 ms/op 0.45
migrate state 1500000 validators, 3400 modified, 2000 new 1.0031 s/op 2.7446 s/op 0.37
RootCache.getBlockRootAtSlot - 250000 vs - 7PWei 4.0800 ns/op 11.020 ns/op 0.37
state getBlockRootAtSlot - 250000 vs - 7PWei 472.65 ns/op 1.2571 us/op 0.38
computeProposerIndex 100000 validators 1.4920 ms/op 4.0402 ms/op 0.37
getNextSyncCommitteeIndices 1000 validators 116.08 ms/op 372.99 ms/op 0.31
getNextSyncCommitteeIndices 10000 validators 120.55 ms/op 318.51 ms/op 0.38
getNextSyncCommitteeIndices 100000 validators 120.98 ms/op 356.24 ms/op 0.34
computeProposers - vc 250000 611.41 us/op 2.3991 ms/op 0.25
computeEpochShuffling - vc 250000 41.620 ms/op 101.53 ms/op 0.41
getNextSyncCommittee - vc 250000 10.335 ms/op 30.293 ms/op 0.34
nodejs block root to RootHex using toHex 151.91 ns/op 784.40 ns/op 0.19
nodejs block root to RootHex using toRootHex 83.078 ns/op 247.73 ns/op 0.34
nodejs fromHex(blob) 645.68 us/op 715.22 us/op 0.90
nodejs fromHexInto(blob) 731.45 us/op 1.8865 ms/op 0.39
nodejs block root to RootHex using the deprecated toHexString 572.68 ns/op 1.3110 us/op 0.44
nodejs byteArrayEquals 32 bytes (block root) 28.852 ns/op 77.675 ns/op 0.37
nodejs byteArrayEquals 48 bytes (pubkey) 41.105 ns/op 90.327 ns/op 0.46
nodejs byteArrayEquals 96 bytes (signature) 39.791 ns/op 83.272 ns/op 0.48
nodejs byteArrayEquals 1024 bytes 46.052 ns/op 147.57 ns/op 0.31
nodejs byteArrayEquals 131072 bytes (blob) 1.9025 us/op 4.2500 us/op 0.45
browser block root to RootHex using toHex 172.50 ns/op 632.68 ns/op 0.27
browser block root to RootHex using toRootHex 154.21 ns/op 368.89 ns/op 0.42
browser fromHex(blob) 1.1310 ms/op 2.5939 ms/op 0.44
browser fromHexInto(blob) 727.23 us/op 1.6694 ms/op 0.44
browser block root to RootHex using the deprecated toHexString 401.96 ns/op 971.77 ns/op 0.41
browser byteArrayEquals 32 bytes (block root) 32.013 ns/op 79.431 ns/op 0.40
browser byteArrayEquals 48 bytes (pubkey) 44.767 ns/op 104.01 ns/op 0.43
browser byteArrayEquals 96 bytes (signature) 87.677 ns/op 191.20 ns/op 0.46
browser byteArrayEquals 1024 bytes 825.00 ns/op 1.8093 us/op 0.46
browser byteArrayEquals 131072 bytes (blob) 102.93 us/op 269.97 us/op 0.38

by benchmarkbot/action

@wemeetagain wemeetagain force-pushed the fix-shutdown-sequence branch from 6e96944 to e7e7497 Compare February 13, 2026 14:53
- Await backfillSync.close() by storing sync loop promise and waiting
  for in-flight DB writes to complete. Moved after network.close() so
  in-flight req/resp calls are terminated first.
- Await monitoring.close() pending HTTP request after aborting fetch

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@wemeetagain wemeetagain force-pushed the fix-shutdown-sequence branch from e7e7497 to 2544481 Compare February 13, 2026 14:55
Member

@nflaig nflaig left a comment


@wemeetagain this needs to be tested on a running node. I don't see any issues when currently shutting down the node; that doesn't mean it's ideal, and this PR might improve things, but we need to make sure it doesn't make the status quo worse.

Member

@nflaig nflaig left a comment


Reduced this PR to the minimal useful changes: awaiting pending requests in the monitoring service before closing it (although even that is a bit unnecessary).

The most value this PR adds is the comment in close() which clarifies why we want to abort() last.

@nflaig nflaig changed the title fix: properly await async shutdown and fix close sequence chore: properly await async monitoring requests during shutdown Feb 24, 2026
@nflaig nflaig enabled auto-merge (squash) February 24, 2026 12:25
@nflaig nflaig merged commit 95cf2ed into unstable Feb 24, 2026
20 of 21 checks passed
@nflaig nflaig deleted the fix-shutdown-sequence branch February 24, 2026 12:31
@codecov

codecov bot commented Feb 24, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 52.34%. Comparing base (f225223) to head (0824371).
⚠️ Report is 1 commit behind head on unstable.

Additional details and impacted files
@@            Coverage Diff            @@
##           unstable    #8896   +/-   ##
=========================================
  Coverage     52.33%   52.34%           
=========================================
  Files           848      848           
  Lines         63283    63274    -9     
  Branches       4694     4694           
=========================================
- Hits          33119    33118    -1     
+ Misses        30095    30087    -8     
  Partials         69       69           
