Conversation

@arnetheduck (Member) commented Sep 8, 2025

This change brings Nimbus full circle to where it started all these
years ago and allows running Ethereum in a single node / process, both
as a wallet/web3 backend and as a validator.

Among its interesting properties are:

* Easy to set up and run - one binary, one process, no JWT or messy
  setup, no cross-client communication issues, timing issues, etc.
* Excellent performance, of course
* Shared database = small database
* Can run without the legacy devp2p as long as you're reasonably synced
  and not using it for block production - up to 5 months of history are
  instead sourced from the consensus network; block production requires
  devp2p, since that's where the public mempool comes from

Running Ethereum and syncing mainnet is now as easy as:

```sh
./nimbus trustedNodeSync \
  --trusted-node-url=http://testing.mainnet.beacon-api.nimbus.team/ \
  --backfill=false

./nimbus
```

The consensus chain will start from a checkpoint while the execution
chain will still be synced via P2P.
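Once the node is running, you can watch the consensus side catch up
through the standard beacon API - a minimal check, assuming you started
with `--rest` (see below) on the default port 5052:

```sh
# check consensus sync progress via the standard beacon API
# (assumes --rest is enabled, listening on the default port 5052)
curl -s http://localhost:5052/eth/v1/node/syncing
```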

You need about 500GB of space in total, but if you're buying a drive
today, get 2 or 4 TB anyway.

Testnets like `hoodi` can reasonably be synced from P2P all the way
(it takes a bit more than a day at the time of writing), without
checkpoint sync:

```sh
./nimbus --network:hoodi
```

That's it! The node can now be used both for validators and as a web3
provider. `--rpc` gives you the web3 backend that wallets connect to,
while `--rest` gives you the beacon API that validator clients use.
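For a quick smoke test of the web3 side, any standard JSON-RPC call
works - this sketch assumes `--rpc` is listening on the default
execution-client port 8545:

```sh
# ask the execution side for the latest block number over JSON-RPC
# (assumes --rpc is enabled on the default port 8545)
curl -s -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  http://localhost:8545
```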

Of course, you can run your validators [in the
node](https://nimbus.guide/run-a-validator.html#2-import-your-validator-keys)
as well.
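Key import works through the same subcommand the beacon node offers
(the binary supports all its subcommands, as described below) - a
sketch, with illustrative paths:

```sh
# import validator keystores generated with the staking deposit CLI
# (same subcommand as nimbus_beacon_node; paths are illustrative)
./nimbus deposits import --data-dir=/some/path /path/to/validator_keys
```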

Here's a true maxi configuration that turns on (almost) everything:

```sh
./nimbus --rpc --rest --metrics
```
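With `--metrics` enabled, a Prometheus endpoint becomes available -
assuming the default metrics port of 8008, you can scrape it directly:

```sh
# scrape the Prometheus metrics endpoint
# (assumes the default metrics port 8008)
curl -s http://localhost:8008/metrics | head
```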

The execution chain can also be imported from era files: download the
history from https://mainnet.era.nimbus.team/ and
https://mainnet.era1.nimbus.team/, place the files in `era` and `era1`
in the data directory as the [manual](https://nimbus.guide/execution-client.html#syncing-using-era-files)
suggests, then run an `import` - it takes a few days:

```sh
./nimbus import
```
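To fetch the era files beforehand, a recursive download into the data
directory does the job - a sketch, assuming `/some/path` is your
`--data-dir`:

```sh
# mirror the era/era1 history into the data directory
# (assumes /some/path is the --data-dir passed to ./nimbus)
mkdir -p /some/path/era /some/path/era1
(cd /some/path/era && wget -m -np -nd https://mainnet.era.nimbus.team/)
(cd /some/path/era1 && wget -m -np -nd https://mainnet.era1.nimbus.team/)
```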

If you were already running Nimbus, you can reuse your existing data
directory - use `--data-dir:/some/path` as usual with all the commands
to specify where you want your data stored. If you had both eth1 and
eth2 directories, just merge their contents.
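For example, if the two clients previously kept separate directories,
the merge might look like this - paths are purely illustrative:

```sh
# merge previously separate execution (eth1) and consensus (eth2)
# data directories into one (paths are purely illustrative)
mv /data/eth1/* /data/nimbus/
mv /data/eth2/* /data/nimbus/
./nimbus --data-dir:/data/nimbus
```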

To get up and running more quickly, snapshots of the mainnet execution
database are maintained here:

https://eth1-db.nimbus.team/

Together with checkpoint sync, you'll have a fully synced node in no
time!
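The archive name changes over time, so check the index for the current
one - a sketch, assuming a tar archive and `/some/path` as the
`--data-dir`:

```sh
# download and unpack a database snapshot into the data directory
# (<snapshot> is a placeholder - see https://eth1-db.nimbus.team/
# for the actual file name; assumes a tar archive)
wget https://eth1-db.nimbus.team/<snapshot>
tar -xf <snapshot> -C /some/path
```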

In future versions, this will be replaced by snap sync or an equivalent
state sync mechanism.

To build the prototype:

```sh
make update
make -j8 nimbus
```

In the single-process binary, the beacon and execution chains each run
in their own thread, sharing the data directory and common services -
similar to running the two pieces separately with the same data dir.

One way to think about it is that the execution client and beacon node
are stand-alone libraries being used together - this is not far from
the truth and, in fact, you can use either (or both!) as a library.

The binary supports the union of all functionality that
`nimbus_execution_client` and `nimbus_beacon_node` offer, including all
the subcommands like [checkpoint
sync](https://nimbus.guide/trusted-node-sync.html)
and [execution history
import](https://nimbus.guide/execution-client.html#import-era-files),
simply using the `nimbus` command instead.
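In other words, commands you'd previously point at the separate
binaries now go through `nimbus`:

```sh
# the familiar subcommands, now via the combined binary
./nimbus trustedNodeSync --help
./nimbus import --help
```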

Prototype notes:

* cross-thread communication is done using a local instance of web3 /
  JSON - this is nuts of course: it should simply pass objects around
  and convert directly to RLP on demand without going via JSON
* the thread pool is not shared but should be - nim-taskpools needs to
  learn to accept tasks from threads other than the one that created it
* discovery is not shared - instead, each of eth1/2 runs its own
  discovery protocols and consequently the node has two "identities"
* there are many efficiency opportunities to exploit, in particular on
  the memory usage front
* next up, light client and portal modes will be added as options,
  enabling a wide range of feature vs performance tradeoffs