This reflects the changes made in #882 and #914.
I have also adjusted the claim that Hyperion supports 170k+ players down to 10k+ players. We do not currently have any
benchmarks supporting the 170k+ player claim, either before or after the Bevy PR. I believe this number was
extrapolated from the 5k-player ms/tick benchmark, where (1.42 ms/tick)/(5k players) = (50 ms/tick)/(176k players,
extrapolated), but we do not have a real benchmark running 170k+ players.
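
For context, the linear extrapolation behind the old number works out as follows (my reconstruction; it assumes tick
time scales linearly with player count, which is exactly the assumption no benchmark has validated):

$$
5{,}000 \text{ players} \times \frac{50\ \text{ms/tick}}{1.42\ \text{ms/tick}} \approx 176{,}000 \text{ players}
$$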
IS -->|" Decode & create Bevy events for each packet"|RS
RS -->|State changes| ES
ES -->|" local BytesMut "| ET
ES -->|" Next tick (50ms) "|IS
@@ -62,83 +62,37 @@ pub struct Unicast<'a> {

## Ingress

-### Tokio Async Task
-The tokio async ingress task creates a data structure
-defined [here](https://github.com/andrewgazelka/hyperion/blob/0c1a0386548d71485c442cf5e9c9ebb2ed58142e/crates/hyperion/src/net/proxy.rs#L16-L23).
+### Packet Channel

-```rust
-#[derive(Default)]
-pub struct ReceiveStateInner {
-    /// All players who have recently connected to the server.
-    pub player_connect: Vec<u64>,
-    /// All players who have recently disconnected from the server.
-    pub player_disconnect: Vec<u64>,
-    /// A map of stream ids to the corresponding [`BytesMut`] buffers. This represents data from the client to the server.
-    pub packets: HashMap<u64, BytesMut>,
-}
-```
-
-### Decoding System
-
-Then, when it is time to run the ingress system, we lock the mutex for `ReceiveStateInner` and process the data,
-decoding all the packets until we get
-
-```rust
-#[derive(Copy, Clone)]
-pub struct BorrowedPacketFrame<'a> {
-    pub id: i32,
-    pub body: &'a [u8],
-}
-```
+The packet channel is a linked list of `Fragment`. Each fragment contains:
+- an incremental fragment id
+- a `Box<[u8]>` with zero or more contiguous packets with a `u32` length prefix before each packet
+- a read cursor, where `0..read_cursor` is ready to read and contains whole packets
+- an `ArcSwapOption<Fragment>` pointing to the next fragment if there is one

-where the `'a` lifetime is the duration of the entire tick (the `BytesMut` are deallocated at the end of the tick).
+As packets from the client are processed in the proxy thread, the server decodes the `VarInt` length prefix to
+determine the packet size. If there is enough space remaining in the current fragment, the packet bytes are copied to
+the current fragment. Otherwise, a new fragment will be allocated and appended to the linked list.

-### Event Generation
+The decoding system, running in separate threads, iterates through the linked list and reads packets up to the read
+cursor. These packets are decoded, sent through Bevy events, and then processed by other systems which read those
+events.

-For each entity's packet, we have a switch statement over what we should do for each
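
To make the packet-channel structure described in the diff above more concrete, here is a minimal sketch of a fragment
linked list with length-prefixed packets, a per-fragment read cursor, and an `ArcSwapOption` next pointer. This is an
illustration under assumptions, not Hyperion's implementation: the names `PacketChannel`, `FRAGMENT_CAPACITY`, `push`,
`seal`, and `for_each_packet` are invented here, the length prefix is written little-endian for simplicity, and a
fragment only becomes visible to readers once it is sealed, whereas the channel described above advances a live read
cursor inside the fragment currently being filled. It uses the `arc-swap` crate.

```rust
// Sketch only: illustrates the fragment list described above, not Hyperion's actual code.
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};

use arc_swap::ArcSwapOption; // from the `arc-swap` crate

/// Assumed fragment capacity; the real value is an implementation detail.
const FRAGMENT_CAPACITY: usize = 64 * 1024;

struct Fragment {
    /// Incremental fragment id.
    id: u64,
    /// Zero or more contiguous packets, each prefixed by a `u32` length.
    buf: Box<[u8]>,
    /// Bytes in `0..read_cursor` are whole packets and are safe to read.
    read_cursor: AtomicUsize,
    /// Next fragment in the linked list, if there is one.
    next: ArcSwapOption<Fragment>,
}

/// Writer-side handle: packets are copied into `pending`, and a new fragment is
/// appended to the list whenever the current one would overflow.
struct PacketChannel {
    head: Arc<Fragment>,
    tail: Arc<Fragment>,
    pending: Vec<u8>,
    next_id: u64,
}

impl PacketChannel {
    fn new() -> Self {
        // Empty sentinel fragment so readers always have somewhere to start.
        let sentinel = Arc::new(Fragment {
            id: 0,
            buf: Vec::new().into_boxed_slice(),
            read_cursor: AtomicUsize::new(0),
            next: ArcSwapOption::new(None),
        });
        Self { head: Arc::clone(&sentinel), tail: sentinel, pending: Vec::new(), next_id: 1 }
    }

    /// Copy one packet into the current fragment, or start a new fragment
    /// if there is not enough space remaining.
    fn push(&mut self, packet: &[u8]) {
        let needed = 4 + packet.len();
        if !self.pending.is_empty() && self.pending.len() + needed > FRAGMENT_CAPACITY {
            self.seal();
        }
        self.pending.extend_from_slice(&(packet.len() as u32).to_le_bytes());
        self.pending.extend_from_slice(packet);
    }

    /// Publish the pending bytes as a new fragment appended to the linked list.
    fn seal(&mut self) {
        if self.pending.is_empty() {
            return;
        }
        let buf = std::mem::take(&mut self.pending).into_boxed_slice();
        let ready = buf.len();
        let frag = Arc::new(Fragment {
            id: self.next_id,
            buf,
            read_cursor: AtomicUsize::new(ready),
            next: ArcSwapOption::new(None),
        });
        self.next_id += 1;
        self.tail.next.store(Some(Arc::clone(&frag)));
        self.tail = frag;
    }

    /// Reader side: walk the list and hand every whole packet to `f`.
    fn for_each_packet(&self, mut f: impl FnMut(u64, &[u8])) {
        let mut frag = Arc::clone(&self.head);
        loop {
            let ready = frag.read_cursor.load(Ordering::Acquire);
            let mut pos = 0;
            while pos + 4 <= ready {
                let len = u32::from_le_bytes(frag.buf[pos..pos + 4].try_into().unwrap()) as usize;
                pos += 4;
                f(frag.id, &frag.buf[pos..pos + len]);
                pos += len;
            }
            let Some(next) = frag.next.load_full() else { break };
            frag = next;
        }
    }
}

fn main() {
    let mut channel = PacketChannel::new();
    channel.push(b"\x10hello"); // arbitrary example packet bytes
    channel.push(b"\x11world");
    channel.seal(); // flush so the reader sees both packets in this sketch
    channel.for_each_packet(|id, body| println!("fragment {id}: {} byte packet", body.len()));
}
```

The property the sketch aims to show is that bytes behind the read cursor are never mutated again, so decoding threads
can walk the list and slice packets without locking the writer; the `ArcSwapOption` link is the only shared mutable
pointer between the two sides.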
`docs/index.md`: 2 additions & 4 deletions
@@ -5,21 +5,19 @@ layout: home
hero:
  name: "Hyperion"
  text: "The most advanced Minecraft game engine built in Rust"
-  tagline: 170,000 players in one world at 20 TPS
+  tagline: 10,000 players in one world at 20 TPS
  actions:
    - theme: brand
      text: Architecture
      link: /architecture/introduction
    - theme: alt
      text: 10,000 Player PvP
-      link: /tag/introduction
+      link: /bedwars/introduction

features:
  - title: Run massive events with confidence
    details: Built in Rust, you can be highly confident your event will not crash from memory leaks or SEGFAULTS.
  - title: Vertical and horizontal scalability
    details: In our testing I/O is the main bottleneck in massive events. As such, we made it so I/O logic can be offloaded horizontally. The actual core game server is scaled vertically.
-  - title: Easy debugging profiling
-    details: All tasks are easily viewable in a tracing UI. All entities are viewable and modifiable from Flecs Explorer.