Hi.
I've noticed that bpftune keeps increasing net.ipv4.tcp_rmem even though it is already well beyond net.core.rmem_max. Is this correct behaviour?
Jul 22 14:43:43 e350 bpftune[55954]: bpftune works fully
Jul 22 14:43:43 e350 bpftune[55954]: bpftune supports per-netns policy (via netns cookie)
Jul 22 14:43:44 e350 bpftune[55954]: Scenario 'specify TCP congestion control algorithm' occurred for tunable 'net.ipv4.tcp_allowed_congestion_control' in global ns. To optimize TCP performance, a TCP congestion control algorithm was chosen to mimimize round-trip time and maximize delivery rate.
Jul 22 14:43:44 e350 bpftune[55954]: updating 'net.ipv4.tcp_allowed_congestion_control' to 'reno bbr cubic dctcp htcp'
Jul 22 14:44:00 e350 bpftune[55954]: Scenario 'need to increase TCP buffer size(s)' occurred for tunable 'net.ipv4.tcp_rmem' in global ns. Need to increase buffer size(s) to maximize throughput
Jul 22 14:44:00 e350 bpftune[55954]: Due to need to increase max buffer size to maximize throughput change net.ipv4.tcp_rmem(min default max) from (4096 131072 91552733) -> (4096 131072 114440916)
Jul 22 14:44:30 e350 bpftune[55954]: Scenario 'need to increase TCP buffer size(s)' occurred for tunable 'net.ipv4.tcp_rmem' in global ns. Need to increase buffer size(s) to maximize throughput
Jul 22 14:44:30 e350 bpftune[55954]: Due to need to increase max buffer size to maximize throughput change net.ipv4.tcp_rmem(min default max) from (4096 131072 114440916) -> (4096 131072 143051145)
Jul 22 14:44:30 e350 bpftune[55954]: Scenario 'need to increase TCP buffer size(s)' occurred for tunable 'net.ipv4.tcp_rmem' in global ns. Need to increase buffer size(s) to maximize throughput
Jul 22 14:44:30 e350 bpftune[55954]: Due to need to increase max buffer size to maximize throughput change net.ipv4.tcp_rmem(min default max) from (4096 131072 143051145) -> (4096 131072 178813931)
Jul 22 14:44:30 e350 bpftune[55954]: Scenario 'need to increase TCP buffer size(s)' occurred for tunable 'net.ipv4.tcp_rmem' in global ns. Need to increase buffer size(s) to maximize throughput
Jul 22 14:44:30 e350 bpftune[55954]: Due to need to increase max buffer size to maximize throughput change net.ipv4.tcp_rmem(min default max) from (4096 131072 178813931) -> (4096 131072 223517413)
Jul 22 14:44:30 e350 bpftune[55954]: Scenario 'need to increase TCP buffer size(s)' occurred for tunable 'net.ipv4.tcp_rmem' in global ns. Need to increase buffer size(s) to maximize throughput
Jul 22 14:44:30 e350 bpftune[55954]: Due to need to increase max buffer size to maximize throughput change net.ipv4.tcp_rmem(min default max) from (4096 131072 223517413) -> (4096 131072 279396766)
Jul 22 14:56:43 e350 bpftune[55954]: Scenario 'need to increase TCP buffer size(s)' occurred for tunable 'net.ipv4.tcp_rmem' in global ns. Need to increase buffer size(s) to maximize throughput
Jul 22 14:56:43 e350 bpftune[55954]: Due to need to increase max buffer size to maximize throughput change net.ipv4.tcp_rmem(min default max) from (4096 131072 279396766) -> (4096 131072 349245957)
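Incidentally, the successive max values in the log grow by exactly 25% per step under integer arithmetic. This is just an observation from the logged values, not a claim about bpftune's actual implementation, but it can be reproduced as:

```python
# Reproduce the tcp_rmem max progression from the log above.
# Inferred growth rule: new_max = old_max + old_max // 4
# (a 25% increase with truncating integer division) -- an observation
# from the logged values, not a reading of the bpftune source.
logged = [
    91552733, 114440916, 143051145, 178813931,
    223517413, 279396766, 349245957,
]

max_val = logged[0]
for expected in logged[1:]:
    max_val += max_val // 4  # +25%, truncating
    assert max_val == expected, (max_val, expected)

print("all", len(logged) - 1, "increments match a 25% growth rule")
```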
My system has the following max set:
❯ sysctl net.core.rmem_max
net.core.rmem_max = 7500000
According to https://www.kernel.org/doc/html/latest/networking/ip-sysctl.html, net.ipv4.tcp_rmem is capped by net.core.rmem_max, and likewise tcp_wmem by net.core.wmem_max.
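The discrepancy can be checked by comparing the third field of net.ipv4.tcp_rmem against net.core.rmem_max. A minimal sketch using the values reported above (on a live system these would be read from /proc/sys instead of being hard-coded):

```python
# Compare tcp_rmem's max (third field) against rmem_max.
# Values below are the ones reported in this issue; on a live system
# they could be read from /proc/sys/net/ipv4/tcp_rmem and
# /proc/sys/net/core/rmem_max instead.
tcp_rmem = "4096 131072 349245957"  # net.ipv4.tcp_rmem after bpftune's updates
rmem_max = 7500000                  # net.core.rmem_max as reported

tcp_rmem_max = int(tcp_rmem.split()[2])
if tcp_rmem_max > rmem_max:
    print(f"tcp_rmem max {tcp_rmem_max} exceeds rmem_max {rmem_max}")
```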