Support manual bitrate mask control #81
base: main
Add support for setting cfg80211_bitrate_mask via set_bitrate_mask(). This enables users to manually specify legacy bitrates using the iw utility, which influences the simulated transmission latency.

The packet latency logic has been updated to reflect real-world behavior by making the delay inversely proportional to the configured bitrate. When a valid bitrate mask is applied, the delay is calculated as:

(datalen * 8 * 1000) / bitrate_kbps

If no valid bitrate is set, a fallback default delay is used.

This change improves the fidelity of the wireless emulation by making packet timing sensitive to user configuration and network rate conditions.
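A userspace sketch of that calculation (with the bitrate in kb/s, the quotient comes out in microseconds; the fallback constant below is illustrative, not taken from the driver):

```c
/* Sketch of the PR's delay formula.  datalen is the payload length in
 * bytes and bitrate_kbps the configured legacy bitrate; the result is
 * in microseconds (bits * 1000 / kb/s).  The fallback value is an
 * assumption for illustration only. */
static unsigned int tx_delay_us(unsigned int datalen, unsigned int bitrate_kbps)
{
    const unsigned int default_delay_us = 1000; /* assumed fallback */

    if (bitrate_kbps > 0)
        return (datalen * 8 * 1000) / bitrate_kbps;
    return default_delay_us;
}
```

For example, a 1500-byte frame at 54 Mb/s (54000 kb/s) yields 12,000,000 / 54,000 ≈ 222 µs.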
Consolidate the corresponding test procedure.
#define VWIFI_ALL_RATES_5GHZ 0x0FF0 /* bits 4~11 set */

static const struct ieee80211_rate vwifi_supported_rates[];
static const int vwifi_supported_rates_count = 12;
Avoid hardcoded constants.
The current implementation, which selects the first set bit in the legacy bitrate mask to determine the transmission rate, is not suitable for simulating manual bitrate settings. This approach is inconsistent with how real Wi-Fi drivers operate. In practice, the legacy mask is used to filter the set of allowed legacy (non-HT) bitrates for rate control algorithms, not to enforce a fixed rate via first-fit logic.

The purpose of the legacy field in cfg80211_bitrate_mask is to restrict which legacy rates the device is permitted to use, typically in 802.11a/b/g mode. It does not imply which single rate must be used. Rate control algorithms like minstrel dynamically select among the allowed rates based on link conditions and performance feedback.
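The filtering semantics can be sketched in userspace. The rate table below holds the standard 802.11b/g values in units of 100 kb/s (the convention used by struct ieee80211_rate's bitrate field), and the "pick the fastest allowed rate" policy is a toy stand-in for a real rate-control algorithm:

```c
#include <stddef.h>
#include <stdint.h>

/* Standard 802.11b/g rate table in 100 kb/s units (the convention of
 * struct ieee80211_rate's bitrate field). */
static const unsigned int rate_table[] = {
    10, 20, 55, 110, 60, 90, 120, 180, 240, 360, 480, 540,
};

/* The legacy mask filters which indices rate control may use; it does
 * not name a single rate.  This toy policy simply picks the fastest
 * allowed rate, standing in for minstrel's dynamic selection. */
static int best_allowed_rate(uint32_t legacy_mask)
{
    int best = -1;

    for (size_t i = 0; i < sizeof(rate_table) / sizeof(rate_table[0]); i++) {
        if (!(legacy_mask & (1u << i)))
            continue; /* rate excluded by the mask */
        if (best < 0 || rate_table[i] > rate_table[best])
            best = (int) i;
    }
    return best; /* index of the chosen rate, or -1 if the mask is empty */
}
```

Note that the scan over all set bits matters: the table is not monotonic (CCK rates precede OFDM rates), so a first-fit pick would select 1 Mb/s for almost any mask.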
Moreover, vwifi now emulates 802.11n behavior. If the goal is to support manual bitrate selection or simulate fixed throughput, you should properly implement HT MCS handling by parsing mask->control[band].ht_mcs, selecting the correct MCS index, and using it to simulate transmission latency. This would result in a more accurate and standards-compliant simulation of real-world Wi-Fi behavior.
Please consider refactoring the logic to support HT rates and avoid oversimplified legacy rate selection.
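A sketch of the suggested ht_mcs parsing, assuming the kernel's bitmap layout (one byte per spatial stream, IEEE80211_HT_MCS_MASK_LEN bytes total, where bit n of byte s corresponds to MCS s*8+n):

```c
#include <stdint.h>

#define HT_MCS_MASK_LEN 10 /* mirrors IEEE80211_HT_MCS_MASK_LEN */

/* Return the highest MCS index permitted by an ht_mcs bitmap laid out
 * like cfg80211_bitrate_mask.control[band].ht_mcs, or -1 if no MCS is
 * allowed.  Scanning from the top byte down finds the fastest option. */
static int highest_ht_mcs(const uint8_t ht_mcs[HT_MCS_MASK_LEN])
{
    for (int byte = HT_MCS_MASK_LEN - 1; byte >= 0; byte--)
        for (int bit = 7; bit >= 0; bit--)
            if (ht_mcs[byte] & (1u << bit))
                return byte * 8 + bit;
    return -1;
}
```

Picking the highest allowed MCS is one simple policy; a driver could equally feed the whole filtered set to its rate-control logic.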
BTW, how do you verify the correctness of your latency simulation? You should consider implementing a script or tool to validate and measure the simulated latency for accuracy.
@@ -206,6 +214,10 @@ static int station = 2;
module_param(station, int, 0444);
MODULE_PARM_DESC(station, "Number of virtual interfaces running in STA mode.");

static int ap = 0;
I think dynamic AP interface creation is unnecessary for vwifi, since we can switch an existing STA interface to AP mode when needed. Additionally, this change is unrelated to bitrate emulation and should be submitted as a separate PR for better modularity and review clarity.
@@ -3340,6 +3438,14 @@ static int __init vwifi_init(void)
    goto interface_add;
}

for (int i = 0; i < ap; ++i) {
I think dynamic AP interface creation is unnecessary for vwifi, since we can switch an existing STA interface to AP mode when needed. Additionally, this change is unrelated to bitrate emulation and should be submitted as a separate PR for better modularity and review clarity.
    }
}

if (bitrate_kbps > 0) {
Also consider implementing latency simulation for the default bitrate (MCS=31) to ensure consistent transmission behavior when no MCS is specified.
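For the default-MCS case, one way to derive a bitrate for the delay formula is the standard HT20 long-guard-interval rate table; MCS 8-31 reuse the single-stream rates scaled by the spatial-stream count, so MCS 31 (4 streams, 64-QAM 5/6) comes out at 260 Mb/s. A sketch:

```c
/* 20 MHz, long-guard-interval HT rates in kb/s for MCS 0-7 (one
 * spatial stream), per the 802.11n rate tables. */
static const unsigned int ht20_lgi_kbps[8] = {
    6500, 13000, 19500, 26000, 39000, 52000, 58500, 65000,
};

/* MCS 8-31 use the same per-stream rates with 2-4 spatial streams. */
static unsigned int ht_mcs_to_kbps(unsigned int mcs)
{
    if (mcs > 31)
        return 0; /* outside the equal-modulation MCS range */
    return ht20_lgi_kbps[mcs % 8] * (mcs / 8 + 1);
}
```

Feeding ht_mcs_to_kbps(31) into the existing delay formula would give the default case the same latency model as an explicitly configured rate.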
@@ -871,6 +923,26 @@ static int __vwifi_ndo_start_xmit(struct vwifi_vif *vif,
                                  eth_hdr->h_source);
}
Latency simulation should be applied not only in the uplink path, but also in the downlink path to ensure consistent behavior in both directions.
Thank you very much for your detailed feedback. After reviewing my current implementation, I realize that you are absolutely right.

Currently, I'm using the first set bit in the legacy bitrate mask to select a fixed transmission rate. However, as you pointed out, this is not how real Wi-Fi drivers are expected to behave: the legacy mask is only meant to constrain the set of allowed legacy rates, not to enforce a fixed one. To properly simulate fixed throughput or latency behavior, I should parse the mask->control[band].ht_mcs field and select an appropriate MCS index, which more accurately reflects the actual bit rate under 802.11n.

As a result, I've decided to re-prioritize my tasks and implement MCS programmability first. This will enable the driver to support manual selection of specific MCS indices. Once that is in place, I'll refactor the latency simulation logic to compute transmission delay based on the selected MCS, achieving more accurate and standards-compliant behavior.
Perhaps you can just focus on implementing the simulation of transmission time in this PR. The MCS-related implementation will be addressed in PR80. You can also collaborate with @dingsen-Greenhorn to complete that PR.
Thanks for the notification! @jychen0611
Hi @jychen0611, I'd like to confirm whether my current approach for verifying the bitrate-based latency simulation is appropriate. To test the dynamic latency adjustment feature in this PR, I built an environment based on the code from PR80, where the MCS value can change. However, I noticed that in PR80, the MCS only updates when the dump command is invoked, and there is no persistent storage for the MCS state. To address this, my plan is as follows:

- During …
- Extract the MCS parsing logic from …
- In …

Does this approach make sense?
Further discussion on this implementation can be found in issue #82.