Config change and restart UI #351
Conversation
Amazinggg. I'll get a UI working for this!
6e20a3e to d0f2903
This now has UI and is ready for review.
Discovered in #351, where restart would hang forever. key_input.rs never properly implemented shutdown because it didn't have to do anything interesting on shutdown. Wire up oneshot channels so that it falls in line with other services. I do wonder, though, if there's a more clever way of handling this. For example, I could just not use the task_tracker, use tokio::spawn, and let the task get cancelled by tokio.
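For reference, a minimal sketch of the oneshot-based shutdown pattern described above, assuming a tokio task doing periodic work; the names (`run_service`, `shutdown_rx`) are illustrative, not rayhunter's actual code:

```rust
use tokio::sync::oneshot;

// A service task that does periodic work until it is told to shut down.
async fn run_service(mut shutdown_rx: oneshot::Receiver<()>) {
    let mut interval = tokio::time::interval(std::time::Duration::from_millis(100));
    loop {
        tokio::select! {
            // Normal work: poll for input, handle events, etc.
            _ = interval.tick() => { /* do one unit of work */ }
            // When the shutdown signal fires (or the sender is dropped),
            // exit the loop so the task finishes and a TaskTracker (or
            // JoinHandle) can observe completion instead of hanging.
            _ = &mut shutdown_rx => break,
        }
    }
}

#[tokio::main]
async fn main() {
    let (shutdown_tx, shutdown_rx) = oneshot::channel();
    let handle = tokio::spawn(run_service(shutdown_rx));

    // ... later, during restart/shutdown:
    let _ = shutdown_tx.send(());
    handle.await.unwrap();
}
```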
Testing this out, restart worked great the first time but crashed rayhunter the second time. Logs below:
Was able to reproduce this crash; sometimes it happens on the second config change, last time it took until the 3rd or 4th config change. Possibly a race condition?
yes i saw this on tplink too. i worked around it with the
i like this approach a lot, thanks for all the work on it. got a couple questions/comments
bin/src/server.rs (outdated)
let config = parse_config(&state.config_path).await.map_err(|err| {
    (
        StatusCode::INTERNAL_SERVER_ERROR,
        format!("failed to read config file: {}", err),
    )
})?;
i guess this is a broader API question, but should GET /api/config return the currently loaded config, or the current state of the config file (as implemented here)? i'm leaning towards the former, since that's what'd be most helpful for the user i think, but i could go either way
if we say the only valid flow is "get config, set config, restart" then i'd take the currently loaded config. but i think there are valid reasons one may want to postpone the restart and instead make a change to the config, then later make another change.
but yeah it makes it hard to reason about what the currently loaded config is. i can add some marker that tells the user the currently saved config is out of sync with the daemon's loaded config. what do you think?
I think a marker here is right, but I also wonder if we should just have one button that both saves the config and reloads the daemon instead of having two steps
+1, a dirty_config flag or something to mark that the config is written but not active yet would be great
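One way such a marker could look, as a rough sketch only: the response for GET /api/config carries the on-disk config plus a flag computed by comparing it against the config the daemon loaded. The field names and struct layout here are assumptions, not rayhunter's actual API.

```rust
use serde::Serialize;

// Illustrative config; rayhunter's real Config has more fields.
#[derive(Serialize, PartialEq)]
struct Config {
    ui_level: u8,
}

// What GET /api/config could return: the on-disk config plus a flag saying
// whether it differs from what the running daemon actually loaded.
#[derive(Serialize)]
struct ConfigResponse {
    config: Config,
    config_dirty: bool,
}

fn build_response(on_disk: Config, loaded: &Config) -> ConfigResponse {
    ConfigResponse {
        config_dirty: on_disk != *loaded,
        config: on_disk,
    }
}

fn main() {
    let loaded = Config { ui_level: 1 };
    let saved = Config { ui_level: 2 };
    let resp = build_response(saved, &loaded);
    // The saved config differs from the loaded one, so a restart is pending.
    assert!(resp.config_dirty);
}
```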
{:else}
    <div class="text-center py-4 text-red-600">
        Failed to load configuration. Please try reloading the page.
    </div>
{/if}
hmm indentation seems a bit wonky here
Co-authored-by: Will Greenberg <ifnspifn@gmail.com>
I removed all advanced and disabled options now to make the UI simpler. If somebody really wants to change the storage path, port, or dummy analyzer mode, then we can put effort into that and deal with the UI complexity then.
@cooperq i couldn't figure out what the error is about. if it happens on orbic then i suspect we are using the diag interface wrong. i added an exponential backoff instead of the static 5-second sleep, hope it helps with orbic too.
New version works much better, I think exponential backoff was the way to go here.
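Roughly, the exponential backoff described above can be sketched like this: a generic retry helper that doubles the delay between attempts up to a cap. The delay values and the shape of the retried operation are assumptions, not the actual rayhunter code.

```rust
use std::time::Duration;

async fn retry_with_backoff<T, E, F, Fut>(mut op: F, max_attempts: u32) -> Result<T, E>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<T, E>>,
{
    let mut delay = Duration::from_millis(500);
    let mut attempt = 0;
    loop {
        match op().await {
            Ok(val) => return Ok(val),
            Err(err) => {
                attempt += 1;
                if attempt >= max_attempts {
                    return Err(err);
                }
                tokio::time::sleep(delay).await;
                // Double the wait before the next try, capped at 30 seconds.
                delay = (delay * 2).min(Duration::from_secs(30));
            }
        }
    }
}

#[tokio::main]
async fn main() {
    // Example usage: retry a fallible async operation (here it always fails).
    let result: Result<(), &str> =
        retry_with_backoff(|| async { Err("no diag device") }, 3).await;
    println!("{result:?}");
}
```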
pub async fn set_config(
    State(state): State<Arc<ServerState>>,
    Json(config): Json<Config>,
just to double-check my assumption, since we're saying this is a JSON-serialized Config struct, axum handles invalid input such as config.ui_level = "foo", right?
yes it'll just return a 400 or similar
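To illustrate the point, a minimal sketch (the `Config` fields and route are illustrative assumptions, not the actual handler): axum's `Json` extractor deserializes the body before the handler body runs, so an ill-typed value like `"ui_level": "foo"` is rejected with a 4xx response and never reaches `set_config`.

```rust
use axum::{extract::Json, http::StatusCode, routing::post, Router};
use serde::Deserialize;

// Illustrative config; rayhunter's real Config has more fields.
#[derive(Deserialize)]
struct Config {
    ui_level: u8,
}

// If this handler runs at all, `config` already deserialized successfully;
// a body like {"ui_level": "foo"} is rejected by the extractor before this point.
async fn set_config(Json(config): Json<Config>) -> StatusCode {
    let _ = config;
    StatusCode::ACCEPTED
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/api/config", post(set_config));
    let listener = tokio::net::TcpListener::bind("127.0.0.1:8080").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```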
I think everything is addressed. There's just one "Save and apply" button now.
this seems to work great, merging!
Add UI for getting config, setting config, and restarting rayhunter.
The endpoints for config expose JSON instead of TOML for ease of use from JS.
The restart endpoint does not restart the entire process, but rather all threads. In theory this is more portable than trying to restart the entire process. In practice this seems to cause some issues with the diag device, which are worked around with a 5-second sleep.
Only tested on TP-Link M7350, no other devices.
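As a rough illustration of the "restart all tasks instead of the whole process" idea described above (a sketch only; the channel layout and service structure are assumptions, not rayhunter's actual server):

```rust
use tokio::sync::{mpsc, oneshot};

// Stand-in for spawning the diag reader, key input, web server, etc.;
// it simply waits until it is told to shut down.
async fn run_services(shutdown_rx: oneshot::Receiver<()>) {
    let _ = shutdown_rx.await;
}

// Supervisor loop: start the services, wait for a restart request from the
// HTTP endpoint, shut the services down cleanly, then start them again.
async fn supervisor(mut restart_rx: mpsc::Receiver<()>) {
    loop {
        let (shutdown_tx, shutdown_rx) = oneshot::channel();
        let services = tokio::spawn(run_services(shutdown_rx));

        // `None` means the restart sender was dropped, i.e. final shutdown.
        let keep_going = restart_rx.recv().await.is_some();

        let _ = shutdown_tx.send(());
        let _ = services.await;

        if !keep_going {
            break;
        }
    }
}

#[tokio::main]
async fn main() {
    let (restart_tx, restart_rx) = mpsc::channel(1);
    let sup = tokio::spawn(supervisor(restart_rx));

    // The restart endpoint would do something like this:
    let _ = restart_tx.send(()).await;

    drop(restart_tx); // dropping the sender ends the supervisor loop
    let _ = sup.await;
}
```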