Description
I discovered an issue that causes a process using isahc to crash with a segmentation fault. After debugging, I found that the segfault happens whenever `isahc::agent::AgentContext::run` panics. This is a problem because there are multiple ways in which that function, or the functions it calls, can panic.
Below is a list of the steps that lead to the segmentation fault:
1. `isahc::agent::AgentContext::run` panics. `run` has taken ownership of `self`, so when `run` is unwound, `self` is dropped.
2. Because `isahc::agent::AgentContext` holds a `curl::multi::Easy2Handle`, which in turn holds a `curl::multi::DetachGuard`, `curl::multi::DetachGuard::drop` is called.
3. `drop` calls `curl::multi::DetachGuard::detach`, and `detach` calls `curl_sys::curl_multi_remove_handle`.
4. `curl_multi_remove_handle` is a C function and it calls more C functions. Execution returns to Rust via the `curl::multi::Multi::_socket_function::cb` callback.
5. `cb` defines a closure and passes it to `curl::panic::catch`, which invokes the closure.
6. The closure executes unsafe code that creates a reference to another closure `f` (defined in `isahc::agent::AgentContext::new`) and calls `f` through that reference. A segmentation fault occurs.
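To make the first steps of that chain concrete, here is a minimal, self-contained sketch, not isahc's actual code: `DetachGuard`, `run`, and the `DETACHED` flag are stand-ins. It shows that a value owned by a panicking function has its `Drop` implementation executed *during unwinding*, which is exactly the point at which the real `DetachGuard` re-enters the C cleanup path:

```rust
use std::panic::{self, AssertUnwindSafe};
use std::sync::atomic::{AtomicBool, Ordering};

// Records whether the guard's Drop ran; a stand-in for observing
// the call into curl_sys::curl_multi_remove_handle.
static DETACHED: AtomicBool = AtomicBool::new(false);

// Hypothetical stand-in for curl::multi::DetachGuard.
struct DetachGuard;

impl Drop for DetachGuard {
    fn drop(&mut self) {
        // In isahc this is where the C cleanup function is called,
        // which then calls back into Rust through the socket callback.
        DETACHED.store(true, Ordering::SeqCst);
    }
}

// Simulates AgentContext::run: it owns its state, then panics.
fn run(_guard: DetachGuard) {
    panic!("simulated panic in AgentContext::run");
}

fn main() {
    let result = panic::catch_unwind(AssertUnwindSafe(|| run(DetachGuard)));
    assert!(result.is_err());
    // The guard's Drop ran while the stack was unwinding, i.e. the
    // cleanup path (and any callbacks it triggers) executes in a
    // partially torn-down state.
    assert!(DETACHED.load(Ordering::SeqCst));
}
```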
It is easy to reproduce this issue by adding a `panic!` macro invocation to the `isahc::agent::AgentContext::run` method, below the line `self.poll()?;`.
A similar segmentation fault can occur due to unsafe code in a closure defined in `curl::multi::Multi::_timer_function::cb`. That closure, too, creates a reference to another closure `f` and calls it. To reproduce this case, move the `panic!` invocation up in the `run` method, below the line `self.poll_messages()?;`.
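For context, the pattern these callbacks use is the standard one for FFI: a Rust panic must not unwind across a C stack frame, so the callback body is wrapped in `std::panic::catch_unwind` and a plain error code is returned to C instead. A minimal sketch of that pattern (not the curl crate's actual implementation; `timer_cb` and its signature are illustrative):

```rust
use std::panic::{self, AssertUnwindSafe};

// Illustrative FFI-style callback: runs the supplied function and
// converts any panic into an error code rather than unwinding into C.
extern "C" fn timer_cb(run: fn()) -> i32 {
    match panic::catch_unwind(AssertUnwindSafe(run)) {
        Ok(()) => 0,
        Err(_) => -1, // report failure to the C caller instead of unwinding
    }
}

fn main() {
    // Silence the default panic hook so the caught panic prints nothing.
    panic::set_hook(Box::new(|_| {}));
    assert_eq!(timer_cb(|| ()), 0);
    assert_eq!(timer_cb(|| panic!("boom")), -1);
}
```

Note that this only protects against panics raised *inside* the callback; it does not help when, as in this issue, the callback dereferences a closure that unwinding has already invalidated.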
Adding a `panic!` macro invocation to the `run` method is of course artificial, and perhaps that alone would not warrant reconsidering the panic handling design and fixing the segmentation fault. However, a panic in `run`, and the resulting segmentation fault, can also occur organically under certain conditions. That is how I originally came across this issue and started my investigation. I created a related issue #460 and a pull request #461 that explain and reproduce an organic panic and the subsequent segmentation fault.
Overall I think there are two issues in isahc that should be looked into:
- panic handling should be improved so that a panic does not cause a segmentation fault,
- and `isahc::agent::AgentContext::poll` should not call `unwrap`.
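On the second point, a hedged sketch of what replacing `unwrap` with error propagation could look like. The names here (`AgentError`, `lookup`, the request map) are hypothetical stand-ins, not isahc's actual internals:

```rust
use std::collections::HashMap;

// Hypothetical error type standing in for isahc's internal error.
#[derive(Debug, PartialEq)]
enum AgentError {
    UnknownToken(usize),
}

// Instead of `requests.get(&token).unwrap()`, which panics on a missing
// token, surface the failure as an Err the agent loop can handle.
fn lookup(
    requests: &HashMap<usize, &'static str>,
    token: usize,
) -> Result<&'static str, AgentError> {
    requests
        .get(&token)
        .copied()
        .ok_or(AgentError::UnknownToken(token))
}

fn main() {
    let mut requests = HashMap::new();
    requests.insert(1, "request-1");
    assert_eq!(lookup(&requests, 1), Ok("request-1"));
    // With unwrap this case would panic inside poll; here it is a
    // recoverable error that can be propagated with `?`.
    assert_eq!(lookup(&requests, 7), Err(AgentError::UnknownToken(7)));
}
```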
System and version information
I have been able to reproduce the segmentation fault using a `panic!` macro invocation both on a laptop and inside a container in a CI environment. Here is some information about both.
Laptop
```shell
$ grep PRETTY_NAME /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
$ cat /etc/debian_version
12.11
$ uname -a
Linux <redacted for privacy> 6.1.0-37-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.140-1 (2025-05-22) x86_64 GNU/Linux
$ rustup show active-toolchain
1.87.0-x86_64-unknown-linux-gnu
$ grep isahc Cargo.toml
isahc = { version = "1.7.2", default-features = false, features = ["http2"] }
```

Container in CI environment
```shell
$ grep PRETTY_NAME /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
$ cat /etc/debian_version
12.11
$ uname -a
Linux <redacted for privacy> 6.1.79 #1 SMP Wed Apr  9 00:59:22 UTC 2025 x86_64 GNU/Linux
$ rustup show active-toolchain
1.87.0-x86_64-unknown-linux-gnu
$ grep isahc Cargo.toml
isahc = { version = "1.7.2", default-features = false, features = ["http2"] }
```