
Conversation

catenacyber
Contributor

#446 with additional commit.

Waiting for QA, but it looks good locally (same hashes as libhtp.rs).

Reasons for patching libhtp.c, for each commit:

  • decompressors: do not take data after end:
    The flate2 crate does not accept data after the end of the stream (cf. https://docs.rs/flate2/latest/flate2/write/struct.DeflateDecoder.html#method.try_finish for instance), and this seems right. libhtp.c used to restart a new decompressor, fail, and switch to passthrough (see the sketch after this list).
    These cases happen because of (not perfectly handled) gaps.

  • response: end decompressors in chunked content:
    libhtp.rs (by construction of its chained Rust decoders) does not buffer up to 8192 bytes before flushing to the C caller.
    libhtp.c failed to flush some (gzipped) data in these cases (end of stream before the file was complete).

  • chunks: abort asap on invalid chunk length and chunks: probe validity if data was not buffered:
    libhtp.rs looks saner here:

  1. it does not wait for 8 bytes before probing the validity of the chunk length line when we did not get a full line
  2. if data was buffered, it does not probe validity as if the line had just started
    These cases happen because of (not perfectly handled) gaps.
    There may be more to do here in C, but this looks like the minimal change to get the same behavior for QA.
  • response: do not error on gap finishing content-length:
    libhtp.rs gets this right by doing the following in response_body_data:
                // Send data buffer to the decompressor if it exists
                if resp.response_decompressor.is_none() && data.is_none() {
                    return Ok(());
                }

So I changed libhtp.c. To change libhtp.rs behavior instead, one would just remove the code above and let execution reach:

                let mut decompressor = resp.response_decompressor.take().ok_or(HtpStatus::ERROR)?;
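
As a minimal sketch of the flate2 behavior mentioned in the first bullet above (this assumes the flate2 crate as a dependency and is not libhtp code): once the deflate stream has been finished, further writes to the decoder may return an error, per the try_finish documentation, instead of silently starting a new stream the way libhtp.c used to via its restart/passthrough path.

    use flate2::write::{DeflateDecoder, DeflateEncoder};
    use flate2::Compression;
    use std::io::Write;

    fn main() -> std::io::Result<()> {
        // Build a complete deflate stream.
        let mut enc = DeflateEncoder::new(Vec::new(), Compression::default());
        enc.write_all(b"hello world")?;
        let compressed = enc.finish()?;

        // Decompress it fully, then finish the stream.
        let mut dec = DeflateDecoder::new(Vec::new());
        dec.write_all(&compressed)?;
        dec.try_finish()?;

        // Bytes arriving after the end of the stream (e.g. because of a badly
        // handled gap) are not decoded into a new stream; this write may fail.
        let after_end = dec.write_all(b"data after end");
        println!("write after end of stream: {:?}", after_end);
        Ok(())
    }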

When a response ends in the middle of a chunk, signal
it to the decompressors to let them finalize the data
they got so far, as is done with content-length.

Do this for all states of chunked content.

If a chunk length line was split across two packets, we buffered
the end of the first packet (the beginning of the line) and checked
its validity.
So, do not re-check it from scratch on the second packet (see the sketch below).

Passing a gap to a decompressor closes it, but if we then also finish
a known content-length, we try to re-close the decompressor and fail.
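
A hypothetical sketch of the chunk length probing described above (the helper name and logic are illustrative, not libhtp's actual parser): probe a partial chunk length line as soon as bytes arrive, abort as soon as an invalid byte shows up, and check the continuation of a split line together with the already-buffered prefix rather than as if the line had just started.

    /// Returns true if `seen` could still be (a prefix of) a valid chunk
    /// length line: hex digits, then an optional ";extension" part up to CRLF.
    fn could_be_chunk_len_line(seen: &[u8]) -> bool {
        let mut has_hex = false;
        for &b in seen {
            match b {
                b'0'..=b'9' | b'a'..=b'f' | b'A'..=b'F' => has_hex = true,
                b';' | b'\r' | b'\n' => return has_hex,
                // Invalid byte: abort as soon as possible, no need to wait
                // for 8 bytes.
                _ => return false,
            }
        }
        // Line is incomplete but plausible so far: keep buffering.
        true
    }

    fn main() {
        // A short partial line can already be judged without a full line.
        assert!(could_be_chunk_len_line(b"1a"));
        assert!(!could_be_chunk_len_line(b"<html>"));

        // The continuation of a split line looks invalid if probed as if the
        // line had just started...
        assert!(!could_be_chunk_len_line(b";ext=1\r\n"));
        // ...but is fine when checked together with the buffered prefix.
        assert!(could_be_chunk_len_line(b"1a;ext=1\r\n"));
    }
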
@catenacyber
Contributor Author

I do not think there should be a libhtp release for 7.0.9

Should we have tickets for these, plus SV tests?
For SV tests, there is a problem: many of the examples come from the mishandling of gaps in chunked transfer, so once this gap handling for chunks is itself fixed, they will no longer test/highlight the problem that is being fixed here...

Fixing this means a bigger QA change (see #442).
So, I think we want libhtp.rs more than a fix to this libhtp.c behavior.

@catenacyber
Contributor Author

Passes UT + SV

@suricata-qa

Information:

ERROR: QA failed on SURI_TLPW1_files_sha256.

field                                              baseline    test    %
SURI_TLPW1_stats_chk
  .app_layer.error.http.parser                           47      45    95.74%
SURI_TLPR1_stats_chk
  .app_layer.error.http.parser                          574     536    93.38%

Pipeline 25103

@catenacyber
Contributor Author

The decrease in .app_layer.error.http.parser is expected; it comes from the commit "response: do not error on gap finishing content-length".

@catenacyber
Contributor Author

TL;DR LGTM

The sha files diff looks good, with the same behavior as the old OISF/suricata#12765, except for
a21105bb6f4381958a3300e1fdfadfc3ad07c8dbaeaafde9bac80708bac5f716 - d04d95357cd2677d9933856cf60b85a2dc519e1c101fe7eebfe31913bf63355f
and the new OISF/suricata#12767 should fix that with OISF/suricata@e392a00.

@catenacyber
Contributor Author

I confirm it is good

QA results for OISF/suricata#12767 show the same hashes.

@victorjulien
Member

I confirm it is good

QA results for OISF/suricata#12767 show the same hashes.

Staging is running.

@jasonish
Member

Merged. Thanks!

@jasonish closed this Mar 14, 2025