
aws-chunked encoding issue while using ingress-nginx as proxy #13589

@philippebi

Description

What happened:

Uploading a file to an S3 bucket via an Envoy proxy exposed by an RKE2 Ingress fails because of malformed headers.

The complete path:
aws cli ==&lt;TCP/443&gt;==> RKE2 Ingress Nginx ==&lt;TCP/80&gt;==> Envoy (80) ==> S3

The result:

$ aws s3 cp toto.txt s3://mybucket/blablabla.txt --endpoint-url https://my-s3-proxy.example.com --ca-bundle tls.crt
upload failed: ./toto.txt to s3://mybucket/blablabla.txt An error occurred (MalformedTrailerError) when calling the PutObject operation: The request contained trailing data that was not well-formed or did not conform to our published schema.

The headers received by Envoy, i.e. sent by the Ingress Nginx proxy:

[2025-07-01 09:02:05.891][19][debug][router] [source/common/router/router.cc:738] [Tags: "ConnectionId":"34","StreamId":"9456506396688543425"] router decoding headers:
':authority', 's3.xx.amazonaws.com'
':path', '/mybucket/blablabla.txt'
':method', 'PUT'
':scheme', 'https'
'x-request-id', '118e81ac9b0f333aabcde9cf66bc22e7'
'x-real-ip', '192.168.69.100'
'x-forwarded-for', '192.168.69.100'
'x-forwarded-host', 'my-s3-proxy.example.com'
'x-forwarded-port', '443'
'x-forwarded-proto', 'https'
'x-forwarded-scheme', 'https'
'x-scheme', 'https'
'content-length', '73'
'accept-encoding', 'identity'
'x-amz-sdk-checksum-algorithm', 'CRC64NVME'
'content-type', 'text/plain'
'user-agent', 'aws-cli/2.27.38 md/awscrt#0.26.1 ua/2.1 os/linux#6.12.0-160000.16-default md/arch#x86_64 lang/python#3.13.4 md/pyimpl#CPython m/G,N cfg/retry-mode#standard md/installer#exe md/distrib#sles.16 md/prompt#off md/command#s3.cp'
'content-encoding', 'aws-chunked'
'x-amz-trailer', 'x-amz-checksum-crc64nvme'
'x-amz-decoded-content-length', '23'
'x-envoy-internal', 'true'
'x-amz-content-sha256', '1e44f06ee9b122766eb87b9b9bc47f3995b1aa0c863ba45908'
'x-amz-security-token', 'long_truncated_token'
'x-amz-date', '20250701T090200Z'
'authorization', 'AWS4-HMAC-SHA256 Credential=ASIA274CBMIDHGFY4SIZ/20250701/aws_region-1/s3/aws4_request, SignedHeaders=accept-encoding;content-encoding;content-length;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length;x-amz-sdk-checksum-algorithm;x-amz-security-token;x-amz-trailer;x-real-ip;x-request-id;x-scheme, Signature=cafdc9ac275a0482b2ca2cabb7beb8719e011d94'

This same command works with no TLS termination in the Ingress controller, i.e.:
aws cli ==&lt;TCP/80&gt;==> RKE2 Ingress Nginx ==&lt;TCP/80&gt;==> Envoy (80) ==> S3

And note, that with TLS termination on Ingress and curl it also works:

curl -v \
  --aws-sigv4 "aws:amz:${S3_REGION}:${S3_SERVICE}" \
  -H "Content-Type: ${CONTENT_TYPE}" \
  -T "${LOCAL_FILE}" \
  ${CURL_URL}

curl obviously does not use aws-chunked but only standard chunked Transfer-Encoding.
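
If I read the aws-chunked framing correctly, the 23-byte payload announced in x-amz-decoded-content-length is wrapped roughly like this on the wire (the checksum value is purely illustrative):

17\r\n                                      <- chunk size in hex (0x17 = 23 bytes)
<23 bytes of file payload>\r\n
0\r\n                                       <- final zero-length chunk
x-amz-checksum-crc64nvme:AAAAAAAAAAA=\r\n   <- trailer announced in x-amz-trailer
\r\n

That framing adds up to the content-length of 73 seen above (4 + 25 + 3 + 39 + 2). If a proxy buffers or re-frames this body but leaves content-encoding: aws-chunked and x-amz-trailer in place, S3 would try to parse a trailer that is no longer well-formed, which could explain the MalformedTrailerError.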

What you expected to happen:

I expect the upload to work with ingress-nginx TLS termination.

However, it seems that with TLS, the aws cli switches to streaming the upload with the aws-chunked content-encoding and a trailing checksum, which is not correctly stripped by Nginx.

So it seems that aws-chunked encoding is the issue here. To be honest, I'm not sure whether this is an actual bug or simply unsupported functionality, since aws-chunked is not standard, but I couldn't find any clear information about it. It's also possible that I'm missing some configuration on the Ingress side; I've tried adjusting the proxy buffering settings (on/off), but that didn't help either.
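
One way to isolate this from the client side: recent aws cli v2 releases have a setting that only computes request checksums when an operation requires them, which should make the CLI send a plain payload instead of aws-chunked with a trailer. This is only a hedged sketch based on my understanding of the CLI configuration; the setting may not exist in older versions:

# ~/.aws/config -- only add checksums when the operation requires them (assumption: supported by this CLI version)
[default]
request_checksum_calculation = when_required

# or per invocation via the environment
$ AWS_REQUEST_CHECKSUM_CALCULATION=when_required aws s3 cp toto.txt s3://mybucket/blablabla.txt \
    --endpoint-url https://my-s3-proxy.example.com --ca-bundle tls.crt

If the upload succeeds with this setting, that would point at the aws-chunked/trailer handling in the proxy chain as the culprit.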

NGINX Ingress controller version (exec into the pod and run /nginx-ingress-controller --version):

$ kubectl exec -it rke2-ingress-nginx-controller-pj9dd -n kube-system -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.10.5-hardened4
  Build:         git-0eced1a4a
  Repository:    https://github.com/rancher/ingress-nginx
  nginx version: nginx/1.25.5

-------------------------------------------------------------------------------

Kubernetes version (use kubectl version):

$ kubectl version
Client Version: v1.29.11+rke2r1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.11+rke2r1

Environment:

> kubectl get nodes -owide
NAME   STATUS   ROLES                       AGE   VERSION           INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                            KERNEL-VERSION             CONTAINER-RUNTIME
nano   Ready    control-plane,etcd,master   8d    v1.29.11+rke2r1   192.168.69.100   <none>        SUSE Linux Enterprise Server 16.0   6.12.0-160000.16-default   containerd://1.7.23-k3s2

How was the ingress-nginx-controller installed:

Bundled with RKE2

How to reproduce this issue:

1 - Create a simple bucket in S3
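
For example (using the bucket name from the failing command above):

$ aws s3 mb s3://mybucket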

2 - Create a dummy self-signed certificate on your RKE2/K8s node:

$ cat my-s3-proxy.example.com_openssl.cnf 
[ req ] 
default_bits = 2048 
prompt = no 
default_md = sha256 
distinguished_name = dn 

[ dn ] 
C=US 
ST=CA 
L=SanFrancisco 
O=MyOrg 
OU=IT 
CN = my-s3-proxy.example.com

[ v3_ext ] 
# Extensions for a server certificate 
basicConstraints = CA:FALSE 
keyUsage = nonRepudiation, digitalSignature, keyEncipherment 
subjectAltName = @alt_names # Points to the [alt_names] section 

[ alt_names ] 
DNS.1 = my-s3-proxy.example.com
$ openssl req -x509 -newkey rsa:2048 -keyout tls.key -out tls.crt -days 365 -nodes -config my-s3-proxy.example.com_openssl.cnf -extensions v3_ext

3 - Create envoy namespace and secret with the tls key

kubectl create ns envoy
kubectl create secret tls s3-proxy-tls --cert=tls.crt --key=tls.key -n envoy

4 - Apply the enclosed file s3_envoy_8080.yaml; don't forget to customize it first, i.e. in the Envoy config, add the correct region and the AWS key/secret/token. A sketch of the matching Ingress resource is shown below.

s3_envoy_8080.txt
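
The reproduction also needs an Ingress that terminates TLS in front of the Envoy Service. A minimal sketch follows; the Service name envoy-s3-proxy and port 8080 are assumptions derived from the attached manifest and the pod name used in step 6, so adjust them to your setup:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: s3-proxy
  namespace: envoy
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - my-s3-proxy.example.com
      secretName: s3-proxy-tls
  rules:
    - host: my-s3-proxy.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: envoy-s3-proxy   # assumed Service name
                port:
                  number: 8080         # assumed port, taken from the attachment name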

5 - Install the aws cli and add the AWS key/secret/token to ~/.aws/credentials.
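
The credentials file looks like this (placeholder values; the session token line is only needed for temporary credentials):

# ~/.aws/credentials
[default]
aws_access_key_id     = AKIA...
aws_secret_access_key = ...
aws_session_token     = ...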

6 - First 'ls' a file in S3 with no endpoint to be sure access works, then test again with the endpoint to go through Envoy. The '--debug' flag can be used to get more traces. In the Envoy deployment I have already enabled the 'trace' log level, so the headers can be consulted with 'kubectl logs envoy-s3-proxy-XXX -nenvoy -f'.
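
For example (bucket and file names follow the ones used above):

# sanity check straight against S3, without the proxy
$ aws s3 ls s3://mybucket/

# same upload through the ingress + envoy chain, with extra traces
$ aws s3 cp toto.txt s3://mybucket/blablabla.txt \
    --endpoint-url https://my-s3-proxy.example.com --ca-bundle tls.crt --debug

# watch the headers as seen by envoy
$ kubectl logs envoy-s3-proxy-XXX -nenvoy -f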

Labels: kind/bug, needs-priority, needs-triage
