### Checkboxes for prior research

- [x] I've gone through Developer Guide and API reference
- [x] I've checked AWS Forums and StackOverflow.
- [x] I've searched for previous similar issues and didn't find any solution.
### Describe the bug

`GetObject` calls to S3 Object Lambda Access Points are failing with `403 ERR_BAD_REQUEST` starting with v3.729.0, because `event.getObjectContext.inputS3Url` now has an `X-Amz-SignedHeaders` value of `host%3Bx-amz-checksum-mode`. Similar calls to the same S3 Object Lambda Access Points succeed when made using `getSignedUrl` from `@aws-sdk/s3-request-presigner` with the same `GetObjectCommand`, because there `X-Amz-SignedHeaders` still has a value of `host`.
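For comparison, a minimal sketch of the presigner path that still works (using the same `s3ObjectLambdaAccessPointArn` and `key` placeholders as the calling code below):

```js
const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

const s3 = new S3Client();

// Presigning the same GetObjectCommand directly still produces a URL with
// X-Amz-SignedHeaders=host, and fetching that URL succeeds.
const url = await getSignedUrl(
  s3,
  new GetObjectCommand({ Bucket: s3ObjectLambdaAccessPointArn, Key: key }),
  { expiresIn: 61 }
);
console.log(new URL(url).searchParams.get('X-Amz-SignedHeaders')); // "host"
```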
### Regression Issue

- [x] Select this option if this issue appears to be a regression.
### SDK version number

`@aws-sdk/client-s3@3.729.0`
### Which JavaScript Runtime is this issue in?

Node.js
### Details of the browser/Node.js/ReactNative version

v20.19.0
### Reproduction Steps

- Create an S3 Object Lambda Access Point whose function fetches `inputS3Url` (e.g. with axios), as in the handler code below.
- Make a `GetObject` call for a valid object through the access point; `inputS3Url` will have an `X-Amz-SignedHeaders` value of `host%3Bx-amz-checksum-mode`.
- The axios call should fail with `403 ERR_BAD_REQUEST`.
```js
// s3 object lambda access point code
const { S3Client, WriteGetObjectResponseCommand } = require('@aws-sdk/client-s3');
const axios = require('axios').default;

const s3 = new S3Client();

exports.handler = async (event) => {
  const fileUrl = event.getObjectContext.inputS3Url;
  const requestRoute = event.getObjectContext.outputRoute;
  const requestToken = event.getObjectContext.outputToken;
  try {
    // Fetch the object via the presigned inputS3Url and stream it back to the caller
    await s3.send(new WriteGetObjectResponseCommand({
      RequestRoute: requestRoute,
      RequestToken: requestToken,
      Body: (await axios.get(fileUrl, { responseType: 'stream' })).data,
      ContentType: 'application/json'
    }));
  } catch (error) {
    console.log(error);
    throw error;
  }
};
```
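As a side note, a quick way to confirm the signed headers from inside the handler is to parse the presigned URL. This is a hypothetical debugging snippet, not part of the original handler:

```js
// Inside exports.handler: log which headers were signed into inputS3Url.
// On v3.729.0 this prints "host;x-amz-checksum-mode"; on earlier versions, "host".
const signedHeaders = new URL(event.getObjectContext.inputS3Url)
  .searchParams.get('X-Amz-SignedHeaders');
console.log('X-Amz-SignedHeaders:', signedHeaders);
```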
```js
// calling code
const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client();

const response = await s3.send(new GetObjectCommand({
  Bucket: s3ObjectLambdaAccessPointArn,
  Key: key
}));
```
### Observed Behavior

Inside the Object Lambda Access Point, the axios call to `inputS3Url` fails with the following error (note `X-Amz-SignedHeaders=host%3Bx-amz-checksum-mode` in the URL):
```json
{
  "message": "Request failed with status code 403",
  "name": "AxiosError",
  "stack": "AxiosError: Request failed with status code 403\n at settle (/var/task/index.js:50819:16)\n at IncomingMessage.handleStreamEnd (/var/task/index.js:51636:15)\n at IncomingMessage.emit (node:events:536:35)\n at endReadableNT (node:internal/streams/readable:1698:12)\n at process.processTicksAndRejections (node:internal/process/task_queues:82:21)\n at Axios.request (/var/task/index.js:52427:45)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async exports.handler (/var/task/index.js:63609:26)",
  "config": {
    "transitional": {
      "silentJSONParsing": true,
      "forcedJSONParsing": true,
      "clarifyTimeoutError": false
    },
    "adapter": [
      "xhr",
      "http",
      "fetch"
    ],
    "transformRequest": [
      null
    ],
    "transformResponse": [
      null
    ],
    "timeout": 0,
    "xsrfCookieName": "XSRF-TOKEN",
    "xsrfHeaderName": "X-XSRF-TOKEN",
    "maxContentLength": -1,
    "maxBodyLength": -1,
    "env": {},
    "headers": {
      "Accept": "application/json, text/plain, */*",
      "range": "bytes=0-0",
      "User-Agent": "axios/1.8.2",
      "Accept-Encoding": "gzip, compress, deflate, br"
    },
    "responseType": "arraybuffer",
    "method": "get",
    "url": "https://<access-point>.s3-accesspoint.us-east-2.amazonaws.com/<file-name>?X-Amz-Security-Token=<security-token>&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20250623T211144Z&X-Amz-SignedHeaders=host%3Bx-amz-checksum-mode&X-Amz-Expires=61&X-Amz-Credential=<key>%2F20250623%2Fus-east-2%2Fs3%2Faws4_request&X-Amz-Signature=<signature>",
    "allowAbsoluteUrls": true
  },
  "code": "ERR_BAD_REQUEST",
  "status": 403
}
```
### Expected Behavior

We expected the calls to succeed, as they did before this update.
### Possible Solution

The internal call to the S3 Object Lambda Access Point should be made with a presigned URL that has the original `X-Amz-SignedHeaders` value of `host`.
### Additional Information/Context

Calls do succeed when the S3 client is initialized with `responseChecksumValidation` set to `'WHEN_REQUIRED'`, but this shouldn't be a required workaround for working with a standard AWS service such as S3 Object Lambda Access Points.
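For reference, a sketch of that workaround on the calling client (configuration only; the behavior is as described above):

```js
const { S3Client } = require('@aws-sdk/client-s3');

// Workaround: validate response checksums only when required, so the
// GetObject request should not send the x-amz-checksum-mode header and
// inputS3Url should again be signed with X-Amz-SignedHeaders=host.
const s3 = new S3Client({
  responseChecksumValidation: 'WHEN_REQUIRED'
});
```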