
[Bug]: Template build v2 fails on AWS S3: SDK sends Transfer-Encoding: chunked on presigned PUT uploads #1235

@mi-subbotin

Description


Sandbox ID or Build ID

No response

Environment

  • Self-hosted E2B on AWS (deployed from e2b-dev/infra main branch)
    • Storage: AWS S3 (im-e2b-fc-build-cache.s3.us-east-1.amazonaws.com)
    • SDK: e2b@2.17.0
    • Node.js: v25.8.2
    • OS: macOS Darwin 25.4.0

Timestamp of the issue

2026-03-25 17:00 UTC

Frequency

Happens every time

Expected behavior

The SDK should set the Content-Length header when uploading files to S3 presigned URLs, so the upload succeeds.

Actual behavior

The SDK uses Transfer-Encoding: chunked for some file uploads. AWS S3 does not support chunked transfer encoding on presigned PUT URLs and returns:

NotImplemented: A header you provided implies functionality that is not implemented (Transfer-Encoding)

Small files upload successfully (201), but larger files that use the streaming path fail (501).

Issue reproduction:

```javascript
import { Template, defaultBuildLogger } from "e2b";

const tmpl = Template()
  .fromTemplate("my-template")
  .copy("entrypoint.sh", "/entrypoint.sh")
  .runCmd("echo done");

await Template.build(tmpl, {
  alias: "my-sandbox",
  onBuildLogs: defaultBuildLogger(),
  cpuCount: 4,
  memoryMB: 4096,
});

// FileUploadError: Failed to upload file: Not Implemented
```

Root cause:
Node.js's fetch (undici) sends Transfer-Encoding: chunked when the request body is a ReadableStream with no known length. GCS (used by E2B Cloud) accepts chunked uploads on presigned URLs, but S3 does not.
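This behavior can be observed against a local HTTP server standing in for S3 (a sketch, assuming Node 18+ with the built-in undici-based fetch; the local server here is a stand-in and not part of E2B or S3):

```javascript
import http from "node:http";

// Record the framing headers each PUT arrives with.
const seen = [];
const server = http.createServer((req, res) => {
  seen.push({
    te: req.headers["transfer-encoding"],
    cl: req.headers["content-length"],
  });
  req.resume();
  res.setHeader("Connection", "close"); // let the process exit cleanly
  req.on("end", () => res.end("ok"));
});
await new Promise((resolve) => server.listen(0, resolve));
const url = `http://127.0.0.1:${server.address().port}/`;

// 1) Streaming body: length unknown, so fetch falls back to chunked encoding.
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue(new TextEncoder().encode("hello"));
    controller.close();
  },
});
await fetch(url, { method: "PUT", body: stream, duplex: "half" });

// 2) Buffered body: length is known, so fetch sets Content-Length instead.
await fetch(url, { method: "PUT", body: new TextEncoder().encode("hello") });

server.close();
console.log(seen);
// First request arrives with transfer-encoding: chunked and no content-length;
// the second arrives with content-length: 5 and no transfer-encoding.
```

Note that fetch requires `duplex: "half"` when the body is a ReadableStream; the SDK's streaming path hits exactly the first case above.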

Workaround:
Monkey-patch fetch to buffer stream bodies so that Content-Length is set before the S3 PUT:

```javascript
const originalFetch = globalThis.fetch;
globalThis.fetch = async (input, init) => {
  const url = typeof input === "string" ? input : input?.url || "";
  const method = init?.method || input?.method || "GET";
  if (method === "PUT" && url.includes(".s3.") && init?.body instanceof ReadableStream) {
    // Drain the stream into memory so fetch can compute a fixed length.
    const reader = init.body.getReader();
    const chunks = [];
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      if (value) chunks.push(value);
    }
    const buffer = new Uint8Array(chunks.reduce((a, c) => a + c.length, 0));
    let offset = 0;
    for (const chunk of chunks) {
      buffer.set(chunk, offset);
      offset += chunk.length;
    }
    // Keep the original init (headers included); a fixed-length body makes
    // fetch send Content-Length instead of Transfer-Encoding: chunked.
    return originalFetch(url, { ...init, body: buffer });
  }
  return originalFetch(input, init);
};
```


Additional context

  • Self-hosted E2B on AWS is officially supported via https://github.com/aws-samples/sample-e2b-on-aws
  • The API correctly returns presigned S3 URLs (GET /templates/{id}/files/{hash} → 201)
  • Direct curl -X PUT with Content-Length to the same presigned URL works (200)
  • Only the SDK's streaming upload fails

Suggested fix:
In the SDK's file upload logic, buffer the ReadableStream body so that Content-Length is set before calling fetch with S3 presigned URLs. Alternatively, use an ArrayBuffer instead of a ReadableStream for presigned PUT uploads.
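A minimal sketch of that approach, assuming Node 18+ (`bufferBody` is a hypothetical helper name, not an existing SDK function): draining the stream via `Response.arrayBuffer()` gives fetch a fixed-length body, so it emits Content-Length rather than chunked encoding.

```javascript
// Collapse a ReadableStream body into a single ArrayBuffer before the
// presigned PUT; pass through anything that already has a known length.
async function bufferBody(body) {
  if (body instanceof ReadableStream) {
    // Response.arrayBuffer() drains the stream for us.
    return new Response(body).arrayBuffer();
  }
  return body;
}

// Usage: buffer first, then PUT with the fixed-length body.
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue(new TextEncoder().encode("file contents"));
    controller.close();
  },
});
const buf = await bufferBody(stream);
console.log(buf.byteLength); // 13
```

The trade-off is memory: buffering holds the whole file in RAM, which is fine for template layers but would not scale to arbitrarily large uploads (S3 multipart would be the alternative there).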
