Upload Large Files in Parts with Multiple Params
Uploading very large files reliably is a challenge in modern web applications. Network interruptions, memory limits, and browser upload restrictions can easily break a single large request. A proven solution is chunked uploading—splitting a large file into smaller parts, uploading each chunk separately, and merging them on the server.
This guide walks through a robust chunk-uploading strategy with:
- resumable uploads
- progress tracking
- failure retries
- configurable chunk sizes
- optional hashing
- server merge support
Step 1: Slice the File into Chunks
Use Blob.prototype.slice() to split a large file into smaller parts:
function createFileChunks(file, size = 5 * 1024 * 1024) {
  const result = [];
  let index = 0;
  let offset = 0;
  while (offset < file.size) {
    // slice() does not read the data; it only creates a Blob reference
    const chunk = file.slice(offset, offset + size);
    result.push({
      index,
      file: chunk,
      // Placeholder identifier; see "Use Real Hashing" below for content-based hashes
      hash: `${file.name}-chunk-${index}`,
    });
    offset += size;
    index++;
  }
  return result;
}
Improvements
- Predictable `hash` format
- Configurable chunk size
- Stable ordering via `index`
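For instance, a hypothetical handler for a file input could feed a user-selected file straight into createFileChunks (the 10 MB chunk size here is just an illustration):
// Hypothetical usage: slice a user-selected file into 10 MB chunks
const input = document.querySelector('input[type="file"]');
input.addEventListener("change", () => {
  const file = input.files[0];
  const chunks = createFileChunks(file, 10 * 1024 * 1024);
  console.log(`Prepared ${chunks.length} chunks for ${file.name}`);
});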
Step 2: Build the HTTP Request for Each Chunk
Each chunk should be wrapped in FormData with metadata:
function createChunkRequest(chunk, uploadUrl, token) {
  const formData = new FormData();
  formData.append("chunk", chunk.file);
  formData.append("hash", chunk.hash);
  formData.append("index", chunk.index);
  // Return a function so the request can be retried or scheduled later
  return async () => {
    const res = await fetch(uploadUrl, {
      method: "POST",
      headers: { Authorization: token },
      body: formData,
    });
    if (!res.ok) {
      throw new Error(`Chunk upload failed: ${res.statusText}`);
    }
    return res.json();
  };
}
Why multiple params?
You can include:
- `hash`: unique ID
- `index`: correct ordering
- `parentId`: multi-file linking
- `session`: resumable batch uploads
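As a sketch of how the extra parameters could be attached, the variant below adds `parentId` and `session` fields; these two fields are hypothetical extensions, not part of the helper above:
// Hypothetical variant of createChunkRequest with batch/session metadata
function createChunkRequestWithSession(chunk, uploadUrl, token, { parentId, session }) {
  const formData = new FormData();
  formData.append("chunk", chunk.file);
  formData.append("hash", chunk.hash);
  formData.append("index", String(chunk.index));
  formData.append("parentId", parentId); // links chunks of related files
  formData.append("session", session);   // identifies the resumable batch
  return async () => {
    const res = await fetch(uploadUrl, {
      method: "POST",
      headers: { Authorization: token },
      body: formData,
    });
    if (!res.ok) throw new Error(`Chunk upload failed: ${res.statusText}`);
    return res.json();
  };
}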
Step 3: Upload Chunks with Retry & Progress Support
async function uploadChunks(chunkRequests, onProgress, maxRetries = 3) {
  let uploaded = 0;
  for (const sendChunk of chunkRequests) {
    let attempts = 0;
    // Retry each chunk up to maxRetries times before failing the whole upload
    while (true) {
      try {
        await sendChunk();
        break;
      } catch (err) {
        attempts++;
        console.error(`Chunk upload failed (attempt ${attempts})`, err);
        if (attempts >= maxRetries) throw err;
      }
    }
    uploaded++;
    onProgress(uploaded, chunkRequests.length);
  }
}
Optional: Parallel Uploads
async function uploadParallel(chunks, limit = 3) {
  const pool = [];     // uploads currently in flight
  const results = [];  // every upload promise, in original order
  for (const fn of chunks) {
    const promise = fn().then(res => {
      // Remove the finished upload from the active pool
      pool.splice(pool.indexOf(promise), 1);
      return res;
    });
    pool.push(promise);
    results.push(promise);
    // Once the pool is full, wait for one upload to finish before starting another
    if (pool.length >= limit) {
      await Promise.race(pool);
    }
  }
  return Promise.all(results);
}
Parallel uploading can significantly improve throughput on fast networks.
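As a quick usage sketch, assuming `requests` is the array of request functions built in Step 2, the concurrency limit is worth tuning to the network and the server's capacity:
// Hypothetical usage: keep at most 4 chunk uploads in flight at once
async function uploadAllInParallel(requests) {
  const responses = await uploadParallel(requests, 4);
  console.log(`All ${responses.length} chunks uploaded`);
  return responses;
}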
Step 4: Pause & Resume Uploads
let paused = false;

function pauseUpload() {
  paused = true;
}

async function resumeUpload(pendingChunks) {
  paused = false;
  for (const chunk of pendingChunks) {
    // Stop before the next chunk if the user pauses again
    if (paused) break;
    await chunk();
  }
}
Pausing works by setting a flag that the upload loop checks before sending the next chunk; nothing in flight is cancelled, the loop simply stops scheduling new chunks.
Resuming continues with the chunks that have not yet been uploaded.
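One way to know which chunks are still pending is to record completed indexes as the loop runs. A minimal sketch, assuming the `requests` array from Step 2 (this helper is illustrative, not part of the code above):
// Hypothetical bookkeeping so resumeUpload() can be handed only unfinished chunks
const completed = new Set();

async function uploadWithPauseSupport(requests) {
  for (let i = 0; i < requests.length; i++) {
    if (paused) break;              // stop before sending the next chunk
    if (completed.has(i)) continue; // skip chunks that already succeeded
    await requests[i]();
    completed.add(i);
  }
  // Return the request functions that still need to run after a pause
  return requests.filter((_, i) => !completed.has(i));
}
Calling it again after pauseUpload(), or passing the returned list to resumeUpload(), picks up where the upload left off.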
Step 5: Send Merge Request to Server
After all chunks upload successfully:
async function mergeChunks(apiUrl, fileName, token) {
  const res = await fetch(`${apiUrl}/merge`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: token,
    },
    body: JSON.stringify({ fileName }),
  });
  if (!res.ok) throw new Error("Merge failed");
  return res.json();
}
On the backend, `/merge` typically:
- sorts chunks by index
- concatenates them
- removes temp files
Optional Enhancements
Use Real Hashing (MD5 / SHA-1 / SHA-256)
Instead of `file.name + index`, use spark-md5:
import SparkMD5 from "spark-md5";

async function getChunkHash(chunk) {
  // Hash the chunk's raw bytes so the ID reflects content, not file name
  return SparkMD5.ArrayBuffer.hash(await chunk.file.arrayBuffer());
}
This allows:
- deduplication
- resume by hash
- server-side skip checks
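If you prefer to avoid a dependency, the browser's built-in Web Crypto API can produce a SHA-256 digest instead; a small sketch:
// Alternative: SHA-256 via the Web Crypto API (no extra dependency)
async function getChunkHashSHA256(chunk) {
  const buffer = await chunk.file.arrayBuffer();
  const digest = await crypto.subtle.digest("SHA-256", buffer);
  // Convert the ArrayBuffer digest to a hex string
  return Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, "0"))
    .join("");
}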
Support Resumable Uploads
Keep track of uploaded chunks:
const uploaded = await fetch("/uploaded-list?file=" + encodeURIComponent(fileName)).then(r => r.json());
const remaining = allChunks.filter(c => !uploaded.includes(c.hash));
This avoids re-uploading already-complete chunks.
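The `/uploaded-list` endpoint is not shown elsewhere in this guide; one hedged way to implement it, assuming the Express setup and the `${hash}-${index}` chunk naming used in the server-side example below, is:
// Sketch: report which chunk hashes are already stored on disk
app.get("/uploaded-list", (req, res) => {
  const { file } = req.query;
  const hashes = fs.readdirSync(TMP_DIR)
    .filter(name => name.startsWith(file))               // this file's chunks only
    .map(name => name.slice(0, name.lastIndexOf("-")));  // strip the trailing index
  res.json(hashes);
});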
Server-Side Example (Node.js + Express)
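The routes below assume an Express app with multer writing incoming chunks to a temporary directory; a minimal setup might look like this (the directory names and port are illustrative):
// Minimal setup assumed by the routes below
const express = require("express");
const multer = require("multer");
const fs = require("fs");
const path = require("path");

const TMP_DIR = path.join(__dirname, "tmp_chunks");  // raw chunks land here
const UPLOAD_DIR = path.join(__dirname, "uploads");  // merged files end up here
fs.mkdirSync(TMP_DIR, { recursive: true });
fs.mkdirSync(UPLOAD_DIR, { recursive: true });

const app = express();
app.use(express.json());                  // parses the JSON body of /merge
const upload = multer({ dest: TMP_DIR }); // handles the "chunk" form field

app.listen(3000);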
app.post("/upload", upload.single("chunk"), (req, res) => {
const { hash, index } = req.body;
const chunkPath = path.join(TMP_DIR, `${hash}-${index}`);
fs.renameSync(req.file.path, chunkPath);
res.json({ received: true });
});Merge route:
app.post("/merge", async (req, res) => {
const { fileName } = req.body;
const finalPath = path.join(UPLOAD_DIR, fileName);
const chunkFiles = fs.readdirSync(TMP_DIR)
.filter(name => name.startsWith(fileName))
.sort((a, b) => parseInt(a.split("-").pop()) - parseInt(b.split("-").pop()));
const writeStream = fs.createWriteStream(finalPath);
for (const file of chunkFiles) {
writeStream.write(fs.readFileSync(path.join(TMP_DIR, file)));
}
writeStream.end();
res.json({ merged: true });
});Final Notes
Chunked uploading is:
- reliable
- resumable
- scalable
- suitable for multi-GB files
- network-interruption–resistant
With hashing, retries, progress bars, and server-side merging, this becomes a production-grade file-uploading pipeline.
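To tie the pieces together, here is a hedged end-to-end driver built from the helpers defined above (the `/upload` and `/merge` paths match the server-side example; the progress callback is illustrative):
// End-to-end sketch: slice, upload with progress, then merge on the server
async function uploadLargeFile(file, apiUrl, token) {
  // 1. Slice the file into 5 MB chunks (Step 1)
  const chunks = createFileChunks(file);

  // 2. Build one request function per chunk (Step 2)
  const requests = chunks.map(chunk =>
    createChunkRequest(chunk, `${apiUrl}/upload`, token)
  );

  // 3. Upload sequentially with progress reporting (Step 3),
  //    or swap in uploadParallel(requests) for concurrent uploads
  await uploadChunks(requests, (done, total) => {
    console.log(`Uploaded ${done}/${total} chunks`);
  });

  // 4. Ask the server to merge the chunks into the final file (Step 5)
  return mergeChunks(apiUrl, file.name, token);
}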