<!DOCTYPE HTML>
<html lang="en-US">
<head>
  <meta charset="UTF-8">
  <title>test upload by chunk</title>
</head>
<body>
  <input type="file" id="f" />
  <script src="script.js"></script>
</body>
</html>
(function() {
  var f = document.getElementById('f');

  if (f.files.length)
    processFile();

  f.addEventListener('change', processFile, false);

  function processFile(e) {
    var file = f.files[0];
    var size = file.size;
    var sliceSize = 256;
    var start = 0;

    setTimeout(loop, 1);

    function loop() {
      var end = start + sliceSize;
      if (size - end < 0) {
        end = size;
      }
      var s = slice(file, start, end);
      send(s, start, end);
      if (end < size) {
        start += sliceSize;
        setTimeout(loop, 1);
      }
    }
  }

  function send(piece, start, end) {
    var formdata = new FormData();
    var xhr = new XMLHttpRequest();

    xhr.open('POST', '/upload', true);

    formdata.append('start', start);
    formdata.append('end', end);
    formdata.append('file', piece);

    xhr.send(formdata);
  }

  /**
   * Formalize file.slice
   */
  function slice(file, start, end) {
    var slice = file.mozSlice ? file.mozSlice :
                file.webkitSlice ? file.webkitSlice :
                file.slice ? file.slice : noop;

    return slice.bind(file)(start, end);
  }

  function noop() {}
})();
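The boundary arithmetic inside `loop` (advance by `sliceSize`, clamp the last chunk to the file size) can be sketched as a pure helper, which makes it easy to check in isolation. `chunkRanges` is a hypothetical name, not part of the gist:

```javascript
// Sketch: compute the [start, end) byte ranges the loop above would produce
// for a file of `size` bytes, clamping the final chunk to the file size.
function chunkRanges(size, sliceSize) {
  var ranges = [];
  for (var start = 0; start < size; start += sliceSize) {
    var end = Math.min(start + sliceSize, size);
    ranges.push([start, end]);
  }
  return ranges;
}

// e.g. a 700-byte file with 256-byte slices:
// chunkRanges(700, 256) → [[0, 256], [256, 512], [512, 700]]
```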
marmz: Sending the data itself is asynchronous: the upload starts shortly after the xhr.send(...) call while the JS keeps running.
If you send the chunks in a for loop, all chunks are sent at once, which opens N streams (N = file.size / chunkSize). This floods the server with all the pieces at once, making the whole thing pointless.
The main reason for chunked upload is that the server does not need to hold the whole file in memory (which can also be avoided server-side by streaming the data straight to a file); the second reason is to make large uploads resumable if the TCP stream breaks. Opening more TCP streams to the same server at the same time rarely helps, so it is pointless.
Starting chunks at a fixed interval (as in this example) is also not a good solution. You should send each chunk only after the previous one has finished. To achieve this, use the load event to move on to the next chunk, and the other events to report errors to the user:
xhr.addEventListener("progress", updateProgress);
xhr.addEventListener("load", transferComplete); // Handler has to send the next chunk
xhr.addEventListener("error", transferFailed);
xhr.addEventListener("abort", transferCanceled);
See: https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Using_XMLHttpRequest
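The "send the next chunk from the load handler" idea can be sketched with the transport injected as a callback, so the chaining logic runs outside a browser too. `uploadChunks` and `sendChunk` are hypothetical names; in a real page, `sendChunk` would wrap the XHR and invoke `done` from the "load" listener:

```javascript
// Sketch: strictly sequential chunked upload. `sendChunk(start, end, done)`
// is the injected transport; the next chunk is requested only after `done`
// fires, which in the browser would be the XHR "load" event.
function uploadChunks(size, sliceSize, sendChunk, onComplete) {
  function loop(start) {
    if (start >= size) { onComplete(); return; }
    var end = Math.min(start + sliceSize, size);
    sendChunk(start, end, function () { loop(end); });
  }
  loop(0);
}

// Usage with a fake transport that just records the ranges:
var sent = [];
uploadChunks(700, 256, function (start, end, done) {
  sent.push([start, end]);
  done(); // browser version: xhr.addEventListener("load", done)
}, function () { /* all chunks uploaded */ });
// sent is now [[0, 256], [256, 512], [512, 700]]
```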
Works well, nice example! :)
Thanks so much for this! As noted above, the original can overload the server, so here is a variant that only sends the next chunk once the current one has completed:
function send(file, start, end) {
  var formdata = new FormData();
  var xhr = new XMLHttpRequest();

  if (size - end < 0) { // Uses a closure on size here; you could pass this as a param
    end = size;
  }

  if (end < size) {
    xhr.onreadystatechange = function () {
      if (xhr.readyState == XMLHttpRequest.DONE) {
        console.log('Done Sending Chunk');
        send(file, start + sliceSize, start + (sliceSize * 2));
      }
    };
  } else {
    console.log('Upload complete');
  }

  xhr.open('POST', '/uploadchunk', true);

  var slicedPart = slice(file, start, end);
  formdata.append('start', start);
  formdata.append('end', end);
  formdata.append('file', slicedPart);

  console.log('Sending Chunk (Start - End): ' + start + ' ' + end);
  xhr.send(formdata);
}
You can invoke it like so:
var file = $files[0]; // This is your file object
var size = file.size;
var sliceSize = 10000000; // Send 10MB Chunks
var start = 0;
console.log('Sending File of Size: ' + size);
send(file, 0, sliceSize);
I'm using this to send large files to a server without overloading its memory. I found that 10MB chunks worked quite well; the 256-byte chunks in the original were far too slow.
@Dave3of5 Do you know how to apply this to multiple files?
@inaod568 The code I've written is only designed for a single file upload.
To get it to work with multiple files, you would have to send the filename with the formdata and then invoke send for each file, starting at 0. To keep the memory footprint down on the server, you should send the chunks one at a time, which my example does. So you would need to wait until the send is fully complete before moving on to the next file.
Probably the best way is to wrap the send in a promise. That's going to be a bit complicated, given that send only completes when all the Ajax requests have completed.
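One way to sketch that promise wrapper (hedged: `sendFile`, `sendAllFiles`, and the `transport` callback are hypothetical names, and a real transport would resolve its Promise from the XHR "load" event):

```javascript
// Sketch: wrap a per-chunk transport in a Promise chain and upload several
// files one after another. `transport(file, start, end)` must return a
// Promise (in the browser, one that resolves on the XHR "load" event).
function sendFile(file, sliceSize, transport) {
  var p = Promise.resolve();
  for (var start = 0; start < file.size; start += sliceSize) {
    (function (s) {
      var e = Math.min(s + sliceSize, file.size);
      p = p.then(function () { return transport(file, s, e); }); // chunks stay in order
    })(start);
  }
  return p;
}

async function sendAllFiles(files, sliceSize, transport) {
  for (var i = 0; i < files.length; i++) {
    await sendFile(files[i], sliceSize, transport); // one file at a time
  }
}
```

A fake transport (one that records the ranges and resolves immediately) is enough to verify the ordering before wiring in real XHRs.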
Hello,
How can I merge the chunks back into a single file in PHP?
I have this snippet, but it doesn't seem to work.
$num = $_POST['num']; // 1,2,3,4
$tmp_name = $_FILES['upload']['tmp_name'];
$upload = $_SERVER["DOCUMENT_ROOT"] . '/chunks/upload/';
$filename = $_FILES['upload']['name'];
$new_file_name = 'chunk-' . $num . $filename; // pattern: chunk-1myfilename.jpg, chunk-2myfilename.jpg,
$target_file = $upload . $new_file_name;

move_uploaded_file( $tmp_name, $target_file );

for ( $i = 1; $i <= $total_chunks; $i++ ) {
    $file = fopen( 'http://example/chunks/upload/' . 'chunk-' . $i . $filename, 'rb' );
    $buff = fread( $file, 1048576 );
    fclose( $file );

    $final = fopen( $upload . $filename, 'ab' );
    $write = fwrite( $final, $buff );
    fclose( $final );

    unlink( $upload . 'chunk-' . $i . $filename );
}
Thanks.
Glen
Nice, working. Just wondering: does this need to be handled on the backend as well? Do I need to combine those chunks, or is it a multipart upload? I see the form to which you are attaching the data and sending. Looking forward to some explanation.
@thapliyalshivam Does it need to be handled on the backend, or will the frontend alone work?
Hello guys,
The chunks arrive fine, but I can't re-create the file on the backend. I'm using PHP and trying to upload an image. Any tips?
Nice function, but in my tests the setTimeout with 1 ms is too fast and the chunks DO NOT ARRIVE IN ORDER at my server.
I had to raise it to 50 ms (40 ms was still failing). I guess this really depends on your connectivity; one could also increase the slice size, I suppose.
So just be aware of that when you implement this function in your code...
As noted by others, the snippet doesn't take into account whether the last request has completed or not. It also does not handle network failures. I recommend using an off-the-shelf solution or putting in some work on this proof of concept to get something usable.
The snippet from this comment https://gist.github.com/shiawuen/1534477#gistcomment-3017438 fixes it so that it doesn't overload the server. The server has to stitch the file back together for each piece that is sent.
I recommend using a library to handle this like http://www.resumablejs.com/
Hi, I hope to remember your question until I have some free time to post a solution that I wrote myself in PHP. It works with test files up to 7 GB, and the whole process (chunk and merge) is handled server-side in PHP.
Hi @woodchucker, can you let me know about your solution? Right now I can upload files up to 250M (5M records).
Thanks.
I opened a new repo to share my solution: https://github.com/woodchucker/SplitMergePHP.git
Hello,
I have reviewed your work, thank you, but a more practical solution is needed: something consisting of only 3 files, like
index.html
js.js
upload.php
where the chunk-size value can be adjusted and which works with XMLHttpRequest. Can you suggest workable code that can send 7 GB files without extra dependencies such as Composer?
Nice job :)
Can I ask why you send the chunks in a setTimeout() loop instead of a "common" for() iterator? Maybe there is some reason I don't get..??