Safari has an upload problem; we found a workaround.

# Safari Upload Issue

tl;dr: Safari on OS X has an issue that causes uploads to silently fail. We implemented a workaround in EvaporateJS.

### The Issue

At videopixie we use EvaporateJS to upload a lot of large files (20GB+) directly to AWS S3, and we do a lot of logging to make sure it's all running smoothly.

We started to see an increase in failed uploads. The logs revealed a few interesting things:

  1. The failures occurred when the progress of an XMLHttpRequest (xhr) silently stalled.
  2. When parts stalled, their last-reported progress was 131072 bytes (2^17).
  3. The failure was only ever observed in Safari on OS X; no other browsers were affected.
  4. We have anecdotal evidence that this also affects resumable.js.

As background: EvaporateJS splits a file into 5MB chunks using Blob.slice(), and then sends each of those blobs via xhr.send(). It's not unusual for the .send() to fail, but when it does it should call either its 'onreadystatechange' or its 'onerror' callback. In this case the failure was silent: no callbacks were fired. This has been logged as a Safari bug, number 16136393.
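
For readers unfamiliar with the pattern, here's a minimal sketch of that chunked-upload flow. This is illustrative, not EvaporateJS itself: the part size, `uploadUrl`, and logging are placeholders, and the real library does signed S3 multipart requests.

```js
// Hypothetical sketch of the Blob.slice() + xhr.send() pattern described above.
var PART_SIZE = 5 * 1024 * 1024; // 5MB parts

function uploadPart(file, partIndex, uploadUrl) {
  var start = partIndex * PART_SIZE;
  var blob = file.slice(start, Math.min(start + PART_SIZE, file.size));
  var xhr = new XMLHttpRequest();

  xhr.upload.onprogress = function (e) {
    // In the failing case this stops firing (last value seen: 131072 bytes)...
    console.log('part', partIndex, 'loaded', e.loaded, 'of', e.total);
  };
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
      console.log('part', partIndex, 'finished with status', xhr.status);
    }
  };
  xhr.onerror = function () {
    // ...and neither this nor onreadystatechange is ever called.
    console.log('part', partIndex, 'errored');
  };

  xhr.open('PUT', uploadUrl);
  xhr.send(blob);
  return xhr;
}
```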

### The Workaround

We've implemented a workaround in EvaporateJS. Every 2 minutes the progress of every part is checked. If the 'bytes loaded' of a part is not greater than the value observed at the previous check, the part is aborted. The abort causes the error detection to kick in, and the part is then automatically restarted. It's working well in our tests, and it's going into production today. See the code here: https://github.com/TTLabs/EvaporateJS/blob/master/evaporate.js#L516
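
Very roughly, the watchdog is an interval timer that compares each part's current progress against the value it saw at the previous check. The names and the `parts` structure below are illustrative, not the actual EvaporateJS internals; see the linked source for the real implementation.

```js
// Hypothetical sketch: each in-flight part is assumed to be tracked as
// { inFlight, xhr, loaded, lastCheckedLoaded, restart }.
var PROGRESS_CHECK_INTERVAL = 2 * 60 * 1000; // 2 minutes

setInterval(function () {
  parts.forEach(function (part) {
    if (part.inFlight && part.loaded <= part.lastCheckedLoaded) {
      // No progress since the last check: abort so the normal error
      // handling kicks in, then restart the part.
      part.xhr.abort();
      part.restart();
    }
    part.lastCheckedLoaded = part.loaded;
  });
}, PROGRESS_CHECK_INTERVAL);
```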

### The Gripe

We filed the bug with Apple (number 16136393), including steps to recreate it, and they have acknowledged it. They asked that we provide a public demonstration page where they can see it in action [1]. I get why they want that; I would want that if I were the engineer assigned to look at this. But my issue is that we're a two-man team with a very long todo list, and Apple has $137,000,000,000 in the bank. Why are they asking us to make this page, when they could hire more engineers and have them work on it? Safari has millions of users who are probably suffering from this. Why is a fix for them gated by my time? Of course, if Apple needs more details on how to recreate this bug I'd be very happy to spend time helping; I just don't see why I should have to do it all.

[1] Of course, they're very welcome to test it at videopixie.com, which they can do without signing up. But then their test would be wrapped up in the particulars of how AWS S3 works, how EvaporateJS works, and how videopixie.com works, and that's way more complex than they need. They just need a minimalist test rig, and it should be part of their ongoing test suite.

@bikeath1337

We at Krossover Intelligence think this is a great project and we're banging the s@#t out of it as I write, with great results! The latest three PRs are included in that intensive testing. We're actually focusing on memory usage, concurrency, and what happens when the browser goes offline for a long time. This is how we found the memory leak, which, _incidentally, is related to the workaround for this report_.

Anyway, if you can still reproduce this, I would like to help. Having reviewed the project code, I'd like to see whether the three latest PR enhancements do away with this problem.

Regarding our contributions, given that AWS MD5 checksums work beautifully, we're committed to making EvaporateJS integral to our video upload pipeline.

We want to give back to the community, so please let me know how we can expedite review and integration of these suggestions.

And thanks again!
