Ref T7149. This is just a proof-of-concept, kept around in case we ever want to return to it; it didn't pan out.
This computes the SHA1 hashes of files in JS before we upload them. This would let us do resumes in a general way and skip uploading data when the data is already known to the server.
However, the hash rate on my machine is only about 3MB/s, which implies a 6 minute hashing step at the beginning of uploading a 1GB file. This feels too long to me. The browser is also fairly unresponsive during this time.
We can do large file resumes without needing SHA1 by finding partially uploaded files by the same author with the same length and name, and prompting the user to resume them. I think this will be a better experience overall than spending 6 minutes per gig on the requisite pre-hashing.
The other thing we get -- skipping data uploads for small files -- doesn't seem very important to me.
So I'm just going to throw this away and not pursue it. It does technically work fine.