Problem: I upload a big file into some high-latency backend, like S3 (or in my case, the Backblaze B2 API). The backend sits under the chunked storage engine, which writes the file as many small individual chunks, each stored as a separate entity, and recombobulates everything on command.
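For context, here is a minimal sketch of what I mean by chunked storage: the file gets split into fixed-size pieces and each piece is written as its own object, to be reassembled later. The names (`store_chunked`, `CHUNK_SIZE`, a plain dict standing in for the backend) are my own illustration, not the actual storage engine's API.

```python
import io

CHUNK_SIZE = 4 * 1024 * 1024  # assumed chunk size of 4 MB, for illustration only

def store_chunked(backend: dict, file_key: str, data: bytes) -> list[str]:
    """Write `data` as separate chunk objects; return the chunk keys."""
    chunk_keys = []
    stream = io.BytesIO(data)
    index = 0
    while True:
        piece = stream.read(CHUNK_SIZE)
        if not piece:
            break
        key = f"{file_key}.chunk.{index}"
        backend[key] = piece  # one stored "entity" per chunk
        chunk_keys.append(key)
        index += 1
    return chunk_keys

def read_back(backend: dict, chunk_keys: list[str]) -> bytes:
    """Recombobulate the original file from its chunks."""
    return b"".join(backend[key] for key in chunk_keys)
```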
Now, I want to delete that file. In my case, I have a 200 MB file I want to delete, which has been chunked into several files. I click 'Delete' in the UI, and this pops up a modal dialog.
The moment you click Delete, the UI greys out, and the UI thread you're using begins issuing synchronous deleteFile commands, one by one, for each individual file chunk. In my case this took so long that even after I could see all the chunks had been deleted from my bucket, the page still hadn't redirected. Naturally this is going to take a long time, especially because you can upload arbitrarily sized files, which is a nice feature!
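A rough sketch of the behavior as I understand it: the thread handling the Delete click walks every chunk and deletes it synchronously, so the user waits roughly (number of chunks) × (backend round-trip latency). The `delete_object` stand-in and the 400 ms latency figure below are assumptions for illustration, not measurements of the real backend.

```python
import time

BACKEND_LATENCY_S = 0.4  # assumed ~400 ms of HTTP round trips per delete

def delete_object(key: str) -> None:
    """Stand-in for one synchronous per-chunk delete against the backend."""
    time.sleep(BACKEND_LATENCY_S)

def delete_file_synchronously(chunk_keys: list[str]) -> float:
    """Delete chunks one by one on the calling thread; return seconds spent."""
    start = time.monotonic()
    for key in chunk_keys:
        delete_object(key)
    return time.monotonic() - start

# A 200 MB file in 4 MB chunks is about 50 chunks, so roughly
# 50 * 0.4 s = 20 s of blocking before the dialog can even redirect.
```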
Steps to reproduce: Upload a large file (a few hundred megabytes) into S3 (drag and drop in the UI, etc.). Then try to delete it, and go get coffee while the UI thread issues dozens of deletion commands, one per chunk, each requiring several HTTP requests.
Solution: The best thing to do, as @epriestley noted, is to have the Delete button simply shuffle the per-chunk delete commands off into a queue for the daemons to handle.
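A minimal sketch of that queued approach, assuming an in-process queue and a worker thread as stand-ins for the real daemon/task queue: the Delete handler only enqueues a task and returns immediately, and a background worker drains the queue and deletes the chunks off the request path.

```python
import queue
import threading

deletion_queue: "queue.Queue[list[str]]" = queue.Queue()

def handle_delete_click(chunk_keys: list[str]) -> None:
    """What the Delete button does now: enqueue the work and return right away."""
    deletion_queue.put(chunk_keys)

def deletion_daemon(backend: dict) -> None:
    """Background worker: pull tasks and delete each chunk outside the UI thread."""
    while True:
        chunk_keys = deletion_queue.get()
        for key in chunk_keys:
            backend.pop(key, None)  # stand-in for the real per-chunk delete call
        deletion_queue.task_done()

# Started once at boot (illustrative), so the UI never blocks on chunk deletes:
# threading.Thread(target=deletion_daemon, args=(backend,), daemon=True).start()
```

The key design point is simply that the expensive per-chunk HTTP work moves behind the queue, so the modal can redirect as soon as the task is enqueued.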