Description
See T10828 for details.
Related Objects
- Mentioned In
  - D17723: Switch File deletion to use ModularTransactions
- Mentioned Here
  - T10828: File deletion should be queued up and run by daemons
Event Timeline
@epriestley, should it be possible to un-delete a File, the same way it's possible to archive and then un-archive a Paste? The only benefit I can see is for an emergency where someone accidentally kicks off deletion of a bunch of files: they could stop the daemons, un-delete the files, and then restart the daemons (instead of flushing the worker queue). I actually mostly coded it this way already by following along with PhabricatorPasteStatusTransaction.
Or maybe this would help in some hypothetical scenario where a batch of deletes against a chunk store like S3 ends up hitting an S3 rate limit or otherwise interfering with normal S3 operations?
My gut is to wait for users to hit those cases before building solutions for them since I think it's unlikely any of that will ever turn into a real problem.
In normal cases, I think the daemons will probably nuke stuff too fast for users to reasonably have a chance to stop them.
If you queue up a lot of stuff with the API in the future, you could stop the daemons and come ask us for help: even without any kind of "cancel delete" built, we could walk you through clearing the flags in the DB and using bin/worker cancel --class to purge the queue.
It looks like the S3 rate limits start around 100 DELETE/sec and 300 GET/sec, which seem pretty hard to hit for realistic installs that aren't doing something goofy.
(You could structure the transaction like the Paste Status transaction and have validation throw when the new value is "false", for example throw new Exception(pht('You can not cancel deletion of a file.'));, to leave room for that operation eventually.)
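For illustration, a minimal sketch of that validation, modeled on PhabricatorPasteStatusTransaction; the class names, the TRANSACTIONTYPE constant, and the isDeleted flag on the File object are assumptions for this sketch, not existing code:

```
// Sketch only: class names, the transaction constant, and the
// isDeleted flag are illustrative assumptions.
final class PhabricatorFileDeleteTransaction
  extends PhabricatorFileTransactionType {

  const TRANSACTIONTYPE = 'file.delete';

  public function generateOldValue($object) {
    return (bool)$object->getIsDeleted();
  }

  public function applyInternalEffects($object, $value) {
    // Only flag the file here; the daemons perform the actual
    // destruction of the chunks later.
    $object->setIsDeleted((int)$value);
  }

  public function validateTransactions($object, array $xactions) {
    $errors = array();

    foreach ($xactions as $xaction) {
      // Only the "delete" direction is supported for now; rejecting
      // "false" here leaves room for a real cancel operation later.
      if (!$xaction->getNewValue()) {
        throw new Exception(
          pht('You can not cancel deletion of a file.'));
      }
    }

    return $errors;
  }
}
```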
We'd also have to add locking to prevent a race like this:
- Daemon loads the file.
- Daemon starts deleting chunks out of it.
- User cancels the deletion.
- We tell the user that we cancelled the deletion 100% successfully and that they don't have to worry: their file is safe.
- Daemon completes the deletion and the file is gone forever.
That's not too hard, but it's extra complexity and a pain to test.
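If we did build cancellation, one way to close that race would be a per-file lock held by the deletion worker and taken by the cancel path before it clears the flag. A minimal sketch, assuming PhabricatorGlobalLock and PhabricatorDestructionEngine (which exist) plus a hypothetical getIsDeleted() flag and lock name:

```
// Sketch of the worker side (inside its doWork() method, with $file
// already loaded). The cancel path would take the same lock before
// clearing the flag, so it can never report success after destruction
// has begun. The lock name and getIsDeleted() are assumptions.
$lock_name = 'file.delete('.$file->getPHID().')';
$lock = PhabricatorGlobalLock::newLock($lock_name);
$lock->lock(15);

try {
  // Re-read the flag under the lock: if a cancel won the race, stop
  // before touching any chunks.
  $file->reload();
  if (!$file->getIsDeleted()) {
    return;
  }

  // Destroy the file (and its chunks) only while holding the lock.
  id(new PhabricatorDestructionEngine())->destroyObject($file);
} finally {
  $lock->unlock();
}
```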