The stalled transactions on this host published after I deployed the update.
Jul 21 2021
This happens when a recipient list includes an Owners package which has been destroyed. Specifically, we'll exit this section of PhabricatorMetaMTAMemberQuery without the PHID in $package_map, and then fail to return it.
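A minimal sketch of that failure mode, in Python rather than the PHP original (all names here are hypothetical stand-ins, not the actual Phabricator identifiers): a destroyed package never gets an entry in the result map, so the caller silently loses that recipient.

```python
# Illustrative sketch of the bug described above. A destroyed package
# is skipped while building the map, so its PHID is never returned.

def expand_package_recipients(package_phids, live_packages):
    """Map each requested package PHID to its member PHIDs."""
    package_map = {}
    for phid in package_phids:
        package = live_packages.get(phid)
        if package is None:
            # Destroyed package: we exit this branch without adding the
            # PHID to package_map, so it vanishes from the result.
            continue
        package_map[phid] = package["members"]
    return package_map

live = {"PHID-OPKG-live": {"members": ["PHID-USER-a"]}}
result = expand_package_recipients(["PHID-OPKG-live", "PHID-OPKG-gone"], live)
# "PHID-OPKG-gone" is simply absent from the result.
```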
Mar 27 2021
Jul 21 2020
May 4 2020
Nov 8 2019
Oct 17 2019
Jun 22 2019
Jun 20 2019
Jun 19 2019
I think that might be everything? Not entirely sure, but haven't seen any more since the last deploy.
Jun 17 2019
I deployed that last round of things to secure. Not totally confident I got everything, but hopefully we're in better shape now.
uhoh looks like I'm a dum dum
This is doing somewhat better now, but I've still seen:
May 30 2019
D20563 probably fixes mail. I don't think (?) it will fix notifications but haven't hunted that down yet.
May 23 2019
May 22 2019
Per D20533, the major query this UI uses is currently unkeyed (no dateCreated key on transaction tables).
May 21 2019
May 20 2019
May 9 2019
My expectation is that this is effectively resolved by T13277. Since "Autoclose" is no longer a separate permission from "Publish", commits should now always mention + close or never mention + close.
Apr 22 2019
Apr 15 2019
Feb 26 2019
quack
Feb 25 2019
Currently, project tag changes and subscriber changes don't expose data in "transaction.search" because there's no "obviously correct" way to represent these changes in a future-proof way.
To my surprise, it also seems the method does not output any project tag changes that happened on a task?
Feb 24 2019
Thanks for the quick reply!
Haven't played with webhooks yet, so I didn't know/realize it's transaction PHIDs only. "PHID" sounded generic enough that I incorrectly expected any kind of PHID to be accepted as input. So this task is somewhere between "clarify the documentation that only transaction PHIDs are supported" and a low-priority "support other PHIDs" enhancement request.
Feb 23 2019
The phids constraint matches transactions with specific PHIDs (PHID-XACT-...). Usually, you'll have a list of transaction PHIDs when you're responding to a webhook callback, so the phids constraint is most useful for webhooks.
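For illustration, here's a hedged sketch of how a webhook handler might build such a call (the payload shape and request format shown are assumptions for the example; the Conduit console on a given install is the authoritative reference):

```python
import json

# Hypothetical excerpt of a webhook callback body: webhooks deliver the
# PHIDs of the transactions that fired, plus the object they applied to.
webhook_body = {
    "object": {"phid": "PHID-TASK-abc"},
    "transactions": [
        {"phid": "PHID-XACT-TASK-111"},
        {"phid": "PHID-XACT-TASK-222"},
    ],
}

# Build a transaction.search request that looks up exactly those
# transactions via the "phids" constraint.
xact_phids = [t["phid"] for t in webhook_body["transactions"]]
request = {
    "objectIdentifier": webhook_body["object"]["phid"],
    "constraints": {"phids": xact_phids},
}
print(json.dumps(request))
```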
Feb 20 2019
Probably a good idea, but not worth keeping a task around for.
Feb 15 2019
I believe we haven't seen more of this in two years, and "make the worker always exit in less than 2 hours" is a more-or-less reasonable remedy. Getting one extra email every two hours also isn't a huge problem even if we do get this wrong.
Jan 21 2019
I believe D19969 has now fixed this, although I'm not entirely certain: it was never reliably reproducible in production, so there's no way to really test or verify the fix.
Jan 14 2019
This isn't trivial to resolve. The inverse transaction goes through standard "old value / new value" logic, so if we just move the entire "apply inverse transactions" block to later on, the transactions automatically no-op themselves: they do nothing by the time we apply them.
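A toy model of why the deferred inverse no-ops (illustrative Python, not the PHP originals): the old value is read at apply time, so an inverse transaction applied after the main edit finds old equal to new and does nothing.

```python
# Toy "old value / new value" transaction semantics. The old value is
# captured when the transaction applies, not when it is generated.

class Transaction:
    def __init__(self, new_value):
        self.new_value = new_value

    def apply(self, state, key):
        old_value = state.get(key)       # old value read at apply time
        if old_value == self.new_value:
            return False                 # old == new: transaction no-ops
        state[key] = self.new_value
        return True

state = {"projects": ["P1"]}

# Main edit adds project P2; the inverse transaction encodes the same
# final membership but is applied afterward.
main_edit = Transaction(new_value=["P1", "P2"])
inverse_edit = Transaction(new_value=["P1", "P2"])

applied_main = main_edit.apply(state, "projects")        # does work
applied_inverse = inverse_edit.apply(state, "projects")  # no-ops
```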
I made this edit:
Unsurprisingly, I think this is a race condition.
Jan 2 2019
This is the most upsetting bug in the software.
Nov 21 2018
I do have a real-life example where the prose diff engine rendered a suboptimal diff. I was able to fix it by changing the maximum length of the edit distance matrix from 128 to 256.
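The effect of that cap can be sketched like this (a hedged Python toy, not the actual prose-diff code; the constant and fallback are illustrative): inputs longer than the matrix bound can only get a coarse diff, so raising the bound lets longer paragraphs receive character-level treatment.

```python
# Why a cap on the edit-distance matrix changes diff quality: the
# dynamic-programming matrix is O(len(a) * len(b)), so implementations
# bound the input size and fall back to a blunt diff beyond it.

MAXIMUM_MATRIX_SIZE = 128  # raising this (e.g. to 256) widens coverage

def levenshtein(a, b):
    """Classic row-by-row dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def prose_distance(a, b, max_size=MAXIMUM_MATRIX_SIZE):
    if len(a) > max_size or len(b) > max_size:
        # Too large for the matrix: coarse estimate, which renders as a
        # blunt "remove everything / add everything" diff.
        return max(len(a), len(b))
    return levenshtein(a, b)

fine_small = prose_distance("kitten", "sitting")          # fine-grained
coarse = prose_distance("x" * 200, "x" * 199)             # hits the cap
fine_large = prose_distance("x" * 200, "x" * 199, max_size=256)
```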
Oct 5 2018
Jun 5 2018
May 31 2018
See also long chatter in https://discourse.phabricator-community.org/t/request-less-obtrusive-status-updates/1530.
Feb 15 2018
Feb 11 2018
Feb 8 2018
Feb 7 2018
Jan 31 2018
See T13056 for followup.
Jan 30 2018
I just let that run for a while but it finished at some point:
...
OPTIMIZE  Optimizing table "<instance>_audit"."audit_transaction"...
    DONE  Compacted table by 139 GB in 910,219ms.
...
The compaction completed overnight. I'm optimizing the tables now.
Pool is full again, repo is upgrading, edges are compacting on the instance shard.
web004 is deploying now.
web004 died abruptly so I'm going to fix that and deploy these changes at the same time.
Jan 29 2018
Even on our fairly normal data, the effect was a little bit more dramatic than I'd expected:
I'm going to optimize + probe secure001 now and see if any of the tables above shrunk. I'm expecting a very modest effect combined with zero user visible changes in the UI despite throwing away a bunch of data.
Took like 3-ish minutes and did this:
I double checked that our backups are working.
Editing some edges on the new code as a sanity check before I compact things.
(Pushing this to secure, stuff might be funky for a minute while I gently massage the database.)
Jan 27 2018
My plan is to pick those to stable, then compact-edges here on secure, then compact-edges on the affected 130GB instance. There's some value in doing this sooner rather than later because the backups for 130GB of edge data are having some issues. The instance is a free test instance so this isn't a huge concern, but I'd sleep better if it was running smoothly. If you don't run compact-edges I think the worst those changes could really do is cause some kind of temporary display bug with new transactions, so the risk should be pretty small.
Bad news: data still has one reader/writer in the Asana-to-Revision linking implementation. So we can't completely get rid of that yet.
Jan 26 2018
T4675 survives this, because rendering policy controls on the client is very complicated.