Running Phabricator services on multiple hosts.
Jun 1 2021
Since observed repositories version differently today, this strategy won't work -- but I can't come up with any valid reason to ever put a repository into a "write maintenance" mode anyway. I do imagine making observed repositories "replay" fetches into the push log (as though they were pushes) in the future, but that still won't make "write maintenance" on an observed repository meaningful, so it seems fine to just prevent putting non-hosted repositories into this mode.
A minor issue on the way to this is that calling synchronizeWorkingCopyBeforeWrite() with an omnipotent viewer will write to the WorkingCopyVersion table with a null userPHID, which shows as "Unknown Object" in the UI.
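One way to sidestep the rendering problem, as an illustrative sketch (the function and the fallback label are assumptions, not Phabricator's actual code):

```python
def render_writing_user(user_phid, handles):
    """Render the user who last wrote a WorkingCopyVersion row.

    Synchronization performed with an omnipotent viewer records a null
    userPHID; render an explicit label for it rather than letting the
    null value fall through to the UI as "Unknown Object".
    """
    if user_phid is None:
        return 'Synchronization'  # hypothetical fallback label
    return handles[user_phid]
```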
A useful maintenance operation for staging area repositories is to remove out-of-date staging refs: refs for old diffs which have already landed. This is of particular importance for large installs, since Git has a significant per-ref overhead for many operations prior to protocol v2 (the initial ref advertisement enumerates every ref). By the time a repository has ~50K refs, interacting with it in basically any way becomes slow and cumbersome.
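For concreteness, a hedged sketch of such a cleanup pass (the landed() predicate is an assumption -- in practice it might ask Conduit whether the corresponding revision has closed; arc pushes staging tags named phabricator/base/<diff-id> and phabricator/diff/<diff-id>):

```python
import subprocess


def list_staging_refs(staging_url):
    """Enumerate staging tags on the remote staging repository."""
    output = subprocess.check_output(
        ['git', 'ls-remote', staging_url, 'refs/tags/phabricator/*'],
        text=True)
    for line in output.splitlines():
        commit_hash, ref = line.split('\t')
        yield ref


def delete_staging_refs(staging_url, refs, batch_size=100):
    """Delete stale refs in batches: a ":<ref>" refspec pushes nothing
    to the ref, which deletes it on the remote."""
    refs = list(refs)
    for start in range(0, len(refs), batch_size):
        refspecs = [':' + ref for ref in refs[start:start + batch_size]]
        subprocess.check_call(['git', 'push', staging_url] + refspecs)


# Usage, with a caller-supplied landed() predicate:
#   stale = [ref for ref in list_staging_refs(url) if landed(ref)]
#   delete_staging_refs(url, stale)
```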
Sep 3 2019
Also remaining is to extend this behavior to the HTTP pathway (and to Mercurial/SVN, eventually).

One of the retry rules is (see the sketch below):

- if we have already retried 3 times, do not retry;

With this, we'll reduce silly client-visible behavior where you request /tourtle.git instead of /turtle.git and the server seems confused...
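Not Phabricator's actual routing code -- a minimal sketch of a bounded retry pass, assuming a caller-supplied perform_request(node) callable and a RequestFailed error type (both hypothetical):

```python
import random


class RequestFailed(Exception):
    """A single node failed to serve the request (assumed error type)."""


def route_request(nodes, perform_request, max_retries=3):
    """Try a request against candidate nodes, retrying on failure.

    Stops once "max_retries" retries have been spent, even if untried
    nodes remain, so one bad request can't hammer the whole cluster.
    """
    candidates = list(nodes)
    random.shuffle(candidates)

    retries = 0
    last_error = None
    for node in candidates:
        try:
            return perform_request(node)
        except RequestFailed as error:
            last_error = error
            # If we have already retried "max_retries" times, do not retry.
            if retries >= max_retries:
                break
            retries += 1

    raise last_error if last_error else RequestFailed('No nodes available.')
```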
Aug 29 2019
Not necessarily applicable in the general case, but see also T13393.
Jul 12 2019
I assume this is being done already in the Phacility cluster on some level when repositories get really large, but I'm not particularly sure how to perform this migration.
Jan 31 2019
See PHI1015 for a slightly meatier explanation of this issue.
Dec 13 2018
Yeah, that's T10769.
I am seeing a similar issue on our install:
Sep 6 2018
See PHI860 and T13111. In the future, repository nodes may automatically gc/prune/repack. If they do, it may make sense to sort them to the bottom of the list so traffic is sent to them only if no other nodes are available, in order to minimize the impact that gc/prune/repack have on other activity.
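A minimal sketch of that ordering (the node objects and the is_under_maintenance() predicate are assumptions, not Phabricator's API):

```python
import random


def rank_nodes(nodes, is_under_maintenance):
    """Order candidate nodes for serving a request: shuffle to spread
    load, then stably sort nodes busy with gc/prune/repack to the end,
    so they receive traffic only if no other node is available."""
    candidates = list(nodes)
    random.shuffle(candidates)
    # sorted() is stable and False orders before True, so healthy nodes
    # keep their shuffled order, ahead of nodes in maintenance.
    return sorted(candidates, key=is_under_maintenance)
```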