SyncWorkflow also depends on creatorPHID to synchronize the initial administrator account.
All Stories
Sep 1 2019
In D18651#227868, @epriestley wrote: If users are actually building changes that depend on one another's unlanded changes, we could revisit this rule once we're more confident the simpler cases work.
Aug 31 2019
The primary Week 34 / Week 35 deployment has completed without apparent issues. I'm going to deploy some followup changes for T13393 later, but it looks like we're out of the woods on the bulk of outbound changes.
Clicking "Pay Now" from landing page fatals in "PhortuneCartCheckoutController.php:104"; "Call to undefined method PhortuneCartCheckoutController::buildCartContentTable()"
Pacts have a bad URI for billing accounts on the "Billing" tab.
Instances also have a bad URI for billing accounts on the "Billing" tab. Maybe the handle is using the wrong URI?
The sync worker is failing on new instance launch for lack of credentials.
bin/host restart does not start non-daemon services.
Aug 30 2019
Provisioning was once close-ish to automated. Is this close enough to automate?
I think this leaves us with:
Adjacent is the older instances.queryinstances API method. This is still used by service synchronization.
It is also used to cache InstancesManageCapability::CAPABILITY, but this can easily be cached in the request cache instead.
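As a rough sketch of what the request-cache approach might look like (PhabricatorCaches::getRequestCache() is a real API, but the cache key and the capability check below are illustrative):

```php
<?php

// Compute the viewer's manage capability once per request and keep it in
// the request cache, instead of round-tripping through the older
// "instances.queryinstances" method. "computeManageCapability()" is a
// stand-in for whatever real check backs
// InstancesManageCapability::CAPABILITY.
function viewerHasInstancesManageCapability(PhabricatorUser $viewer) {
  $cache = PhabricatorCaches::getRequestCache();
  $cache_key = 'instances.manage('.$viewer->getPHID().')';

  $result = $cache->getKey($cache_key);
  if ($result === null) {
    $result = computeManageCapability($viewer);
    $cache->setKey($cache_key, $result);
  }

  return $result;
}
```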
The operation in PHI1329 (against a ~8GB export) went through cleanly. Remaining work here is:
The "Instance" almanac service type can be destroyed.
Aug 29 2019
Can we get rid of the instance-specific services completely now, after changes connected to T11413?
Mostly-promising answers on much of the rest of this:
For posterity, bleugh:
I have a patch for this, but I'm not thrilled about the retry model. Maybe better would be for the caller to retry the actual upload operation (which will automatically resume) and bail out while retaining the temporary file. Even if we retry on 504, we lose a lot of progress if there's a service interruption longer than we're willing to sit in a retry loop.
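A minimal sketch of that caller-driven model, with hypothetical operation names (only HTTPFutureHTTPResponseStatus is a real class):

```php
<?php

// The caller retries the whole upload operation; each attempt is assumed
// to resume automatically from wherever the previous attempt stopped. On
// a non-retryable failure we bail out but keep the temporary file, so a
// later invocation can resume instead of starting over.
$max_retries = 3;
for ($attempt = 1; ; $attempt++) {
  try {
    uploadChunks($temporary_file); // hypothetical resumable operation
    break;
  } catch (HTTPFutureHTTPResponseStatus $ex) {
    if ($ex->getStatusCode() != 504 || $attempt >= $max_retries) {
      // Retain $temporary_file for a later resume, then give up.
      throw $ex;
    }
  }
}
```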
I'm hoping this is a reasonable excuse to find a way forward [on >2GB downloads] here.
Also an example of how "per-push notification" is implemented in GitHub events/webhooks:
I actually found my way here from Discourse, where the need for this was discussed:
This is something of an aside, but it would be nice to formalize PhutilConsoleProgressBar into a generic progress sink. A lot of bin/storage dump-related code could use it, bin/host download obviously could, and we likely have use cases for reporting progress to the web via the API, but PhutilConsoleProgressBar lacks the indirection layer to make this work cleanly.
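One possible shape for that indirection layer, with an invented interface name (PhutilConsoleProgressBar and its setTotal()/update()/done() methods are real; everything else here is illustrative):

```php
<?php

// Callers report progress against an abstract sink; the console
// implementation delegates to PhutilConsoleProgressBar.
interface PhutilProgressSink {
  public function setTotalWork($total);
  public function didMakeProgress($amount);
  public function didCompleteWork();
}

final class PhutilConsoleProgressSink implements PhutilProgressSink {
  private $bar;

  public function __construct() {
    $this->bar = new PhutilConsoleProgressBar();
  }

  public function setTotalWork($total) {
    $this->bar->setTotal($total);
  }

  public function didMakeProgress($amount) {
    $this->bar->update($amount);
  }

  public function didCompleteWork() {
    $this->bar->done();
  }
}
```

A web-facing implementation could then report the same progress events over the API without the calling code knowing the difference.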
My tentative plan is to add methods to HTTPSFuture for sending the output to disk, then go down the new parser pathway only if we're writing to disk. This should limit the amount of surface area exposed on the new parser.
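From the caller's side, that plan might look something like this; "setDownloadPath()" is the assumed new method, not a confirmed API, and the URI is illustrative:

```php
<?php

$uri = 'https://files.example.phacility.com/export.tar.gz';

$future = new HTTPSFuture($uri);

// Assumed new method: when a download path is set, the response body
// streams to disk through the new parser pathway instead of buffering
// the whole thing in memory.
$future->setDownloadPath('/tmp/export.tar.gz');

list($status, $body, $headers) = $future->resolve();
```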
This task covers a lot of ground and many of the issues have been resolved. There are two remaining issues which are more narrowly covered by these followups:
Is the 2GB HTTP stuff in T12907 realistic to fix?
Not necessarily applicable in the general case, but see also T13393.
Aug 28 2019
According to curl/symbols-in-versions (this is a text file in the repository):
A possible issue is that letting cURL pick a protocol might lead to it selecting HTTP/1.0 in some cases (how/when could it possibly do this? Only by hard-coding known-broken hostnames, I think?), and forcing it to use HTTP/1.1 could break those cases, so maybe I'll go spelunking here. I also can't immediately find a date of introduction for CURL_HTTP_VERSION_1_1 from the documentation.
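For reference, the knob in question from PHP's cURL bindings (these constants are real):

```php
<?php

// CURLOPT_HTTP_VERSION controls protocol selection.
// CURL_HTTP_VERSION_NONE lets cURL pick a version itself;
// CURL_HTTP_VERSION_1_1 forces HTTP/1.1, which rules out a fallback to
// HTTP/1.0 but could break any endpoint that genuinely requires 1.0.
$curl = curl_init('https://example.phacility.com/');
curl_setopt($curl, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_1);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($curl);
curl_close($curl);
```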
Another variation of this is "add more documentation", although I think the pattern here is more rarely a "problem domain / solution domain mismatch" issue and more often a "human communication" issue, usually with one of these two templates:
An adjacent issue is that PhabricatorMarkupCache is not currently marked as having cache persistence.