Provisioning was once close-ish to automated. Is this close enough to automate?
Aug 30 2019
I think this leaves us with:
Adjacent is the older instances.queryinstances API method. This is still used by service synchronization.
It is also used to cache InstancesManageCapability::CAPABILITY, but that can easily be cached in the request cache instead.
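To illustrate the request-cache pattern mentioned above, here's a minimal Python sketch (not Phabricator's actual PHP code; the `RequestCache` class and `load_capability` helper are hypothetical stand-ins for the real capability lookup):

```python
class RequestCache:
    """A cache that lives only for the duration of a single request.

    Sketch of the "request cache" pattern: expensive lookups are memoized
    per-request, so repeated checks of the same capability don't re-query
    the underlying service.
    """

    def __init__(self):
        self._data = {}

    def get_or_set(self, key, compute):
        # Compute the value only on the first access within this request.
        if key not in self._data:
            self._data[key] = compute()
        return self._data[key]


# Usage: repeated capability checks hit the cache after the first lookup.
calls = []

def load_capability():
    calls.append(1)  # stands in for a call to the instance service
    return "instances.manage"

cache = RequestCache()
first = cache.get_or_set("manage-capability", load_capability)
second = cache.get_or_set("manage-capability", load_capability)
assert first == second == "instances.manage"
assert len(calls) == 1  # the underlying lookup ran only once
```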
The operation in PHI1329 (against a ~8GB export) went through cleanly. Remaining work here is:
The "Instance" almanac service type can be destroyed.
Aug 29 2019
Can we get rid of the instance-specific services completely now, after changes connected to T11413?
Mostly-promising answers on much of the rest of this:
For posterity, bleugh:
I have a patch for this, but I'm not thrilled about the retry model. Maybe it would be better for the caller to retry the actual upload operation (which will automatically resume), and to bail out while retaining the temporary file. Even if we retry on 504, we lose a lot of progress if a service interruption lasts longer than we're willing to sit in a retry loop.
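The caller-retries model can be simulated in a short Python sketch (this is an invented illustration, not the actual patch; `FlakyServer` and the chunk protocol are hypothetical stand-ins). The key property is that the upload resumes from the server's committed offset, so a retry loses no progress:

```python
class TransientError(Exception):
    pass

class FlakyServer:
    """Simulated server accepting chunked uploads, failing intermittently."""
    def __init__(self, fail_times):
        self.received = bytearray()
        self.fail_times = fail_times

    def accept_chunk(self, chunk):
        if self.fail_times > 0:
            self.fail_times -= 1
            raise TransientError("504 Gateway Timeout")
        self.received.extend(chunk)

    def committed_offset(self):
        # Analogous to asking the server how much it has durably stored.
        return len(self.received)

def upload(server, data, chunk_size=4):
    """Resume from wherever the server says it left off."""
    offset = server.committed_offset()
    while offset < len(data):
        server.accept_chunk(data[offset:offset + chunk_size])
        offset = server.committed_offset()

def upload_with_retries(server, data, max_retries=3):
    """Caller-side retry loop: re-invoke the upload, which resumes itself."""
    for _ in range(max_retries + 1):
        try:
            upload(server, data)
            return True
        except TransientError:
            continue  # partial state is retained, not discarded
    return False  # give up, but keep the temporary file for a later retry

server = FlakyServer(fail_times=2)
ok = upload_with_retries(server, b"0123456789abcdef")
assert ok
assert bytes(server.received) == b"0123456789abcdef"
```

The design point is that the retry loop wraps the whole operation rather than a single request, so a long interruption costs a retry attempt rather than the entire transfer.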
I'm hoping this is a reasonable excuse to find a way forward [on >2GB downloads] here.
Also, an example of how "per-push notification" is implemented in GitHub events/webhooks:
I actually found my way here from Discourse, where the need for this was discussed:
This is something of an aside, but it would be nice to formalize PhutilConsoleProgressBar into a generic progress sink. A lot of the bin/storage dump-related code could use it, bin/host download could obviously use it, and we likely have use cases for reporting progress to the web via the API. However, PhutilConsoleProgressBar lacks the indirection layer needed to make this work cleanly.
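The missing indirection layer might look something like this (a minimal Python sketch of the idea, not the actual libphutil API; `ProgressSink`, `ConsoleProgressSink`, and `RecordingSink` are hypothetical names):

```python
class ProgressSink:
    """Generic progress interface, decoupled from any particular renderer."""
    def update(self, done, total):
        raise NotImplementedError

class ConsoleProgressSink(ProgressSink):
    """Console renderer, in the spirit of PhutilConsoleProgressBar."""
    def update(self, done, total):
        pct = int(100 * done / total)
        print(f"\r[{pct:3d}%]", end="", flush=True)

class RecordingSink(ProgressSink):
    """Records updates, e.g. for reporting progress to the web via an API."""
    def __init__(self):
        self.updates = []
    def update(self, done, total):
        self.updates.append((done, total))

def copy_stream(chunks, sink, total):
    # Long-running work reports through the sink and never touches
    # the console directly, so any renderer can be plugged in.
    done = 0
    for chunk in chunks:
        done += len(chunk)
        sink.update(done, total)

sink = RecordingSink()
copy_stream([b"ab", b"cd", b"e"], sink, total=5)
assert sink.updates == [(2, 5), (4, 5), (5, 5)]
```

With this split, bin/storage and bin/host would depend only on the abstract sink, and the console bar becomes one interchangeable implementation.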
My tentative plan is to add methods to HTTPSFuture for sending output to disk, then take the new parser pathway only when we're writing to disk. This should limit the amount of surface area exposed by the new parser.
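A rough shape for that "stream to disk only when configured" split, sketched in Python (this is not the real HTTPSFuture API; `StreamingResponse` and its methods are invented for illustration):

```python
import os
import tempfile

class StreamingResponse:
    """Buffer in memory by default; stream straight to disk when a
    sink path is configured, so large bodies never accumulate in RAM."""

    def __init__(self, sink_path=None):
        self.sink_path = sink_path
        self._buffer = bytearray()
        self._file = open(sink_path, "wb") if sink_path else None

    def on_data(self, chunk):
        # Only take the new streaming pathway when writing to disk; the
        # existing in-memory behavior (and its parser) stays untouched.
        if self._file is not None:
            self._file.write(chunk)
        else:
            self._buffer.extend(chunk)

    def finish(self):
        if self._file is not None:
            self._file.close()
            return None  # body is on disk, not in memory
        return bytes(self._buffer)

# In-memory path: small responses behave exactly as before.
r = StreamingResponse()
r.on_data(b"hello ")
r.on_data(b"world")
assert r.finish() == b"hello world"

# Disk path: chunks go straight to the file.
path = os.path.join(tempfile.mkdtemp(), "body.bin")
r = StreamingResponse(sink_path=path)
r.on_data(b"x" * 10)
r.finish()
assert os.path.getsize(path) == 10
```

Keeping the disk pathway behind an explicit opt-in is what limits the new parser's exposed surface area: callers that never set a sink never touch it.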
This task covers a lot of ground and many of the issues have been resolved. There are two remaining issues which are more narrowly covered by these followups:
Is the 2GB HTTP stuff in T12907 realistic to fix?
Not necessarily applicable in the general case, but see also T13393.
Aug 28 2019
According to curl/symbols-in-versions (this is a text file in the repository):
A possible issue is that letting cURL pick a protocol might lead to it selecting HTTP/1.0 in some cases (how/when could it possibly do this? Only by hard-coding known-broken hostnames, I think?), and forcing it to use HTTP/1.1 could break those cases, so maybe I'll go spelunking here. I also can't immediately find a date of introduction for CURL_HTTP_VERSION_1_1 from the documentation.
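For reference, forcing HTTP/1.1 looks like this with the libcurl option in question (shown here via pycurl as an assumed illustration, rather than through Phabricator's HTTPSFuture; no request is actually performed in this fragment):

```python
import pycurl

c = pycurl.Curl()
c.setopt(pycurl.URL, "https://example.com/")
# Force HTTP/1.1 instead of letting cURL negotiate a protocol.
# This maps to the libcurl symbol CURL_HTTP_VERSION_1_1.
c.setopt(pycurl.HTTP_VERSION, pycurl.CURL_HTTP_VERSION_1_1)
```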
Another variation of this is "add more documentation", although I think this pattern is less often a "problem domain / solution domain mismatch" issue and more often a "human communication" issue, usually following one of these two templates:
An adjacent issue is that PhabricatorMarkupCache is not currently marked as having cache persistence
Aug 27 2019
Loading a dump with missing data into a replica and then starting replication genuinely causes problems.
The use case from the thread is far off the beaten path ("The ideal version of the site on devices is a tiny version of the site on desktop"), so I'm not planning to put time into investigating this unless substantially more interest emerges. I doubt the root issue is strictly a bug in Phabricator.
I'd like to understand why this option is useful before making any changes here (that is: why wasn't the user happy with the site as it appeared by default on their device?).
I'm guessing it's a media query being calculated on some metric that isn't scaled by the browser ("inches", maybe?).