Pagination isn't quite as simple as I'd remembered. Specifically, we do not expose internal pagination cursors to viewers today because doing so permits an information discovery attack that goes like this:
Oct 19 2016
I think this behavior is very good.
This is also a potential problem in the general case, although that case arises very rarely.
Broadly, most applications can execute Spaces queries cheaply by querying `... AND spacePHID IN (<list of spaces the viewer can see>)` when Spaces are configured, which is efficient until some install decides to create 30,000 Spaces for some reason.
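The viewer-visibility filter described above can be sketched as follows. This is a minimal illustration assuming a parameterized SQL layer; the function and parameter names are hypothetical, not Phabricator's actual internals:

```python
# Hypothetical sketch: append a Spaces constraint so the database only
# returns rows in spaces the viewer can already see. Names are
# illustrative, not Phabricator's real API.

def with_spaces_constraint(where_sql, params, visible_space_phids):
    """Append a `spacePHID IN (...)` clause for the viewer's visible spaces."""
    if not visible_space_phids:
        # Viewer can see no spaces: fail closed, no rows can match.
        return where_sql + " AND 1 = 0", params
    placeholders = ", ".join(["%s"] * len(visible_space_phids))
    clause = f" AND spacePHID IN ({placeholders})"
    return where_sql + clause, params + list(visible_space_phids)

sql, params = with_spaces_constraint(
    "status = %s", ["open"], ["PHID-SPCE-aaaa", "PHID-SPCE-bbbb"])
# sql is now: "status = %s AND spacePHID IN (%s, %s)"
```

Note that the empty-list case must fail closed rather than omit the clause; omitting it would silently grant visibility into every space.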
Oct 8 2016
This helped! Thanks for another fast fix.
Oct 7 2016
I believe this should now be fixed in HEAD of master. It should promote to stable within 24 hours.
Sep 21 2016
I've banged on this a reasonable amount locally without issues and the originating instance reports that this seems to have calmed things down in production with these patches, so it seems like this pretty much just worked.
Sep 2 2016
For people who suffer from this: note that D16483 might actually have made things a little worse, by searching the DB for old messages (though not loading them). I tested with 1M messages on an SSD and didn't see any change, but YMMV.
Jul 31 2016
I deployed this stuff here. I neglected to get a production timing beforehand so I can't really claim it's actually faster, but it seems fast-ish now. Let me know if things look better and/or I ruined everything when you get around to deploying it.
I don't immediately have an attack on the JIRA thing (it should be less expensive in normal cases), and don't want to do the Auth stuff without more planning, and nothing else leaps out at me as having a similar sort of value for the cost.
There's still quite a bit of room here. Locally, I see a 220ms call for differential.revision.search. Here are savings which look fairly easy to capture:
That sort of leaves you in trouble with custom edge-based fields like the one above, but a reasonable short term approach is the one you've already taken:
Ah, there are really a couple of issues here.
That looks broadly correct to me, I just want to try to remove the assumption that the first object's first field's storage is universally the right storage in fixing this upstream. It's always true today and for the foreseeable future, just obviously not a Fundamental Axiom of the System. Let me review D16346 and I'll spend 10 minutes on this, I'm pretty sure it's straightforward, not a weird snowbally mess, and that the upstreamable patch is only cosmetically different from that one.
This is brittle but it works if your objects all use the native storage.
I think the issue is that there's no callable thing for bulk-loading custom fields right now, which is why SearchEngineExtension doesn't have a hook for it. Other SearchEngine extensions don't need it since none currently load any data.
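The bulk-loading idea above (one query for all objects' custom field storage, rather than one query per object) can be sketched like this. Everything here is hypothetical, including `load_storage_rows`, which stands in for a single `WHERE objectPHID IN (...)` query; none of these names are real Phabricator APIs:

```python
# Hypothetical sketch of the missing bulk-load hook: load custom field
# storage for many objects in one round trip instead of N+1 queries.

def bulk_load_custom_fields(objects, load_storage_rows):
    phids = [o["phid"] for o in objects]
    rows = load_storage_rows(phids)  # one query covering every object
    by_phid = {}
    for row in rows:
        by_phid.setdefault(row["objectPHID"], {})[row["fieldKey"]] = row["value"]
    for o in objects:
        # Objects with no storage rows get an empty field map.
        o["customFields"] = by_phid.get(o["phid"], {})
    return objects
```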
Jul 9 2016
(Only unique queries are saved, so this should be a small amount of data in most cases, but the Phacility setup tends to mean we issue a lot of unique queries.)
Using an extended policy sort of fixes this, except that extended policies currently only strengthen policies, so I also have to weaken the default View policy. This potentially makes the UI misleading, since it suggests that "all users" can view an instance, which isn't true.
One possible approach is to cache policy checks (when we've determined that user X can see object Y, cache that for the remainder of the request), but I don't want to do this idly since it has far-reaching implications, and there are cases where this cache could potentially produce the wrong result (for example, when we predict the effects of making a policy change during an edit, we evaluate objects as though they had a different policy than they really have).
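The per-request cache idea can be sketched as below, assuming some `can_see(user, object)` oracle (a hypothetical stand-in for the real policy check). The bypass flag corresponds to the policy-prediction case mentioned above, where the object is evaluated under a policy it does not actually have:

```python
# Sketch of a per-request policy cache, keyed by (user, object) and
# discarded at the end of the request. `can_see` is a hypothetical
# policy oracle, not a real Phabricator API.

class PolicyCache:
    def __init__(self, can_see):
        self._can_see = can_see
        self._cache = {}

    def check(self, user_phid, object_phid, bypass_cache=False):
        # Policy-prediction during edits must bypass the cache, because
        # the object is being evaluated under a hypothetical policy.
        if bypass_cache:
            return self._can_see(user_phid, object_phid)
        key = (user_phid, object_phid)
        if key not in self._cache:
            self._cache[key] = self._can_see(user_phid, object_phid)
        return self._cache[key]
```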
Jun 1 2016
D16001 likely provides an appropriate general-purpose mechanism for the user cache pool.
May 23 2016
The following queries are full table scans:
Mar 23 2016
In the extreme case where you have, say, a million tests, I'd expect there is probably little value in reporting each test into Harbormaster as a "pass", and your harness might want to summarize all passes into "999,998 additional tests passed" and submit two failures and one aggregate-pass.
T9704 also discusses slowness when inserting the tests. That may be unnecessarily slow right now, but I don't expect clients to submit unlimited numbers of tests in one API call. Instead, submit tests in chunks (say, 1K tests per call or whatever) by calling harbormaster.sendmessage repeatedly with a "work" status.
We don't currently have a sortable column on the unit message table, since pass, fail, etc., aren't naturally sortable by any key MySQL can construct.
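The chunked-submission approach described above can be sketched as follows. `conduit` stands in for a Conduit client; the message shape loosely follows harbormaster.sendmessage ("work" while more chunks remain, then a final terminal status), but treat the exact parameters as assumptions rather than the documented API:

```python
# Sketch: report failures individually, summarize passes into one
# aggregate record, and submit in chunks. `conduit` is a hypothetical
# Conduit client callable; parameter names are assumptions.

def submit_unit_results(conduit, target_phid, failures, pass_count,
                        chunk_size=1000):
    unit = list(failures)
    if pass_count:
        # One aggregate record instead of a million individual passes.
        unit.append({
            "name": f"{pass_count:,} additional tests passed",
            "result": "pass",
        })
    chunks = [unit[i:i + chunk_size] for i in range(0, len(unit), chunk_size)]
    for i, chunk in enumerate(chunks):
        is_last = (i == len(chunks) - 1)
        conduit("harbormaster.sendmessage", {
            "buildTargetPHID": target_phid,
            # "work" keeps the target open until the final chunk arrives.
            "type": ("fail" if failures else "pass") if is_last else "work",
            "unit": chunk,
        })
```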
Mar 7 2016
See also T2225 for a concrete issue caused by showing enormous numbers of paths (unbearably slow page loads because of the raw wire filesize).
Mar 4 2016
@paladox: Your last question was already answered in T8612#163215.
Mar 3 2016
Oh, OK. Is there a way to set it to unlimited, so that a user can choose to do it per file? Yes, it is unlikely a user will review a million changes, but it is always good to check.
You can manually adjust the $hard_limit in your local installation here: https://secure.phabricator.com/diffusion/P/browse/master/src/applications/diffusion/controller/DiffusionCommitController.php;ac729278328ed9679229d11ba1eefdad784b59e2$148
I don't intend to change that limit. I believe there is very little value in reviewing 1000+ file changes from the web UI, and that the 1K file limit is a reasonable one.
Yes, that's what I would like to see. Please allow it to show that for large commits.
That commit is changing more than 1,000 files.
@epriestley: Oh, because on https://phabricator.wikimedia.org/rAPAW76c6444cfb64e872b8b15688a8999d96af84a406 it doesn't do that, yet it is only changing about 100 files.
In T8612#163197, @paladox wrote: What about per-file diff viewing? It would hide them by default, but add an option to each file that says "show diff".