
Reindex dashboards and panels (allow migrations to queue a job to queue other indexing jobs)

Authored by epriestley on Apr 12 2019, 9:27 PM.



Depends on D20411. Ref T13272. Dashboards and panels have new indexes (Ferret and usage edges) that need a rebuild.

For large datasets like commits we have the "activity" flow in T11932, but these rebuilds shouldn't take more than a few minutes on any realistic install, so we can just queue them up as migrations.

Let migrations insert a job that effectively runs bin/search index --type SomeObjectType, then do that for dashboards and panels.
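The shape of the change (a migration inserts small per-type "reindex" jobs, which the daemons later execute) can be sketched generically. This is a minimal Python sketch of the queueing pattern only, not Phabricator's actual PHP workers; all names and data here are illustrative:

```python
from collections import deque

QUEUE = deque()  # stands in for the worker task table

# Hypothetical object store: type name -> object PHIDs.
OBJECTS = {
    "Dashboard": ["dashboard-1", "dashboard-2"],
    "Panel": ["panel-1", "panel-2", "panel-3"],
}

INDEX = {}  # stands in for the search index


def reindex_type_job(object_type):
    # The per-type job: index every object of one type.
    # Writing an entry twice is harmless, so the job is idempotent.
    for phid in OBJECTS[object_type]:
        INDEX[phid] = object_type


def migration():
    # The migration only inserts lightweight "reindex this type" jobs;
    # it does no indexing work itself.
    for object_type in ("Dashboard", "Panel"):
        QUEUE.append((reindex_type_job, object_type))


def run_daemon():
    # Rough equivalent of `bin/phd debug task`: drain the queue.
    while QUEUE:
        job, arg = QUEUE.popleft()
        job(arg)


migration()
run_daemon()
```

The point of the indirection is that the migration itself stays fast: it records the work to be done, and the daemons perform the actual indexing afterward.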

(I'll do Herald rules in a followup too, but I want to tweak one indexing thing there.)

Test Plan

Ran the migration, ran bin/phd debug task, saw everything get indexed with no manual intervention.

Diff Detail

rP Phabricator
Lint Passed
Tests Passed
Build Status
Buildable 22575
Build 30922: Run Core Tests
Build 30921: arc lint + arc unit

Event Timeline

Owners added a subscriber: Restricted Owners Package. Apr 12 2019, 9:27 PM
Harbormaster returned this revision to the author for changes because remote builds failed. Apr 12 2019, 9:29 PM
Harbormaster failed remote builds in B22574: Diff 48704!
  • Make the worker unit tests truncate the table before they run; otherwise, they get confused by the tasks inserted by these migrations.
amckinley added inline comments.

Maybe put these in a try/catch so users don't end up with a failed migration if this doesn't work for some reason?

This revision is now accepted and ready to land. Apr 17 2019, 5:37 PM

If there's some reason this may fail, I'd like to learn about it and fix it, ideally. If someone reports something dumb that we can't really fix (or detect?) we could try/catch our way around it, but these shouldn't fail for any legitimate reason I'm aware of.

This revision was automatically updated to reflect the committed changes.

(And the reindex is safe/idempotent, so some reasonable failures like "disk full" would be bad to try/catch our way through, since the correct remedy is to free some space and retry the migration.)

Or "db002, which has the worker database, is being super flaky, even though db001 with every other database is up" or something, although that may not actually lead to a possible state where the INSERT can fail and the patch can still be marked as successful.
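The argument against try/catch in the comments above can be made concrete: because reindexing is idempotent, the correct response to a transient failure is to surface it and rerun the migration, not to swallow it and mark the migration successful. A minimal Python sketch of that reasoning (illustrative names, not Phabricator code):

```python
class DiskFull(Exception):
    """Stands in for a transient failure like a full disk or flaky db."""


index = {}
disk_full = True  # simulated transient condition


def queue_reindex_job(phid):
    # Stands in for the INSERT into the worker task table.
    if disk_full:
        raise DiskFull("cannot INSERT worker task")
    index[phid] = True  # re-running this write is harmless


def run_migration(phids):
    # Deliberately no try/except: a failure should fail the migration,
    # so the admin fixes the underlying problem and reruns it.
    for phid in phids:
        queue_reindex_job(phid)


phids = ["dashboard-1", "panel-1"]
try:
    run_migration(phids)
except DiskFull:
    disk_full = False       # admin frees disk space...
    run_migration(phids)    # ...and retries; idempotence makes this safe
```

If the migration had caught the exception and reported success, the index would silently stay incomplete with no prompt to retry.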