
Don't report search indexing errors to the daemon log except from "bin/search index"

Authored by epriestley on Feb 15 2019, 1:30 PM.



Depends on D20177. Fixes T12425.

Search indexing currently reports object-load failures to the daemon log. This log is noisy, not concerning, not actionable, and not locally debuggable (it depends on the reporting user's entire state).

I think one common, fully legitimate case of this is indexing temporary files: they may have been deleted, entirely legitimately, by the time the indexer runs.

Instead of sending these errors to the log, eat them. If users don't notice the indexes aren't working: no harm, no foul. If users do notice, we'll run or have them run bin/search index as a first diagnostic step anyway, which will now report an actionable/reproducible error.
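The strict/non-strict split described above can be sketched like this (illustrative Python, not Phabricator's actual PHP implementation; the names `index_object`, `load_object`, and `strict` are assumptions for the sketch):

```python
class IndexableNotFoundError(Exception):
    """Raised when an object scheduled for indexing no longer exists."""


def load_object(object_id, store):
    # Hypothetical loader: a temporary file may legitimately be gone
    # by the time the indexer gets around to it.
    if object_id not in store:
        raise IndexableNotFoundError(object_id)
    return store[object_id]


def index_object(object_id, store, strict=False):
    """Index a single object.

    In strict mode (the "bin/search index" path), load failures are
    raised so the operator sees an actionable, reproducible error.
    In non-strict mode (the daemon task path), they are eaten: the
    task succeeds and nothing is written to the daemon log.
    """
    try:
        obj = load_object(object_id, store)
    except IndexableNotFoundError:
        if strict:
            raise
        return None  # eat the error quietly; no harm, no foul
    return f"indexed:{obj}"
```

So a daemon calling `index_object(..., strict=False)` quietly skips missing objects, while running the same code path in strict mode surfaces the failure for diagnosis.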

Test Plan
  • Faked errors in both cases (initial load, re-load inside the locked section).
  • Ran indexes in strict/non-strict mode.
  • Got exception reports from both branches in strict mode.
  • Got task success without errors in both cases in non-strict mode.

Diff Detail

rP Phabricator
Lint Not Applicable
Tests Not Applicable

Event Timeline

amckinley added inline comments.

It looks like the implementation of PhabricatorGlobalLock::newLock() will result in a new format for these lock names. Renaming this lock will break, uh, stale unstopped daemons that are running concurrently with updated daemons? Or something? And when I say "break" I mean "might index an object twice" or be otherwise pretty harmless or at least recoverable by doing bin/search index --all.
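The hazard in this comment can be sketched in Python (illustrative only; the lock names below are made up and do not reflect the actual formats `PhabricatorGlobalLock::newLock()` produces): two daemons that derive different lock names for the same object no longer exclude each other.

```python
import threading

# Hypothetical registry of named global locks, keyed by lock name.
_locks = {}
_registry_guard = threading.Lock()


def acquire_named_lock(name):
    """Non-blocking acquire of the lock with the given name.

    Two callers exclude each other only if they compute the *same*
    name for the same underlying resource.
    """
    with _registry_guard:
        lock = _locks.setdefault(name, threading.Lock())
    return lock.acquire(blocking=False)


# An old daemon and a new daemon derive different lock names for the
# same object after the rename (both names are invented here):
old_name = "search:PHID-FILE-abcd"
new_name = "search.index(PHID-FILE-abcd)"

# Both acquisitions succeed, so both daemons may index the object
# concurrently -- the mostly-harmless double-indexing described above.
both_acquired = acquire_named_lock(old_name) and acquire_named_lock(new_name)
```

A second attempt to take either name would fail, showing the lock itself works; the problem is only that the two generations of daemons disagree on the name.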

This revision is now accepted and ready to land. Feb 19 2019, 7:08 PM

Yeah, I think you'd have to:
  • have a queue of indexing tasks which includes the same object two or more times;
  • keep running the old daemons during an upgrade;
  • be upgrading only a little bit, so running the old code on the new schema doesn't break anything else;
  • then start some new daemons without stopping the old ones; and
  • happen to have an old daemon and a new daemon grab tasks which index the same object at the same time.

At that point, I think the object can end up with a partially inconsistent index if you keep getting unlucky, which you'd have to fix by updating the object again or by running bin/search index.

This revision was automatically updated to reflect the committed changes.