
Explicitly shuffle nodes before selecting one for cluster sync
ClosedPublic

Authored by epriestley on Oct 5 2018, 9:01 PM.
Tags
None
Subscribers
None

Details

Summary

Depends on D19734. Ref T13202. Ref T13109. Ref T10884. See PHI905. See PHI889. We currently rank cluster nodes in three cases:

  1. when performing a write, we can go to any node (D19734 should make our ranking good);
  2. when performing a read, we can go to any node (currently random, but T10884 discusses ideas to improve our ranking);
  3. when performing an internal synchronization before a read or a write, we must go to an up-to-date node.

Currently, case (3) is not exactly deterministic, but it isn't random either, and we won't spread intracluster traffic across the cluster evenly if, say, half of it is up to date and half of it is still synchronizing. For a given write, I believe all nodes currently tend to synchronize from whichever node first received the write.

Instead, shuffle the list and synchronize from any up-to-date node.

(I think we could improve upon this only by knowing which nodes actually have load and selecting the least-loaded -- doable, but not trivial.)
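The selection logic above can be sketched as follows. This is a minimal illustrative sketch, not the actual repository code: the node shape (a dict with a hypothetical `up_to_date` flag) and the function name are assumptions for illustration; the real implementation operates on cluster device references.

```python
import random

def choose_sync_node(nodes):
    """Pick a random up-to-date node to synchronize from.

    `nodes` is a list of dicts with a boolean "up_to_date" key
    (a hypothetical shape for this sketch). Shuffling before
    scanning spreads intracluster sync traffic evenly across
    up-to-date nodes, instead of every node tending to pull
    from the same first up-to-date node in the list.
    """
    candidates = list(nodes)
    random.shuffle(candidates)
    for node in candidates:
        if node["up_to_date"]:
            return node
    # No node is up to date yet; caller must wait and retry.
    return None
```

A least-loaded selection would replace the shuffle with a sort on observed load, but that requires collecting load data from each node first.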

Test Plan

Poked at it locally; will deploy to secure. This is hard to measure or test very convincingly.

Diff Detail

Repository
rP Phabricator
Lint
Lint Not Applicable
Unit
Tests Not Applicable