
(stable) Limit the read buffer size in `bin/storage dump`

Description

(stable) Limit the read buffer size in bin/storage dump

Summary:
Ref T13328. Currently, we read from mysqldump something like this:

until (done) {
  for (100 ms) {
    mysqldump > in-memory-buffer;
  }

  in-memory-buffer > disk;
}

This general structure isn't great. In this use case, where we're streaming a large amount of data from a source to a sink, we'd prefer to have a "select()"-like way to interact with futures, so our code is called after every read (or maybe once some small buffer fills up, if we want to do the writes in larger chunks).
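For illustration only, a loop with that "wake on readable" shape might look roughly like the following in plain PHP, using stream_select() directly rather than the futures API (the source command and sink path are placeholders, not anything in bin/storage dump):

<?php
// Illustration only: the general shape of "wake when the pipe is readable",
// using stream_select() instead of polling on a fixed timer. The source
// command and sink path are placeholders.

$pipes = array();
$process = proc_open(
  'head -c 134217728 /dev/zero',      // finite, fast stand-in for mysqldump
  array(1 => array('pipe', 'w')),
  $pipes);
$stdout = $pipes[1];
$out = fopen('/tmp/sink.out', 'wb');

$write = null;
$except = null;
while (!feof($stdout)) {
  $read = array($stdout);
  // Block until data is actually available (or 1s passes), instead of
  // sleeping on a timer and buffering whatever piled up in the meantime.
  if (stream_select($read, $write, $except, 1) > 0) {
    fwrite($out, fread($stdout, 65536));
  }
}

fclose($out);
proc_close($process);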

We don't currently have this (FutureIterator can wake up every X milliseconds, or on future exit, but today cannot wake for readable futures), so we may buffer an arbitrary amount of data in memory (however much mysqldump can write in 100ms).

Reduce the update frequency from 100ms to 10ms, and limit the buffer size to 32MB. This effectively imposes an artificial throughput limit of 3,200MB/sec (at most 32MB flushed per 10ms poll), but hopefully that's fast enough that we'll have a "wake on readable" mechanism by the time it becomes a problem.
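As a rough standalone sketch of the bounded-buffer behavior (plain PHP, not the actual bin/storage dump code; the source command and output path are stand-ins):

<?php
// Rough sketch of the bounded-buffer loop: poll every 10ms, never let more
// than 32MB accumulate in memory, flush to disk each cycle.

$max_buffer_bytes = 32 * 1024 * 1024;  // cap the in-memory buffer at 32MB
$poll_interval_us = 10 * 1000;         // poll every 10ms instead of 100ms

$pipes = array();
$process = proc_open(
  'head -c 134217728 /dev/zero',       // finite, fast stand-in for mysqldump
  array(1 => array('pipe', 'w')),
  $pipes);
$stdout = $pipes[1];
stream_set_blocking($stdout, false);

$out = fopen('/tmp/dump.out', 'wb');

while (!feof($stdout)) {
  $buffer = '';

  // Read whatever is available, but never let the buffer exceed the cap.
  while (strlen($buffer) < $max_buffer_bytes) {
    $chunk = fread($stdout, min(65536, $max_buffer_bytes - strlen($buffer)));
    if ($chunk === false || $chunk === '') {
      break;
    }
    $buffer .= $chunk;
  }

  // Flush to disk, then wait ~10ms before polling again: at most 32MB moves
  // per 10ms poll, i.e. roughly 3,200MB/sec.
  if (strlen($buffer)) {
    fwrite($out, $buffer);
  }
  usleep($poll_interval_us);
}

fclose($out);
proc_close($process);

Draining the buffer on every poll and never reading past the cap is what keeps memory bounded at roughly 32MB, regardless of how quickly the source produces data.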

Test Plan:

  • Replaced mysqldump with cat /dev/zero as the source command, to get fast input.
  • Ran bin/storage dump with var_dump() on the buffer size.
  • Before change: saw arbitrarily large buffers (300MB+).
  • After change: saw consistent maximum buffer size of 32MB.

Reviewers: amckinley

Reviewed By: amckinley

Maniphest Tasks: T13328

Differential Revision: https://secure.phabricator.com/D20617