
Limit the read buffer size in `bin/storage dump`
ClosedPublic

Authored by epriestley on Jun 25 2019, 12:25 PM.
Tags
None
Subscribers
None

Details

Summary

Ref T13328. Currently, we read from mysqldump something like this:

until (done) {
  for (100 ms) {
    mysqldump > in-memory-buffer;
  }

  in-memory-buffer > disk;
}

This general structure isn't great. In this use case, where we're streaming a large amount of data from a source to a sink, we'd prefer a "select()"-like way to interact with futures, so our code runs after every read (or perhaps once some small buffer fills up, if we want to do writes in larger chunks).
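As a minimal sketch of the "wake on readable" pattern described above (in Python, not Phabricator's actual PHP; the function and argument names are invented for illustration): `select()` blocks until the source pipe has data, and each read is copied straight to the sink, so no large in-memory buffer can accumulate.

```python
import select
import subprocess

def stream(source_argv, sink_path, chunk_size=64 * 1024):
    proc = subprocess.Popen(source_argv, stdout=subprocess.PIPE)
    with open(sink_path, "wb") as sink:
        while True:
            # Wake only when the pipe is readable, instead of on a timer.
            select.select([proc.stdout], [], [])
            chunk = proc.stdout.read1(chunk_size)
            if not chunk:
                break  # EOF: the source process closed its stdout
            sink.write(chunk)
    return proc.wait()
```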

We don't currently have this (FutureIterator can wake up every X milliseconds, or on future exit, but, today, cannot wake for readable futures), so we may buffer an arbitrary amount of data in memory (however much data mysqldump can write in 100ms).

Reduce the polling interval from 100ms to 10ms, and limit the buffer size to 32MB. Since we read at most 32MB per 10ms tick, this effectively imposes an artificial 3,200MB/sec ceiling on throughput, but hopefully that's fast enough that we'll have a "wake on readable" mechanism by the time it's a problem.
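A hedged sketch of the new loop (again Python rather than the actual PHP patch; names are invented): wake roughly every 10ms, read whatever is available up to the 32MB cap, and flush the buffer to disk. Reading at most 32MB per 10ms tick is what yields the ~3,200MB/sec throughput ceiling noted above.

```python
import os
import subprocess
import time

POLL_INTERVAL_S = 0.010          # 10ms wakeup, down from 100ms
MAX_BUFFER = 32 * 1024 * 1024    # flush once 32MB has accumulated

def dump(source_argv, sink_path):
    proc = subprocess.Popen(source_argv, stdout=subprocess.PIPE)
    fd = proc.stdout.fileno()
    os.set_blocking(fd, False)  # reads fail fast when no data is ready
    with open(sink_path, "wb") as sink:
        buffer = bytearray()
        while True:
            try:
                chunk = os.read(fd, MAX_BUFFER - len(buffer))
            except BlockingIOError:
                chunk = None  # nothing ready yet; sleep and retry
            if chunk:
                buffer += chunk
            # Flush when the cap is hit, or whenever the source stalls.
            if len(buffer) >= MAX_BUFFER or not chunk:
                sink.write(bytes(buffer))
                buffer.clear()
            if chunk == b"":
                break  # EOF: the source closed its stdout
            if chunk is None:
                time.sleep(POLL_INTERVAL_S)  # the 10ms tick
    return proc.wait()
```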

Test Plan
  • Replaced mysqldump with cat /dev/zero as the source command, to get fast input.
  • Ran bin/storage dump with var_dump() on the buffer size.
  • Before change: saw arbitrarily large buffers (300MB+).
  • After change: saw consistent maximum buffer size of 32MB.

Diff Detail

Repository
rP Phabricator
Lint
Lint Not Applicable
Unit
Tests Not Applicable