When acquiring a GlobalLock, put good connections that just got unlucky back in the pool
Summary:
See PHI1794, which describes a connection exhaustion issue with a large number of webhook tasks in queue.
The "GlobalLock" mechanism manages a separate connection pool from the main pool, and webhook workers immediately try to grab a webhook lock with a 0-second wait when they start. So far, this is fine.
Prior to this change, good connections which fail to acquire a lock are discarded. This can lead to connection exhaustion as the worker rapidly cycles through lock attempts: the connections remain open for at least 60 seconds (since D16389) in an effort to avoid outbound port exhaustion, but they're effectively orphaned because they aren't part of the main pool and aren't part of the lock pool. We're basically leaking a connection every time we fail to lock.
Failing to lock doesn't mean we need to discard the connection: it's a completely suitable connection for reuse. Instead of dropping it on the floor, put it into the lock pool.
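The shape of the fix, as a rough sketch (not the literal diff): the pool field, connection setup, and method names below are assumptions modeled on how PhabricatorGlobalLock acquires locks via MySQL GET_LOCK(), and may not match the actual change exactly.

```
<?php

// Hedged sketch only: $pool, getName(), newLockConnection(), and the use of
// queryfx_one()/head()/PhutilLockException mirror the general shape of
// PhabricatorGlobalLock but are not copied from the real diff.
final class ExampleGlobalLock {

  private static $pool = array();
  private $conn;

  protected function doLock($wait) {
    // Reuse a pooled lock connection if one is available; otherwise open a
    // new one (connection setup details elided).
    $conn = array_pop(self::$pool);
    if (!$conn) {
      $conn = $this->newLockConnection();
    }

    $row = queryfx_one(
      $conn,
      'SELECT GET_LOCK(%s, %f)',
      $this->getName(),
      $wait);

    if (!head($row)) {
      // Before this change, "$conn" was simply dropped here: still open (for
      // at least 60 seconds, per D16389), but in neither the main pool nor
      // the lock pool. Return it to the lock pool instead, so the next lock
      // attempt can reuse it rather than opening a fresh connection.
      self::$pool[] = $conn;
      throw new PhutilLockException($this->getName());
    }

    $this->conn = $conn;
  }
}
```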
Test Plan:
- Used "bin/webhook call ... --count 10000 --background" to queue a large number of webhook calls against a slow ("sleep(15);") webhook.
- Used "bin/phd launch 32 taskmaster" to start taskmasters.
- Observed MySQL connection behavior:
  - Before change: all 2048 configured connections were immediately exhausted.
  - After change: connections stable at ~160.
- Ran the queue for a while, saw the expected single-threaded calls to the webhook.
Differential Revision: https://secure.phabricator.com/D21369