
arc file upload fails with 502 error from Nginx reverse proxy
Closed, Resolved · Public

Description

$ arc --trace upload ios-release-2.2.1-2034.zip 
libphutil loaded from '/Users/pol/Automatic/libphutil/src'.
arcanist loaded from '/Users/pol/Automatic/arcanist/src'.
Config: Reading user configuration file "/Users/pol/.arcrc"...
Config: Did not find system configuration at "/etc/arcconfig".
Working Copy: No candidate locations for .arcconfig from this working directory.
Working Copy: Path "/Users/pol/Library/Developer/Xcode/Archives/2014-08-31" is not in any working copy.
>>> [0] <conduit> conduit.connect() <bytes = 545>
>>> [1] <http> https://<REDACTED>/api/conduit.connect
<<< [1] <http> 724,081 us
<<< [0] <conduit> 724,421 us
Uploading 'ios-release-2.2.1-2034.zip'...
>>> [2] <conduit> file.upload() <bytes = 30799262>
>>> [3] <http> https://<REDACTED>/api/file.upload
<<< [3] <http> 22,257,774 us
<<< [2] <conduit> 22,258,048 us

[2014-09-01 09:32:59] EXCEPTION: (HTTPFutureHTTPResponseStatus) [HTTP/502] 
<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.4.7</center>
</body>
</html> at [<phutil>/src/future/http/BaseHTTPFuture.php:337]
  #0 BaseHTTPFuture::parseRawHTTPResponse(string) called at [<phutil>/src/future/http/HTTPSFuture.php:387]
  #1 HTTPSFuture::isReady() called at [<phutil>/src/future/Future.php:39]
  #2 Future::resolve(NULL) called at [<phutil>/src/future/FutureProxy.php:36]
  #3 FutureProxy::resolve() called at [<phutil>/src/conduit/ConduitClient.php:24]
  #4 ConduitClient::callMethodSynchronous(string, array) called at [<arcanist>/src/workflow/ArcanistUploadWorkflow.php:82]
  #5 ArcanistUploadWorkflow::run() called at [<arcanist>/scripts/arcanist.php:338]

From /var/log/nginx/error.log:

2014/09/01 16:30:56 [error] 20808#0: *68538 upstream sent unexpected FastCGI record: 3 while reading response header from upstream, client: 98.210.228.233, server: <REDACTED>, request: "POST /api/file.upload HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "<REDACTED>"

From /var/log/php-fpm/www-error.log:

[01-Sep-2014 16:37:09 UTC] PHP Fatal error:  Allowed memory size of 134217728 bytes exhausted (tried to allocate 28006068 bytes) in Unknown on line 0

Note that arc is up-to-date and that uploading the same file using the Phabricator /file app in Chrome works fine. The file is 20.7 MB and the upload limit should be 1 GB across the entire system.

Event Timeline

swisspol raised the priority of this task from to Needs Triage.
swisspol updated the task description.
swisspol added a subscriber: rfergu.
swisspol added a subscriber: swisspol.

This only gets a brief mention here:

https://secure.phabricator.com/book/phabricator/article/configuring_file_upload_limits/

...and I don't think we have a config check for it (although we could), but this is probably the issue:

  • memory_limit: For some uploads, file data will be read into memory before Phabricator can adjust the memory limit. If you exceed this, PHP may give you a useful error, depending on your configuration.

Specifically, set memory_limit to -1 in your php.ini (or some large value, like 512M) to permit large drag-and-drop uploads.
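For example, on a php-fpm setup like the one in the error log above, that usually means editing the ini file the FPM pool actually reads and then restarting the service (paths and the service name below are illustrative and vary by distribution):

  ; php.ini (or the FPM pool's ini override) -- value is illustrative
  memory_limit = 512M

  ; restart the pool afterwards so the new limit takes effect, e.g.:
  ;   sudo service php-fpm restart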

The way drag-and-drop uploads work requires us to buffer the whole file into memory. We may be able to get around this some day, but it's complex under the PHP model and we likely need to divert these requests through Node, which is a lot of work. Until then, increasing the memory limit to allow processes to temporarily buffer these large uploads is the easiest workaround.
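As a minimal illustration of the constraint (not Phabricator's actual controller code): with a raw-body upload, PHP hands the application the request body as a single string, so the whole file is resident in memory before it can be written anywhere.

  <?php
  // Illustrative sketch only: a raw-body ("drag and drop" style) upload is
  // materialized as one string before any application code can act on it...
  $data = file_get_contents('php://input');   // whole file now lives in RAM
  // ...so peak usage is at least the file size plus any copies made while
  // encoding or writing it, which is what trips memory_limit on large files.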

Ah, that explains why we saw some files fail to upload both from the command line and via drag & drop in Phriction / Maniphest, while uploading fine from the /file app.

In any case, your explanation mentions that drag & drop uploads require in-memory buffering, but why does it fail from the command line as well? Since it's your API, couldn't you perform the upload the same way the /file app does?

The API sends the file as a JSON blob, not as a multipart file upload, so it's essentially similar to the Ajax mechanism. We do this for simplicity, and because almost all files users upload are small (20MB files should normally be fine with this scheme -- 1GB files may not be, but that's a rare use case and not one we've put much time into supporting).
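For reference, here is a rough sketch of what the client side amounts to, using the classes visible in the trace above (authentication is omitted, and the data_base64 parameter name is taken from the current Conduit documentation, so treat the details as assumptions). Because the file is base64-encoded into the request body, it grows by roughly a third before the server-side PHP process has to hold and decode it.

  <?php
  // Sketch under the assumptions above -- not arcanist's literal code.
  require_once '/path/to/libphutil/src/__phutil_library_init__.php';

  $client = new ConduitClient('https://phabricator.example.com/');
  // conduit.connect() handshake / credentials omitted for brevity.

  $data = Filesystem::readFile('ios-release-2.2.1-2034.zip');
  $file_phid = $client->callMethodSynchronous(
    'file.upload',
    array(
      'name'        => 'ios-release-2.2.1-2034.zip',
      'data_base64' => base64_encode($data),  // ~4/3 of the original size
    ));
  echo "Uploaded: {$file_phid}\n";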

We can add at least an approximate config check for this:

  • If storage.upload-size-limit is set; and
  • ini_get('memory_limit') is positive; and
  • ini_get('memory_limit') - memory_get_usage() is less than the upload limit at config check time;
  • then raise a warning about the effective cap on Ajax/API upload sizes.

It won't be byte-for-byte perfect but should catch big mismatches like 1GB configured vs 20MB effective.
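Roughly, the check could look something like this (hypothetical sketch: the class it would live in and the phutil_parse_bytes()-style helper for shorthand values like "512M" are assumptions, not the final implementation):

  <?php
  // Approximate sketch of the proposed setup check.
  $upload_limit = PhabricatorEnv::getEnvConfig('storage.upload-size-limit');
  $memory_limit = ini_get('memory_limit');

  if ($upload_limit !== null && (int)$memory_limit > 0) {
    // Both values may use shorthand like "512M"; assume a parse helper.
    $limit_bytes  = phutil_parse_bytes($memory_limit);
    $upload_bytes = phutil_parse_bytes($upload_limit);
    $headroom     = $limit_bytes - memory_get_usage();

    if ($headroom < $upload_bytes) {
      // Raise a setup warning: Ajax/API uploads are effectively capped
      // near $headroom bytes, well below 'storage.upload-size-limit'.
    }
  }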

chad triaged this task as Low priority. Sep 2 2014, 8:36 PM
chad added projects: Arcanist, Files.