For the migration, let me just publish "run this command if you want to search existing files" in the changelog for now, and then we can do the real migration if we see confusion about it. I'd bet that 99% of the time you want to search for something you recently uploaded, so the index will be functionally useful within a week even if you miss the guidance in the changelog.
Apr 18 2017
To make the indexing work for new stuff, I think you can do this:
@epriestley I just landed D17702, without really tackling the problem of files being created in non-standard code paths. I searched the codebase for PhabricatorFile::initializeNewFile() and didn't find anything scary-looking; do you have any specific examples that might need updating?
Apr 17 2017
Apr 12 2017
Apr 10 2017
This seems to have fixed the issue. Thanks for the quick fix.
T12531: Unable to upload file: failed to read 4583864320 bytes after offset 0 discusses one side effect of these changes; it should be fixed at HEAD of master and stable now.
Thanks for that catch in D17651, let me know if you're still seeing issues after updating. I'll cherry-pick that to stable too.
Yeah, that looks like it's consistent with the hypothesis above. I expect:
Not sure if this is useful:
Yeah, pretty sure what's happening is:
I think the first error is (maybe?) causing the chunk engine to fail, so we're falling back to the "upload the entire file in one shot" engine, which ain't great for 4GB files.
Alright, let me take a look. That exception is pretty specific and it may be obvious from the code.
Hmm, I got a different error with a 9MB file. But this looks like a problem on our end:
I originally tried a 1GB file and wasn't able to reproduce...
(Actually maybe you need a 9MB file.)
Does this reproduce with a 5MB file?
From the server-side error logs:
Okay, I managed to reproduce it with non-sensitive data:
Apr 9 2017
We ultimately built an "ngram index" for this task; see T9979 for discussion.
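As a rough illustration of the ngram-index idea (this is a minimal trigram sketch, not Phabricator's actual schema or implementation):

```python
def trigrams(s):
    """Split a string into its overlapping three-character ngrams."""
    s = s.lower()
    return {s[i:i + 3] for i in range(len(s) - 2)}

# Inverted index: trigram -> set of document ids containing it.
index = {}

def add_document(doc_id, name):
    for gram in trigrams(name):
        index.setdefault(gram, set()).add(doc_id)

def search(substring):
    """Candidate documents whose names contain every trigram of the query."""
    grams = trigrams(substring)
    if not grams:
        return set()
    sets = [index.get(g, set()) for g in grams]
    return set.intersection(*sets)

add_document(1, "profile-photo.png")
add_document(2, "backup.tar.gz")
print(search("photo"))  # -> {1}
```

Because every trigram of the query must match, substring searches become cheap set intersections instead of full table scans; the real index lives in MySQL rather than in memory, but the principle is the same.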
- When you click "Delete File", we currently delete the file in the web process. Since we've supported enormous files and pluggable storage backends for a while, this could take an arbitrarily long amount of time to complete.
- Instead, we want to flag the file as "deleted", hide it in the web UI, and queue up a task in the daemons to actually get rid of the data.
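The two steps above can be sketched like this (a minimal illustration; the queue and storage objects here are hypothetical stand-ins, not Phabricator's actual classes):

```python
import queue

task_queue = queue.Queue()          # stand-in for the daemon task queue
storage = {"F123": b"...data..."}   # stand-in for the storage backend
files = {"F123": {"deleted": False}}

def delete_file(file_id):
    """Web request handler: cheap flag + enqueue, no slow I/O."""
    files[file_id]["deleted"] = True      # hide it in the web UI
    task_queue.put(("destroy", file_id))  # daemons do the real work later

def daemon_worker():
    """Daemon: actually destroy the data, however long that takes."""
    action, file_id = task_queue.get()
    if action == "destroy":
        storage.pop(file_id, None)

delete_file("F123")   # returns immediately
daemon_worker()       # runs out-of-band, at its own pace
```

The point of the split is that the web request stays fast and bounded regardless of file size or storage backend latency.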
Apr 6 2017
See T12515 for followup guidance.
Apr 5 2017
The HackerOne issue for this is here:
The security part of this is fixed by D17625, but I'm planning to write a support script like bin/files integrity or similar to do things like "compute hashes for existing files" and "check that a file or set of files matches their integrity hashes".
F60036 now plays correctly for me in Safari.
The video in T12078 now plays correctly for me so I think this is, in fact, resolved.
Apr 4 2017
After D17614, arc download uses file.search to retrieve a URI it can GET, then does a normal HTTP GET to that URI, retrieving the file content in the HTTP response body.
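The flow is: one Conduit call to file.search to learn where the data lives, then one ordinary HTTP GET. A hedged sketch of the first step (the hostname, token, and PHID are placeholders; the dataURI field name is read from the file.search response):

```python
from urllib.parse import urlencode

def build_file_search_request(base_uri, api_token, file_phid):
    """Build the file.search POST used to look up a downloadable URI."""
    url = base_uri.rstrip("/") + "/api/file.search"
    body = urlencode({
        "api.token": api_token,
        "constraints[phids][0]": file_phid,
    })
    return url, body

url, body = build_file_search_request(
    "https://phab.example.com", "api-xxxxx", "PHID-FILE-abcdef")

# POST `body` to `url`, read result["data"][0]["fields"]["dataURI"] from
# the JSON response, then issue a plain HTTP GET against that URI; the
# file content is the GET response body.
```

The advantage over the old approach is that the content transfer is a boring HTTP download rather than a Conduit response, so it streams and resumes like any other file fetch.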
Apr 3 2017
Apr 2 2017
Some feedback in case this is ever prioritised and picked up again: we've been using it internally for the binary assets of an Unreal Engine 4 based game project for about 7 months now, and we haven't run into any issues. It works great for us.
Mar 30 2017
Files now render with a download link, which I think was the core issue here:
Sounds like this is resolved.
Mar 29 2017
Mar 27 2017
Mar 20 2017
We don't plan to take any other upstream actions here, but let us know if anyone has further questions.
Mar 18 2017
So our only option is to convert all our LFS objects into normal Git objects, increasing both our download times and your storage and bandwidth costs? Our Git LFS objects for the entire repository total around 800MB, and without Git LFS our build server will have to download all of that from Phacility every single time it runs a build.
We have no plans to move this forward or enable it in the Phacility cluster in the near future.
Is this going to be enabled in Phacility soon? We just realised that Phacility doesn't appear to have it on, and this is a major blocker for us moving from our own instance to Phacility:
Mar 16 2017
(You could also pipe the list into bin/remove destroy --force, equivalently.)
You can re-run the migration explicitly with: