I think that "forking" is the right answer here.
Mar 30 2016
Mar 29 2016
It's not rewriting an existing published branch (we don't permit that); it's just publishing a branch that you've accumulated a bunch of things on as you've been iterating for some period of time. You're not done with it by any stretch of the imagination, and it's been sitting on your machine too long for you to not push it somewhere.
Mar 6 2016
So that was our "foolproof" method of totally nuking and resetting the contents of a workspace before leasing it out again, and ironically, it does work all of the time, unless there is nothing to change about it.
we've been wondering for about a month now why on earth this would happen when no error output is produced
IT EXITS WITH A 1
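A hypothetical sketch of the failure mode above (the real script and its exit codes aren't shown here): a cleanup command that exits 1 with no output when it finds nothing to change, which a naive wrapper misreads as a hard failure.

```python
def interpret_exit(returncode, stderr):
    """Naively interpret a subprocess result: any nonzero exit is a failure.

    Illustrative only -- some tools exit 1 and print nothing when there
    was nothing to change, which this logic cannot distinguish from a
    real error.
    """
    if returncode == 0:
        return "ok"
    if stderr:
        return f"failed: {stderr}"
    # The confusing case from the thread: exit 1, no error output at all.
    return "failed: (no error output)"
```

If the underlying tool documents a distinct "nothing to do" exit code, the wrapper should special-case it instead of treating all nonzero exits alike.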
Mar 3 2016
./bin/search index --force --all fixed this, but I'm confused how some random tasks weren't indexed and some were.
Feb 26 2016
Feb 25 2016
Probably a really quick partial remedy to the fallout (not root cause) is just exploring ways to make it super obvious to people that they shouldn't be interacting with disabled accounts.
To explain the hairiness of this a little better in our case, the actual "migration" procedure looks like this:
I'm following these because I've frankensteined the existing working-copy and hosts blueprints into one megablueprint, mainly because we run one build per machine, and we run a lot of them.
Feb 24 2016
Feb 17 2016
Normally I would say that I want to do it myself, but if you're willing to clean it up right now, go for it.
Feb 16 2016
ft_stopword_file = /opt/phabricator/phabricator/resources/sql/stopwords.txt
ft_min_word_len = 3
ft_boolean_syntax = ' |-><()~*:""&^'
OK, so that was the exact phrase that didn't show up when searched; it works fine here.
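For context on why an exact phrase can silently miss: here's a rough, illustrative model (not MySQL's actual tokenizer) of how `ft_min_word_len` and a stopword file drop terms before they ever reach the index.

```python
def indexable_tokens(phrase, min_word_len=3,
                     stopwords=frozenset({"the", "a", "of"})):
    """Approximate which words a fulltext engine would index.

    Illustrative model only: real MySQL fulltext tokenization has many
    more rules (boolean operators, truncation, charset handling), and
    the stopword set here is a made-up stand-in for the stopword file.
    """
    return [
        w for w in phrase.lower().split()
        if len(w) >= min_word_len and w not in stopwords
    ]
```

A phrase made entirely of short or stopped words yields nothing to match against, so the search returns no results even though the text is plainly there.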
Notes on setting up Vagrant with libvirt support
I think we were using strings that were too long when checking this out on our production install.
I can't close this task to spite myself.
Feb 15 2016
(apologies if this recently changed, I'm a few weeks behind master)
mysql> show create table owners_package\G
*************************** 1. row ***************************
       Table: owners_package
Create Table: CREATE TABLE `owners_package` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `phid` varbinary(64) NOT NULL,
  `name` varchar(128) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
  `originalName` varchar(255) COLLATE utf8mb4_bin NOT NULL,
  `description` longtext COLLATE utf8mb4_bin NOT NULL,
  `primaryOwnerPHID` varbinary(64) DEFAULT NULL,
  `auditingEnabled` tinyint(1) NOT NULL DEFAULT '0',
  `mailKey` binary(20) NOT NULL,
  `status` varchar(32) COLLATE utf8mb4_bin NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `key_phid` (`phid`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin
1 row in set (0.00 sec)
Yeah, we have some packages which are generated by another application and some which are generated by users. I wanted to lock the field only when the package was generated by the other application; this is a simplified version of the actual extension in place.
Feb 5 2016
Can I do anything to make this more appealing or land-able or attention worthy?
Can I get clarification on philosophical opposition to this? Is it leaking abstraction of the "icon" interface to the end user?
Feb 2 2016
Moving bar rendering to a specialized coverage bar view
Feb 1 2016
Jan 25 2016
Jan 10 2016
The error output was not from the script itself; it was just raw errors. I think I remember looking into this a long while ago: it's because the migrate script (including the dry run) actually reads all of the files, and the act of reading the file is what generated the error. I should have described this a little better than "dry run shows some files are going to fail", and instead described it as "dry run shows errors for some of the files".
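As a sketch of that behavior (hypothetical function names, not the actual migrate script): a dry run that still opens every file will surface read errors even though it writes nothing.

```python
def write_to_new_storage(path, data):
    """Hypothetical destination writer; never called in a dry run."""
    pass

def migrate_files(paths, dry_run=False):
    """Sketch of a migration whose dry run still reads every file.

    Illustrative only: the point is that the read happens in both
    modes, so read errors show up in a dry run too.
    """
    errors = []
    for path in paths:
        try:
            with open(path, "rb") as f:
                data = f.read()  # the read itself is what can fail
        except OSError as e:
            errors.append((path, str(e)))
            continue
        if not dry_run:
            write_to_new_storage(path, data)
    return errors
```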
Jan 6 2016
Dec 18 2015
FWIW, we have a script that uses arc patch --nobranch to "graft" someone else's working tree into your own working tree. We suppressed arc patch output entirely after enough people complained about "all of the branches it's making".
Dec 17 2015
This isn't quite done; I'm putting it up to ask for some ideas. I've got all of the data that we would need funneled into DiffusionLastModifiedController::renderColumns, but had one particular question about how to proceed. Should we render coverage for each file in this same column? If so, how can we differentiate per-file coverage UI elements from per-directory UI elements?
Dec 14 2015
Nov 23 2015
Nov 19 2015
We can leave this at "reduced expected incidence rate to < 1/3k", and I'll come back here if it happens again, if you don't have any more obvious ideas.
Oh, I lied. I checked the old logs: this error was present the last two times it occurred, but not present when it happened this evening.
This perfectly recreates the symptoms, except it generates an error in the daemon log:
DAOs prefixed with CISubmitQueue are in their own database. There have never been any problems with referential integrity, and we've never seen anything in the database that would indicate we'd written records incorrectly. But you may be right about the tasks being written before the plan is written. I suspect that would cause errors, though; I can try to reproduce that first in order to rule out our specifics being the problem.
Let me post the whole block; it didn't seem relevant before:
Nov 10 2015
Nov 8 2015
Nov 6 2015
Oct 30 2015
Oct 26 2015
Oct 25 2015
Yeah, it's busted by default, including for newly created steps.
If you have a many-minutes-long build process anyway, maybe shaving startup time from 2 minutes to 2 seconds doesn't really matter.
I did do at least this to try to get it to disappear:
Oct 23 2015
I'll put them into rotation and report back on A/B results.
I replaced the existing logo with the one you provided some time ago. It has not been met with universal fanfare as I would have expected given the level of artistry displayed.
Oct 22 2015
I thought Phabricator was an adventure game, that sounds an awful lot like RTS.
"out of the box" support would look something like: if you triggered the action that created this build, only you have control over stopping it (or admins). If an application started the build (eg, global herald rule in response to a commit), only admins have control.
We're already waist-deep in modifying DB records in Harbormaster that we probably shouldn't be touching; any expanded control over policies there at all should enable our other custom apps to make it do what we like.
Oct 20 2015
I'm ~1 week away from actually having our ducks in a row so we can merge upstream, and would expect to land this right about then. Not sure if the workboard implies some kind of scheduling?
Oct 18 2015
Oct 15 2015
Yeah, it works for us to just kill it in arc unit for now; we're stuck a few weeks behind master for the time being, so an upstream fix wouldn't help us immediately anyway.
So I guess yes? You're saying implement this in the test engine?
We have our own build step which just runs arc unit --everything --coverage --ugly in a working copy and then deals with the exit code and JSON returned.
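Roughly, the step looks like the sketch below. The flags are the ones we actually pass; the assumption that arc emits a single JSON document on stdout, and the function names, are illustrative and may not match your arc version.

```python
import json
import subprocess

def parse_results(stdout):
    """Parse arc's stdout as JSON, failing soft on non-JSON chatter."""
    try:
        return json.loads(stdout)
    except ValueError:
        return None

def run_arc_unit(cwd):
    """Run the full suite with coverage; return (exit_code, results).

    Hypothetical wrapper: our real step also maps exit codes and test
    results onto build pass/fail, which is elided here.
    """
    proc = subprocess.run(
        ["arc", "unit", "--everything", "--coverage", "--ugly"],
        cwd=cwd, capture_output=True, text=True,
    )
    return proc.returncode, parse_results(proc.stdout)
```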