
Provide tools to combat and recover from abuse
Open, Low, Public

Description

This is an umbrella task for collecting "abuse" cases and discussing responses to them. Abuse is broadly when users use the software to be jerks: posting commercial messages, making a mess, vandalizing things, or generally causing trouble.

Currently, abuse cases are largely hypothetical: we have seen little real abuse in the wild. Our stance today is roughly this: we will take steps to anticipate and prevent serious damage (i.e., the software should not have features which would allow an abusive user to cause real harm that would be substantially difficult to repair), but it is not a priority to proactively build tools to detect, prevent, or repair every possible abuse scenario we can come up with. When abuse does occur, we'll help repair it within reason, and we'll consider improving tools to make protection or repair easier if the cost of repairing the abuse was out of line with the cost of causing it.

In general, users can't actually destroy anything and nearly all changes are logged and revertible (although not always trivially), so it is generally difficult for abusive users to cause much real damage. This problem is also almost entirely exclusive to open source installs.

Abuse we have seen firsthand:

  • "Security Research": A small number of enthusiastic security researchers occasionally register accounts and create a few tasks titled things like `'></textarea><img onerror="evil()` and such. This was more prevalent when our HackerOne program first launched, but has waned over time. We've responded by disabling these users and closing or deleting the objects they created. The cost of this disruption is currently very small, and existing tools are sufficient to manage it.
  • SEO Spam: A small number of users have registered accounts purely to post commercial messages with links to third-party sites. Presumably, their goal is to gain Google search ranking. We could add `rel="nofollow"` to outbound links to further discourage this, although I suspect these users are already wasting their time: they're filling out a CAPTCHA and navigating a whole registration process to post one link, which is usually taken down within a few minutes. We've responded by deleting the objects they created. The cost of this disruption is currently very small.
  • Testing: A moderate number of users treat this install as a test install and create test tasks, etc.
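
The `rel="nofollow"` idea above can be sketched roughly as a post-processing step over rendered HTML. This is a minimal illustration only; the hostnames, function name, and regex approach are assumptions for the sketch, and a real implementation would hook the markup renderer rather than rewrite HTML with a regex.

```python
import re

# Hypothetical sketch: add rel="nofollow" to outbound anchor tags so that
# spam links confer no search-engine ranking. The internal hostname is an
# invented example.
def add_nofollow(html, internal_hosts=("this-install.example.com",)):
    def rewrite(match):
        tag = match.group(0)
        href = match.group(1)
        # Leave internal links alone; only outbound links get nofollow.
        if any(host in href for host in internal_hosts):
            return tag
        if 'rel=' in tag:
            return tag  # already carries a rel attribute; leave it as-is
        return tag[:-1] + ' rel="nofollow">'
    return re.sub(r'<a\s+href="([^"]*)"[^>]*>', rewrite, html)
```
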

Abuse we've heard of actually happening on other installs:

  • A user reported via HackerOne that someone else used "Commandeer Revision" to take over some revisions they didn't like and then lock them down. Since this is very easy to repair with `bin/policy unlock` and unquestionably implicates the user, I don't currently think it's a problem that needs a specific response.

Hypothetical abuse:

  • (T7593) Since we support storage of arbitrarily large files, an enterprising user might upload "l33t w4r3z" and use a Phabricator install to distribute them.
  • (T4909) This task discusses concern over users deciding to leave a community and deleting all the comments they've ever made before they go. Comment history is always retained internally, so this does not cause permanent damage, but might not be easy to repair.

Event Timeline

epriestley added a project: Abuse.
epriestley moved this task from Backlog to Monitor on the Abuse board.

One specific thing we're starting to see more of is test users signing up and bulk-associating or bulk-merging large numbers of tasks. There doesn't seem to be any specific intent behind this, but I think these UIs are shiny and make it easy to click a lot of buttons and have a large effect. Here's one such user on this install, who has registered three accounts just because they like clicking buttons so much.

This user doesn't seem to be malicious or actually abusive; they just possess a child-like curiosity about what clicking buttons does.

WMF has seen this too (one user recently signed up and immediately merged about 50 tasks), although this was a week or two ago and I don't have a link handy.

It's possibly worth considering a "Can Use Shiny Buttons" policy to prevent use of the Associate/Merge dialogs in the UI, or building one-at-a-time versions of these workflows and providing the bulk flows only to users with a "Can Bulk Edit" permission. These users would almost certainly do much less damage if they had to click more.

We probably bear the brunt of this particular behavior, and it's not too bad right now. I'm also hesitant to add an option which essentially serves only this install, since we routinely reject unique features other installs are interested in, but it may be worth thinking about.

I think it would be preferable to keep a log of bulk edits in a global(?) history and allow them to be reverted easily. That would mitigate the issue described in the comment above, and would also provide value when someone unintentionally bulk-edits the wrong set of tasks/objects.
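
The bulk-edit history suggested above might look roughly like the following sketch, where each bulk operation records the fields it changed so a moderator can revert the whole batch in one step. All names and the data model here are invented for illustration; this is not Phabricator's actual transaction model.

```python
from dataclasses import dataclass, field

# Hypothetical record of one bulk operation: who did it, and the
# per-task before/after values it touched.
@dataclass
class BulkEdit:
    actor: str
    changes: list = field(default_factory=list)  # (task_id, field, old, new)

class BulkEditLog:
    def __init__(self, tasks):
        self.tasks = tasks    # task_id -> {field: value}
        self.history = []     # newest edit last

    def apply(self, actor, task_ids, field_name, new_value):
        edit = BulkEdit(actor)
        for tid in task_ids:
            old = self.tasks[tid].get(field_name)
            self.tasks[tid][field_name] = new_value
            edit.changes.append((tid, field_name, old, new_value))
        self.history.append(edit)
        return edit

    def revert(self, edit):
        # Restore each field to its pre-edit value.
        for tid, field_name, old, _new in edit.changes:
            self.tasks[tid][field_name] = old
```

The key design point is that the log stores old values at apply time, so reverting a spammer's batch does not require reconstructing state from individual transactions.
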

The past few days we've had a lot of spam on the Blender phabricator instance. These spammers are also assigning tasks to random users, who then get emailed. See here for examples:
https://developer.blender.org/maniphest/query/all/

There are some short-term things we can do, like manual account approval or detecting specific keywords in task titles and descriptions and rejecting based on that, but we'd like to solve this in a more reliable and automatic way.
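
The keyword-detection stopgap mentioned above could be as simple as the following sketch. The phrases are invented examples, and a list like this is easy for spammers to evade, which is exactly why it's only a short-term measure.

```python
# Hypothetical blocklist of spam phrases; reject task creation when the
# title or description matches any of them.
SPAM_PHRASES = ["printer support", "customer care number", "antivirus helpline"]

def looks_like_spam(title, description):
    text = f"{title} {description}".lower()
    return any(phrase in text for phrase in SPAM_PHRASES)
```
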

What sort of reliable and automatic solution are you hoping for? How could the system reliably, automatically detect that a user is a spammer or that a task assignment is unwanted?

Nuance (Phabricator Help Desk), which puts new tasks into a private queue, is the only reasonable way forward here I can think of. Anything else is cat-and-mouse with spammers, and that's just a huge time sink for us with no obvious benefit (99% of installs are private).

We can also let installs send us all their data, we'll decide if it's spam or not, then we remotely delete any data that we feel like deleting. But we'd have to charge like a gorillion dollars per message to make this sustainable today.

Of course there's no totally automatic and reliable system, we're just trying to find something better than manually removing dozens of spam tasks every day.

Some ideas would be:

  • Akismet integration like Gitlab has
  • Support reCAPTCHA v2 (assuming it's significantly better)
  • Custom Blender specific captcha or question that bots can't answer
  • Maniphest batch edit operation to remove or clear contents of selected tasks
  • A policy to disallow newly created users assigning tasks or adding subscribers, so we're not sending spam mail to our users

Bots and vandals are often distinct. It depends which type of user you're trying to mitigate against. MediaWiki has an extension that provides custom CAPTCHAs: https://www.mediawiki.org/wiki/Extension:QuestyCaptcha. Phabricator could have something similar, probably pretty easily. This might fend off bots, but likely wouldn't deter vandals.
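
A QuestyCaptcha-style question gate might look like the sketch below: the admin configures install-specific questions that a generic bot can't answer. The questions here are invented examples, not anything Phabricator or MediaWiki actually ships.

```python
import random

# Hypothetical admin-configured questions mapped to sets of accepted answers.
QUESTIONS = {
    "What renderer ships with Blender (6 letters, starts with C)?": {"cycles"},
    "What does the 'T' in task IDs like T123 stand for?": {"task"},
}

def pick_question():
    # Choose a random question to show on the registration form.
    return random.choice(list(QUESTIONS))

def check_answer(question, answer):
    # Case-insensitive, whitespace-tolerant comparison against accepted answers.
    return answer.strip().lower() in QUESTIONS[question]
```

As the comment above notes, this mainly fends off bots; a human vandal can answer install-specific questions just as easily as a legitimate user.
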

> A policy to disallow newly created users assigning tasks or adding subscribers, so we're not sending spam mail to our users

MediaWiki has this in the form of "autoconfirmed" users. On a per-wiki/per-installation basis, admins can set a threshold (e.g., 10 edits and 4 days) that a user must meet before being able to take certain actions.
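
An autoconfirmed-style check is straightforward; a minimal sketch, using the example thresholds from the comment above (10 edits and 4 days), might look like this. The function and threshold names are assumptions for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical admin-set thresholds for "autoconfirmed" status.
MIN_EDITS = 10
MIN_AGE = timedelta(days=4)

def is_autoconfirmed(registered_at, edit_count, now=None):
    """A user may take sensitive actions (assigning tasks, adding
    subscribers) only after meeting both the age and edit thresholds."""
    now = now or datetime.utcnow()
    return edit_count >= MIN_EDITS and (now - registered_at) >= MIN_AGE
```
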

Requiring a confirmed/validated e-mail address can also help against spammers. I can't remember if Phabricator already requires a confirmed e-mail address to do anything.

> I can't remember if Phabricator already requires a confirmed e-mail address to do anything.

It's configurable (`auth.require-email-verification`).

From what we've seen on this install, the "printer fax support" spammers are humans willing to go to significant lengths to overcome access barriers (they fill out Captchas, register and link GitHub/Google accounts, validate email addresses, successfully navigate workflow changes, originate from different remote addresses, and take actions slowly), so I suspect no automated system designed to deter bots will be effective against them. My best guess is that they're being recruited through Mechanical Turk or some similar system.

> From what we've seen on this install, the "printer fax support" spammers are humans willing to go to significant lengths to overcome access barriers [...], so I suspect no automated system designed to deter bots will be effective against them.

Totally agree; that's why I'd say the last two points in @brechtvl's list would be the most helpful in that situation:

  • Being able to wipe clean a set of posts (reports, but also patches, pastes, etc.) in a single click (replace the title with a canned one; remove content, comments, subscribers, assignee, etc.).
  • Having an option to prevent new users, for their first few days, from assigning reviewers/subscribers/assignees; or, maybe better and simpler, not sending any mail triggered by a new user's actions. That would completely prevent random users being spammed through mail, which is probably the most annoying part of our current issue.

It is quite hard to protect against real people dedicating their time to spamming projects. As @epriestley mentioned, there are paid systems for that.

Throwing out some ideas:

  • Add an option to blacklist IPs and prevent activity from them (we currently do that in BSD's firewall, but of course we can't give everyone on the moderator team access to it).
  • Add an option to blacklist registration from certain mail domains or wildcards. Currently it's the other way around: auth.email-domains is an allowlist.
  • Add an optional "Why do you want to join Phabricator?" field to the registration form, which then gets sent to administrators so they have an easier time figuring out whether it's a legitimate user. For example, "Want to submit a bug about branched path tracing not giving correct results" would be a reason to join, but "have something important to say" is not.
  • "Moderation lists", similar to how various blogs do it: until a user has gained enough "reputation", their submissions are only visible to the moderation team. That would make Phabricator much less attractive for spammers, because they then wouldn't have any way to show they did indeed spam the system (so they wouldn't get paid).
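
The mail-domain blacklist idea above, with wildcard support, might look like this sketch (the inverse of auth.email-domains). The patterns and names are invented for illustration.

```python
import fnmatch

# Hypothetical blocklist of mail domains and wildcard patterns; sign-ups
# from matching domains are rejected at registration time.
BLOCKED_DOMAIN_PATTERNS = ["mailinator.com", "*.throwaway.example", "spam-*.net"]

def registration_allowed(email):
    domain = email.rsplit("@", 1)[-1].lower()
    return not any(
        fnmatch.fnmatch(domain, pattern) for pattern in BLOCKED_DOMAIN_PATTERNS
    )
```
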

A while ago on some Phabricator instance, I saw people uploading copyrighted material as either files or Pholio mockups, then creating custom panels embedding those files, plus creating a dashboard. Very creative and convenient. :) My gut also tells me that Conpherence rooms accessible only to specific users were involved for communication/coordination, but since admins are not all-powerful, no one could prove it, I'm afraid.

[Screenshot: fs.png (626×672 px, 19 KB)]

"Recent Activity" on /p/username/ does not seem to display a user's panel and dashboard creations/edits, even when I had rights to access those items. This might be something to reconsider?

> We can also let installs send us all their data, we'll decide if it's spam or not, then we remotely delete any data that we feel like deleting. But we'd have to charge like a gorillion dollars per message to make this sustainable today.

https://azure.microsoft.com/en-us/pricing/details/cognitive-services/content-moderator/

As a small step toward a more general solution, I think it would be very helpful to allow admins to easily revert changes, where "revert" means that no trace of the vandal's actions remains afterward.

In addition, there are actions that are not currently revertible by an admin. I have run into these, though others might exist, of course:

  • archiving/deleting files: T7593
  • re-adding watchers who were removed from a project

Anecdotally, Disqus uses Akismet and the hit rate isn't great (I've observed both a high false-positive rate and a high false-negative rate).