
Surface lint failures in reviews for a given repository
Open, Needs Triage, Public

Description

I'm working to reduce the lint failures in our repository. I've been manually categorizing existing failures and either disabling rules or fixing existing code. Currently, I find it challenging to find existing failures. I've resorted to manually linting our most popular files as a proxy for the most popular lint errors (with some success), but it's not ideal.

I would like a way to surface the lint errors my team hits most often so I can understand our progress in codebase cleanup and prioritize parts of our codebase which are popular.

A solution might make it:

  • Easy to find a list of revisions which fail lint
  • Easy to find a list of lint failures sorted by how common they are
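One way to approximate the second bullet today is to tally failure codes from the linter's machine-readable output. The sketch below assumes something like `arc lint --everything --output json` emits line-delimited JSON mapping each file path to a list of messages carrying a `code` field; the exact output shape may differ by Arcanist version, and the sample data here is purely illustrative.

```python
import json
from collections import Counter

def tally_lint_codes(json_lines):
    """Count lint failure codes from line-delimited JSON lint output.

    Assumes each line maps a file path to a list of messages, each
    with a 'code' field (the shape may vary by linter version).
    """
    counts = Counter()
    for line in json_lines:
        line = line.strip()
        if not line:
            continue
        for path, messages in json.loads(line).items():
            for message in messages:
                counts[message["code"]] += 1
    return counts

# Illustrative sample in the assumed output shape:
sample = [
    '{"src/a.py": [{"code": "TXT3"}, {"code": "TXT5"}]}',
    '{"src/b.py": [{"code": "TXT3"}]}',
]
for code, n in tally_lint_codes(sample).most_common():
    print(code, n)
```

Sorting with `most_common()` gives exactly the "sorted by how common they are" view, which makes it easy to pick the highest-impact rule to back-fix first.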

Event Timeline

Currently, I find it challenging to find existing failures.

Are you using --everything?

There are many existing failures in our codebase. Linting all files would give copious (unhelpful) output. That being said, I haven't tried it because it would (probably) take hours to run. Should I give it a try?

I seek to fix existing code, but would like to prioritize files / errors that are actually punching us in the face today.
I was thinking that a reasonable definition of "punching us in the face today" is "coming up on differential revisions today", since many of our files aren't actively developed right now. Even though they may have some lint errors, they're probably not worth fixing until we actually start touching them again.

Our issues arise from having an old codebase to which we're trying to add reasonable linting going forward. I'm trying to ease the process of prioritizing back-fixing issues.

The workflow I used when introducing linting was:

  • Temporarily enable a large/default set of rules, run with --everything to get the lay of the land, and maybe make a list of common errors or get a list of all options from the linter itself
  • Decide that rule X looks reasonable ("no trailing whitespace" or something like that)
  • Send a diff out for review that fixed X and enabled the X linter rule
  • Repeat.
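For a rule like "no trailing whitespace", the back-fix step can be a one-shot script run before enabling the rule. This is a minimal sketch, not Arcanist's own fixer; the file glob and newline handling are illustrative assumptions:

```python
from pathlib import Path

def strip_trailing_whitespace(root, pattern="*.py"):
    """Rewrite files under `root` matching `pattern`, removing
    trailing whitespace from each line; returns the files changed."""
    changed = []
    for path in Path(root).rglob(pattern):
        original = path.read_text()
        fixed = "\n".join(line.rstrip() for line in original.split("\n"))
        if fixed != original:
            path.write_text(fixed)
            changed.append(path)
    return changed
```

Run it once, review the resulting diff alongside the rule change, and land both together so the rule is clean from day one.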

This builds trust in the linting system at each step, since it never gives people results they can ignore, or fuzzy "maybe this file is okay but this one is not" output.

(I've made a private note about this elsewhere, if you work with @nipunn.)