We have benchmarks we can run on our code, but it's unclear where to store and surface the results. Ideally, some database would track a performance metric against commits or diffs. My current thinking is to run the benchmarks as a commit hook on the staging area and attach a message to diffs with the results. Phabricator has lots of applications; would this make a reasonable addition/feature request: a database that stores performance metrics tagged to commits or diffs, plus a visualization of them over time or per diff/commit?
Does any open source software currently do this?
If you just upload your benchmarks as "unit tests" with the cost as the "duration", I think that should be a mostly-reasonable way to store the data. You'd have to write custom charting on top of it today, though.
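To make the "benchmarks as unit tests" idea concrete, here is a minimal sketch of reshaping a benchmark run into a unit-test-style record with `name`, `result`, and `duration` fields. The function name `benchmark_as_test` and the exact record shape are illustrative assumptions, not an existing Phabricator API; the point is only that a test-result store keyed on duration can hold benchmark costs as-is.

```python
import json
import time

def benchmark_as_test(name, fn, iterations=5):
    """Time fn and return a unit-test-shaped record.

    The benchmark's cost is stored in the "duration" field, so an
    existing test-result database can hold it without changes.
    (Record shape is an assumption for illustration.)
    """
    # Take the best of several runs to reduce timer noise.
    best = float("inf")
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return {"name": name, "result": "pass", "duration": best}

if __name__ == "__main__":
    record = benchmark_as_test(
        "bench.sort_10k", lambda: sorted(range(10000, 0, -1))
    )
    print(json.dumps(record))
```

A commit hook could emit one such record per benchmark and upload the batch alongside the ordinary test results for the diff.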
I think charting is a reasonable thing for the upstream to eventually do (even for non-benchmark tests, it's reasonable to show how much each test costs over time, and how the total cost of the test suite is changing), but charting is generally a major project entangled with Facts / T1562.
This would also let you "fail" a benchmark if it took too long, and have that failure directly show up on revisions before they landed to warn authors/reviewers about performance regressions.
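One way to sketch that "fail if too slow" behavior: give each benchmark a time budget and report `"fail"` when the measured duration exceeds it, so the regression surfaces on the revision like any other failing test. The `budget_seconds` parameter and record shape here are assumptions for illustration, not an existing interface.

```python
import time

def benchmark_with_budget(name, fn, budget_seconds):
    """Run fn once and mark the result as a failure if it exceeds
    the given time budget (a hypothetical per-benchmark threshold)."""
    start = time.perf_counter()
    fn()
    duration = time.perf_counter() - start
    # A blown budget becomes an ordinary test failure, which is what
    # warns authors/reviewers on the revision before it lands.
    result = "pass" if duration <= budget_seconds else "fail"
    return {"name": name, "result": result, "duration": duration}
```

Budgets would need occasional retuning as the code and hardware change, but even a loose threshold catches large regressions before they land.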