Prior to building Facts, I'd like to put Dashboards/Panels on firmer footing. This is mostly fixing bugs and making minor improvements, although I'd also like to build "Portals" while we're here.
- See T13272 for general improvements to existing dashboards and panels.
- See T13275 for "Portals". A portal is a page with a list of dashboards and other resources in a menu.
Facts and Charts
PHI203 describes some specific use cases, but I think the corresponding toolset to build for them is just the straightforward "agile" tools.
In one use case here, a team wants to be better at predicting whether they'll hit a relatively near-term deadline ("we often end up in situations where we are 75% of the way through the sprint, but have only done 25% of the planned commitments"). Conceptually, I'd expect an experienced project manager (and, really, every member of an experienced technical team) to "know" this with more accuracy than a tool can provide, although this may be unfair. But even if the human understanding of this is better than the chart understanding, I think having a chart can make communication about velocity much easier, and it may be a useful tool for convincing engineers that their planning skills could use improvement while framing it as a challenge to overcome rather than a nagging, confrontational mess. If nothing else, a chart can be a reality check against the temptation to overpromise.
T12459 asks for forecasting. I don't plan to pursue this initially because I believe forecasting is very difficult, and I'm hesitant to ship a product which makes claims about the future without any kind of feedback loop to verify that the forecasts are accurate or to correct them when they aren't. If charting sees significant use, we may be able to pursue this in the future. Simple forecasts, like "will this project ever complete", tend to be obvious from the shape of the chart anyway, since you can look at it and see whether the trend line is headed up or down. Since part of Facts will also just be API access to datapoints, installs could build this externally in the short term.
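To illustrate how cheaply an install could build a naive "will this ever complete" check externally, here's a sketch that fits a least-squares trend line to datapoints and reads off the sign of the slope. The datapoint shape (`(epoch, open_count)` pairs) is an assumption about what the Facts API would expose, not a description of an actual endpoint:

```python
def trend_slope(datapoints):
    """Least-squares slope of open-task count over time.

    `datapoints` is a list of (epoch, open_count) pairs -- a hypothetical
    shape for Facts API output, assumed for illustration.
    """
    n = len(datapoints)
    mean_x = sum(x for x, _ in datapoints) / n
    mean_y = sum(y for _, y in datapoints) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in datapoints)
    den = sum((x - mean_x) ** 2 for x, _ in datapoints)
    return num / den if den else 0.0

# A negative slope means open work is trending toward zero ("will complete");
# a non-negative slope means the project is treading water or growing.
```

This is exactly the "look at the shape of the chart" heuristic made explicit, and it shows why a serious forecasting product is a much bigger commitment: anything beyond the sign of the slope needs confidence intervals and a feedback loop.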
T12410 asks for some relatively specific reports, including a not-exactly-time-series report. I don't expect to pursue these use cases in the near term. Other strategies exist for finding long-surviving tasks within a project (query open tasks ordered by date created).
T12403 asks for a specific chart based on relatively straightforward data. We probably won't be able to produce this chart for now, but I expect to be able to produce the data.
T12177 is a different request and asks for charts around review lifecycles. The general goal here is to encourage good engineering behavior (like small, well-separated changes) by communicating that you'll get changes reviewed faster if you do the legwork to set them up properly (maybe this isn't true universally, of course). Another goal is to put social pressure (and, perhaps, performance review pressure) on teams and individuals with poor review participation and highlight teams and individuals who are responsive.
The latter goal is somewhat dangerous -- "Hall of Heroes" essentially led to open revolt -- but I suspect we can ease into this with little contention.
An upstream goal is to replace the awful Maniphest → Reports view with something maintainable.
Generally, I expect to focus on building a fairly traditional "burndown" chart first. This will let us replace the Maniphest reports view, speaks to most of the use cases we've seen, and should cover most of the basic types of fact extraction and reporting that other charts will eventually need.
The facts we need to extract to render a burndown chart are:
- Scope changes: tasks created into a project, later tagged with a project, or removed from a project. (For users, assignment changes.)
- Estimation changes: point value changes on tasks. (For users, point value changes on assigned tasks.)
- Completions/resurrections: tasks moved to a "closed" state from an "open" state, or to an "open" state from a "closed" state.
Additionally, the datapoint generation workflow should check for (and reject) duplicate datapoints, since these are likely errors, at least until we encounter a case where they aren't.
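The extraction steps above can be sketched as follows. The event shapes, field names, and `Datapoint` structure are assumptions for illustration, not the actual Facts implementation; the dimension would be a project PHID for project burndowns (or a user PHID for per-user charts):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Datapoint:
    fact: str       # "scope", "points", or "status" (assumed fact names)
    object: str     # task identifier
    dimension: str  # project (or user) identifier
    epoch: int      # event timestamp
    value: int      # signed delta

def extract_datapoints(events):
    """Turn transaction-like events into burndown datapoints,
    rejecting exact duplicates as likely extraction errors."""
    seen = set()
    datapoints = []
    for event in events:
        if event["type"] == "project.add":
            # Scope change: task created into / tagged with the project.
            dp = Datapoint("scope", event["task"], event["project"], event["epoch"], +1)
        elif event["type"] == "project.remove":
            dp = Datapoint("scope", event["task"], event["project"], event["epoch"], -1)
        elif event["type"] == "points":
            # Estimation change: record the point delta.
            dp = Datapoint("points", event["task"], event["project"], event["epoch"],
                           event["new"] - event["old"])
        elif event["type"] == "status":
            # Completion (+1) or resurrection (-1).
            dp = Datapoint("status", event["task"], event["project"], event["epoch"],
                           +1 if event["new"] == "closed" else -1)
        else:
            continue
        if dp in seen:
            raise ValueError("duplicate datapoint: %r" % (dp,))
        seen.add(dp)
        datapoints.append(dp)
    return datapoints
```

A burndown chart is then just the running sum of these deltas over time, which is why the same extraction machinery should generalize to other charts.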