
Provide evidence-based forecasting of projects
Status: Open, Needs Triage | Visibility: Public

Description

This is a follow-up to the Conpherence discussion about documenting WMF reporting use cases; it is intended to support Phacility's research into implementing reporting.

Use Cases

  • As a Product Manager, I want to know if the next Milestone is likely to be delivered on time, so that I can re-prioritize/re-triage/re-plan accordingly.
  • As a Dev Manager, I want to know if a Milestone or project is stalled (because nothing's getting resolved, or because tasks are being added faster than they are being resolved), so that I can make planning decisions.
  • As a Product Manager, I want to know when a hypothetical project would get finished, so I can make a more realistic long-term roadmap.

Sample Solution

[Attachment: col_tranche17_burnup_count.png]

                Weeks until completion
                By Points                By Count
Category        Pess.   Nominal  Opt.    Pess.   Nominal  Opt.
Feeds           Never   42       7       Never   40       20
Navigation      Never   Never    Never   Never   Never    Never
Descriptions    Never   2        Never   Never   Never    4
UX-Debt         Never   25       12      None    Never    29
Offline         Never   Never    5       Never   36       12
Bugs            Never   Never    23      None    Never    30
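For reference, each cell above is a "weeks until completion" figure derived from the remaining scope (measured either by story points or by task count) and a net weekly velocity, where a non-positive net velocity yields "Never". A minimal sketch of that arithmetic, with the function name and example numbers purely hypothetical:

```
import math

def weeks_until_completion(remaining, velocity, scope_growth):
    """Weeks to finish `remaining` work (story points or task count).

    `velocity` and `scope_growth` are per-week rates; if scope grows at
    least as fast as work is resolved, the forecast is "Never".
    """
    net = velocity - scope_growth
    if net <= 0:
        return "Never"
    return math.ceil(remaining / net)

# Hypothetical inputs: 84 points remaining, 3 points resolved per week,
# 1 point of new scope added per week.
print(weeks_until_completion(84, 3, 1))  # 42
print(weeks_until_completion(84, 1, 2))  # Never
```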

Design and Implementation Issues

  • There are many ways to forecast. Velocity-based forecasting (i.e., drawing lines on a burnup chart) is probably the simplest, but it still requires tracking both history and scope growth. Monte Carlo and other methods are also common.
  • How much uncertainty should be incorporated into the forecasts?
    • Our experiments have used separate uncertainty ranges for velocity and for scope growth: e.g., the optimistic estimate uses the mean of the 3 best weeks out of the last 3 months, while the average uses the mean of just the last 3 weeks, and so on (full notes); see the sketch after this list. This seems to balance simplicity with accuracy, but still produces many nonsensical forecasts.
    • If accuracy is defined as the real answer falling within the range of uncertainty, then users tend to be very uncomfortable with genuinely accurate forecasts, because both the algorithmically suggested and the anecdotally appropriate ranges of uncertainty tend to be comically large.
  • Measuring growth and velocity per week is misleading for teams that close tasks on a bi-weekly or longer schedule, e.g., classic Scrum.
  • Defining the scope of a forecast can be tricky when it doesn't correspond to exactly one Milestone or one Project.
  • Teams have expressed discomfort with attempting detailed forecasts of future projects based on historical velocities, fearing that the data would be meaningless, since velocity tends to be very specific to particular developers, projects, and contexts.
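
To make the uncertainty ranges in the first two bullets concrete, here is a minimal sketch under stated assumptions: the optimistic velocity is the mean of the 3 best weeks in roughly the last 3 months and the nominal velocity is the mean of the last 3 weeks (per the notes above), while the pessimistic rule (mean of the 3 worst weeks) and all names are assumptions for illustration. Scope growth is measured the same way with the bounds mirrored, and each pairing feeds the weeks-until-completion arithmetic from the earlier sketch.

```
from statistics import mean

def rate_bounds(weekly_values):
    """(pessimistic, nominal, optimistic) weekly rate from per-week
    history (points or counts), using roughly the last 3 months."""
    history = list(weekly_values)[-13:]        # ~13 weeks, about 3 months
    optimistic = mean(sorted(history)[-3:])    # mean of the 3 best weeks
    nominal = mean(history[-3:])               # mean of the last 3 weeks
    pessimistic = mean(sorted(history)[:3])    # assumption: mean of the 3 worst weeks
    return pessimistic, nominal, optimistic

def forecast_weeks(remaining, weekly_resolved, weekly_added):
    """(pess, nominal, opt) weeks until completion, or "Never" when the
    matching scope growth is at least as fast as the matching velocity."""
    v_pess, v_nom, v_opt = rate_bounds(weekly_resolved)
    # Bounds on scope growth are mirrored: adding scope quickly is pessimistic.
    g_opt, g_nom, g_pess = rate_bounds(weekly_added)

    def weeks(velocity, growth):
        net = velocity - growth
        return "Never" if net <= 0 else round(remaining / net, 1)

    return weeks(v_pess, g_pess), weeks(v_nom, g_nom), weeks(v_opt, g_opt)
```

A Monte Carlo variant of the same idea would repeatedly sample weekly resolved and added values from these histories and report percentiles of the simulated completion week.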