I’ve written before about why averages are such a dangerous way to reason about uncertain systems. In *Monte Carlo Simulation for Projections and Estimates*, the main point was simple: when the inputs are uncertain, a single output number hides the real shape of the problem.
That same issue shows up constantly in software delivery.
Teams often start with a project plan that is already trying to express something important:
- which tasks can happen in parallel
- which tasks block downstream work
- which stages are likely to blow up in variability
- which parts of the plan are gated by a scarce person or specialty
But then all of that structure gets flattened into a single date.
Once you do that, you lose the thing that matters most: the dependency graph itself.
## A Better Starting Point
The new Project Estimation Workbench starts from a different premise.
Instead of asking for one giant estimate, it asks for a dependency graph made of individually assignable tasks. The format is intentionally simple: write a Mermaid flowchart that captures what depends on what, paste it in, and let the tool do the rest.
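For example, a small release plan in that format might look like this (the task names are illustrative, echoing the shape of the built-in sample):

```mermaid
flowchart TD
    backend[Backend API work] --> stage[Deploy to staging]
    frontend[Frontend UI work] --> stage
    stage --> test[Testing]
    test --> review[Release review]
    review --> deploy[Production deploy]
```

Each node becomes an assignable task, and each arrow becomes a dependency the simulator has to respect.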
If you just want to see the idea quickly, the page opens with a built-in sample workflow. You can run the simulation immediately, add engineers, override assignments, and watch the completion-date distribution move as the structure of the work changes.
That first-run demo matters. A blank canvas is usually the worst way to explain a tool like this. Seeing a concrete release plan with parallel frontend and backend work, a staging gate, testing, review, and deployment makes the concept legible much faster than documentation alone.
## What the Workbench Models
The estimator is built around a few ideas:
- Dependencies matter. A task that looks small in isolation can still dominate the schedule if a lot of work fans into or out of it.
- People matter. Adding an engineer only helps where the graph actually allows parallel work.
- Uncertainty matters. Testing, bug fixing, stabilization, and release review usually deserve a wider tail than straightforward implementation work.
- Dates should be distributions. A P50 date and a P90 date tell you much more than a single “target” date ever could.
That is where the Monte Carlo piece becomes useful. Each task gets a tunable duration and an optimistic-to-pessimistic bias. The simulator samples those task durations repeatedly, respects dependencies, respects engineer availability, and then produces a histogram of likely project completion dates.
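To make the mechanics concrete, here is a minimal sketch of that kind of simulation. Everything in it is an assumption for illustration: the task list, the three-point (optimistic, most likely, pessimistic) durations, the dependency map, and the headcount are all hypothetical, and the greedy scheduler is one simple way to respect dependencies and engineer availability, not necessarily what the workbench does internally.

```python
import random

# Hypothetical plan: task -> (optimistic, most likely, pessimistic) days.
# Testing and review get deliberately wider tails, per the point above.
TASKS = {
    "backend":  (3, 5, 8),
    "frontend": (3, 4, 7),
    "staging":  (1, 1, 2),
    "testing":  (2, 4, 10),
    "review":   (1, 2, 6),
    "deploy":   (1, 1, 2),
}
DEPS = {
    "staging": ["backend", "frontend"],
    "testing": ["staging"],
    "review":  ["testing"],
    "deploy":  ["review"],
}
ENGINEERS = 2

def simulate(rng):
    """One run: sample durations, then greedily schedule tasks,
    respecting both the dependency graph and engineer availability."""
    durations = {t: rng.triangular(lo, hi, mode)
                 for t, (lo, mode, hi) in TASKS.items()}
    finish = {}
    engineer_free = [0.0] * ENGINEERS  # when each engineer is next free
    done = set()
    while len(done) < len(TASKS):
        # Tasks whose dependencies have all been scheduled.
        ready = [t for t in TASKS if t not in done
                 and all(d in done for d in DEPS.get(t, []))]
        for t in sorted(ready):  # deterministic order for the sketch
            deps_done = max((finish[d] for d in DEPS.get(t, [])), default=0.0)
            i = min(range(ENGINEERS), key=lambda k: engineer_free[k])
            start = max(deps_done, engineer_free[i])
            finish[t] = start + durations[t]
            engineer_free[i] = finish[t]
            done.add(t)
    return max(finish.values())

rng = random.Random(42)
runs = sorted(simulate(rng) for _ in range(5000))
p50 = runs[len(runs) // 2]
p90 = runs[int(len(runs) * 0.9)]
print(f"P50: {p50:.1f} days, P90: {p90:.1f} days")
```

Collecting `runs` into a histogram instead of two percentiles gives the completion-date distribution the page renders.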
## Why the Overlapping Task View Matters
The overall completion histogram is useful, but it is not the whole story.
The more interesting question is often: what is driving the tail?
So the workbench also renders a per-task completion heatmap, laid out waterfall-style. The goal is to make bottlenecks visually obvious:
- tasks whose completion window is wide
- tasks whose finish windows stack up behind shared gates
- tasks that keep pulling the downstream work to the right
That gives you something actionable. You can narrow scope, split a task, add a person, or change sequencing and then immediately see whether the risk actually moved.
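The tail-driver question reduces to a simple statistic: across runs, how wide is each task's completion window? Here is a hedged sketch of that comparison, using synthetic per-run finish times in place of real simulator output (the task names, distributions, and percentile cutoffs are all illustrative):

```python
import random

rng = random.Random(7)

# Illustrative per-run finish times (days) for three tasks, as a Monte
# Carlo simulator might record them. "testing" is given a deliberately
# fat right tail to play the role of the bottleneck.
finishes = {
    "staging": [rng.triangular(8, 12, 9) for _ in range(2000)],
    "testing": [rng.triangular(10, 25, 12) for _ in range(2000)],
    "deploy":  [rng.triangular(13, 28, 15) for _ in range(2000)],
}

def percentile(samples, p):
    s = sorted(samples)
    return s[min(len(s) - 1, int(len(s) * p))]

# Width of each task's completion window: wide windows mark tail-drivers.
for task, samples in finishes.items():
    p10, p90 = percentile(samples, 0.10), percentile(samples, 0.90)
    print(f"{task:8s} P10={p10:5.1f}  P90={p90:5.1f}  window={p90 - p10:4.1f}")
```

A heatmap is just this table drawn as shaded bands per task; the tasks with the widest bands are the ones worth splitting, restaffing, or descoping first.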
## The Real Goal
I’m not especially interested in making project estimates feel more “precise.” The goal is almost the opposite.
I want estimates to be honest about uncertainty while still being useful for planning conversations. The graph should stay visible. The risk should stay visible. The impact of staffing and sequencing decisions should stay visible.
That’s what this new page is trying to make easier.
If you want to try it, start with the sample on /estimation/, then replace it with your own Mermaid workflow and see how the distribution changes as you tune the plan.