Dealing with estimates and embracing uncertainty
My heuristic for working with target dates, and how I started getting comfortable with the uncomfortable.
Why estimates?
I stumbled upon a tweet a few weeks ago that resonated deeply with me. I used to believe that estimating was a total waste of time, and I have gone through the phases of:
Panic: “It’s impossible to give an exact date. My head will be ripped off if I miss it.”
Lottery: “Hmm … we can give you this in 3 days!” - 3 days later - “It might take a bit longer perhaps tomorrow at 05:47 PM”
Blamer: “But that was dependent on the other team!”
Apostate: “Well, next time if we cut the testing …”.
Repentant: “Clearly, cutting the testing did not work. In fact, it just delayed us even more due to bugs on go-live day.”
Far-conservative: “I think that this will take us 2 months - to change the color of a button”.
Reasonable: “Let me get back to you later on, once we do high-level research on this topic”.
The last one is where I’m currently at. Been here for a while now. And it is what I believe to be the most pragmatic, trust-earning, team-backed approach of all.
This issue will be about distilling the reasonable approach to estimates I’ve been using, both as a Software Engineer back in the day and now as an Engineering Manager.
Before we start, what about #NoEstimates?
I see this as a counterpoint, not a contradiction.
Allen Holub argues that estimates are a waste and go against the Agile Manifesto’s principles. He says that engineering teams should follow an approach where user stories are so small that, over time, they tend to have similar sizes, so that teams can project future work based on historical data and story maps. While I agree with some of these arguments, and with the tactics for work breakdown, estimates are about much more than just giving a date.
In fact, estimates shouldn’t be focused on accuracy first, but instead on creating awareness and alignment, across multiple streams, of the work that can and needs to be done for an expected target date, as well as a technical and organizational landscape of the challenges ahead. Udi Dahan puts it perfectly: “estimates are an important part of the information that needs to flow around the organization to help quantify and mitigate risk”.
How to estimate?
In its essence, estimating is about dealing with the unknown. Due to this primary fact, there’s no such thing as an accurate or precise estimate, and relying on that expectation will certainly lead to frustration. The paradox is: even though estimates are mostly wrong when it comes to exact dates, they are not useless. Nor should they be discarded.
The approach presented here focuses on trying to eliminate guesses and, when that’s not possible, on stating the uncertainties loud and clear. It assumes that the What and the Why of the given initiative are already clear, as presented in the last issue of the newsletter. The process consists of three major milestones:
Modelling → Refining → Communicating
We’ll deep dive into each of these steps below with prompts that I usually bring to my teams to facilitate the discussions.
Modelling
First, Zoom-out
Do we have any dependencies on external teams or services? What roll-out plan could be aligned between us to unblock each other?
If so, do we depend on available resources or on work that will still be delivered?
Does our current architecture have all the required capabilities to deliver this initiative?
Then, Zoom-in
Is there any tech debt that must be removed or will block our work?
Is there any tech debt that we could remove, leveraging the fact we’ll be already in that part of the code?
Does our current architecture support the requirements? If not, what are the possible impacts (introduction of new infrastructure components, external services, data migrations, API versioning, etc)?
Are we confident that we have all the knowledge necessary to deal with the changes?
What are the major features that need to be implemented to achieve our goal?
What I expect to have here:
A dependency map - if any - alongside possible risks like others’ WIP.
A clear architectural landscape - both as-is and to-be views.
A high-level task breakdown - like a first draft of a story map.
What I don’t expect to have here:
A super granular and refined task breakdown. This will take time, and at this point what we want to achieve is a bird’s-eye view.
An exact measurement of each task - not like a story map ready to work on.
Outcomes before next step
With this boiled down, we can then start sizing the mapped work using a technique called t-shirt sizing.
Avoiding the pitfalls of t-shirt sizing
My major tips for leveraging the t-shirt sizing technique in your estimations: stick to a few sizes, and make them standard across your project teams so that they reflect periods of time that match the teams’ workflows. For instance, you could use S for one iteration cycle, M for two iteration cycles, and L for more than two iteration cycles, where an iteration cycle might last 2 weeks.
L-sized estimates are usually a warning sign that indicates the work either can be broken down into smaller pieces or has a lot of uncertainties. Take this into account when dealing with such estimates.
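To make the convention concrete, here is a minimal sketch of the size-to-cycle mapping from the example above. The mapping values and the 2-week cycle length are illustrative assumptions taken from the example, not a prescription; adjust them to your teams' workflows.

```python
CYCLE_WEEKS = 2  # assumed length of one iteration cycle, in weeks

# Sizes map to iteration cycles; L is a deliberate "warning" bucket
# signalling work that should probably be broken down further.
SIZE_TO_CYCLES = {
    "S": 1,  # one iteration cycle
    "M": 2,  # two iteration cycles
    "L": 3,  # more than two cycles: revisit the breakdown
}

def weeks_for(size: str) -> int:
    """Translate a t-shirt size into an estimated number of weeks."""
    return SIZE_TO_CYCLES[size] * CYCLE_WEEKS

print(weeks_for("M"))  # 4
```

Keeping the mapping this small is the point: fewer sizes means fewer arguments about whether something is a 5 or an 8, and a direct translation into the team's actual cadence.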
Refining
After modelling, you’ll usually have a very rough vision of the work that needs to be done. The refinement occurs in two steps: accounting for your team’s business-as-usual, and avoiding overengineering.
For the first step, you’ll probably have either an expected delivery date - that’s negotiable - or an expected start date. That will help you and your team with the following prompts:
Assuming we start the work on week X, after finishing our current work: do we have any vacations or out-of-office time planned within our team around this period?
Are we really assuming we have our WIP limited to this initiative?
Is a dedicated monitoring and stabilization phase required after the rollout? Do we have a disaster recovery plan?
Are we considering our quality assurance procedures within the task scopes (like automated or manual tests)?
Which capacity constraints do we need to consider due to business-as-usual duties, such as team ceremonies, on-call rotations, incident management, monitoring, and observability?
There will surely be a lot more to ask in your own context, so adapt as needed. The point is that this refinement step is critical, as we usually tend to miss things like people’s normal lives, focus, and BAU tasks. All of these must be considered within your estimates, as they are a natural part of a team’s iterations.
For the second step, keep in mind that for those who ask you for an estimate, the shorter, the better. That’s why the team should challenge itself to understand:
Which capabilities that we already have can be leveraged?
Do we really need this much work for a successful delivery?
How can we make the implementation simpler?
Which assumptions would make the work simpler if removed? Are they confirmed? Do they fall under a must-level requirement?
What I expect to have here:
A possible difference in the t-shirt sizing output achieved in the Modelling step.
Awareness of the team’s duties besides the initiative tasks.
Quality assurance processes embedded within the estimates given - we should be way past the extreme “go-horse” phase by now. A dedicated testing phase might also be acceptable in some scenarios, like distributed ones.
Solutions that leverage the team’s current knowledge the most, with as little work as possible.
What I don’t expect to have here:
Estimates that do not match the team’s iterations - if your team measures cycles in weeks and you estimate in days, for instance.
Separate estimates for testing and development.
Overengineered solutions.
Outcomes before next step
By now you should have a high-level overview of the project, with dependencies, risks, high-level tasks, and their refined t-shirt sizes. All of this, added up, should give you a final t-shirt size for the whole project, from which you can estimate how many iteration cycles the team expects to need to deliver a given capability.
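As an illustration, the roll-up can be sketched as a simple sum over the refined sizes. The task names and their sizes below are entirely hypothetical, and the size-to-cycle mapping is the assumed convention from the t-shirt sizing example earlier.

```python
# Assumed convention: S = 1 cycle, M = 2 cycles, L = 3+ cycles.
SIZE_TO_CYCLES = {"S": 1, "M": 2, "L": 3}

# Hypothetical high-level tasks with their refined t-shirt sizes.
tasks = {
    "data migration": "M",
    "new API endpoint": "S",
    "rollout and monitoring": "S",
}

total_cycles = sum(SIZE_TO_CYCLES[size] for size in tasks.values())
print(total_cycles)  # 4 iteration cycles for the whole project
```

The output is intentionally coarse: the goal is an iteration-cycle horizon to communicate, not a day-by-day schedule.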
Communicating
Setting expectations properly is better than estimating with accuracy. Leverage informed decisions to provide alignment over uncertainty; they will be your ally when communicating estimates. After the modelling and the refinements are concluded, it’s important to clearly state to your audience:
What you know and what you don’t know at that point in time. Just like the clear requirements mentioned in the last post, confirmed and unconfirmed assumptions will weigh heavily on your degree of confidence.
If there’s something yet to be confirmed, assume the worst-case scenario. It’s a win-win for everyone. By clearly stating to your stakeholders that there’s something blurry ahead that requires confirmation, it becomes in everyone’s interest to resolve it, so that less risk is added to the project.
Communicate progress amidst imperfect horizons. Over the course of a project, the landscape might change. That’s why it’s important to keep a constant pace of status updates shared with your project’s audience. This will make sure everyone is aligned on the evolution of the project, as well as on the current risks. Here’s a framework I use to communicate at the beginning of every iteration cycle:
Highlights: outcomes achieved during the last iteration.
Lowlights: challenges, roadblocks and risks added during the last iteration.
What’s next: high-level plan and expected outcomes of the next iteration.
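For illustration, that framework can be templated as a tiny helper. Only the three section names come from the text; the function, its parameters, and the sample items are hypothetical.

```python
def status_update(highlights, lowlights, whats_next):
    """Render an iteration status update as plain text."""
    sections = [
        ("Highlights", highlights),
        ("Lowlights", lowlights),
        ("What's next", whats_next),
    ]
    lines = []
    for title, items in sections:
        lines.append(f"{title}:")
        lines.extend(f"  - {item}" for item in items)
    return "\n".join(lines)

print(status_update(
    highlights=["Shipped the new checkout flow behind a feature flag"],
    lowlights=["Upstream team's API contract still unconfirmed"],
    whats_next=["Start the data migration (size M, ~2 cycles)"],
))
```

Whether this lives in code, a wiki template, or an email draft matters less than sending it at a constant pace, every iteration, even when there is little to report.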
Informed decisions will also help you with scope changes. Who hasn’t felt in their bones the pain of an abrupt change of scope in the middle of a project? While that might be a symptom of an even deeper issue - like not having the what and the why clear and aligned from the start - this kind of situation happens more often than we would like. Therefore, it is important not to panic, but to act accordingly and bring consensus to the room.
First, it is important to understand whether a scope change translates into a change of the expected workload. Sometimes a confirmed assumption has changed due to external forces, but that doesn’t necessarily translate into a change of requirements. Only then, if a change of requirements really takes place, must an evaluation happen to define whether it adds extra workload to the project. Clearly defining what was in and out of scope at the time the estimate was given will give you more room to either negotiate more time or cut another part of the scope to deliver within the previously agreed time.
Keep promises and deliver
I’ve always felt bad about the sentence “underpromise and overdeliver” because, honestly, it feels like cheating to me. While I understand part of its rationale, it might reflect a culture of mistrust and zero error tolerance, which is something we want neither to achieve nor to foster.
I felt I didn’t have much of an alternative until I found a text that addresses this issue with precision: “‘Realistic’ is acknowledging that there is variability in the system. ‘Reality’ isn’t creating a false sense of predictability by padding goals to give the impression that the team is regularly exceeding expectations.”
As Engineering Managers, it’s our responsibility to foster a culture of trust and learning among the execution stakeholders of a tech project. Under uncertainty, there’s a high probability of assumptions being wrong. But that shouldn’t be an issue if the uncertainties and risks are made clear and there’s a willingness to learn while progressing. Informed decisions are a great tool for that.
Wrapping-up
In summary, my process for effective estimation is:
First breadth. Identify the topology, dependencies, assumptions, and risks.
Then depth. With a clear horizon, try to have a deeper understanding of the work ahead.
Calibrate. Add all the weights necessary to find balance. What’s under your control will improve your degree of confidence in the given estimate. Reassert the importance of quality.
Take informed decisions. Given the why and the what, showcase the how and the how long, based on the overall knowledge, open questions, and risks.
Communicate and evaluate regularly. Clear checkpoints will set the proper expectations and help to redirect whenever it’s needed.
Retrospectives should not be taken for granted. They can be done internally with the execution teams, but also on a broader scope with the whole project team. I find the first more useful for fine-tuning technical processes and team rituals, and the latter more useful for creating a workgroup culture between multiple streams. In both, there are some prompts that might help us reflect and turn findings into learnings:
For successful estimates:
What made us hit the target this time, so that we keep doing it in the future?
For failed estimates:
What do we know now that we didn’t know at the time the estimate was given?
How can we improve risk visibility and mitigation?
For both:
What do we know now that could help us anticipate roadblocks next time?
In the end, I tend to think about the process of estimating as something I once read in a post: “we need to accept our ignorance, and experiment to learn. And get really good at it.”
Just as with anything else, estimates improve with practice. The process shared here is what has been working so far for me and the teams I’ve worked with. It is prone to errors and may be subject to further enhancements as we learn something new.
Thanks for reading this issue of the newsletter. But before you go, I’d like to hear from you: how does your estimation process differ from the one here? Which points resonate the most with you? Let’s chat in the comments section below.
See you next month!