Product Engineering: Intentional, Impactful and Smart Work
Working at the right things at the right time.
In constrained times such as the current ones, when the market has forced companies to pivot from growth to profitability, the buzzwords have all been about efficiency. Major tech product companies have had to shift their focus overnight, which directly impacts their ways of working and the day-to-day of multi-disciplinary product teams.
This post shares some practices from my own experience, previously as a Software Engineer and now as an Engineering Manager in product companies, on how Engineers can maximize their impact through intentional, intense collaboration in cross-functional product teams operating under a clearly defined product strategy.
Business-driven contexts
It’s all about the money, right? In a growth context, we talked a lot about experimenting with multiple bets and measuring their results; now we’re much more profit-oriented, which means we should aim to be as lean as possible and avoid waste. One attempt to do so is through a magic expression called the business case. While there’s no strict formula or template for defining a business case, it usually comes down to hitting a set of target metrics, typically tied to a certain amount of either direct profit or cost savings. The business case will then eventually generate a product initiative, which in turn will have an estimated amount of effort to deliver it.
We will break down some of the concepts above, but first it is important to understand that tech product development has costs, and we, as software makers, must provide the company with value that makes that cost worthwhile. To build this mutual understanding, I usually like to break down any major development done in the teams I work with into the following questions: why, what, how and when. Each question has a driver; whenever the other party is not stated as the driver, it acts as a collaborator.
Question - Answer - Driver
Why? - Goal and Success Metrics - Product
What? - Functional Requirements and Out-Of-Scope - Product and Engineering
How? - Non-Functional Requirements and Estimates - Engineering
When? - Prioritization - Product and Engineering
I’ll try to break down each of the above following a “given, when, then” mental model, so that we can understand both the variables to consider when answering each question and the expected outputs of each answer.
Why: Goal and Success Metrics
Given a potential product initiative, when evaluating its business success, then a success metric is a measurable indicator of how well the initiative meets its goals and fulfils the business case.
The Product Manager is the person who navigates the problem space to find opportunities and converge on outcomes to solve them. They are the ones who must know, better than anyone, the impact of their proposals and how this impact can be measured. Therefore, I consider that they should be the driver in answering this question. Product engineers should collaborate with them, and sometimes with data analysts, to understand what metrics are available and how they can be measured from a data standpoint. And since we like to stay technical, let’s put it in our language: it all starts with querying a data source.
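To make that concrete, here’s a minimal sketch of computing a success metric from a data source. The `orders` table, the `used_new_checkout` column and the adoption-rate metric are illustrative assumptions, not anything from a real initiative:

```python
import sqlite3

# Illustrative stand-in for a real data source: a tiny in-memory table of
# orders, flagging whether each one went through a (hypothetical) new checkout.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, created_at TEXT, used_new_checkout INTEGER);
    INSERT INTO orders VALUES
        (1, '2024-01-10', 1),
        (2, '2024-01-11', 0),
        (3, '2024-01-12', 1),
        (4, '2024-01-13', 1);
""")

# Success metric for the initiative: adoption rate of the new checkout flow.
(adoption_rate,) = conn.execute(
    "SELECT AVG(used_new_checkout) FROM orders"
).fetchone()
print(f"New checkout adoption: {adoption_rate:.0%}")  # New checkout adoption: 75%
```

The real work, of course, is agreeing on which metric actually reflects the business case before anyone writes the query.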
What: Assumptions, Requirements and Out-Of-Scope
Given a product initiative, when defining its requirements, then functional requirements describe the specific features, functionalities, and behaviours that the product must have to meet its users' needs and enable business goals.
This one is where the collaboration is most intense, and I’ll try to break it down into three equally important levels.
Always (in)validate your assumptions
Assumptions are super important in the discovery and brainstorming phase. They are anything we can’t prove with evidence: representations of the beliefs we hold at a point in time. The problem with assumptions is that they are biased. Just like any other belief, they represent our leaning towards something. They also aren’t facts until proven otherwise. And since companies do not want to spend money on uncertainties, they can’t spend money on open assumptions.
On the other hand, assumptions can be a powerful tool for understanding whether a given feature is feasible. They can be used in brainstorming to represent constraints that might exist, such as time and dependencies, that will impact the development phase.
My advice to my teams is usually: before starting development, ensure all assumptions are (in)validated. Write them down and share them with your team, but always confirm whether each one is a fact or simply a belief. And if bets are not allowed, stick with the facts and discard the unconfirmed beliefs before moving on.
Requirement levels and why they matter as a natural language
Requirements are the outputs of the final laps through the problem space before kicking off the solution space. They are usually defined by the product team as the output of an action performed by an actor, and they usually come with a certain degree of necessity, what I like to call requirement levels.
You may have heard of the MoSCoW method, where a product team uses four categories of requirements to prioritise work: must-have, should-have, could-have and won’t-have. To remove the ambiguity of these words, I like to follow RFC 2119, whose keywords can be related to each of the requirement levels usually leveraged by Product teams, keeping both the development teams and the stakeholders aligned on what will be considered for the solution scope.
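As a sketch of how the two schemes can line up, here is a hypothetical mapping from MoSCoW levels to RFC 2119 keywords. The mapping reflects my own reading of the two (notably, a “won’t-have” has no RFC 2119 counterpart, it is simply out of scope), so treat it as a starting point rather than an official correspondence:

```python
# Hypothetical MoSCoW-to-RFC-2119 mapping; adjust to your team's conventions.
MOSCOW_TO_RFC2119 = {
    "must-have": "MUST",
    "should-have": "SHOULD",
    "could-have": "MAY",
    "won't-have": "OUT OF SCOPE",  # no RFC 2119 keyword; excluded this cycle
}

def word_requirement(level: str, behaviour: str) -> str:
    """Render a requirement statement with its unambiguous keyword."""
    keyword = MOSCOW_TO_RFC2119[level]
    return f"The system {keyword} {behaviour}."

print(word_requirement("must-have", "convert timestamps to the user's time zone"))
# The system MUST convert timestamps to the user's time zone.
```

Writing requirements this way makes the level part of the sentence itself, so nobody has to guess how negotiable a line item is.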
If you’re as curious as I was about the history behind the RFC, there’s an interesting thread on Quora explaining the context of its creation.
My suggestions in constrained contexts such as the current macro one are:
Stick to the “must-haves” and discard all the rest. We need to be lean and maximize the available capacity in order to increase impact. Therefore, anything that’s not strictly necessary should ideally be put aside.
Challenge the requirement levels. Debate within your team - Product Manager included - whether a given requirement really belongs in the must category. By having these sometimes-challenging conversations, we can refine requirements down to the essentials, move faster, and increase the impact of the solutions delivered by the engineering team.
Leverage the as-is as much as possible. A lot of functional requirements can be achieved by simply adapting and rethinking existing processes, using existing tech features as support. Part of challenging requirements is to ask:
Are new developments really needed for this?
Can we achieve this goal with the functionality that we already have available?
Challenging requirements is especially important in larger organizational structures, where the scope of a given initiative goes beyond a single team, since dependencies tend to increase the complexity, and therefore the cost, of development. Nonetheless, these conversations also need to be handled with care, so that over-discussion is avoided and the team doesn’t end up going nowhere.
What’s out is just as important as what’s in
Finally, after confirming the assumptions and levelling the requirements, it is super important to clearly state what’s out of scope. For the team, it ensures everyone moves in the same direction and avoids deviations over what’s left unsaid. For the stakeholders, it ensures proper expectation management of the delivery outputs, keeping the focus on the outcome, which is what really matters. As mentioned above, under hard constraints of either time or cost, scope is one of the variables that can be cut to improve efficiency.
How: Non-Functional Requirements
Given a product initiative, when requirements are made clear and defined, then non-functional requirements specify how the system should behave in terms of characteristics such as architecture, performance, reliability, usability, security, and scalability.
This is the part of the process where Engineering is the core driver. NFRs involve all the technicalities that we as Software Engineers love to talk about. From my experience, I find it effective to divide these non-functional requirements into two main categories:
High-Level NFRs: the big picture. Use them to map any dependencies that the initiative might have and have a zoomed-out architectural view of the solution.
Low-Level NFRs: the loupe view into a specific requirement. Imagine, for instance, that you have a product requirement categorized as a must which states the following: “The users must be able to see timestamps converted to their local time zones”. This statement indicates that you’ll need to leverage some time zone handling library or API if you do not have this knowledge in your codebase: it indicates you might have to introduce a dependency to your solution. I like to divide Low-Level NFRs into two subcategories:
Implicit. Your solution will usually have a pre-established architecture. For instance: it is a Web API in a CQRS architectural style, whose testing strategy is to run integration tests against an in-memory HTTP server and a real database in a disposable container in the pipeline. If you are going to create a new API endpoint, the implicit NFR is to follow the same strategy when testing that new endpoint. Implicit NFRs can usually be enforced through pair programming, static code analysis and quality gates, architecture unit tests, or code review cycles. Even so, they always run the risk of not being followed, since there’s usually no strict enforcement of them, but rather informal agreements and understandings.
Explicit. Following the above requirement statement, this new endpoint might have a dependency on an external API or library to do the time zone conversion and handling. The explicit mapping for this dependency should have ideally happened in the High-Level NFRs. Understanding which of the Low-Level NFRs it affects might be helpful to organize the teamwork and avoid potential roadblocks. For instance, if a given requirement A has a dependency on another team’s API which is still WIP, that might have an impact not only on this requirement A development but also on a potential requirement B that depends on requirement A.
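Taking the time zone requirement above as an example, here is a minimal sketch in Python. In Python 3.9+ the standard library’s `zoneinfo` covers this, so the “dependency” is just a stdlib module; in other stacks the same requirement would pull in an external library. The stored timestamp and the "America/Sao_Paulo" user preference are made up for illustration:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9; other stacks need a library

# A common pattern: store timestamps in UTC and convert at the edge,
# using the user's (hypothetical) saved time zone preference.
stored_utc = datetime(2024, 3, 1, 12, 0, tzinfo=timezone.utc)
user_tz = ZoneInfo("America/Sao_Paulo")

local = stored_utc.astimezone(user_tz)
print(local.isoformat())  # 2024-03-01T09:00:00-03:00
```

Even a three-line conversion like this surfaces explicit NFRs: where the user preference lives, and which component owns the conversion.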
Ok, that might be hard. But how do we create NFRs? Is there a format to be followed? From my perspective, these are usually identified in the initiative's engineering discovery and planning process, where the team breaks down the work to be done. And this is, sometimes, where engineering teams spend most of their time: planning the work to be done rather than coding alone.
Think before doing: RFCs, ADL, Design Docs
A lot of buzzwords here, but RFCs, ADLs, Design Docs, Architecture Diagrams and the like are really powerful tools for agreeing on NFRs before execution and for creating a shared understanding of dependencies and potential bottlenecks in the development. One of my favourite tech blogs, The Pragmatic Engineer, written by Gergely Orosz, has two nice articles about the subject, one free and one paid, and I highly recommend reading both for practical examples of how these can be implemented.
Another great advantage of designing your solution before implementing it is thinking about whether the proposed architecture incurs extra tech costs due to:
Need to upgrade subscription plans of as-a-service dependencies.
Increase of computational resources required to support the feature.
Increase in monitoring metrics ingestion and processing.
Increased maintenance due to solution complexity.
Those are all points the engineering team must take into consideration when designing a feature, so that the development and release costs do not swallow the expected return on investment of the initiative.
When: Prioritization
Given a set of initiatives, when the why, what and how are understood, then a ranking of now, next and later for Engineering work can be done based on the business case, the effort, and the available capacity.
The software engineering horror: estimates
So, we’ve agreed on the functional requirements, mapped the non-functional ones and created an RFC, and now Product asks: “ok, since you know what needs to be done, how long will it take and when can we start?”, and an Engineer dies a little inside upon hearing the question.
However, I like to see the glass half-full here and understand that estimates are a rather good exercise for real-world scenarios. The reality of product companies is that, a lot of the time, you will have a committed date for something: a commercial agreement, a new client onboarding, or a marketing campaign. By providing estimates, you can impact the company by creating awareness of what bounds your team’s deliveries.
The pitfall I see most often here is that teams underestimate, for many reasons, one of them being the pressure to deliver. Don’t get me wrong: the pressure will always be there. But it is a shared responsibility of the team to give honest feedback on what’s expected of them and, if needed, negotiate. Possible negotiation options are:
Challenging the requirement levels mentioned above to cut the scope and lower the engineering effort.
Adding extra capacity - controversial and with a lot of side effects, IMO, but still an option.
Extending timelines.
While none of these options is perfect, and each carries trade-offs to be thoroughly analyzed, they are constraints in the same way a hard date is. Leverage them to understand the feasibility of things and manage expectations properly.
Overestimation shouldn’t really be an issue in environments with a continuous-learning mindset. If Engineers overestimate a lot in your context, you might have deeper issues, such as a lack of trust, which need to be fixed first, since a process with corrupted principles, or no principles at all, tends to fail.
Finally, there is a lot of ground to cover on estimates, but that’s really another topic. My main advice here would be to consider the following in your next estimations:
BAU-scoped work such as active monitoring and incident handling.
Not only the coding, but also the planning, development, testing, release, and monitoring cycles.
Non-technical factors such as holidays and vacations.
Past estimates to understand what’s worked and what could be improved this time.
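The checklist above can be folded into a back-of-the-envelope capacity check. Every number below is a made-up example; the point is only that coding effort is never the whole estimate:

```python
# Back-of-the-envelope sprint capacity check; all figures are illustrative.
team_size = 4
sprint_days = 10
vacation_days = 3    # non-technical factors: holidays, vacations
bau_share = 0.2      # BAU-scoped work: active monitoring, incident handling

gross = team_size * sprint_days
available = (gross - vacation_days) * (1 - bau_share)

# Coding is only one of the cycles; planning, testing, release and
# monitoring sit on top of it.
coding_effort = 12       # person-days of pure coding, from estimation
full_cycle_factor = 1.5  # rough multiplier for the non-coding cycles

needed = coding_effort * full_cycle_factor
print(f"Available: {available:.1f} person-days, needed: {needed:.1f}")
print("Fits" if needed <= available else "Negotiate scope, capacity or timeline")
```

Comparing past sprints’ actuals against the multiplier is one simple way to feed the “past estimates” bullet back into the next estimation.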
Estimates are hard. And often not accurate. But they’re not meant to be accurate, by definition. Feedback loops and iterations are what make them better, faster, and more efficient.
Wrapping-up
Finally, the business case was set, the requirements were clarified, and the effort is now known. With this triad, the Product Managers can prioritize and align with the business which initiatives will be tackled now and which next. With that set, it’s time to work, and after a while … the delivery happens! Everyone’s aware of what it took and what it was supposed to achieve. And now what?
Remember the why? By leveraging the success metrics, the initiative can be validated against the expected results defined in the business case. With that, the business can also have accountability for the prioritization process as it defines the variables required to balance the importance and urgency of developments.
This enables a solid feedback loop on whether the product teams are really being leveraged as valuable partners in providing company value, as the results show whether the agreed strategy and the overall company objectives are aligned in the right direction.
Understanding how this context was set, thought through, and measured not only brings awareness to the engineering team but also provides it with:
A sense of ownership, from having participated not only in part of the discovery but also in the work definition process.
A clear view of the business impact of a given delivery and how to measure it.
Autonomy to think, create and deliver the best solution for a given problem.
A concrete view of how they effectively contributed to the company winning or saving more money.
It’s a win-win for everyone, as it helps the Engineers who show interest in understanding this to have a real impact on the company. This impact, in turn, allows the team to showcase their work and reach the next levels in compensation and their career tracks - hint: leverage the brag documents proposed by Julia Evans for this. A last remark: this is only possible when the “business” sees Engineering as a real partner in revenue generation. It’s a two-way street.
Final thoughts
Especially in constrained times such as these, it is important to understand that maximising impact and avoiding waste go hand-in-hand. And while such practices shouldn’t be followed only in contexts like the current macro one, working smart is a win-win for everyone. Working smart means working on what matters most. Sometimes that work is hard. But a clear understanding of the business impact gives teams the autonomy they need to properly discover, define, prioritize, and deliver the right work.
Appendix
Supporting Docs
As a documentation freak and async-first advocate, I like to have a shared document where everyone in the team can collaborate to answer the questions above and create a mutual understanding of both the problem and the solution space. I found this Confluence Template an interesting starting point to tailor to your team’s needs.
Thanks
As the first publication of this newsletter, I’d like to thank some people who helped me to put this out:
My wife Rebecca for all the love and support, and for encouraging me to try new things even when I was afraid of doing so.
My Product peers Gustavo, Mafalda, and Carolina, for their feedback with their laser-focus-product-perspective.
My friend William and my leader Bruno for reviews with an engineering perspective.