Leading vs lagging indicators (how product teams move fast)

Trying a Mattress

Product teams are rightly told to focus on outcome over output. An output is a product update. The outcome is a business result that’s a consequence of the product update. 

Most product updates should have a clear outcome to get prioritized. The output is meant to “create a change in user behavior that leads to a business outcome,” as Josh Seiden put it.

He uses a mattress store to exemplify his point: 

A mattress store wants to sell more mattresses. To achieve this outcome, they need customers to come into the store, and they need customers to try the mattresses, since nobody buys a mattress without trying one first.

To make that happen, they space the mattresses nicely so customers can easily access them. They cover part of each mattress in plastic to keep it clean and hang signs inviting customers to try the mattresses. These are the outputs meant to drive the outcome.

Output: Make it easier to try mattresses.

Outcome: Sell more mattresses. 

If you’re in B2B SaaS, a more relevant example might be a product with seat-based pricing. As a business, you want to sell more seats. To do that, you need outputs that drive collaboration and create the need for customers to invite their coworkers.

Focusing on outcomes over outputs is a great framework to ensure that product teams think about delivering value, not just features. If you can’t tie a business outcome to a feature, you may want to reconsider its priority. 

There are exceptions, of course, like refactoring stuff or delighting users to increase customer satisfaction and retention. However, for most product updates, you want to have a clear outcome in mind.

But there’s a problem with outcome-driven product development: the feedback loop is slow.

Outcomes are lagging indicators that change very slowly. When a product team releases a product update to increase collaboration, it takes a long time to measure and evaluate its impact on the average number of seats per account. 

This leads to two problems:

First, the collaboration feature may be frustrating or confusing for customers. The release might not actually address customers’ collaboration needs, or the UX could be too confusing. While the product team waits on the lagging metrics, customers grow increasingly dissatisfied with the feature. In fact, more than 50% of features fail to have any customer impact in their first iteration!

Second, while the feature they just shipped is being evaluated, the product team is reassigned to new projects, and the previous product updates fade from their minds. The impact of this is underappreciated.

Once a product team moves on to the next thing, the perceived importance of the past release drops and the technical context fades. Bringing that urgency and contextual code knowledge back to the team later is a slow process, making future iterations more expensive as time passes.

Predicting the future with leading indicators

When a product release goes out to customers, there are tons of signals that product teams can use to gauge satisfaction. 

Are customers adopting it? Do they keep using it? What do they say about it? Are they satisfied with it?

That’s where leading indicators come in. In the context of a product team, a leading indicator is a data point that signals, early on, the likelihood of feature success and thereby of the associated business outcome.

Going back to our example: if customers are dissatisfied with a collaboration feature, it’s unlikely that they’ll invite co-workers, which was the desired outcome. Catching such leading indicators in the days after the initial release gives product teams the opportunity to iterate fast and flip unhappy customers into happy ones.

Acting fast has several benefits:

  • Product teams still have the momentum and context to do so, meaning iteration times are much shorter. 
  • Customers feel heard and are impressed by the quick turnaround. This builds trust in the product and the team behind it, increasing retention over time. 
  • The product team has gained 27 days by catching and fixing issues on day 3 rather than on day 30 when the lagging metrics become available.
  • The product team avoids the potential false positive of seeing an uptake in collaboration in the metrics while being unaware of widespread dissatisfaction. Unaddressed, this can lead to account churn due to the bad experience with the product.

Leading indicators to track

So, what leading indicators should product teams be tracking? 

Besides technical performance indicators, here are six leading indicators to track:

| Indicator | Description | Type |
| --- | --- | --- |
| Awareness | Are customers aware that the feature exists? | Quantitative |
| Adoption | Once they’re aware, are customers trying the feature? | Quantitative |
| Engagement | Once adopted, are customers using the feature as you intended? For example, are they using it alone or with co-workers? | Quantitative |
| Satisfaction | After adopting the feature, are customers satisfied with it? Do they want to continue using it? | Quantitative |
| Feedback | If customers are unsatisfied, how can the feature be improved? If satisfied, how can it be improved further? | Qualitative |
| Screen recordings | Are customers using the feature as intended, or can the UX be changed to improve adoption and engagement? | Qualitative |
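As a rough sketch, the quantitative part of this funnel (awareness → adoption → engagement) can be computed from a product-analytics event log in the days right after a release. The event names and numbers below are hypothetical, not tied to any specific analytics tool:

```python
from collections import defaultdict

# Hypothetical event log: (user_id, event) pairs from product analytics.
# Event names are illustrative assumptions, not a real tool's schema.
events = [
    ("u1", "feature_seen"), ("u1", "feature_used"), ("u1", "used_with_coworker"),
    ("u2", "feature_seen"), ("u2", "feature_used"),
    ("u3", "feature_seen"),
    ("u4", "feature_seen"), ("u4", "feature_used"), ("u4", "used_with_coworker"),
]

# Group the distinct users who triggered each event.
users_by_event = defaultdict(set)
for user, event in events:
    users_by_event[event].add(user)

total_users = 5  # all active users, including those who never saw the feature

# Each funnel step is measured against the step before it.
awareness = len(users_by_event["feature_seen"]) / total_users
adoption = len(users_by_event["feature_used"]) / len(users_by_event["feature_seen"])
engagement = len(users_by_event["used_with_coworker"]) / len(users_by_event["feature_used"])

print(f"awareness:  {awareness:.0%}")   # saw the feature, out of all users
print(f"adoption:   {adoption:.0%}")    # tried it, out of those who saw it
print(f"engagement: {engagement:.0%}")  # used it collaboratively, out of those who tried it
```

Measuring each step against the previous one tells you where the funnel leaks: low awareness points to discoverability, low adoption to the value proposition, and low engagement to the UX of the intended (here, collaborative) usage.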

The bottom line: with these indicators, product teams can greatly improve how they work. They can act sooner instead of waiting for the lagging metrics to surface, and they get the insights and context they need to capitalize on the momentum they already have, increasing iteration velocity and customer satisfaction.

Happy iterating!

P.S. Be sure to read Tim Herbig’s excellent piece on this topic.