Seven principles for evaluating features

Shipping new features is the greatest feeling. As an industry, we’ve gotten really good at delivering features at a high pace and with technical confidence. 

However, we just might have forgotten the most important part of shipping features: Our customers. Software only becomes magical when it’s actually used and loved by the people we make it for. 

Here are our seven fundamentals for shipping features that matter to our customers.

1. Don’t forget about it

This probably comes off as self-evident, but it happens over and over: we completely forget about the feature after shipping it! You’ve probably shipped a feature and wondered months later: did customers even like it?

It’s understandable, though. Releasing any significant feature is a big lift. Once it’s out, we want to take a breather from it.

Our tools aren’t helping either. For developing and shipping features, the tools and workflows have evolved greatly over the past decade: We now have structured code reviews, automated testing, push to deploy, and so on.

But once a feature is deployed and in customers’ hands - which is objectively the most important part of the feature cycle - our existing tooling takes us no further, and we drop our attention at the most crucial time. It’s pretty nuts!

Step one: don’t forget about it.

2. Track it

Actually tracking a feature is rarely as easy as it sounds. When you get into the details, hard questions start to pop up: What exactly should we be tracking? What exact questions do we want to answer? Where do we implement the tracking code?

Most analytics tools and CDPs (customer data platforms) let you track events or account attributes. Each is powerful for tracking a different type of feature:

Events are powerful for tracking features where frequency is important to measure. For example: “how often do accounts export to Slack?” Firing off a tracking event whenever an export happens will answer that question.
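
To make the event approach concrete, here’s a minimal sketch using Segment’s Node library; the write key, event name, and properties are made up for the example:

    // A rough sketch of event-based tracking with Segment's Node library
    // (@segment/analytics-node); the write key, event name, and properties
    // are placeholders for this example.
    import { Analytics } from "@segment/analytics-node";

    const analytics = new Analytics({ writeKey: "YOUR_WRITE_KEY" });

    // Fire an event every time an account exports to Slack, so we can later
    // answer "how often do accounts export to Slack?"
    export function trackSlackExport(userId: string, accountId: string) {
      analytics.track({
        userId,
        event: "Slack Export Completed",
        properties: { account_id: accountId },
      });
    }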

Attributes are powerful for tracking account-level progress. For example: “how many accounts have authenticated with Slack?” To track this, you query your database and send the result as an attribute value, e.g. “slack_authed: true/false”.

The downside to attribute-based tracking is that you need to make that data available from your database. The big plus, though, is that attributes work for historic data and are always up to date. If you were to track Slack authentication with events, you’d need two events - one when the account turns it on and one when it turns it off - and it wouldn’t work for any accounts that authenticated before you added the event tracking.
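
And a minimal sketch of the attribute approach, again using Segment’s Node library; the isSlackAuthed helper is a hypothetical database query, and the trait name mirrors the “slack_authed” example above:

    // A rough sketch of attribute-based tracking: read the current state from
    // your own database and send it as an account-level trait.
    import { Analytics } from "@segment/analytics-node";
    import { isSlackAuthed } from "./queries"; // hypothetical helper that queries your database

    const analytics = new Analytics({ writeKey: "YOUR_WRITE_KEY" });

    export async function syncSlackAuthAttribute(userId: string, accountId: string) {
      // Works for historic data too: the query reflects the current state,
      // regardless of when the account originally authenticated.
      const slackAuthed = await isSlackAuthed(accountId);

      // Send the value as an account-level trait via a group call.
      analytics.group({
        userId,
        groupId: accountId,
        traits: { slack_authed: slackAuthed },
      });
    }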

Lastly, naming the events and attributes is often a time-consuming part of tracking features. We recommend following Segment’s naming guide so you don’t spend time on this.

Figure out exactly what questions you want to answer, and how to get there.

3. Define a target audience

As soon as you start on the specification of a new feature, remember to also include who the feature is for. A clear internal understanding of the feature’s target audience helps a lot in shaping, designing, communicating, and prioritizing the feature - and later in measuring its success.

For example, an onboarding feature might only be relevant for trial accounts. An export to CSV feature might only be relevant for enterprise customers. And so on.

Know who you’re designing and building for.

4. Seek qualitative feedback early on

In the first weeks after shipping, your metrics won’t have any statistical significance. Until you have more data, focus on a simple goal, like getting 10 accounts to use your feature. These are your early feature adopters. Reach out to them and ask what they think about the feature. Maybe you can make some quick tweaks that will greatly help the feature’s adoption and engagement going forward.

Talk to customers as early as possible.

5. Measure adoption states

Many tools let you see the count of event interactions per account. While that tells you something, the devil is in the details. For example, to understand feature retention, you want to measure retention among active users of the feature. Accounts that have only tried the feature once or twice haven’t become active users yet, so you want to filter them out.

Bucket your accounts into at least a few adoption states per feature. For example:

  • Accounts that have never tried the feature, either because they don’t need it or because they’re unaware of it.
  • Accounts that have tried it, but never became active users of it.
  • Accounts that are actively using the feature.
  • Accounts that were actively using it, but have since churned away.

Once you understand the adoption state distribution, you can accurately measure feature success. For example, for a feature with low usage: does it have an awareness problem or a churn problem?
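
As an illustration, the bucketing itself can be as simple as the sketch below; the thresholds (three uses to count as active, a 30-day activity window) are arbitrary assumptions you’d tune per feature:

    // Sketch: bucket an account into an adoption state for a single feature.
    // The thresholds are arbitrary examples, not recommendations.
    type AdoptionState = "never_tried" | "tried" | "active" | "churned";

    interface FeatureUsage {
      totalUses: number;      // all-time interactions with the feature
      usesLast30Days: number; // interactions within the last 30 days
    }

    const ACTIVE_THRESHOLD = 3; // uses needed to count as an active user

    export function adoptionState(usage: FeatureUsage): AdoptionState {
      if (usage.totalUses === 0) return "never_tried";        // no need, or unaware
      if (usage.totalUses < ACTIVE_THRESHOLD) return "tried"; // tried it, never became active
      if (usage.usesLast30Days > 0) return "active";          // actively using the feature
      return "churned";                                       // was active, but no recent usage
    }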

Don’t rely on simple interaction counts - use adoption states.

6. Set useful goals

How many accounts should be using your new feature for it to be a success? 100%? OK, maybe that’s a stretch. 80%? Why not 85%?

Setting adoption goals like that can be tricky and frankly unproductive for most features. 

A better way to address goal setting is to flip the conversation: instead of discussing the “ideal number of accounts”, think about a “minimum adoption goal”. Ask yourself: “how low would adoption have to be for us to be disappointed with the time we spent on this feature?”

For some features, engagement isn’t that important by itself. Some features are meant to positively impact other features or to move downstream metrics. In such cases, the goal should be about seeing movement elsewhere after releasing the feature.

Start by meeting the feature’s minimum adoption goal. Then optimize towards a higher goal if the feature is a growth or retention driver.

7. Reporting is key

Most features take weeks or months to evaluate, as accounts need to move through the adoption states before we can measure the feature’s retention rate.

Having a dashboard to track adoption states is key, but we should also be honest about dashboards: They’re only useful when you look at them. 

Once a new feature is shipped, it only takes a few days before the product team is knee-deep in the next backlog item. When that happens, the new item takes all the attention and the recently shipped feature slides into the background.

Therefore, set up reporting so you can review and iterate on shipped features without having to dig up dashboards every week.
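
One lightweight option is to push the numbers to where the team already looks. Here’s a sketch, assuming a Slack incoming webhook and a hypothetical countAdoptionStates query, run from a weekly scheduler of your choice:

    // Sketch of a weekly feature report pushed to Slack via an incoming webhook.
    // countAdoptionStates and the webhook URL are placeholders.
    import { countAdoptionStates } from "./queries"; // hypothetical: adoption state counts per feature

    const SLACK_WEBHOOK_URL = process.env.SLACK_WEBHOOK_URL!;

    export async function postWeeklyFeatureReport(feature: string) {
      const counts = await countAdoptionStates(feature);

      const text = [
        `Weekly adoption report for "${feature}":`,
        `• Never tried: ${counts.neverTried}`,
        `• Tried, not active: ${counts.tried}`,
        `• Active: ${counts.active}`,
        `• Churned: ${counts.churned}`,
      ].join("\n");

      // Slack incoming webhooks accept a simple JSON payload with a "text" field.
      await fetch(SLACK_WEBHOOK_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ text }),
      });
    }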

Deploying is only the beginning. Review and iterate with reporting.