Why feature evaluation is broken
Modern companies are software companies. They need to constantly innovate and improve their digital products to stay relevant. Improve or die, to be dramatic. However, features are still celebrated at release time, then released without any evaluation workflow, scheduled follow-ups or clear ownership.
We need a new feature release workflow. We need continuous feature evaluation.
Here are the 5 main problems with feature evaluation today:
#1: There’s no “Feature” in “Product Analytics”
Modern companies need to constantly evolve their products. What this looks like on the inside is several feature teams - or hundreds of them - continuously prioritizing and planning what to work on next. Essentially, each feature team is deciding between two options every cycle: build a brand new feature, or iterate on an existing live feature?
To make this call, feature teams first need to know how their live features are doing in production. Are the features adopted by their intended audiences? Do customers keep using them, or do they churn away within weeks? In other words, do the existing features need another iteration, or can the feature team prioritize net new features?
Today, answering these questions is much harder than it should be, which is likely why most feature teams don't have answers to them. Product analytics tools simply don't have the concept of a feature.
To answer these questions properly and consistently, features need to be treated as first-class citizens. Features need a dedicated home and a dedicated evaluation workflow that works as a natural extension of the pre-deployment workflow.
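To make "first-class citizen" concrete, here is a minimal sketch - hypothetical names and fields throughout, not any particular tool's API - of what registering a feature as its own object, rather than a loose bundle of analytics events, could look like:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Feature:
    """A feature as a first-class object with its own home, owner and usage definition."""
    name: str                          # e.g. "google-calendar-integration"
    owner: str                         # feature team accountable post-deployment
    target_segment: str                # who the feature is for, e.g. "paying-customers"
    tracked_events: List[str] = field(default_factory=list)  # events that count as usage
    released_at: Optional[str] = None  # set when the feature goes live

calendar = Feature(
    name="google-calendar-integration",
    owner="#feature-google-calendar",
    target_segment="paying-customers",
    tracked_events=["calendar_connected", "calendar_event_synced"],
    released_at="2021-03-01",
)
```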
#2: There’s no feature owner, post-deployment
During the past decade, the pre-deployment workflow of designing, building and releasing features has improved tremendously. There are now powerful tools - kanban boards, pull requests and continuous integration - and a well-established workflow from feature idea to feature release.
But, what happens after the feature is released and live with the customers? All too often, nothing happens. The feature goes into a monitoring void with no clear owner.
Technically, the Product organization, and the respective product manager, may own the feature post-deployment. In reality, though, features are rarely, if ever, monitored in any consistent and systematic fashion once they go live with customers.
This problem happens for several reasons. In part because the typical product-to-engineering ratio is 1:10 and PMs come in many shapes and sizes. In part because PMs are often on the sales side of the organization. And in part because the tooling and workflow simply don't exist.
In the engineering organization, features are celebrated at release time. And understandably so. It takes a small village and several months to get any key feature from idea to release. Which, by the way, often amounts to a $100,000+ feature investment.
As a result, features are released to customers and no one monitors whether the feature investment pays off. Externally, if the feature doesn't hit home, it leads to a bad customer experience and marketing frustration. Internally, the lack of any "lost investment!" alarm leads engineering managers and product managers to keep prioritizing new backlog features. The result is a dangerous feature factory culture with high feature output but low impact.
The solution is a dedicated and streamlined feature evaluation workflow that can easily be adopted by both Product and Engineering. Feature teams - designers, engineers and product managers - need to collectively monitor the feature engagement metrics as the feature goes live. It’s an additional and necessary post-deployment step that is an extension of the well-defined pre-deployment workflow.
If the feature’s customer engagement turns green, pop the champagne! If it turns red, prioritize a new iteration over building new features.
#3: There’s no feature report
Measuring feature success is much harder than one would think. Counting events over time, for example, doesn't give you any real insight. How many customers became active feature users and then churned away? How long did it take them to churn? How do these metrics look per sub-segment of customers? Is the active feature user count flat because no new users are coming in, or because new users arrive every month at the same rate as existing users leak out?
People with plenty of time and an interest in writing 100-line SQL statements could dig up some of this data ad hoc, but that's obviously not a viable workflow for any modern company that is shipping features week in and week out. Additionally, ad-hoc reports are misleading and confusing: each one looks different and uses different definitions, which 1) makes them time-consuming to understand, a real problem when trying to democratize this data, and 2) makes them impossible to compare.
Many people have been in organizations where this approach failed, to everyone's frustration!
Feature evaluation needs to happen in a streamlined fashion and has to be easily digestible for non-analytical people, like most designers, engineers and PMs.
Presenting the feature engagement data in a nicely packaged and consistent feature report is the only way to get feature teams interested in becoming more data-driven and taking on more ownership.
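As a rough illustration of what "consistent" means in practice, here is a minimal sketch that computes the same report, with the same definitions, for any feature. The event shape and the 30-day churn window are assumptions, not a prescription:

```python
from datetime import timedelta

def feature_report(events, feature, as_of, churn_window_days=30):
    """Build a feature report from raw usage events using one shared set of definitions.

    `events` is assumed to be an iterable of dicts like
    {"user": "u1", "feature": "google-calendar-integration", "ts": datetime(...)}.
    A user who adopted the feature but has no usage within the churn window counts as churned.
    """
    last_seen = {}
    for e in events:
        if e["feature"] != feature or e["ts"] > as_of:
            continue
        last_seen[e["user"]] = max(last_seen.get(e["user"], e["ts"]), e["ts"])

    cutoff = as_of - timedelta(days=churn_window_days)
    active = sum(1 for ts in last_seen.values() if ts >= cutoff)

    return {
        "feature": feature,
        "adopters": len(last_seen),          # everyone who ever used the feature
        "active": active,                    # used it within the churn window
        "churned": len(last_seen) - active,  # adopted it, then stopped
    }
```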
#4: There’s no feature evaluation workflow
Getting to a place where there is a common feature evaluation report in the organization is a big step forward. But, people are busy and feature evaluation can take weeks or months. Few people will remember to set their alarms to go dig out the latest feature report every week.
Therefore, feature evaluation needs to be automated and continuous. Feature reporting needs to be push, not pull.
Many feature teams design and build features together in shared Slack channels. For example, the designers, engineers and PMs building a Google Calendar integration will coordinate in the #feature-google-calendar channel. Once the calendar feature is deployed, the Slack channel goes stale, aside from a few bug fixes.
However, that feature channel is a perfect destination for feature reports to automatically land every week. The report informs all members of the calendar feature team how the feature is being received by its customers. The feature team can then truly own the feature in production by simply reviewing the feature reports in Slack every Monday morning. Once a feature looks to be doing OK in production - once it looks validated - the report can be turned off directly from Slack, or automatically once it reaches a pre-defined goal.
Slack is just an example destination. Feature reports should be available through multiple integrations, and possibly as an API endpoint, so that they can be consumed wherever it suits the feature team best: an office dashboard or an internal spreadsheet, for example.
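As a sketch of the push model, assuming a Slack incoming webhook for the feature channel and the hypothetical feature_report helper sketched earlier, the weekly delivery could be as small as this, triggered by a scheduler (cron, a CI job, or similar):

```python
import json
import urllib.request

# Placeholder: incoming webhook for the #feature-google-calendar channel
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/..."

def push_weekly_report(report, webhook_url=SLACK_WEBHOOK_URL):
    """Post a feature report summary to the feature team's Slack channel."""
    text = (
        f"Weekly feature report for *{report['feature']}*: "
        f"{report['adopters']} adopters, {report['active']} active, {report['churned']} churned."
    )
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(
        webhook_url, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)
```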
#5: There’s no feature goal
Defining goals for a feature is a great exercise and should really be part of most new feature specifications. Defining goals naturally forces an internal discussion about who the feature is for - free users or high-paying customers - and how often the target audience is meant to use it: daily, weekly or monthly.
Having that discussion early leads to better decision making when it comes to user experience as the feature team can optimize for the primary feature use case.
Feature goals also lead to better feature reporting and monitoring. Naturally, feature teams don't want feature reports for all of their features, forever. However, muting features is scary as well. What if a muted feature suddenly sees a drastic dip in engagement for some reason? In that case, you want to be alerted and have the feature temporarily un-muted. With feature goals, such monitoring and alerting becomes possible.
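As a closing sketch, a feature goal can be as simple as a target number of active users in the target segment, and a muted feature can then be flagged - and its reports un-muted - when engagement falls well below that goal. The threshold and names here are illustrative assumptions:

```python
def check_feature_goal(report, goal_active_users, alert_ratio=0.5):
    """Return an alert message if a (possibly muted) feature drops well below its goal."""
    if report["active"] < goal_active_users * alert_ratio:
        return (
            f"Feature {report['feature']} is down to {report['active']} active users, "
            f"below {alert_ratio:.0%} of its goal of {goal_active_users}. Un-muting its reports."
        )
    return None  # goal (roughly) met: the feature can stay muted
```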