Putting "review and iterate" back in agile development
In the SaaS industry, we’ve largely adopted agile development as our preferred methodology for releasing software. Scoping down and time-boxing features ensures that product teams release often and don’t waste months on a grand feature that ultimately doesn’t work.
The agile shift is especially great from an output-velocity perspective. The engineering manager gets to move a lot of kanban cards to “Done”, and the product manager gets to tick off lots of feature requests every month.
Sometimes, it’s great for the customers, too. Occasionally, the bare minimum of a feature is sufficient and addresses the customer demand successfully. As a customer, putting in a feature request and seeing the feature in production shortly after is magical!
Most times, though, the first iteration isn’t great, because a lot of corners were cut to get it shipped within the current “sprint” (ugh, that word).
It’s frustrating for the customers, but, hey, that’s OK! It’s part of the agile plan! We ship often, and then we iterate where needed, see:
Unfortunately for the customers, this isn’t exactly how it plays out IRL. Product teams celebrate releases at deployment time, not at customer-impact time.
Post-release, teams are quickly allocated to new backlog features, and the features in production fall into a void of zero ownership and zero scheduled follow-up. Feature feedback flows back to the product teams at turtle speed, and only when customers tell their account representative that something doesn’t work.
In practice, the agile feature cycle looks like this:
Fixing the final steps
At Bucket, we believe that feature evaluation is broken for multiple reasons. We’re convinced that after 10 years with product analytics (whatever that means), we still don’t have the right tools to tackle this problem. It’s not a data problem, it’s a workflow problem.
The solution is dedicated tooling that treats the feature as a first-class citizen and follows the feature all the way from first iteration to successful customer feedback.
The solution is a repeatable – and consistent – way of measuring and evaluating features which empowers product teams to own features until they’re successful.
We should definitely keep celebrating at deployment time. We’re just missing the second celebration at feature-success time. Two 🎉 celebrations 🎉!
What this looks like in practice is automated feedback on key feature metrics, delivered to the team that designed and built the feature. With these incoming adoption signals, we can decide, based on data, whether a feature is truly “Done”. If the feature hasn’t hit its adoption and/or retention goals, it needs another iteration.
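As a minimal sketch, the “is this feature truly Done?” decision could look like the following. The metric names and thresholds here are illustrative assumptions, not Bucket’s actual API or default goals:

```python
# Hypothetical sketch of a post-release "Done" check based on adoption
# and retention goals. All names and thresholds are illustrative.
from dataclasses import dataclass


@dataclass
class FeatureMetrics:
    tried: int      # users who tried the feature during the window
    retained: int   # users still using it at the end of the window
    audience: int   # users the feature targets


def needs_iteration(m: FeatureMetrics,
                    adoption_goal: float = 0.4,
                    retention_goal: float = 0.6) -> bool:
    """Return True if the feature missed its adoption or retention goal."""
    adoption = m.tried / m.audience if m.audience else 0.0
    retention = m.retained / m.tried if m.tried else 0.0
    return adoption < adoption_goal or retention < retention_goal


# Example: 120 of 500 targeted users tried it (24% adoption), so even
# with decent retention, the feature needs another iteration.
print(needs_iteration(FeatureMetrics(tried=120, retained=90, audience=500)))
```

The point isn’t the exact thresholds; it’s that the decision is made from data, consistently, for every feature.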
Having this data easily available to all product team members is transformative in terms of delivering value, not just code.
Here’s what agile development with Bucket looks like:
For every feature iteration, there’s a feature report. This is the Review part of the agile flow. The report is automatically reported to the relevant team, for example on Slack.
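A report like this can be pushed to the team wherever they already work. As a sketch, posting a feature report to a Slack channel via an incoming webhook might look like the following; the report fields and formatting are assumptions for illustration, not Bucket’s actual report format:

```python
# Hypothetical sketch: posting a feature report to Slack via an
# incoming webhook. Report fields and wording are illustrative.
import json
import urllib.request


def format_feature_report(feature: str, adoption: float,
                          retention: float) -> str:
    """Build the Slack message text for a feature report."""
    return (f"Feature report: *{feature}*\n"
            f"Adoption: {adoption:.0%} | Retention: {retention:.0%}")


def post_feature_report(webhook_url: str, feature: str,
                        adoption: float, retention: float) -> None:
    """Send the report to a Slack incoming webhook."""
    payload = json.dumps(
        {"text": format_feature_report(feature, adoption, retention)}
    ).encode("utf-8")
    req = urllib.request.Request(
        webhook_url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Because the report arrives automatically, nobody has to remember to go look up dashboards: the Review step happens in the team’s normal channel.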
Depending on the feature, evaluation takes a few weeks or months. During this time, the team behind the feature automatically receives the feature report and quickly learns whether the feature is doing well or needs tweaking.
Bucket works because it’s a repeatable approach that empowers product teams with consistent feature reporting that naturally follows the feature release workflow.
As an industry, we’ve greatly improved the pre-deployment feature workflow over the past decade, to the point where any engineer can walk into most modern software companies and start shipping code in just a day or two. It’s great! Kanban boards, pull requests, continuous integration, etc. have been a major boost in productivity.
However, there's no standardized workflow for knowing whether the features in “Done” are actually successful.
It’s time to add the final – and frankly, most important – step of the feature workflow: Post-deployment feature evaluation.
We’re putting "review and iterate" back in agile development.