
The Lost Feedback
Shipping new features is fun. But let’s be honest: the excitement from customers isn’t always mutual. Sometimes features are just right and just work. Often, though, that’s not the case.
Have you tried looking over the shoulder of someone using your product? Then you know 😬.
New features can confuse customers, be too hard to use, or miss something critical that would make them useful.
Most feature experiments have no impact or actually make things worse than they were before.
Often, though, this confusion is fixable - but only if you know about it. Let me give you an example from one of our customers:
They shipped an “Export to CSV file” feature. Customers loved this new functionality. They could now export their data and use it however they wanted (yay!)
But they also found it frustrating to work with.
Why? The filenames were randomly generated and the sheets weren’t dated, which meant that exports could easily get mixed up. Chaos ensued when an old export was picked by mistake and the metrics suddenly appeared to drop instead of rise.
As a developer, it’s easy to see how naming export files with the current date, or adding timestamps to the sheets, was overlooked when the feature was designed.
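To show how small the missed detail is, here’s a minimal sketch of the kind of fix involved. The helper below is hypothetical, not the customer’s actual code:

```javascript
// Hypothetical helper: name export files with an ISO date so they sort
// chronologically and old exports can't be mixed up with new ones.
function datedExportFilename(base, date = new Date()) {
  const stamp = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return `${base}-${stamp}.csv`;
}

// datedExportFilename("metrics", new Date("2024-03-01"))
// → "metrics-2024-03-01.csv"
```

An hour’s work when the feature is fresh, which is exactly the point.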
But the trouble may not stop with the filenames. Maybe the column order or column names aren’t configurable. Maybe they need right-to-left alignment. Maybe the columns are poorly formatted.
Sadly, these types of friction points often go unnoticed.
You might have internal QA processes in place, but it’s easy to miss the significance of such friction points because the scenarios QA runs don’t reflect the messy, nuanced realities of real-world usage.
Instead, they get shipped.
Over time they accrue as poor UX. Like death by a thousand cuts, it becomes hard for users to pinpoint exactly what they don’t like. Individual examples sound insignificant, but together they create an insidious problem.
The cause is a whole type of low-level, technical feedback that’s getting lost.
How low-level feedback gets lost
If you’re lucky, you might find out about these pain points later from customers. If you’re a startup, this will often come via founders. In larger organizations, it’ll come from customer success, account executives, or product managers. But this is where it can start to slip through the cracks.
Since the feedback you’re getting doesn’t come from end-users directly, it can lead to false positives. For example, a customer success person might check in with their customer contact and hear great things about the new data export functionality. It seems to them that the feature is a home run while the customer’s data team is frustrated with the lack of timestamps.
Or you do get the feedback, but it goes back into the broader product development machinery - it’s a minor thing, so it never gets prioritized and ends up sitting in the backlog.
Meanwhile the customer grows dissatisfied. These trivially solvable issues never seem to be addressed.
The clock is ticking
The other problem with these feedback mechanisms is time. It’s not on your side.
Changing filenames of data exports is easy when working on the feature. But changing the filename of data exports three months after deployment when you’ve worked on multiple other features, not so much.
As soon as the developers behind a feature have moved on to other projects, the time to fix it goes way up.
And since data export is an Enterprise feature with SLAs, diving back into it just to change filenames is risky.
A 1-hour task is now a 3-day task when you factor in testing and QA.
Furthermore, the task now also belongs to Customer Success, who need to communicate the fix to every customer who reported the issue. It’s a Support task, too: customers who built custom scripts around the export files may suddenly find them broken by the filename changes.
To ensure features are truly satisfying for everyone, you need a way to capture feedback directly from the people working with them. And you need to gather that feedback as fast as possible after releasing a new feature, while it’s still in ‘hot memory.’
How to get low-level feedback on new features
When you release a feature, you may first release it internally. That’s the first opportunity to collect feedback from team members who haven’t seen the feature before.
Once tested internally, you can give certain external customers Beta access to the feature. At Bucket, we do this by taking every feature through release stages (on Bucket, of course!) During those stages, we make it as easy as possible for users of the feature to give feedback to the builders of the feature.
We do this by:
- Feature flagging
- Collecting feedback
- Measuring adoption
On Bucket, you can do this by including the three methods you need, all hooked up to the "export-to-csv" feature key:
const { isEnabled, track, requestFeedback } = useFeature("export-to-csv");

if (isEnabled) {
  return <button>Export CSV</button>;
}
When we release a new feature, it’s gated based on the feature’s targeting rules. Then to gather early feedback on the feature, we either:
- Send a feedback survey to collect in-app user feedback right after a user interacts with a feature.
- Or, we add a "Give feedback" button.
In the case of the Export CSV feature example - since it isn’t an interactive feature, we’d use a “Give feedback” button next to the export functionality. This gives users an easy way to let you know about issues as they stumble upon them, which would look something like:
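In code, the handler behind such a button could be a minimal sketch like the one below. It assumes the requestFeedback method returned by useFeature accepts a title option - treat that option name as an assumption and check the SDK docs:

```javascript
// Hypothetical factory for the "Give feedback" button's click handler.
// `requestFeedback` is the method returned by useFeature("export-to-csv");
// the `title` option shown here is an assumption about the SDK's API.
function makeGiveFeedbackHandler(requestFeedback) {
  return () =>
    requestFeedback({ title: "How is the CSV export working for you?" });
}
```

In the component, this would be wired up as onClick={makeGiveFeedbackHandler(requestFeedback)} on the “Give feedback” button.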

Importantly, all the feedback is sent directly from users in the app to Slack. We’ll create a temporary channel per feature with just the feature’s creators added (other folks can opt in as needed). That way, everyone relevant can see and act on the feedback, fast.
We'll iterate based on the feedback we get and the adoption metrics we're seeing. Once folks seem happy, we remove the feedback CTA and make the feature GA.
We’ve found this to be a great way to ensure that this all-important, low-level feature feedback doesn’t get lost, saving us time and increasing customer satisfaction.