I know a founder who has conducted over two hundred customer discovery interviews.
He can tell you, with precision, what his target users struggle with. He can map their emotional journeys. He can identify the exact moment in their workflow where friction peaks. He's read all the research in his domain. He's interviewed academics, practitioners, investors.
He has not shipped anything in fourteen months.
This is a cautionary tale about a real failure mode — one that's socially acceptable in product culture, even celebrated, because it looks so much like rigor.
What insight is actually for
Here's the thing about customer discovery, user research, data analysis, competitive benchmarking: none of it is valuable in itself. It is entirely a means to an end. The end is a decision — a change in what you build, what you prioritize, what you stop doing.
Insight that doesn't lead to a decision is entertainment. Often very interesting entertainment! The human brain is wired to find pattern recognition pleasurable, and spotting a user behavior you hadn't anticipated is genuinely satisfying. But satisfaction is not value. The user whose problem you understand but haven't solved is not helped by your understanding.
This matters more than it sounds like it should, because in most product cultures, having insight is rewarded. The person who presents the most thorough analysis gets the most praise in the review meeting. The researcher who finds the most nuanced user segmentation model becomes the expert. The data analyst who surfaces the most interesting patterns builds credibility.
All of this is fine as far as it goes. But the reward for insight should be the decision it enables. Somewhere in the chain, someone has to say: here's what we're going to do differently, starting now.
The organizational version of the same problem
I've been in planning cycles where the team produces an extraordinary body of work — a thorough competitive analysis, a detailed user research synthesis, a sophisticated model of the market opportunity — and then the planning meeting ends and everyone goes back to the same priorities they had going in.
The work was real. The insight was genuine. Nothing changed.
This is the organizational version of the same failure. And it has a specific cause: the insight was not connected to a decision that anyone owned. There was no mechanism for translating the analysis into a choice.
Building that mechanism is, I think, one of the most underrated skills in product management. It's not about the quality of the analysis. It's about knowing, before you start the analysis, what decision it's supposed to inform, and making sure the person who owns that decision is in the room when the analysis lands.
Insight without a decision owner is a report. Reports get filed.
How I try to practice this
I have a single question I ask before starting any analysis: what decision will this help me make?
If I can't answer that question clearly before I start, I don't start. Or I find someone who can tell me what decision they need made, and I design the analysis around that.
This sounds like it might constrain the analysis. It doesn't, in my experience — it focuses it. Analysis designed around a specific decision question is usually sharper and more useful than analysis designed to be comprehensive.
The comprehensive analysis often produces the most insight. It also most reliably ends up as a beautiful document that nobody acts on.
I'd rather produce a worse analysis that gets acted on than a brilliant one that gets admired.