Every semester, I open my product metrics course with the same case study. I call it Pulse.
Pulse is a mental health app. It has 2.4 million monthly active users. Retention is 68% at 30 days — well above industry average. Session length is up 22% quarter over quarter. The NPS is 41. The engineering team is hitting their reliability SLAs. Every KPI dashboard is green.
The product is failing the people who need it most.
Here's what the dashboards don't show: the highest-engagement users are the ones in crisis. They open the app ten, twelve, fifteen times a day. They're not being helped — they're catastrophizing in a loop. The features that are driving up session time are the journaling prompts that encourage endless self-reflection with no mechanism for resolution. The users who actually improve — the ones who work through their anxiety, build routines, reach out to therapists — they graduate out of the app. Lower session length. Lower frequency. The metric reads them as churn.
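The trap is easy to reproduce in a few lines. Here is a minimal sketch, with invented numbers and cohort names (nothing here comes from real Pulse data), of how an aggregate engagement figure can blend a distressed cohort and a recovering cohort into one healthy-looking average:

```python
import statistics

# Hypothetical daily app opens for two cohorts the dashboard collapses together.
crisis_users = [10, 12, 15, 11, 14]   # opening the app in a loop, not improving
recovering_users = [3, 2, 1, 1, 0]    # improving, "graduating" out of the app

# The headline number: average opens per user per day across everyone.
aggregate = statistics.mean(crisis_users + recovering_users)
print(f"aggregate opens/day: {aggregate}")        # looks like solid engagement

# Segmenting by outcome shows what the average hides: the engagement is
# coming almost entirely from the cohort the product is failing.
print(f"crisis cohort:     {statistics.mean(crisis_users)}")
print(f"recovering cohort: {statistics.mean(recovering_users)}")
```

The aggregate reads as growth; the segmented view reads as a product whose heaviest users are its worst-served ones, and whose successes look like churn.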
Pulse has built a product that is optimized for suffering.
Not because anyone is malicious. Because they chose a metric that was measurable, and then forgot to ask what it was actually measuring.
Metrics are lenses, not truths
This is the thing I try to burn into students before we ever talk about North Star frameworks or DAU/MAU ratios: a metric is a lens. It reveals some things and hides others. The moment you forget that, the metric becomes the territory — and the territory quietly rots.
The classic teaching on this is Goodhart's Law: when a measure becomes a target, it ceases to be a good measure. But Goodhart is about gaming. What Pulse illustrates is different. No one is gaming the metric. Everyone is trying their best. The metric is just... the wrong lens.
The dashboard was green because engagement was up. Engagement was up because distress was up. The product had become a very efficient amplifier of anxiety.
The question I can't stop asking
Here's what I find genuinely unsettling about Pulse: it's not a cautionary tale about a hypothetical product. It's a pattern that shows up everywhere once you start looking.
Social media platforms that optimize for time-on-site, and find that outrage drives the most time. E-commerce recommendation engines that optimize for click-through, and find that novelty drives clicks, so they never show you the product that would actually satisfy you. Productivity apps that optimize for streaks, and find that guilt drives streaks, so people open the app at 11:58pm just to check a box.
The best metric. The worst product.
The puzzle I keep returning to is this: how do you build a measurement system that is honest about what it cannot see? How do you create dashboards that tell you when to be worried even when all the numbers are green?
I don't have a complete answer. I have a framework I teach. But the framework is less important than the habit of mind — the practice of always asking: what would the world look like if this metric were lying to me?
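One way to make that habit mechanical rather than heroic is to never review a headline metric alone. The sketch below is my own illustration, not the framework from the course: pair each optimized metric with a counter-metric that should move the same direction if the product is genuinely healthy, and flag the review whenever they diverge. All names and numbers are invented:

```python
def review_flag(headline_delta: float, counter_delta: float) -> str:
    """Classify a metrics review by comparing a headline metric's change
    against a paired counter-metric's change over the same period.

    headline_delta: change in the optimized metric (e.g. session length).
    counter_delta:  change in a counter-metric that should co-move if the
                    product is healthy (e.g. self-reported wellbeing).
    """
    if headline_delta > 0 and counter_delta < 0:
        return "investigate: headline up, counter-metric down"
    if headline_delta < 0 and counter_delta > 0:
        return "possible graduation: users may be succeeding out"
    return "aligned"

# The Pulse shape: session length up 22%, a wellbeing proxy slipping.
print(review_flag(0.22, -0.05))
# prints "investigate: headline up, counter-metric down"
```

The point is not the threshold logic, which is deliberately crude; it's that the divergent case becomes a named, visible state in the review instead of a green dashboard's silence.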
That's the question I want product managers to carry into every review meeting, every planning cycle, every feature decision. Not as paranoia. As epistemic hygiene.
The dashboards can be green. The product can be dying. Your job is to know the difference.