
Gut feel vs. data: how top teams actually prioritize

Everyone has a prioritization framework. RICE, ICE, MoSCoW, pick your acronym. But the difference between teams that prioritize well and teams that guess isn't the framework. It's the data.

The framework trap

Here's a conversation I've seen play out at every startup I've worked at:

"We need a better prioritization process."

"Let's use RICE scoring."

"Great. What's the Reach for this feature?"

"Uh... let's say 500 users?"

"How confident are we in that?"

"Like... medium?"

Sound familiar? The problem isn't the framework. RICE is perfectly fine. The problem is that Reach, Impact, Confidence, and Effort are all made up. The team is plugging guesses into a formula and pretending the output is objective.

"A prioritization framework with made-up inputs is just a spreadsheet that tells you what you already believe. It's confirmation bias with extra steps."

How gut feel actually works

Let's be honest: most product decisions are gut feel dressed up in frameworks. And gut feel isn't always wrong. Experienced PMs have strong intuitions because they've accumulated thousands of data points over years: conversations with users, patterns they've seen before, industry context.

The problem with gut feel is that it doesn't scale, it can't be verified, and it can't be shared. When a PM says "I think we should build X," nobody else on the team can see why. They can only trust or disagree.

This is where roadmap fights come from. Not disagreements about facts, but disagreements about feelings.

What data-driven actually means

Being "data-driven" doesn't mean running A/B tests on everything (that's its own rabbit hole). It means grounding your prioritization in observable evidence:

  • What are users saying? Interview transcripts, support tickets, NPS comments
  • What are users doing? Usage data, drop-off funnels, feature adoption rates
  • What are users not doing? Features they ignore, flows they abandon, invites they never send

When you have this data in front of you, the RICE inputs stop being guesses. Reach becomes "47 users mentioned this in the last 90 days." Impact becomes "this issue correlates with 30% of our churn." Confidence becomes "high: we have 47 independent data points saying the same thing."
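The arithmetic behind RICE never changes; only the provenance of the inputs does. Here is a minimal sketch of the score using the illustrative figures from this section (the impact scale and effort units follow common RICE conventions, not anything prescribed here):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Classic RICE: (Reach x Impact x Confidence) / Effort.

    reach      -- users affected per time period (e.g. per quarter)
    impact     -- per-user impact, commonly on a 0.25-3 scale
    confidence -- 0.0-1.0, and ideally backed by evidence, not a gut number
    effort     -- person-months
    """
    return (reach * impact * confidence) / effort

# Guessed inputs ("let's say 500 users?", "like... medium?" confidence):
guess = rice_score(reach=500, impact=1, confidence=0.5, effort=2)  # 125.0

# Evidence-backed inputs (47 mentions in 90 days, churn correlation, high confidence):
evidence = rice_score(reach=47, impact=2, confidence=0.8, effort=2)  # 37.6
```

The formula is identical in both calls. What changes is that the second score can be defended in a roadmap meeting, because each input points back to observable data.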

According to Pendo's State of Product Leadership report, teams that prioritize on evidence ship features that drive 3× more user engagement than teams that rely primarily on stakeholder opinions.

The five-level hierarchy of product evidence

Not all evidence is equal. Here's how I think about it, from weakest to strongest:

  1. Stakeholder opinion "The CEO thinks we need this." Weakest signal. Often based on a single customer conversation or competitor move.
  2. Anecdotal customer feedback "A customer mentioned this on a call." One data point. Better than nothing, but easy to overweight.
  3. Clustered qualitative data "15 out of 30 interviewees mentioned this problem." Now we're talking. Patterns across multiple independent sources.
  4. Quantitative usage data "40% of new users drop off at this screen." Hard numbers that show the scale of the problem.
  5. Triangulated evidence "Users mention it in interviews, it shows up in support tickets, AND we see it in the funnel data." When qualitative and quantitative data agree, you've found something real.

Most teams operate at levels 1-2. The best teams operate at levels 3-5. The difference isn't talent or budget. It's having a system for collecting and synthesizing evidence from multiple sources.

A practical test for your roadmap

Take your current roadmap and ask this question about each item: "Can I point to specific users, by name or ticket number, who told us they need this?"

If the answer is yes, you're building on evidence. If the answer is "well, we just think it's important," you're guessing.

That's not always bad. Sometimes you need to make bets. But you should at least know when you're betting vs. when you're building on evidence.

Closing the data gap

The reason most teams don't operate at levels 3-5 isn't that they don't have data. It's that the data is scattered: interviews in Dovetail, tickets in Zendesk, NPS in Delighted, usage data in Amplitude. Nobody has time to pull it all together.

This is the core problem BuildFR solves. Feed it your interviews, tickets, and survey data, and it gives you level 3-5 evidence automatically: clustered themes, ranked by frequency and severity, with every recommendation linked to specific user evidence.

The best prioritization framework is the one with real data in it. Everything else is theater.

Ready to prioritize with evidence?

Stop plugging guesses into frameworks. Let your user data tell you what matters most.

Get Early Access