
    Why We Killed Our 'Viral Score' Feature

    We shipped a viral score, watched it quietly poison every decision creators made, and pulled it. Here's what went wrong and what replaced it.

    Last sprint we deleted a feature that was, by every dashboard we cared about, working. The viral score — a 0–100 number we attached to every trend in TINS HUB — had high engagement, was the most-clicked element on the idea card, and showed up in nearly every support message as the thing users wanted us to "make better."

    We pulled it anyway. This is the post-mortem.

    What a viral score actually is

    Every trend tool ships a version of this. TrendTok has a virality index. Exploding Topics has a growth score. Glimpse rolls a composite. We built ours for the same reason everyone else does: a single number is the easiest way to make a noisy dataset feel actionable. You compress signal velocity, audience size, recency, and a fudge factor into one integer between 0 and 100, color it green or red, and the user feels like they understand the trend.
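
    If you've never built one, the shape is roughly this. Below is a minimal sketch of that kind of composite; the signal names, weights, and decay constant are illustrative, not our production formula:

        interface TrendSignals {
          velocity: number;      // normalized 0-1 rate of mention growth
          audience: number;      // normalized 0-1 reach estimate
          recencyHours: number;  // hours since the most recent activity burst
        }

        // A weighted sum of made-up coefficients, clamped and rounded into
        // the 0-100 integer users actually saw. The fudge factor is exactly
        // what it sounds like.
        function viralScore(s: TrendSignals, fudge = 0.1): number {
          const freshness = Math.exp(-s.recencyHours / 72); // decays over ~3 days
          const raw = 0.35 * s.velocity + 0.35 * s.audience + 0.2 * freshness + fudge;
          return Math.round(Math.min(1, Math.max(0, raw)) * 100);
        }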

    We knew it was wrong about six weeks in. We kept it for another four months because retention on the idea cards was up.

    The three failure modes

    It collapsed information that needed to stay separate. A 6-hour TikTok spike and a slow-burn Reddit pillar trend would both come back as 82. Those are completely different bets. One is "post in the next three hours or skip it." The other is "this will still be relevant when you publish next Tuesday." Compressing them into the same number didn't just lose information — it actively misled people about which kind of content to make.
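
    Plugging plausible inputs into the sketch above makes the collapse concrete. The inputs are invented, but any composite in this family behaves the same way:

        // A fast spike and a slow burner, same integer:
        viralScore({ velocity: 0.98, audience: 0.52, recencyHours: 3 }); // 82
        viralScore({ velocity: 0.55, audience: 0.99, recencyHours: 6 }); // 82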

    The precision was a lie. There is no honest difference between a 78 and an 81. We made it up. The underlying signals are noisy enough that the real confidence interval was probably ±15 points. But the moment you put two numbers next to each other, humans rank them. Users were skipping 78s for 81s and we had no defensible reason to tell them that was correct.
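
    You can put a number on that dishonesty by rescoring the same trend under jittered inputs. The noise levels below are assumptions, not measured error bars, but the exercise is the point:

        // Normally distributed noise via the Box-Muller transform.
        function jitter(x: number, sd: number): number {
          const u = 1 - Math.random(); // keep u in (0, 1] to avoid log(0)
          const v = Math.random();
          return x + sd * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
        }

        // Rescore one trend 1,000 times with noisy inputs, using the
        // viralScore sketch from earlier.
        const samples = Array.from({ length: 1000 }, () =>
          viralScore({
            velocity: jitter(0.7, 0.15),
            audience: jitter(0.6, 0.15),
            recencyHours: 12,
          })
        ).sort((a, b) => a - b);

        // The middle 95% spans roughly +/-15 points around the nominal score.
        console.log(samples[25], samples[975]);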

    It pulled our own engineering in the wrong direction. Once a score exists, the natural roadmap becomes "make the score better." But there's no oracle. We can't actually know which trend went viral for which creator in which niche on which platform — we'd need a feedback loop that doesn't exist outside of the platforms themselves. So "better" drifted toward "feels more accurate," which is a UX problem dressed up as a data problem. We were spending real engineering on calibration theater.

    The signal came from our own pipeline

    The thing that finally tipped the decision came from inside the product. Two trends our Discovery layer surfaced in the same week, for a thought-leadership niche, made the case better than any internal debate had:

    The first was a trend about brands abandoning follower counts in favor of engagement-quality audits — sentiment, retention, and attribution measured separately, not collapsed into one vanity metric. The whole point of the trend was that the market had already decided single-number scores can't separate intent from staying power. We were shipping the exact pattern our own discovery pipeline was telling creators to stop trusting.

    The second was the "Micro-Community Revenue Contract" trend — Patreon and Circle reporting record sign-ups from creators who'd explicitly stopped chasing viral spikes to build recurring subscriber promises instead. Our Discovery layer flagged it at 92/100 relevance for the niche. The creators we were serving had already stopped optimizing for the spike. Our score was answering a question they no longer asked.

    When your own product is surfacing the trend that invalidates one of your features, you have to listen.

    What creators were actually asking

    Once we stopped defending the score, the real questions our users had been asking the whole time became obvious. They weren't asking how viral a trend was. They were asking four separate things:

    • Will this still be relevant by the time I publish?
    • Is my niche already saturated for this angle?
    • Does this fit the format and voice I actually make?
    • Is the upside worth the credit cost?

    A single number can't answer any of those well. Four explicit signals can.

    What replaced the score

    Each idea now carries four labeled signals, not one composite:

    Window. A four-tier timing label: Post Now, Test This Week, Early Signal, or Skip. No false precision — just the actual decision a creator has to make today. A spiking TikTok and a slow Reddit trend can't share this label by definition.

    Saturation. How crowded the angle already is inside the user's specific niche, with the keyword overlap that drove the call. We show the evidence, not just the verdict.

    Format fit. Whether the trend matches the platform and voice the creator actually publishes in. A Substack essayist and a TikTok creator get different fits for the same underlying trend, because they should.

    Cost-to-upside. The credit cost of generating against this trend versus the geo-scoped reach we can defend. Honest about both numbers, not bundled into a vibe.
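
    In code terms, the shape is closer to this. The field names are illustrative rather than our production schema, but the structural point survives: there is deliberately nothing here that folds into one sortable number.

        // Four explicit, labeled signals per idea. Names are illustrative.
        type Window = "post_now" | "test_this_week" | "early_signal" | "skip";

        interface IdeaSignals {
          window: Window;                   // the timing decision itself, not a score
          saturation: {
            level: "low" | "medium" | "high";
            overlappingKeywords: string[];  // the evidence behind the verdict
          };
          formatFit: {
            platform: string;               // e.g. "substack", "tiktok"
            fits: boolean;
            reason: string;                 // why the trend does or doesn't match
          };
          costToUpside: {
            creditCost: number;             // what generating against this costs
            defensibleReach: number;        // the geo-scoped reach we can back up
          };
          // Deliberately absent: a composite, a rank, a compareTo().
        }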

    None of these are sortable into a leaderboard. That's the point. We stopped giving users a thing to optimize against and started giving them a thing to decide with.

    Two lessons we're keeping#

    A composite score is a UX shortcut, not a product. It's a fine first pass for a beta. It's a trap as a long-term feature, because the roadmap it implies — "improve the score" — has no real definition of done.

    A high-engagement, low-outcome feature is a tax on the roadmap. Every quarter we kept the score, we paid for it in calibration work, in support load, and in the worse decisions creators made downstream. Engagement is a leading indicator of a lot of things, including "the user is confused and clicking around to figure out what to trust."

    If you're sitting on a feature your users love to argue about but don't actually act on well, that's the same shape. Users don't pay for a vibe. They pay for a decision they can defend on Monday morning. Ship the decision.
