Feature Overdose
A team ships 11 features in a quarter. The board is happy. Delivery is green. When someone checks the usage data, 3 are actually adopted. The other 8 exist in the product, in the documentation, in the support scope. Not in user habits.
Not an outlier. A pattern. I've seen it in many of the startups and scaleups I've worked with on the product side.
In a previous article, I wrote about the invisible technical cost of AI-generated code. Comprehension debt: the code that works but nobody understands. There's a second cost, on the product side this time. Just as invisible.
AI has accelerated our ability to build. The ability to think through what we build hasn't kept up: not just what to remove, but what not to build in the first place.
Three weeks of dev was also three weeks to think
Pendo measured the median feature adoption rate across thousands of SaaS products (1): 6.4%. Out of 100 features a team designs, builds and ships, roughly 6 drive 80% of usage. Even top-10% products don't exceed 15.6%.
These numbers predate the AI acceleration. When we shipped at human pace, 94% of features were already not adopted. What happens when you ship 2 to 3 times faster?
Three weeks of development was also three weeks during which the PM refined the scope. The designer iterated on mockups. Stakeholders lived with the idea and sometimes realized it wasn't the right one. The build cycle doubled as a maturation cycle. AI compressed the build time. It compressed the thinking time with it. Not out of negligence. The cycle just accelerated beyond our capacity to absorb it.
I call this feature overdose. Each feature, taken individually, is a good idea. It's the accumulation that kills the product.
Every unused feature carries a double cost. On the technical side: maintenance, complexity, code to evolve. The Systemic Additivity Index from the previous article measures exactly that. On the product side: interface surface, documentation, support, onboarding, cognitive load for the user.
There's also a human cost that dashboards don't capture. In the startups I've worked with on the product side, the warning sign is never a metric. It's the new hire who takes three months to understand the product instead of three weeks. It's the customer who uses 10% of the features and pays for 100%. Nobody's failing. The product has outgrown the people who carry it, and the people who use it.
Adding costs two days. Removing is a decision.
AI has pushed the cost of building a feature toward zero. The cost of removing one hasn't moved.
Removing a feature means saying no to someone. The enterprise client who requested it. The PM who spent three months on it. Sales already showed it in a demo. The problem isn't technical. It's political. That's why nobody does it.
It's not just a lack of courage. It's a documented cognitive bias. In 2021, Adams, Converse, Hales and Klotz published a study in Nature spanning eight experiments (2). The most telling protocol: participants must stabilize a Lego structure so it can support a weight. The optimal solution is to remove a piece. Removal is free. Each added piece costs $0.10. Result: 59% of participants add anyway. When explicitly reminded that removal is free, 61% subtract, and earn 10% more. The researchers conclude that people don't reject subtraction. They don't see it. It doesn't exist in their solution space.
In product management, the additive bias is structural. Roadmaps list what we'll build, never what we'll remove. OKRs measure what we shipped, not what we simplified. AI makes the asymmetry worse: when building cost three weeks, refusing to build was easy to justify. "We don't have the bandwidth." When building costs two days, every refusal seems irrational. "It's only two days, might as well do it." The last natural guardrail of prioritization, dev time, is gone. Nothing has replaced it.
Something needs to. The replacement isn't a process. It's the product vision. When dev time no longer filters, vision has to. If a feature doesn't serve the vision, it doesn't get built, even if it only takes two days. "We could build it" is not a reason. "It serves where we're going" is. Everything else is noise, no matter how cheap it is to ship.
Feature overdose has two sources. Building what shouldn't have been built. And not removing what was. The first one is harder to see because it looks like productivity. The team shipped fast, the board is happy, the backlog is shrinking. But shipping fast in the wrong direction doesn't clear the backlog. It fills the product.
As with code, you can measure the asymmetry. I propose the Net Feature Ratio: the number of features added per quarter, divided by the number removed plus one (so the ratio stays defined in the common case where nothing gets removed). It's the product-side mirror of the SAI (Systemic Additivity Index) from the previous article, the insertion/deletion ratio on the code side.
Net Feature Ratio = features added / (features removed + 1)
Below 3, the team builds and prunes: for every 3 features added, at least 1 is removed or consolidated. That's a sustainable pace. Above 10, quarter after quarter, the product bloats without counterbalance. The interface surface grows, user cognitive load rises, but nothing comes out.
By feature, I mean a user-facing capability: a screen, a workflow, an export mode. Not a bugfix or a performance improvement. What matters is what the user sees and has to understand.
Not an industry standard. It's a diagnostic tool I'm starting to use, like the SAI in the previous article. It takes on meaning over time. A product launch quarter will naturally have a high ratio. The signal is the trend.
How to measure it: a spreadsheet with two columns. Features added, features removed (or consolidated, or hidden). Updated each release. If the right column has been empty for three quarters, that's the signal.
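If you prefer a script to a spreadsheet, the same log fits in a few lines of Python. A minimal sketch, with made-up quarter labels and counts, applying the thresholds above:

```python
# Minimal sketch: tracking the Net Feature Ratio per quarter.
# Quarter labels and counts are illustrative, not real data.

def net_feature_ratio(added: int, removed: int) -> float:
    """NFR = features added / (features removed + 1)."""
    return added / (removed + 1)

# One row per quarter: user-facing features added vs. removed / consolidated / hidden.
quarters = [
    ("2024-Q1", 11, 0),
    ("2024-Q2", 9, 1),
    ("2024-Q3", 14, 0),
]

for label, added, removed in quarters:
    nfr = net_feature_ratio(added, removed)
    if nfr < 3:
        verdict = "building and pruning"
    elif nfr > 10:
        verdict = "bloating without counterbalance"
    else:
        verdict = "watch the trend"
    print(f"{label}: NFR = {nfr:.1f} ({verdict})")
```

The verdict labels just echo the thresholds above. As with the spreadsheet, the signal is the trend across quarters, not any single number.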
What velocity dashboards don't show
The NFR measures the asymmetry at the roadmap level. But product bloat has other signals, often earlier ones.
Time-to-value for new users. The time between signup and the moment the user gets value from the product. If this grows release after release, while the product is supposed to be improving, it means each added feature is complicating the path to value instead of shortening it.
The onboarding coverage ratio. If onboarding covers only 20% of the product, it's an implicit admission: 80% of the surface isn't deemed important enough to show a new user. But it still exists. In the interface, in the documentation, in support.
The number of features removed per quarter. Not a ratio. The raw count. In most organizations, this number is zero. Not since last quarter. Since forever. When I ask this question in the startups I work with, the silence says everything. Pendo frames feature removal as an "innovation event" (3), an act that frees engineering resources for higher-impact work. But for most teams, it's not even a mental category.
The percentage of shipped features tied to a strategic objective. Look at what you shipped last quarter. How much of it directly serves the product vision? If 40% doesn't connect to any strategic axis, that's noise that made it through the filter. The feature may be useful. It may even be adopted. But if it doesn't serve where the product is going, it's dilution.
These signals don't require new tooling. Time-to-value is in your onboarding analytics. The coverage ratio is in your documentation. The removal count is in your roadmap. Or rather, in its absence. The strategic alignment ratio is in your roadmap too, if you're honest about it.
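If you'd rather pull two of these numbers out of an export than eyeball them, here is a minimal sketch. The feature names, objectives, and timings are illustrative, not real data:

```python
# Minimal sketch of two signals: time-to-value and strategic alignment.
from statistics import median

# Time-to-value: hours between signup and the first value event, per new user
# (hypothetical numbers standing in for an onboarding-analytics export).
hours_to_first_value = [2.5, 6.0, 1.0, 48.0, 3.5]
print(f"Median time-to-value: {median(hours_to_first_value):.1f} h")

# Strategic alignment: share of shipped features tied to a strategic objective.
shipped = [
    {"feature": "bulk export", "objective": "enterprise onboarding"},
    {"feature": "dark mode", "objective": None},
    {"feature": "SSO", "objective": "enterprise onboarding"},
    {"feature": "emoji reactions", "objective": None},
]
aligned = sum(1 for f in shipped if f["objective"]) / len(shipped)
print(f"Strategically aligned: {aligned:.0%} of shipped features")
```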
"People don't reject subtraction. They just don't think of it."
— Adams, Converse, Hales & Klotz, Nature (2021)
Feature overdose doesn't come from a bad decision. It comes from a hundred good decisions that nobody counted. Adding is now trivial. Removing is just as hard as it ever was. The gap widens with every sprint.
Four signals to watch:
- The Net Feature Ratio on your roadmap. Above 10 quarter after quarter, the product is bloating.
- Time-to-value for new users. If it's growing, complexity is winning.
- The number of features removed per quarter. If it's been zero for a year, the question isn't even being asked.
- Strategic alignment of shipped features. If a significant share doesn't serve the vision, you're building fast in the wrong direction.
The missing piece isn't a tool. It's a habit. Two questions that should be part of every sprint: "what do we remove?" and "should we even build this?" Teams that ask both free up clarity, bandwidth, and absorption capacity for the features that actually matter.
The previous article covered the cost on the code side. This one covers the cost on the product side. Together, they form the true price of a feature in the age of AI.
Sources
(1) Feature Adoption Benchmarking — Pendo (2025)
(2) People systematically overlook subtractive changes — Adams, Converse, Hales & Klotz, Nature (2021)
(3) How to Effectively Remove and Retire a Feature — Pendo
(4) [Cheap to build, costly to keep](../ia-qualite/3 - Cheap to build, costly to keep.md) — ShapeAndShip (Apr 2026)