AI · 5 min read · 12 August 2025

The Problem With Shipping AI Features

AI · Product · Engineering · UX · Adoption

There is a pattern I have observed repeatedly in 2025. A team identifies a use case where an AI feature would add genuine value. They build a prototype. The prototype is impressive. They ship it. User adoption is lower than expected. The team is confused because the prototype worked well in their testing.

The gap between prototype and adoption is not usually a capability gap. The AI feature does what it was supposed to do. The gap is in the assumptions the team made while building: about how users would interact with the feature, what they would expect it to do, and how they would respond when it did not do that.

The first assumption that tends to be wrong is about trust calibration. Users who encounter an AI feature for the first time in a product are not neutral about it. They arrive with expectations shaped by previous experiences with AI tools, which may have been positive or deeply frustrating. A user who has had bad experiences with chatbots will approach a new AI feature with defensive skepticism that the feature has to work hard to overcome. A user who has been impressed by AI elsewhere may arrive with expectations the feature cannot meet. Neither behaves like the neutral evaluator a prototype test assumes.

The second assumption that tends to be wrong is about where in the workflow the feature fits. Prototypes are often built with the AI feature at the centre of the interaction. In practice, users have existing workflows. An AI feature that requires them to change how they start a task has much higher adoption friction than one that appears at a natural transition point in something they are already doing.

I have become interested in what I think of as the minimum viable AI feature. Rather than building a comprehensive AI capability and hoping adoption follows from comprehensiveness, the question is: what is the smallest thing the AI can do that delivers enough value, in enough situations, that users encounter it regularly and come to rely on it? That smallest thing is usually much smaller than the team initially imagined.

Getting to that point requires more user research than most AI feature teams invest in before shipping. Not testing whether the AI capability works technically, but understanding what the user is actually trying to accomplish, where in their process they are most uncertain or frustrated, and what form of assistance would feel useful rather than intrusive at that moment.
