When AI Fails, Check the Brief
Most AI products don't fail because the technology is bad. They fail because someone decided to build the product around a capability before asking whether it was solving the right problem.
There's a pattern that keeps repeating. A team gets access to an impressive AI capability. They build a product around it. They launch. Users try it once, get confused or unimpressed, and leave. The post-mortem blames adoption, or timing, or the market, but almost never the decision to skip the part where you figure out what users actually need.
Greg Nudelman calls this out directly in UX for AI (Wiley, 2025). Across 35 AI projects, the consistent failure point wasn't technical. It was teams trying to replace human expertise with AI instead of augmenting it. The AI works on paper, the accuracy metrics look great, but nobody designed the experience around how a real person would actually use it.
A familiar failure mode: someone has all the right information but delivers it without any context for how to use it.
Think of a video game that front-loads its entire tutorial into the intro. Technically complete, practically useless. You skip through and figure things out yourself, because knowing everything isn't the same as understanding what matters right now. AI products hit the same wall. The model knows everything. The interface explains nothing.
The fix is the same thing designers have always done, just applied earlier. Start with the user's actual workflow, not the model's capabilities. Map where AI genuinely reduces effort versus where it just adds a layer of uncertainty. Design the moment where the user needs to trust, verify, or override the output. Test with real tasks, not demo scenarios.
The teams getting this right start with the user problem, not the model. That hasn't changed.
April 28, 2026