When AI Agents Build Their Own Interfaces
There's a moment in old choose-your-own-adventure books where you realize the author didn't write every possible story. They wrote the rules for how stories branch. Page 1 leads to page 47 or page 89 depending on your choice, and suddenly you're holding seventeen different narratives in one book.
A2UI works the same way. Except instead of "turn to page 47," the AI agent is reading a JSON protocol that says "when the user needs checkout, render payment form with these security parameters." You're not designing the interface anymore. You're designing the RULES that generate interfaces.
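To make that concrete, here is a rough sketch of what an agent-emitted message might look like. This is an illustrative shape only, not the actual A2UI schema; the `surface`, `components`, and prop names are hypothetical. The point is that the agent emits data describing structure, and the host app decides what to do with it.

```typescript
// Hypothetical message shape for illustration -- NOT the real A2UI schema.
// The agent never sends executable code, only a description like this.
type UIMessage = {
  surface: string;
  components: Array<{
    type: string;                    // drawn from an allowlist the host controls
    props: Record<string, unknown>;  // plain data, never functions or scripts
  }>;
};

const checkoutMessage: UIMessage = {
  surface: "checkout",
  components: [
    { type: "paymentForm", props: { tokenized: true, requireCvc: true } },
    { type: "button", props: { label: "Pay now", action: "submitPayment" } },
  ],
};

console.log(checkoutMessage.components[0].type); // "paymentForm"
```

The "turn to page 47" move is the `type` field: the agent names a component, it doesn't build one.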
Google released this in December 2025, and most designers probably missed it because it sounds like developer infrastructure. It's not. It's the difference between handing off a Figma file and handing off the logic behind why that Figma file exists.
The protocol is declarative, not imperative, and security-first: the AI doesn't execute code, it describes structure. LLM-friendly means the agent reads JSON the way you read a design system doc. Framework-agnostic means it works whether your engineers use React, Vue, or whatever comes next month.
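The declarative, security-first bargain can be sketched in a few lines. This is a toy host-side registry, with made-up component names and string output standing in for React, Vue, or anything else: the agent can only name components the host has registered, and anything outside the registry is rejected rather than executed.

```typescript
// Sketch of a host-side component registry. Names are illustrative;
// the renderer functions stand in for React/Vue/whatever the team uses.
type Descriptor = { type: string; props: Record<string, unknown> };
type Renderer = (props: Record<string, unknown>) => string;

const registry = new Map<string, Renderer>([
  ["button", (p) => `<button>${p.label}</button>`],
  ["paymentForm", (p) => `<form data-tokenized="${p.tokenized}"></form>`],
]);

function render(d: Descriptor): string {
  const renderer = registry.get(d.type);
  // Unknown types fail closed -- the host never evals what the agent sends.
  if (!renderer) throw new Error(`Unknown component: ${d.type}`);
  return renderer(d.props);
}

console.log(render({ type: "button", props: { label: "Pay now" } }));
// <button>Pay now</button>
```

Swapping React for Vue means swapping the renderer functions; the agent's side of the conversation doesn't change at all.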
Here's what changes: Instead of designing seventeen checkout variations for different contexts (guest checkout, returning user, saved payment, promotional code, international shipping), you define the RULES. The AI agent generates the appropriate interface based on the user's actual context at that moment.
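As a rough sketch of what "defining the RULES" might look like in practice (all names here are hypothetical, not A2UI's actual vocabulary): one rule function, evaluated against the user's context, replaces the seventeen hand-designed variants.

```typescript
// Illustrative only: one rule set covering the checkout variants,
// assuming hypothetical context fields and component names.
type Context = {
  returningUser: boolean;
  hasSavedPayment: boolean;
  promoEligible: boolean;
};

type Component = { type: string; props?: Record<string, unknown> };

// Encode the branching once instead of designing each variant by hand.
function checkoutComponents(ctx: Context): Component[] {
  const ui: Component[] = [];
  ui.push(
    ctx.hasSavedPayment
      ? { type: "savedPaymentPicker" }
      : { type: "paymentForm", props: { tokenized: true } }
  );
  if (ctx.promoEligible) ui.push({ type: "promoCodeField" });
  if (!ctx.returningUser) ui.push({ type: "guestEmailField" });
  ui.push({ type: "button", props: { label: "Pay now" } });
  return ui;
}

const guest = checkoutComponents({
  returningUser: false,
  hasSavedPayment: false,
  promoEligible: false,
});
console.log(guest.map((c) => c.type));
// [ 'paymentForm', 'guestEmailField', 'button' ]
```

A returning user with a saved card gets a different component list from the same rules; the design decision lives in the branching, not in any one mockup.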
You're not removed from the process. You're elevated in it. Your job shifts from "make this button blue and 44px tall" to "define when primary actions appear and why they matter." From execution to orchestration.
Design systems taught us to build reusable components. A2UI asks: what if the components build themselves based on context, and your job is making sure they follow the right RULES?
February 5, 2026