AI Can Build the Interface. Who's Checking the Work?
Design Ops
AI Tools
There's a pattern in strategy games where the opening moves are fast and instinctive. You're expanding, building, moving pieces into position. But mid-game, you have to stop and actually look at the board. Because speed without evaluation just means you're losing faster.
Figma's new Code to Canvas feature follows the same logic. Claude Code generates a working UI from a prompt. MCP translates it into fully editable Figma layers. Not a screenshot. Not a flat image. Real frames your team can pull apart, rearrange, and annotate.
The AI handles the fast opening. Build a checkout flow. Generate an onboarding sequence. Spin up a settings page. That part now takes minutes.
But somebody still needs to look at the board.
Does this flow actually make sense across breakpoints? Are these components pulling from your token library, or are they "close enough blue" again? Is the hierarchy intentional, or did the AI just stack elements in the order it generated them? What about your accessibility requirements?
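Some of that evaluation can be made mechanical, and the token check is the easiest place to start. Here's a minimal sketch in TypeScript, assuming a hypothetical `TOKENS` palette and a flat list of fill colors extracted from the generated frames; it flags anything that's near, but not in, the token library. That's the "close enough blue" problem.

```ts
// Minimal sketch: flag "close enough" colors that aren't actual design tokens.
// TOKENS and the sample fills are hypothetical stand-ins for your token
// library and whatever you pull out of the generated frames.

const TOKENS: Record<string, string> = {
  "color.primary": "#2563EB",
  "color.surface": "#F8FAFC",
  "color.text":    "#0F172A",
};

// Parse "#RRGGBB" into [r, g, b].
function rgb(hex: string): [number, number, number] {
  const n = parseInt(hex.slice(1), 16);
  return [(n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff];
}

// Euclidean distance in RGB space. Crude, but enough to catch near-misses.
function distance(a: string, b: string): number {
  const [r1, g1, b1] = rgb(a);
  const [r2, g2, b2] = rgb(b);
  return Math.hypot(r1 - r2, g1 - g2, b1 - b2);
}

type Verdict =
  | { kind: "token"; name: string }
  | { kind: "near-miss"; name: string; off: number }
  | { kind: "foreign" };

// Exact token matches pass; anything merely close gets flagged for review.
function check(fill: string, threshold = 24): Verdict {
  let best: { name: string; off: number } | null = null;
  for (const [name, hex] of Object.entries(TOKENS)) {
    const off = distance(fill, hex);
    if (off === 0) return { kind: "token", name };
    if (!best || off < best.off) best = { name, off };
  }
  return best && best.off <= threshold
    ? { kind: "near-miss", ...best }
    : { kind: "foreign" };
}

// "#2563EB" is a token; "#2B61E4" is close-enough blue; "#FF00AA" is foreign.
for (const fill of ["#2563EB", "#2B61E4", "#FF00AA"]) {
  console.log(fill, check(fill));
}
```

A check like this doesn't replace the review; it just clears the mechanical misses so reviewers can spend their attention on hierarchy and flow.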
Code is great at converging. You run a build, you get one answer. The canvas is where you diverge. Where you lay three options side by side and spot the gap none of them solved. That's not a step AI lets you skip; it's the step where the product actually gets shaped.
The workflow is now a loop. Design informs code through MCP. Claude turns that code into working UI. The UI comes back to the canvas for review. Rinse, repeat. The interesting part isn't that AI can build interfaces. It's that the people evaluating those interfaces just became the bottleneck worth investing in.
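The shape of that loop is easier to see in code than in a diagram. Everything below is hypothetical: `generateUi`, `pushToCanvas`, and `humanReview` are stand-ins for the Claude call, the MCP round-trip, and your actual review process, not real APIs. The control flow is the point: the only exit is a reviewer signing off.

```ts
// Hypothetical sketch of the generate -> canvas -> review loop. None of
// these functions are real APIs; they mark where Claude Code, MCP, and
// your reviewers sit in the cycle. Stubbed so the sketch actually runs.

interface Review {
  approved: boolean;
  notes: string[]; // breakpoints, tokens, hierarchy, accessibility...
}

// Stand-in for the Claude Code generation step.
const generateUi = async (brief: string): Promise<string> =>
  `/* generated UI for: ${brief} */`;

// Stand-in for the MCP round-trip that lands editable Figma frames.
const pushToCanvas = async (_code: string): Promise<string> =>
  "figma://frame/checkout-flow";

// Stand-in for the human step: the part that doesn't scale.
let pass = 0;
const humanReview = async (_frameUrl: string): Promise<Review> =>
  ++pass < 2
    ? { approved: false, notes: ["hierarchy follows generation order, not intent"] }
    : { approved: true, notes: [] };

async function shipFlow(prompt: string): Promise<void> {
  let brief = prompt;
  while (true) {
    const code = await generateUi(brief);     // fast opening: minutes
    const frame = await pushToCanvas(code);   // real frames, not screenshots
    const review = await humanReview(frame);  // looking at the board
    if (review.approved) {
      console.log(`approved after ${pass} pass(es): ${frame}`);
      return;
    }
    // The loop converges only as fast as reviewers can say what's wrong.
    brief = `${prompt}\nAddress: ${review.notes.join("; ")}`;
  }
}

shipFlow("checkout flow, mobile-first").catch(console.error);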
Might be a good time to ask whether your design review process is ready for that. It probably isn't.
March 12, 2026