Open any category of AI tools right now. Productivity. Writing. Image generation. Coding. Pick one.
Look at the landing pages. Look at the interfaces. Look at the onboarding flows. If you covered the logos, could you tell them apart? Most people couldn't. The spacing is the same. The colour palette is the same gradient of blue to purple. The typography is Inter, always Inter. The interface patterns are identical. The tone of voice is the same combination of "powerful," "simple," and "built for you."
This is what happens when building becomes cheap.
When the cost of building drops to near zero, everyone builds. When everyone builds using the same AI tools, trained on the same design patterns, the outputs converge. You get a sea of products that are technically functional, sometimes even impressive, but visually and experientially indistinguishable from each other.
That's where we are right now.
The commoditisation problem
This isn't an accident. LLMs are trained as prediction engines: they learn statistical patterns from millions of existing interfaces and ask "what would be the most likely next thing here?" rather than "what would be distinctive?" Researchers call this regression to the mean: models are optimised to minimise prediction error across their entire training set, which means they consistently produce outputs that match typical examples rather than exceptional ones. The average of everything they've seen.
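The "minimise prediction error → produce the average" claim can be made concrete with a toy sketch. This is not how an LLM is actually trained, just the underlying statistical point: a predictor scored by squared error over a set of examples is pushed toward their mean.

```python
import statistics

# Toy illustration: a model that must emit one prediction for a set of
# observed values, scored by total squared error, does best at the mean.
observations = [2.0, 3.0, 10.0, 3.5, 2.5]

def total_squared_error(prediction, data):
    return sum((x - prediction) ** 2 for x in data)

# Scan candidate predictions and keep the one with the lowest error.
candidates = [x / 100 for x in range(0, 1201)]
best = min(candidates, key=lambda p: total_squared_error(p, observations))

print(best)                           # 4.2
print(statistics.mean(observations))  # 4.2
```

The error-minimising answer is exactly the average, even though no individual observation is 4.2. A model optimised this way is rewarded for being typical, not for being any particular exceptional example.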
And then the feedback loop kicks in. Designs generated by AI go back into training data. The patterns reinforce. What gets selected as "good" by most users is the most familiar, not the most interesting. Over time, everything converges to the same font choices, the same rounded corners, the same colour ramps.
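The feedback loop can also be sketched as a toy simulation. The selection rule here ("users keep the most familiar-looking half") is a deliberately crude assumption, not a model of any real product market, but it shows the mechanism: once outputs are selected for familiarity and fed back in as training data, diversity collapses quickly.

```python
import random
import statistics

random.seed(0)

# Toy feedback loop: each generation, new "designs" are sampled around the
# mean of the previous generation, and the most familiar ones (closest to
# that mean) survive and become the next generation's training data.
designs = [random.uniform(0, 100) for _ in range(200)]  # diverse start
initial_spread = statistics.stdev(designs)

for generation in range(10):
    centre = statistics.mean(designs)
    spread = statistics.stdev(designs)
    # New designs: small variations on what the model has already seen.
    new = [random.gauss(centre, spread) for _ in range(200)]
    # "Selection": keep the half that looks most like everything else.
    new.sort(key=lambda d: abs(d - centre))
    designs = new[:100] * 2  # survivors dominate the next training set
    print(f"gen {generation}: spread = {statistics.stdev(designs):.2f}")
```

Each generation's spread is a fraction of the last, so after a handful of cycles the "designs" are nearly indistinguishable. No single step looks wrong; the convergence is a property of the loop itself.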
A new product drops. There's a wave of attention. Within weeks, two or three similar products exist. Sometimes better. Usually identical-looking.
The average is technically correct. It's also forgettable.
Forgettable products can't survive on product quality. They survive on distribution, timing, and luck. When there's nothing distinctive to hold onto, no reason a user would choose one over another beyond the first one they happened to find, the big players absorb the market and the smaller ones disappear. This isn't new. AI has accelerated it significantly.
What differentiation actually means
The word gets used to mean visual distinction. That's the smallest part of it.
Differentiation is why you feel slightly different using Linear than you do using Jira, even though both are project management tools. It's not just the cleaner interface. Karri Saarinen, one of Linear's founders and its CEO, wrote about this on the Figma blog: the goal was never to compete on features but on feel. Speed as a value, not just a feature. Opinions embedded in the product itself, not just the marketing.
It's why Stripe's documentation feels like something a person wrote for another person. Why some products feel considered and others feel assembled.
This shows up in the small decisions. The microcopy that sounds like a person rather than a template. The empty state that's helpful instead of generic. The hover states that feel thought through. The onboarding that meets you where you actually are rather than where an ideal user would be.
None of these are big decisions. All of them together are what makes a product feel made rather than generated.
This is literally our job
Designers have always been in the business of differentiation. Before AI, the constraint was execution speed. You knew what the product should feel like; building it just took long enough that by the time you got there, something had changed.
Now execution speed isn't the constraint. You can build the thing in a fraction of the time.
The constraint is knowing what to build. Having clear opinions about what this product should feel like, who it's actually for, what values it should embody. Having taste specific enough to see when AI has produced something technically correct but visually average, and being able to fix it. Designers using AI support produce more creative outputs, but only when they bring strong direction to the prompting. The AI doesn't supply the opinion. It executes it.
That's not a small skill. Most people building with AI right now are building to the same average because they don't have a clear opinion about what different looks like for their product. They're trusting the AI to decide. And the AI decides by consensus.
The designers who do this well are going to matter more in the next few years, not less. Not because AI can't design. Because the real question was never whether AI can design. It was always: design to what end, for whom, with what values. That question requires a person.
What to actually do about it
A lot of products right now are indistinguishable from each other. Users will notice. Some already do.
If you're building something, the question worth sitting with isn't "what should this look like." It's closer to: what should this feel like to use. What should someone think about this product after using it for a week. What would make them say "this doesn't feel like everything else."
The answers inform every design decision. The typography that gets chosen. The copy that gets written. The colour palette and how it shifts across different contexts. The tone of the error messages. The spacing that creates breathing room instead of just filling it.
AI can execute all of those decisions. It cannot make them. That part is still ours.
Further reading
- Large Language Models Are Designed to Be Average (Atomic Object)
- The AI feedback loop: When design systems train the models that critique them (Murphy Trueman)
- Karri Saarinen: 10 Rules for Crafting Products That Stand Out (Figma Blog)