What is your honest take on AI-generated UI — faster mockups or more rework later?
We prototyped a dashboard in hours, then spent days aligning spacing and states with design system tokens.
Was the shortcut net positive for your team?
15 replies
Great for exploring layout density; bad for accessibility unless someone audits focus order manually.
We use AI for three variants, pick one, then rebuild cleanly in our component library — avoids CSS soup.
Designers felt threatened until we reframed it as mood-board generation, not final deliverables.
Generated JSX often ignored our i18n pipeline — now we lint for hard-coded strings before merge.
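If it helps anyone, a minimal sketch of our lint setup (ESLint flat config; assumes eslint-plugin-react, and the paths and options are illustrative, not our exact config):

```js
// eslint.config.js — flag raw string literals in JSX so hard-coded
// copy can't slip past the i18n pipeline before merge.
import react from "eslint-plugin-react";

export default [
  {
    files: ["src/**/*.jsx"],
    plugins: { react },
    languageOptions: {
      parserOptions: { ecmaFeatures: { jsx: true } },
    },
    rules: {
      // noStrings also catches literals inside JSX expressions;
      // ignoreProps: false extends the check to prop values.
      "react/jsx-no-literals": [
        "error",
        { noStrings: true, ignoreProps: false },
      ],
    },
  },
];
```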
Motion specs were hilariously over the top — fun for demo, stripped for production.
Pairing generated markup with Storybook visual tests caught regressions we would have missed eyeballing Figma.
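Rough shape of the stories we write per generated screen (CSF3; `Dashboard` and its props are stand-ins for whatever the generator produced, not a real API):

```tsx
// Dashboard.stories.tsx — one story per state, so the visual
// test runner snapshots each one instead of just the default.
import type { Meta, StoryObj } from "@storybook/react";
import { Dashboard } from "./Dashboard";

const meta: Meta<typeof Dashboard> = { component: Dashboard };
export default meta;

type Story = StoryObj<typeof meta>;

export const Default: Story = {
  args: { widgets: [{ id: "revenue", title: "Revenue" }] },
};

// The states generated markup tends to forget.
export const Empty: Story = { args: { widgets: [] } };
export const Loading: Story = { args: { widgets: [], loading: true } };
```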
PM velocity went up; design debt metrics went up too — we are watching the ratio quarter to quarter.
Helpful for marketing one-offs where brand guidelines are looser than the product itself.
Our design system tokens in a private package finally paid off — models can import the same names engineers use.
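The package itself is nothing fancy; a sketch of the shape (package name and values are made up):

```ts
// @acme/design-tokens — the same names designers, engineers,
// and the prompt all reference, so generated code can import them.
export const tokens = {
  color: {
    surface: "#ffffff",
    surfaceDark: "#111827",
    textPrimary: "#111827",
    accent: "#2563eb",
  },
  space: { xs: "4px", sm: "8px", md: "16px", lg: "24px" },
  radius: { sm: "4px", md: "8px" },
} as const;

export type Tokens = typeof tokens;
```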
Dark mode was consistently wrong until we fed explicit contrast tokens into the prompt constraints.
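We also gate the pairs in CI with a plain WCAG 2.x contrast check, roughly this (helper names are ours, not from any library; the formula is the standard one):

```ts
// Contrast ratio per WCAG 2.x: (L1 + 0.05) / (L2 + 0.05),
// where L1/L2 are relative luminances of the lighter/darker color.
function channel(hex: string, offset: number): number {
  const c = parseInt(hex.slice(offset, offset + 2), 16) / 255;
  // sRGB linearization
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function luminance(hex: string): number {
  const h = hex.replace("#", "");
  return 0.2126 * channel(h, 0) + 0.7152 * channel(h, 2) + 0.0722 * channel(h, 4);
}

export function contrastRatio(fg: string, bg: string): number {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// e.g. fail the build if a dark-mode text/surface pair drops below AA (4.5:1)
console.assert(contrastRatio("#f9fafb", "#111827") >= 4.5);
```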
Intern onboarding improved because they could see a whole page scaffold before diving into primitives.
We still insist on real content, not lorem ipsum, before usability tests — otherwise feedback is meaningless.
Handoff time dropped when AI annotated components with likely prop names matching our codebase.
Biggest rework came from ignoring empty states; the model always assumed happy-path data.
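What our cleanup pass adds back, roughly (component and props are hypothetical):

```tsx
// Force the empty/loading/error branches the generator skips.
import React from "react";

type WidgetListProps = {
  widgets: { id: string; title: string }[];
  loading?: boolean;
  error?: string;
};

export function WidgetList({ widgets, loading, error }: WidgetListProps) {
  if (loading) return <p aria-busy="true">Loading widgets…</p>;
  if (error) return <p role="alert">Could not load widgets: {error}</p>;
  if (widgets.length === 0) return <p>No widgets yet. Add one to get started.</p>;
  return (
    <ul>
      {widgets.map((w) => (
        <li key={w.id}>{w.title}</li>
      ))}
    </ul>
  );
}
```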
Net positive if you budget explicit time for cleanup; net negative if you ship the first render to customers.