Why it worked
Four things ready before the AI saw the project.
Looking at the 22-hour figure (which includes ~4.5 hours of prep at the very start), it's tempting to think the AI did magic. It didn't. I wouldn't have finished within that window without four things.
AI is like an engineer who just joined the team: it writes code, but it doesn't know your product, your experience, or your design. All of that has to be on its desk before it starts writing.
01
Experience with dashboards and data-heavy interfaces
I've built dashboards and admin tools from scratch before, plus reworked existing ones. That experience gave me a map of typical UX problems in this kind of product, and the intuition to spot when the AI proposes something "technically correct" but wrong for this context.
For example, the AI defaulted to tabs for the 12+ sections in the Validator detail. I knew this needed a grouped sub-nav, because past the seventh tab the pattern collapses. The AI wouldn't have proposed that on its own, and without the experience I would have accepted "modern tabs" at face value.
What this saved: dozens of "no, not like that" interactions. I caught the AI's default misses before it got around to coding them.
02
A solid system. The product was well thought through.
The CatBM backend was clean: 14 GET endpoints with consistent URL design, Kubernetes-style apiVersion/kind in JSON, well-defined entities (Validator, Application, User, Right, Party). I didn't reverse-engineer anything or guess what fields meant. I read the JSON and the domain model explained itself.
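I won't reproduce the real schema here, but the envelope looked roughly like this. A hedged sketch in Python: the apiVersion/kind pair is from the actual API, while the entity fields and the metadata/spec/status split are illustrative, not the real payload.

```python
# Hypothetical shape of a CatBM GET response. The apiVersion/kind envelope
# is real; everything inside it is a placeholder for illustration.
validator = {
    "apiVersion": "catbm/v1",
    "kind": "Validator",
    "metadata": {"id": "val-042", "party": "party-007"},
    "spec": {"name": "Acme Validator"},
    "status": {"state": "active"},
}

def entity_kind(payload: dict) -> str:
    """With a consistent envelope, one field tells you what you're holding."""
    return payload["kind"]

assert entity_kind(validator) == "Validator"
```

When every response self-identifies like this, neither I nor the AI had to guess which entity a blob of JSON belonged to.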
If the API is chaotic, the first 4-6 hours go straight into reverse engineering. Here I spent zero hours on that.
What would have hurt without it: with a scattered domain model the AI would have started inventing its own entities.
03
An existing design. Several pages and a design system.
The CatalyX Design System was already in Figma: tokens (cobalt, gray ramp, success/warning/error), the Inter type scale plus the Space Grotesk wordmark, a Validator details reference frame. I didn't choose colours or argue with myself over "size 11 or 12, line-height 1.4 or 1.5". It was all there.
I pulled tokens and screenshots from Figma via MCP, handed them to the AI, and it applied them. Building a lookalike of the existing UX paradigm took about 2 hours instead of 2 days.
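Roughly what a handoff like that can look like, sketched in Python. The values below are placeholders, not the actual CatalyX palette, and the flattening helper is my illustration of the idea, not the real pipeline.

```python
# Illustrative token set — placeholder values, not the actual CatalyX palette.
TOKENS = {
    "color.cobalt.500": "#2447f0",   # primary accent (placeholder hex)
    "color.gray.100": "#f4f5f7",     # top of the gray ramp
    "color.success": "#1f9d55",
    "color.warning": "#d69e2e",
    "color.error": "#e53e3e",
    "font.body": "Inter",
    "font.wordmark": "Space Grotesk",
    "type.body.size": "12px",
    "type.body.lineHeight": "1.5",
}

def to_css_variables(tokens: dict) -> str:
    """Flatten token names into CSS custom properties the AI can apply."""
    lines = [f"  --{name.replace('.', '-')}: {value};" for name, value in tokens.items()]
    return ":root {\n" + "\n".join(lines) + "\n}"

print(to_css_variables(TOKENS))
```

The point isn't the format; it's that the AI received named decisions instead of being asked to make them.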
What would have hurt without it: with no design at all, either I or the AI would have invented tokens, hierarchy, typography from scratch. That's a week-long project on its own.
04
~4.5 hours of prep at the very start
The first ~4.5 hours (inside the 22) went into plumbing before the AI wrote a line of code. I scraped the live dev site via cURL with a real Bearer token, saved 43 JSON snapshots of every GET endpoint, brought up a local Python mock serving the UI plus mock API with SPA fallback, and pretty-printed the original JS and CSS bundles for readability.
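A minimal sketch of what that mock might look like, assuming the snapshots are saved as files whose paths mirror the endpoints. The directory names, the /api/ prefix, and the port are my assumptions, not the actual setup.

```python
# Minimal mock: serve saved JSON snapshots for API routes, static files for
# assets, and index.html for everything else (SPA fallback for deep links).
# snapshots/ layout assumed to mirror the API, e.g. snapshots/api/v1/validators.json
import functools
from pathlib import Path
from http.server import HTTPServer, SimpleHTTPRequestHandler

SNAPSHOTS = Path("snapshots")   # the 43 pretty-printed GET responses
UI_ROOT = Path("ui")            # index.html plus the prettified JS/CSS bundles

class MockHandler(SimpleHTTPRequestHandler):
    def do_GET(self):
        # 1. API routes: replay the saved snapshot verbatim.
        snapshot = SNAPSHOTS / (self.path.lstrip("/") + ".json")
        if self.path.startswith("/api/") and snapshot.is_file():
            body = snapshot.read_bytes()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
            return
        # 2. Static assets: defer to the default file handler.
        if (UI_ROOT / self.path.lstrip("/")).is_file():
            return super().do_GET()
        # 3. SPA fallback: any other route gets index.html so client-side
        #    routing works on deep links without a real backend.
        self.path = "/index.html"
        return super().do_GET()

if __name__ == "__main__":
    handler = functools.partial(MockHandler, directory=str(UI_ROOT))
    HTTPServer(("127.0.0.1", 8000), handler).serve_forever()
```

Thirty-odd lines like this were enough to let the AI develop against real data without ever touching the live environment.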
In parallel came the Figma pull described above (tokens and screenshots via MCP), plus three Obsidian documents: a full UI inventory, a four-layer UX audit, and a Phase 1 spec with a sitemap and milestones. After these 4.5 hours the AI had a live mock API with real data, the design tokens in front of it, and a document saying exactly what needed to be done.
What would have hurt without it: the AI would have wandered and invented. With prep, 22 hours became a realistic window.