Internal · CatalyX Blockchain Manager redesign

How I rewrote the CatBM UI
in one working session

What I actually did, where the AI helped, where it's limited, and the habits I had to learn. This isn't magic. Without the deadline the same scope would have unfolded over 3-4 working days at a normal pace. Either way, faster than the typical month.

Viacheslav Honcharov · Marketing Manager · UX/UI Designer · No-code Developer

22h
total time, including ~4.5h of prep
25+
routes in the application
43
real-API snapshots
Method

Three rules that carried the working session.

The first thing I did was ask the AI not to do anything without my confirmation. The second: show me a visual reference before writing any code. The third: watch its work in real time and don't be afraid to interrupt. All of this works specifically in the terminal version of Claude Code, where every action is visible as it happens. In IDE or web integrations you have less of this fine-grained control.

01

Make the AI ask first.

The AI's default behaviour is to start doing things right after your message. It's easier for it to generate 1,500 lines of code in its own style than to stop and ask "what do you have in mind for this section?". If you don't make it ask, it won't ask.

Every unconfirmed assumption costs hours of edits later. Every 30 seconds spent clarifying saves those hours. This isn't about being cautious. It's about the economics of time.

When the AI started aggressively anonymising the CatalyX brand, prod URLs, and the IntellectEU footer (even though I had only asked it to remove some long copy-pasted strings), I immediately said "no, roll it back, you misunderstood". The revert cost 30 minutes. If I had trusted it and moved on, I'd have lost half a day rebuilding on top of the wrong base.
02

Visual changes first, technical changes second.

Without a visual reference, the AI improvises. It invents its own layout, its own spacing, its own hierarchy. "Technically fine", visually slightly off.

When you give it the visual first (a Figma frame, a screenshot of a reference site, even words describing where things go), it anchors to that as ground truth. Technical edits after that become incremental detailing. Without a visual you burn time explaining things that would be obvious at a glance.

When the AI reported "sidebar collapse works", I checked in the browser and there was no width animation at all. The CSS token duration-normal didn't exist, so the transition duration resolved to 0s. The AI still wrote "the transition looks correct". Visible only in the browser and only with your own eyes.
03

Watch the work in real time. Ctrl+C when it goes off course.

The AI writes code as a stream. You can see every tool call, every decision, every line. Don't leave it unattended even for 5-10 minutes: in that window it can drift off course and stack dozens of edits on top of the wrong base.

The moment you notice it going the wrong way, hit Ctrl+C. No politeness, no "let it finish first". The earlier you stop it, the less you have to roll back. This works specifically in the terminal version of Claude Code with stream output. In IDE or web integrations you mostly see the "final result" by the time you can react.

When the AI started refactoring an entire block of layouts instead of fixing one component, I hit Ctrl+C and said "no, only that file, leave the others alone". That saved about 40 minutes of rollbacks.
Why it worked

Four things ready before the AI saw the project.

Looking at the 22-hour number (which includes ~4.5h of prep at the very start), it's tempting to think the AI did magic. It didn't. I wouldn't have made it within this window without four things.

AI is like an engineer who just joined the team: it writes code but doesn't know your product, your experience, your design. All of that has to be on its desk before it starts writing.
01

Experience with dashboards and data-heavy interfaces

I've built dashboards and admin tools from scratch before, plus reworked existing ones. That experience gave me a map of typical UX problems in this kind of product, and the intuition to spot when the AI proposes something "technically correct" but wrong for this context.

For example, the AI defaulted to tabs for 12+ sections in the Validator detail. I knew this needed a grouped sub-nav, because after the seventh tab the pattern collapses. The AI wouldn't have proposed that on its own. Without the experience I would have believed "modern tabs".

What this saved: dozens of "no, not like that" interactions. I caught the AI's default misses before it got around to coding them.
02

A solid system. The product was well thought through.

The CatBM backend was clean: 14 GET endpoints with consistent URL design, Kubernetes-style apiVersion/kind in JSON, well-defined entities (Validator, Application, User, Right, Party). I didn't reverse-engineer anything or guess what fields meant. I read the JSON and the domain model explained itself.

If the API is chaotic, the first 4-6 hours go straight into reverse engineering. Here I spent zero hours on that.

What would have hurt without it: with a scattered domain model the AI would have started inventing its own entities.
03

An existing design. Several pages and a design system.

The CatalyX Design System was already in Figma: tokens (cobalt, gray ramp, success/warning/error), the Inter type scale plus the Space Grotesk wordmark, a Validator details reference frame. I didn't choose colours or argue with myself over "size 11 or 12, line-height 1.4 or 1.5". It was all there.

I pulled tokens and screenshots from Figma via MCP, handed them to the AI, and it applied them. Building a lookalike of the existing UX paradigm took about 2 hours instead of 2 days.

What would have hurt without it: with no design at all, either I or the AI would have invented tokens, hierarchy, typography from scratch. That's a week-long project on its own.
04

~4.5 hours of prep at the very start

The first ~4.5 hours (inside the 22) went into plumbing before the AI wrote a line of code. I scraped the live dev site via cURL with a real Bearer token, saved 43 JSON snapshots of every GET endpoint, brought up a local Python mock serving the UI plus mock API with SPA fallback, and pretty-printed the original JS and CSS bundles for readability.

In parallel I pulled tokens and screenshots from Figma via MCP and wrote three Obsidian documents: a full UI inventory, a four-layer UX audit, and a Phase 1 spec with sitemap and milestones. After these 4.5 hours the AI had a live mock API with real data, design tokens in front of it, and a document saying exactly what needed to be done.
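The snapshot-plus-fallback setup above is simple enough to sketch. This is not the project's actual serve.py, just a minimal illustration of the same idea under assumed names (snapshots/, dist/, the path-to-filename scheme are all mine): API paths are answered from saved JSON snapshots, and every unknown route falls back to index.html so client-side routing keeps working.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

SNAPSHOT_DIR = Path("snapshots")  # saved GET responses, one JSON file per endpoint
DIST_DIR = Path("dist")           # built UI assets

def snapshot_name(path: str) -> str:
    """Map an API path to a snapshot file name,
    e.g. /api/v1/validators -> api__v1__validators.json (query string ignored)."""
    return path.strip("/").split("?")[0].replace("/", "__") + ".json"

class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/api/"):
            snap = SNAPSHOT_DIR / snapshot_name(self.path)
            if snap.is_file():
                self._send(200, snap.read_bytes(), "application/json")
            else:
                self._send(404, b'{"error": "no snapshot"}', "application/json")
        else:
            # SPA fallback: any route without a matching asset gets index.html
            asset = DIST_DIR / self.path.lstrip("/")
            target = asset if asset.is_file() else DIST_DIR / "index.html"
            self._send(200, target.read_bytes(), "text/html")

    def _send(self, status: int, body: bytes, ctype: str) -> None:
        self.send_response(status)
        self.send_header("Content-Type", ctype)
        self.end_headers()
        self.wfile.write(body)

def run(port: int = 8000) -> None:
    HTTPServer(("127.0.0.1", port), MockHandler).serve_forever()
```

The point of the naming scheme is that it's deterministic both ways: the scraping script and the mock server agree on it without any routing table.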

What would have hurt without it: the AI would have wandered and invented. With prep, 22 hours became a realistic window.
Tools

What was at hand and sped up the loop.

Beyond Claude Code itself, three things made a real difference in the work. Without them the loop would have been much slower, and decorative or UX details would have been pulled out of thin air.

Claude Code skill

ui-ux-pro-max

A built-in library of 67 styles, 96 palettes, 57 font pairings, 25 chart types. Instead of "AI, invent something", I asked "find a palette and font pair that fits the CatBM tone". Concrete references, not improvisation.

Where it helped: picking the typographic hierarchy, status accent colours, and the Inter + Space Grotesk pairing.
Claude Code skill

emil-design-engineering

A skill encoding Emil Kowalski's animation rules: the frequency principle, ease-out for entries, never transition:all, respect prefers-reduced-motion. Instead of "add some nice animations", four targeted edits on a solid base.

Where it helped: sidebar collapse, button hover transforms, theme switch transitions, reveal-on-scroll. No explosion of decorative effects.
Voice tool

Voice input

Either Monologue or the built-in voice input in Claude. It's faster and easier to say "I want the logo to be clickable so it goes to the dashboard" than to type it. Voice-described intent shortens the loop dramatically.

Where it helped: long descriptions of UX intent, corrections like "no, I meant something else", structure discussions without typing fatigue.
Outcome

What was on the table the next day.

Not a presentation, not a Figma file, not "almost ready". A working product with a link, a password, real data, and a document that lets any engineer on the team spin it up locally without a single extra question.

Full UI redesign

25+ routes, dark/light/system themes, mobile-responsive, a11y. React 19 + TypeScript + Vite, vanilla CSS on tokens from the Figma design system.

Real-data mock API

43 snapshots from the canton2.dev.catalyx.solutions environment. Cloudflare Pages Functions mirror the local serve.py routing.

HMAC login gate

Username/password protection for the preview, HMAC-signed session cookie for 7 days. Secrets never touched the AI chat. They were injected via wrangler CLI directly into Cloudflare.

Collapsible sidebar with shortcut

A small UX polish: keyboard shortcut ⌘\, tooltip, smooth width transition. The kind of thing that usually falls out of scope on a fast loop, but stayed in this time.

HANDOFF document

Describes the stack, deploy flow, secrets, auth architecture, mock-API mapping, and password rotation. Any team engineer can pick it up without asking around.

Live URL on the free tier

Cloudflare Pages, no hosting bill. The preview is accessible to the IntellectEU team behind a password, and it is not indexed by search engines.

Context

The developers said it themselves: if this had been Figma files only, the build would have taken about a month.

22 hours of my work (prep included) versus ~30 days of the traditional Figma → handoff → development path. Without the deadline, the same scope would have unfolded over 3-4 working days at a normal pace. Still much faster than the team's month.

What about Figma?

None of this means Figma isn't needed. The design process in Figma stays irreplaceable. Without the CatalyX Design System and a few reference pages I wouldn't have reached this level of quality. AI speeds up turning a design into code, not the design process itself. The designer doesn't disappear; their work becomes the foundation for fast implementation.

For the team

What's useful to take from this.

This isn't a one-off story. The approach works on any product where there is a solid domain model and a design system. Internal changes that wait for quarters can be closed in days, not "instead of" the engineers, but before they receive a finished Figma file.

Delivery speed

Internal UI changes that wait because of dev-team bandwidth can be closed in days. Especially for preview and user-testing versions that don't need prod-grade quality on day one.

Cost economics

Hosting $0. Backend $0. Infra $0. Everything on Cloudflare's free tier. Budget is only needed when the product passes validation and goes to prod. Internal showcases and user testing need no infra engineer at all.

Repeatability

The same approach works on any Catalyst product where you need to quickly validate a UX hypothesis, redesign, or stand up a preview. Snapshot-based mock APIs are universal, if the API is clean.

Safe by design

Sensitive secrets never appear in the AI chat. The wrangler pages secret put pattern technically prevents the model from seeing the values. Pre-production preview is noindex and behind a password.

AI works as an engineering partner only when it knows what you want to see. Without a visual it improvises. Without honest factors (experience, product, design, preparation, tools) it doesn't save you hours. With them, it closes work that used to take a quarter. The rest is discipline: ask, watch, stop it when it drifts.