This experiment started not with a well-defined plan, but with a spark of curiosity. I had a big-picture problem I wanted to solve, and I was eager to see how AI tools could accelerate product design workflows. But I hadn’t yet broken the problem into concrete requirements. Like many newcomers, I dove in headfirst, driven more by excitement than structure.
I started with Lovable, a promising tool that lets you build UIs from prompts. However, I quickly hit a few walls:
Limited prompt attempts on the free plan
Vague prompts leading to unclear outputs
A realization that I needed to define my ideas better before expecting AI to make sense of them
That was my first lesson: AI is only as good as the clarity you bring to it.
After some research, I refined my concept by outlining core features and user activities. With a clearer picture, I turned to v0, a tool that immediately felt more aligned with my process.
The prompts generated solid, basic layouts
I appreciated the use of shadcn UI, Tailwind, and Framer Motion
The downside was that the free version didn’t support Figma import
Still, it was a step forward. I could see my ideas take shape with minimal friction, even if they were just rough scaffolds.
Curious about Figma integration, I tried Bolt next. Its ability to parse Figma files gave me a better sense of how AI could understand visual input. The generated results were promising, but the tool struggled when I requested components from popular libraries. Errors began stacking up, and I quickly burned through my prompt quota without meaningful progress.
Then came Cursor, a more developer-oriented tool. Installing it required setting up Node.js, and getting started was rough. As a designer, I found its code-heavy environment intimidating. Unlike visual builders, Cursor showed no previews unless I ran the project locally, making it hard to iterate quickly.
The interface reminded me of Dreamweaver, which I used back in 2016, but this was on another level. I got stuck often:
I couldn’t remember component names
Prompts felt blind without real-time visuals
I only later discovered I could use MCP to link my Figma files, which was a major unlock
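For anyone curious, that MCP link is really just a small config file that tells Cursor which server to run. Below is a rough sketch of what it might look like, assuming Cursor's mcp.json format and the community figma-developer-mcp server; the package name, flags, and token handling here are assumptions on my part, so check the current docs before copying it.

```json
{
  "mcpServers": {
    "figma": {
      "command": "npx",
      "args": [
        "-y",
        "figma-developer-mcp",
        "--figma-api-key=YOUR_FIGMA_TOKEN",
        "--stdio"
      ]
    }
  }
}
```

In plain terms: Cursor reads this file (typically .cursor/mcp.json), launches the Figma server with your personal access token, and can then pull frame and component details from your Figma files when you prompt it.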
By then, my free usage had run out, and I hadn’t yet seen enough value to justify a paid plan.
After trying different tools, I found myself returning to v0. Its workflow clicked with how I think and work. One standout feature was the ability to select a component and prompt for changes directly, which significantly improved iteration speed.
It didn’t try to do too much. It just helped me move forward.
This experiment taught me a lot, not just about tools, but about how I work and learn.
Clarity matters. The more I defined my problem and expected outputs, the better AI tools responded.
Process fit is more important than feature set. The tool that felt right helped me make the most progress, even if it had fewer bells and whistles.
Adaptability is key. I went from visual drag-and-drop to developer environments and back, learning what worked and what didn’t for me.
This journey is ongoing. I haven’t built the full solution yet, and that’s okay. What I’ve gained so far is momentum, insight, and confidence that I can build with AI, even without a technical background.
Next, I’m planning to:
Explore more focused flows using v0
Dig deeper into Figma-AI integrations
Test early concepts with potential users
Too often, we only see the polished outcomes. But building in public, especially in emerging spaces like AI, is how we learn faster and help others do the same.
If you’re a designer curious about AI, my biggest advice is this: don’t wait to “get it right.” Just start, learn out loud, and adapt as you go.