Design to Code with the Figma MCP Server

July 3, 2025

Written By Alice Moore

Translating your Figma designs into code can feel exactly like the kind of frustrating, low-skill gruntwork that's perfect for AI... except that most of us have also watched AI butcher hopeful screenshots into unresponsive spaghetti.

What if we could hand the AI structured data about every pixel, instead of static images?

This is how the Figma Model Context Protocol (MCP) servers work. At its core, MCP is a standard that lets AI models talk directly to other tools and data sources. In our case, MCP means AI can tap into Figma's API, moving beyond screenshot guesswork to generations backed by the semantic details of your design.
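
For intuition, here's the kind of structured detail the AI can pull for a single node instead of guessing from pixels. The shape below is purely illustrative, not Figma's actual response format:

{
  "node": "Quick reply button",
  "layout": { "mode": "horizontal", "gap": 8, "padding": 12 },
  "cornerRadius": 20,
  "fill": "{color.surface-container}",
  "text": { "style": "label-large", "content": "Sounds good!" }
}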

Today, we’ll explore what’s achievable with the official Figma Dev Mode MCP server, using Cursor as our MCP client.

How to use the Figma MCP server

Okay, down to business. Feel free to follow along. We're going to:

  1. Grab a design
  2. Enable the Figma MCP server
  3. Get the MCP server running in Cursor (or your client of choice)
  4. Set up a quick target repo
  5. Walk through an example design-to-code flow

If you've already got some Figma designs handy, great! It's more rewarding to see your own designs come to life. Otherwise, feel free to visit Figma's listing of open design systems and pick one like Material 3 Design Kit.

I'll be using this screen from the Material 3 Design Kit for my test:

Screenshot of a design in Figma for a chat application. The mockups show both web/tablet and mobile layouts of a conversation where users discuss making "Homemade Dumplings." The interface includes user messages, an image placeholder, and quick reply buttons.

First, be aware that Figma’s official server only works on a paid plan that includes Dev Mode. If you’d rather use a free community server, I’ve used this one a bunch and had good luck with it.

Once you have a plan that includes Dev Mode, you can head over to Preferences in any design file and check “Enable Dev Mode MCP Server.”

A walkthrough in Figma's preferences menu. The first panel shows the main menu with "Preferences" selected. The second, expanded panel shows a list of preference options, with a blue highlight over the setting "Enable Dev Mode MCP Server."

The server should now be running at http://127.0.0.1:3845/sse.

Now you can hop into an MCP client of your choosing.

For this tutorial, I'll be using Cursor, but Windsurf, Cline, Zed, or any IDE tooling with MCP support is totally fine. (Here’s a breakdown of the differences.) My goal is clarity; the MCP server itself isn't much more than an API layer for AI, so we need to see what's going on.

Cursor has an online directory from which you can install Figma’s MCP server.

The Cursor "MCP Tools" directory. It shows a grid of available integrations including Notion, Figma, Linear, and GitHub, each with a button to "Add to Cursor."

Clicking “Add Figma to Cursor” will open up Cursor with an install dialog.

The "Install MCP Server?" dialog within Cursor's settings. The dialog is pre-filled with the details for Figma, including its name, type, and a local URL, with an "Install" button at the bottom.

After clicking “Install,” if the server is working properly, you should see a green dot and the enabled tools in your Cursor Settings.

Cursor's settings after installation, showing the "Figma Dev Mode MCP" now listed under "MCP Tools." A green dot and the text "4 tools enabled" indicate the connection is active.

If you see a red dot, try toggling the server off and on again, making sure that you have the Figma Desktop app open in the background with the MCP server enabled.
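
If the one-click install doesn't work for you, you can also register the server by hand. In Cursor, that's an entry in ~/.cursor/mcp.json along these lines (a minimal sketch; the exact schema can vary by client and version):

{
  "mcpServers": {
    "Figma Dev Mode MCP": {
      "url": "http://127.0.0.1:3845/sse"
    }
  }
}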

Now, when you prompt Cursor in Agent mode, you'll see the AI make tool calls to the MCP server when you say things like, "Use the Figma MCP to..."

Next, we'll need somewhere to actually put the resulting code. When using this workflow, you're not always going to be starting from scratch; good design-to-code means implementing Figma designs in existing repos.

For our purposes today, I'll just spin up a Next.js starter template with npx create-next-app@latest. Feel free to try this on a branch of one of your existing repos to truly stress-test the workflow.
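
If you're starting fresh too, that's just the following (the project name is arbitrary):

npx create-next-app@latest figma-mcp-demo
cd figma-mcp-demo
npm run dev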

Okay, we should be all set. Select the relevant layer in Figma (note that you can only select one) and then switch over to Cursor. The server can only see what you have selected.

In this case, I’ll prompt Cursor with: “Could you replace my homepage with this Figma component? It should be a chat app.”

In total, the AI wrote 215 lines of React and 350 lines of CSS. The component mostly looks like the design, though it’s not pixel-perfect, it’s missing some of the data from the original, and none of the buttons work. The generation took around 4 minutes with Claude 4.

I can use a few more prompts to add functionality, clean up the code, and then go in there by hand to finesse some of the CSS, which AI never seems to get as clean as I like (too many magic numbers). But it definitely saves me time over setting this all up by hand.

In the demo, I wanted to see how far Cursor + Figma MCP could get with basic instructions, but we could definitely push this further by providing a whole bunch more context.

Think of it this way: When you do this task as a person, what are all the things you have to know to get it right? Break that down, document it in markdown files (with AI's help), and then point the agent there every time you need to do this task.
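
For example, a rules file for this task might look something like this (the path and contents are illustrative; capture whatever your team actually relies on):

<!-- .cursor/rules/design-to-code.md (hypothetical path) -->
- Reuse components from src/components before writing new markup.
- Pull spacing and colors from the tokens in src/styles/tokens.css; never hard-code values.
- Every layout must be responsive; check 360px, 768px, and 1280px.
- Prefer semantic HTML (button, nav, main) over bare divs.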

The most powerful, built-in way to provide this context is with Figma's Code Connect feature, which allows you to directly map your design components to their corresponding components in your codebase.
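
To give a flavor, a React Code Connect mapping is a small file like this (the Figma file URL and prop names here are hypothetical):

// Button.figma.tsx: maps the Figma Button to its code counterpart
import React from "react";
import figma from "@figma/code-connect";
import { Button } from "./src/components/Button";

figma.connect(Button, "https://www.figma.com/design/<file-key>/?node-id=1-23", {
  // Map Figma component properties to code props
  props: {
    label: figma.string("Label"),
    variant: figma.enum("Variant", {
      Primary: "primary",
      Secondary: "secondary",
    }),
  },
  // The snippet Dev Mode (and the MCP server) surfaces for this component
  example: ({ label, variant }) => <Button variant={variant}>{label}</Button>,
});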

But ultimately, since the Figma MCP server is just a connection layer between the Figma API and Cursor's agent, better results depend on learning how to get the most out of Cursor. For that, we have a whole bunch of best practices and setup tips, if you're interested.

Remember, the goal isn’t to replace yourself; it’s just to be able to focus on the more interesting parts of the job.

This is a cool workflow. As a more senior developer, I find it really fun to offload the boring work to configurable AI.

But when you move from experimentation to building production-ready software for a team, I still see a lot of fundamental limitations:

  • Even with Code Connect providing instructions like, “use our Button component,” the AI’s process is still a one-way street. It has no visibility into the final, rendered output of its own code, so it can’t see if a global style from another file is overriding its work or if a component renders incorrectly on the page.
  • Design systems are mapped, but not dynamically enforced. Although Code Connect provides component advice, the AI can still “creatively” generate one-off styles for margins or colors when it encounters something not explicitly in the map. It lacks the live, structural understanding of your codebase required to guarantee consistency.
  • The entire process—from mapping components in Code Connect to configuring the IDE and merging code—is highly technical. This not only excludes designers and PMs from the process, but it also means the developer bears the full burden of setup, maintenance, and visually QA'ing every AI output to catch the inevitable discrepancies.

These limitations create a workflow that, while definitely an improvement over screenshots, still feels like a fragile translation layer between Figma and your codebase. It requires constant developer oversight, struggles with bespoke systems, and doesn't solve for team-wide collaboration.

This is the gap that a more integrated visual development platform is designed to fill. Let’s take a look.

The solution to these limitations is to give the AI eyes to see the rendered output, and to strategically enforce your components and design system.

That’s exactly what Builder.io’s Fusion does. It moves beyond static mapping to create a live, visually aware development environment where the AI edits your existing repository by observing your application.

Instead of working from disconnected instructions, Fusion's AI agent operates on a live, interactive preview of your application. This is made possible by instrumenting your code to create a precise, two-way map between the rendered UI and the source files.

Under the hood, every element in the visual canvas is enriched with attributes that point to its exact origin in the codebase—the specific file and line of code that generated it. For example:

<!-- What the AI "sees" -->
<main data-loc="pages/Account.tsx:37" css={...}>
  <div data-loc="src/components/Card.tsx:56" css={...}>
    <button data-loc="src/components/Button.tsx:12" css={...}>
      Button text
    </button>
  </div>
</main>

The AI doesn't just see a button in the DOM; it sees a button that it knows corresponds to the component in src/components/Button.tsx on line 12. Plus, it can access all rendered CSS, so it knows the button's exact visual state at all times.

This architecture fundamentally changes what’s possible.

The problem: An AI working from instructions alone often regenerates large blocks of code to apply a change, resulting in massive, multi-thousand-line diffs that are difficult and time-consuming to review.

Plus, if it can't see the final rendered output, it can’t debug visual discrepancies without you getting in there with a good, “Nuh uhh, that’s not what I see.”

The Fusion solution: With a live visual map, the AI performs a differential analysis, comparing the design intent to the actual rendered pixels.

This keeps AI-generated diffs small and focused. The agent edits only the exact lines of code necessary because it can pinpoint the source of a visual element and see the direct impact of its changes.

Less automated gaslighting, more time to watch your shows.

Example: Imagine you're fixing a dark mode bug. You tell the agent, "This text is too dark and hard to read." A traditional AI may argue with you for a while about how the text is nearly pure white. The Fusion agent, however, can see the problem:

"It looks like the rendered CSS color for this text is rgb(50, 50, 50). However, the design token applied in the code is --color-text-primary, which should resolve to rgb(230, 230, 230). I can see the style is being overridden by the global CSS file at app/do-not-use. I will fix the selectors’ specificities to ensure the correct color is applied."
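
As a sketch of the kind of cascade conflict the agent is describing (selector names here are hypothetical):

/* Global rule that currently wins: specificity (0,1,2) */
body.dark p { color: rgb(50, 50, 50); }

/* Component rule being overridden: specificity (0,1,0) */
.message-text { color: var(--color-text-primary); }

/* One possible fix: raise the component rule to (0,2,1) so the token wins */
body.dark .message-text { color: var(--color-text-primary); }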

This is only possible because the AI isn't just following instructions; it's observing, diagnosing, and correcting your live, running application.

The problem: Mapping alone treats your design system as a suggestion. Nothing forces the AI to follow the rules, or even to avoid hallucinating.

The Fusion solution: Fusion indexes your entire component library and design system, turning them into hard constraints that the AI must follow.

Example: A Product Manager asks to "add a three-column feature section with icons above the footer."

An AI working only from instructions might generate new divs with inline styles. The Fusion agent, having indexed your codebase, understands your system's structure. It will instead generate code that correctly uses your existing components:

<Stack direction="row" spacing={4}>
  <FeatureCard icon="IconTime" title="Save Time" />
  <FeatureCard icon="IconCode" title="Write Less" />
  <FeatureCard icon="IconTeam" title="Do Crime" />
</Stack>

The AI reuses your components because it's the only path available within the established guardrails, ensuring every change is consistent and maintainable.

The problem: In case you haven't had the distinct pleasure of experiencing a typical design handoff to engineering, let me take you on a brief tour:

Flowchart showing "Elegant Figma design" leading to "Literal hell", which then leads to "Elegant code".

Any tool that can speed this up is a godsend. But if it’s not usable by the entire team, due to the technical complexity of MCP, IDEs, and a whole bunch of other developer acronyms, then devs stay the bottleneck for visual changes. We’re back to the endless loop of “oops, the developer found an edge case and we have to redesign.”

The Fusion solution: Fusion breaks down these silos with a shared visual canvas that’s accessible to everyone; it ships as both a VS Code extension for developers and a web app for everyone else.

This means your entire team can interact with your codebase and make PRs in a visual, intuitive way, without ever touching a line of code.

Example: Your Product Manager gets inspired by a feature on a competitor's site. Instead of writing a ticket, they can just use the Builder Chrome extension and the Fusion web app to quickly have AI make the feature using your existing components and styles.

The PM can iterate as much as they’d like and then create a clean PR for the devs to review, with imports from existing components.

Devs still own the codebase and all its particularities. It’s just now open for more contributions, as if you’ve open-sourced your codebase to your org.

We're really excited about this workflow because it lets designers, product managers, and marketers all get stuff done in spaces usually reserved for devs. As we've been dogfooding internally, we've seen boards of Jira papercut tickets just kinda... vanish.

I’m most excited about all the advanced repo features above, but it’s also nice to know that our design-to-code agent, as a specialist, handles extrapolating from basic instructions really well.

Running the same simple “design-from-scratch” prompt as the Figma MCP demo with Builder results in a more functional, pixel-perfect starting place.

Using Figma’s MCP server to convert your designs to code is a nice upgrade over parsing design screenshots with AI. Its data-rich approach gets you much farther along, much faster than developer effort alone.

Then, if you end up needing to convert designs to code in a more sustainable and accurate way, especially with a team, check out the toolkit we've made at Builder.

Happy design engineering!
