
Figma & AI: Testing the Viability of AI Plugins

Figma’s community of design plugins offers an abundance of tools that empower our design processes. There are over 3,000 plugins available to download and 200 widgets to add, all helping us create high-quality designs for our users.

As design and developer tools converge, we seek new ways to approach our work. The growing popularity of Artificial Intelligence (AI) gives us the opportunity to test how well these new tools can translate our designs into code for specific web components. We devised an experiment to test the viability of such tools, hypothesizing that AI plugins available through the Figma community could deliver higher-quality code in less time than traditional coding.

Builder.io was one plugin we turned to after reading reviews of its use by other designers. The software boasts an impressive ability to take a Figma file and translate the designs to code. In theory, this would not only significantly reduce the time it takes to deliver frontend code, but would also shorten the design-to-dev handoff. With 9,300 “likes” in the community and over 720,000 users, we believed this was a good place to start.

Experiment Setup

In the first half of our experiment, we wanted to test how well AI-generated code performed, so it was important to create a test environment that was neither overly complicated nor too simple. We chose a sample authenticated login screen user interface (UI): it’s a common design request, and it includes basic UI controls that would weed out AI tools unable to handle even simple design files.

Our test authenticated login screen included the following design components (a minimal code sketch follows the list):

  • Layout (responsive)
  • Cards
  • Fonts (h1-body)
  • Buttons (all states)
  • Links (all states)
  • Input (all states)
  • Validation error
  • Checkbox
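To make that scope concrete, here is a minimal sketch of the markup such a screen implies, written as a React/TypeScript component. The component and class names are our own illustrative choices, not Builder.io’s output, which varies by framework and export settings.

```tsx
// Minimal login screen sketch covering the components listed above.
// Names and structure are illustrative, not Builder.io output.
import React, { useState } from "react";

export function LoginScreen() {
  const [email, setEmail] = useState("");
  const [password, setPassword] = useState("");
  const [error, setError] = useState<string | null>(null);

  const handleSubmit = (e: React.FormEvent<HTMLFormElement>) => {
    e.preventDefault();
    if (!email || !password) {
      setError("Email and password are required."); // validation error state
      return;
    }
    setError(null);
    // An auth request would go here.
  };

  return (
    <main className="login-layout">{/* responsive layout */}
      <section className="card">{/* card */}
        <h1>Sign in</h1>{/* fonts: h1 through body */}
        <form onSubmit={handleSubmit} noValidate>
          <label>
            Email
            <input
              type="email"
              value={email}
              onChange={(e) => setEmail(e.target.value)}
            />
          </label>
          <label>
            Password
            <input
              type="password"
              value={password}
              onChange={(e) => setPassword(e.target.value)}
            />
          </label>
          {error && <p role="alert">{error}</p>}
          <label>
            <input type="checkbox" /> Remember me{/* checkbox */}
          </label>
          <button type="submit">Sign in</button>
          <a href="/forgot-password">Forgot password?</a>
        </form>
      </section>
    </main>
  );
}
```

The button, link, and input states (hover, focus, error, disabled) would live in the accompanying CSS rather than in this markup.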

The second half of our experiment required comparing the quality of code produced by a more traditional coding method versus that of the AI-generated code. Two of our developers stepped in to join the experiment: one to follow the standard procedure of writing traditional code “from scratch,” and the other to take the AI-generated code (produced with AI prompts) and make whatever changes were necessary before the code was ready for release to production.

We evaluated not only whether the code produced by Builder.io would result in a functioning page, but also several other key metrics: the page’s accessibility score, its Lighthouse audit rating (an audit that helps improve the overall quality of a webpage), time to produce usable code, responsiveness, code quality (peer-reviewed), design fidelity, code agnosticism, and UX support time.
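For reference, Lighthouse audits can be run from Chrome DevTools or scripted. Below is a minimal sketch using the lighthouse and chrome-launcher npm packages; the localhost URL is a placeholder for wherever the test page is served.

```ts
// Minimal scripted Lighthouse audit (Node ESM; assumes the `lighthouse`
// and `chrome-launcher` npm packages are installed).
import lighthouse from "lighthouse";
import * as chromeLauncher from "chrome-launcher";

const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });

// Placeholder URL for the deployed test page.
const result = await lighthouse("http://localhost:3000/login", {
  port: chrome.port,
  onlyCategories: ["performance", "accessibility"],
  output: "json",
});

if (result) {
  const { performance, accessibility } = result.lhr.categories;
  console.log("Performance:", (performance.score ?? 0) * 100);
  console.log("Accessibility:", (accessibility.score ?? 0) * 100);
}

await chrome.kill();
```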

The Results

The experiment yielded some interesting results.

From the data, we can see that Builder.io’s total time to usable code was significantly longer than that of traditional coding. Measured against the steps of creating code from start to finish, Builder.io’s AI-generated code got us through 80% of those steps in only four hours. However, debugging the code output to ensure that deployment would create a functional experience took long enough to erase that head start.

When reviewing the responsiveness of the code, meaning how well the layout automatically adapts to different screen sizes, we found that Builder.io’s code was responsive right out of the gate. The developer who worked with the AI-generated code did not have to tell the AI to be responsive or to add breakpoint classes, which are essential to creating layouts that adjust to the screen size.

This is interesting to note because the developer who wrote code from scratch in the traditional process had to incorporate those breakpoint classes manually. A required manual step in the traditional coding process is an automated, standard practice for Builder.io; using the AI-generated code mitigates the risk of forgetting to add those breakpoint classes and ensures the responsiveness of the code. The sketch below shows what such classes can look like.
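To illustrate, breakpoint classes in a utility-first framework such as Tailwind CSS look like the following. This is one common convention, shown here for explanation; it is not necessarily the style Builder.io emits.

```tsx
// Illustrative Tailwind-style breakpoint classes: the layout stacks on
// small screens and switches to two columns at the `md` breakpoint.
import type { ReactNode } from "react";

export function LoginLayout({ children }: { children: ReactNode }) {
  return (
    <main className="flex min-h-screen flex-col md:flex-row">
      <section className="w-full p-4 md:w-1/2 lg:p-8">{children}</section>
      {/* Decorative panel only appears at medium widths and above. */}
      <aside className="hidden md:block md:w-1/2 bg-slate-100" />
    </main>
  );
}
```

Forgetting the `md:` variants here would leave the layout fixed in its mobile arrangement at every screen size, which is exactly the omission the AI-generated output avoided.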

Our Recommendation

Our results show that while Builder.io’s AI-generated code performed on par with traditional code development, and even outperformed it in some respects, we don’t believe it is a viable substitute for traditional coding at this point in time.

Builder.io is great out of the box if you’re using it within its content management system (CMS) and for landing and marketing pages. If you’re using its Figma to Code feature in your own development environment as a base for more complex products, we found it takes more time in the long run.

Nonetheless, we were thrilled to discover that Builder.io got us 80% of the way through the process extremely fast. Unfortunately, the final 20% took longer than standard coding to deliver our desired level of quality. For that reason, we don’t find it to be a viable solution for now.

Our experiment highlights the future potential of AI-generated code. We’re looking ahead and finding the next AI tool to experiment with. The only way we are going to definitively know whether AI-generated code is worth it is by trying, testing, and experimenting. With over 3,000 Figma plugins available, the possibilities are endless.




Brian Loughner is a Lead UX Designer at ITX. He works to connect with clients, understand their problems, and find solutions that meet their needs. Brian co-organizes the Upstate UX Meetup, which facilitates conversation on various UX topics for professionals and students.
