AI-Driven Code Generation: AEM Component Development from Figma
Welcome to the new era of code automation.
Code automation has undergone a remarkable transformation, evolving from basic tools into advanced AI agents that significantly improve the developer experience.
Initially, automation tools provided simple functionalities such as auto-completion. These progressed into intelligent assistants capable of understanding and suggesting code within the context of a single file. The next advancement saw AI agents generating code across multiple files, actually understanding how different components of a project work together.
The latest breakthroughs involve AI's direct interpretation of design files. Leveraging Figma's Model Context Protocol (MCP) server, AI agents can now semantically understand design elements and convert them into structured, production-ready code. This creates a direct, automated pipeline from design to development, something that was impossible just months ago.
This capability is particularly transformative for Adobe Experience Manager (AEM) projects, where AEM Components are the foundation. These are the reusable building blocks that encapsulate an experience's core logic, presentation markup, and crucial authoring capabilities. In the context of large-scale Adobe implementations, efficient and precise component development is paramount for delivering modular, scalable, and consistent digital experiences.
Traditionally, translating intricate Figma designs into functional AEM components has involved a meticulous, often manual process. However, AI-driven code generation is now fundamentally transforming this crucial step, creating a direct, intelligent conduit from design to deployable code, as we will explore in the following technical deep dive.
Deep-dive into accelerating AEM component development from Figma
This guide walks developers through using AI agents, such as Cursor AI, to accelerate the creation of Adobe Experience Manager (AEM) components directly from Figma designs. This process leverages deep integration between design tooling, AI intelligence, and your local development environment (e.g., VS Code).
1. Pre-computation in Figma: Crafting AI-Ready Designs
The quality of the AI's output depends directly on the structure of your Figma design. Before initiating the AI process, make sure the design is meticulously organized:
- Component-Driven Design: Ensure your design elements are structured as reusable Figma components and variants. This allows the AI to correctly identify distinct UI elements.
- Leverage Variables and Properties:
- Define properties for different states (e.g., selected, unselected, enabled, disabled), sizes (e.g., small, medium, large), or types (e.g., primary, secondary).
- Use Figma variables for colors, typography, spacing, and other design tokens. The AI can interpret these to generate corresponding CSS variables or utility classes.
- Example: For a radio button, clearly define variants for "Unselected/Enabled," "Selected/Enabled," "Unselected/Disabled," etc.
- Semantic Naming Conventions: Adopt clear and consistent naming conventions for layers and components (e.g., Form/RadioButton, Button/Primary, Field/Input). This helps the AI infer the element's purpose and functionality.
- Auto Layout for Structure: Utilize Figma's Auto Layout feature extensively. This provides structural information to the AI, helping it understand how elements are grouped, spaced, and resized, which translates directly into responsive CSS and HTML structure.
- Code Connect (Optional but Recommended): If available, leverage Figma's Code Connect feature to map design components to existing code components or define initial code snippets. While AI can infer much, explicit connections provide stronger hints.
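To see why the naming conventions above matter, consider how a tool (or AI agent) can mechanically map variant names such as "Selected/Disabled" onto component state flags. The helper below is a hypothetical illustration, not part of any Figma API; the recognized segment names mirror the variant naming suggested earlier.

```java
import java.util.Locale;
import java.util.Map;

// Hypothetical helper: consistent variant names like "Unselected/Enabled"
// can be parsed mechanically into state flags, which is exactly the kind of
// inference an AI agent performs when reading a well-structured design.
public class VariantNameParser {

    public static Map<String, Boolean> parse(String variantName) {
        String[] parts = variantName.toLowerCase(Locale.ROOT).split("/");
        boolean selected = false;
        boolean enabled = true; // assume enabled unless the name says otherwise
        for (String part : parts) {
            switch (part.trim()) {
                case "selected":   selected = true;  break;
                case "unselected": selected = false; break;
                case "enabled":    enabled = true;   break;
                case "disabled":   enabled = false;  break;
                default: /* ignore unrecognized segments */ break;
            }
        }
        return Map.of("selected", selected, "enabled", enabled);
    }

    public static void main(String[] args) {
        System.out.println(parse("Selected/Disabled"));
    }
}
```

An inconsistently named variant ("sel-on", "off_state") would fall through to the default branch, which is precisely where AI inference starts to guess rather than know.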
2. Initiating the AI Generation in Your IDE
Your IDE (e.g., VS Code, as shown in the video) becomes the command center for orchestrating the AI. Ensure your AI agent (like Cursor AI) is properly integrated and has access to your local project directory.
- Select Target Figma Node: In Figma, select the specific component or frame you wish to generate code for.
- Copy Figma Link for Selection: Use Figma's "Copy/Paste as" > "Copy link to selection" (or similar) functionality to obtain a direct link to the selected element. This link contains the unique ID of the Figma node.
- Open Your Project in IDE: Ensure the relevant development project (e.g., your AEM ui.apps and core modules) is open in your IDE. The AI agent needs access to this context.
- Engage the AI Agent:
- Open the AI chat interface or command palette within your IDE.
- Issue a command to generate an AEM component by pasting the Figma link and providing a natural language prompt.
- Prompting Best Practices:
- Be Specific about the Component: Clearly state what the component is (e.g., "Create an AEM component for this Form Radio Button").
- Describe Key Properties/States: Articulate the important variations and functionalities of the component that the AI should implement (e.g., "It has selected and unselected states, enabled and disabled states, and also a hover state for visual feedback").
- Specify Output Goals: State what you want the AI to generate (e.g., "Generate the component's HTML, AEM dialog, a Sling Model to read properties, and necessary CSS/JS clientlibs").
- Reference Project Standards (Implicitly or Explicitly): The AI will automatically read your project's existing files for context. However, you can point it at specific project standards if necessary (e.g., "Ensure it follows our DXN project's standard component structure and naming conventions").
- Iterate and Refine: If the initial output isn't perfect, refine your prompt. Break down complex requests into smaller, more manageable steps (e.g., first generate the structure, then add the dialog, then add styling).
- Example Prompt
I've selected a node in Figma for a Form Radio Button component. It needs to have selected, unselected, enabled, disabled, and hover states.
Please create an AEM component following DXN standard component structure, HTL, and Dialog.
Then add a Sling Model and pass data to HTL.
Create ClientLibrary and add it to the component.
Then apply the design from Figma.
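The copied link matters because it carries the selected node's ID as a query parameter, which is what lets the agent ask Figma for exactly the element you selected. The sketch below shows how that ID could be pulled out of a link; the URL shape is illustrative and the class name is my own.

```java
import java.net.URI;
import java.util.Optional;

// Illustrative sketch: a copied Figma selection link identifies the node via
// a query parameter (e.g. ...?node-id=123-456). A tool can extract that ID
// before requesting the node's design data.
public class FigmaLinkParser {

    public static Optional<String> extractNodeId(String link) {
        String query = URI.create(link).getQuery();
        if (query == null) {
            return Optional.empty();
        }
        for (String param : query.split("&")) {
            String[] kv = param.split("=", 2);
            if (kv.length == 2 && kv[0].equals("node-id")) {
                return Optional.of(kv[1]);
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        String link = "https://www.figma.com/design/abc123/Components?node-id=42-7";
        System.out.println(extractNodeId(link).orElse("none")); // prints 42-7
    }
}
```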
3. AI Processing and Multi-File Generation
Once the prompt is submitted, the AI agent performs a series of complex operations in the background:
- Figma Data Ingestion: The AI queries Figma's MCP server, using the node ID embedded in the provided link (e.g., figma.com/file/.../component?node-id=...), to retrieve the component's design data.
- Local Project Context Mapping: The AI simultaneously performs a deep scan of your open AEM project. It identifies:
- Standard Component Path: Locates the typical folder where AEM components reside (e.g., /apps/<project-name>/components).
- Existing Component Blueprints: Analyzes other components in your project to understand established patterns for folder structure, file naming, and code style.
- Naming Conventions: Infers preferred naming conventions (e.g., camelCase for variables, kebab-case for CSS classes, PascalCase for Java classes).
- Intelligent Scaffolding and Code Generation: Based on the combined understanding, the AI generates the following interconnected files and structures:
- Component Folder: Creates a new folder for your component (e.g., form-radio-button) within your AEM ui.apps project.
- HTL Script (.html): Generates the primary rendering script, including markup for the radio buttons and dynamic attributes based on component properties.
- _cq_dialog/.content.xml: Creates the XML definition for the AEM authoring dialog, allowing authors to configure the radio button options, labels, and default values. This often includes fields for properties like value, text, disabled, selected.
- _cq_editConfig/.content.xml (Optional): Generates configuration for in-place editing if relevant.
- Sling Model (Java): Creates a Java class (e.g., FormRadioButtonModel.java) that adapts to the AEM resource, providing methods to expose dialog properties to the HTL script. This class includes annotations for Sling Models (@Model, @Inject) and logic to retrieve component-specific data.
- Client Library (clientlibs-site or dedicated):
- Creates a clientlib.css (or clientlib.scss) file, translating Figma styles into CSS rules, including rules for various states (:hover, :checked, :disabled).
- Creates a clientlib.js file for any necessary client-side interaction (though often minimal for simple radio buttons).
- Includes a .content.xml for the clientlib to define categories and dependencies.
- JCR Definitions (.content.xml files for component root): Generates the necessary JCR properties for the component itself, defining its title, group, component type (cq:Component), and Sling resource types.
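To make the Sling Model step above concrete, here is a plain-Java sketch of the logic such a generated class typically exposes to HTL. A real Sling Model would be annotated with @Model (adapting from the Sling Resource) and inject dialog properties via annotations like @ValueMapValue; those come from the org.apache.sling.models dependencies and are omitted to keep the sketch self-contained. The cmp-* CSS class names are illustrative.

```java
// Plain-Java sketch of a generated Sling Model's logic. In a real AEM project
// this class would carry @Model/@ValueMapValue annotations and its fields
// would be populated from the authoring dialog rather than a constructor.
public class FormRadioButtonModel {

    private final String label;
    private final String value;
    private final boolean selected;
    private final boolean disabled;

    public FormRadioButtonModel(String label, String value,
                                boolean selected, boolean disabled) {
        this.label = label;
        this.value = value;
        this.selected = selected;
        this.disabled = disabled;
    }

    public String getLabel() { return label; }

    public String getValue() { return value; }

    // HTL would read this (e.g. ${model.cssClasses}) to render each state,
    // matching the :checked/:disabled rules in the generated clientlib CSS.
    public String getCssClasses() {
        StringBuilder classes = new StringBuilder("cmp-radio-button");
        if (selected) classes.append(" cmp-radio-button--selected");
        if (disabled) classes.append(" cmp-radio-button--disabled");
        return classes.toString();
    }

    public static void main(String[] args) {
        FormRadioButtonModel model =
            new FormRadioButtonModel("Option A", "a", true, false);
        System.out.println(model.getCssClasses());
        // prints cmp-radio-button cmp-radio-button--selected
    }
}
```

Keeping state-to-class mapping in the model rather than in HTL is a common AEM pattern: the HTL stays declarative, and the logic is unit-testable.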
4. Review, Refine, and Integrate
The AI's generated code provides a strong foundation, but human review and refinement remain essential.
- Code Review: Examine the generated files for correctness, adherence to any unstated project conventions, and optimization.
- Accept/Reject Changes: Most AI IDE integrations allow you to preview the changes and selectively accept or reject them. This is crucial for maintaining control and integrating the AI's output seamlessly.
- Add Business Logic: While the AI handles scaffolding, complex business logic (e.g., dynamic options fetched from a backend service, complex validation) still needs to be implemented manually by the developer.
- Testing: Thoroughly test the generated component in AEM to ensure it functions as expected, is authorable, and displays correctly across different browsers and devices.
- Version Control: Commit the generated and refined code to your version control system (e.g., Git) as part of your standard development process.
By embracing this AI-accelerated workflow, developers can significantly reduce time spent on repetitive tasks and focus on advanced problem-solving, while delivering AEM components with enhanced speed and consistency.
Let’s make AI work for us
AI-driven code generation from Figma to AEM represents more than just an efficiency gain; it's a fundamental shift in digital experience development.
By enabling AI agents to understand design intent and apply it within existing codebases, the traditionally tedious and error-prone process of component scaffolding becomes largely automated.
This paradigm shift liberates engineering teams from repetitive boilerplate, allowing them to redirect their expertise towards complex business logic, innovative solutions, and the overarching architectural integrity of the platform.
The inherent potential of this approach lies not only in the acceleration of delivery timelines but also in the enforced standardization, consistent quality, and proactive reduction of technical debt that can be embedded directly into the generation process.
For organizations using Adobe Experience Cloud, this means unprecedented agility in responding to market demands, rigorous design fidelity from concept to deployment, and a fundamental rethink of how we approach content componentization. This is not merely a tool; it is a foundational change in the software development lifecycle, unlocking new frontiers in digital experience creation.