Firefly Graph is a visual system for building, executing, and sharing AI-powered creative workflows with precision and repeatability. Unlike generation tools that rely on a single prompt, Firefly Graph lets creative professionals build step-by-step workflows (called graphs) by connecting individual actions (called nodes) on a flexible canvas. Each creative decision is recorded, repeatable, and editable at any point in the process.
With a library of 250+ nodes spanning top AI models from Adobe and leading partners alongside professional editing tools from across Creative Cloud, Firefly Graph is the platform for creative professionals who need consistency, control, and scale without sacrificing quality.
Learn more about Firefly Graph through the Firefly Graph microsite.
How Firefly Graph works
A graph is a series of creative decisions laid out visually as individual actions on a canvas. Each action is a node. Nodes are connected together to form a workflow: a complete, end-to-end sequence of creative steps that can be run, refined, saved, and shared.
The typical workflow follows this pattern:
- Open a blank canvas and add your first node: Start from an empty graph. Browse the node library to find the action you want, whether it's uploading a source image, generating one with Firefly, or running a subject-selection model.
- Connect nodes to build your workflow: Draw connections between node outputs and inputs to chain steps together. For example: Input Image → Segmentation → Apply Mask → Hue Shift → Composite Layers → Output. Each node's output feeds directly into the next step.
- Add input controls for the variables you want to adjust: Replace static values with interactive controls (sliders, angle dials, and color pickers) so that any user running the workflow can adjust key parameters without touching the underlying graph.
- Run the graph and preview outputs at each node: Nodes that use AI models or external API services require generative credits and display a Run button. Non-AI steps execute immediately. You can preview the output at any node before the full workflow completes.
- Change a decision, re-run, and iterate: Because every step is captured in the graph, changing one early decision—such as swapping a source image, picking a different mask, or adjusting a color—automatically re-executes all downstream steps. The entire workflow updates instantly.
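Conceptually, the re-run behavior in the last step is dependency propagation through a dataflow graph: editing one node marks everything downstream as stale, and the next run recomputes only what changed. The sketch below is a hypothetical, simplified Python model of that idea; it is not Firefly Graph's actual implementation, and all names are illustrative.

```python
# Hypothetical, minimal dataflow-graph model illustrating how changing one
# upstream node re-executes all downstream steps. Not Firefly Graph's actual
# API; names and structure are illustrative only.

class Node:
    def __init__(self, name, fn, inputs=()):
        self.name = name
        self.fn = fn                # the creative action this node performs
        self.inputs = list(inputs)  # upstream nodes whose outputs feed this one
        self.cached = None          # last computed output
        self.dirty = True           # needs (re-)execution

    def invalidate(self, all_nodes):
        """Mark this node and every downstream node as needing a re-run."""
        self.dirty = True
        for node in all_nodes:
            if self in node.inputs and not node.dirty:
                node.invalidate(all_nodes)

    def run(self):
        """Execute this node, recursively re-running any stale inputs first."""
        if self.dirty:
            args = [inp.run() for inp in self.inputs]
            self.cached = self.fn(*args)
            self.dirty = False
        return self.cached


# Example workflow: Input Image -> Hue Shift -> Output
source = Node("Input Image", lambda: {"pixels": "sunset.jpg", "hue": 0})
shift = Node("Hue Shift", lambda img: {**img, "hue": img["hue"] + 30}, [source])
output = Node("Output", lambda img: img, [shift])
graph = [source, shift, output]

print(output.run()["hue"])  # first run executes every node -> 30

# Swap the source image: only the source node is edited, but invalidation
# automatically re-runs the downstream Hue Shift and Output steps.
source.fn = lambda: {"pixels": "forest.jpg", "hue": 0}
source.invalidate(graph)
print(output.run()["pixels"])  # -> forest.jpg
```

The caching means an unchanged upstream branch is never recomputed, which is why previews at intermediate nodes stay cheap while you iterate.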
Key concepts
Nodes
A node is a single creative action in a workflow. Nodes can represent AI generation steps (such as Firefly Text to Image, Gemini LLM prompt generation, upscaling, or subject selection), professional editing operations from Creative Cloud (such as masking, color grading, hue shifting, Gaussian blur, or compositing), or utility actions (such as selecting an item from a list, resizing, or blending layers).
The Firefly Graph node library includes 250+ nodes spanning AI models from Adobe and partners, Creative Cloud editing tools, math and compositing utilities, and shader nodes, with more extensive video and 3D support on the roadmap.
Connections and lists
Nodes are linked by drawing connections from an output port to an input port. When a node produces multiple outputs (for example, a subject-selection model that finds two subjects in an image), those outputs are passed as a list. Lists are represented by a double-line connection. Use a Get Item from List node (or an input slider) to select which item from the list continues through the rest of the workflow.
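As a rough sketch of the list behavior described above (hypothetical stub functions, not Firefly Graph's API): a multi-output node hands its results downstream as a list, and a Get Item from List step picks which item continues through the workflow.

```python
# Hypothetical sketch of list-valued connections. The subject-selection
# stub stands in for a real model; names are illustrative only.

def subject_selection(image):
    # A subject-selection model may find several subjects; the node's
    # output is therefore a list (drawn on the canvas as a double line).
    return [
        {"subject": "person", "mask": "mask_0"},
        {"subject": "dog", "mask": "mask_1"},
    ]

def get_item_from_list(items, index):
    # Equivalent of the Get Item from List node; an input slider on the
    # canvas would drive `index` interactively.
    return items[index]

subjects = subject_selection("photo.jpg")
chosen = get_item_from_list(subjects, index=1)
print(chosen["subject"])  # -> dog; this item flows into the rest of the graph
```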
Input controls
Many nodes expose numerical or text inputs. Rather than hard-coding a value, you can connect an input control (a slider, angle dial, text field, or color picker) to make that parameter interactive. Input controls appear on the canvas as standalone nodes that feed their value into the target node. This is the foundation for building reusable, adjustable workflows.
Subgraphs and capsules
As workflows grow in complexity, you can select a group of nodes and collapse them into a subgraph. The subgraph exposes only the inputs and outputs you designate, hiding internal steps. When a workflow and its exposed controls are packaged for sharing, it is referred to as a capsule – a self-contained creative tool that any team member can run without understanding how it was built.
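The hiding that a subgraph performs can be pictured as encapsulation: internal steps disappear behind a single callable that exposes only the designated inputs and outputs. The following is a hypothetical illustration of that idea, not Firefly Graph's implementation; the recolor workflow and its helper functions are invented for the example.

```python
# Hypothetical illustration of a subgraph/capsule: internal nodes are
# hidden behind one callable exposing only the designated controls.

def make_recolor_capsule():
    # Internal steps of the workflow, hidden from the capsule's user.
    def segment(image):
        return f"mask({image})"

    def apply_mask(image, mask):
        return f"masked({image},{mask})"

    def hue_shift(layer, degrees):
        return f"hue_shift({layer},{degrees})"

    # The capsule exposes only two controls: the source image and the
    # hue angle. Everything else stays inside the subgraph.
    def capsule(image, hue_degrees):
        mask = segment(image)
        layer = apply_mask(image, mask)
        return hue_shift(layer, hue_degrees)

    return capsule

recolor = make_recolor_capsule()
print(recolor("product.png", 45))
```

A teammate running the capsule only ever sees the two exposed parameters, mirroring how a shared capsule can be executed without understanding how it was built.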
Key capabilities
250+ node library: Access AI models from Adobe Firefly and leading third-party providers alongside professional editing actions from Photoshop, Illustrator, Premiere, and more, all in one place.
Repeatable, precise workflows: Every creative decision is captured as a node. Change any step (swap an image, shift a color, or adjust a mask) and the entire workflow re-executes automatically with full fidelity.
Custom input controls: Attach sliders, dials, and pickers to any parameter to build interactive control panels for your workflows, without exposing the underlying graph complexity.
Subgraphs: Collapse complex node groups into reusable subgraphs and share them as building blocks, so creative teams can run sophisticated workflows in future graphs without rebuilding them from scratch.
AI model chaining: Connect LLM prompt-generation nodes directly to image or video generation nodes. Build fully automated pipelines that go from a concept to a finished visual in a single run.
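At its core, the model chaining described above is function composition: one node's output becomes the next node's input. The stubs below are placeholders invented for illustration, not real Adobe or partner model calls.

```python
# Hypothetical stubs illustrating AI model chaining: an LLM node expands
# a concept into a prompt, and that output feeds directly into an
# image-generation node. Neither function is a real model API.

def llm_expand_prompt(concept):
    return f"A detailed, photorealistic rendering of {concept}, golden-hour lighting"

def generate_image(prompt):
    return {"asset": "image.png", "prompt": prompt}

# Concept in, finished visual out, in a single run.
result = generate_image(llm_expand_prompt("a mountain cabin"))
print(result["prompt"])
```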
Who Firefly Graph is for
Firefly Graph is designed for creative professionals, creative technologists, and creative production leads who shape how content gets made at scale. It is best suited for those who need more control over the creative process than prompt-based tools alone can provide.
- Creative practitioners and technologists who want to author precise, repeatable workflows and share them as reusable tools for their wider teams.
- Studio heads and creative production leads who need to turn best-in-class creative workflows into organizational assets that every team member can execute consistently.
- Creative decision-makers (CMOs, IT leaders) who want full visibility into how generative AI is being used, with governance, reporting, and control across the organization.
Firefly Graph is purpose-built for upstream creative work: the design, authoring, and templating of workflows. For high-volume batch execution of published workflows, see Firefly Creative Production for Enterprise.
Common use cases
- Subject isolation and compositing: Automatically select subjects, apply masks, and composite them onto custom backgrounds with full color control, all in a single repeatable graph.
- AI-generated image effects: Chain Firefly generation with hue shifting, Gaussian blur, and color sampling to produce stylized editorial images at scale.
- LLM-to-image pipelines: Use a Gemini or other LLM node to generate an optimized prompt, then pipe it directly into a Firefly generation node for fully automated creative output.
- Reusable campaign templates: Build a complete creative workflow once, package it as a capsule with exposed controls, and share it so every designer on the team can produce on-brand assets independently.
- Dynamic color exploration: Sample colors from source images, generate analogous or complementary palettes, and apply them back to backgrounds and overlays, all wired together in a single graph.
- Multi-asset production: Connect generation, retouching, compositing, and export nodes into an end-to-end pipeline for producing multiple asset variations from one workflow run.