Weavy approaches AI-generated images and video differently from most tools. Instead of treating generation as a one-shot text-to-image transaction, it gives you a node-based workspace where you connect models, editing tools, and processing steps into visual workflows. If you’ve ever used a node editor in Blender, Nuke, or even Figma’s component system, the mental model will feel familiar. You build a pipeline where generation feeds into compositing, compositing feeds into color grading, color grading feeds into output, and the whole chain runs as a single workflow that you can reuse, modify, and share.
What Weavy Actually Is
Weavy is a browser-based platform built around a visual node graph. Each node represents either an AI model, an editing operation, or an input/output point. You drag nodes onto a canvas, connect them, configure their parameters, and run the workflow. The output of one node becomes the input of the next.
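The output-feeds-input idea can be sketched in a few lines. This is a hypothetical illustration of the node-graph concept, not Weavy’s actual API: each node wraps an operation, and connecting nodes routes one node’s output into the next node’s input.

```python
# Minimal sketch of a node chain (illustrative, not Weavy's API).
# Each node wraps a callable; edges pass one node's output downstream.

class Node:
    def __init__(self, name, op):
        self.name = name
        self.op = op          # a callable: input -> output
        self.next = None      # downstream node, if any

    def then(self, node):
        """Connect this node's output to `node`; return `node` for chaining."""
        self.next = node
        return node

def run(start, payload):
    """Walk the chain, feeding each node's output into the next node."""
    node = start
    while node is not None:
        payload = node.op(payload)
        node = node.next
    return payload

# Toy "image" pipeline: strings stand in for real image data.
generate = Node("generate", lambda prompt: f"image({prompt})")
upscale  = Node("upscale",  lambda img: f"upscaled({img})")
export   = Node("export",   lambda img: f"png:{img}")

generate.then(upscale).then(export)
print(run(generate, "a red chair"))  # png:upscaled(image(a red chair))
```

Swapping a node (say, a different upscaling model) means replacing one object in the chain; everything downstream is untouched, which is the property the rest of this article leans on.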
The AI model library is broad. You have access to image generation models like Flux, Stable Diffusion 3.5, and Ideogram alongside video models like Runway Gen-4, Sora, and Seedance. Rather than committing to a single model’s strengths and weaknesses, you can route different parts of your workflow through different models. Use one for initial generation, another for inpainting, a third for upscaling.
What separates Weavy from most generative tools is the editing layer. The platform includes professional compositing tools: layer management, blend modes, masking, inpainting, outpainting, typography, color grading, blur, crop, relighting, and depth extraction. These aren’t afterthought filters. They’re the kind of tools that let you take a generated image and bring it to a level of finish that raw model output rarely achieves on its own.
The combination matters. Generation alone gives you raw material. Editing alone requires raw material to exist. Weavy puts both in the same environment, connected through a pipeline you can see and adjust at every stage.
Why Node-Based Workflows Matter for Design
The value of a node-based approach is legibility and repeatability. When your workflow is a visual graph rather than a sequence of disconnected steps across multiple tools, you can see exactly what’s happening at each stage. You can swap out a model without rebuilding everything downstream. You can adjust a parameter in the middle of the chain and watch the effect propagate. You can hand the workflow to a collaborator and they can understand it without a walkthrough.
For studios producing imagery at volume (brand campaigns, editorial series, product photography, social content) this repeatability is the difference between a process and a gamble. You define the pipeline once: source image goes through style transfer, then compositing with brand elements, then color correction to match your palette, then output at multiple resolutions. After that, every new image runs through the same chain with predictable results. The creative decisions are embedded in the workflow structure, not reinvented every time someone opens a prompt field.
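The define-once, run-many pattern described above can be sketched as follows. The stage names mirror the example in the text but are stand-ins, assumed for illustration; in a real workflow each stage would invoke a model or editing tool.

```python
# Hypothetical sketch of "define the pipeline once, run every asset through it".
# Stage functions are stand-ins for real model calls and editing operations.

def style_transfer(img):   return f"styled({img})"
def composite_brand(img):  return f"branded({img})"
def color_correct(img):    return f"graded({img})"

# The creative decisions live here, in the structure, not in per-image prompts.
PIPELINE = [style_transfer, composite_brand, color_correct]

def produce(source_images, resolutions=(1080, 2160)):
    """Run each source through the same fixed chain, then export at each size."""
    outputs = []
    for img in source_images:
        for stage in PIPELINE:
            img = stage(img)
        outputs.extend(f"{img}@{r}px" for r in resolutions)
    return outputs

batch = produce(["hero_shot", "detail_shot"])
# Every output went through the identical chain, so results stay consistent.
```

Because the chain is data (a list of stages), changing the color grade or inserting a new compositing step is a one-line edit that applies to every image produced afterward.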
Weavy also generates simplified user interfaces from your node graphs automatically. This means you can build a complex multi-step workflow, then hand a simplified version to a teammate or client who only needs to change the inputs (swap a product photo, adjust a headline, select a color variant) without touching the underlying pipeline. It’s the same principle as building a Figma component with exposed properties, but applied to AI-powered image production.
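The exposed-properties idea reduces to wrapping a full workflow so that only a few inputs remain visible. A minimal sketch, with all names hypothetical: the full function carries every internal parameter, while the simplified interface locks those in and exposes just the three knobs a client would change.

```python
# Sketch of exposing only a few inputs from a larger workflow (names hypothetical).
# The teammate sees three parameters; the internal chain stays fixed.

def full_workflow(product_photo, headline, color_variant,
                  model="flux", blend_mode="multiply", grade="warm"):
    """The complete pipeline, with every internal parameter available."""
    return (f"{model}:{blend_mode}:{grade}:"
            f"{product_photo}+{headline}+{color_variant}")

def simple_ui(product_photo, headline, color_variant):
    """Simplified interface: model choice, blend mode, and grade are locked in;
    only the swappable inputs are exposed."""
    return full_workflow(product_photo, headline, color_variant)

print(simple_ui("sneaker.png", "Run Far", "navy"))
# flux:multiply:warm:sneaker.png+Run Far+navy
```

This is the same contract as a Figma component with exposed properties: the consumer varies the content, the author owns the structure.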
Where It Fits in a Design Workflow
The most natural use cases fall into three areas.
First, asset production at scale. If you’re generating imagery for a website, pitch deck, or campaign and need dozens of images that feel cohesive, Weavy lets you build the visual logic once and run it across variations. You define the lighting, palette, compositing approach, and model chain in the workflow, then swap subjects or contexts per image. The structural consistency comes from the pipeline, not from hoping the model remembers what you asked for last time.
Second, concept exploration with control. Early-stage creative work benefits from speed, but not from randomness. A node-based workflow lets you set constraints (this model for generation, this style reference for consistency, this color grade for mood) and explore within those boundaries. You’re generating options, but the options are all in the neighborhood you defined.
Third, post-production refinement. Raw AI output often needs work. Edges cleaned up, elements composited, lighting adjusted, text overlaid. Doing this in a separate tool means exporting, importing, and losing the connection between generation and finishing. Weavy keeps both in the same environment, so refinement is part of the workflow rather than a separate phase.
Figma Acquisition and What It Signals
Figma acquired Weavy in late 2025, rebranding it as Figma Weave. The acquisition signals where design tooling is headed: generative AI capabilities integrated directly into the platforms where designers already work, rather than existing as standalone tools that require export/import workflows.
For Weavy users, the acquisition means tighter integration with Figma’s ecosystem is coming. Design files, component libraries, and collaborative features connected to generative workflows. For the broader design community, it suggests that node-based AI pipelines are becoming a standard part of the toolkit, not an experimental sideshow.
The founding team joined Figma from Tel Aviv, where they’d built Weavy on the principle they call “Artistic Intelligence,” the idea that AI models are most useful when they’re embedded in workflows that preserve human creative judgment rather than replacing it. That philosophy aligns with how most working designers think about these tools: useful when controlled, frustrating when autonomous.