dagraph lets you define AI workflows as plain YAML files, check them into git, and run them from the command line. Instead of wiring agents together in code or clicking through a UI, you describe a directed acyclic graph (DAG) of nodes — each node is an LLM call, a code execution step, or a human approval gate — and dagraph handles the scheduling, parallel execution, checkpointing, and artifact storage for you.

How it works

A dagraph workflow is a YAML file with a `nodes` list. Each node declares what it does (`type`), which model to use (`model`), what prompt to send (`prompt`), and which other nodes it depends on (`depends_on`). dagraph reads the dependency graph, groups independent nodes into parallel waves, and fires each wave concurrently. When a wave completes, the next wave starts automatically; no orchestration code required. Here is a stripped-down version of the built-in research example:
```yaml
name: research
budget:
  max_tokens: 50000
  max_usd: 2.00

nodes:
  - id: research_a
    type: agent
    model: claude-haiku-4-5-20251001
    prompt: |
      Research "{{ topic }}" from a TECHNICAL perspective.
      Return 5–8 bullet points.

  - id: research_b
    type: agent
    model: claude-haiku-4-5-20251001
    prompt: |
      Research "{{ topic }}" from an ECONOMIC perspective.
      Return 5–8 bullet points.

  - id: synthesizer
    type: agent
    model: claude-sonnet-4-6
    depends_on: [research_a, research_b]
    prompt: |
      Synthesize these two research angles on "{{ topic }}":
      == Technical == {{ research_a }}
      == Economic == {{ research_b }}
```

`research_a` and `research_b` have no dependencies, so dagraph fires them in the same wave. `synthesizer` depends on both, so it waits until both complete, then receives their outputs as template variables.

Key capabilities

Parallel execution

Nodes with no mutual dependencies run simultaneously. dagraph computes the wave schedule automatically from your `depends_on` declarations.

7 LLM backends

Claude Code (default), Anthropic API, OpenAI, Gemini, AWS Bedrock, Ollama, and Codex. Mix providers per node using model-prefix routing (`openai/gpt-4o`, `ollama/llama3.2`).
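
For example, a two-node workflow that mixes providers might look like the sketch below. The node ids and prompts are illustrative; only the prefix routing form comes from this page:

```yaml
nodes:
  - id: draft
    type: agent
    model: openai/gpt-4o        # prefix routes this node to the OpenAI backend
    prompt: |
      Draft a one-paragraph summary of "{{ topic }}".

  - id: local_review
    type: agent
    model: ollama/llama3.2      # prefix routes this node to a local Ollama model
    depends_on: [draft]
    prompt: |
      Check this draft for unsupported claims: {{ draft }}
```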

HITL gates

Insert `approval_gate` or `user_input` nodes to pause a run and wait for a human decision. Resume with `dagraph approve` or `dagraph respond`.
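
A minimal sketch of a gated pipeline, assuming `approval_gate` needs no fields beyond those shown elsewhere on this page (node ids are illustrative):

```yaml
nodes:
  - id: draft
    type: agent
    model: claude-haiku-4-5-20251001
    prompt: |
      Draft release notes for "{{ topic }}".

  - id: review_gate
    type: approval_gate          # run pauses here until a human decides
    depends_on: [draft]

  - id: publish
    type: agent
    model: claude-sonnet-4-6
    depends_on: [review_gate]
    prompt: |
      Format the approved notes for publication: {{ draft }}
```

When the run reaches `review_gate` it pauses; `dagraph approve` resumes it, and `publish` fires in the next wave.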

Deterministic replay

Every run writes a fixture file. Replay any run without making LLM calls — useful for iterating on prompts and debugging logic without spending tokens.
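
The replay subcommands and arguments are not shown on this page, so treat the following as a hypothetical sketch of the flow, not a confirmed CLI surface:

```bash
# First run executes for real; per this page, it records a fixture file.
# The "run" subcommand name is an assumption.
dagraph run research.yaml

# Hypothetical replay invocation (subcommand and argument are assumptions):
# re-executes the run from the fixture, making no LLM calls.
dagraph replay <run-id>
```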

Content-addressed artifacts

Node outputs are stored by SHA-256 hash. Downstream nodes reference outputs by ID, not by copying text, keeping LLM context bounded.

Workflow-as-code

YAML files are the source of truth. Version them in git, diff them in PRs, deploy them through CI — no proprietary platform lock-in.

Node types

dagraph supports 10 node types that cover the most common patterns in AI workflows:
| Type | What it does |
| --- | --- |
| `agent` | Single LLM call. Renders a Jinja prompt, calls a model, stores output. |
| `evaluator_loop` | Generator + evaluator pair that iterates until the evaluator approves. |
| `loop` | Iterative agent loop with a Jinja termination condition. |
| `approval_gate` | Pauses the run and waits for a human approve or reject. |
| `user_input` | Pauses and waits for free-text, a selection, or a confirm response. |
| `planner` | Emits new nodes at runtime based on intermediate results. |
| `map` | Fan-out: runs one agent call per item in a list, concurrently. |
| `subgraph` | Runs another DAG file as a single composable node. |
| `bash` | Runs a shell command in a sandbox; stdout becomes the artifact. |
| `python_exec` | Runs Python code in a sandbox; same output contract as `bash`. |
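
As an illustration of the fan-out type, here is a hedged sketch of a `map` node. The `items` field and the `{{ item }}` variable are assumed names; this page only states that `map` runs one agent call per list item, concurrently:

```yaml
nodes:
  - id: summarize_each
    type: map
    model: claude-haiku-4-5-20251001
    items: "{{ source_list }}"   # ASSUMED field name for the input list
    prompt: |
      Summarize this item in two sentences: {{ item }}
```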

What dagraph is not

dagraph does not manage your prompts in a database, provide a visual drag-and-drop editor, or require a hosted platform to run. It is a CLI tool and a Python library. Your workflows live in files you own.

Get started

Install dagraph and run your first workflow in under 5 minutes.
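
This page does not show the install command or how values reach template variables like `{{ topic }}`, so the quickstart below is a sketch built on assumptions (both the pip package name and the `--var` flag):

```bash
# Hypothetical quickstart: the package name and the --var flag are
# assumptions, not confirmed by this page.
pip install dagraph
dagraph run research.yaml --var topic="solid-state batteries"
```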

Installation

Full install options and backend configuration for all 7 providers.