This guide walks you through installing dagraph, writing a minimal two-node workflow, and running it end-to-end. By the end you will have a working agent pipeline that runs two nodes in parallel and synthesizes their output — the same pattern used by the built-in research example.
1. Install dagraph

Install the dagraph package from PyPI. Python 3.12 or newer is required.
pip install dagraph
Verify the install:
agentgraph --version
The default backend is claude_code, which runs the claude CLI as a subprocess and bills against your Claude Code plan. You do not need an API key to use it. Make sure the claude CLI is installed and on your PATH before running workflows. See Installation for other backend options.
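The subprocess model behind a backend like claude_code can be illustrated with a minimal sketch. This is not dagraph's actual implementation — the function name run_subprocess_backend is made up for this example, and `cat` stands in for the claude CLI so the snippet runs anywhere:

```python
import subprocess

def run_subprocess_backend(executable: str, prompt: str, timeout: float = 120.0) -> str:
    """Invoke a CLI tool as a subprocess, passing the prompt on stdin
    and returning its stdout — the general shape of a backend that
    shells out to a local CLI instead of calling an HTTP API."""
    result = subprocess.run(
        [executable],
        input=prompt,
        capture_output=True,
        text=True,
        timeout=timeout,
        check=True,  # raise CalledProcessError if the CLI exits non-zero
    )
    return result.stdout.strip()

# Demo with `cat`, which echoes stdin back, standing in for `claude`.
print(run_subprocess_backend("cat", "Hello from dagraph"))  # → Hello from dagraph
```

Because the backend is just a subprocess, anything that breaks the CLI (missing PATH entry, expired login) surfaces as a non-zero exit rather than an HTTP error.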
2. Write a workflow file

Create a file called my-workflow.yaml with the following content. This defines two research nodes that run in parallel, followed by a synthesizer node that waits for both.
name: my-workflow
description: Two-angle research with a synthesizing summary.
budget:
  max_tokens: 20000
  max_usd: 1.00

nodes:
  - id: angle_a
    type: agent
    model: claude-haiku-4-5-20251001
    max_output_tokens: 800
    prompt: |
      Research "{{ topic }}" from a TECHNICAL perspective.
      Return 5 concise bullet points.

  - id: angle_b
    type: agent
    model: claude-haiku-4-5-20251001
    max_output_tokens: 800
    prompt: |
      Research "{{ topic }}" from an ECONOMIC perspective.
      Return 5 concise bullet points.

  - id: summary
    type: agent
    model: claude-sonnet-4-6
    max_output_tokens: 1200
    depends_on: [angle_a, angle_b]
    prompt: |
      Summarize the following two research angles on "{{ topic }}"
      into a 3-point takeaway.

      == Technical ==
      {{ angle_a }}

      == Economic ==
      {{ angle_b }}
angle_a and angle_b have no depends_on, so dagraph fires them simultaneously. summary lists both as dependencies, so it starts only after both complete and receives their outputs as template variables.
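The wave ordering this implies is a standard topological layering of the dependency graph. Here is an illustrative sketch of how such waves can be computed — compute_waves is a name invented for this example, not part of dagraph's API:

```python
def compute_waves(nodes: dict[str, list[str]]) -> list[list[str]]:
    """Group nodes into waves: each wave holds every node whose
    dependencies were all completed in earlier waves (Kahn-style layering)."""
    done: set[str] = set()
    waves: list[list[str]] = []
    remaining = dict(nodes)
    while remaining:
        ready = sorted(n for n, deps in remaining.items() if set(deps) <= done)
        if not ready:
            raise ValueError("dependency cycle detected")
        waves.append(ready)
        done.update(ready)
        for n in ready:
            del remaining[n]
    return waves

# The quickstart workflow: two independent angles, then a synthesizer.
graph = {"angle_a": [], "angle_b": [], "summary": ["angle_a", "angle_b"]}
print(compute_waves(graph))  # → [['angle_a', 'angle_b'], ['summary']]
```

Nodes with an empty dependency list always land in wave 1, which is why angle_a and angle_b run first and in parallel.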
3. Validate the workflow

Before running, check that the YAML parses correctly and preview the execution plan:
agentgraph validate my-workflow.yaml
Expected output:
✓ my-workflow: 3 nodes, 2 wave(s)
  wave 1: ['angle_a', 'angle_b']
  wave 2: ['summary']
This confirms dagraph identified the two independent nodes and will run them in parallel in the first wave.
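One check any validator must perform is that every depends_on entry names a node that actually exists. A minimal sketch of that check, assuming the nodes have already been parsed into dicts (validate_deps is an illustrative name, not dagraph's):

```python
def validate_deps(nodes: list[dict]) -> list[str]:
    """Return validation errors for depends_on entries that do not
    reference a defined node id."""
    ids = {n["id"] for n in nodes}
    errors = []
    for n in nodes:
        for dep in n.get("depends_on", []):
            if dep not in ids:
                errors.append(f"{n['id']}: unknown dependency '{dep}'")
    return errors

workflow = [
    {"id": "angle_a"},
    {"id": "angle_b"},
    {"id": "summary", "depends_on": ["angle_a", "angle_b"]},
]
print(validate_deps(workflow))  # → [] (no errors)
```

A typo like depends_on: [angle_c] would produce an error here instead of failing mid-run after tokens have already been spent.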
4. Run the workflow

Pass an input value for {{ topic }} using the --input flag:
agentgraph run my-workflow.yaml --input topic="quantum computing"
dagraph will fire angle_a and angle_b in parallel, wait for both to finish, then run summary with their outputs injected into the prompt template.
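The {{ ... }} substitution works the same way for --input values and upstream node outputs. A minimal sketch of that injection step, assuming simple name-only placeholders (render is an illustrative name):

```python
import re

def render(template: str, variables: dict[str, str]) -> str:
    """Substitute {{ name }} placeholders with values, the way --input
    values and upstream node outputs are injected into prompt templates."""
    def replace(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"undefined template variable: {name}")
        return variables[name]
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", replace, template)

prompt = 'Summarize research on "{{ topic }}":\n{{ angle_a }}'
print(render(prompt, {"topic": "quantum computing", "angle_a": "- qubits ..."}))
```

Note that the summary node's prompt uses {{ angle_a }} and {{ angle_b }} exactly like {{ topic }} — the only difference is where the value comes from.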
To use a different backend, add --backend api (requires ANTHROPIC_API_KEY) or --backend openai (requires OPENAI_API_KEY). See Installation for the full list.
5. Inspect the results

When the run completes, dagraph prints a run ID. Use it to inspect outputs:
agentgraph inspect <run_id>
This shows a table of all nodes with their status, token usage, and a preview of each artifact. To see the full output of a specific node:
agentgraph inspect <run_id> --node summary --full
All artifacts land in runs/<run_id>/artifacts/ as content-addressed binary files. The index.json in that directory maps each node ID to its SHA-256 hash. You can also diff two runs with agentgraph diff <run_a> <run_b> to see per-node text differences.
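Content-addressed storage of this kind is straightforward to picture: each artifact is written under its own hash, and a small index maps node IDs to hashes. A sketch under those assumptions — store_artifact is an invented name, not dagraph's internal function:

```python
import hashlib
import json
import tempfile
from pathlib import Path

def store_artifact(artifacts_dir: Path, node_id: str, data: bytes) -> str:
    """Write an artifact under its SHA-256 digest and record the
    node-id -> digest mapping in index.json (content-addressed storage)."""
    digest = hashlib.sha256(data).hexdigest()
    (artifacts_dir / digest).write_bytes(data)
    index_path = artifacts_dir / "index.json"
    index = json.loads(index_path.read_text()) if index_path.exists() else {}
    index[node_id] = digest
    index_path.write_text(json.dumps(index, indent=2))
    return digest

with tempfile.TemporaryDirectory() as tmp:
    digest = store_artifact(Path(tmp), "summary", b"3-point takeaway ...")
    print(digest[:12])  # first 12 hex chars of the artifact's SHA-256 hash
```

One consequence of this layout: identical outputs across runs hash to the same file, which is what makes run-to-run diffing and replay cheap.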

Next steps

You now have a working parallel agent workflow. From here you can:
  • Add an approval_gate node to pause the workflow for human review — see Human-in-the-loop
  • Swap backends per node using model-prefix routing (openai/gpt-4o, ollama/llama3.2) — see Multi-provider fallback
  • Replay a run without LLM calls for fast iteration: agentgraph run my-workflow.yaml --input topic="..." --replay-from <run_id>
  • Schedule recurring runs: agentgraph schedule my-workflow.yaml --cron "0 9 * * 1"
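For readers unfamiliar with cron syntax, "0 9 * * 1" means 09:00 every Monday (fields: minute, hour, day-of-month, month, day-of-week). A minimal matcher for single-number-or-wildcard fields — this is only a sketch of the cron format itself, not dagraph's scheduler:

```python
from datetime import datetime

def cron_matches(expr: str, dt: datetime) -> bool:
    """Check whether a datetime matches a five-field cron expression
    (minute, hour, day-of-month, month, day-of-week; '*' or one number)."""
    fields = expr.split()
    cron_dow = (dt.weekday() + 1) % 7  # cron convention: Sunday=0 ... Saturday=6
    actual = [dt.minute, dt.hour, dt.day, dt.month, cron_dow]
    return all(f == "*" or int(f) == a for f, a in zip(fields, actual))

# 2025-01-06 is a Monday, so "0 9 * * 1" matches at 09:00 that day.
print(cron_matches("0 9 * * 1", datetime(2025, 1, 6, 9, 0)))  # → True
```

Real cron also supports ranges, steps, and lists (e.g. "*/15", "1-5"), which this sketch deliberately omits.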