agentgraph run is the primary command for executing a workflow. It reads your DAG YAML, computes the execution wave schedule from depends_on declarations, runs the nodes within each wave in parallel, and writes all results to a run directory under runs/. If a previous run was interrupted or paused at an approval gate, passing its ID via --run-id resumes it from exactly where it stopped; completed nodes are skipped automatically.
agentgraph run <dag_path> [OPTIONS]

Arguments and flags

dag_path
string
required
Path to the DAG YAML file. Relative or absolute.
--input / -i
string
Pass a template input as key=value. Repeat the flag for multiple inputs. Keys must match [A-Za-z_][A-Za-z0-9_]*. Values are available as Jinja variables inside node prompts (e.g., {{ topic }}).
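For example, a DAG whose prompts reference {{ topic }} and a hypothetical {{ audience }} variable could be invoked as:

agentgraph run examples/research.yaml -i topic="quantum computing" -i audience=engineers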
--run-id
string
Use a specific run ID instead of the auto-generated 12-character hex string. Pass the same ID as a previous run to resume it.
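For instance, to resume an interrupted run by its ID (the ID shown is illustrative):

agentgraph run examples/research.yaml --run-id 3fa9c2d81b7e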
--runs-dir
string
default:"runs"
Directory where run subdirectories are created. Defaults to runs/ relative to your working directory.
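For example, to keep run output outside the working directory (the path is illustrative):

agentgraph run examples/research.yaml --runs-dir /tmp/agentgraph-runs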
--backend
string
default:"claude_code"
LLM executor backend. Options:
  • claude_code — spawns claude -p as a subprocess using your Claude Code plan (no API key needed)
  • api — calls the Anthropic Messages API directly (requires ANTHROPIC_API_KEY)
  • codex — spawns the Codex CLI using your OpenAI/Codex plan
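For example, to call the Anthropic Messages API directly (assumes your key is exported; the value below is a placeholder, not a real key):

export ANTHROPIC_API_KEY=sk-ant-...
agentgraph run examples/research.yaml --backend api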
--sandbox
string
Sandbox for bash and python_exec nodes. Required if your DAG contains exec nodes. Options:
  • inprocess — runs subprocesses in the current process (for trusted code)
  • docker — runs each exec call in an ephemeral Docker container
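For example, to isolate exec nodes in ephemeral Docker containers (the DAG path is a placeholder):

agentgraph run my_pipeline.yaml --sandbox docker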
--record
string
Path to a JSONL file where every LLM call is recorded. The run still executes against the real backend; the fixture can be used later with --replay for deterministic testing.
--replay
string
Path to a JSONL fixture file. All LLM calls are served from the fixture; no backend is contacted. If a call's request hash is not found in the fixture, the run exits with an error.
--replay-from
string
Run ID of a previous run. Replays from that run’s auto-recorded fixture at runs/<run_id>/fixture.jsonl. Equivalent to --replay runs/<run_id>/fixture.jsonl.
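Taken together, a typical record-then-replay workflow might look like this (paths and the run ID are illustrative):

# Record every LLM call while executing against the real backend
agentgraph run examples/research.yaml --record fixtures/research.jsonl

# Replay deterministically from the fixture; no backend is contacted
agentgraph run examples/research.yaml --replay fixtures/research.jsonl

# Or replay a past run's auto-recorded fixture by run ID
agentgraph run examples/research.yaml --replay-from 3fa9c2d81b7e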
--max-concurrent
number
default:"10"
Maximum number of simultaneous in-flight LLM calls. Useful with claude_code where each call spawns a subprocess.
--rpm
number
Requests-per-minute cap. Applies a token-bucket rate limiter across all nodes. Most useful with --backend api to avoid Anthropic 429 errors.
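For example, to throttle an API-backend run (the limits shown are illustrative):

agentgraph run examples/research.yaml --backend api --max-concurrent 4 --rpm 60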

Examples

agentgraph run examples/research.yaml --input topic="quantum computing"

Run directory layout

Every run creates a directory at runs/<run_id>/ with the following structure:
runs/<run_id>/
├── state.db          # SQLite checkpoint — one row per node status change
├── trace.jsonl       # OTel-semconv JSON-line trace events
├── fixture.jsonl     # Auto-recorded LLM calls (for --replay-from)
└── artifacts/
    ├── index.json    # Maps node_id → SHA-256 digest
    └── sha256/
        ├── <hash>.bin
        └── ...
After the run completes, agentgraph prints the artifacts directory path and trace path so you can navigate directly.
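As a sketch, assuming index.json is a flat object of node_id-to-digest pairs and using a hypothetical node ID summarize, you can read a node's output straight from the content-addressed store (the run ID is illustrative):

digest=$(jq -r '.summarize' runs/3fa9c2d81b7e/artifacts/index.json)
cat "runs/3fa9c2d81b7e/artifacts/sha256/${digest}.bin"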
--record, --replay, and --replay-from are mutually exclusive. Passing more than one exits with an error.
Every run automatically records a fixture file at runs/<run_id>/fixture.jsonl, so you can always replay any past run with --replay-from <run_id> — even if you did not pass --record explicitly.