Nodes are the building blocks of a dagraph workflow. Each node in the nodes list has a type field that tells the scheduler what to execute — an LLM call, a shell command, a fan-out over a list, or a pause for human review. Every node also accepts a common set of policy fields that control conditional execution, retries, and budget limits.
Common fields (all node types)
These fields work on every node type regardless of type.
Unique identifier for this node within the DAG. Used to declare dependencies and to reference the node’s output in downstream prompts via {{ id }}.
List of node IDs that must complete before this node runs. Nodes with no depends_on run immediately when the DAG starts.
A Jinja2 expression evaluated against inputs and upstream outputs. If the expression is falsy, the node is skipped and treated as having produced an empty output. Downstream nodes still run.
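For example (a sketch only — the spelling of the conditional field is assumed to be when here, and triage is a hypothetical upstream node; check the schema reference for the exact field name):

```yaml
- id: escalate
  type: agent
  depends_on: [triage]
  # Field name assumed. Runs only when the triage output mentions "urgent";
  # otherwise the node is skipped and produces an empty output.
  when: "'urgent' in triage"
  model: claude-haiku-4-5-20251001
  prompt: |
    Draft an escalation notice based on:
    {{ triage }}
```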
Retry policy for transient failures.
max_attempts (integer, 1–20, default 1) — total attempts including the first.
backoff_seconds (float, default 0) — fixed delay between attempts.
retry_on (list of exception names, default ["*"]) — which exception classes trigger a retry. "*" retries on anything. BudgetExceededError and ApprovalPending are never retried regardless of this setting.
retry:
max_attempts: 3
backoff_seconds: 2
retry_on: ["*"]
Per-node budget cap. Uses the same max_tokens / max_usd fields as the top-level budget. The cap covers all LLM calls the node makes, including all iterations of a loop.
budget:
max_tokens: 10000
max_usd: 0.50
Node types
agent
evaluator_loop
loop
map
planner
bash
python_exec
subgraph
approval_gate
user_input
An agent node makes a single LLM call. The scheduler renders the prompt using Jinja2, calls the model, and stores the response as the node’s artifact. If you add tools or mcp_servers, the scheduler runs a full tool-use loop until the model stops calling tools or max_tool_iterations is reached.

Required fields: id, type, model, prompt

Model identifier. Use a plain name to use the default backend (claude-sonnet-4-6), or a provider/model prefix to pin a specific backend (openai/gpt-4o). See Backends.

Jinja2 template rendered at run time. Reference inputs and upstream node outputs by name. Use prompt_file to load from a file instead.
System prompt. Use system_file to load from a file.
Token budget for the model’s response. Default: 4096.
When true, print token deltas to the console as they arrive. Only respected by the api backend. Default: false.
Tool specifications to make available to the model. When non-empty, the scheduler runs a tool-use loop.
MCP server configurations. Cannot be combined with output_schema.
Maximum rounds in the tool-use loop before the node stops regardless of stop reason. Range: 1–50. Default: 10.
JSON Schema. When set, the model’s response is forced to be valid JSON matching the schema. Only works with the api backend; cannot be combined with tools or mcp_servers.
Memory scope name, or a { scope, persist } object. When persist: true, the conversation history for this scope survives across runs (SQLite-backed).
Ordered list of model strings to try if the primary model fails with a retriable error (429, 5xx, network error, timeout). See Backends.

- id: analyst
type: agent
model: claude-sonnet-4-6
max_output_tokens: 2000
depends_on: [data_fetch]
prompt: |
Analyse this data and return three key insights:
{{ data_fetch }}
fallback_chain:
- openai/gpt-4o
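Structured output can be sketched like this (assuming the api backend, since output_schema only works there; review_text is a hypothetical input):

```yaml
- id: classify
  type: agent
  model: claude-sonnet-4-6
  # Response is forced to be valid JSON matching this schema.
  # Cannot be combined with tools or mcp_servers.
  output_schema:
    type: object
    properties:
      sentiment: { type: string, enum: [positive, negative, neutral] }
      score: { type: number }
    required: [sentiment, score]
  prompt: |
    Classify the sentiment of this review:
    {{ review_text }}
```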
An evaluator_loop node runs a generator–evaluator pair iteratively. The generator produces a candidate response; the evaluator scores it and returns {"approved": true|false, "feedback": "..."}. If not approved, the generator runs again with the feedback. The loop exits when the evaluator approves or max_iterations is reached.

Required fields: id, type, generator, evaluator

An AgentRole object with model, prompt, max_output_tokens, system, and fallback_chain. The generator prompt can reference {{ previous_output }}, {{ evaluator_feedback }}, and {{ iteration }}.
An AgentRole object. The evaluator prompt can reference {{ candidate }} (the generator’s current output) and {{ iteration }}. Must return a JSON object with at least {"approved": bool, "feedback": string}.
Maximum generator+evaluator rounds. Range: 1–20. Default: 3.
- id: haiku
type: evaluator_loop
max_iterations: 3
generator:
model: claude-haiku-4-5-20251001
max_output_tokens: 400
prompt: |
Write a haiku about "{{ topic }}".
{% if iteration > 1 %}
Previous attempt: {{ previous_output }}
Feedback: {{ evaluator_feedback }}
Revise based on the feedback.
{% endif %}
evaluator:
model: claude-sonnet-4-6
max_output_tokens: 500
prompt: |
Evaluate this haiku for correct 5-7-5 syllable structure
and imagery about "{{ topic }}".
Candidate:
{{ candidate }}
Reply with ONLY:
{"approved": true|false, "feedback": "<critique>"}
A loop node runs a single agent repeatedly, up to max_iterations times. After each iteration, dagraph evaluates the until Jinja expression. When the expression is truthy, or when max_iterations is reached, the loop stops. The node’s final artifact is the last iteration’s output.

Required fields: id, type, agent

An AgentRole object (model, prompt, max_output_tokens, system, fallback_chain). The prompt can reference {{ previous_output }} and {{ iteration }}.
Maximum number of iterations. Range: 1–50. Default: 5.
Jinja expression evaluated after each iteration against inputs, upstream outputs, previous_output, and iteration. When truthy, the loop exits early. If omitted, the loop always runs max_iterations times.
- id: refine
type: loop
max_iterations: 4
until: "'DONE' in previous_output"
agent:
model: claude-haiku-4-5-20251001
max_output_tokens: 800
prompt: |
Iteration {{ iteration }}. Refine this draft:
{{ previous_output if previous_output else initial_draft }}
When satisfied, include the word DONE on its own line.
A map node fans out over a list: it runs one agent call per item concurrently, up to max_concurrency at a time. The node’s output is a JSON array of per-item response texts, which downstream nodes can consume with Jinja’s fromjson filter.

Required fields: id, type, over, model, prompt

Jinja expression (not a template) that evaluates to an iterable. If the result is a string, dagraph first tries to parse it as JSON; if that fails, it splits on newlines.
The template variable name bound to each item. Default: item. Each item also gets an index variable (0-based).
Model to use for each item’s agent call.
Jinja template rendered once per item. Reference the current item via the name set in as.
Maximum parallel item calls. Range: 1–50. Default: 5.
- id: summaries
type: map
over: topics
as: topic
model: claude-haiku-4-5-20251001
max_concurrency: 3
prompt: |
Write one sentence summarising "{{ topic }}". No preamble.
- id: synthesize
type: agent
depends_on: [summaries]
model: claude-sonnet-4-6
prompt: |
Combine these summaries into a short paragraph:
{% for s in summaries | fromjson %}
- {{ s }}
{% endfor %}
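The 0-based index variable described above is also available inside each item’s prompt, for example (a sketch):

```yaml
- id: numbered
  type: map
  over: topics
  as: topic
  model: claude-haiku-4-5-20251001
  prompt: |
    # index is 0-based, so add 1 for human-friendly numbering.
    ({{ index + 1 }}) One sentence summarising "{{ topic }}".
```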
A planner node makes one LLM call whose output must be a JSON object conforming to {"nodes": [...]}. The scheduler validates and injects those nodes into the running DAG, then executes them. Use a planner when the number or shape of downstream nodes depends on the content of earlier outputs.

Required fields: id, type, model, prompt

Model for the planning call.
Jinja template. The model must return ONLY a JSON object with a nodes array. Emitted nodes can be of type agent, evaluator_loop, loop, map, subgraph, bash, python_exec, approval_gate, or user_input — but not another planner.
Hard cap on the number of nodes the planner may emit. Range: 1–50. Default: 10.
Token budget for the planning response. Default: 4096.
- id: plan_angles
type: planner
model: claude-sonnet-4-6
max_output_tokens: 2000
max_emitted_nodes: 5
prompt: |
You are a research planner for "{{ topic }}".
Identify 2-3 angles worth investigating.
For each angle emit one agent node, then emit a synthesis node.
Return ONLY valid JSON — no prose, no fencing:
{
"nodes": [
{
"type": "agent",
"id": "angle_<name>",
"model": "claude-haiku-4-5-20251001",
"depends_on": ["plan_angles"],
"prompt": "Research this angle of \"{{ topic }}\": ..."
}
]
}
The planner’s prompt must instruct the model to return raw JSON with no markdown fencing. Any text outside the JSON object will cause the node to fail.
A bash node runs a shell command. Stdout becomes the node’s artifact. A non-zero exit code fails the node. The command string is rendered with Jinja using inputs and upstream outputs before execution.

Required fields: id, type, command

Shell command to run. Rendered as a Jinja template.
Maximum runtime, in seconds. Default: 300.
Environment variables to set for the process. Values are plain strings.
Maximum stdout size to capture. Default: 1000000 (1 MB).
Container image to run the command inside (when sandbox support is enabled).
- id: fetch_data
type: bash
command: "curl -s https://api.example.com/data/{{ dataset_id }}"
timeout_seconds: 30
env:
API_KEY: "{{ api_key }}"
If your command contains Jinja-style braces ({ or }), wrap the relevant section in {% raw %}...{% endraw %} to prevent template rendering.
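For instance, awk’s braces would otherwise be caught by the template renderer (a sketch; log_file is a hypothetical input):

```yaml
- id: count_errors
  type: bash
  # {% raw %}...{% endraw %} protects the awk program's braces,
  # while {{ log_file }} outside it is still rendered normally.
  command: "grep ERROR {{ log_file }} | awk {% raw %}'{ n += 1 } END { print n }'{% endraw %}"
```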
A python_exec node runs Python code. The code string is a Jinja template rendered before execution. Stdout becomes the node’s artifact; a non-zero exit code fails the node. The same rules about {% raw %} apply when your code contains dict literals or f-strings.

Required fields: id, type, code

Python code to execute. Rendered as a Jinja template before running.
Maximum runtime, in seconds. Default: 300.
Environment variables to set for the process.
Maximum stdout size to capture. Default: 1000000 (1 MB).
Container image to run the code inside (when sandbox support is enabled).
- id: parse_results
type: python_exec
depends_on: [fetch_data]
code: |
import json
data = json.loads("""{{ fetch_data }}""")
for item in data["results"]:
print(f"- {item['name']}: {item['score']}")
A subgraph node loads another DAG YAML file and runs it as if it were a single node. The sub-DAG receives the values in inputs (rendered with Jinja) and exposes the output of its export node (or the last topologically sorted node if export is omitted) as the subgraph node’s artifact.

The parent’s budget and executor are shared with the sub-DAG. Sub-DAG artifacts are stored under runs/<run_id>/subgraphs/<subgraph_id>/ to avoid collisions.

Required fields: id, type, from

Path to the child DAG YAML file, relative to the parent DAG file.
Key/value map of inputs to pass to the sub-DAG. Values are Jinja templates rendered against the parent context.
Node ID inside the sub-DAG whose output becomes this node’s artifact. Defaults to the last topologically sorted node.
- id: triage
type: subgraph
from: ./shared/triage.yaml
inputs:
ticket: "{{ raw_ticket }}"
priority: "{{ priority }}"
export: triage_summary
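The child file referenced in the example might look like this (a sketch only; the top-level inputs/nodes layout is assumed from the rest of this guide, and the default values shown are placeholders):

```yaml
# ./shared/triage.yaml
inputs:
  ticket: ""
  priority: "normal"
nodes:
  # This node's id matches the parent's export field, so its output
  # becomes the subgraph node's artifact.
  - id: triage_summary
    type: agent
    model: claude-haiku-4-5-20251001
    prompt: |
      Summarise this {{ priority }}-priority ticket in two sentences:
      {{ ticket }}
```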
An approval_gate node pauses the run and waits for a human to approve or reject via agentgraph approve. On pause, the scheduler persists run state and exits. Resume the run after approval; the downstream nodes then execute with access to the approval artifact.

Required fields: id, type

Instructions shown to the reviewer. Tell them how to inspect artifacts and issue the approve/reject command. Default: "Review the upstream artifacts and approve or reject."
If set, the node fails when this many seconds have elapsed since the pause started and no approval has arrived.
- id: review
type: approval_gate
depends_on: [draft]
timeout_seconds: 86400
prompt_to_human: |
Review the draft with:
agentgraph inspect <run_id> --node draft
Approve:
agentgraph approve <run_id> review
Reject:
agentgraph approve <run_id> review --reject --comment "reason"
A user_input node pauses the run and prompts the user for free-form text, a selection from a list, or a yes/no confirmation. Resume the run by responding via agentgraph respond. The submitted value is available to downstream nodes as a template variable.

Required fields: id, type, prompt_to_human

The question or instruction shown to the user.
One of text, select, or confirm. Default: text.
Required when input_type: select. The list of valid choices.
Fail the node if no response arrives within this many seconds.
- id: choose_tone
type: user_input
prompt_to_human: "Which tone should the report use?"
input_type: select
options:
- formal
- casual
- technical
timeout_seconds: 3600
- id: write_report
type: agent
depends_on: [choose_tone]
model: claude-sonnet-4-6
prompt: |
Write the report in a {{ choose_tone }} tone.
{{ research_findings }}