Nodes are the building blocks of a dagraph workflow. Each node in the nodes list has a type field that tells the scheduler what to execute — an LLM call, a shell command, a fan-out over a list, or a pause for human review. Every node also accepts a common set of policy fields that control conditional execution, retries, and budget limits.

Common fields (all node types)

These fields work on every node type regardless of type.
id
string
required
Unique identifier for this node within the DAG. Used to declare dependencies and to reference the node’s output in downstream prompts via {{ id }}.
depends_on
list[string]
List of node IDs that must complete before this node runs. Nodes with no depends_on run immediately when the DAG starts.
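For example, a node that waits on two upstream nodes (the IDs here are illustrative):

depends_on: [fetch_raw, clean_data]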
when
string
A Jinja2 expression evaluated against inputs and upstream outputs. If the expression is falsy, the node is skipped and treated as having produced an empty output. Downstream nodes still run.
when: "mode == 'deep'"
retry
object
Retry policy for transient failures.
  • max_attempts (integer, 1–20, default 1) — total attempts including the first.
  • backoff_seconds (float, default 0) — fixed delay between attempts.
  • retry_on (list of exception names, default ["*"]) — which exception classes trigger a retry. "*" retries on anything. BudgetExceededError and ApprovalPending are never retried regardless of this setting.
retry:
  max_attempts: 3
  backoff_seconds: 2
  retry_on: ["*"]
budget
object
Per-node budget cap. Uses the same max_tokens / max_usd fields as the top-level budget. The cap covers all LLM calls the node makes, including all iterations of a loop.
budget:
  max_tokens: 10000
  max_usd: 0.50

Node types

agent

An agent node makes a single LLM call. The scheduler renders the prompt using Jinja2, calls the model, and stores the response as the node's artifact. If you add tools or mcp_servers, the scheduler runs a full tool-use loop until the model stops calling tools or max_tool_iterations is reached.

Required fields: id, type, model, prompt
model
string
required
Model identifier. Use a plain name to use the default backend (claude-sonnet-4-6), or a provider/model prefix to pin a specific backend (openai/gpt-4o). See Backends.
prompt
string
required
Jinja2 template rendered at run time. Reference inputs and upstream node outputs by name. Use prompt_file to load from a file instead.
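For example, loading the template from a file instead of writing it inline (the path is illustrative):

prompt_file: prompts/analyse.j2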
system
string
System prompt. Use system_file to load from a file.
max_output_tokens
integer
Token budget for the model’s response. Default: 4096.
stream
boolean
When true, print token deltas to the console as they arrive. Only respected by the api backend. Default: false.
tools
list
Tool specifications to make available to the model. When non-empty, the scheduler runs a tool-use loop.
mcp_servers
list
MCP server configurations. Cannot be combined with output_schema.
max_tool_iterations
integer
Maximum rounds in the tool-use loop before the node stops regardless of stop reason. Range: 1–50. Default: 10.
output_schema
object
JSON Schema. When set, the model’s response is forced to be valid JSON matching the schema. Only works with the api backend; cannot be combined with tools or mcp_servers.
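For example, forcing the response into a fixed JSON shape (the property names are illustrative; the schema itself uses standard JSON Schema keywords):

output_schema:
  type: object
  properties:
    insights:
      type: array
      items: { type: string }
  required: [insights]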
memory
string | object
Memory scope name, or a { scope, persist } object. When persist: true, the conversation history for this scope survives across runs (SQLite-backed).
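For example, a persistent named scope using the object form (the scope name is illustrative):

memory:
  scope: research
  persist: true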
fallback_chain
list[string]
Ordered list of model strings to try if the primary model fails with a retriable error (429, 5xx, network error, timeout). See Backends.
- id: analyst
  type: agent
  model: claude-sonnet-4-6
  max_output_tokens: 2000
  depends_on: [data_fetch]
  prompt: |
    Analyse this data and return three key insights:

    {{ data_fetch }}
  fallback_chain:
    - openai/gpt-4o