The default backend is set with the `--backend` CLI flag; individual nodes can override it by prefixing the model name with a provider identifier.
## Backend summary
| Backend | How it runs | Billing | Required setup |
|---|---|---|---|
| `claude_code` | Spawns the `claude` CLI as a subprocess | Claude Code plan (Max/Pro) | `claude` CLI installed and authenticated |
| `api` | Anthropic Messages API | Per-token via API key | `ANTHROPIC_API_KEY` environment variable |
| `openai` | OpenAI Chat Completions API | Per-token via API key | `OPENAI_API_KEY` environment variable |
| `gemini` | Google GenAI API | Per-token via API key | `GEMINI_API_KEY` environment variable |
| `bedrock` | AWS Bedrock | Per-token via AWS account | `pip install dagraph[bedrock]` + AWS credentials |
| `ollama` | Local Ollama daemon | Free (local) | Ollama running on `localhost:11434` |
| `codex` | OpenAI Codex CLI subprocess | OpenAI plan | `codex` CLI installed and authenticated |
## Backend details
### claude_code (default)

The `claude_code` backend spawns `claude -p` as a subprocess for each node call. It runs against your Claude Code plan (Max or Pro) rather than the API, so nodes do not accrue per-token API charges. Token usage is still recorded in traces and counted against any budget you set (using the equivalent API cost), but you pay through your plan subscription, not per token.

Each node call starts a new, isolated `claude` session with no persistent tools. Streaming (`stream: true` on a node) is silently ignored for this backend.
### api — Anthropic Messages API
The `api` backend calls the Anthropic Messages API directly. It supports all agent node features, including `tools`, `mcp_servers`, `output_schema`, and `stream`.

**Setup:** Set `ANTHROPIC_API_KEY` in your environment or in a `.env` file at your project root.
### openai — OpenAI Chat Completions
The `openai` backend calls the OpenAI Chat Completions API.

**Setup:** Set `OPENAI_API_KEY` in your environment or `.env`.

Use the `openai/` prefix to route a specific node to OpenAI regardless of the default backend:
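A minimal sketch of such a node, assuming a YAML graph where each node carries a `model` field (the node name and surrounding structure are illustrative, not dagraph's exact schema):

```yaml
nodes:
  classify:
    model: openai/gpt-4o   # routed to the openai backend even if --backend is claude_code
```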
### gemini — Google GenAI
The `gemini` backend calls the Google GenAI API.

**Setup:** Set `GEMINI_API_KEY` in your environment or `.env`.
### bedrock — AWS Bedrock
The `bedrock` backend calls AWS Bedrock. It requires the `[bedrock]` extra and AWS credentials configured in your environment (via `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY`, an IAM role, or an AWS profile).

**Setup:** Install the extra with `pip install dagraph[bedrock]`, then configure your AWS credentials.
### ollama — local models
The `ollama` backend sends requests to a locally running Ollama daemon on `localhost:11434`. No API key is required. Use this as a free last resort in a `fallback_chain`, or as the default backend for development.

**Setup:** Install and start Ollama, then pull the models you need:
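For example (the model name is illustrative; pull whichever models your graph references):

```shell
ollama pull llama3.2
```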
### codex — OpenAI Codex CLI
The `codex` backend spawns the OpenAI Codex CLI as a subprocess, similar to how `claude_code` spawns the `claude` CLI. It runs against your OpenAI plan.

**Setup:** Install and authenticate the `codex` CLI.

## Model-prefix routing
Any node can be pinned to a specific backend by prefixing the `model` value with `provider/`. dagraph splits on the first `/`, resolves the backend, and passes the remainder to that backend's SDK.
| `model` value | Backend used | Model passed to SDK |
|---|---|---|
| `claude-sonnet-4-6` | default backend (from `--backend`) | `claude-sonnet-4-6` |
| `anthropic/claude-sonnet-4-6` | `api` | `claude-sonnet-4-6` |
| `openai/gpt-4o` | `openai` | `gpt-4o` |
| `gemini/gemini-2.0-flash` | `gemini` | `gemini-2.0-flash` |
| `bedrock/anthropic.claude-sonnet-4-6-v1` | `bedrock` | `anthropic.claude-sonnet-4-6-v1` |
| `ollama/llama3.2` | `ollama` | `llama3.2` |
The recognized prefixes are `anthropic`, `openai`, `gemini`, `bedrock`, and `ollama`. An unknown prefix raises a validation error at run time.
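The routing rule can be sketched in a few lines of Python. This is an illustration of the behavior described above, not dagraph's actual implementation; the table and function names are assumptions:

```python
# Maps provider prefixes to backend names (the anthropic prefix routes to
# the "api" backend, per the table above).
PREFIX_TO_BACKEND = {
    "anthropic": "api",
    "openai": "openai",
    "gemini": "gemini",
    "bedrock": "bedrock",
    "ollama": "ollama",
}

def resolve_model(model: str, default_backend: str = "claude_code") -> tuple[str, str]:
    """Return (backend, model_id) for a node's `model` value."""
    if "/" not in model:
        # No prefix: the node uses the default backend unchanged.
        return default_backend, model
    # Split on the FIRST slash only; the remainder may itself contain dots or dashes.
    prefix, model_id = model.split("/", 1)
    if prefix not in PREFIX_TO_BACKEND:
        raise ValueError(f"unknown provider prefix: {prefix!r}")
    return PREFIX_TO_BACKEND[prefix], model_id
```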
## Setting the default backend
Use the `--backend` flag with `agentgraph run`. Every node that does not have a provider prefix in its `model` field uses this default.
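For example (the graph filename and flag placement are illustrative):

```shell
agentgraph run pipeline.yaml --backend openai
```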
Nodes with a provider prefix in their `model` value always override the default:
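A sketch of the two cases side by side, assuming a YAML graph where each node has a `model` field (node names and surrounding structure are illustrative):

```yaml
nodes:
  summarize:
    model: claude-sonnet-4-6   # no prefix: uses the default from --backend
  translate:
    model: openai/gpt-4o       # prefixed: always routed to the openai backend
```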
## Fallback chains
Every `agent` node (and the generator/evaluator roles inside composite nodes) accepts a `fallback_chain`: an ordered list of model strings to try when the primary model returns a retriable error.
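A hedged sketch of what this might look like (the node structure and field placement are assumptions; the prefix syntax follows the routing rules above):

```yaml
nodes:
  draft:
    model: anthropic/claude-sonnet-4-6
    fallback_chain:
      - openai/gpt-4o
      - ollama/llama3.2   # free local last resort
```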
### Multi-provider example
The following example (from `examples/multi_provider_fallback.yaml`) shows two nodes, each using a different primary provider with a fallback chain: