dagraph is a Python CLI tool distributed on PyPI as the dagraph package. This page covers system requirements, install options, and how to configure each of the 7 supported LLM backends.
Requirements
- Python 3.12 or newer — dagraph uses features not available in earlier versions.
- uv — recommended for managing Python environments and fast installs.
- claude CLI — required only for the default claude_code backend. Install it from claude.ai/code and ensure it is on your PATH.
Install
Clone the repository and install in editable mode with development dependencies:
git clone https://github.com/tamboo-dev/agentgraph.git
cd agentgraph
uv venv --python 3.12
uv pip install -e ".[dev]"
The [dev] extra includes pytest, ruff, and mypy.
Verify the install
Validate one of the bundled examples to confirm the engine is working:
agentgraph validate examples/research.yaml
# ✓ research: 4 nodes, 2 wave(s)
# wave 1: ['research_a', 'research_b', 'research_c']
# wave 2: ['synthesizer']
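The wave numbers in that output come from dependency analysis: nodes whose dependencies are all satisfied by earlier waves run together. A minimal sketch of that grouping (the function name and graph encoding are illustrative assumptions, not dagraph's actual API):

```python
def waves(deps: dict[str, set[str]]) -> list[list[str]]:
    """Group nodes into waves: each wave holds the nodes whose
    dependencies were all completed in earlier waves."""
    done: set[str] = set()
    order: list[list[str]] = []
    remaining = dict(deps)
    while remaining:
        # Nodes whose dependency sets are fully covered by finished nodes.
        ready = sorted(n for n, d in remaining.items() if d <= done)
        if not ready:
            raise ValueError("cycle detected in workflow graph")
        order.append(ready)
        done.update(ready)
        for n in ready:
            del remaining[n]
    return order

# The research example: three independent researchers feed one synthesizer.
graph = {
    "research_a": set(),
    "research_b": set(),
    "research_c": set(),
    "synthesizer": {"research_a", "research_b", "research_c"},
}
print(waves(graph))
# [['research_a', 'research_b', 'research_c'], ['synthesizer']]
```

The three research nodes have no dependencies, so they form wave 1 and can run concurrently; the synthesizer waits for all of them in wave 2.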
Backend setup
dagraph supports 7 executor backends. The default is claude_code. You can override the default at runtime with --backend <name>, or route individual nodes to a specific backend with a model prefix in the model field (e.g. openai/gpt-4o).
claude_code (default)
Spawns claude -p as a subprocess and bills against your Claude Code subscription. No API key required.
Setup: Install the claude CLI and confirm it is on your PATH:
claude --version
api (Anthropic Messages API)
Calls the Anthropic API directly. Billed per token.
Setup: Set ANTHROPIC_API_KEY in your environment or in a .env file at the project root:
ANTHROPIC_API_KEY=sk-ant-...
openai
Calls the OpenAI Chat Completions API. Billed per token.
Setup: Set OPENAI_API_KEY in your environment or in a .env file at the project root:
OPENAI_API_KEY=sk-...
gemini
Calls the Google GenAI API. Billed per token.
Setup: Set GEMINI_API_KEY in your environment or in a .env file at the project root:
GEMINI_API_KEY=AIza...
bedrock
Calls Anthropic models via AWS Bedrock. Billed per token through your AWS account.
Setup: Install the [bedrock] extra, then configure your AWS credentials using any method in the AWS credential chain (environment variables, ~/.aws/credentials, IAM role, etc.):
pip install "dagraph[bedrock]"
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
AWS_DEFAULT_REGION=us-east-1
ollama
Runs models locally via an Ollama daemon. Free to use.
Setup: Start the Ollama daemon so it is listening on localhost:11434:
ollama serve
Then pull a model before running a workflow that uses it:
ollama pull llama3.2
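If a workflow will depend on local inference, it can help to confirm the daemon is reachable first. A small convenience check, not part of dagraph itself:

```python
import urllib.error
import urllib.request


def ollama_running(host: str = "localhost", port: int = 11434) -> bool:
    """Return True if something answers HTTP on host:port.

    The Ollama daemon serves a plain-text status page at GET /,
    so a 200 response is a good sign it is up.
    """
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/", timeout=2) as r:
            return r.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, timeout, DNS failure, etc.
        return False
```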
codex
Runs OpenAI Codex via the codex CLI. Billed against your OpenAI Codex plan.
Setup: Install the codex CLI and confirm it is on your PATH:
codex --version
.env file setup
For API-key backends, create a .env file in your project directory. dagraph loads it automatically at startup:
# .env
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=AIza...
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
AWS_DEFAULT_REGION=us-east-1
Never commit your .env file to version control. Add it to .gitignore.
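Automatic .env loading generally amounts to reading KEY=VALUE pairs into the process environment at startup, with real environment variables taking precedence. A rough sketch of that behavior (dagraph's actual loader may differ, e.g. it may use python-dotenv):

```python
import os


def load_dotenv(path: str = ".env") -> None:
    """Read KEY=VALUE lines into os.environ.

    Blank lines and # comments are skipped; variables already set
    in the environment are not overwritten.
    """
    try:
        with open(path) as f:
            lines = f.readlines()
    except FileNotFoundError:
        return  # no .env file is fine; rely on the environment
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())
```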
Model-prefix routing
You can mix backends within a single workflow by prefixing the model field. dagraph detects the prefix and routes to the matching executor regardless of the --backend flag:
nodes:
  - id: brainstorm
    type: agent
    model: anthropic/claude-sonnet-4-6   # → api backend
    prompt: "..."
  - id: check
    type: agent
    model: openai/gpt-4o                 # → openai backend
    prompt: "..."
  - id: summarize
    type: agent
    model: ollama/llama3.2               # → ollama backend (local)
    prompt: "..."
Nodes without a prefix use the backend set by --backend (default: claude_code).
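The routing rule above can be as simple as splitting the model string on its first slash and checking the prefix against a table of known backends. A hedged sketch (the names and table are illustrative, not dagraph's internals):

```python
# Map of recognized model prefixes to executor backends.
BACKENDS = {
    "anthropic": "api",
    "openai": "openai",
    "gemini": "gemini",
    "bedrock": "bedrock",
    "ollama": "ollama",
}


def route(model: str, default_backend: str = "claude_code") -> tuple[str, str]:
    """Return (backend, bare model name) for a model string."""
    prefix, sep, rest = model.partition("/")
    if sep and prefix in BACKENDS:
        return BACKENDS[prefix], rest
    # No recognized prefix: use whatever --backend selected.
    return default_backend, model


print(route("openai/gpt-4o"))      # ('openai', 'gpt-4o')
print(route("claude-sonnet-4-6"))  # ('claude_code', 'claude-sonnet-4-6')
```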
Fallback chains
To keep a workflow running when a provider has an outage, declare a fallback_chain on any agent node. dagraph tries each model in order and uses the first successful response:
- id: research_a
  type: agent
  model: claude-sonnet-4-6
  fallback_chain:
    - openai/gpt-4o
    - ollama/llama3.2
  prompt: "..."
Auth errors (HTTP 401/403) and bad-request errors (400/422) skip the fallback chain — a different provider will not fix invalid credentials or malformed requests.
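That skip rule can be sketched as a loop that re-raises non-retryable HTTP statuses immediately and only falls through to the next model on other failures (hypothetical names, not dagraph's actual code):

```python
# Statuses a different provider cannot fix: bad request, bad credentials.
NON_RETRYABLE = {400, 401, 403, 422}


class ProviderError(Exception):
    def __init__(self, status: int, msg: str = ""):
        super().__init__(msg or f"HTTP {status}")
        self.status = status


def run_with_fallback(models, call):
    """Try each model in order; return the first successful response.

    Non-retryable statuses abort the chain immediately.
    """
    last = None
    for model in models:
        try:
            return call(model)
        except ProviderError as e:
            if e.status in NON_RETRYABLE:
                raise  # a different provider will not fix this
            last = e  # transient failure: try the next model
    raise last
```

A usage sketch: if the primary model raises a 529 (overloaded), the chain moves on to the next provider; a 401 from the first call propagates without touching the fallbacks.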