dagraph gives agent nodes two runtime primitives: memory scopes, so an agent can carry context across previous runs, and structured traces, so you can see exactly what tokens were spent and where time went. Both work out of the box with no extra configuration, with opt-in persistence for memory and opt-in OTel export for traces.

Memory scopes

How memory works

When you add a memory: field to an agent node, dagraph loads any previous conversation messages for that scope and prepends them to the agent’s context before the LLM call. After the call completes, the new exchange is appended to the scope. This gives the agent a rolling conversation history across calls — it sees what it said before, not just the current prompt. Memory is keyed by scope name, not by run ID. Two separate runs that use the same scope name share the same history.
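The load-prepend/append cycle above can be sketched in a few lines. This is an illustrative model, not dagraph's actual internals; the class and function names here are hypothetical.

```python
class MemoryScope:
    """Rolling conversation history keyed by scope name, not run ID."""

    def __init__(self):
        self._histories = {}  # scope name -> list of message dicts

    def load(self, scope):
        # Previous exchanges for this scope, prepended before the LLM call.
        return list(self._histories.get(scope, []))

    def append(self, scope, user_msg, assistant_msg):
        # After the call completes, the new exchange is appended to the scope.
        self._histories.setdefault(scope, []).extend([
            {"role": "user", "content": user_msg},
            {"role": "assistant", "content": assistant_msg},
        ])


def build_context(store, scope, prompt):
    # History first, then the current prompt: the agent sees what it
    # said before, not just the current message.
    return store.load(scope) + [{"role": "user", "content": prompt}]
```

Because lookup is by scope name alone, two calls that pass the same name get the same history, exactly as described above.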

Short form (in-process)

The simplest way to use memory is to pass a bare string as the scope name. This keeps the history in memory for the duration of the current run only — it does not survive a process restart or a new agentgraph run invocation.
- id: topic_researcher
  type: agent
  model: claude-sonnet-4-6
  memory: "research-thread"
  prompt: "Continue researching {{ topic }}. What have we not covered yet?"

Long form (persistent)

Set persist: true to store conversation history in a local SQLite database. The history survives across runs, so the agent picks up exactly where it left off the next time you run the DAG with the same scope name.
- id: topic_researcher
  type: agent
  model: claude-sonnet-4-6
  memory:
    scope: "research-thread"
    persist: true
  prompt: "Continue researching {{ topic }}. What have we not covered yet?"
memory (string | object)
A bare string is shorthand for {scope: "...", persist: false}. Use the object form when you need persistent memory across runs.
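The shorthand rule is easy to express as a normalization step. A hypothetical sketch (the field names mirror the YAML above; the function itself is not part of dagraph's API):

```python
def normalize_memory(value):
    """Expand the short form: a bare string means {scope: ..., persist: False}."""
    if isinstance(value, str):
        return {"scope": value, "persist": False}
    if isinstance(value, dict):
        # Object form: persist defaults to False when omitted.
        return {"scope": value["scope"], "persist": value.get("persist", False)}
    raise TypeError("memory must be a string or an object with a scope")
```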

Example: recurring research agent

A recurring research agent that remembers which topics it has already covered is a common use case for persist: true. Each time you trigger the DAG, the agent sees the full history of previous sessions and avoids repeating itself:
name: weekly_digest
description: Recurring research agent that builds on prior sessions.

inputs:
  topic:
    type: string
    required: true

nodes:
  - id: researcher
    type: agent
    model: claude-sonnet-4-6
    memory:
      scope: "digest-history"
      persist: true
    prompt: |
      You are a research assistant that tracks ongoing coverage of {{ topic }}.
      Review your conversation history above and identify what you have already
      covered. Then research and summarize only NEW developments from the past week.
      Do not repeat information from prior sessions.
Memory is tied to the scope name, not to a specific DAG or run. If two different DAGs use scope: "shared-thread", they share the same conversation history. Use unique scope names to keep contexts separate.
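One convention for guaranteeing isolation is to build the scope name from the DAG name, so two DAGs can reuse the same suffix without colliding. This helper is purely illustrative, not something dagraph does for you:

```python
def scoped(dag_name, scope):
    # "weekly_digest:digest-history" never collides with another DAG's
    # "monthly_report:digest-history", even though the suffix matches.
    return f"{dag_name}:{scope}"
```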

Run tracing

What gets traced

Every agentgraph run writes a trace.jsonl file in the run directory (runs/<run_id>/trace.jsonl). The file uses JSON Lines format — one JSON object per line — following the OpenTelemetry GenAI semantic conventions. Each run produces spans with these names:
Span name       What it covers
dag.run         The entire run from start to finish
dag.node        A single node execution
invoke_agent    The full agent call, including any tool loops
chat            A single LLM API call
execute_tool    A single tool execution inside a tool-use loop
Span attributes include the model name, input tokens, output tokens, cache tokens, and wall-clock duration. All string values — including prompts and outputs — are automatically redacted before writing, so secrets that appear in prompts do not leak into trace files.

Viewing traces

Use agentgraph inspect <run_id> to see a summary of token counts and timing per node:
agentgraph inspect run_20250426_143201

Node            Status     Input   Output  Cache   USD
──────────────  ─────────  ──────  ──────  ──────  ──────
research_a      completed  1 204    843      0     $0.004
research_b      completed  1 198    791      0     $0.003
research_c      completed  1 211    820      0     $0.004
synthesizer     completed  4 102   2 341     0     $0.042
──────────────────────────────────────────────────────────
Total                      7 715   4 795     0     $0.053
The raw trace.jsonl file contains the full detail. Here is an example of what a dag.node span looks like:
{
  "kind": "span",
  "trace_id": "a3f8c21b4e9d0f76",
  "span_id": "b7e1209d3c4a5f80",
  "parent_span_id": "001f4a8c2d3e9b17",
  "name": "dag.node",
  "status": "ok",
  "start_ns": 1745673121004000000,
  "end_ns": 1745673128741000000,
  "duration_ms": 7737.0,
  "attributes": {
    "dag.node.id": "synthesizer",
    "dag.node.type": "agent",
    "gen_ai.request.model": "claude-sonnet-4-6",
    "gen_ai.usage.input_tokens": 4102,
    "gen_ai.usage.output_tokens": 2341,
    "gen_ai.usage.cache_read_tokens": 0,
    "gen_ai.usage.cache_write_tokens": 0
  }
}
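Because every line is a standalone JSON object with the shape shown above, per-node summaries like the one agentgraph inspect prints are straightforward to reproduce yourself. A minimal sketch, assuming only the attribute keys visible in the example span:

```python
import json
from collections import defaultdict

def summarize(lines):
    """Aggregate input/output tokens per node from dag.node spans."""
    totals = defaultdict(lambda: {"input": 0, "output": 0})
    for line in lines:
        line = line.strip()
        if not line:
            continue
        span = json.loads(line)
        # Only dag.node spans carry the per-node usage attributes.
        if span.get("kind") == "span" and span.get("name") == "dag.node":
            attrs = span["attributes"]
            node = attrs["dag.node.id"]
            totals[node]["input"] += attrs.get("gen_ai.usage.input_tokens", 0)
            totals[node]["output"] += attrs.get("gen_ai.usage.output_tokens", 0)
    return dict(totals)
```

Pass it the lines of a run's trace.jsonl file to get a node-to-token-counts mapping.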

Integrating with OTel backends

dagraph’s trace format follows OTel GenAI semantic conventions. The trace.jsonl file produced in each run directory can be ingested by any OTel-compatible observability backend such as Langfuse, Braintrust, or LangSmith. Load the file using your backend’s import tool or script the upload against its API.
The trace.jsonl file is always written for every run. You can import it into any observability tool after the fact without re-running the DAG.
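Backends differ in their ingest APIs, but a scripted upload generally reads the JSON Lines file and submits spans in batches. The batching half of such a script might look like this; the HTTP call itself is omitted because the endpoint and payload shape depend entirely on your backend:

```python
import json

def batch_spans(lines, batch_size=100):
    """Yield lists of parsed spans from trace.jsonl, batch_size at a time."""
    batch = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # tolerate blank lines
        batch.append(json.loads(line))
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # final partial batch
```

Each yielded batch can then be POSTed to your backend's ingest endpoint in whatever envelope its API expects.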
Secrets in prompts and model outputs are automatically redacted before being written to trace.jsonl or sent to an OTel backend. dagraph detects common secret patterns (API keys, tokens, bearer credentials) and replaces them with [REDACTED].
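The redaction step can be pictured as a pattern sweep over every string value before it is written. A sketch in that spirit; the actual patterns dagraph matches are not documented here, so these two regexes are illustrative assumptions:

```python
import re

# Illustrative patterns only: an OpenAI-style "sk-" key and a bearer token.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),            # API-key-like strings
    re.compile(r"(?i)bearer\s+[A-Za-z0-9._\-]+"),  # bearer credentials
]

def redact(text):
    """Replace anything matching a known secret pattern with [REDACTED]."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```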