Memory scopes
How memory works
When you add a `memory:` field to an agent node, dagraph loads any previous conversation messages for that scope and prepends them to the agent’s context before the LLM call. After the call completes, the new exchange is appended to the scope. This gives the agent a rolling conversation history across calls: it sees what it said before, not just the current prompt.
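As a sketch, a node definition with memory attached might look like the following. Only the `memory:` field is documented behavior above; the surrounding node schema (`id`, `type`, `prompt`) is an assumption for illustration:

```yaml
# Illustrative node definition -- only memory: is documented behavior
nodes:
  - id: assistant
    type: agent
    prompt: "Answer the user's question."
    memory: "support-thread"   # scope name; history is keyed by this string
```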
Memory is keyed by scope name, not by run ID. Two separate runs that use the same scope name share the same history.
Short form (in-process)
The simplest way to use memory is to pass a bare string as the scope name. This keeps the history in memory for the duration of the current run only; it does not survive a process restart or a new `agentgraph run` invocation.
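A minimal sketch of the short form, assuming the field placement shown is valid:

```yaml
# In-process memory: a bare string scope, discarded when the run ends
memory: "research-thread"
```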
Long form (persistent)
Set `persist: true` to store conversation history in a local SQLite database. The history survives across runs, so the agent picks up exactly where it left off the next time you run the DAG with the same scope name.
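A sketch of the object form, using the `scope` and `persist` keys named in this section:

```yaml
# Persistent memory: object form, history stored in a local SQLite database
memory:
  scope: "research-thread"
  persist: true
```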
A bare string is shorthand for `{scope: "...", persist: false}`. Use the object form when you need persistent memory across runs.

Example: recurring research agent
A recurring research agent that remembers which topics it has already covered is a common use case for `persist: true`. Each time you trigger the DAG, the agent sees the full history of previous sessions and avoids repeating itself:
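The original example is not reproduced here, so the following is an illustrative sketch; node fields other than `memory:` are assumptions:

```yaml
# Illustrative recurring research DAG -- only memory: is documented behavior
nodes:
  - id: researcher
    type: agent
    prompt: >
      Pick one topic in distributed systems we have not covered yet
      and summarize recent developments. Do not repeat past topics.
    memory:
      scope: "weekly-research"
      persist: true   # history survives across runs
```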
Memory is tied to the scope name, not to a specific DAG or run. If two different DAGs use `scope: "shared-thread"`, they share the same conversation history. Use unique scope names to keep contexts separate.

Run tracing
What gets traced
Every `agentgraph run` writes a `trace.jsonl` file in the run directory (`runs/<run_id>/trace.jsonl`). The file uses JSON Lines format (one JSON object per line), following the OpenTelemetry GenAI semantic conventions.
Each run produces spans with these names:
| Span name | What it covers |
|---|---|
| `dag.run` | The entire run from start to finish |
| `dag.node` | A single node execution |
| `invoke_agent` | The full agent call, including any tool loops |
| `chat` | A single LLM API call |
| `execute_tool` | A single tool execution inside a tool-use loop |
Viewing traces
Use `agentgraph inspect <run_id>` to see a summary of token counts and timing per node.
The `trace.jsonl` file contains the full detail. Here is an example of what a `dag.node` span looks like:
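The original example span is not reproduced here, so the following is an illustrative sketch. The `gen_ai.*` attribute names come from the OTel GenAI semantic conventions; the `dag.*` attributes, IDs, and timestamps are invented for the example. In the real file each span occupies a single line; it is pretty-printed here for readability:

```json
{
  "name": "dag.node",
  "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
  "span_id": "00f067aa0ba902b7",
  "parent_span_id": "53995c3f42cd8ad8",
  "start_time_unix_nano": 1700000000000000000,
  "end_time_unix_nano": 1700000004200000000,
  "attributes": {
    "dag.node.id": "researcher",
    "gen_ai.usage.input_tokens": 1843,
    "gen_ai.usage.output_tokens": 412
  },
  "status": { "code": "OK" }
}
```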
Integrating with OTel backends
dagraph’s trace format follows the OTel GenAI semantic conventions. The `trace.jsonl` file produced in each run directory can be ingested by any OTel-compatible observability backend, such as Langfuse, Braintrust, or LangSmith. Load the file using your backend’s import tool, or script the upload against its API.
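Scripting an upload usually starts with parsing the JSON Lines file into span objects. A minimal sketch, assuming each line carries a `name` field and token-usage attributes under the OTel GenAI keys (the exact schema dagraph emits may differ):

```python
import json

def summarize_trace(lines):
    """Aggregate total token usage per span name from trace.jsonl lines.

    Assumes each line is a JSON object with a "name" field and an optional
    "attributes" dict using OTel GenAI token-usage keys.
    """
    totals = {}
    for line in lines:
        span = json.loads(line)
        attrs = span.get("attributes", {})
        tokens = (attrs.get("gen_ai.usage.input_tokens", 0)
                  + attrs.get("gen_ai.usage.output_tokens", 0))
        totals[span["name"]] = totals.get(span["name"], 0) + tokens
    return totals

# Usage: summarize_trace(open("runs/<run_id>/trace.jsonl"))
```

From here, a backend upload script would map each parsed span onto the backend’s ingestion API.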
Secrets in prompts and model outputs are automatically redacted before being written to `trace.jsonl` or sent to an OTel backend. dagraph detects common secret patterns (API keys, tokens, bearer credentials) and replaces them with `[REDACTED]`.
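The mechanism described above can be sketched as pattern-based substitution. The patterns below are assumptions covering a few common credential shapes; dagraph’s actual detection rules are not documented here:

```python
import re

# Illustrative patterns only -- dagraph's real rules may differ.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),           # OpenAI-style API keys
    re.compile(r"(?i)bearer\s+[A-Za-z0-9._\-]+"), # bearer credentials
    re.compile(r"ghp_[A-Za-z0-9]{36}"),           # GitHub personal tokens
]

def redact(text):
    """Replace anything matching a known secret pattern with [REDACTED]."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```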