The agentgraph run command is the right tool for one-off executions, but production pipelines usually need to run on a schedule, respond to external events, or both. agentgraph provides two commands for this: schedule runs a DAG on a cron schedule and blocks until you stop it, and serve starts a long-running HTTP server that accepts webhook triggers and can also run a cron schedule. This guide covers both.

Prerequisites

Both schedule and serve require the serve extras package, which adds APScheduler and Uvicorn:
uv pip install 'agentgraph[serve]'
If you run agentgraph schedule or agentgraph serve without the extras installed, agentgraph prints a clear error message with the install command.

Running on a cron schedule

Use agentgraph schedule to run a DAG repeatedly on a cron expression. The command blocks until you press Ctrl+C — run it under a process manager (systemd, supervisor, Docker) in production.
agentgraph schedule research.yaml \
  --cron "0 */6 * * *" \
  --input topic="AI industry news" \
  --backend api
This runs the research.yaml DAG every 6 hours on the hour.
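Since the command blocks in the foreground, a process manager should own it in production. The unit file below is a sketch of the systemd approach mentioned above; every path, the unit name, and the working directory are placeholders to adapt to your deployment:

```ini
# /etc/systemd/system/agentgraph-research.service (example path)
[Unit]
Description=agentgraph scheduled research DAG
After=network-online.target

[Service]
# Placeholder paths: point these at wherever your venv and DAG file live.
WorkingDirectory=/opt/research
ExecStart=/opt/research/.venv/bin/agentgraph schedule research.yaml \
    --cron "0 */6 * * *" \
    --input topic="AI industry news" \
    --backend api
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now agentgraph-research. Restart=on-failure means systemd restarts the scheduler if it crashes, while Ctrl+C semantics are replaced by systemctl stop.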

Cron syntax

agentgraph uses standard 5-field cron syntax: minute hour day month weekday.
┌─────── minute (0–59)
│ ┌───── hour (0–23)
│ │ ┌─── day of month (1–31)
│ │ │ ┌─ month (1–12)
│ │ │ │ ┌ weekday (0–7, 0 and 7 = Sunday)
│ │ │ │ │
* * * * *
Common examples:
Expression      Meaning
0 */6 * * *     Every 6 hours, on the hour
0 9 * * 1-5     9 AM Monday–Friday
*/15 * * * *    Every 15 minutes
0 0 * * *       Midnight every day

Non-overlapping execution

agentgraph enforces that only one instance of a DAG runs at a time. If a tick fires while a previous run is still in progress, the tick is skipped and a warning is logged:
⚠ cron tick skipped: previous run still in progress
Missed ticks are coalesced: if the process was down when several ticks should have fired, agentgraph runs at most one catch-up execution on restart, not one per missed tick.
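The skip behavior can be pictured as a non-blocking lock held for the duration of each run. This is a simplified standalone sketch of the semantics, not agentgraph's actual implementation:

```python
import threading
import time

run_lock = threading.Lock()   # held while a DAG run is in progress
skipped = []

def on_cron_tick(tick_id):
    # A tick only starts a run if no previous run still holds the lock.
    if not run_lock.acquire(blocking=False):
        skipped.append(tick_id)   # would log "cron tick skipped: ..."
        return
    try:
        time.sleep(0.2)           # simulate a DAG run that outlasts the next tick
    finally:
        run_lock.release()

# Fire two ticks while the first run is still executing:
ticks = [threading.Thread(target=on_cron_tick, args=(i,)) for i in (1, 2)]
for t in ticks:
    t.start()
for t in ticks:
    t.join()
print(len(skipped))   # one of the two overlapping ticks was skipped
```

The same shape (try-lock, skip on contention) gives you non-overlap without queueing, which is why a slow run produces skipped ticks rather than a backlog.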

Webhook triggers with agentgraph serve

agentgraph serve starts a lightweight HTTP server that triggers a DAG run on each incoming POST request. The JSON body of the request becomes the DAG inputs.
agentgraph serve research.yaml \
  --webhook /trigger \
  --port 8000
Trigger a run:
curl -X POST http://localhost:8000/trigger \
  -H "Content-Type: application/json" \
  -d '{"topic": "quantum computing breakthroughs"}'
Response:
{"run_id": "abc123def456", "status": "completed"}
The response is synchronous — the HTTP request blocks until the DAG finishes and returns the final status. Each request creates a fresh run with a unique run_id stored under runs/.

Merging static and webhook inputs

Pass --input flags to serve to provide static defaults. Webhook body keys are merged on top, so the webhook caller can override specific inputs without having to send everything:
agentgraph serve research.yaml \
  --webhook /trigger \
  --input backend="api" \
  --input max_words="500"
A request body of {"topic": "AI safety"} results in the inputs: topic="AI safety", backend="api", max_words="500".
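The merge semantics amount to overlaying the webhook body on the static defaults. In Python terms (a sketch of the behavior, not agentgraph's source):

```python
static_inputs = {"backend": "api", "max_words": "500"}  # from --input flags
webhook_body = {"topic": "AI safety"}                   # parsed POST body

# Webhook keys win on conflict; static flags fill in everything else.
inputs = {**static_inputs, **webhook_body}
print(inputs)
# {'backend': 'api', 'max_words': '500', 'topic': 'AI safety'}

# A caller can override a single default without resending the rest:
override = {**static_inputs, **{"topic": "AI safety", "max_words": "300"}}
print(override["max_words"])   # '300'
```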

Securing the webhook with Bearer auth

Use --webhook-secret to require an Authorization: Bearer <secret> header on every request. Requests without the correct header receive a 401 response and do not trigger a run:
agentgraph serve research.yaml \
  --webhook /trigger \
  --webhook-secret my-secret-token
curl -X POST http://localhost:8000/trigger \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer my-secret-token" \
  -d '{"topic": "robotics"}'
You can also set the secret via the AGENTGRAPH_WEBHOOK_SECRET environment variable instead of passing it as a flag — this keeps secrets out of your shell history and process list.
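For reference, a Bearer check of this kind is typically a constant-time comparison against the expected header. The sketch below is illustrative of the pattern, not agentgraph's source:

```python
import hmac

def authorized(auth_header, secret):
    """Return True only for a header of the exact form 'Bearer <secret>'."""
    prefix = "Bearer "
    if not auth_header or not auth_header.startswith(prefix):
        return False
    # hmac.compare_digest avoids leaking the secret via timing differences.
    return hmac.compare_digest(auth_header[len(prefix):], secret)

print(authorized("Bearer my-secret-token", "my-secret-token"))  # True
print(authorized("Bearer wrong-token", "my-secret-token"))      # False
print(authorized(None, "my-secret-token"))                      # False
```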

Combining webhook and cron

Pass both --webhook and --cron to serve to run on both triggers simultaneously from a single process:
agentgraph serve research.yaml \
  --webhook /run \
  --cron "0 9 * * 1-5" \
  --input topic="daily AI digest" \
  --backend api
Cron ticks keep their non-overlapping constraint: a tick that fires while a previous cron run is still executing is skipped. Webhook requests are independent runs, so if a cron run is in progress when a webhook request arrives, the webhook run still starts.

Changing the bind address and port

By default serve listens on 127.0.0.1:8000. Change this with --host and --port:
# Listen on all interfaces, port 9000
agentgraph serve research.yaml \
  --webhook /trigger \
  --host 0.0.0.0 \
  --port 9000
Binding to 0.0.0.0 exposes the server to all network interfaces. Always use --webhook-secret when serving on a non-localhost address.

Lifecycle hooks as an alternative notification mechanism

If you need to notify an external system when a scheduled run completes (send a Slack message, write to a database, trigger a downstream pipeline), use lifecycle hooks instead of polling the run status. Add a hooks section to your DAG YAML:
name: research
hooks:
  on_dag_complete:
    - type: webhook
      url: "https://hooks.slack.com/services/${SLACK_WEBHOOK_PATH}"
      headers:
        Content-Type: application/json

  on_dag_failed:
    - type: command
      command: "echo 'DAG failed' | mail -s 'dagraph alert' ops@example.com"

nodes:
  - id: research_a
    ...
Supported lifecycle events:
Event               Fires when
on_dag_start        The scheduler begins executing the first wave
on_dag_complete     All nodes completed successfully
on_dag_paused       An approval_gate or user_input node paused the run
on_dag_failed       Any node failed and the run halted
on_node_start       A specific node begins executing
on_node_complete    A specific node finishes successfully
on_node_failed      A specific node fails
Webhook hook URLs support ${VAR} environment variable expansion in both the URL and headers, so you can keep secrets out of the YAML file. Hook failures are logged as warnings and never fail the DAG.
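The ${VAR} form matches what Python's standard string.Template expands, so the substitution behavior can be sketched like this (illustrative, not the actual hook code; the env var value here is a placeholder):

```python
import os
from string import Template

# Placeholder value standing in for a real Slack webhook path:
os.environ["SLACK_WEBHOOK_PATH"] = "T000/B000/placeholder"

url = "https://hooks.slack.com/services/${SLACK_WEBHOOK_PATH}"
expanded = Template(url).substitute(os.environ)
print(expanded)
# https://hooks.slack.com/services/T000/B000/placeholder
```

Because expansion happens at hook-fire time from the process environment, the YAML committed to version control never contains the secret itself.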

Human approval gates

Use approval gates to pause scheduled runs for human review before continuing.

Parallel agents

Build the fan-out DAGs that your scheduled and webhook-triggered runs execute.