swarm is installed as a standalone command via pip install dot-swarm.
pip install dot-swarm # base CLI
pip install 'dot-swarm[ai]' # + AWS Bedrock support (boto3)
| Flag | Default | Description |
|---|---|---|
| `--path PATH` | `.` (cwd) | Root path to search for `.swarm/` directory |
| `--version` | — | Print version and exit |
| `--help` | — | Show help |
All commands inherit --path. Example: swarm --path ../api-service status
### swarm init
Initialize a `.swarm/` directory in the current repo.
swarm init # auto-detect org vs division level
swarm init --level org # force org level (ORG- item IDs)
swarm init --level division # force division level
swarm init --code CLD # set division code (default: derived from folder name)
Creates: BOOTSTRAP.md, context.md, state.md, queue.md, memory.md
### swarm status
Print current state and active/pending queue items for this division.
swarm status # active + pending items only
swarm status --all # include done items
### swarm ls
List queue items with filtering.
swarm ls # all items
swarm ls --section active # active only
swarm ls --section pending # pending only
swarm ls --priority high # filter by priority
swarm ls --project cloud-stability # filter by project tag
### swarm explore
Show the heartbeat of all divisions in the colony. Recursively discovers `.swarm/` directories.
swarm explore # from current directory, depth 2
swarm explore --depth 3 # search deeper
swarm --path ~/org explore # from org root
### swarm report
Generate a full markdown report of all divisions. Unlike explore, outputs a complete document suitable for sharing, filing as a GitHub issue, or posting to a wiki.
swarm report # print to stdout
swarm report --out REPORT.md # write to file
swarm report --only active # active items only
swarm report --no-done # skip done sections
### swarm ready
List OPEN items with all dependencies satisfied — safe to pick up right now. Equivalent to `bd ready` in the Gastown/Beads ecosystem.
swarm ready # human-readable list
swarm ready --json # machine-readable JSON array (for agent scripts)
Only items in the Pending section with state OPEN whose entire depends: chain
appears in the Done section are shown. Items with no dependencies are always listed.
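The dependency rule above can be sketched in Python. This is a minimal illustrative model — items as plain dicts with `state` and `depends` fields — not the actual dot-swarm internals:

```python
def ready_items(pending, done_ids):
    """Return IDs of pending OPEN items whose entire depends chain is done."""
    ready = []
    for item in pending:
        if item["state"] != "OPEN":
            continue
        # Items with no dependencies are always listed (all() over [] is True).
        if all(dep in done_ids for dep in item.get("depends", [])):
            ready.append(item["id"])
    return ready

pending = [
    {"id": "API-044", "state": "OPEN", "depends": ["API-042", "API-043"]},
    {"id": "API-045", "state": "OPEN", "depends": []},
    {"id": "API-046", "state": "BLOCKED", "depends": []},
]
print(ready_items(pending, done_ids={"API-042"}))  # → ['API-045']
```

API-044 stays hidden until both of its dependencies appear in Done; the BLOCKED item is never listed regardless of dependencies.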
### swarm add
Add a new work item to the Pending queue.
swarm add "Add request ID tracing to all services"
swarm add "Fix Redis timeout" --priority high --project infra
swarm add "OAuth2 discovery" --notes "See RFC 8414 for discovery spec"
swarm add "Run integration tests" --depends API-042,API-043
swarm add "Implement rate limiter" --max-retries 5 # override inspector retry limit for this task
Options: --priority [low|medium|high|critical], --project TEXT, --notes TEXT,
--depends ITEM-IDs (comma-separated), --max-retries N
--max-retries N sets a task-level retry override for the inspector role. When
an inspector rejects this item N times, it is automatically BLOCKED (surfacing in
swarm audit and swarm status) rather than re-opened. Set to 0 (default) to
inherit the inspector role’s max_iterations setting.
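The inherit-on-zero rule reduces to a one-line helper (an illustrative sketch, not the actual dot-swarm code):

```python
def effective_retry_limit(task_max_retries, role_max_iterations):
    """Task-level --max-retries wins; 0 (the default) inherits the
    inspector role's max_iterations setting."""
    return task_max_retries if task_max_retries > 0 else role_max_iterations

print(effective_retry_limit(0, 3))  # → 3 (inherits role setting)
print(effective_retry_limit(5, 3))  # → 5 (task override)
```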
### swarm claim
Claim an item (move it to Active, stamp with agent ID + timestamp).
swarm claim API-042
swarm claim API-042 --agent my-agent-id
### swarm done
Mark a claimed item as done.
swarm done API-042
swarm done API-042 --note "Fixed by switching to sliding window counter"
swarm done API-042 --next "Pick up API-043 next" # update state.md focus
Inspector gate: When the inspector role is enabled, `swarm done` is blocked unless the item has valid proof attached. Workers must use `swarm partial --proof` first, then an inspector agent runs `swarm inspect --pass`. Use `--force` to bypass as a human director.
swarm done API-042 --force # human director override
### swarm partial
Checkpoint progress on a claimed item without marking it done. Updates the item’s in-progress note and refreshes the claim timestamp.
swarm partial API-042 --note "Counter logic done, eviction policy next"
With proof (required when inspector role is enabled):
swarm partial API-042 --proof "branch:feature/rate-limiter commit:abc1234 tests:87/87"
The --proof value is a space-separated list of key:value pairs. Required fields
are validated against the inspector role config (branch and commit by default).
A warning is printed if required fields are missing — the inspector will reject the
item unless they are present.
See Agent Roles → Inspector for the full proof workflow.
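Parsing and validating a proof string can be sketched as follows (a minimal model of the key:value format described above — the real validator lives in the inspector role config):

```python
def parse_proof(proof, required=("branch", "commit")):
    """Split a space-separated key:value proof string and report
    any required fields that are missing."""
    fields = dict(pair.split(":", 1) for pair in proof.split())
    missing = [key for key in required if key not in fields]
    return fields, missing

fields, missing = parse_proof(
    "branch:feature/rate-limiter commit:abc1234 tests:87/87"
)
print(fields["commit"])  # → abc1234
print(missing)           # → []
```

A proof like `"tests:87/87"` alone would report `branch` and `commit` as missing — the warning case where the inspector will reject the item.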
### swarm block
Mark a claimed item as blocked.
swarm block API-042 "Waiting for staging environment credentials from ops"
### swarm unblock
Clear a blocked item back to Open (or back to Claimed if an agent is specified).
swarm unblock API-042 # → OPEN
swarm unblock API-042 --reclaim # → re-CLAIMED by current agent
### swarm audit
Check for drift: stale claims, blocked items, pending items, security scan, and AI-powered code-vs-docs drift check.
swarm audit # Basic: stale claims + blocked items (always shown)
swarm audit --pending # Also list all pending items
swarm audit --security # Add adversarial content scan of .swarm/ files
swarm audit --drift # Add AI code-vs-docs drift check (requires LLM backend)
swarm audit --trail # Verify pheromone trail HMAC signatures
swarm audit --full # All of the above
swarm audit --since 24 # Stale threshold in hours (default: 48)
--security scans .swarm/ markdown files and platform shims (CLAUDE.md, .windsurfrules, .cursorrules) for:
- CRITICAL: Prompt injection, instruction erasure, persona hijacking, jailbreaks, LLM template injection
- HIGH: Non-disclosure directives, hidden instructions, control characters, priority manipulation
- MEDIUM: Hidden HTML/markdown comments, HTML injection, code injection

--drift runs the same AI analysis as the GitHub Actions drift-check workflow, locally. Compares the last 5 commits against .swarm/ state to detect misalignment.
--trail re-verifies every HMAC-SHA256 signature in trail.log. Tampered entries are flagged with the agent fingerprint responsible.
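The verification step can be sketched with the standard library. This is an illustrative model of HMAC-SHA256 trail signing — the exact field layout and canonicalization dot-swarm uses may differ:

```python
import hashlib
import hmac
import json

def verify_entry(line, key):
    """Recompute an entry's HMAC-SHA256 over its canonical JSON body
    and compare against the stored signature field."""
    entry = json.loads(line)
    claimed = entry.pop("signature")
    payload = json.dumps(entry, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

# Build a signed entry the same way, then tamper with it:
key = b"local-signing-key"
entry = {"op": "ai_batch", "agent_id": "cascade"}
sig = hmac.new(key, json.dumps(entry, sort_keys=True).encode(),
               hashlib.sha256).hexdigest()
line = json.dumps({**entry, "signature": sig})

print(verify_entry(line, key))                                # → True
print(verify_entry(line.replace("cascade", "mallory"), key))  # → False
```

Because only the local swarm holds `.signing_key`, a tampered entry fails verification and the responsible fingerprint can be flagged.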
### swarm heal
Full health pass: alignment + security scan + trail verification. Runs everything in sequence and logs all security findings to memory.md (findings are never silently swallowed).
swarm heal # Read-only health check
swarm heal --fix # Quarantine adversarial content + block tampered trail signers
swarm heal --depth 2 # Descend two levels into child divisions (default: 1)
Sections run by swarm heal:

- Security scan — the same checks as swarm audit --security, covering all .swarm/ files and platform shims.
- Trail verification — re-verifies the signatures in trail.log.
- Read-only when --fix is not used.

With --fix:

- Adversarial content is quarantined to .swarm/quarantine/<timestamp>_<file>.bak for human forensic review. Does not auto-delete content — humans must excise injections.
- Tampered trail signers are blocked via .swarm/blocked_peers.json.
- Run swarm heal again after cleaning to confirm resolution.

### swarm handoff
Print a structured handoff note for the current session — what was done, what’s in flight, what’s next. Useful at the end of a work session.
swarm handoff
swarm handoff --format json # machine-readable output
OGP-lite cross-swarm federation — exchange work items and alignment signals between separate .swarm/ hierarchies using signed intent messages. Trust is bilateral and explicit; there is no central registry.
Key design principles (from OGP build learnings):
### swarm federation init
Create the federation/ directory structure inside .swarm/.
swarm federation init
Creates: federation/trusted_peers/, federation/inbox/, federation/outbox/, federation/policy.md, federation/exports.md.
### swarm federation export-id
Print this swarm’s public identity — the file to share with federation peers out-of-band (email, Slack, git).
swarm federation export-id # print to stdout
swarm federation export-id --out id.json # write to file
This is safe to commit or share. The private .signing_key is never exposed.
### swarm federation trust
Import a peer’s identity file and establish bilateral trust.
swarm federation trust peer_identity.json
swarm federation trust peer_identity.json --name "Acme Corp" --scopes "work_request,alignment_signal"
| Option | Default | Description |
|---|---|---|
| `--name NAME` | peer’s swarm ID | Human-readable label |
| `--scopes SCOPES` | `work_request,alignment_signal` | Comma-separated list of permitted intents |
### swarm federation revoke
Remove a peer from trusted peers.
swarm federation revoke <fingerprint>
### swarm federation peers
List all trusted federation peers and their permitted scopes.
swarm federation peers
### swarm federation send
Create a signed outbound intent message in outbox/. Deliver the resulting file to the peer manually (git push, shared directory, email attachment).
swarm federation send <fingerprint> work_request --desc "Need help with OAuth2 token exchange"
swarm federation send <fingerprint> alignment_signal --context "Completed auth module"
swarm federation send <fingerprint> capability_ad --context "Available for API integration work"
Intent types:
| Intent | Effect on peer | Notes |
|---|---|---|
| `work_request` | Adds item to peer’s queue | Requires work_request scope |
| `alignment_signal` | Informational only | No queue change at peer |
| `capability_ad` | Informational only | Advertise what this swarm can do |
### swarm federation inbox
List received messages waiting in inbox/.
swarm federation inbox
### swarm federation apply
Apply a received inbox message to this swarm’s queue after doorman enforcement.
swarm federation apply inbox/20260406T1400Z_work_request_ab123456.json
swarm federation apply inbox/msg.json --yes # skip confirmation prompt
Doorman enforcement sequence (Layer 1 → 2 → 3):

1. The signature is verified against the from_fingerprint in the message body (never the claimed header).
2. Is the sender in trusted_peers/? Does their record permit this intent?
3. Does federation/policy.md allow this intent globally?

A 403-equivalent reason is printed for any failure at any layer.
federation/policy.md is Layer 1 of the scope model. To disable an intent globally:
disabled: work_request
Per-peer scopes in trusted_peers/<fingerprint>.json are Layer 2. Both must pass for a message to be applied.
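The layered admission logic can be sketched in Python. A hypothetical model — the field names and rejection messages are illustrative, not dot-swarm internals:

```python
def doorman(msg, signature_ok, trusted_peers, policy_disabled):
    """Admit a federation message only if every layer passes;
    return a 403-style reason for the first layer that fails."""
    if not signature_ok:
        return "403: signature does not match from_fingerprint"
    peer = trusted_peers.get(msg["from_fingerprint"])
    if peer is None:
        return "403: sender not in trusted_peers/"
    if msg["intent"] not in peer["scopes"]:
        return "403: intent outside this peer's scopes"
    if msg["intent"] in policy_disabled:
        return "403: intent disabled in federation/policy.md"
    return "OK"

peers = {"ab123456": {"scopes": {"work_request", "alignment_signal"}}}
msg = {"from_fingerprint": "ab123456", "intent": "work_request"}
print(doorman(msg, True, peers, policy_disabled=set()))  # → OK
```

Both the per-peer scope record and the global policy must permit the intent — disabling `work_request` in either place rejects the message.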
The current implementation exchanges identity files out-of-band and signs messages with HMAC-SHA256 for local trail integrity. The natural upgrade path to full OGP:
| Now (OGP-lite) | Full OGP |
|---|---|
| Identity via `swarm federation export-id` (manual) | Ed25519 public key, discoverable via DNS `_ogp.example.com` TXT record |
| Transport: git push / shared dir / manual | HTTP/gRPC OGP gateway |
| HMAC-SHA256 local trail signing | Ed25519 asymmetric signatures (peers can verify independently) |
| `trusted_peers/` files | Bilateral gateway trust handshake |
Schedules are stored in .swarm/schedules.md. No daemon required — intended to be called from the system crontab or triggered manually.
### swarm schedule list
swarm schedule list
### swarm schedule add
swarm schedule add '0 */6 * * *' 'swarm heal --fix' # every 6 hours
swarm schedule add '6h' 'swarm audit --security' --name 'Security check'
swarm schedule add 'on:done API-042' 'swarm ai "claim API-043"' # event-driven
Schedule types:
| Type | Spec format | Example |
|---|---|---|
| cron | 5-field cron | `0 9 * * 1` (Mondays 9am) |
| interval | `Nm` / `Nh` / `Nd` | `30m`, `6h`, `2d` |
| on:done | `on:done ITEM-ID` | `on:done API-042` |
| on:blocked | `on:blocked ITEM-ID` | `on:blocked API-042` |
### swarm schedule remove
swarm schedule remove SCHED-001
### swarm schedule run
Manually trigger a specific schedule regardless of due status.
swarm schedule run SCHED-001
### swarm schedule run-due
Run all currently-due cron/interval schedules. Add to system crontab:
# In crontab (crontab -e):
* * * * * cd /path/to/repo && swarm schedule run-due
# Or manually:
swarm schedule run-due
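For interval schedules, the due check is simple elapsed-time arithmetic. An illustrative sketch of the `Nm` / `Nh` / `Nd` spec format — not the actual scheduler code:

```python
from datetime import datetime, timedelta

UNITS = {"m": "minutes", "h": "hours", "d": "days"}

def interval_due(spec, last_run, now):
    """True when an Nm / Nh / Nd interval has elapsed since last_run."""
    delta = timedelta(**{UNITS[spec[-1]]: int(spec[:-1])})
    return now - last_run >= delta

now = datetime(2026, 4, 6, 14, 0)
print(interval_due("6h", now - timedelta(hours=7), now))      # → True
print(interval_due("30m", now - timedelta(minutes=10), now))  # → False
```

Because the check is stateless apart from the last-run timestamp, a once-a-minute crontab entry is enough — no daemon needed.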
Workflows are markdown files in .swarm/workflows/*.md with a YAML frontmatter header. They define multi-step sequences of swarm commands or arbitrary shell commands.
Patterns (inspired by swarms.ai):

- sequential — steps run in order; halts on first failure
- concurrent — all steps run in parallel threads
- conditional — steps have if: guards based on previous step results
- mixture — concurrent with result aggregation (via per-step agent: assignments)

### swarm workflow create
Scaffold a new workflow file.
swarm workflow create rate-limiter-rollout --pattern sequential --trigger "on:done API-041"
swarm workflow create weekly-report --pattern sequential --trigger "0 9 * * 1"
Edit the generated .swarm/workflows/<name>.md:
---
trigger: on:done API-041
pattern: sequential
description: Rate limiter implementation sequence
---
## Steps
1. swarm claim API-042
agent: bedrock
timeout: 30
2. swarm claim API-043
agent: claude
depends: API-042
timeout: 45
if: step1.ok
3. swarm heal --fix
agent: auto
timeout: 5
### swarm workflow list
swarm workflow list

### swarm workflow show
swarm workflow show rate-limiter-rollout

### swarm workflow run
swarm workflow run rate-limiter-rollout # confirm before running
swarm workflow run rate-limiter-rollout --dry-run # show steps, no execution
swarm workflow run rate-limiter-rollout --yes # skip confirmation
### swarm workflow status
Show last run result from .swarm/workflow_runs.jsonl.
swarm workflow status rate-limiter-rollout
### swarm ai
Translate a natural-language instruction into .swarm/ operations using an LLM backend. Previews proposed changes before executing (unless --yes).
swarm ai "mark API-042 as done, merged the rate limiter PR"
swarm ai "what should I work on next?"
swarm ai "add three items for distributed tracing: design, implement, test"
swarm ai "write a memory entry: chose sliding window over token bucket for burst tolerance"
swarm ai "update focus to auth service hardening" --yes
# With a specific backend:
swarm ai "summarise the queue" --via claude
swarm ai "what needs doing?" --via gemini
swarm ai "mark done" --via bedrock # explicit Bedrock (default)
Options: --yes / -y, --agent TEXT, --limit INT (context token budget), --via [bedrock|claude|gemini|opencode], --chain, --max-steps INT
Workflow chaining (--chain):
With --chain, the AI is re-invoked after each successful set of write operations using the refreshed .swarm/ context. This continues until the AI returns no further write ops (work is complete) or --max-steps is reached.
Each batch of chained operations is signed and recorded in trail.log.
swarm ai "run the rate limiter rollout: design, implement, test, deploy" --chain --yes
swarm ai "implement auth hardening, then tracing, then dashboard metrics" --chain --max-steps 9 --yes
swarm ai "process the full pending queue" --chain --max-steps 20 --yes
Use --yes with --chain for fully automated runs; omit it to confirm each step interactively.
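The chain loop described above reduces to a small control structure. A sketch of the flow only, with hypothetical `run_ai` / `apply_ops` callables standing in for the backend and the .swarm/ writer:

```python
def chain(run_ai, apply_ops, max_steps=10):
    """Re-invoke the AI after each successful batch of write ops,
    until it returns none (work complete) or max_steps is reached."""
    steps = 0
    while steps < max_steps:
        ops = run_ai()  # the AI sees the refreshed .swarm/ context each time
        writes = [op for op in ops if op["kind"] == "write"]
        if not writes:
            break       # no further write ops: work is complete
        apply_ops(writes)  # in dot-swarm, each batch is signed into trail.log
        steps += 1
    return steps

batches = iter([[{"kind": "write", "op": "claim"}],
                [{"kind": "write", "op": "done"}],
                []])  # third call returns no write ops
applied = []
steps = chain(lambda: next(batches), applied.extend)
print(steps, len(applied))  # → 2 2
```

The `--max-steps` ceiling is the safety valve: without it a confused model could loop indefinitely on a queue it cannot finish.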
### swarm session
Launch an interactive LLM session in the division root, seeded with .swarm/ context.
swarm session # interactive, auto-detect CLI
swarm session --with claude # prefer Claude Code
swarm session --with gemini # prefer Gemini CLI
swarm session "what should I pick up?" # single non-interactive turn
For Claude Code: CLAUDE.md already loads .swarm/ context automatically.
For gemini / opencode: writes .swarm/CURRENT_SESSION.md context file first.
Agent roles extend multi-agent task mode with structured behaviors. See
Agent Roles for the full conceptual guide. All role state is stored in
.swarm/roles/<name>.json — enabling or disabling a role never modifies queue.md.
### swarm role list
Show all known roles and their current status.
swarm role list
### swarm role enable
Enable a role (or reconfigure it if already enabled).
swarm role enable inspector
swarm role enable inspector --max-iterations 3 --require-proof "branch,commit,tests"
swarm role enable inspector --agent inspector-bot-1
swarm role enable watchdog
swarm role enable supervisor
swarm role enable librarian
| Option | Default | Description |
|---|---|---|
| `--max-iterations N` | 3 | (inspector) Fail count before watchdog escalation |
| `--require-proof FIELDS` | `branch,commit` | (inspector) Comma-separated required proof fields |
| `--agent AGENT_ID` | — | (any) Assign a specific agent ID to this role |
### swarm role disable
Remove a role config (idempotent).
swarm role disable inspector
### swarm role show
Print full configuration for a role.
swarm role show inspector
The swarm inspect command is used by the inspector agent to verify a worker’s
proof-of-work and either pass (mark done) or fail (re-open) a work item.
The inspector role must be enabled first: swarm role enable inspector.
### swarm inspect
swarm inspect API-042 --pass
swarm inspect API-042 --pass --note "Tests pass, memory profile clean"
swarm inspect API-042 --fail --reason "Edge case under burst not handled — see test_rate_limiter.py:98"
| Option | Required | Description |
|---|---|---|
| `--pass` | one of | Accept proof — mark item done, sign in trail |
| `--fail` | one of | Reject proof — reopen item, increment inspect_fails |
| `--reason TEXT` | with --fail | Explanation written back to the item’s notes |
| `--agent TEXT` | no | Inspector agent ID override |
On --pass: item moves to Done, inspector agent ID is recorded, operation signed
in trail.log.
On --fail: item moves back to OPEN, proof: is cleared, inspect_fails is
incremented. If inspect_fails >= max_iterations and watchdog is enabled, an
escalation alert is printed.
See Agent Roles → Inspector for the full workflow diagram.
### swarm spawn
Launch an agent CLI in a named tmux window for a specific work item or role. Requires tmux 3.0+ and the chosen agent CLI on PATH.
# Worker — claim and open
swarm spawn API-042 # opencode worker, auto-claims item
swarm spawn API-042 --agent claude # Claude Code worker
swarm spawn API-042 --agent ollama # local Ollama worker
swarm spawn API-042 --no-claim # open window without claiming
# Role agents
swarm spawn --role inspector # inspector monitor window
swarm spawn --role supervisor # supervisor overview window
swarm spawn --role watchdog # watchdog audit loop
# Session control
swarm spawn API-042 --session my-project # custom tmux session name
swarm spawn API-042 --window-name rate-limiter-fix # custom window name
swarm spawn API-042 --agent-id my-agent-42 # explicit SWARM_AGENT_ID
| Option | Default | Description |
|---|---|---|
| `--agent` | opencode | Agent CLI: opencode, claude, ollama, bedrock |
| `--role` | — | Spawn as a role agent (inspector, supervisor, watchdog) |
| `--session` | swarm | tmux session name (created if absent) |
| `--window-name` | item_id or role | tmux window name |
| `--no-claim` | false | Skip auto-claiming the item |
| `--agent-id` | `spawn-<id>-<ts>` | Override SWARM_AGENT_ID env var |
Environment set in the tmux window:
| Variable | Value |
|---|---|
| SWARM_AGENT_ID | Effective agent ID |
| SWARM_ITEM_ID | Item being worked on (if any) |
| SWARM_ROLE | worker, inspector, supervisor, or watchdog |
Attach / navigate:
tmux attach -t swarm # attach to session
tmux select-window -t swarm:API-042 # switch to window
tmux list-windows -t swarm # list all windows
Supported agents and install:
| Agent | Install |
|---|---|
| opencode | npm install -g opencode-ai |
| claude | Claude Code CLI |
| ollama | brew install ollama |
| bedrock | pip install 'dot-swarm[ai]' + aws configure |
### swarm crawl
Walk the current directory tree to build context. Stops descending into any subdirectory that already has a .swarm/ directory (those are separate divisions).
Results are written to .swarm/context.md under a ## Directory Map section.
Combined with swarm heal, this replaces the need for a separate librarian role agent.
swarm crawl # catalog from cwd, depth 3
swarm crawl --depth 5 # go deeper
swarm crawl --create-items # also create OPEN queue items for each uncatalogued dir
swarm crawl --dry-run # preview without writing anything
| Option | Default | Description |
|---|---|---|
| `--depth N` | 3 | Max directory depth to walk |
| `--create-items` | false | Create OPEN queue items (project: librarian) for each uncatalogued dir |
| `--dry-run` | false | Print what would be cataloged, write nothing |
What gets skipped:
.git/, __pycache__/, node_modules/, .venv/, venv/, dist/, build/
Example output:
Crawled /Users/me/api-service
Swarm divisions found (2) — skipped:
services/auth/
services/payments/
Catalogued (4 dirs):
docs/ — 12 files (8×.md, 3×.png, 1×.svg)
scripts/ — 5 files (5×.sh)
config/ — 3 files (2×.toml, 1×.json)
tests/fixtures/ — 8 files (8×.json)
Written to: .swarm/context.md
Run 'swarm heal' to verify context alignment after cataloging.
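The walk-and-prune behavior can be sketched with `os.walk`. A simplified model under the assumptions above (skip list, stop at nested `.swarm/` divisions) — not the actual crawler:

```python
import os

SKIP = {".git", "__pycache__", "node_modules", ".venv", "venv", "dist", "build"}

def crawl(root, max_depth=3):
    """Catalog directories under root, skipping noise dirs and pruning
    any subtree that is a separate .swarm/ division."""
    catalogued, divisions = [], []
    for dirpath, dirnames, filenames in os.walk(root):
        if dirpath != root and ".swarm" in dirnames:
            divisions.append(dirpath)  # separate division — do not descend
            dirnames[:] = []
            continue
        depth = os.path.relpath(dirpath, root).count(os.sep)
        # In-place edit of dirnames controls which children os.walk visits.
        dirnames[:] = [d for d in dirnames
                       if d not in SKIP and d != ".swarm" and depth < max_depth]
        if dirpath != root:
            catalogued.append((dirpath, len(filenames)))
    return catalogued, divisions
```

The in-place `dirnames[:]` assignment is the key idiom: `os.walk` consults that list to decide where to recurse, so pruning is free.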
### swarm trail
Manage whether .swarm/ is visible or hidden in git. The trail is invisible by default — your swarm state stays private unless you explicitly share it.
swarm trail status # show current visibility + .gitignore path
swarm trail invisible # hide .swarm/ (adds .swarm/ to .gitignore)
swarm trail visible # share .swarm/ (removes .swarm/ from .gitignore)
Why this matters: sharing a git repo also shares the full swarm trail — every
claim, completion, handoff note, and memory entry. invisible is the default so
that decision is always explicit.
After making visible:
swarm trail visible
git add .swarm/
git commit -m "chore: share swarm trail"
Security note: trail invisible only affects git tracking. The .swarm/
files remain fully functional locally; the signing key (.swarm/.signing_key)
and quarantine dir are always excluded by .swarm/.gitignore regardless of
trail visibility.
swarm init defaults to invisible. Pass --visible to opt in at init time:
swarm init --visible # share trail from the start
### swarm configure
Interactive wizard to set your default LLM interface and (if Bedrock) model + region.
swarm configure
Config stored at ~/.config/swarm/config.toml. Credentials are never stored here —
use aws configure or env vars for Bedrock; the respective CLI handles auth for others.
### swarm setup-drift-check
Install the swarm-drift-check.yml GitHub Actions workflow into the current repo. Uses the gh CLI to set secrets if needed.
swarm setup-drift-check # install workflow file only
swarm setup-drift-check --commit # also commit + push
See Drift Check Setup for AWS Bedrock prerequisites.
Work item IDs follow the pattern `<DIVISION-CODE>-<3-digit-number>`:
| Division | Code |
|---|---|
| Org level | ORG |
| api-service | API |
| auth-service | AUTH |
| dashboard | DASH |
| mobile-app | MOB |
| firmware | FW |
| docs | DOC |
| infra | INF |
| homelab | LAB |
| dot_swarm | SWC |
IDs are assigned sequentially and never reused.
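One plausible way to implement sequential, never-reused assignment — an illustrative sketch, not the actual dot-swarm allocator:

```python
import re

def next_id(existing, code):
    """Next <CODE>-NNN id: highest existing number + 1.
    Numbers are never reused, even after items are deleted."""
    nums = [int(m.group(1)) for item_id in existing
            if (m := re.fullmatch(code + r"-(\d+)", item_id))]
    return f"{code}-{max(nums, default=0) + 1:03d}"

print(next_id(["API-001", "API-002", "API-041"], "API"))  # → API-042
print(next_id([], "CLD"))  # → CLD-001
```

Taking max+1 (rather than filling gaps) is what guarantees an ID is never reused once assigned.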
Every swarm init generates a per-swarm HMAC-SHA256 signing identity:
| File | Description | Commit? |
|---|---|---|
| `.swarm/identity.json` | Public fingerprint (swarm ID, algorithm, created) | ✅ Yes |
| `.swarm/.signing_key` | 256-bit private HMAC key | ❌ No (.gitignored) |
| `.swarm/trail.log` | Append-only signed operation log | ❌ No (local only) |
| `.swarm/blocked_peers.json` | Blocked fingerprints | ✅ Optional |
Each swarm ai batch records a signed entry in trail.log:
{"timestamp":"2026-04-06T14:00Z","swarm_id":"a1b2c3d4","fingerprint":"f8e7d6c5","agent_id":"cascade","op":"ai_batch","payload":{"step":1,"ops":["done","write_state"]},"signature":"abc123..."}
To verify the trail: swarm audit --trail or swarm heal.
To block a bad actor: swarm heal --fix (auto-blocks tampered fingerprints).
swarm heal and swarm audit --security scan for 18 patterns across 3 severity levels:
| Severity | Categories |
|---|---|
| CRITICAL | PROMPT_INJECTION, INSTRUCTION_ERASURE, PERSONA_HIJACK, JAILBREAK, LLM_TEMPLATE_INJECTION, SAFETY_OVERRIDE |
| HIGH | NON_DISCLOSURE, CONTROL_CHARACTERS, PRIORITY_OVERRIDE |
| MEDIUM | HIDDEN_HTML_COMMENT, HIDDEN_MD_COMMENT, HTML_INJECTION, CODE_INJECTION |
Files scanned: state.md, queue.md, memory.md, context.md, BOOTSTRAP.md, workflows/*.md, CLAUDE.md, .windsurfrules, .cursorrules, .github/copilot-instructions.md.
For multi-agent frameworks, use the built-in bridge:
from dot_swarm.swarms_provider import DotSwarmStateProvider, StigmergicSwarm
# Inject .swarm/ state into any agent's system prompt
provider = DotSwarmStateProvider(swarm_path="./.swarm")
system_prompt = provider.build_system_prompt(agent_name="Coordinator")
# Read queue state
queue = provider.get_queue() # {"active": [...], "pending": [...], "done": [...]}
# Apply AI response operations back to .swarm/ files
results = provider.apply_operations(llm_response_json, agent_id="my-agent")
# Stigmergic multi-agent coordination (requires: pip install swarms)
from swarms import Agent
swarm = StigmergicSwarm(swarm_path=".", agents=[agent1, agent2], max_rounds=10)
results = swarm.run("Implement the OAuth2 integration")
Agents coordinate indirectly through .swarm/ files (stigmergic protocol) — no direct agent-to-agent message passing required. Full git audit trail maintained automatically.