Architecture

OpenRig is a local control plane built as a three-package monorepo: daemon, CLI, and UI. The daemon manages state, the CLI drives it, and the UI observes it.

System Stack

CLI / UI / MCP
      |
      v
Hono daemon routes
      |
      +-- dual-format route adapters
      |     - legacy v1 rigs / package bundles
      |     - rebooted v0.2 rigs / v2 pod bundles
      |
      v
Framework-free domain services
      |
      +-- SQLite state
      +-- tmux / cmux / resume adapters
      +-- runtime adapters (Claude Code / Codex)

Three packages: @openrig/daemon, @openrig/cli, @openrig/ui. The daemon is a Hono HTTP server with SQLite persistence. The CLI is the primary interface for both humans and agents. The UI is a React application for monitoring.

tmux: The Session Substrate

Every agent session in OpenRig is a tmux session. This is the foundational architectural decision — not an implementation detail. tmux provides the universal control surface that makes everything else possible.

Identity

The session naming convention {pod}-{member}@{rig} gives every agent a human-readable address. rig whoami resolves identity from tmux pane metadata when all else fails.
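A minimal sketch of how that address format could be parsed. The function and type names here are illustrative, not the actual OpenRig API:

```typescript
// Hypothetical parser for the {pod}-{member}@{rig} session name format.
interface AgentAddress {
  pod: string;
  member: string;
  rig: string;
}

function parseAddress(session: string): AgentAddress | null {
  // Split on the last "@" so the rig name is unambiguous, then on the
  // first "-" so member names may themselves contain hyphens.
  const at = session.lastIndexOf("@");
  if (at < 0) return null;
  const rig = session.slice(at + 1);
  const left = session.slice(0, at);
  const dash = left.indexOf("-");
  if (dash < 0) return null;
  return { pod: left.slice(0, dash), member: left.slice(dash + 1), rig };
}
```

For example, parseAddress("alpha-reviewer@myrig") yields pod "alpha", member "reviewer", rig "myrig".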

Communication

rig send injects text via tmux send-keys. rig capture reads pane output via capture-pane. No custom IPC, no message queues — the terminal is the protocol.
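One way these commands could shell out to tmux, sketched as pure argv builders (the builders and defaults are assumptions; OpenRig's real adapter layer may differ):

```typescript
// Illustrative argv for rig send: inject text into a pane via send-keys.
function sendKeysArgv(target: string, text: string): string[] {
  // The trailing "Enter" submits the injected text in the agent's TUI.
  return ["tmux", "send-keys", "-t", target, text, "Enter"];
}

// Illustrative argv for rig capture: read pane output via capture-pane.
function capturePaneArgv(target: string, lines = 200): string[] {
  // -p prints to stdout; -S sets how far back into scrollback to read.
  return ["tmux", "capture-pane", "-p", "-t", target, "-S", `-${lines}`];
}
```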

Observability

tmux pipe-pane captures entire session transcripts to disk. rig transcript and rig ask query this evidence store. Every agent action is auditable without the agent opting in.

Discovery

rig discover scans tmux sessions to find unmanaged agents. rig adopt brings them under management. Agents don't need to know OpenRig exists.
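The core of discovery reduces to a set difference between what tmux reports and what OpenRig already tracks. A sketch, with hypothetical names:

```typescript
// Illustrative rig discover filter: any tmux session OpenRig does not
// already manage is a candidate for rig adopt.
function findUnmanaged(tmuxSessions: string[], managed: Set<string>): string[] {
  return tmuxSessions.filter((s) => !managed.has(s));
}
```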

Because every terminal-based coding agent already runs in a terminal, OpenRig can manage Claude Code, Codex CLI, or any future agent without requiring integration, plugins, or agent-side changes. The agent is unmodified. tmux is the interface.

Packages

@openrig/daemon
109 source files
  • 65 domain files
  • 14 route files
  • 9 adapters
  • 15 migrations
@openrig/cli
23 source files
  • Command handlers
  • MCP server
  • Output formatting
@openrig/ui
53 source files
  • Dashboard
  • Topology graph view
  • Import/bootstrap wizards
  • Bundle inspector

HTTP Routes

The daemon mounts the following route groups via createApp():

GET /healthz
/api/rigs
/api/rigs/:rigId/sessions
/api/rigs/:rigId/nodes
/api/rigs/:rigId/snapshots
/api/rigs/:rigId/restore
/api/rigs/:rigId/spec
/api/rigs/:rigId/spec.json
/api/rigs/import
/api/adapters
/api/events
/api/packages
/api/agents
/api/bootstrap
/api/discovery
/api/bundles
/api/ps
/api/up
/api/down
Dual-Format Behavior
  • /api/rigs/import, /validate, /preflight are dual-format (legacy + pod-aware)
  • Pod-aware import and preflight require X-Rig-Root header
  • /api/bundles/* routes detect schema version automatically
  • /api/up accepts both direct rig specs and v2 bundles
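The version-based routing behind the dual-format adapters might look like the following sketch. The real adapters likely inspect more than one field; only the "0.2" version string comes from this document:

```typescript
// Hypothetical dual-format dispatch for route adapters.
type RigFormat = "legacy-v1" | "pod-aware-v0.2";

function detectFormat(spec: { version?: string }): RigFormat {
  // Rebooted specs declare version "0.2"; anything else falls back to legacy.
  return spec.version === "0.2" ? "pod-aware-v0.2" : "legacy-v1";
}
```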

Domain Services

All domain services live under packages/daemon/src/domain/. Architecture rule: zero Hono imports in domain code. Routes depend on the domain; the domain never depends on routes.

Parsing and Validation

agent-manifest.ts
AgentSpec parse/normalize/validate
rigspec-schema.ts
Dual-format RigSpec validation
rigspec-codec.ts
Dual-format YAML codec
startup-validation.ts
Shared startup block validation

Resolution Pipeline

agent-resolver.ts
Resolves agent_ref, imports, collisions
profile-resolver.ts
Profile selection, startup layering, restore-policy narrowing
projection-planner.ts
Runtime resource projection planning
startup-resolver.ts
Additive startup layering

Startup and Runtime

runtime-adapter.ts
Adapter contract and bridge types
startup-orchestrator.ts
Startup projection, delivery, actions, readiness
rigspec-instantiator.ts
Dual-stack instantiation
rigspec-exporter.ts
Live rig export to YAML/JSON

Snapshot and Restore

snapshot-capture.ts
Captures pods, continuity state, startup context
restore-orchestrator.ts
Resume, checkpoint delivery, startup replay
checkpoint-store.ts
Per-agent checkpoint persistence
pod-repository.ts
Pod CRUD and continuity state

Runtime Adapter Contract

The RuntimeAdapter interface is the four-method contract between OpenRig and a harness runtime:

interface RuntimeAdapter {
  listInstalled(binding): InstalledResource[]
  project(plan, binding): ProjectionResult
  deliverStartup(files, binding): DeliveryResult
  checkReady(binding): ReadinessResult
}
ClaudeCodeAdapter

Projects to .claude/, merges into CLAUDE.md

CodexRuntimeAdapter

Projects to .agents/, merges into AGENTS.md

TerminalAdapter

No-op project/deliver, immediate readiness
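As a concrete instance of the four-method contract, the TerminalAdapter's behavior can be sketched as below. The result shapes are assumptions, not the real OpenRig types:

```typescript
// Illustrative no-op adapter satisfying the RuntimeAdapter contract.
interface Binding { sessionName: string }
interface InstalledResource { name: string }
interface ProjectionResult { written: string[] }
interface DeliveryResult { delivered: string[] }
interface ReadinessResult { ready: boolean }

const terminalAdapter = {
  listInstalled(_binding: Binding): InstalledResource[] {
    return []; // a bare terminal has no harness-managed resources
  },
  project(_plan: unknown, _binding: Binding): ProjectionResult {
    return { written: [] }; // nothing to project to the filesystem
  },
  deliverStartup(_files: unknown[], _binding: Binding): DeliveryResult {
    return { delivered: [] }; // nothing to deliver
  },
  checkReady(_binding: Binding): ReadinessResult {
    return { ready: true }; // a terminal is ready the moment it exists
  },
};
```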

Database Schema

SQLite with 15 migrations. Core state tables:

rigs
Top-level topology container
nodes
Logical node identity inside a rig. Pod-aware columns: pod_id, agent_ref, profile, label
edges
Logical topology relationships
bindings
Physical surface attachment: tmux/cmux coordinates
sessions
Live execution state with startup_status
events
Append-only event log
snapshots
Serialized rig state
checkpoints
Per-node recovery state with pod/continuity context
pods
Pod record with label, summary, continuity policy
continuity_state
Live per-pod/node operational continuity: healthy, degraded, restoring
node_startup_context
Persisted startup replay context for restore

Execution Flows

Import / Validate / Preflight

Validate: pod-aware uses RigSpecSchema.validate; legacy uses LegacyRigSpecSchema.validate
Preflight: pod-aware runs rigPreflight() with fsOps; legacy uses RigSpecPreflight.check()
Import: pod-aware uses PodRigInstantiator; legacy uses RigInstantiator
Export: pod-aware exports version "0.2" YAML; legacy exports flat-node v1

Startup Sequence

1. Mark pending, emit event
2. Project resources (filesystem)
3. Deliver pre-launch files (guidance, skills)
4. Launch harness via adapter
5. Wait for harness readiness (retry with backoff)
6. Deliver interactive files (send_text through TUI)
7. Execute after_files actions
8. Execute after_ready actions
9. Persist startup context + resume token
10. Mark ready, emit event
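The control flow of the sequence above can be condensed into a skeleton like this. The status values (pending/ready/failed) come from this document; everything else is illustrative:

```typescript
// Sketch of the startup orchestrator's status transitions: mark pending,
// run the ordered steps, and land on an explicit ready or failed state.
type StartupStatus = "pending" | "ready" | "failed";

async function runStartup(steps: Array<() => Promise<void>>): Promise<StartupStatus> {
  let status: StartupStatus = "pending"; // step 1: visible before side effects
  try {
    for (const step of steps) await step(); // steps 2-9, in document order
    status = "ready"; // step 10
  } catch {
    status = "failed"; // any failing step leaves an explicit failed state
  }
  return status;
}
```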

Snapshot / Restore

  • Reads newest session by monotonic ULID, not timestamp alone
  • Consults live continuity_state before replaying startup
  • Preserves state when a node is already restoring
  • Replays restore-safe startup using persisted context
  • Prefilters missing optional artifacts into warnings
  • Hard-fails a node if a required startup file is missing
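Two of the guards above can be sketched directly; the state names come from this document, while the function shapes are hypothetical:

```typescript
// Guard 1: preserve an in-flight restore instead of stacking a second
// startup replay on top of it.
type ContinuityState = "healthy" | "degraded" | "restoring";

function shouldReplayStartup(state: ContinuityState): boolean {
  return state !== "restoring";
}

// Guard 2: prefilter missing optional artifacts into warnings, and
// hard-fail (error) only when a required startup file is missing.
function classifyArtifacts(
  files: Array<{ path: string; required: boolean; exists: boolean }>,
): { warnings: string[]; errors: string[] } {
  const warnings: string[] = [];
  const errors: string[] = [];
  for (const f of files) {
    if (f.exists) continue;
    (f.required ? errors : warnings).push(f.path);
  }
  return { warnings, errors };
}
```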

Architecture Rules

1. Zero Hono in domain/ and adapters/.
2. Routes depend on the domain; the domain never depends on routes.
3. Shared DB-handle invariants are enforced at construction time.
4. Runtime is member-authoritative in the pod-aware model.
5. Startup layering is additive and ordered (7 layers).
6. Restore-policy narrowing is one-way only.
7. Base/import collisions warn; ambiguous import/import refs fail loudly.
8. Bundle assembly uses containment checks rooted in the owning artifact.
9. Restore replay uses classification-free projection intent.
10. Startup status is explicit session state: pending, ready, failed.
11. Session recency depends on monotonic ULIDs.
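The additive-and-ordered layering rule can be sketched as a merge that only ever appends. The document says only that there are seven ordered layers; the merge semantics shown (first occurrence wins, nothing removed) are an assumption:

```typescript
// Illustrative additive layer merge: later layers can add items but can
// never remove or reorder what earlier layers contributed.
function mergeStartupLayers(layers: string[][]): string[] {
  const seen = new Set<string>();
  const out: string[] = [];
  for (const layer of layers) {
    for (const item of layer) {
      if (!seen.has(item)) {
        seen.add(item);
        out.push(item);
      }
    }
  }
  return out;
}
```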

Event System

Append-only, SQLite-backed. The RigEvent union includes:

Emitted in Production
node.startup_pending
node.startup_ready
node.startup_failed
Defined, Not Yet Emitted
pod.created
pod.deleted
continuity.sync
continuity.degraded
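A plausible shape for the production-emitted part of that union, as a discriminated union over an append-only log. Only the event type strings come from this document; the payload fields are assumptions:

```typescript
// Illustrative RigEvent union and append-only log.
type RigEvent =
  | { type: "node.startup_pending"; nodeId: string }
  | { type: "node.startup_ready"; nodeId: string }
  | { type: "node.startup_failed"; nodeId: string; reason: string };

const log: RigEvent[] = [];

function emit(event: RigEvent): void {
  log.push(event); // append-only: events are never updated or deleted
}
```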

Test Coverage

daemon: 1,153 tests (90 test files)
cli: 168 tests (17 test files)
ui: 237 tests (20 test files)

Total: 127 Vitest files, 1,558 tests. All suites passing.