What is a rig?

A rig is a topology of coding agents working together. To understand what that means, let's build the mental model from the ground up - starting with what you already know.

The Model

At the center of everything is a foundation model. Claude Opus, GPT-5.4, Gemini, Codex - these are the raw reasoning engines. Tokens in, tokens out.

A model by itself can't do anything useful. It has no tools, no memory, no filesystem access. It's like a CPU without an operating system.

[Diagram: tokens in → Model (Claude Opus · GPT-5.4 · Gemini · Codex) → tokens out]

The Agent

Wrap a model in a decision loop - observe, decide, act, repeat - and you get an agent core. That's the reasoning policy that decides what to do next.

But the core still can't act on the real world. It needs tools, file access, permissions, memory, context management. That's the harness runtime - products like Claude Code and Codex CLI.

Together, the harness and the core form an agent. Not separately. The harness isn't an agent. The core isn't an agent. Together they are. In infrastructure terms: the core is the process loop, the harness is the container / OS.

[Diagram: Claude Code (harness runtime · tools, context, memory, lifecycle) wrapping an Agent Core (observe · decide · act · repeat) running Claude Opus. Together, this is an agent.]
AgentSpec

You can define an agent as a YAML file - skills, hooks, guidance, startup commands, lifecycle defaults. That's an AgentSpec. It's the portable blueprint for one agent.

Full AgentSpec reference →
# agent.yaml - one agent's blueprint
name: reviewer
version: "1.0.0"

resources:
  skills:
    - code-review
    - security-audit
  guidance:
    - review-standards.md
  hooks:
    - on: post_review
      run: notify-orchestrator

profiles:
  thorough:
    uses:
      skills: [code-review, security-audit]
  quick:
    uses:
      skills: [code-review]

startup:
  files:
    - source: review-prompt.md
      target: CLAUDE.md
  restore_policy: resume_if_possible

Agents can talk to each other.

Open two terminal tabs. One runs Claude Code, the other runs Codex CLI. Point them at the same repo. Now they can communicate - through the filesystem, through shared files, or through terminal transports like tmux.

A concrete example: have both agents review the same PR. Different models catch different classes of mistakes. You get adversarial review that's better than either agent alone.
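Filesystem transport can be as simple as dropping JSON files in a shared directory that both agents poll. A minimal sketch, assuming an agreed-on inbox location inside the repo (the `.agents/inbox` path and message shape are illustrative, not a real convention):

```python
import json
import time
from pathlib import Path

INBOX = Path(".agents/inbox")  # hypothetical shared location both agents agree on

def send(sender: str, recipient: str, body: str) -> Path:
    """Drop a JSON message where the other agent can poll for it."""
    INBOX.mkdir(parents=True, exist_ok=True)
    msg = {"from": sender, "to": recipient, "body": body, "ts": time.time()}
    # Nanosecond timestamp in the name keeps messages ordered and unique.
    path = INBOX / f"{time.time_ns()}-{recipient}.json"
    path.write_text(json.dumps(msg))
    return path

def receive(recipient: str) -> list[dict]:
    """Read and consume all messages addressed to this agent."""
    messages = []
    for path in sorted(INBOX.glob(f"*-{recipient}.json")):
        messages.append(json.loads(path.read_text()))
        path.unlink()  # consume so the message is not re-read
    return messages
```

Because the transport is just files, it works across any pair of harness runtimes that can read and write the repo.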

[Diagram: Claude Code (Agent Core · Claude Opus) and Codex CLI (Agent Core · GPT-5.4) connected through the filesystem and tmux]

A pod is agents sharing a context domain.

When agents externalize their state into shared memory, they form a pod. Each agent writes what it knows. The others can read it. Communication stays on-topic and contextual because the shared memory is scoped to what the pod is working on.

This creates mental-model HA. When one agent compacts or needs to restart, the others can restore its mental model from shared memory. Every agent in the pod gets effectively unbounded session persistence.

A pod might be a coder + QA + frontend specialist - all working on the same feature, all sharing context, all keeping each other current.

Pods are bounded context groups with continuity responsibility. Continuity policies are scoped to the pod - when one agent compacts, the pod coordinates to preserve shared knowledge. Session namespaces follow the format: {pod}-{member}@{rig}.
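Pod-scoped shared memory and the namespace format can be sketched as a small key-value store. This is an assumption-laden illustration: the `.pod-memory` directory layout and `PodMemory` class are hypothetical, only the `{pod}-{member}@{rig}` namespace format comes from the text above.

```python
import json
from pathlib import Path

class PodMemory:
    """Shared memory scoped to one pod; each member writes under its own key."""

    def __init__(self, pod: str, rig: str, root: Path = Path(".pod-memory")):
        self.pod, self.rig = pod, rig
        self.root = root / f"{pod}@{rig}"  # illustrative on-disk layout
        self.root.mkdir(parents=True, exist_ok=True)

    def namespace(self, member: str) -> str:
        # Session namespaces follow {pod}-{member}@{rig}.
        return f"{self.pod}-{member}@{self.rig}"

    def write(self, member: str, state: dict) -> None:
        """A member externalizes what it knows for the rest of the pod."""
        (self.root / f"{member}.json").write_text(json.dumps(state))

    def restore(self, member: str) -> dict:
        """After a compaction or restart, rebuild a member's mental model."""
        path = self.root / f"{member}.json"
        return json.loads(path.read_text()) if path.exists() else {}
```

The key property is that restore reads from the pod's shared store, not from the restarted agent's lost context, which is what makes the HA claim work.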

[Diagram: Development Pod - Coder (Claude Code · Claude Opus), QA (Codex CLI · GPT-5.4), and Frontend (Claude Code · Claude Sonnet), all connected through Shared Memory / Mental-Model HA]

A rig is a topology of pods.

Combine pods and you get a rig - a complete multi-agent topology. An implementation pod, a review pod, a research pod, an orchestrator pod. Each pod is a context domain with scoped communication.

The orchestrator is the human's interface. One conversation thread, not fifteen. It runs as an HA pair so you always have continuity. Everything goes through the orchestrator - it ensures all members work in alignment with your vision.

Communication paths between pods are intentional. It's separation of concerns - the orchestrator facilitates, pods stay focused on their domain.

Real agent teams are not linear pipelines. They are topologies with distinct edge kinds: an orchestrator delegates to workers, QA can observe implementation, reviewers collaborate with each other, workers escalate to the orchestrator. A rig captures this declaratively. In infrastructure terms: a rig is Docker Compose / Terraform for agents.

# Edge kinds in a RigSpec
edges:
  - kind: delegates_to         # orchestrator → worker
    from: orch.lead
    to: dev.impl
  - kind: can_observe          # QA watches implementation
    from: dev.qa
    to: dev.impl
  - kind: collaborates_with    # peer review
    from: review.r1
    to: review.r2
  - kind: escalates_to         # worker → orchestrator
    from: dev.impl
    to: orch.lead
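Because the topology is declarative, it can be checked before anything boots. A sketch of edge validation, assuming the spec has already been parsed into plain dicts (the `validate_edges` helper is hypothetical; the edge kinds and `pod.member` addressing mirror the example above):

```python
# The four edge kinds from the RigSpec example above.
EDGE_KINDS = {"delegates_to", "can_observe", "collaborates_with", "escalates_to"}

def validate_edges(edges: list[dict], members: set[str]) -> list[str]:
    """Return a list of problems; an empty list means the topology is well-formed."""
    problems = []
    for edge in edges:
        if edge["kind"] not in EDGE_KINDS:
            problems.append(f"unknown edge kind: {edge['kind']}")
        for endpoint in (edge["from"], edge["to"]):
            if endpoint not in members:
                problems.append(f"edge references undeclared member: {endpoint}")
    return problems
```

Running this at import time, before booting agents, turns a misdrawn topology into a spec error instead of a runtime surprise.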
[Diagram: Full Development Topology - an Orchestration Pod (HA pair: Lead on Claude Code · Opus, Backup on Codex CLI · GPT-5.4) coordinating a Development Pod (Coder, QA, Frontend), a Review Pod (Reviewer 1, Reviewer 2), and a Research Pod (Claude, Codex), each with its own shared memory]
The infrastructure analogy
0 · Model → CPU
1 · Agent Core → Process loop
2 · Harness → Container / OS
3 · Rig → Docker Compose / Terraform

A rig is just a YAML file. Share it.

A RigSpec defines the topology - pods, agents, edges, continuity policies. Each agent in the rig is referenced as an AgentSpec. Both are YAML files you can version-control and share.

A RigBundle goes further: it bundles the RigSpec plus all the actual files it references - skills, hooks, guidance, startup commands - into a portable archive. Send it to a teammate, they import it, boot it, and they're running the same topology.

Popular harness frameworks like G-Stack or GSD are just collections of files. You can distribute them as rig bundles too - turn any framework into a multi-agent topology and share it.

AgentSpec (agent.yaml)

One agent's blueprint - skills, guidance, hooks, profiles, startup behavior, lifecycle. Portable across harness runtimes.

RigSpec (rig.yaml)

The topology - pods, agents, edges, culture file, layered startup (7 layers), continuity policies.

RigBundle (.rigbundle)

RigSpec + vendored AgentSpecs + SHA-256 integrity checksums. Portable archive - share and boot on any machine.
What do you do with a rig?

Run 40 agents on a Mac Mini while you manage four orchestrator threads from your phone. Each orchestrator gives you a single conversation that controls an entire pod.

Adversarial review

Two agents from different pods review the same PR. Different models catch different mistakes. Better than either alone.

Architectural round-tables

Pull one agent from each pod - implementation, research, review - into a tiger team. Cross-pollinate domain context for better decisions.

Simulated focus groups

One agent demos your product. Another plays the customer. A third watches and writes a dossier. Run it 20 times. Hill-climb.

Recursive self-improvement

A QA pod dog-foods your software, files issues, and hands them to an architecture team that decides how to fix them. Autonomous improvement loops.