OpenRig vs CrewAI

CrewAI and OpenRig started from different assumptions about what an agent even is. That difference propagates into everything else — what you install, how you start, how things coordinate, who operates the thing. The comparison is useful because those starting assumptions produce different shapes, and seeing both clarifies which category each tool actually belongs to.

Why I publish comparisons. This category is new. The vocabulary isn't settled. Nobody has a complete mental model of what's in the space, including me. The comparisons here are an attempt to find the shape of the category — what approaches are being taken, what each approach looks like, what we might learn from each other. Not a bake-off.

Two different assumptions about "agent"

CrewAI started from: what if agents could be composed from Python primitives — an LLM API call, a role, a set of tools — and orchestrated in code? The answer is a Python framework. You define Agent objects with roles and goals, compose them into a Crew, call kickoff(), and the framework runs a Process (sequential or hierarchical) that executes Tasks against LLM API calls. The agent is code you wrote.
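To make that shape concrete, here is a toy sketch of the pattern in plain Python. This is a stand-in, not the real crewai package: the class names mirror CrewAI's vocabulary, but each agent's LLM call is replaced by an ordinary function so the sequential-process mechanics are visible.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    role: str
    goal: str
    # Stand-in for the LLM API call a real agent would make.
    respond: Callable[[str], str]

@dataclass
class Task:
    description: str
    agent: Agent

@dataclass
class Crew:
    agents: list
    tasks: list

    def kickoff(self):
        # Sequential process: run tasks in order, feeding each task's
        # output into the next task's prompt as context.
        outputs, context = [], ""
        for task in self.tasks:
            prompt = f"{task.description}\n{context}".strip()
            result = task.agent.respond(prompt)
            outputs.append(result)
            context = result
        return outputs

researcher = Agent(role="Researcher", goal="Find facts",
                   respond=lambda p: "fact: water boils at 100 C")
writer = Agent(role="Writer", goal="Summarize",
               respond=lambda p: "summary of -> " + p.splitlines()[-1])

crew = Crew(agents=[researcher, writer],
            tasks=[Task("research boiling", researcher),
                   Task("write a summary", writer)])
print(crew.kickoff())
```

The point is the shape, not the stub logic: the agent is an object you construct, the orchestration is a loop you (or the framework) own, and kickoff() is a program that runs to completion and returns.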

OpenRig started from a different question: what if Claude Code and Codex are already extraordinarily capable, and what I actually want is to run several of them together as a coordinated team? The answer is a topology-first system. OpenRig doesn't build agents and doesn't wrap LLMs — it runs Claude Code and Codex as they ship, unchanged, arranged into a rig you define in YAML. The agent is a shipped product.

Neither answer is obviously right. They're different bets about where the leverage is. CrewAI bets that the interesting design work is in the programmatic orchestration layer. OpenRig bets that the interesting design work is in the topology of agents that already exist and already work.

Side-by-side

| Axis | OpenRig | CrewAI |
| --- | --- | --- |
| What the primitive is | The rig — a topology of running shipped harnesses. | The Agent — a Python object with a role, goals, and a set of tools. |
| What runs as a node | Claude Code and Codex as they ship. Terminal processes too. | Your Python code, plus LLM API calls (OpenAI, Anthropic, Ollama, etc.) that the code makes. |
| How you start | rig up demo. A shipped spec gives you 8 agents in one command. | pip install crewai. Define Agents, Tasks, a Crew; call kickoff(). |
| How agents coordinate | Directly through tmux (rig send / broadcast / chatroom). Rig queue for durable cross-pod signaling, shipping soon. | Crew with Process (sequential or hierarchical) or Flow with @listen / @router / @start decorators. |
| Interactivity | tmux-attachable. You can read any agent's screen and talk to it live. | Non-interactive. The crew runs; you read the output. |
| Who operates it | A developer at the terminal. | A Python application's code path. |
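The Flow row describes decorator-driven chaining. Here is a toy illustration of that pattern in plain Python (a stand-in, not crewai's actual Flow implementation): methods are marked with @start() and @listen(step), and a runner walks the resulting graph, passing each step's output to its listeners.

```python
def start():
    # Marks a method as an entry point of the flow.
    def mark(fn):
        fn._starts = True
        return fn
    return mark

def listen(upstream):
    # Marks a method as running after `upstream`, receiving its output.
    def mark(fn):
        fn._listens_to = upstream.__name__
        return fn
    return mark

class MiniFlow:
    def kickoff(self):
        # Collect methods marked by the decorators above.
        steps = {}
        for name in dir(type(self)):
            fn = getattr(type(self), name)
            if hasattr(fn, "_starts") or hasattr(fn, "_listens_to"):
                steps[name] = fn
        # Run start steps, then anything listening to a finished step.
        results = {}
        frontier = [n for n, f in steps.items() if getattr(f, "_starts", False)]
        while frontier:
            name = frontier.pop(0)
            fn = steps[name]
            upstream = getattr(fn, "_listens_to", None)
            results[name] = fn(self) if upstream is None else fn(self, results[upstream])
            frontier += [n for n, f in steps.items()
                         if getattr(f, "_listens_to", None) == name]
        return results

class Pipeline(MiniFlow):
    @start()
    def fetch(self):
        return "raw data"

    @listen(fetch)
    def summarize(self, data):
        return f"summary({data})"

print(Pipeline().kickoff())
```

Again, the orchestration graph lives in application code: the flow is a class you ship, and running it is a function call inside your program.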

What it looks like when the primitive is a shipped harness

Because an OpenRig node is a real Claude Code or Codex session — not a Python object making API calls — a few things fall out:

  • You get all of Claude Code. All the tooling, all the reasoning, all the shipped product capabilities. You don't re-implement any of it. The same goes for Codex. You use these agents the way you already do, because they are the same agents.
  • Agents are addressable, not runnable. A rig doesn't have a kickoff(). The agents are running; you attach to them, talk to them, ask them for help. You don't start a program and wait for it to return.
  • State lives in the session, not in your code. Each agent's context is inside its Claude Code or Codex process, where Claude or Codex already manages it. The rig doesn't serialize that state through Python objects. It snapshots the topology and restores the sessions by name.
  • The topology is operator infrastructure, not application code. A rig is a YAML file you keep next to your project, not a Python class in your application. You don't import a rig; you boot it.
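For a sense of what "operator infrastructure in YAML" means, here is an illustrative rig spec. The field names below are guesses for illustration only; the actual OpenRig schema isn't shown in this comparison. The idea it sketches is the one above: the file names the shipped sessions and their arrangement, and booting the rig starts them.

```yaml
# rig.yaml -- hypothetical field names, for illustration only.
name: demo
agents:
  - name: lead
    harness: claude-code   # a shipped Claude Code session, unchanged
  - name: reviewer
    harness: codex         # a shipped Codex session, unchanged
  - name: logs
    harness: terminal      # a plain terminal process as a node
```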

The tradeoff is real. OpenRig is narrow by design — it runs the coding agents that ship with TUIs, it runs them on your machine, and it treats them as first-class. If you're building a Python application that needs to orchestrate agent behavior as part of its own logic — not a tool you run, but a system you ship to end users — you're looking for the other shape, and CrewAI is well-designed for it.

Other comparisons

Compare other approaches

Quickly build a mental model for multi-agent harnesses and topologies by comparing the leading approaches, including more traditional multi-agent systems.

Try OpenRig

If running the shipped coding agents together as a team is the shape you're looking for, start here.