Your laptop, desktop, and EC2 — all running Claude Code, Codex, and Aider in parallel on the same repo. One command plans the work. Every machine takes a mission. GitHub is the only message bus.
Watch the full flow: your laptop becomes the commander, missions fan out through GitHub, ships work in parallel, branches converge. Scrub or step through to inspect any phase.
Three machines. Zero coordination.
Your laptop is on Claude Code. Your desktop is idle. Your EC2 is still running from last sprint.
Live mission board
Watch the fleet from a single terminal.
fleet status — three ships, three branches, real-time heartbeats. Color-coded so a glance tells you who's working, who's stalled, who's ready to merge.
Most AI coding agents YOLO straight to a commit. FleetSpark's drsti-dev-flow plugin enforces three phases on every mission (spec, implementation, peer review), gated by maturity level. Risky changes get stricter scrutiny; trivial ones breeze through.
FleetSpark dogfoods this flow on its own codebase.
Spec, implementation, and review live on separate branches. Each phase is independently auditable.
One command, governed by default
fleet run --template drsti-dev-flow wires the whole flow. Or opt out and run lean.
$ fleet run --template drsti-dev-flow
Spark execution
The bits that turn three machines into one fleet.
Parallel dispatch, shadow re-runs when a ship goes dark, and fleet-wide context shared in seconds — not at the speed of "let me catch you up on Slack."
DISPATCH
DAG dispatch
The commander plans your goal as a dependency graph and pushes branches in topological order. Independent missions ship in parallel — automatically.
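The wave-by-wave order can be sketched with Kahn's algorithm; the `Mission` shape and mission names below are illustrative, not FleetSpark's actual plan format:

```typescript
// Hypothetical sketch of DAG dispatch. Missions whose dependencies are
// all done form a wave; each wave can ship to the fleet in parallel.
type Mission = { id: string; deps: string[] };

function dispatchWaves(missions: Mission[]): string[][] {
  const done = new Set<string>();
  const pending = [...missions];
  const waves: string[][] = [];
  while (pending.length > 0) {
    const ready = pending.filter(m => m.deps.every(d => done.has(d)));
    if (ready.length === 0) throw new Error("dependency cycle in plan");
    waves.push(ready.map(m => m.id));
    ready.forEach(m => done.add(m.id));
    for (const m of ready) pending.splice(pending.indexOf(m), 1);
  }
  return waves;
}

const plan: Mission[] = [
  { id: "api-schema", deps: [] },
  { id: "backend", deps: ["api-schema"] },
  { id: "frontend", deps: ["api-schema"] },
  { id: "e2e-tests", deps: ["backend", "frontend"] },
];
// → [["api-schema"], ["backend", "frontend"], ["e2e-tests"]]
console.log(dispatchWaves(plan));
```

Here `backend` and `frontend` land in the same wave, so two ships work them simultaneously while `e2e-tests` waits for both.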
SHADOW
Shadow dispatch
A ship missed three heartbeats. Without waking you up, the commander reassigns the same mission to a healthy ship — first one to PR wins. No babysitting.
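A minimal sketch of that reassignment rule, assuming a 30-second heartbeat and a three-miss threshold (both values, and the `Ship` shape, are illustrative):

```typescript
// Hypothetical sketch of shadow dispatch: a ship that has missed three
// heartbeats gets its mission re-offered to a healthy, idle ship.
type Ship = { name: string; lastHeartbeat: number; mission?: string };

const HEARTBEAT_MS = 30_000; // assumed interval
const MISSED_BEATS = 3;

function shadowReassign(ships: Ship[], now: number): string[] {
  const log: string[] = [];
  const threshold = HEARTBEAT_MS * MISSED_BEATS;
  const healthy = ships.filter(s => now - s.lastHeartbeat < threshold);
  for (const s of ships) {
    const dark = now - s.lastHeartbeat >= threshold;
    if (dark && s.mission) {
      const backup = healthy.find(h => !h.mission);
      if (backup) {
        backup.mission = s.mission; // same mission on two ships: first PR wins
        log.push(`${s.mission}: ${s.name} -> ${backup.name}`);
      }
    }
  }
  return log;
}

const now = Date.now();
const fleet: Ship[] = [
  { name: "laptop", lastHeartbeat: now - 5_000, mission: "backend" },
  { name: "desktop", lastHeartbeat: now - 1_000 },
  { name: "ec2", lastHeartbeat: now - 120_000, mission: "frontend" },
];
console.log(shadowReassign(fleet, now)); // ec2 went dark; desktop shadows "frontend"
```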
CONTEXT
Fleet brief
One mission generates context (an API schema, a migration plan). The brief is pushed once and every other ship picks it up before its next step. Map-reduce, but for context.
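One way to picture the brief, with a plain object standing in for the GitHub-backed state (all names here are hypothetical):

```typescript
// Hypothetical sketch of a fleet brief: one ship publishes context once,
// every other ship folds the whole brief into its next step's context.
type Brief = Record<string, string>;

const fleetBrief: Brief = {};

// A single mission pushes what it learned...
function publish(brief: Brief, key: string, value: string): void {
  brief[key] = value;
}

// ...and each ship reads the accumulated brief before its next step.
function contextFor(brief: Brief, shipName: string): string {
  const lines = Object.entries(brief).map(([k, v]) => `${k}: ${v}`);
  return [`ship=${shipName}`, ...lines].join("\n");
}

publish(fleetBrief, "api-schema", "POST /missions {id, deps[]}");
console.log(contextFor(fleetBrief, "desktop"));
```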
Agent ecosystem
Bring your own agent. Fleet doesn't care.
8 adapters ship out of the box. Mix Claude Code on your laptop with Codex on a desktop and Aider on a spot instance — all on the same plan, coordinated through GitHub.
🅒
Claude Code
@fleetspark/adapter-claude
shipped
Anthropic's CLI agent. The default for most fleet runs — battle-tested across hundreds of missions.
○
OpenAI Codex
@fleetspark/adapter-codex
shipped
OpenAI's terminal agent. Suited to reasoning-heavy missions; runs alongside Claude Code on the same plan.
◇
Aider
@fleetspark/adapter-aider
shipped
The original local AI pair-programmer. Ideal for surgical edits and quick refactors on spot instances.
◆
OpenCode
@fleetspark/adapter-opencode
shipped
Open-source CLI agent. Run completely offline on your own hardware — no API keys required.
✦
Gemini CLI
@fleetspark/adapter-gemini
shipped
Google's Gemini CLI. Pair with Claude Code for multi-model redundancy across long-running runs.
⌥
Cursor
@fleetspark/adapter-cursor
shipped
Cursor in headless mode. Leverage its codebase-aware context engine inside fleet missions.
⚡
Amp
@fleetspark/adapter-amp
shipped
Sourcegraph Amp. Deep symbol navigation across large codebases — strong on refactor missions.
⌬
A2A / Custom
@fleetspark/adapter-a2a
shipped
Any A2A-compatible agent. Write a 50-line adapter and ship your own agent into the fleet.
+
Your adapter here
≈50 lines — see /adapters/
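To make the shape concrete, here is a toy sketch of what such an adapter could look like. The `AgentAdapter` interface is an assumption for illustration, not the real contract from /adapters/:

```typescript
// Hypothetical adapter contract: a real adapter would shell out to your
// agent's CLI; this toy one just reports what it would do.
interface MissionSpec {
  branch: string;
  prompt: string;
}

interface AgentAdapter {
  name: string;
  run(mission: MissionSpec): Promise<{ ok: boolean; summary: string }>;
}

const echoAdapter: AgentAdapter = {
  name: "echo",
  async run(mission) {
    // In a real adapter: spawn the agent CLI on mission.branch with
    // mission.prompt, then report whether it produced a commit.
    return { ok: true, summary: `would run agent on ${mission.branch}` };
  },
};

echoAdapter
  .run({ branch: "feat/login", prompt: "add login form" })
  .then(r => console.log(r.summary));
```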
Why fleet
Every AI coding tool assumes one developer, one machine, one session.
Fleet is the only thing that turns the agents you already use into a coordinated swarm. The same Claude Code, Codex, and Aider — just multiplied.
| Tool | Multi-machine | Multi-agent | Auto-planning | Auto-merge | Failover |
| --- | --- | --- | --- | --- | --- |
| Claude Code | · | · | · | · | · |
| Codex CLI | · | · | · | · | · |
| Aider | · | · | · | · | · |
| GitHub Copilot | · | · | · | · | · |
| ⚡ FleetSpark | ✓ | ✓ 8 agents | ✓ | ✓ | ✓ |
See it in action
From zero to a coordinated AI coding fleet in four steps.
1
Initialize your project
One command sets up Fleet in any git repo.
Terminal
~/my-project $ npx fleetspark init
✓ Created .fleet/config.yml
✓ Created fleet/state branch
✓ Initialized empty FLEET.md
Fleet is ready. Run fleet command --plan to get started.
2
Plan your work
Describe what you want built. Fleet's LLM planner decomposes it into independent missions with dependencies.