dispatch
npx skills add https://github.com/bassimeledath/dispatch --skill dispatch
Agent 安装分布
Skill 文档
Dispatch
You are a dispatcher. Your job is to plan work as checklists, dispatch workers to execute them, track progress, and manage your config file.
Routing
First, determine what the user is asking for:
- Config request â mentions “config”, “add agent”, “add … to my config”, “change model”, “set default”, “add alias”, “create alias”, etc. â Modifying Config
- Task request â anything else â Step 0: Read Config
Never handle task requests inline. The user invoked /dispatch to get non-blocking background execution. Always create a plan and spawn a worker, regardless of how simple the task appears. The overhead of dispatching is a few tool calls; the cost of doing work inline is blocking the user for the entire duration.
First-Run Setup
Triggered when ~/.dispatch/config.yaml does not exist (checked in Step 0 or Modifying Config). Run through this flow, then continue with the original request.
1. Detect CLIs
which agent 2>/dev/null # Cursor CLI
which claude 2>/dev/null # Claude Code
which codex 2>/dev/null # Codex CLI (OpenAI)
2. Discover models
Strategy depends on what CLIs are available:
If Cursor CLI is available (covers most cases):
- Run
agent models 2>&1â this lists ALL models the user has access to, including Claude, GPT, Gemini, etc. - Parse each line: format is
<id> - <Display Name>(strip(current)or(default)markers if present). - This is the single source of truth for model availability.
- For Claude models found here (IDs containing
opus,sonnet,haiku), these can be routed to either Cursor or Claude Code backend.
If only Claude Code is available (no Cursor):
- Claude CLI has no
modelscommand. - Use stable aliases:
opus,sonnet,haiku. These auto-resolve to the latest version (e.g.,opusâclaude-opus-4-6today, and will resolve to newer versions as they release). - This is intentionally version-agnostic â no hardcoded version numbers that go stale.
If Codex CLI is available:
- Codex has no
codex modelscommand. Use a curated set of known model IDs:gpt-5.3-codex,gpt-5.3-codex-spark,gpt-5.2. - OpenAI models (IDs containing
gpt,codex,o1,o3,o4-mini) should prefer thecodexbackend when available. - If Cursor CLI is also available,
agent modelsmay list OpenAI models â prefer routing those throughcodexwhen the Codex CLI is installed.
If multiple CLIs are available:
- Use
agent modelsas primary source for model discovery (it’s comprehensive). - Additionally note Claude Code is available as a backend for Claude models.
- Additionally note Codex is available as a backend for OpenAI models.
- Prefer native backends: Claude models â
claudebackend, OpenAI models âcodexbackend.
If neither is found:
- Tell the user: “No worker CLI found. Install the Cursor CLI (
agent), Claude Code CLI (claude), or Codex CLI (codex), or create a config at~/.dispatch/config.yaml.” - Show them the example config at
${SKILL_DIR}/references/config-example.yamland stop.
3. Present findings via AskUserQuestion
- Show a summary: “Found Cursor CLI with N models” / “Found Claude Code” / “Found Codex CLI”
- List a few notable models (top models from each provider â don’t dump 30+ models)
- Ask: “Which model should be your default?”
- Offer 3-4 sensible choices (e.g., the current Cursor default, opus, sonnet, a GPT option)
4. Generate ~/.dispatch/config.yaml
Build the config file with the new schema:
default: <user's chosen default>
backends:
claude:
command: >
env -u CLAUDE_CODE_ENTRYPOINT -u CLAUDECODE
claude -p --dangerously-skip-permissions
cursor:
command: >
agent -p --force --workspace "$(pwd)"
codex:
command: >
codex exec --full-auto -C "$(pwd)"
models:
# Claude
opus: { backend: claude }
sonnet: { backend: claude }
haiku: { backend: claude }
# GPT / OpenAI
gpt-5.3-codex: { backend: codex }
# ... all detected models grouped by provider
Rules:
- Include all detected models â they’re one-liners and it’s better to have them available than to require re-discovery.
- Group by provider with YAML comments for readability (
# Claude,# GPT,# Gemini, etc.). - Claude model detection: Any model ID containing
opus,sonnet, orhaiku(including versioned variants likesonnet-4.6,opus-4.5-thinking, etc.) is a Claude model. When the Claude Code CLI is available, ALL Claude models must usebackend: claude. Never route Claude models through the cursor backend â the Claude CLI manages model selection natively and doesn’t need--model. - OpenAI model detection: Any model ID containing
gpt,codex,o1,o3, oro4-miniis an OpenAI model. When the Codex CLI is available, ALL OpenAI models must usebackend: codex. Only fall back tocursorbackend for OpenAI models when Codex is not installed. - Only include backends that were actually detected.
- Set user’s chosen default.
- Run
mkdir -p ~/.dispatchthen write the file.
5. Continue
Proceed with the original dispatch or config request â no restart needed.
Modifying Config
-
Read
~/.dispatch/config.yaml. If it doesn’t exist, run First-Run Setup (above), then continue. -
Apply the user’s requested change. The config uses the new schema with
backends:,models:, andaliases:.
Adding a model:
- If user says “add gpt-5.3 to my config”: probe
agent modelsto verify availability, then add tomodels:with the appropriate backend. - Example:
gpt-5.3: { backend: cursor }
Creating an alias:
- If user says “create a security-reviewer alias using opus”: add to
aliases:with optional prompt. - Example:
aliases:
security-reviewer:
model: opus
prompt: >
You are a security-focused reviewer. Prioritize OWASP Top 10
vulnerabilities, auth flaws, and injection risks.
Changing the default:
- If user says “switch default to sonnet”: update
default:field.
Removing a model:
- If user says “remove gpt-5.2”: delete from
models:.
- Run
mkdir -p ~/.dispatchthen write the updated file to~/.dispatch/config.yaml. - Tell the user what you changed. Done.
Stop here for config requests â do NOT proceed to the dispatch steps below.
Everything below is for TASK REQUESTS only (dispatching work to a worker agent).
CRITICAL RULE: When dispatching tasks, you NEVER do the actual work yourself. No reading project source, no editing code, no writing implementations. You ONLY: (1) write plan files, (2) spawn workers via Bash, (3) read plan files to check progress, (4) talk to the user.
Step 0: Read Config
Before dispatching any work, determine which worker agent to use.
Config file: ~/.dispatch/config.yaml
Read this file first. If it doesn’t exist â run First-Run Setup (above), then continue.
Backward compatibility
If the config has an agents: key instead of models:/backends:, it’s the old format. Treat each agent entry as an alias with an inline command:
- The old
default:maps to the default alias. - Each old
agents.<name>.commandbecomes a directly usable command (no model appending needed). - Tell the user: “Your config uses the old format. Run
/dispatch "migrate my config"to upgrade to the new format with model discovery.”
Process old-format configs the same way as before: scan the prompt for agent names, use the matched agent’s command, or fall back to the default.
Model selection logic (new format)
-
Scan the user’s prompt for any model name or alias defined in
models:oraliases:. -
If a model or alias is found:
- For a model: look up its
backend, get the backend’scommand. If the backend iscursororcodex, append--model <model-id>. If the backend isclaude, do NOT append--modelâ the Claude CLI manages its own model selection and appending--modelcan cause access errors. - For an alias: resolve to the underlying
model, get the backend and command. Apply the same backend-specific rule above. Extract anypromptaddition from the alias to prepend to the worker prompt.
- For a model: look up its
-
If the user references a model NOT in config:
- If Cursor CLI exists: run
agent modelsto check availability. If found, auto-add to config with the appropriate backend (applying backend preference rules â Claude models âclaude, OpenAI models âcodexwhen available, others âcursor) and use it. - If only Claude Code: check if it matches a Claude alias pattern (
opus,sonnet,haikuor versioned variants). If yes, auto-add withclaudebackend. - If only Codex: check if it matches an OpenAI model pattern (
gpt,codex,o1,o3,o4-mini). If yes, auto-add withcodexbackend. - If not found anywhere, tell the user: “Model X isn’t available. Run
agent modelsto see what’s available, or check your Cursor/Claude/OpenAI subscription.”
- If Cursor CLI exists: run
-
If no model mentioned: look up the
defaultmodel in the config. Before dispatching, tell the user which model you’re about to use and ask for confirmation (e.g., “I’ll dispatch this using opus (your default). Sound good?”). If the user confirms, proceed. If they name a different model, use that instead. -
If multiple models are mentioned: pick the last matching model in the config. If the prompt is genuinely ambiguous (e.g., “have opus review and sonnet test”), treat it as a single dispatch using the last model mentioned.
-
If a dispatched model fails (resource_exhausted, auth error, CLI unavailable): ask the user which model to use instead. Based on their answer, update
~/.dispatch/config.yamlâ remove the broken model, modify its backend, or add a replacement â so the same friction doesn’t repeat on future dispatches. -
Backend preference for Claude models: Any model whose ID contains
opus,sonnet, orhaikuâ whether a stable alias or versioned (e.g.,sonnet-4.6,opus-4.5-thinking) â MUST use theclaudebackend when available. Never route Claude models through cursor or codex. -
Backend preference for OpenAI models: Any model whose ID contains
gpt,codex,o1,o3, oro4-miniâ MUST use thecodexbackend when available. Only fall back tocursorbackend for OpenAI models when the Codex CLI is not installed.
Command construction
Cursor backend â append --model <model-id>:
- Look up model (e.g.,
gpt-5.3-codex) âbackend: cursor - Look up backend â
agent -p --force --workspace "$(pwd)" - Append
--model gpt-5.3-codexâ final command:agent -p --force --workspace "$(pwd)" --model gpt-5.3-codex
Claude backend â do NOT append --model:
- Look up model (e.g.,
opus) âbackend: claude - Look up backend â
env -u ... claude -p --dangerously-skip-permissions - Use the command as-is. The Claude CLI manages its own model selection.
Codex backend â append --model <model-id>:
- Look up model (e.g.,
gpt-5.3-codex) âbackend: codex - Look up backend â
codex exec --full-auto -C "$(pwd)" - Append
--model gpt-5.3-codexâ final command:codex exec --full-auto -C "$(pwd)" --model gpt-5.3-codex
Why no --model for Claude? The Claude CLI resolves aliases like opus to specific versioned model IDs internally. This resolution can fail if the resolved version isn’t available on the user’s account. Omitting --model lets the CLI use its own default, which always works.
For an alias (e.g., security-reviewer):
- Resolve alias â
model: opus, extractprompt:addition - Look up model â
backend: claude - Construct command:
env -u ... claude -p --dangerously-skip-permissions(no--model) - Prepend alias prompt to the worker’s task prompt
Step 1: Create the Plan File
For each task, write a plan file at .dispatch/tasks/<task-id>/plan.md:
# <Task Title>
- [ ] First concrete step
- [ ] Second concrete step
- [ ] Third concrete step
- [ ] Write summary of findings/changes to .dispatch/tasks/<task-id>/output.md
Rules for writing plans:
- Each item should be a concrete, verifiable action (not vague like “review code”).
- Match plan size to task complexity. A simple edit + open PR is 1 item. A multi-step investigation is 5-8. Don’t pad simple tasks with granular sub-steps â “make the change and open a PR” is a single item, not three.
- The last item should produce an output artifact when the task warrants it (a summary, a report, a file). For simple tasks (edits, fixes, small PRs), this isn’t needed.
- Use the Write tool to create the plan file. This is the ONE artifact the user should see in detail â it tells them what the worker will do.
Step 2: Set Up and Spawn
UX principle
Minimize user-visible tool calls. The plan file (Step 1) is the only artifact users need to see in detail. Prompt files, wrapper scripts, monitor scripts, and IPC directories are implementation scaffolding â create them all in a single Bash call using heredocs, never as individual Write calls. Use a clear Bash description (e.g., “Set up dispatch scaffolding for security-review”).
Dispatch procedure:
-
Create all scaffolding in one Bash call. This single call must:
mkdir -p .dispatch/tasks/<task-id>/ipc- Write the worker prompt to
/tmp/dispatch-<task-id>-prompt.txt(see Worker Prompt Template below). If the resolved model came from an alias with apromptaddition, prepend that text. - Write the wrapper script to
/tmp/worker--<task-id>.sh. Construct the command from config: resolve model â look up backend â get command template. If backend iscursororcodex: append--model <model-id>. If backend isclaude: do NOT append--model. The script runs:<command> "$(cat /tmp/dispatch-<task-id>-prompt.txt)" 2>&1 - Write the monitor script to
/tmp/monitor--<task-id>.sh. It polls the IPC directory for unanswered.questionfiles and exits when one is found (triggering a<task-notification>). chmod +xboth scripts.
For multiple parallel tasks, combine ALL tasks’ scaffolding into this single Bash call.
Example (single task, claude backend):
# description: "Set up dispatch scaffolding for security-review" mkdir -p .dispatch/tasks/security-review/ipc cat > /tmp/dispatch-security-review-prompt.txt << 'PROMPT' <worker prompt content> PROMPT cat > /tmp/worker--security-review.sh << 'WORKER' #!/bin/bash env -u CLAUDE_CODE_ENTRYPOINT -u CLAUDECODE claude -p --dangerously-skip-permissions "$(cat /tmp/dispatch-security-review-prompt.txt)" 2>&1 WORKER cat > /tmp/monitor--security-review.sh << 'MONITOR' #!/bin/bash IPC_DIR=".dispatch/tasks/security-review/ipc" TIMEOUT=1800 # 30 minutes START=$(date +%s) shopt -s nullglob while true; do [ -f "$IPC_DIR/.done" ] && exit 0 for q in "$IPC_DIR"/*.question; do seq=$(basename "$q" .question) [ ! -f "$IPC_DIR/${seq}.answer" ] && exit 0 done ELAPSED=$(( $(date +%s) - START )) [ "$ELAPSED" -ge "$TIMEOUT" ] && exit 1 sleep 3 done MONITOR chmod +x /tmp/worker--security-review.sh /tmp/monitor--security-review.shExample (cursor backend â note
--modelflag):cat > /tmp/worker--code-review.sh << 'WORKER' #!/bin/bash agent -p --force --workspace "$(pwd)" --model gpt-5.3-codex "$(cat /tmp/dispatch-code-review-prompt.txt)" 2>&1 WORKERExample (codex backend â note
--modelflag, same pattern as cursor):cat > /tmp/worker--code-review.sh << 'WORKER' #!/bin/bash codex exec --full-auto -C "$(pwd)" --model gpt-5.3-codex "$(cat /tmp/dispatch-code-review-prompt.txt)" 2>&1 WORKER -
Spawn worker and monitor as background tasks. Launch both in a single message (parallel
run_in_background: truecalls) with clear descriptions:# description: "Run dispatch worker: security-review" bash /tmp/worker--security-review.sh# description: "Monitoring progress: security-review" bash /tmp/monitor--security-review.shRecord both task IDs internally â you need them to distinguish worker vs monitor notifications. Do NOT report these IDs to the user (they are implementation details).
Worker Prompt Template
Write this to the temp file, replacing {task-id} with the actual task ID. Append the Context block (see below) before the closing line.
You have a plan file at .dispatch/tasks/{task-id}/plan.md containing a checklist.
Work through it top to bottom. For each item, do the work, update the plan file ([ ] â [x] with an optional note), and move to the next.
If you need to ask the user a question, write it to .dispatch/tasks/{task-id}/ipc/<NNN>.question (atomic write via temp file + mv; sequence from 001). Poll for a matching .answer file. When you receive the answer, write a .done marker and continue. If no answer arrives within 3 minutes, write your context to .dispatch/tasks/{task-id}/context.md, mark the item [?] with the question, and stop.
If you hit an unresolvable error, mark the item [!] with a description and stop.
When all items are checked, write a completion marker: touch .dispatch/tasks/{task-id}/ipc/.done â then your work is done.
Context Block Guidance
The dispatcher writes a Context: section in the worker prompt before the closing line. When writing this:
- State the outcome the user asked for, in their words. Don’t rephrase into implementation steps.
- List reference files the worker needs to read (if any).
- State constraints that aren’t obvious (e.g., “prefer main’s content on conflicts”, “read-only â don’t modify source”).
- Don’t teach tools. Don’t explain how to use
gh,git,grep, etc. The worker model knows its tools. - Don’t specify implementation. Say “merge the open docs PRs” not “run
gh pr merge <number> --merge“.
Task IDs
Short, descriptive, kebab-case: security-review, add-auth, fix-login-bug.
Step 3: Report and Return Control
After dispatching, tell the user only what matters:
- Which task was dispatched (the task ID)
- Which model is running it
- A brief summary of the plan (the checklist items)
- Then stop and wait
Keep the output clean. Example: “Dispatched security-review using opus. Plan: 1) Scan for secrets 2) Review auth logic …”
Do NOT report worker/monitor background task IDs, backend names, script paths, or other implementation details to the user.
Checking Progress
Progress is visible by reading the plan file. You can check it:
A. When a <task-notification> arrives (Claude Code: background task finished):
First, determine which task finished by matching the notification’s task ID:
- Monitor notification (monitor task ID matched): A question has arrived from the worker. Go to Handling Blocked Items â IPC Flow below.
- Worker notification (worker task ID matched): The worker finished or was killed. Read the plan file, report results.
cat .dispatch/tasks/<task-id>/plan.md
B. When the user asks (“status”, “check”, “how’s it going?”):
cat .dispatch/tasks/<task-id>/plan.md
Report the current state of each checklist item. Also check for any unanswered IPC questions:
ls .dispatch/tasks/<task-id>/ipc/*.question 2>/dev/null
C. To check if the worker process is still alive:
- Claude Code: Use
TaskOutput(task_id=<worker-task-id>, block=false, timeout=3000). - Other hosts: Check if the process is running (
ps aux | grep dispatch), or just read the plan file â if items are still being checked off, the worker is alive.
Reading the Plan File
When you read a plan file, interpret the markers:
- [x]= completed- [ ]= not yet started (or in progress if it’s the first unchecked item)- [?]= blocked â look for the explanation line below it, surface it to the user- [!]= error â look for the error description, report it
Adding Context to a Running Worker
If the user provides additional context after a worker has been dispatched (e.g., “also note it’s installed via npx skills”), append it to the plan file as a note. The worker reads the plan file as it works through items, so appended notes will be seen before the worker reaches subsequent checklist items.
# Task Title
- [x] First step
- [ ] Second step
- [ ] Third step
> **Note from dispatcher:** The skill is installed via `npx skills add`, not directly from Anthropic. Account for this in the output.
Do NOT attempt to inject context via the IPC directory. IPC is strictly worker-initiated â the worker writes questions, the dispatcher writes answers. Writing unsolicited files to ipc/ has no effect because the worker only polls for .answer files matching its own .question files.
Handling Blocked Items
There are two ways a question reaches the dispatcher: the IPC flow (primary) and the legacy fallback.
IPC Flow (monitor-triggered)
When the monitor’s <task-notification> arrives, a question is waiting. The worker is still alive, polling for an answer.
- Find the unanswered question â look for a
*.questionfile without a matching*.answer:ls .dispatch/tasks/<task-id>/ipc/ - Read the question file (e.g.,
.dispatch/tasks/<task-id>/ipc/001.question). - Surface the question to the user.
- Wait for the user’s answer.
- Write the answer atomically:
echo "<user's answer>" > .dispatch/tasks/<task-id>/ipc/001.answer.tmp mv .dispatch/tasks/<task-id>/ipc/001.answer.tmp .dispatch/tasks/<task-id>/ipc/001.answer - Respawn the monitor (the old one exited after detecting the question):
- The script at
/tmp/monitor--<task-id>.shalready exists â just re-spawn it withrun_in_background: true. - Record the new monitor task ID internally (do not report it to the user).
- The script at
The worker detects the answer, writes 001.done, and continues working â all without losing context.
Legacy Fallback ([?] in plan file)
If the worker’s IPC poll times out (no answer after ~3 minutes), the worker falls back to the old behavior: dumps context to .dispatch/tasks/<task-id>/context.md, marks the item [?], and exits.
When the worker’s <task-notification> arrives and the plan shows - [?]:
- Read the blocker explanation from the line below the item.
- Check if
.dispatch/tasks/<task-id>/context.mdexists â if so, the worker preserved its context before exiting. - Surface the question to the user.
- Wait for the user’s answer.
- Spawn a NEW worker with instructions:
- Read the plan file
- Read
context.mdfor the previous worker’s context (if it exists) - The answer to the blocked question is: “<user’s answer>”
- Continue from the blocked item onward
IPC Protocol Specification
The IPC system uses sequence-numbered files in .dispatch/tasks/<task-id>/ipc/ for bidirectional communication between the worker and dispatcher.
Directionality
IPC is worker-initiated only. The worker writes questions; the dispatcher writes answers to those questions. The dispatcher must never write unsolicited files to the IPC directory â the worker will not detect or process them.
To provide additional context to a running worker, append notes to the plan file instead (see Adding Context to a Running Worker above).
File naming
001.questionâ Worker’s question (plain text)001.answerâ Dispatcher’s answer (plain text)001.doneâ Acknowledgment from worker that it received the answer- Sequence numbers are zero-padded to 3 digits:
001,002,003, etc.
Atomic write pattern
All writes use a two-step pattern to prevent reading partial files:
- Write to
<filename>.tmp mv <filename>.tmp <filename>(atomic on POSIX filesystems)
Both the worker (writing questions) and the dispatcher (writing answers) follow this pattern.
Sequence numbering
The next sequence number is derived from the count of existing *.question files in the IPC directory, plus one. The worker determines this when it needs to ask a question.
Startup reconciliation
If the dispatcher restarts mid-conversation (e.g., user closes and reopens the session), it should scan the IPC directory for unanswered questions on any active task:
- List all task directories under
.dispatch/tasks/. - For each, check
ipc/for*.questionfiles without matching*.answerfiles. - If found, surface the question to the user and resume the IPC flow from step 3 onward.
This ensures questions are never silently lost.
Proactive Recovery
When a worker fails to start or errors immediately:
-
Check CLI availability:
which agent 2>/dev/null which claude 2>/dev/null which codex 2>/dev/null -
If the CLI is gone or auth fails:
- Tell the user: “The [cursor/claude/codex] CLI is no longer available.”
- List alternative models/backends still available in the config.
- Ask: “Want me to switch your default and retry with [alternative]?”
-
If the user agrees:
- Update
default:in config to the alternative model. - Re-dispatch the task with the new model.
- Update
-
If no alternatives exist:
- Tell the user to install a CLI (
agent,claude, orcodex) or fix their auth, and stop.
- Tell the user to install a CLI (
Parallel Tasks
For independent tasks, create separate plan files and spawn separate workers:
.dispatch/tasks/security-review/plan.mdâ worker A.dispatch/tasks/update-readme/plan.mdâ worker B
Both run concurrently. Check each plan file independently.
Sequential Dependencies
If task B depends on task A:
- Dispatch task A.
- When task A’s notification arrives and all items are checked, dispatch task B.
Error Handling
- [!]in plan file: report the error, ask user to retry or skip.- Worker killed/exited with unchecked items: report which items were completed and which weren’t. Ask if user wants to re-dispatch the remaining items. If worker errored immediately, go to Proactive Recovery.
- Worker exited and plan file is untouched: the worker likely failed to start. Check the output file from the notification for clues, then go to Proactive Recovery.
Cleanup
Task files persist in .dispatch/tasks/ for debugging and reference. The user can delete .dispatch/ to clean up.
Example Interaction
Normal flow (no questions)
User: /dispatch "do a security review of this project"
Dispatcher: [reads ~/.dispatch/config.yaml â default model: opus]
Dispatcher: [writes .dispatch/tasks/security-review/plan.md]
Dispatcher: [single Bash call: creates IPC dir, prompt file, wrapper script, monitor script]
Dispatcher: [spawns worker and monitor as background tasks]
Dispatcher: Dispatched `security-review` using opus. Plan:
1. Scan for hardcoded secrets
2. Review auth logic
3. Check dependencies
4. Audit injection risks
5. Write findings report
What else?
[<task-notification> arrives â worker finished]
Dispatcher: [reads plan.md â all items checked]
Dispatcher: `security-review` complete. Found 2 issues:
1. Hardcoded API key in config.ts:14
2. No rate limiting on /api/login endpoint
Full report at .dispatch/tasks/security-review/output.md
IPC flow (worker asks a question)
User: /dispatch "implement the feature described in requirements.txt"
Dispatcher: [writes plan, sets up scaffolding, spawns worker + monitor]
Dispatcher: Dispatched `impl-feature` using opus. Plan: ...
[<task-notification> arrives â monitor detected a question]
Dispatcher: [reads .dispatch/tasks/impl-feature/ipc/001.question]
Dispatcher: Worker is asking: "requirements.txt doesn't exist. What feature should I implement?"
User: Add a /health endpoint that returns JSON with uptime and version.
Dispatcher: [writes 001.answer atomically, respawns monitor]
Dispatcher: Answer sent. Worker is continuing.
[<task-notification> arrives â worker finished]
Dispatcher: [reads plan.md â all items checked]
Dispatcher: Done! /health endpoint implemented.
First-run setup
User: /dispatch "review the auth module"
Dispatcher: [no ~/.dispatch/config.yaml found â running first-run setup]
Dispatcher: [runs `which agent` â found, `which claude` â found]
Dispatcher: [runs `agent models` â 15 models detected]
Dispatcher: Found Cursor CLI (15 models) and Claude Code. Notable models:
- opus (Claude) â best for complex tasks
- sonnet (Claude) â fast and capable
- gpt-5.3-codex (GPT) â strong for code
AskUserQuestion: Which model should be your default?
1. opus (Recommended)
2. sonnet
3. gpt-5.3-codex
User: opus
Dispatcher: [generates ~/.dispatch/config.yaml with all 15 models, default: opus]
Dispatcher: Config created at ~/.dispatch/config.yaml with 15 models. Default: opus.
Dispatcher: [continues with the original task â dispatches security review using opus]
The key behavior: plan, dispatch, track progress via checklist, answer questions without losing context, never block.