ai-assisted-development
npx skills add https://github.com/peterbamuhigire/skills-web-dev --skill ai-assisted-development
Required Plugins
Superpowers plugin: MUST be active for all work using this skill. Use it throughout the entire build pipeline: design decisions, code generation, debugging, quality checks, and any task where it offers enhanced capabilities. If superpowers provides a better way to accomplish something, prefer it over the default approach.
AI-Assisted Development Orchestration
Overview
Learn to orchestrate multiple AI agents (like Claude Code, custom sub-agents, or specialized AI tools) to work together effectively in software development.
This skill bridges prompting patterns + orchestration + sub-agent coordination for real-world AI-assisted development.
What you’ll learn:
- The 5 orchestration strategies for AI development
- AI-specific coordination patterns (Agent Handoff, Fan-Out/Fan-In, Human-in-the-Loop)
- Real-world examples (MADUUKA, BRIGHTSOMA apps)
Documentation Structure (Tier 2 Deep Dives):
- 📖 orchestration-strategies.md – The 5 core strategies with detailed examples
- 📖 ai-patterns.md – AI-specific orchestration patterns
- 📖 practical-examples.md – Real MADUUKA and BRIGHTSOMA projects
When to Use This Skill
✅ USE when:
- Coordinating multiple AI agents on a single project
- Planning complex features with AI assistance
- Creating workflows that involve AI + human collaboration
- Setting up multi-agent development pipelines
- Optimizing AI-assisted development processes
❌ DON'T USE when:
- Single simple task with one AI agent (just use that agent directly)
- Manual development without AI assistance
- Basic prompting (use prompting-patterns-reference.md instead)
Core Concepts (Quick Reference)
1. AI Agent (Definition)
An AI agent is a specialized AI assistant that handles ONE category of work.
Examples:
- Planning Agent: Analyzes requirements, creates specs
- Coding Agent: Writes implementation code
- Testing Agent: Creates test cases
- Review Agent: Reviews code quality
- Documentation Agent: Writes documentation
Each agent has focused expertise and context.
2. Orchestration (for AI Development)
Orchestration = Coordinating multiple AI agents to work on a project together.
Example workflow:
Planning Agent: Create feature spec
  ↓ (spec output)
Coding Agent: Implement feature (uses spec as input)
  ↓ (code output)
Testing Agent: Create tests (uses code as input)
  ↓ (tests output)
Review Agent: Review everything (uses spec + code + tests)
  ↓
Human: Approve and deploy
3. Execution Strategies
- Sequential: One agent after another (most common)
- Parallel: Multiple agents working simultaneously (50-70% faster)
- Conditional: Different agents based on project type
- Looping: Iterate until quality threshold met
- Retry: Re-run agent if output unsatisfactory
The 5 Orchestration Strategies (Summary)
📖 See orchestration-strategies.md for complete details with code examples.
Strategy 1: Sequential AI Workflow
Use when: Each AI agent needs previous agent’s output
Pattern:
Agent 1 (Planning) → Spec
  ↓
Agent 2 (Coding) → Code
  ↓
Agent 3 (Testing) → Tests
  ↓
Agent 4 (Review) → Feedback
Example: Feature development pipeline (planning → coding → testing → review)
Time: 60 minutes (15 + 25 + 15 + 5)
Strategy 2: Parallel AI Execution
Use when: AI agents work on independent components
Pattern:
                ┌─ Agent 2a: Backend ──┐
Agent 1 (Spec) ─┼─ Agent 2b: Frontend ─┼─ Agent 3 (Integration)
                └─ Agent 2c: Docs ─────┘
Example: Full-stack feature (backend + frontend + docs simultaneously)
Time: 20 minutes parallel (vs 60 sequential) = 67% faster
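One way to run the independent middle stage concurrently is a thread pool; this is a sketch with stub agents (the agent bodies and names are illustrative assumptions, not part of the skill).

```python
# Parallel fan-out of three independent agents; results are collected
# once all of them finish.
from concurrent.futures import ThreadPoolExecutor

def backend_agent(spec: str) -> str:
    return f"backend({spec})"

def frontend_agent(spec: str) -> str:
    return f"frontend({spec})"

def docs_agent(spec: str) -> str:
    return f"docs({spec})"

def run_parallel(spec: str) -> dict:
    agents = {"backend": backend_agent,
              "frontend": frontend_agent,
              "docs": docs_agent}
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = {name: pool.submit(fn, spec) for name, fn in agents.items()}
        # .result() blocks until each agent is done (the "join" point).
        return {name: f.result() for name, f in futures.items()}

outputs = run_parallel("feature spec v1")
```

The wall-clock time of this stage is roughly the slowest agent, not the sum, which is where the 50-70% speedup comes from.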
Strategy 3: Conditional AI Routing
Use when: Different AI agents handle different project types
Pattern:
Analyze Project
  ↓
  ├─ IF (legacy) → Refactoring Agent
  ├─ ELIF (greenfield) → Architecture Agent
  ├─ ELIF (API) → Integration Agent
  └─ ELSE → Ask human
Example: Documentation generation based on project type
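The routing logic above maps directly onto a plain if/elif chain. A minimal sketch, with hypothetical project flags and stub agents:

```python
# Conditional routing: pick the right agent for the project type,
# with a human escalation path as the final fallback.

def refactoring_agent(project: dict) -> str:
    return "refactor plan"

def architecture_agent(project: dict) -> str:
    return "architecture plan"

def integration_agent(project: dict) -> str:
    return "integration plan"

def route(project: dict) -> str:
    if project.get("legacy"):
        return refactoring_agent(project)
    elif project.get("greenfield"):
        return architecture_agent(project)
    elif project.get("api"):
        return integration_agent(project)
    return "escalate to human"
```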
Strategy 4: Looping AI Iteration
Use when: AI agent needs to refine output until quality threshold met
Pattern:
┌──────────────────┐
│ Agent generates  │◄────┐
└────────┬─────────┘     │
         ▼               │
┌──────────────────┐     │
│ Quality >= 80%?  │     │
│   YES → Done     │     │
│   NO  → Refine ──┼─────┘
└──────────────────┘
(max 3 iterations)
Example: Code generation with quality loop (syntax → style → tests)
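The essential parts of the loop are the threshold check and the iteration cap. In this sketch, `score_output` is a stand-in for a real validator (a linter, a test run, or an LLM judge), and its rising scores are fabricated so the loop terminates.

```python
# Quality loop: regenerate until the score clears the threshold or
# the iteration cap is hit. Both the generator and the scorer are stubs.

def generate(attempt: int) -> str:
    return f"draft-{attempt}"

def score_output(output: str) -> int:
    # Stub scorer: quality improves each attempt (60, 75, 90, ...).
    attempt = int(output.split("-")[1])
    return 45 + 15 * attempt

def quality_loop(threshold: int = 80, max_iterations: int = 3):
    for attempt in range(1, max_iterations + 1):
        output = generate(attempt)
        score = score_output(output)
        if score >= threshold:
            return output, score
    # Exit condition: loops must end, so return best effort after the cap.
    return output, score

result, score = quality_loop()
```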
Strategy 5: Retry with Fallback
Use when: AI agent might fail due to external dependencies
Pattern:
Attempt 1 → Success? YES → Done
  ↓ FAIL (wait 5s)
Attempt 2 → Success? YES → Done
  ↓ FAIL (wait 10s)
Attempt 3 → Success? YES → Done
  ↓ FAIL
Fallback → Degraded mode
Example: External API integration (fetch schema, retry on timeout, use cache if all fail)
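A sketch of retry-with-fallback, assuming a flaky external call. The stub fails twice and then succeeds so the retry path is exercised; the delay is zeroed so the demo runs instantly (the doc's 5s/10s backoff would set `base_delay=5.0`).

```python
import time

def fetch_schema(attempt_log: list) -> str:
    # Stub external call: times out twice, then succeeds.
    attempt_log.append("call")
    if len(attempt_log) < 3:
        raise TimeoutError("schema fetch timed out")
    return "fresh schema"

def fetch_with_retry(max_attempts: int = 3, base_delay: float = 0.0) -> str:
    attempt_log = []
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch_schema(attempt_log)
        except TimeoutError:
            if attempt < max_attempts:
                # Linear backoff between attempts (5s, 10s in the doc).
                time.sleep(base_delay * attempt)
    return "cached schema"  # fallback: degraded mode
```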
The 3 AI Orchestration Patterns (Summary)
📖 See ai-patterns.md for complete details with code examples.
Pattern 1: Agent Handoff (Pipeline)
Use case: One AI agent completes work, passes output to next AI agent
Flow:
Agent A → Output → Agent B → Output → Agent C → Done
Example: Requirements Agent → Specification Agent → Implementation Agent
Key principle: Each agent’s output is next agent’s input
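That key principle reduces to a fold over the agent list: thread each output into the next agent. A sketch with stub agents (names are illustrative):

```python
# Agent handoff as a fold: the pipeline is just a list of functions,
# and each agent's output is the next agent's input.
from functools import reduce

def requirements_agent(text: str) -> str:
    return f"requirements({text})"

def specification_agent(reqs: str) -> str:
    return f"spec({reqs})"

def implementation_agent(spec: str) -> str:
    return f"impl({spec})"

pipeline = [requirements_agent, specification_agent, implementation_agent]

def run_handoff(initial: str, agents) -> str:
    return reduce(lambda output, agent: agent(output), agents, initial)

result = run_handoff("user story", pipeline)
```

Representing the pipeline as data (a list) makes it easy to reorder, insert, or log agents without rewriting the runner.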
Pattern 2: Fan-Out/Fan-In (Parallel + Combine)
Use case: Split work across multiple AI agents, then combine results
Flow:
                 ┌─ Agent A ─┐
Input (split) ───┼─ Agent B ─┼── Combine → Output
                 └─ Agent C ─┘
Example: Multi-component documentation (database + API + UI docs in parallel, then combine)
Speedup: 50-70% faster than sequential
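For the multi-component documentation example, fan-out/fan-in can be sketched with `asyncio.gather` (the doc agents here are hypothetical stubs):

```python
import asyncio

async def database_docs(spec: str) -> str:
    return f"db docs for {spec}"

async def api_docs(spec: str) -> str:
    return f"api docs for {spec}"

async def ui_docs(spec: str) -> str:
    return f"ui docs for {spec}"

async def fan_out_fan_in(spec: str) -> str:
    # Fan out: run the three doc agents concurrently.
    parts = await asyncio.gather(database_docs(spec), api_docs(spec), ui_docs(spec))
    # Fan in: combine the partial results into one document.
    return "\n".join(parts)

combined = asyncio.run(fan_out_fan_in("v2"))
```

`gather` preserves the order of its arguments, so the combine step can rely on a stable section order.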
Pattern 3: Human-in-the-Loop (Gated Approval)
Use case: AI agents generate work, human approves before continuing
Flow:
Agent 1 → Output → [HUMAN REVIEW] → Approved? YES → Agent 2
                                        ↓ NO
                                      Revise
Example: Spec → [Review] → Code → [Review] → Tests → [Review] → Deploy
Benefits: Safety, quality control, compliance, learning
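One way to model the gate is to inject the approval hook as a function, so the same pipeline works with a CLI prompt (`input()`), a review queue, or a test double. Everything below is a sketch; the agent and stage names are hypothetical.

```python
def spec_agent(req: str) -> str:
    return f"spec for {req}"

def coding_agent(spec: str) -> str:
    return f"code for {spec}"

def gated_pipeline(req: str, approve, max_revisions: int = 2) -> str:
    # `approve(stage, artifact)` is the human gate. It is injected so
    # the flow can be driven by a real reviewer or a test.
    spec = spec_agent(req)
    for _ in range(max_revisions + 1):
        if approve("spec", spec):
            return coding_agent(spec)
        spec = spec_agent(req + " (revised)")
    raise RuntimeError("spec rejected after maximum revisions")
```

Note the revision cap: like looping strategies, a human gate needs an exit condition so a permanently rejected artifact escalates instead of cycling forever.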
Quick Reference: When to Use Which
| Strategy/Pattern | Use When | Benefit |
|---|---|---|
| Sequential | Each agent needs previous output | Simple, predictable |
| Parallel | Independent components | 50-70% faster |
| Conditional | Different project types | Right agent for the job |
| Looping | Quality threshold must be met | High-quality output |
| Retry | External dependencies might fail | Graceful error handling |
| Agent Handoff | Pipeline of transformations | Clear traceability |
| Fan-Out/Fan-In | Parallel work + combine | Maximum speed |
| Human-in-the-Loop | High-risk or critical features | Safety + quality control |
Real-World Examples (Summary)
📖 See practical-examples.md for complete detailed walkthroughs.
Example 1: MADUUKA – Franchise Inventory Sync
Project: Multi-tenant franchise inventory management
Orchestration used:
- Sequential (Requirements → Implementation → Testing → Review)
- Parallel (Database + API + Tests simultaneously)
- Human-in-the-Loop (3 approval gates)
Agents:
- Requirements Agent: Create spec (15 min)
- Database Agent: Schema + models (20 min) ──┐
- API Agent: Endpoints + validation (20 min) ├─ Parallel
- Testing Agent: Tests (20 min) ─────────────┘
- Integration Agent: Run tests (10 min)
- Review Agent: Quality check (15 min)
Result: 75 minutes (vs 115 sequential) = 35% faster
Example 2: BRIGHTSOMA – AI Exam Generation
Project: AI-powered exam question generator
Orchestration used:
- Looping (Generate questions until quality >= 80%)
- Retry (Handle AI API failures)
- Sequential (Generator → Rubrics → PDF)
Agents:
- Question Generator: 17 questions with quality loops (30 min)
- Validator: Check quality (embedded in loop)
- Rubric Generator: Grading rubrics (10 min)
- PDF Generator: Formatted exam + answer key (5 min)
Result: 45 minutes (vs 180 manual) = 75% faster + higher quality
Practical Workflow: How to Apply This Skill
Step 1: Analyze Your Task
Questions to ask:
- How many components does this feature have?
- Can any work be done in parallel?
- Are there external dependencies (APIs, databases)?
- Is this high-risk (needs human approval)?
- What’s the quality threshold?
Step 2: Choose Orchestration Strategies
Based on analysis:
- Sequential dependencies? → Use Sequential strategy
- Independent components? → Use Parallel strategy
- Different project types? → Use Conditional strategy
- Quality threshold? → Use Looping strategy
- External APIs? → Use Retry strategy
Combine multiple strategies for complex projects.
Step 3: Design Agent Workflow
Define agents:
- What does each agent do? (ONE job each)
- What input does each need?
- What output does each produce?
- What’s the execution order?
Example:
Agent 1: Planning Agent
Input: User requirements
Output: docs/specs/feature-spec.md
Execution: Sequential (first)
Agent 2a: Backend Agent
Input: docs/specs/feature-spec.md
Output: Backend code
Execution: Parallel with 2b and 2c
Agent 2b: Frontend Agent
Input: docs/specs/feature-spec.md
Output: Frontend code
Execution: Parallel with 2a and 2c
Agent 2c: Testing Agent
Input: docs/specs/feature-spec.md
Output: Test files
Execution: Parallel with 2a and 2b
Agent 3: Integration Agent
Input: Backend + Frontend + Tests
Output: Integrated feature
Execution: Sequential (after 2a, 2b, 2c)
Step 4: Write Clear Prompts
Use prompting patterns (see prompting-patterns-reference.md):
"[TASK]
FILE TO READ: [input file from previous agent]
CONTEXT: [Why this is needed, what it builds on]
ORCHESTRATION: [Sequential/Parallel/Conditional/Looping/Retry]
[Dependencies or parallel info]
CONSTRAINTS:
- [Technical constraint 1]
- [Limit 2]
- [Standard 3]
OUTPUT: [Expected output files/format]"
Step 5: Add Human Gates (if needed)
For high-risk work:
Agent → Output → [HUMAN REVIEW] → Approved? → Next agent
What to check:
- Security implications
- Business logic correctness
- Compliance requirements
- Performance concerns
Step 6: Execute and Monitor
Track:
- Which agent is running
- What output was produced
- Quality metrics (if looping)
- Time spent per agent
- Any failures or retries
Log everything for debugging and optimization.
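A lightweight way to get that tracking is to wrap every agent in a monitoring decorator. This is one possible sketch; the agent stubs and field names are illustrative.

```python
import time

def monitored(name: str, agent, log: list):
    # Wrap an agent so every run records its name, duration, and
    # output size: the minimum needed for debugging and optimization.
    def wrapper(*args):
        start = time.perf_counter()
        output = agent(*args)
        log.append({
            "agent": name,
            "seconds": round(time.perf_counter() - start, 4),
            "output_chars": len(str(output)),
        })
        return output
    return wrapper

run_log = []
plan = monitored("planning", lambda req: f"spec({req})", run_log)
code = monitored("coding", lambda spec: f"code({spec})", run_log)

artifact = code(plan("inventory sync"))
```

Because the wrapper is transparent (same inputs, same outputs), it can be added to any strategy above without changing the orchestration logic.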
Best Practices
DO:
✅ Break work into focused agents – Each agent does ONE job well
✅ Parallelize when possible – 50-70% faster execution
✅ Add quality loops – Don't accept poor output
✅ Include human gates – High-risk work needs approval
✅ Handle failures gracefully – Retry with backoff, have fallbacks
✅ Provide clear context – Each agent gets spec, input files, orchestration info
✅ Log everything – Agent interactions, decisions, outputs
✅ Combine strategies – Use multiple for complex projects
DON’T:
❌ Don't over-orchestrate simple tasks – Sometimes 1 agent is enough
❌ Don't parallelize dependent work – Causes race conditions
❌ Don't skip quality validation – AI output needs verification
❌ Don't forget exit conditions – Loops must end
❌ Don't assume AI is perfect – Plan for failures
❌ Don't skip human review – Critical features need oversight
Integration with Other Skills
- feature-planning: Use AI agents to execute implementation plans
- prompting-patterns-reference: Better prompts = better agent output
- orchestration-patterns-reference: General orchestration concepts
- custom-sub-agents: Create specialized AI agents
Summary
AI-assisted development orchestration delivers:
- 30-75% faster development (parallelization + automation)
- Higher quality output (validation loops, human gates)
- Better consistency (AI follows patterns reliably)
- Reduced errors (validation catches issues early)
Key concepts:
- Break work into focused agents (ONE job each)
- Use 5 orchestration strategies (Sequential, Parallel, Conditional, Looping, Retry)
- Apply 3 AI patterns (Agent Handoff, Fan-Out/Fan-In, Human-in-the-Loop)
- Combine strategies for complex projects
- Always include quality validation and human oversight
Next steps:
- 📖 Read orchestration-strategies.md for detailed strategy examples
- 📖 Read ai-patterns.md for AI-specific patterns
- 📖 Read practical-examples.md for real MADUUKA and BRIGHTSOMA walkthroughs
- Apply to your own projects!
Related Skills:
- feature-planning/ – Create implementation plans that AI agents can execute
- prompting-patterns-reference.md – Better prompts for better AI output
- orchestration-patterns-reference.md – General orchestration concepts
- custom-sub-agents/ – Create specialized AI agents
Last Updated: 2026-02-07
Line Count: ~490 lines (compliant with doc-standards.md)