workflow-management

Install:

npx skills add https://github.com/onesmartguy/next-level-real-estate --skill workflow-management
PayK12 Workflow Management
Streamline development workflows across the PayK12 multi-repository system with intelligent task coordination, cost tracking, and continuous improvement feedback. This skill provides patterns for workflow optimization, monitoring, and automation.
When to Use This Skill
- Monitoring /bug-fix command execution and success rates
- Analyzing token usage and cost optimization opportunities
- Coordinating multi-step tasks across repositories
- Tracking workflow health metrics and improvements
- Implementing workflow automation strategies
- Planning sprint work and task allocation
- Analyzing performance bottlenecks
- Managing developer productivity
When NOT to Use
- For specific repository development → Use repository-specific skills
- For infrastructure/deployment → Invoke cloud-architect or deployment-engineer
- For individual feature development → Use nextjs-pro, dotnet-pro agents
- For security concerns → Invoke security-auditor agent
Quick Reference
Workflow System Overview
PayK12 Workflow Stack:
├── /bug-fix command (2120+ lines)
│   ├── Phase 1: Analysis
│   ├── Phase 2: Reproduction (Playwright)
│   ├── Phase 3: Implementation
│   ├── Phase 4: Testing
│   └── Phase 5: PR Creation
├── Session logging & cost tracking
├── Agent dispatch & coordination
└── Continuous improvement feedback
Key Metrics to Track
- Success Rate: % of workflows that complete without manual intervention
- Iteration Count: Average iterations per bug fix (target: 1-2)
- Cost Per Bug: Total tokens used divided by bugs fixed
- Time Per Bug: Wall-clock time from start to merge
- Agent Utilization: Which agents are most frequently used
- Context Cache Hit Rate: Cached vs. fresh context loads
- Token Efficiency: Tokens used per artifact generated
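These metrics fall out of simple session counters. A minimal sketch of how they could be derived — the field names and sample totals here are illustrative, not the actual log schema:

```python
from dataclasses import dataclass

@dataclass
class WorkflowMetrics:
    bugs_fixed: int
    successes: int        # completed without manual intervention
    total_iterations: int
    total_tokens: int
    cache_hits: int
    context_loads: int

    @property
    def success_rate(self): return 100 * self.successes / self.bugs_fixed
    @property
    def avg_iterations(self): return self.total_iterations / self.bugs_fixed
    @property
    def tokens_per_bug(self): return self.total_tokens / self.bugs_fixed
    @property
    def cache_hit_rate(self): return 100 * self.cache_hits / self.context_loads

# Sample counters (iteration and cache totals are made up for illustration).
m = WorkflowMetrics(bugs_fixed=47, successes=43, total_iterations=66,
                    total_tokens=47 * 70_000, cache_hits=62, context_loads=100)
print(f"{m.success_rate:.1f}% success, {m.avg_iterations:.1f} iterations/bug")
```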
Core Workflow Patterns
Pattern 1: Automated Bug Fix Workflow
The /bug-fix Command Flow:
User: /bug-fix PL-479
1. ANALYSIS PHASE
   ├── Parse JIRA ticket PL-479
   ├── Extract requirements
   ├── Identify repository scope
   ├── Assess complexity
   └── Create execution plan
2. REPRODUCTION PHASE
   ├── Generate test case for bug
   ├── Run Playwright tests (should fail)
   ├── Capture failure evidence
   ├── Document reproduction steps
   └── Create test baseline
3. IMPLEMENTATION PHASE
   ├── Dispatch to appropriate agent
   │   ├── dotnet-pro for API changes
   │   ├── nextjs-pro for frontend changes
   │   ├── legacy-modernizer for legacy changes
   │   └── multi-repo-fixer for cross-repo
   ├── Implement fix
   ├── Run local tests
   └── Update documentation
4. TESTING PHASE
   ├── Run Playwright tests (should pass)
   ├── Run unit tests
   ├── Run integration tests
   ├── Check code coverage
   └── Verify no regressions
5. PR CREATION PHASE
   ├── Create merge request with:
   │   ├── Clear description
   │   ├── Testing evidence
   │   ├── Screenshots/traces if applicable
   │   └── Auto-link to JIRA ticket
   ├── Post CI/CD results
   ├── Wait for reviews
   └── Merge when approved
FEEDBACK & ITERATION (up to 3 times)
├── Monitor test failures
├── Self-heal common issues
├── Provide diagnostic information
└── Attempt auto-fix or escalate
Success Indicators:
- ✅ All tests pass (Playwright, unit, integration)
- ✅ No code coverage regression
- ✅ PR successfully created and auto-linked
- ✅ Documentation updated
- ✅ No manual intervention needed
Pattern 2: Cost Optimization Workflow
Token Usage Breakdown:
Average Cost Per Bug Fix:
Context Loading: 25,000 tokens (35%)
├── Architecture context
├── Repository structure
├── Existing patterns
└── Test infrastructure
Analysis Phase: 12,000 tokens (17%)
├── JIRA ticket parsing
├── Code review
└── Planning
Reproduction Phase: 8,000 tokens (11%)
├── Test generation
├── Test execution analysis
└── Evidence capture
Implementation Phase: 18,000 tokens (25%)
├── Code writing
├── Local testing
└── Refinement
Testing Phase: 5,000 tokens (7%)
├── Test monitoring
├── Result analysis
└── Coverage check
Total Average: 70,000 tokens (~$2.10/bug fix)
Optimization Opportunities:
├── Cache context (save 35% ≈ 25,000 tokens)
├── Reuse test patterns (save 20% of reproduction)
├── Parallel execution (reduce wall-clock time 30%)
└── Early termination on simple bugs
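As a sanity check on the breakdown above: 70,000 tokens at ~$2.10 implies roughly $30 per million tokens, and the five listed phases sum to 68,000 tokens, leaving ~2,000 tokens of unlisted overhead (e.g. PR creation). A sketch, with the token price treated as an assumption:

```python
# Per-phase token averages from the breakdown above.
PHASE_TOKENS = {
    "context_loading": 25_000,
    "analysis": 12_000,
    "reproduction": 8_000,
    "implementation": 18_000,
    "testing": 5_000,
}

PRICE_PER_MILLION = 30.0  # assumption: implied by 70,000 tokens ~ $2.10

def cost_usd(tokens):
    """Dollar cost of a token count at the assumed blended price."""
    return tokens * PRICE_PER_MILLION / 1_000_000

listed = sum(PHASE_TOKENS.values())   # 68,000 — remainder is PR/overhead
print(f"listed phases: {listed} tokens (${cost_usd(listed):.2f})")
print(f"total average: 70000 tokens (${cost_usd(70_000):.2f})")
```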
Optimization Strategies:
- Context Caching (saves 8,750 tokens per workflow):
Before: Load context fresh each time
Cost: 25,000 tokens per bug
After: Cache and reuse context
Cost: 16,250 tokens (35% savings)
Action: Implement context-manager agent
Timeline: 6 weeks
ROI: Break-even after 5 bugs
- Parallel Execution (saves 30% wall-clock time):
Before: Sequential phases (1 → 2 → 3 → 4 → 5)
Time: ~45 minutes per bug
After: Parallel where possible
- Phase 2 & 3 overlap (testing while implementing)
- Phase 1 & 2 analysis done in parallel
Time: ~30 minutes per bug
Implementation: Update /bug-fix workflow
Timeline: 1 week
Impact: 15 more bugs/day throughput
- Pattern Reuse (saves tokens, improves speed):
First IDOR vulnerability: 70,000 tokens
Second IDOR vulnerability: 35,000 tokens (50% savings)
└── Reuse test patterns and fixes
Action: Build pattern library for common bug types
Timeline: 2 weeks (after 10-15 bugs)
Savings: ~30% average cost reduction
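Context caching amounts to keying loaded context by repository and serving repeat loads from memory. A minimal sketch — the `fetch` loader and cache shape are hypothetical, not the context-manager agent's actual design:

```python
import hashlib

_context_cache: dict[str, str] = {}

def load_context(repo: str, fetch) -> str:
    """Return cached repository context when available; otherwise fetch
    fresh and cache it. `fetch` is a hypothetical loader callable."""
    key = hashlib.sha256(repo.encode()).hexdigest()
    if key in _context_cache:
        return _context_cache[key]   # cache hit: no fresh context tokens
    context = fetch(repo)            # cache miss: full context load
    _context_cache[key] = context
    return context

# Demo loader that records how often a fresh load actually happens.
calls = []
def fetch(repo):
    calls.append(repo)
    return f"architecture+patterns for {repo}"

load_context("repos/api", fetch)
load_context("repos/api", fetch)     # second call served from cache
print(len(calls))                    # fetched only once
```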
Pattern 3: Workflow Health Monitoring
Health Score Calculation:
Overall Workflow Health = (S × 0.3) + (I × 0.25) + (C × 0.2) + (A × 0.25)
Where:
S = Success Rate (target: 95%+)
I = Iteration Efficiency (1-2 iterations ideal)
C = Cost Efficiency (tokens per bug)
A = Agent Accuracy (code quality)
Health Score Interpretation:
90-100 = Excellent ✅ (no action needed)
80-90  = Good ⚠️ (monitor, optimize when needed)
70-80  = Fair ⚠️ (identify bottlenecks)
< 70   = Poor ❌ (investigation required)
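The weighted formula translates directly to code. A sketch using the 30-day summary's success rate and agent accuracy, plus illustrative iteration- and cost-efficiency scores (each component normalized to 0-100):

```python
def health_score(success_rate, iteration_eff, cost_eff, agent_acc):
    """Weighted workflow health: S*0.3 + I*0.25 + C*0.2 + A*0.25."""
    return (success_rate * 0.30
            + iteration_eff * 0.25
            + cost_eff * 0.20
            + agent_acc * 0.25)

def interpret(score):
    if score >= 90: return "Excellent"
    if score >= 80: return "Good"
    if score >= 70: return "Fair"
    return "Poor"

# 91.5% success and 94% accuracy from the dashboard below;
# 85 and 80 are made-up iteration/cost efficiency scores.
score = health_score(91.5, 85.0, 80.0, 94.0)
print(round(score, 2), interpret(score))
```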
Metrics Dashboard:
Last 30 Days Summary:
├── Bugs Fixed: 47
├── Success Rate: 91.5% (43/47)
├── Avg Iterations: 1.4
├── Avg Cost: $2.15 per bug
├── Total Cost: $101.05
├── Avg Time: 38 minutes
├── Agent Accuracy: 94%
└── Context Cache Hit Rate: 62%
Trend Analysis:
├── Cost trending down (-12% vs prev month)
├── Success rate improving (+5%)
├── Speed improving (-7 min avg time)
└── Cache efficiency improving (+8%)
Recommendations:
├── Deploy context-manager (projected 35% cost savings)
├── Implement parallel execution (30% speed improvement)
├── Build IDOR pattern library (50% cost savings for security bugs)
└── Add code review agent (improve accuracy to 98%)
Estimated Impact (if all implemented):
├── Cost: $101/month → $52/month (48% savings)
├── Speed: 38 min → 26 min (31% faster)
├── Success: 91% → 97% (+6%)
└── Throughput: 47 bugs → 72 bugs (+53%)
Multi-Repository Coordination
Cross-Repository Bug Fixes
Scenario: Bug requires changes in multiple repositories
Bug: Contact creation fails because validation differs between frontend and API
Step 1: Analysis
├── Identify affected repositories:
│   ├── repos/frontend (React validation)
│   ├── repos/api (C# validation)
│   └── repos/legacy-api (legacy validation)
├── Find root cause (one has different rules)
└── Plan synchronization strategy
Step 2: Design Solution
├── Decide on source of truth:
│   ├── Option A: Shared validation schema
│   ├── Option B: One repo leads, others follow
│   └── Option C: Message-based synchronization
└── Determine update order
Step 3: Implementation Order
├── First: Backend (API) - source of truth
├── Second: Frontend (React) - sync with API
└── Third: Legacy API - gradual migration
Step 4: Testing
├── Test API validation changes
├── Test Frontend integration with new API
├── Test Legacy API still works (compatibility mode)
└── End-to-end workflow test
Step 5: Deployment
├── Deploy API changes first
├── Monitor for issues
├── Deploy frontend changes
├── Monitor E2E tests
└── Plan legacy-API deprecation
Coordination Patterns
Pattern 1: Sequential Deployment
repo/api → repo/frontend → (later) repos/legacy-api
Used when: Backward compatibility needed
Risk: Low (version gating)
Speed: Slower (staggered deploys)
Pattern 2: Parallel Deployment
repo/api ──────────┐
                   ├── repo/frontend
repos/legacy-api ──┘
Used when: Breaking changes or major refactor
Risk: Medium (coordination required)
Speed: Faster (parallel work)
Pattern 3: Feature Flag Driven
Deploy all changes with flags OFF
Enable flags gradually per region/user
Rollback by disabling flags
Used when: Zero-downtime deployment needed
Risk: Low (easy rollback)
Speed: Medium (flag toggling)
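A common way to implement gradual per-user enablement is deterministic hash bucketing, so a given user sees a stable flag state as the rollout percentage grows. A sketch under that assumption (flag name and rollout steps are illustrative):

```python
import hashlib

def flag_enabled(flag: str, user_id: str, rollout_pct: int) -> bool:
    """Deterministic percentage rollout: hash (flag, user) into a 0-99
    bucket and compare against the rollout percentage. Rollback = 0%."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < rollout_pct

# Gradual enablement: 0% (deploy dark) -> 10% -> 100%.
users = [f"user-{i}" for i in range(1000)]
for pct in (0, 10, 100):
    on = sum(flag_enabled("new-validation", u, pct) for u in users)
    print(pct, on)
```

Because the bucket is derived from a hash rather than a random draw, raising the percentage only ever adds users; nobody flips back and forth between states.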
Automation & Self-Healing
Auto-Healing Strategy
Tier 1: Deterministic Fixes (High confidence)
Issue: Formatting violations
Fix: Auto-apply prettier/eslint
Confidence: 100%
Action: Auto-commit, notify user
Issue: Missing nullable type annotations
Fix: Add ? to type signature
Confidence: 98%
Action: Suggest, wait for approval
Tier 2: Heuristic Fixes (Medium confidence)
Issue: Test failing on assertion
Fix: Suggest mock adjustment
Confidence: 75%
Action: Create PR with suggestion, wait for review
Issue: API endpoint not found
Fix: Check version mismatch, suggest compatibility mode
Confidence: 70%
Action: Log issue, escalate to human
Tier 3: Manual Escalation (Low confidence)
Issue: Unexpected algorithm behavior
Fix: Escalate to human with diagnostics
Confidence: < 50%
Action: Provide full context, request human decision
Issue: Design decision conflict
Fix: Escalate with alternatives
Confidence: < 40%
Action: Request human judgment
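The three tiers reduce to routing on fix confidence. A sketch with illustrative cutoffs (0.95 and 0.70, chosen to match the confidence bands above; the real dispatcher may use different thresholds):

```python
def dispatch(issue: str, confidence: float) -> str:
    """Route a detected issue to an action by fix confidence."""
    if confidence >= 0.95:   # Tier 1: deterministic fixes
        return f"auto-apply fix for {issue!r}, notify user"
    if confidence >= 0.70:   # Tier 2: heuristic fixes
        return f"open suggestion PR for {issue!r}, wait for review"
    return f"escalate {issue!r} to human with diagnostics"  # Tier 3

print(dispatch("formatting violations", 1.00))
print(dispatch("failing assertion", 0.75))
print(dispatch("design decision conflict", 0.40))
```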
Best Practices
DO ✅
- Monitor workflow metrics regularly (weekly)
- Implement incremental improvements (1 per sprint)
- Cache reusable context and patterns
- Run cost analysis monthly
- Maintain improvement backlog
- Document successful patterns
- Share patterns across team
- Automate repetitive tasks
- Monitor agent accuracy
- Plan for scale growth
DON’T ❌
- Don’t ignore efficiency metrics
- Don’t over-engineer before measuring
- Don’t skip documentation
- Don’t lose track of costs
- Don’t implement all optimizations at once
- Don’t ignore team feedback
- Don’t assume one size fits all bugs
- Don’t forget to measure improvements
- Don’t create technical debt for speed
- Don’t forget to update patterns
Related Resources
- Bug Fix Automation: /bug-fix command (2120+ lines)
- Session Tracking: log-session.sh script
- Context Management: context-manager-integration-plan.md
- Agent Coordination: agent-organizer agent
Troubleshooting
| Issue | Indicator | Solution |
|---|---|---|
| High costs | > $3/bug average | Analyze token usage, implement caching |
| Low success rate | < 85% pass rate | Review agent accuracy, add patterns |
| Slow execution | > 60 min avg time | Profile phases, parallelize where possible |
| Cache misses | < 50% hit rate | Expand cache policies, reuse patterns |
| Manual escalations | > 10% of bugs | Improve auto-healing heuristics |
Getting Help
For workflow optimization:
- Invoke product-manager agent for strategy
- Invoke performance-engineer for bottleneck analysis
- Invoke agent-organizer for coordination issues
- Check /docs/workflow-engine-guide.md for advanced topics