cause-and-effect
Install command
npx skills add https://github.com/glennguilloux/context-engineering-kit --skill cause-and-effect
Skill Documentation
Cause and Effect Analysis
Apply Fishbone (Ishikawa) diagram analysis to systematically explore all potential causes of a problem across multiple categories.
Description
Systematically examine potential causes across six categories: People, Process, Technology, Environment, Methods, and Materials. Creates a structured “fishbone” view that identifies contributing factors.
Usage
/cause-and-effect [problem_description]
Variables
- PROBLEM: Issue to analyze (default: prompt for input)
- CATEGORIES: Categories to explore (default: all six)
Steps
- State the problem clearly (the “head” of the fish)
- For each category, brainstorm potential causes:
- People: Skills, training, communication, team dynamics
- Process: Workflows, procedures, standards, reviews
- Technology: Tools, infrastructure, dependencies, configuration
- Environment: Workspace, deployment targets, external factors
- Methods: Approaches, patterns, architectures, practices
- Materials: Data, dependencies, third-party services, resources
- For each potential cause, ask “why” to dig deeper
- Identify which causes are contributing vs. root causes
- Prioritize causes by impact and likelihood
- Propose solutions for highest-priority causes
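The steps above can be sketched as a small data model; the names (`Cause`, `prioritize`) and the 1-5 impact/likelihood scale are illustrative assumptions, not part of the skill itself:

```python
from dataclasses import dataclass, field

# The six fishbone categories named in the steps above
CATEGORIES = ["People", "Process", "Technology", "Environment", "Methods", "Materials"]

@dataclass
class Cause:
    description: str
    category: str
    whys: list = field(default_factory=list)  # answers from repeated "why?" digging
    is_root: bool = False                     # contributing vs. root cause
    impact: int = 1                           # 1 (low) .. 5 (high), rated during analysis
    likelihood: int = 1                       # 1 (low) .. 5 (high)

    @property
    def priority(self) -> int:
        return self.impact * self.likelihood

def prioritize(causes):
    """Root causes first, then by impact x likelihood."""
    return sorted(causes, key=lambda c: (c.is_root, c.priority), reverse=True)

# Two causes from a hypothetical latency analysis
slow_queries = Cause("Database queries not optimized", "Technology",
                     whys=["No query analysis tools in place"],
                     impact=5, likelihood=4)
no_sla = Cause("No SLA defined for response times", "Process",
               is_root=True, impact=4, likelihood=5)

for c in prioritize([slow_queries, no_sla]):
    print(f"{c.category}: {c.description} (priority {c.priority})")
```

Ranking root causes above contributing causes mirrors step 5: a root cause with the same score should be addressed first.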
Examples
Example 1: API Response Latency
Problem: API responses take 3+ seconds (target: <500ms)
PEOPLE
├─ Team unfamiliar with performance optimization
├─ No one owns performance monitoring
└─ Frontend team doesn't understand backend constraints
PROCESS
├─ No performance testing in CI/CD
├─ No SLA defined for response times
└─ Performance regression not caught in code review
TECHNOLOGY
├─ Database queries not optimized
│  └─ Why: No query analysis tools in place
├─ N+1 queries in ORM
│  └─ Why: Eager loading not configured
├─ No caching layer
│  └─ Why: Redis not in tech stack
└─ Synchronous external API calls
   └─ Why: No async architecture in place
ENVIRONMENT
├─ Production uses smaller database instance than needed
├─ No CDN for static assets
└─ Single region deployment (high latency for distant users)
METHODS
├─ REST API design requires multiple round trips
├─ No pagination on large datasets
└─ Full object serialization instead of selective fields
MATERIALS
├─ Large JSON payloads (unnecessary data)
├─ Uncompressed responses
└─ Third-party API (payment gateway) is slow
   └─ Why: Free tier with rate limiting
ROOT CAUSES:
- No performance requirements defined (Process)
- Missing performance monitoring tooling (Technology)
- Architecture doesn't support caching/async (Methods)
SOLUTIONS (Priority Order):
1. Add database indexes (quick win, high impact)
2. Implement Redis caching layer (medium effort, high impact)
3. Make external API calls async with webhooks (high effort, high impact)
4. Define and monitor performance SLAs (low effort, prevents regression)
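Solution 2 proposes a Redis caching layer. As a rough sketch of the idea only, here is a tiny in-process TTL cache (a stand-in: a real deployment would use a Redis client and shared keys, and the function names here are invented for illustration):

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds=60):
    """Cache results per argument tuple for ttl_seconds (in-process Redis stand-in)."""
    def decorator(fn):
        store = {}
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[1] < ttl_seconds:
                return hit[0]            # fresh cached value: skip the slow call
            result = fn(*args)
            store[args] = (result, now)
            return result
        return wrapper
    return decorator

calls = 0

@ttl_cache(ttl_seconds=30)
def fetch_user_profile(user_id):
    global calls
    calls += 1                           # counts actual "database" hits
    return {"id": user_id, "name": f"user-{user_id}"}

fetch_user_profile(1)
fetch_user_profile(1)                    # served from cache; the slow path runs once
```

The same shape applies to Redis: compute a key from the arguments, `GET` before the query, `SETEX` after.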
Example 2: Flaky Test Suite
Problem: 15% of test runs fail, passing on retry
PEOPLE
├─ Test-writing skills vary across team
├─ New developers copy existing flaky patterns
└─ No one assigned to fix flaky tests
PROCESS
├─ Flaky tests marked as "known issue" and ignored
├─ No policy against merging with flaky tests
└─ Test failures don't block deployments
TECHNOLOGY
├─ Race conditions in async test setup
├─ Tests share global state
├─ Test database not isolated per test
├─ setTimeout used instead of proper waiting
└─ CI environment inconsistent (different CPU/memory)
ENVIRONMENT
├─ CI runner under heavy load
├─ Network timing varies (external API mocks flaky)
└─ Timezone differences between local and CI
METHODS
├─ Integration tests not properly isolated
├─ No retry logic for legitimate timing issues
└─ Tests depend on execution order
MATERIALS
├─ Test data fixtures overlap
├─ Shared test database polluted
└─ Mock data doesn't match production patterns
ROOT CAUSES:
- No test isolation strategy (Methods + Technology)
- Process accepts flaky tests (Process)
- Async timing not handled properly (Technology)
SOLUTIONS:
1. Implement per-test database isolation (high impact)
2. Replace setTimeout with proper async/await patterns (medium impact)
3. Add pre-commit hook blocking flaky test patterns (prevents new issues)
4. Enforce policy: flaky test = block merge (process change)
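Solution 2 replaces setTimeout-style fixed sleeps with polling for the condition the test actually cares about. A minimal sketch of that pattern (the `wait_until` helper is hypothetical, not from any particular test framework):

```python
import threading
import time

def wait_until(predicate, timeout=5.0, interval=0.05):
    """Poll predicate() until true or timeout, instead of sleeping a fixed time."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Usage: instead of time.sleep(2) and hoping the background worker finished,
# wait for the observable result itself.
results = []

def worker():
    time.sleep(0.1)          # simulates variable async work
    results.append("done")

threading.Thread(target=worker).start()
assert wait_until(lambda: len(results) == 1)
```

Fixed sleeps are both too short under CI load (flaky) and too long on fast machines (slow); polling a condition removes both failure modes.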
Example 3: Feature Takes 3 Months Instead of 3 Weeks
Problem: Simple CRUD feature took 12 weeks vs. a 3-week estimate
PEOPLE
├─ Developer unfamiliar with codebase
├─ Key architect on vacation during critical phase
└─ Designer changed requirements mid-development
PROCESS
├─ Requirements not finalized before starting
├─ No code review for first 6 weeks (large diff)
├─ Multiple rounds of design revision
└─ QA started late (found issues in week 10)
TECHNOLOGY
├─ Codebase has high coupling (change ripple effects)
├─ No automated tests (manual testing slow)
├─ Legacy code required refactoring first
└─ Development environment setup took 2 weeks
ENVIRONMENT
├─ Staging environment broken for 3 weeks
├─ Production data needed for testing (compliance delay)
└─ Dependencies blocked by another team
METHODS
├─ No incremental delivery (big bang approach)
├─ Over-engineering (added future features "while we're at it")
└─ No design doc (discovered issues during implementation)
MATERIALS
├─ Third-party API changed during development
├─ Production data model different than staging
└─ Missing design assets (waited for designer)
ROOT CAUSES:
- No requirements lock-down before start (Process)
- Architecture prevents incremental changes (Technology)
- Big bang approach vs. iterative (Methods)
- Development environment not automated (Technology)
SOLUTIONS:
1. Require design doc + finalized requirements before starting (Process)
2. Implement feature flags for incremental delivery (Methods)
3. Automate dev environment setup (Technology)
4. Refactor high-coupling areas (Technology, long-term)
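Solution 2 suggests feature flags for incremental delivery. A minimal sketch, assuming a simple in-process flag table (real systems typically read flags from config or a flag service; the flag names below are invented):

```python
# Hypothetical flag state; in production this would come from config or a flag service.
FLAGS = {"new_crud_ui": False, "new_crud_api": True}

def is_enabled(flag: str) -> bool:
    """Unknown flags default to off, so unfinished code paths stay dark."""
    return FLAGS.get(flag, False)

def handle_request(payload):
    if is_enabled("new_crud_api"):
        # New code path ships to production behind the flag, merged in small pieces
        return {"handler": "new", "data": payload}
    return {"handler": "legacy", "data": payload}

print(handle_request({"id": 1}))
```

Because half-finished work can merge behind a disabled flag, diffs stay small and reviewable, avoiding the 6-week unreviewed branch from this example.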
Notes
- Fishbone reveals systemic issues across domains
- Multiple causes often combine to create problems
- Don't stop at the first cause in each category; dig deeper
- Some causes span multiple categories (mark them)
- Root causes usually in Process or Methods (not just Technology)
- Use with /why command for deeper analysis of specific causes
- Prioritize solutions by: impact × feasibility ÷ effort
- Address root causes, not just symptoms
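The prioritization formula in the notes (impact × feasibility ÷ effort) can be applied numerically; the 1-5 ratings below are invented for illustration, using solutions from Example 1:

```python
# Each tuple: (solution, impact, feasibility, effort), all rated 1-5 by the team.
solutions = [
    ("Add database indexes",            5, 5, 1),
    ("Implement Redis caching layer",   5, 4, 3),
    ("Async external calls + webhooks", 5, 3, 5),
]

def score(impact, feasibility, effort):
    return impact * feasibility / effort

ranked = sorted(solutions, key=lambda s: score(*s[1:]), reverse=True)
for name, *ratings in ranked:
    print(f"{score(*ratings):5.2f}  {name}")
```

The quick win (high feasibility, low effort) floats to the top, matching the "priority order" shown in Example 1.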