code-review
npx skills add https://github.com/carvalab/k-skills --skill code-review
Code Review
Perform a thorough senior-level code review on recent changes. Focus human review on complex logic, architecture, and edge cases. Delegate style enforcement to automated tools.
Related Skills:
- kavak-documentation – Query for Kavak-specific patterns, logging/metrics standards, and GitLab CI templates
- Use the `kavak-platform/plati_query` MCP tool to verify Kavak best practices when reviewing
Quick Start
```bash
# 1. Get review range
BASE_SHA=$(git rev-parse HEAD~1)   # or: git merge-base main HEAD
HEAD_SHA=$(git rev-parse HEAD)

# 2. See what changed
git diff --stat $BASE_SHA..$HEAD_SHA
git diff --name-only $BASE_SHA..$HEAD_SHA --diff-filter=ACMR

# 3. After review, run quality gates (detect project type)
```
Quality gate commands by language:
| Language | Lint | Build | Test |
|---|---|---|---|
| Go | golangci-lint run | go build ./... | go test ./... |
| Node/TS | npm run lint | npm run build | npm test |
| Python | ruff check . | python -m py_compile | pytest |
| Java | ./mvnw checkstyle:check | ./mvnw compile | ./mvnw test |
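The detection step mentioned in the Quick Start can be sketched as a small dispatcher keyed off common marker files (go.mod, package.json, pyproject.toml/requirements.txt, pom.xml). The markers are conventional assumptions; verify them against your repositories:

```shell
# Print the quality-gate command chain for the project in $1,
# based on which marker file is present.
detect_gates() {
  dir="$1"
  if [ -f "$dir/go.mod" ]; then
    echo "golangci-lint run && go build ./... && go test ./..."
  elif [ -f "$dir/package.json" ]; then
    echo "npm run lint && npm run build && npm test"
  elif [ -f "$dir/pyproject.toml" ] || [ -f "$dir/requirements.txt" ]; then
    echo "ruff check . && pytest"
  elif [ -f "$dir/pom.xml" ]; then
    echo "./mvnw checkstyle:check && ./mvnw compile && ./mvnw test"
  else
    echo "unknown project type" >&2
    return 1
  fi
}

# Example: a directory containing go.mod is detected as a Go project.
demo=$(mktemp -d)
touch "$demo/go.mod"
detect_gates "$demo"   # → golangci-lint run && go build ./... && go test ./...
```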
When to Self-Review (Automated)
Mandatory triggers:
- After completing each task in multi-step development
- After implementing a major feature
- Before merging to main branch
Valuable triggers:
- When stuck (fresh perspective on own code)
- Before refactoring (baseline check)
- After fixing complex bugs
Review Mindset
- Thorough, not rushed – Read all related code before concluding
- Evidence-based – Trace execution paths, don’t assume bugs exist
- Fix, don’t just flag – Identify issues AND resolve them
- Small scope – Review <400 lines at a time for effectiveness
Workflow
1. Scope the Diff
Identify all changed files:

```bash
git diff --name-only HEAD~1 --diff-filter=ACMR
```

Skip any listed file that produces no actual diff hunks.
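One way to implement this step, assuming a base and head ref: a file is worth reviewing only if its diff contains at least one `@@` hunk header, which filters out mode-only or pure-rename entries:

```shell
# List files to review between two refs, skipping entries whose diff
# contains no hunks (e.g. mode-only changes or pure renames).
review_files() {
  base="$1"; head="$2"
  git diff --name-only "$base..$head" --diff-filter=ACMR |
  while IFS= read -r f; do
    # a reviewable file has at least one @@ hunk header
    if git diff --unified=0 "$base..$head" -- "$f" | grep -q '^@@'; then
      echo "$f"
    fi
  done
}
```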
2. Understand Intent & Requirements
Before reviewing code, understand what it should do:
- Original task/issue: What was requested?
- Product impact: What does this deliver for users?
- Acceptance criteria: What defines “done”?
Ask: “Does the implementation satisfy the requirements?”
3. Review Each File
For each changed file and each diff hunk, evaluate in context of the existing codebase.
MANDATORY: Trace Complete Execution Before Flagging Critical/Major
Before classifying any issue as Critical or Major, you MUST:
- Read the complete code path – not just the suspicious line
- Trace what actually happens – follow execution from start to end
- Verify the issue exists – confirm there is no defensive code preventing it
- Consider all scenarios – including error paths, cancellation, edge cases
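A mechanical way to start that trace is to pull every call site of the suspect function with surrounding context, so defensive checks are visible before judging. This is a sketch; `parseConfig` and the `*.go` filter are hypothetical stand-ins for whatever you are reviewing:

```shell
# Print every call site of a function with a few lines of context so
# guards (nil checks, retries, recovers) are visible before flagging.
trace_callers() {
  fn="$1"; dir="$2"
  grep -rn -B2 -A4 --include='*.go' "${fn}(" "$dir"
}

# Usage sketch: trace_callers parseConfig ./internal
```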
4. Review Categories
| Category | What to Check |
|---|---|
| Design & Architecture | Fits system patterns, avoids coupling, clear separation |
| Complexity & Maintainability | Flat control flow, low complexity, DRY, no dead code |
| Functionality & Correctness | Valid/invalid inputs handled, edge cases covered |
| Readability & Naming | Intent-revealing names, comments explain WHY |
| Best Practices | Language/framework idioms, SOLID principles |
| Test Coverage | Success + failure paths tested |
| Standardization | Style guide conformance, zero new lint warnings |
| Security | Input validation, secrets management, OWASP Top 10 |
| Performance | No N+1 queries, efficient I/O, no memory leaks |
| Logging | State machine pattern, proper tense, actionable context |
| Metrics | No cardinality explosions, proper naming, documented |
Automation tip: Delegate style/formatting to linters (ESLint, Prettier). Focus human review on logic, architecture, and edge cases.
4.1 Logging Review
| Check | ✅ Good | ❌ Bad |
|---|---|---|
| Length | 50-200 chars | >200 (truncation) |
| Multiline | Single line | Multiple lines |
| Start | “Syncing orders” (progressive) | “Sync orders” |
| End OK | “Orders synced” (past) | “Syncing complete” |
| End Fail | “Failed to sync orders” | “Error” |
| Context | “DB timeout after 30s” | “Database error” |
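The length and multiline checks above are mechanical enough to pre-screen in a helper. A minimal sketch (thresholds mirror the table; this is a heuristic, not a Kavak lint rule):

```shell
# Heuristic pre-checks for a single log message: reject multiline
# messages and messages over 200 chars (likely to be truncated).
check_log_msg() {
  msg="$1"
  nl='
'
  case "$msg" in *"$nl"*) echo "FAIL: multiline"; return 1;; esac
  if [ "${#msg}" -gt 200 ]; then
    echo "FAIL: over 200 chars (may truncate)"
    return 1
  fi
  echo "ok"
}

check_log_msg "Orders synced"   # → ok
```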
4.2 Metrics Review
| Check | ✅ Good | ❌ Bad (CRITICAL) |
|---|---|---|
| Labels | status=2xx, method=GET | user_id, order_id |
| Naming | api.request.total | apiRequestTotal |
| Type | Counter (events), Histogram (latency) | Gauge for everything |
CRITICAL: High-cardinality labels (user_id, request_id, timestamp) cause monitoring system crashes.
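A crude grep can surface candidate high-cardinality labels before a human look. The blocklist below is an assumption; tune it for your monitoring stack and naming conventions:

```shell
# Flag likely high-cardinality metric label keys in a source tree.
# Prints file:line matches; exits nonzero if nothing suspicious is found.
scan_high_cardinality() {
  dir="$1"
  grep -rnE "(user_id|order_id|request_id|session_id|timestamp)\"?[[:space:]]*[:=]" "$dir"
}

# Usage sketch: scan_high_cardinality ./src && echo "review these labels"
```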
5. Check Project Rules
MANDATORY: Check and enforce rules from:
- `.claude/CLAUDE.md` or `CLAUDE.md` (root)
- `.cursor/rules/` folder
- `AGENTS.md`
Follow ALL patterns and conventions defined there.
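Locating those rule files can be scripted; the paths are the ones this skill mandates, checked from the repo root:

```shell
# Print every project rule file that exists under the repo root in $1.
find_rule_files() {
  dir="$1"
  for f in .claude/CLAUDE.md CLAUDE.md AGENTS.md; do
    [ -f "$dir/$f" ] && echo "$dir/$f"
  done
  # .cursor/rules/ may hold several rule files
  if [ -d "$dir/.cursor/rules" ]; then
    find "$dir/.cursor/rules" -type f
  fi
  return 0
}
```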
6. Report & Fix Issues
For each validated issue:
- File: `<path>:<line-range>`
- Issue: One-line summary
- Fix: Concise suggested change
Severity levels:
- Critical (P0): Security vulnerabilities, data loss, crashes
- Major (P1): Significant bugs, performance issues, architectural violations
- Minor (P2): Code style, minor improvements
- Enhancement (P3): Nice-to-have improvements
MANDATORY: Fix all issues immediately after identifying them.
- Start with highest priority (Critical → Major → Minor)
- For each issue:
- Fix the code
- Verify fix doesn’t break anything
- CHECKPOINT COMMIT: `git add -A && git commit -m "fix: brief description"`
- Continue until ALL issues are resolved
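The fix-verify-commit cycle above can be sketched as a helper that only checkpoints when the project's gate passes. `gate_cmd` is whatever the Quick Start table prescribes for your stack:

```shell
# Verify a fix with the project's gate command, then make one commit
# per fix so each can be reverted independently.
checkpoint_fix() {
  desc="$1"; gate_cmd="$2"
  sh -c "$gate_cmd" || { echo "gate failed; fix before committing" >&2; return 1; }
  git add -A && git commit -q -m "fix: $desc"
}

# Usage sketch: checkpoint_fix "handle nil config" "go test ./..."
```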
7. Run Quality Gates
After all issues are fixed, run lint → build → test for the project type (see Quick Start table).
If any gate fails, fix immediately and commit the fix.
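Running the gates in order with an early stop keeps the fix loop targeted at one failure at a time. A minimal sketch; pass the commands for your stack from the Quick Start table:

```shell
# Run gate commands in order; stop at the first failure so it can be
# fixed and committed before the remaining gates run.
run_gates() {
  for gate in "$@"; do
    echo ">> $gate"
    sh -c "$gate" || { echo "GATE FAILED: $gate"; return 1; }
  done
  echo "all gates passed"
}

# Usage sketch: run_gates "npm run lint" "npm run build" "npm test"
```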
8. Final Report
Provide structured output:
### Strengths
[What's well done - be specific with file:line references]
### Issues Found & Fixed
- **Critical**: [count] - [brief list]
- **Major**: [count] - [brief list]
- **Minor**: [count] - [brief list]
### Quality Gates
- Lint: ✅/❌
- Typecheck: ✅/❌
- Build: ✅/❌
- Tests: ✅/❌
### Assessment
**Ready to proceed?** [Yes / Yes with notes / No - needs fixes]
**Reasoning:** [1-2 sentence technical assessment]
If no issues: “Code review complete. No issues found. All quality gates pass. Ready to proceed.”
References
| Reference | Purpose |
|---|---|
| references/checklist.md | Detailed review checklist |
| references/severity-guide.md | How to classify issue severity |
| references/common-issues.md | Common issues (TypeScript/Node) |
| references/common-issues-go.md | Common issues (Go) |
| references/security-owasp.md | OWASP Top 10 security checklist |
| references/feedback-guide.md | How to give constructive feedback |
| references/self-review-workflow.md | Automated self-review during development |
Best Practices
- No assumptions – Read ALL related code before flagging issues
- Fix immediately – Don’t just report, fix and commit
- Checkpoint commits – Commit each fix separately for easy rollback
- Verify first – Trace execution paths before claiming bugs exist
- Project rules priority – Always check `.cursor/rules/` and `AGENTS.md`
- Be timely – Review promptly to avoid blocking teammates
- Limit scope – Review <400 lines at a time for effectiveness
Principle: Review → Fix → Verify → Commit. A review that only identifies problems without fixing them is incomplete.