software-clean-code-standard

Install command
npx skills add https://github.com/vasilyu1983/ai-agents-public --skill software-clean-code-standard


Clean Code Standard — Quick Reference

This skill is the authoritative clean code standard for this repository’s shared skills. It defines stable rule IDs (CC-*), how to apply them in reviews, and how to extend them safely via language overlays and explicit exceptions.

Modern Best Practices (January 2026):

  • Prefer small, reviewable changes and durable change context (https://google.github.io/eng-practices/review/developer/small-cls.html, https://google.github.io/eng-practices/review/developer/cl-descriptions.html).
  • Use normative language consistently (RFC 2119: https://www.rfc-editor.org/rfc/rfc2119).
  • Treat security-by-design and secure defaults as baseline (OWASP Top 10: https://owasp.org/www-project-top-ten/; NIST SSDF SP 800-218: https://csrc.nist.gov/pubs/sp/800/218/final).
  • Build observable systems (OpenTelemetry: https://opentelemetry.io/docs/).
  • For current tool choices, consult data/sources.json.


Quick Reference

| Task | Tool/Framework | Command | When to Use |
| --- | --- | --- | --- |
| Cite a standard | CC-* rule ID | N/A | PR review comments, design discussions, postmortems |
| Categorize feedback | CC-NAM, CC-ERR, CC-SEC, etc. | N/A | Keep feedback consistent without “style wars” |
| Add stack nuance | Language overlay | N/A | When the base rule is too generic for a language/framework |
| Allow an exception | Waiver record | N/A | When a rule must be violated with explicit risk |
| Reuse shared checklists | assets/checklists/ | N/A | When you need product-agnostic review/release checklists |
| Reuse utility patterns | utilities/ | N/A | When extracting shared auth/logging/errors/resilience/testing utilities |

When to Use This Skill

  • Defining or enforcing clean code rules across teams and languages.
  • Reviewing code: cite CC-* IDs instead of restating the standard in each comment.
  • Building automation: map linters/CI gates to CC-* IDs.
  • Resolving recurring review debates: align on rule IDs, scope, and exceptions.

When NOT to Use This Skill

Decision Tree: Base Rule vs Overlay vs Exception

Feedback needed: [What kind of guidance is this?]
    ├─ Universal, cross-language rule? → Add/modify `CC-*` in `references/clean-code-standard.md`
    │
    ├─ Language/framework-specific nuance? → Add overlay entry referencing existing `CC-*`
    │
    └─ One-off constraint or temporary tradeoff?
        ├─ Timeboxed? → Add waiver with expiry + tracking issue
        └─ Permanent? → Propose a new rule or revise scope/exception criteria
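
When a waiver is the right branch, record it in a machine-checkable form. The sketch below (Python; the field names and example values are illustrative, not this repository's actual waiver schema) shows the minimum a waiver needs: a rule reference, a scope, an explicit risk, a tracking issue, and an expiry that CI can enforce.

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class Waiver:
    """One explicit, timeboxed exception to a CC-* rule.

    Field names are illustrative; align them with the repository's
    actual waiver template before adopting.
    """

    rule_id: str          # e.g. "CC-SEC"; must reference an existing rule
    scope: str            # file, module, or service the waiver covers
    reason: str           # the explicit risk/tradeoff being accepted
    tracking_issue: str   # issue URL; placeholder value below
    expires: date         # timebox; expired waivers should fail review/CI

    def is_expired(self, today: date | None = None) -> bool:
        return (today or date.today()) >= self.expires


# Usage: a reviewer or CI gate rejects expired waivers.
waiver = Waiver(
    rule_id="CC-SEC",
    scope="services/legacy-auth",
    reason="Legacy hash scheme kept until migration completes",
    tracking_issue="https://github.com/example/repo/issues/1",
    expires=date(2026, 6, 30),
)
assert not waiver.is_expired(date(2026, 1, 23))
```

Keeping the expiry machine-readable lets a CI gate fail once the timebox lapses, which is what makes a waiver a tradeoff rather than a permanent blind spot.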



Optional: AI/Automation

  • Map automation findings to CC-* IDs (linters, SAST, dependency scanning) so humans can review impact, not tooling noise.
  • Keep AI-assisted suggestions advisory; human reviewers approve/deny with rule citations (https://conventionalcomments.org/).
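
A sketch of the first point, in Python: it reads ESLint's JSON output (`eslint -f json`) and tags each finding with a CC-* ID via a hand-maintained table. The specific linter-rule-to-CC mappings below are invented for illustration; the real table should live next to the standard and be reviewed like any other rule change.

```python
import json
import sys

# Illustrative only: map linter rule IDs to CC-* rule IDs so review
# feedback cites the standard instead of raw tooling output.
LINTER_TO_CC = {
    "no-unused-vars": "CC-NAM",  # hypothetical mapping
    "eqeqeq": "CC-ERR",          # hypothetical mapping
    "no-eval": "CC-SEC",         # hypothetical mapping
}


def tag_findings(eslint_json: str) -> list[dict]:
    """Attach CC-* IDs to ESLint findings (`eslint -f json` output)."""
    tagged = []
    for result in json.loads(eslint_json):
        for msg in result.get("messages", []):
            # ruleId can be null (e.g. parse errors), so default to "".
            cc_id = LINTER_TO_CC.get(msg.get("ruleId") or "", "CC-UNMAPPED")
            tagged.append({
                "file": result["filePath"],
                "line": msg.get("line"),
                "linter_rule": msg.get("ruleId"),
                "cc_rule": cc_id,   # "CC-UNMAPPED" is an invented sentinel
                "message": msg.get("message"),
            })
    return tagged


if __name__ == "__main__":
    # Usage: eslint src/ -f json | python tag_findings.py
    for f in tag_findings(sys.stdin.read()):
        print(f'{f["cc_rule"]}: {f["file"]}:{f["line"]} '
              f'({f["linter_rule"]}) {f["message"]}')
```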

Trend Awareness Protocol

IMPORTANT: When users ask recommendation questions about clean code standards, linters, or code quality tools, you MUST use a web search capability (if available) to check current trends before answering. If web search is unavailable, say so and answer using data/sources.json, clearly flagging that the recommendation may be stale.
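
In code terms, the gate amounts to the following sketch; `web_search` is a placeholder for whatever search capability the agent actually has, and the fallback path is the one described above.

```python
import json
from pathlib import Path


def recommend_tooling(question: str, web_search=None) -> str:
    """Sketch of the trend-awareness gate; `web_search` is hypothetical."""
    if web_search is not None:
        # Capability present: check current trends before answering.
        return f"Based on current results: {web_search(question + ' 2026')}"
    # Capability absent: say so and fall back to pinned sources.
    sources = json.loads(Path("data/sources.json").read_text())
    return ("Web search unavailable; answering from data/sources.json "
            f"(recommendation may be stale): {sources}")
```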

Trigger Conditions

  • “What’s the best linter for [language]?”
  • “What should I use for [code quality/static analysis]?”
  • “What’s the latest in clean code practices?”
  • “Current best practices for [code standards/formatting]?”
  • “Is [ESLint/Prettier/Biome] still relevant in 2026?”
  • “[Biome] vs [ESLint] vs [other]?”
  • “Best static analysis tool for [language]?”

Required Searches

  1. Search: "clean code best practices 2026"
  2. Search: "[specific linter] vs alternatives 2026"
  3. Search: "code quality tools trends 2026"
  4. Search: "[language] linter comparison 2026"
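
These templates expand mechanically once the bracketed slots are known; a minimal sketch:

```python
def required_searches(tool: str, language: str) -> list[str]:
    """Expand the four required search templates above."""
    return [
        "clean code best practices 2026",
        f"{tool} vs alternatives 2026",
        "code quality tools trends 2026",
        f"{language} linter comparison 2026",
    ]


# e.g. required_searches("Biome", "TypeScript")
```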

What to Report

After searching, provide:

  • Current landscape: What linters/formatters are popular NOW
  • Emerging trends: New tools, standards, or patterns gaining traction
  • Deprecated/declining: Tools/approaches losing relevance or support
  • Recommendation: Based on fresh data, not just static knowledge
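
If the report is assembled programmatically, one possible shape (field names are illustrative) is:

```python
from dataclasses import dataclass, field


@dataclass
class TrendReport:
    """Illustrative container for the four report sections above."""

    current_landscape: list[str] = field(default_factory=list)
    emerging_trends: list[str] = field(default_factory=list)
    deprecated_or_declining: list[str] = field(default_factory=list)
    recommendation: str = ""
```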

Example Topics (verify with fresh search)

  • JavaScript/TypeScript linters (ESLint, Biome, oxlint)
  • Formatters (Prettier, dprint, Biome)
  • Python quality (Ruff, mypy, pylint)
  • Go linting (golangci-lint, staticcheck)
  • Rust analysis (clippy, cargo-deny)
  • Code quality metrics and reporting tools
  • AI-assisted code review tools