# review-ai-writing
## Installation
npx skills add https://github.com/existential-birds/beagle --skill review-ai-writing
## AI Writing Detection for Developer Text
Detect patterns characteristic of AI-generated text in developer artifacts. These patterns reduce trust, add noise, and obscure meaning.
### Pattern Categories
| Category | Reference | Key Signals |
|---|---|---|
| Content | `references/content-patterns.md` | Promotional language, vague authority, formulaic structure, synthetic openers |
| Vocabulary | `references/vocabulary-patterns.md` | AI word tiers, copula avoidance, rhetorical devices, synonym cycling, commit inflation |
| Formatting | `references/formatting-patterns.md` | Boldface overuse, emoji decoration, heading restatement |
| Communication | `references/communication-patterns.md` | Chat leaks, cutoff disclaimers, sycophantic tone, apologetic errors |
| Filler | `references/filler-patterns.md` | Filler phrases, excessive hedging, generic conclusions |
| Code Docs | `references/code-docs-patterns.md` | Tautological docstrings, narrating obvious code, “This noun verbs”, exhaustive enumeration |
### Scope
Scan these artifact types:
| Artifact | File Patterns | Notes |
|---|---|---|
| Markdown docs | *.md |
READMEs, guides, changelogs |
| Docstrings | *.py, *.ts, *.js, *.go, *.swift, *.rs, *.java, *.kt, *.rb, *.ex |
Language-specific docstring formats |
| Code comments | Same as docstrings | Inline and block comments |
| Commit messages | git log output |
Use synthetic path git:commit:<sha> |
| PR descriptions | GitHub PR body | Use synthetic path git:pr:<number> |
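A minimal sketch of how a scanner might build these synthetic paths; the helper name and the example identifiers are hypothetical, not part of the skill:

```python
def synthetic_path(kind: str, ident: str) -> str:
    """Build the synthetic path for a non-file artifact.

    kind:  "commit" or "pr"
    ident: a commit sha or a PR number, as a string
    """
    if kind not in ("commit", "pr"):
        raise ValueError(f"unknown artifact kind: {kind}")
    return f"git:{kind}:{ident}"

# Hypothetical identifiers, for illustration only:
synthetic_path("commit", "3f2a91c")  # -> "git:commit:3f2a91c"
synthetic_path("pr", "482")          # -> "git:pr:482"
```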
### What NOT to Scan
- Generated code (lock files, compiled output, vendor directories)
- Third-party content (copied license text, vendored docs)
- Code itself (variable names, string literals used programmatically)
- Test fixtures and mock data
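A scanner could enforce these exclusions with a simple path filter before any content is read; a sketch, with an illustrative (not exhaustive) glob list:

```python
import fnmatch

# Illustrative exclusion globs; real projects will maintain their own list.
EXCLUDED_GLOBS = [
    "*package-lock.json", "*.lock",     # lock files
    "*/dist/*", "*/build/*",            # compiled output
    "*/vendor/*", "*/node_modules/*",   # vendored third-party code
    "*/fixtures/*", "*/testdata/*",     # test fixtures and mock data
]

def should_scan(path: str) -> bool:
    """Return False for paths this skill must never scan."""
    return not any(fnmatch.fnmatch(path, glob) for glob in EXCLUDED_GLOBS)

should_scan("docs/README.md")            # True
should_scan("web/node_modules/x/y.js")   # False
```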
### Detection Rules
#### High-Confidence Signals (Always Flag)
These patterns are strong indicators of AI-generated text:
- Chat leaks: “Certainly!”, “I’d be happy to”, “Great question!”, “Here’s” as sentence opener
- Cutoff disclaimers: “As of my last update”, “I cannot guarantee”
- High-signal AI vocabulary: delve, utilize (as “use”), whilst, harnessing, paradigm, synergy
- “This noun verbs” in docstrings: “This function calculates”, “This method returns”
- Synthetic openers: “In today’s fast-paced”, “In the world of”
- Sycophantic code comments: “Excellent approach!”, “Great implementation!”
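These phrases are concrete enough to match mechanically. A minimal sketch, assuming a small illustrative subset of the phrase list (the full tiers live in the `references/` files):

```python
import re

# Illustrative subset of the always-flag phrases.
# A real scanner should also normalize curly quotes (’ -> ') first.
HIGH_CONFIDENCE = [
    r"\bCertainly!",
    r"\bI'd be happy to\b",
    r"\bGreat question!",
    r"\bAs of my last update\b",
    r"\bIn today's fast-paced\b",
    r"\bThis (?:function|method) \w+s\b",  # crude "This noun verbs" check
]
PATTERNS = [re.compile(p, re.IGNORECASE) for p in HIGH_CONFIDENCE]

def high_confidence_hits(text: str) -> list[tuple[int, str]]:
    """Return (line_number, matched_text) for every always-flag signal."""
    hits = []
    for pattern in PATTERNS:
        for match in pattern.finditer(text):
            line = text.count("\n", 0, match.start()) + 1
            hits.append((line, match.group(0)))
    return sorted(hits)
```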
#### Medium-Confidence Signals (Flag in Context)
Flag when 2+ signals appear together or a pattern is repeated:
- Low-signal AI vocabulary clusters: 3+ words from the low-signal list in one section
- Formulaic structure: rigid intro-body-conclusion in a README section
- Heading restatement: first sentence after heading restates the heading
- Excessive hedging: “might potentially”, “could possibly”, “it seems like it may”
- Synonym cycling: same concept called different names within one section
- Boldface overuse: more than 30% of sentences contain bold text (see the sketch below)
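Of these, the boldface threshold is directly computable. A sketch of that one check, with a deliberately naive sentence splitter:

```python
import re

def boldface_ratio(markdown: str) -> float:
    """Fraction of sentences containing a **bold** or __bold__ span."""
    # Naive split on sentence-ending punctuation; good enough for a heuristic.
    sentences = [s for s in re.split(r"[.!?]+\s+", markdown) if s.strip()]
    if not sentences:
        return 0.0
    bold = re.compile(r"\*\*[^*]+\*\*|__[^_]+__")
    return sum(1 for s in sentences if bold.search(s)) / len(sentences)

def boldface_overuse(markdown: str) -> bool:
    """Flag when more than 30% of sentences contain bold text."""
    return boldface_ratio(markdown) > 0.30
```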
#### Low-Confidence Signals (Note Only)
Mention but don’t flag as issues:
- Emoji in technical docs: may be intentional project style
- Filler phrases: some are common in human writing too
- Generic conclusions: may be appropriate for summary sections
- Commit inflation: some teams prefer descriptive commits
### False Positive Warnings
Do NOT flag these as AI-generated:
| Pattern | Why It’s Valid |
|---|---|
| “Ensure” in security docs | Standard term for security requirements |
| “Comprehensive” in test coverage discussion | Accurate technical descriptor |
| Formal tone in API reference docs | Expected register for reference material |
| “Leverage” in financial/business domain code | Domain-specific meaning, not AI filler |
| Bold formatting in CLI help text | Standard convention |
| Structured intro paragraphs in RFCs/ADRs | Expected format for these document types |
| “This module provides” in Python `__init__.py` | Idiomatic Python module docstring |
| Rhetorical questions in blog posts | Appropriate for informal content |
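One way to encode these exceptions is a context-keyed allowlist consulted before a vocabulary finding is emitted. A sketch; the context names and word sets are illustrative, not a complete mapping of the table:

```python
# Words that are expected vocabulary in specific contexts (per the table
# above) and must not be flagged there.
ALLOWLIST: dict[str, set[str]] = {
    "security-docs": {"ensure"},
    "test-coverage": {"comprehensive"},
    "finance-domain": {"leverage"},
}

def is_false_positive(word: str, context: str) -> bool:
    """True when a flagged word is standard vocabulary for its context."""
    return word.lower() in ALLOWLIST.get(context, set())
```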
### Integration
#### With `beagle-core:review-verification-protocol`
Before reporting any finding:
- Read the surrounding context (full paragraph or function)
- Confirm the pattern is AI-characteristic, not just formal writing
- Check if the project has established conventions that match the pattern
- Verify the suggestion improves clarity without changing meaning
#### With `beagle-core:llm-artifacts-detection`
Code-level patterns (tautological docstrings, obvious comments) overlap with `llm-artifacts-detection`’s style criteria. When both skills are loaded:
- `review-ai-writing` focuses on writing style (how it reads)
- `llm-artifacts-detection` focuses on code artifacts (whether it should exist at all)
- If `.beagle/llm-artifacts-review.json` exists, skip findings already captured there (see the sketch below)
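A sketch of that dedup step; it assumes each JSON entry carries `file` and `line` keys, which may differ from the report's actual schema:

```python
import json
from pathlib import Path

def previously_captured() -> set[tuple[str, int]]:
    """(file, line) pairs already reported by llm-artifacts-detection."""
    report = Path(".beagle/llm-artifacts-review.json")
    if not report.exists():
        return set()
    # Assumed schema: a JSON array of objects with "file" and "line" keys.
    entries = json.loads(report.read_text())
    return {(e["file"], e["line"]) for e in entries}

# Usage: drop any finding whose location was already captured.
# seen = previously_captured()
# findings = [f for f in findings if (f["file"], f["line"]) not in seen]
```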
### Output Format
Report each finding as:
[FILE:LINE] ISSUE_TITLE
- Category: content | vocabulary | formatting | communication | filler | code_docs
- Type: specific_pattern_name
- Original: "the problematic text"
- Suggestion: "the improved text" or "delete"
- Risk: Low | Medium
- Fix Safety: Safe | Needs review
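A hypothetical finding in this format; the file, line, and quoted text are invented for illustration:

```
[docs/README.md:12] Synthetic opener
- Category: content
- Type: synthetic_opener
- Original: "In today's fast-paced development landscape, our tool delivers value."
- Suggestion: "delete"
- Risk: Low
- Fix Safety: Safe
```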
#### Risk Levels
- Low: filler phrases, obvious comments, emoji. Removing improves clarity with no meaning change.
- Medium: vocabulary swaps, structural changes, docstring rewrites. Meaning could shift if done carelessly.
#### Fix Safety
- Safe: mechanical replacement or deletion. No judgment needed.
- Needs review: the rewrite requires understanding context. A human should verify the replacement preserves intent.