rlm
- Total installs: 4
- Installs this week: 2
- Site rank: #49120

Install command: `npx skills add https://github.com/bowtiedswan/rlm-skill --skill rlm`

Installs by agent: amp (2), cline (2), opencode (2), cursor (2), kimi-cli (2), codex (2)

Skill Documentation
Recursive Language Model (RLM) Skill
Core Philosophy
“Context is an external resource, not a local variable.”
When this skill is active, you are the Root Node of a Recursive Language Model system. Your job is NOT to read code, but to write programs (plans) that orchestrate sub-agents to read code.
Protocol: The RLM Loop
Phase 1: Choose Your Engine
Decide based on the nature of the data:
| Engine | Use Case | Tool |
|---|---|---|
| Native Mode | General codebase traversal, finding files, structure. | `find`, `grep`, `bash` |
| Strict Mode | Dense data analysis (logs, CSVs, massive single files). | `python3 ~/.claude/skills/rlm/rlm.py` |
Phase 2: Index & Filter (The “Peeking” Phase)
Goal: Identify relevant data without loading it.
- Native: Use `find` or `grep -l`.
- Strict: Use `python3 .../rlm.py peek "query"`.
- RLM Pattern: Grep for import statements, class names, or definitions to build a list of relevant paths.
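The filter phase can be sketched in plain Python. `peek` below is an illustrative stand-in for the `rlm.py peek` behavior, not its actual implementation: it records only matching paths and discards file contents, so nothing large enters the main context.

```python
import re
from pathlib import Path

def peek(root: str, pattern: str, suffix: str = ".py") -> list[str]:
    """Return paths whose text matches `pattern`, keeping paths only."""
    rx = re.compile(pattern)
    hits = []
    for path in Path(root).rglob(f"*{suffix}"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file: skip, don't abort the scan
        if rx.search(text):
            hits.append(str(path))  # keep only the path, drop the text
    return hits
```

The returned list is exactly the "list of relevant paths" the RLM Pattern asks for, ready to be fanned out in Phase 3.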
Phase 3: Parallel Map (The “Sub-Query” Phase)
Goal: Process chunks in parallel using fresh contexts.
- Divide: Split the work into atomic units.
  - Strict Mode: `python3 .../rlm.py chunk --pattern "*.log"` -> Returns JSON chunks.
- Spawn: Use `background_task` to launch parallel agents.
  - Constraint: Launch at least 3-5 agents in parallel for broad tasks.
  - Prompting: Give each background agent ONE specific chunk or file path.
  - Format: `background_task(agent="explore", prompt="Analyze chunk #5 of big.log: {content}...")`
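The chunking step can be sketched as a line-based splitter. `chunk_file` and its output shape are assumptions for illustration, not the actual `rlm.py chunk` interface; the point is that each chunk is a self-contained, JSON-serializable unit a sub-agent can take alone:

```python
import json

def chunk_file(path: str, lines_per_chunk: int = 2000) -> list[dict]:
    """Split a large file into line-based chunks as JSON-able dicts."""
    with open(path, errors="ignore") as f:
        lines = f.readlines()
    chunks = []
    for i in range(0, len(lines), lines_per_chunk):
        chunks.append({
            "id": len(chunks),               # chunk index for the prompt
            "start_line": i + 1,             # 1-based offset into the file
            "content": "".join(lines[i:i + lines_per_chunk]),
        })
    return chunks
```

Each dict can be dropped straight into a `background_task` prompt as the `{content}` placeholder in the Format line above.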
Phase 4: Reduce & Synthesize (The “Aggregation” Phase)
Goal: Combine results into a coherent answer.
- Collect: Read the outputs from `background_task` (via `background_output`).
- Synthesize: Look for patterns, consensus, or specific answers in the aggregated data.
- Refine: If the answer is incomplete, perform a second RLM recursion on the specific missing pieces.
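A minimal reduce step might tally findings across sub-agent outputs to separate consensus from one-off claims. `reduce_findings` is a hypothetical helper that assumes each output arrives as a plain string with one finding per line:

```python
from collections import Counter

def reduce_findings(outputs: list[str]) -> dict:
    """Aggregate sub-agent outputs: split repeated findings from outliers."""
    findings = Counter()
    for out in outputs:
        for line in out.splitlines():
            line = line.strip()
            if line:
                findings[line] += 1
    # Findings reported by multiple agents are likely real; singletons
    # are candidates for the "Refine" recursion.
    return {
        "consensus": [f for f, n in findings.items() if n > 1],
        "unique": [f for f, n in findings.items() if n == 1],
    }
```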
Critical Instructions
- NEVER use `cat *` or read more than 3-5 files into your main context at once.
- ALWAYS prefer `background_task` for reading/analyzing file contents when the file count > 1.
- Use `rlm.py` for programmatic slicing of large files that `grep` can't handle well.
- Python is your Memory: If you need to track state across 50 files, write a Python script (or use `rlm.py`) to scan them and output a summary.
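As a sketch of "Python is your Memory": the script below holds per-file state in a dict instead of in the model context, and only the small summary is ever read back. Counting TODO markers is an assumed example metric; swap in whatever state the task needs.

```python
import re
from pathlib import Path

def summarize(root: str, pattern: str = r"TODO") -> dict[str, int]:
    """Scan every file once; keep only a count per path as external state."""
    state: dict[str, int] = {}
    for path in Path(root).rglob("*.py"):
        n = len(re.findall(pattern, path.read_text(errors="ignore")))
        if n:
            state[str(path)] = n  # the file text is discarded here
    return state
```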
Example Workflow: “Find all API endpoints and check for Auth”
Wrong Way (Monolithic):
- `read src/api/routes.ts`
- `read src/api/users.ts`
- … (Context fills up, reasoning degrades)
RLM Way (Recursive):
- Filter: `grep -l "@Controller" src/**/*.ts` -> Returns 20 files.
- Map:
  - `background_task(prompt="Read src/api/routes.ts. Extract all endpoints and their @Auth decorators.")`
  - `background_task(prompt="Read src/api/users.ts. Extract all endpoints and their @Auth decorators.")`
  - … (Launch all 20)
- Reduce:
  - Collect all 20 outputs.
  - Compile into a single table.
  - Identify missing auth.
Recovery Mode
If `background_task` is unavailable or fails:
- Fall back to Iterative Python Scripting.
- Write a Python script that loads each file, runs a regex/AST check, and prints the result to stdout.
- Read the script’s stdout.
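The fallback can be sketched as one script run over the file list from the Filter step. `audit` is a hypothetical name, and the regexes follow the `@Auth`/endpoint example workflow above; only the one-line-per-file report reaches stdout and the main context:

```python
import re
from pathlib import Path

def audit(paths: list[str]) -> list[str]:
    """Regex-check each route file for endpoints and an @Auth decorator."""
    report = []
    for p in paths:
        text = Path(p).read_text(errors="ignore")
        endpoints = re.findall(r"@(Get|Post|Put|Delete)\(", text)
        has_auth = bool(re.search(r"@Auth\b", text))
        verdict = "yes" if has_auth else "MISSING"
        report.append(f"{p}: {len(endpoints)} endpoints, auth={verdict}")
    return report
```

Printing the report with a plain `for` loop gives the stdout the last step reads back.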