note-to-blog
npx skills add https://github.com/niracler/skill --skill note-to-blog
Note to Blog
Screen an Obsidian note repository for notes worth publishing: evaluate fit, batch-select topics, process them on split tracks (fast conversion / deep research), and dispatch parallel Agents.
Prerequisites
| Tool | Type | Required | Install |
|---|---|---|---|
| Python 3 | cli | Yes | Pre-installed on macOS |
| PyYAML | cli | Yes | pip install pyyaml |
| writing-proofreading | skill | No | Included in npx skills add niracler/skill |
Do NOT proactively verify these tools on skill load. If a command fails due to a missing tool, directly guide the user through installation and configuration step by step.
When NOT to Use
- The note repo has fewer than 5 notes → manual selection is faster
- You only want to convert a single, already-chosen note → run directly:
  <skill-dir>/scripts/note-to-blog.py convert "<path>"
- The blog draft already exists and only needs proofreading → use writing-proofreading
Script Location
All deterministic operations are handled by the Python script:
<skill-dir>/scripts/note-to-blog.py (collect / convert / state subcommands)
Path configuration is in user-config.md. All bash examples below use <PLACEHOLDER> → replace with values from user-config.md.
Workflow Overview
```
Step 1      Step 2           Step 3              Step 4          Step 5
Collect ──▶ Level Select ──▶ By Level ─────────▶ Execute ──────▶ Summary
(script)    (user)           ├─ L1 Browse        (Agent Teams)   (report)
                             ├─ L2 Recommend
                             └─ L3 Deep Explore
                                   │
                             Interact ──▶ track assign ──▶ confirm
```
Step 1: Collect
Run the collect script with paths from user-config.md:
```bash
python3 <skill-dir>/scripts/note-to-blog.py collect \
  --note-repo "<NOTE_REPO>" \
  --blog-content "<BLOG_CONTENT>" \
  --project-paths <PROJECT_PATHS> \
  --history-file "<HISTORY_FILE>"
```
The script outputs a single JSON object to stdout containing:
- candidates: all eligible notes with title, summary, char_count, outgoing_links
- clusters: wikilink hub nodes (3+ inbound links) with related notes
- published_posts: existing blog posts with title, tags, collection
- session_keywords: recent Claude Code session activity signals
- stats: total_scanned, filtered_out, candidates_count
Read the JSON output and proceed to Step 2.
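A minimal sketch of consuming that output, using only the field names listed above (the sample values here are illustrative, not real data):

```python
import json

# Illustrative collect output -- field names match the list above,
# the concrete values are made up for this sketch.
collect_stdout = """
{
  "candidates": [
    {"title": "SSH Private Key Encryption", "summary": "...",
     "char_count": 1200, "outgoing_links": 2}
  ],
  "clusters": [],
  "published_posts": [],
  "session_keywords": [],
  "stats": {"total_scanned": 42, "filtered_out": 41, "candidates_count": 1}
}
"""

data = json.loads(collect_stdout)
stats = data["stats"]
summary = (f"{stats['candidates_count']} candidates "
           f"({stats['total_scanned']} scanned, {stats['filtered_out']} filtered)")
print(summary)  # -> 1 candidates (42 scanned, 41 filtered)
```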
Step 2: Level Selection
Display data volume and offer Level choice:
```
collect done:

Candidate notes: {candidates_count} ({total_scanned} scanned, {filtered_out} filtered out)
Topic clusters: {clusters_count} (hubs with 3+ inbound links)

Available depth:
Level 1  Browse        show the candidate list directly, pick manually   0 extra tokens
Level 2  Recommend     LLM evaluation + topic cluster analysis           ~2k tokens  ← recommended
Level 3  Deep Explore  Level 2 + read hub notes in full                  ~5k+ tokens

Choose a Level (1-3)?
```
Quick Reference
| Level | Name | Evaluation | Follow-up |
|---|---|---|---|
| 1 | Browse | No LLM; candidates sorted by char count | User picks directly → all fast track |
| 2 | Recommend | LLM evaluates summaries + topic clusters → 5-8 recommendations | fast/deep track split |
| 3 | Deep Explore | Level 2 + full text of hub notes | fast/deep track; more accurate cluster recommendations |
Recommendation logic
| Candidates | Clusters | Recommend |
|---|---|---|
| ≤ 10 | any | Level 1 |
| > 10 | 0 | Level 2 |
| > 10 | 1+ | Level 2 |

When the user explicitly says they want to discover themes or asks what could be consolidated → recommend Level 3.
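The decision logic above can be sketched as a small helper (the wants_themes flag is a hypothetical name for the user's explicit "discover themes" intent):

```python
def recommend_level(candidates_count: int, clusters_count: int,
                    wants_themes: bool = False) -> int:
    """Default Level per the recommendation table; wants_themes covers the
    case where the user explicitly asks to discover or consolidate themes."""
    if wants_themes:
        return 3
    if candidates_count <= 10:
        return 1
    # > 10 candidates: Level 2 regardless of cluster count
    return 2
```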
Step 3: By Level
Level 1: Browse
Skip LLM evaluation. Display candidates sorted by char_count descending:
```
#  Title                             Chars  Links
1  Code Review in the Post-LLM Era   3200   5
2  SSH Private Key Encryption        1200   2
3  How to Read Feed Content          1800   3
...
```
User selects items by number. All selections go to fast track only (Level 1 does not offer deep track).
After selection, skip to Confirm & Execute below.
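The Level 1 listing above is just a sort, e.g.:

```python
def browse_order(candidates: list) -> list:
    """Level 1: no LLM call -- candidates sorted by char_count descending."""
    return sorted(candidates, key=lambda c: c["char_count"], reverse=True)
```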
Level 2: Recommend
Make a single LLM evaluation using the prompt template from scoring-criteria.md.
Input: Construct the evaluation prompt with collect JSON data (candidates, clusters, published_posts, session_keywords).
Output: The LLM SHALL return a JSON array of 5-8 recommendations. See scoring-criteria.md for the full specification.
If the LLM response is not valid JSON, retry once with explicit format instructions.
After evaluation, proceed to Interact below.
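The parse-and-retry step above can be sketched as follows; llm_call is a stand-in for whatever interface actually produces the model response:

```python
import json

def parse_recommendations(llm_call, prompt: str) -> list:
    """Parse the LLM's JSON array of recommendations, retrying once with
    explicit format instructions if the first response is not valid JSON."""
    text = llm_call(prompt)
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        retry = prompt + "\n\nReturn ONLY a valid JSON array, with no prose."
        return json.loads(llm_call(retry))
```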
Level 3: Deep Explore
Same as Level 2, but before calling the LLM, Read the full text of each cluster hub note and append it to the evaluation prompt.
For each cluster in the collect JSON where hub_path is not null:
- Read the hub note's full text from the Note repository
- Append it to the LLM prompt under a `## Hub 笔记全文` section (see scoring-criteria.md for the Level 3 input format)
This gives the LLM actual content context for cluster recommendations instead of just metadata.
After evaluation, proceed to Interact below.
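A sketch of that augmentation step, assuming each cluster dict carries the hub_path field from the collect JSON as a path relative to the note repo:

```python
from pathlib import Path

def append_hub_fulltext(prompt: str, clusters: list, note_repo: str) -> str:
    """Level 3 only: append each cluster hub note's full text to the
    evaluation prompt; clusters without a hub_path are skipped."""
    sections = [prompt]
    for cluster in clusters:
        hub_path = cluster.get("hub_path")
        if hub_path is None:
            continue
        text = Path(note_repo, hub_path).read_text(encoding="utf-8")
        sections.append(f"## Hub 笔记全文: {hub_path}\n\n{text}")
    return "\n\n".join(sections)
```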
Interact (Level 2/3)
Present recommendations
Display the recommendation list as a mixed table:
```
#  Type     Title                               Fit  Target  Effort  Activity  Dup risk
1  Single   Code Review in the Post-LLM Era     92   blog    small   ✅✅✅      none
2  Cluster  Philosophy of Elegance (9 linked)   88   blog    large   ✅         none
3  Single   SSH Private Key Encryption          85   til     small   ✅         none
...
```
User actions
| Action | Example | Effect |
|---|---|---|
| Select + assign track | "1 and 3 fast convert, 2 goes deep" | Queue items with track assignment |
| Override collection | "put 1 in til" | Change target collection |
| Batch skip | "skip 4-6, reason: private" | Mark as skipped via state skip |
| See more | "anything else?" | Request additional recommendations |
| Check status | "status" | Run state status |
On skip, run immediately:

```bash
python3 <skill-dir>/scripts/note-to-blog.py state skip "<path>" --reason "<reason>" \
  --note-repo "<NOTE_REPO>"
```
Track assignment
| Track | When to use | What happens |
|---|---|---|
| Fast (快速) | Independent, mostly complete notes | Script converts → Agent reviews → draft |
| Deep (深度) | Topic clusters or rough notes needing research | Agent reads all related notes → research report |

Default: effort: "小" → fast; type: "cluster" or effort: "大" → deep. User decides.
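The default assignment can be sketched as a tiny helper, assuming each recommendation item carries the type and effort fields with the literal values shown in the examples:

```python
def default_track(item: dict) -> str:
    """Default per the rule above: clusters and large-effort items go deep,
    everything else fast; the user can always override."""
    if item.get("type") == "cluster" or item.get("effort") == "大":
        return "deep"
    return "fast"
```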
Confirm & Execute
Display a confirmation summary (all Levels):
```
Confirm selection:

Fast track:
  1. Code Review in the Post-LLM Era → blog/
  3. SSH Private Key Encryption → til/

Deep track:
  2. Philosophy of Elegance (9 linked notes) → blog/

Start processing?
```
Wait for user confirmation, then dispatch.
Parallel dispatch
Dispatch N parallel Agents using the Task tool, one per selected item.
Other Agent environments: the Fast/Deep track tasks above are mutually independent and can instead be executed one by one in sequence.

```
Chief editor (Main Agent)
├── Task Agent 1: Article A (fast track)
├── Task Agent 2: Article B (fast track)
└── Task Agent 3: Topic cluster C (deep track)
```
Use the Task tool to launch all Agents in a single message. Each Agent should be a general-purpose subagent with a detailed prompt containing all the information it needs.
See agent-instructions.md for the complete Fast Track and Deep Track agent prompt templates.
State updates
Individual Agents do NOT update .note-to-blog.json directly. After all Agents complete, the main agent runs state updates sequentially:
```bash
python3 <skill-dir>/scripts/note-to-blog.py state draft "<note_path>" \
  --target "<collection>/<slug>.md" \
  --note-repo "<NOTE_REPO>"
```
Deep track items are NOT marked as drafted (they need further user decision).
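The sequential update loop can be sketched by building one `state draft` command per fast-track item (the note_path/collection/slug field names are assumptions about the shape of the Agent results):

```python
def draft_commands(fast_items: list, note_repo: str,
                   script: str = "scripts/note-to-blog.py") -> list:
    """One sequential `state draft` invocation per completed fast-track
    item; deep-track items are deliberately not marked as drafted."""
    return [
        ["python3", script, "state", "draft", item["note_path"],
         "--target", f"{item['collection']}/{item['slug']}.md",
         "--note-repo", note_repo]
        for item in fast_items
    ]
```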
Summary
After all Agents complete, present a unified summary:
```
Fast Track done:
✅ Code Review in the Post-LLM Era → repos/bokushi/src/content/blog/llm-code-review.md
   - Converted cleanly, no issues
✅ SSH Private Key Encryption → repos/bokushi/src/content/til/ssh-key-encryption.md
   - Found 1 TODO marker that needs manual handling

Deep Track done:
📄 Philosophy of Elegance (9 linked notes)
   - Research report generated
   - Next step: a) write from the outline  b) revise the outline  c) defer for now

State updated:
  drafted: N
```
Drafts are created with hidden: true; review them manually and set it to false to publish. Using /writing-proofreading for the review pass is recommended.
After publishing, run:

```bash
python3 <skill-dir>/scripts/note-to-blog.py state publish "<note_path>" --note-repo "<NOTE_REPO>"
```
Detailed References
- Path configuration: user-config.md
- LLM evaluation prompt and scoring: scoring-criteria.md
- Agent prompt templates: agent-instructions.md