listenhub
npx skills add https://github.com/marswaveai/skills --skill listenhub
Four modes, one entry point:
- Podcast: Two-person dialogue, ideal for deep discussions
- Explain: Single narrator + AI visuals, ideal for product intros
- TTS/FlowSpeech: Pure voice reading, ideal for articles
- Image Generation: AI image creation, ideal for creative visualization
Users don’t need to remember APIs, modes, or parameters. Just say what you want.
Hard Constraints (Inviolable)
The scripts are the ONLY interface. Period.
┌────────────────────────────────────────────────────────┐
│ AI Agent ──▶ ./scripts/*.sh ──▶ ListenHub API          │
│                   ▲                                    │
│                   │                                    │
│ This is the ONLY path.                                 │
│ Direct API calls are FORBIDDEN.                        │
└────────────────────────────────────────────────────────┘
MUST:
- Execute functionality ONLY through provided scripts in **/skills/listenhub/scripts/
- Pass user intent as script arguments exactly as documented
- Trust script outputs; do not second-guess internal logic
MUST NOT:
- Write curl commands to ListenHub/Marswave API directly
- Construct JSON bodies for API calls manually
- Guess or fabricate speakerIds, endpoints, or API parameters
- Assume API structure based on patterns or web searches
- Hallucinate features not exposed by existing scripts
Why: The API is proprietary. Endpoints, parameters, and speakerIds are NOT publicly documented. Web searches will NOT find this information. Any attempt to bypass scripts will produce incorrect, non-functional code.
Script Location
Scripts are located at **/skills/listenhub/scripts/ relative to your working context.
Different AI clients use different dot-directories:
- Claude Code: .claude/skills/listenhub/scripts/
- Other clients: may vary (.cursor/, .windsurf/, etc.)
Resolution: Use glob pattern **/skills/listenhub/scripts/*.sh to locate scripts reliably, or resolve from the SKILL.md file’s own path.
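A minimal locating sketch, probing for create-podcast.sh (documented below) as a known script name:

```bash
# Sketch: resolve $SCRIPTS via the documented glob, probing for a known script.
entry=$(find . -type f -path '*/skills/listenhub/scripts/create-podcast.sh' 2>/dev/null | head -n 1)
if [ -n "$entry" ]; then
  SCRIPTS=$(dirname "$entry")
  echo "scripts located at: $SCRIPTS"
else
  echo "scripts not found; check the skill installation"
fi
```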
Private Data (Cannot Be Searched)
The following are internal implementation details that AI cannot reliably know:
| Category | Examples | How to Obtain |
|---|---|---|
| API Base URL | api.marswave.ai/... | ❌ Cannot; internal to scripts |
| Endpoints | podcast/episodes, etc. | ❌ Cannot; internal to scripts |
| Speaker IDs | cozy-man-english, etc. | ✅ Call get-speakers.sh |
| Request schemas | JSON body structure | ❌ Cannot; internal to scripts |
| Response formats | Episode ID, status codes | ✅ Documented per script |
Rule: If information is not in this SKILL.md or retrievable via a script (like get-speakers.sh), assume you don’t know it.
Design Philosophy
Hide complexity, reveal magic.
Users don’t need to know: Episode IDs, API structure, polling mechanisms, credits, endpoint differences. Users only need: Say idea → wait a moment → get the link.
Environment
ListenHub API Key
API key stored in $LISTENHUB_API_KEY. Check on first use:
source ~/.zshrc 2>/dev/null; [ -n "$LISTENHUB_API_KEY" ] && echo "ready" || echo "need_setup"
If setup needed, guide user:
- Visit https://listenhub.ai/settings/api-keys
- Paste key (only the lh_sk_... part)
- Auto-save to ~/.zshrc
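A minimal setup sketch, assuming zsh and a hypothetical $USER_PASTED_KEY variable holding the pasted key:

```bash
# Sketch: persist the pasted key ($USER_PASTED_KEY is hypothetical) to ~/.zshrc.
printf 'export LISTENHUB_API_KEY="%s"\n' "$USER_PASTED_KEY" >> ~/.zshrc
source ~/.zshrc
# Confirm without exposing the full key (see the Security note below).
echo "saved: ${LISTENHUB_API_KEY:0:8}..."
```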
Image Generation API Key
Image generation uses the same ListenHub API key stored in $LISTENHUB_API_KEY.
Image generation output path defaults to the user downloads directory, stored in $LISTENHUB_OUTPUT_DIR.
On first image generation, the script auto-guides configuration:
- Visit https://listenhub.ai/settings/api-keys (requires subscription)
- Paste API key
- Configure output path (default: ~/Downloads)
- Auto-save to shell rc file
Security: Never expose full API keys in output.
Mode Detection
Auto-detect mode from user input:
Podcast (1-2 speakers)
Supports single-speaker or dual-speaker podcasts. Debate mode requires 2 speakers.
Default mode: quick unless explicitly requested.
If speakers are not specified, call get-speakers.sh and select the first speakerId
matching the chosen language.
If reference materials are provided, pass them as --source-url or --source-text.
When the user only provides a topic (e.g., “I want a podcast about X”), proceed with:
- detect language from user input,
- set mode=quick,
- choose one speaker via get-speakers.sh matching the language,
- create a single-speaker podcast without further clarification (see the sketch below).
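A sketch of that default flow, combining get-speakers.sh (response structure documented under Script Reference) with the one-stage podcast call:

```bash
# Sketch: topic-only request ("a podcast about X"), English detected.
SPEAKER_ID=$($SCRIPTS/get-speakers.sh --language en | jq -r '.data.items[0].speakerId')
$SCRIPTS/create-podcast.sh --query "X" --language en --mode quick --speakers "$SPEAKER_ID"
```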
- Keywords: “podcast”, “chat about”, “discuss”, “debate”, “dialogue”
- Use case: Topic exploration, opinion exchange, deep analysis
- Feature: Two voices, interactive feel
Explain (Explainer video)
- Keywords: “explain”, “introduce”, “video”, “explainer”, “tutorial”
- Use case: Product intro, concept explanation, tutorials
- Feature: Single narrator + AI-generated visuals, can export video
TTS (Text-to-speech)
TTS defaults to FlowSpeech direct for single-pass text or URL narration.
Script arrays and multi-speaker dialogue belong to Speech as an advanced path, not the default TTS entry.
Text-to-speech input is limited to 10,000 characters; split or use a URL when longer.
- Keywords: “read aloud”, “convert to speech”, “tts”, “voice”
- Use case: Article to audio, note review, document narration
- Feature: Fastest (1-2 min), pure audio
Ambiguous “Convert to speech” Guidance
When the request is ambiguous (e.g., “convert to speech”, “read aloud”), apply:
- Default to FlowSpeech and prioritize direct to avoid altering content.
- Input type: URL uses type=url, plain text uses type=text.
- Speaker: if not specified, call get-speakers and pick the first speakerId matching language.
- Switch to Speech only when multi-line scripts or multi-speaker dialogue is explicitly requested, and require scripts.
Example guidance:
“This request can use FlowSpeech with the default direct mode; switch to smart for grammar and punctuation fixes. For per-line speaker assignment, provide scripts and switch to Speech.”
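A minimal sketch of the default path, assuming --type url and --mode direct follow the type/mode names above (see the TTS reference below for the documented flags):

```bash
# Sketch: ambiguous "read this aloud" with a URL -> FlowSpeech, direct mode.
# --type url and --mode direct are assumptions based on the guidance above.
$SCRIPTS/create-tts.sh --type url --content "https://example.com/article" \
  --language en --mode direct --speakers "$SPEAKER_ID"
```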
Image Generation
- Keywords: “generate image”, “draw”, “create picture”, “visualize”
- Use case: Creative visualization, concept art, illustrations
- Feature: AI image generation via Labnana API, multiple resolutions and aspect ratios
Reference Images via Image Hosts
When reference images are local files, upload to a known image host and use the direct image URL in --reference-images.
Recommended hosts: imgbb.com, sm.ms, postimages.org, imgur.com.
Direct image URLs should end with .jpg, .png, .webp, or .gif.
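A quick sanity-check sketch for that rule:

```bash
# Sketch: reject page links before passing a URL to --reference-images.
url="https://example.com/ref1.jpg"
case "$url" in
  *.jpg|*.png|*.webp|*.gif) echo "ok: $url" ;;
  *) echo "not a direct image URL: $url" ;;
esac
```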
Default: If unclear, ask user which format they prefer.
Explicit override: User can say “make it a podcast” / “I want explainer video” / “just voice” / “generate image” to override auto-detection.
Interaction Flow
Step 1: Receive input + detect mode
✅ Got it! Preparing...
Mode: Two-person podcast
Topic: Latest developments in Manus AI
For URLs, identify type:
- youtu.be/XXX → convert to https://www.youtube.com/watch?v=XXX
- Other URLs → use directly
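A normalization sketch for that rule (plain short links only; query strings are not handled):

```bash
# Sketch: expand youtu.be short links before passing as --source-url.
url="https://youtu.be/XXX"
case "$url" in
  *youtu.be/*) url="https://www.youtube.com/watch?v=${url##*/}" ;;
esac
echo "$url"
```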
Step 2: Submit generation
✅ Generation submitted
Estimated time:
• Podcast: 2-3 minutes
• Explain: 3-5 minutes
• TTS: 1-2 minutes
You can:
• Wait and ask "done yet?"
• Use check-status via scripts
• View outputs in product pages:
- Podcast: https://listenhub.ai/app/podcast
- Explain: https://listenhub.ai/app/explainer
- Text-to-Speech: https://listenhub.ai/app/text-to-speech
• Do other things, ask later
Internally remember Episode ID for status queries.
Step 3: Query status
When user says “done yet?” / “ready?” / “check status”:
- Success: Show result + next options
- Processing: “Still generating, wait another minute?”
- Failed: “Generation failed, content might be unparseable. Try another?”
Step 4: Show results
Podcast result:
✅ Podcast generated!
"{title}"
Episode: https://listenhub.ai/app/episode/{episodeId}
Duration: ~{duration} minutes
Download audio: provide audioUrl or audioStreamUrl on request
One-stage podcast creation generates an online task. When status is success, the episode detail already includes scripts and audio URLs. Download uses the returned audioUrl or audioStreamUrl without a second create call. Two-stage creation is only for script review or manual edits before audio generation.
Explain result:
✅ Explainer video generated!
"{title}"
Watch: https://listenhub.ai/app/explainer
Duration: ~{duration} minutes
Need to download audio? Just say so.
Image result:
✅ Image generated!
~/Downloads/labnana-{timestamp}.jpg
Image results are file-only and not shown in the web UI.
Important: Prioritize web experience. Only provide download URLs when user explicitly requests.
Script Reference
Scripts are shell-based. Locate via **/skills/listenhub/scripts/.
Dependency: jq is required for request construction.
The AI must ensure curl and jq are installed before invoking scripts.
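A minimal pre-flight sketch:

```bash
# Sketch: verify curl and jq exist before invoking any script.
for bin in curl jq; do
  command -v "$bin" >/dev/null 2>&1 || echo "missing dependency: $bin"
done
```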
⚠️ Long-running Tasks: Generation may take 1-5 minutes. Use your CLI client's native background execution feature:
- Claude Code: set run_in_background: true in the Bash tool
- Other CLIs: use built-in async/background job management if available
Invocation pattern:
$SCRIPTS/script-name.sh [args]
Where $SCRIPTS = resolved path to **/skills/listenhub/scripts/
Podcast (One-Stage)
Default path. Use unless script review or manual editing is required.
$SCRIPTS/create-podcast.sh --query "The future of AI development" --language en --mode deep --speakers cozy-man-english
$SCRIPTS/create-podcast.sh --query "Analyze this article" --language en --mode deep --speakers cozy-man-english --source-url "https://example.com/article"
Multiple --source-url and --source-text arguments are supported to combine several references in one request.
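For example (placeholder URLs), combining two articles and a note in one request:

```bash
$SCRIPTS/create-podcast.sh --query "Compare these two articles" --language en \
  --mode deep --speakers cozy-man-english \
  --source-url "https://example.com/a" --source-url "https://example.com/b" \
  --source-text "Additional background notes"
```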
Podcast (Two-Stage: Text → Review → Audio)
Advanced path. Use only when script review or edits are explicitly requested.
The entire value of two-stage generation is human review between stages. Skipping review reduces it to one-stage with extra latency; never do this.
Stage 1: Generate text content.
$SCRIPTS/create-podcast-text.sh --query "AI history" --language en --mode deep --speakers cozy-man-english,travel-girl-english
Review Gate (mandatory): After text generation completes, the agent MUST:
- Run check-status.sh --wait to poll until completion. On exit code 2 (timeout or rate-limited), wait briefly and retry.
- Save two files from the response (see the sketch below):
  - ~/Downloads/podcast-draft-<episode-id>.md: a human-readable version assembled from the response fields (title, outline, sourceProcessResult.content, and the scripts array formatted as readable dialogue). This is for the user to review.
  - ~/Downloads/podcast-scripts-<episode-id>.json: the raw {"scripts": [...]} object extracted from the response, exactly in the format that create-podcast-audio.sh --scripts expects. This is the machine-readable source of truth for Stage 2.
- Inform the user that both files have been saved, and offer to open the markdown draft for review (use the open command on macOS).
- STOP and wait for explicit user approval before proceeding to Stage 2.
- On user approval:
  - No changes: run create-podcast-audio.sh --episode <id> without --scripts (the server uses the original).
  - With edits: the user may edit the JSON file directly, or describe changes for the agent to apply. Pass the modified file via --scripts.
The agent MUST NOT proceed to Stage 2 automatically. This is a hard constraint, not a suggestion.
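A sketch of the file-saving step; the exact response shape is an assumption (a data wrapper mirroring the get-speakers.sh structure below), so trust the actual script output:

```bash
# Sketch: save the two review files. Field paths (.data.title, .data.scripts)
# are assumptions; adjust to the documented per-script response format.
resp=$($SCRIPTS/check-status.sh --episode "$EP" --type podcast --wait)
echo "$resp" | jq '{scripts: .data.scripts}' > "$HOME/Downloads/podcast-scripts-$EP.json"
{
  echo "# $(echo "$resp" | jq -r '.data.title')"
  echo "$resp" | jq -r '.data.scripts[] | "\(.speakerId): \(.content)"'
} > "$HOME/Downloads/podcast-draft-$EP.md"
```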
Stage 2: Generate audio from reviewed/approved text.
# User approved without changes:
$SCRIPTS/create-podcast-audio.sh --episode "<episode-id>"
# User provided edits:
$SCRIPTS/create-podcast-audio.sh --episode "<episode-id>" --scripts modified-scripts.json
Speech (Multi-Speaker)
$SCRIPTS/create-speech.sh --scripts scripts.json
echo '{"scripts":[{"content":"Hello","speakerId":"cozy-man-english"}]}' | $SCRIPTS/create-speech.sh --scripts -
# scripts.json format:
# {
# "scripts": [
# {"content": "Script content here", "speakerId": "speaker-id"},
# ...
# ]
# }
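A construction sketch with jq (speaker IDs here appear elsewhere in this document; discover real ones via get-speakers.sh):

```bash
# Sketch: build a two-speaker scripts.json in the documented format.
jq -n '{scripts: [
  {content: "Welcome to the show.", speakerId: "cozy-man-english"},
  {content: "Glad to be here.",     speakerId: "travel-girl-english"}
]}' > scripts.json
$SCRIPTS/create-speech.sh --scripts scripts.json
```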
Get Available Speakers
$SCRIPTS/get-speakers.sh --language zh
$SCRIPTS/get-speakers.sh --language en
Guidance:
- If the user does not specify a voice, you MUST call get-speakers.sh first to fetch the available list.
- Default fallback: use the first speakerId in the list matching the requested language as the default voice.
Response structure (for AI parsing):
{
"code": 0,
"data": {
"items": [
{
"name": "Yuanye",
"speakerId": "cozy-man-english",
"gender": "male",
"language": "zh"
}
]
}
}
Usage: When user requests specific voice characteristics (gender, style), call this script first to discover available speakerId values. NEVER hardcode or assume speakerIds.
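For example, a selection sketch using that structure (gender filter per the usage note above):

```bash
# Sketch: first female speaker for a language, per the response structure above.
SPEAKER_ID=$($SCRIPTS/get-speakers.sh --language en \
  | jq -r '[.data.items[] | select(.gender == "female")][0].speakerId')
echo "selected: $SPEAKER_ID"
```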
Explain
$SCRIPTS/create-explainer.sh --content "Introduce ListenHub" --language en --mode info --speakers cozy-man-english
$SCRIPTS/generate-video.sh --episode "<episode-id>"
TTS
$SCRIPTS/create-tts.sh --type text --content "Welcome to ListenHub" --language en --mode smart --speakers cozy-man-english
Image Generation
$SCRIPTS/generate-image.sh --prompt "sunset over mountains" --size 2K --ratio 16:9
$SCRIPTS/generate-image.sh --prompt "style reference" --reference-images "https://example.com/ref1.jpg,https://example.com/ref2.png"
Supported sizes: 1K | 2K | 4K (default: 2K).
Supported aspect ratios: 16:9 | 1:1 | 9:16 | 2:3 | 3:2 | 3:4 | 4:3 | 21:9 (default: 16:9).
Reference images: comma-separated URLs, maximum 14.
Check Status
# Single-shot query
$SCRIPTS/check-status.sh --episode "<episode-id>" --type podcast
# Wait mode (recommended for automated polling)
$SCRIPTS/check-status.sh --episode "<episode-id>" --type podcast --wait
$SCRIPTS/check-status.sh --episode "<episode-id>" --type flow-speech --wait --timeout 60
$SCRIPTS/check-status.sh --episode "<episode-id>" --type explainer --wait --timeout 600
tts is accepted as an alias for flow-speech.
--wait mode handles polling internally with configurable limits.
Agents SHOULD use --wait instead of manual polling loops. On exit code 2, wait briefly and retry the command.
| Option | Default | Description |
|---|---|---|
| --wait | off | Enable polling mode |
| --max-polls | 30 | Maximum poll attempts |
| --timeout | 300 | Maximum total wait (seconds) |
| --interval | 10 | Base poll interval (seconds) |
Exit codes: 0 = completed, 1 = failed, 2 = timeout or rate-limited (still pending, safe to retry after a short wait).
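A retry sketch built on those exit codes:

```bash
# Sketch: --wait with retry on exit code 2 (still pending); fail fast on 1.
until $SCRIPTS/check-status.sh --episode "$EP" --type podcast --wait; do
  [ $? -eq 2 ] || { echo "generation failed"; exit 1; }
  sleep 15  # brief pause, then retry as documented
done
echo "completed"
```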
Language Adaptation
Automatic Language Detection: Adapt output language based on user input and context.
Detection Rules:
- User Input Language: If user writes in Chinese, respond in Chinese. If user writes in English, respond in English.
- Context Consistency: Maintain the same language throughout the interaction unless user explicitly switches.
- CLAUDE.md Override: If project-level CLAUDE.md specifies a default language, respect it unless user input indicates otherwise.
- Mixed Input: If user mixes languages, prioritize the dominant language (>50% of content).
Application:
- Status messages: "✅ Got it! Preparing…" (English) vs "✅ 收到！准备中…" (Chinese)
- Error messages: Match user’s language
- Result summaries: Match user’s language
- Script outputs: Pass through as-is (scripts handle their own language)
Example:
User (Chinese): "生成一个关于 AI 的播客"
AI (Chinese): "✅ 收到！准备双人播客..."
User (English): "Make a podcast about AI"
AI (English): "â Got it! Preparing two-person podcast..."
Principle: Language is interface, not barrier. Adapt seamlessly to user’s natural expression.
AI Responsibilities
Black Box Principle
You are a dispatcher, not an implementer.
Your job is to:
- Understand user intent (what do they want to create?)
- Select the correct script (which tool fits?)
- Format arguments correctly (what parameters?)
- Execute and relay results (what happened?)
Your job is NOT to:
- Understand or modify script internals
- Construct API calls directly
- Guess parameters not documented here
- Invent features that scripts don’t expose
Mode-Specific Behavior
ListenHub modes (passthrough):
- Podcast/Explain/TTS/Speech â pass user input directly
- Server has full AI capability to process content
- If user needs specific speakers → call get-speakers.sh first to list options
Labnana mode (passthrough by default):
- Image Generation â pass the user’s prompt through as-is by default
- The generation model handles prompt interpretation; client-side rewriting is not required
Prompt Handling (Image Generation)
Default behavior: transparent forwarding. Pass the user’s prompt directly to the script without modification.
When to offer optimization:
- The user provides only a short topic or phrase (e.g., “a cat”), AND
- The user has not explicitly stated they want verbatim generation
In this case, ask whether the user would like help enriching the prompt. Do not optimize without confirmation.
When to never modify:
- The user pastes a long, structured, or detailed prompt â treat them as experienced
- The user explicitly says “use this prompt exactly” or similar
If the user agrees to optimization, the following techniques are available as reference:
Style: “cyberpunk” → add “neon lights, futuristic, dystopian”; “ink painting” → add “Chinese ink painting, traditional art style”
Scene: time of day, lighting conditions, weather
Quality: “highly detailed”, “8K quality”, “cinematic composition”
Rules when optimizing:
- Use English keywords (models trained on English)
- Show the optimized prompt transparently before submitting
- Keep the user’s core intent unchanged
- Do not over-stack terminology or add unwanted elements
✅ Generation submitted, about 2-3 minutes
You can: • Wait and ask “done yet?” • Check listenhub.ai/app/library
✅ Generation submitted, explainer videos take 3-5 minutes
Includes: Script + narration + AI visuals
✅ TTS submitted, about 1-2 minutes
Wait a moment, or ask “done yet?” to check
Prompt: Cyberpunk city at night, neon lights reflecting on wet streets, towering skyscrapers with holographic ads, flying vehicles, cinematic composition, highly detailed, 8K quality
Resolution: 2K (16:9)
✅ Image generated! ~/Downloads/labnana-20260121-143145.jpg
Prompt: a futuristic car
Reference images: 1
Reference image URL: https://example.com/style-ref.jpg
Resolution: 2K (16:9)
✅ Image generated! ~/Downloads/labnana-20260122-154230.jpg
“AI Revolution: From GPT to AGI”
Listen: https://listenhub.ai/app/podcast
Duration: ~8 minutes
Need to download? Just say so.