observability-analyze-session-logs
```bash
npx skills add https://github.com/dawiddutoit/custom-claude --skill observability-analyze-session-logs
```
Skill Documentation
Analyze Session Logs
IMPORTANT DISTINCTION: This skill analyzes Claude Code session transcripts (what Claude saw and thought), NOT application/production logs (what code executed). For application logs, use the analyze-logs skill instead.
Quick Start
Most common usage (list all messages):
```bash
python3 .claude/tools/utils/view_session_context.py <session-file.jsonl> --list
```
View context at a specific point (by message index):
```bash
python3 .claude/tools/utils/view_session_context.py <session-file.jsonl> --message-index 10
```
View context at a specific message (by UUID from logs/errors):
```bash
python3 .claude/tools/utils/view_session_context.py <session-file.jsonl> --uuid "3c8467d6-..."
```
When to Use This Skill
Invoke this skill when users mention:
- “why did Claude do X?”
- “what was in the context window?”
- “analyze session” / “check session logs”
- “debug Claude behavior”
- “token usage investigation”
- “context window exhaustion”
- “track agent delegation”
- Any mention of `.jsonl` session files
Use cases:
- Debugging decisions – “Why did Claude choose approach X?” → View context at decision point
- Token analysis – “Where are tokens being used?” → Track cache creation vs cache read
- Agent tracking – “How are agents being delegated?” → Follow sidechain messages
- Context exhaustion – “Why did Claude lose context?” → See context window growth
- Performance issues – “Why is Claude slow?” → Identify cache thrashing
What This Skill Does
Analyzes Claude Code session transcripts to provide forensic visibility into Claude’s internal state, decision-making process, context window content, and token usage patterns.
What it reveals:
- Context window content at any point in conversation
- Token usage breakdown (input, cache creation, cache read, output)
- Message chains (parent-child relationships)
- Agent delegation patterns (sidechain vs main thread)
- Context window growth over time
- Thinking blocks (Claude’s internal reasoning)
- Tool calls with parameters and results
Key Distinction:
- This skill: Analyzes Claude’s session transcripts → Shows “what Claude saw and thought”
- analyze-logs skill: Analyzes OpenTelemetry application logs → Shows “what code executed”
- Use both together for complete debugging
Session File Locations
User-level session files (Claude Code transcripts):
`~/.claude/projects/-Users-{username}-{project-path}/{session-uuid}.jsonl`
Find recent sessions:
```bash
ls -lt ~/.claude/projects/-Users-$(whoami)-*/*.jsonl | head -5
```
Instructions
Follow this workflow to analyze Claude Code session logs:
1. Locate session file – Find the `.jsonl` file in `~/.claude/projects/`
2. Choose analysis mode – List messages, view context at index, or view at UUID
3. Execute analysis – Run `view_session_context.py` with appropriate flags
4. Interpret results – Examine context window, token usage, and decision points
5. Report findings – Explain Claude’s behavior with evidence from context
Analysis Workflow
Step 1: Locate Session File
```bash
# Find most recent session file
ls -lt ~/.claude/projects/-Users-$(whoami)-*/*.jsonl | head -5

# Or search by UUID from error messages
grep -r "uuid" ~/.claude/projects/*/
```
Step 2: Choose Analysis Mode
Mode 1: List All Messages (Overview)
```bash
python3 .claude/tools/utils/view_session_context.py <session.jsonl> --list
```
Use when: Initial investigation, finding specific messages, tracking token usage
Mode 2: Context Window at Specific Point
```bash
python3 .claude/tools/utils/view_session_context.py <session.jsonl> --message-index 10
```
Use when: Debugging specific decision, understanding available context, seeing thinking blocks
Mode 3: View Specific Message by UUID
```bash
python3 .claude/tools/utils/view_session_context.py <session.jsonl> --uuid "3c8467d6-..."
```
Use when: Error logs reference specific message UUID
Mode 4: View Raw JSON
```bash
python3 .claude/tools/utils/view_session_context.py <session.jsonl> --message-index 10 --raw
```
Use when: Need exact JSON structure, programmatic analysis
Step 3: Execute Analysis
Run the appropriate command using the Bash tool:
```bash
python3 .claude/tools/utils/view_session_context.py ~/.claude/projects/-Users-*/abc123.jsonl --list
```
Step 4: Interpret Results
For List Output:
- Check message count – How long was the session?
- Identify MAIN vs SIDE – SIDE indicates agent delegation
- Spot token patterns:
- High Cache Create = New context being cached
- High Cache Read = Good cache utilization
- High Cache Create repeatedly = Cache thrashing
- Find interesting points – Large output tokens, sudden cache creation, sidechains
- Note message indices for deeper investigation
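The MAIN vs SIDE split can also be sanity-checked straight from the raw file. A minimal sketch, assuming each JSONL line is a JSON object and that `isSidechain` is the flag Claude Code sets on delegated-agent messages (verify the field name against `references/reference.md`):

```python
import json

def main_vs_side(jsonl_lines):
    """Count (main-thread, sidechain) messages in a session transcript."""
    side = sum(1 for line in jsonl_lines
               if json.loads(line).get("isSidechain"))
    return len(jsonl_lines) - side, side
```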
For Context Window Output:
- Review message chain – Understand conversation flow
- Read THINKING blocks – See Claude’s internal reasoning
- Check TOOL calls – What tools were invoked and why
- Examine token breakdown (input, cache creation, cache read, output)
- Check total context size – Is it approaching 200k limit?
For Token Usage:
- Cache creation spike = Context changed significantly
- High cache read = Good utilization (cost effective)
- Low cache read = Cache misses (investigate why)
- Growing total context = Approaching 200k limit
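These cache-read heuristics reduce to one pass over the file. A sketch under the assumption that each line carries a `message.usage` object with `cache_creation_input_tokens` and `cache_read_input_tokens` fields (the names Anthropic's usage reporting uses; confirm against your own transcripts):

```python
import json

def cache_read_ratio(jsonl_lines):
    """Fraction of cached prompt tokens that were read rather than
    re-created; values near 1.0 mean good cache utilization, low
    values suggest cache misses or thrashing."""
    created = read = 0
    for line in jsonl_lines:
        usage = json.loads(line).get("message", {}).get("usage", {})
        created += usage.get("cache_creation_input_tokens", 0)
        read += usage.get("cache_read_input_tokens", 0)
    total = created + read
    return read / total if total else 0.0
```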
Step 5: Report Findings
Always provide:
- Summary – Session length, main vs sidechain messages, total tokens
- Key insights – What explains the behavior?
- Specific examples – Quote relevant thinking blocks, tool calls
- Context evidence – Show what Claude had access to
- Suggested next steps – Additional investigation or fixes
Example response format:
```
Analyzed session abc123.jsonl (42 messages):

Key Findings:

1. Agent delegation at message 15 (→ unit-tester)
   - Context window at that point: 24,059 tokens
   - Thinking: "Need comprehensive test coverage for new service"
   - Agent had access to service implementation but not architectural context

2. Cache thrashing at messages 28-35
   - Cache creation spiked to 18k tokens each message
   - Context kept changing due to repeated file edits
   - Suggestion: Batch edits to reduce cache invalidation

3. Context exhaustion at message 40
   - Total context: 189,234 tokens (approaching 200k limit)
   - Claude started summarizing instead of quoting full code

To investigate further:
- View agent delegation context: --message-index 15
- Examine cache thrashing: --message-index 30
```
Best Practices
- Start with List Mode – Always get the overview first to identify interesting points
- Identify Patterns – Look for high cache creation (10k+ tokens), SIDE messages, large outputs
- Use Message Index for Deep Dives – From list output, drill down to specific points
- Follow Agent Delegation Chains – When you see [SIDE], trace back to parent in main thread
- Track Token Usage – Good: high cache read (80%+), Bad: repeated cache creation (thrashing)
- Compare Before/After – When debugging changes, analyze sessions before and after fix
- Correlate with Code Execution Logs – Session logs show “what Claude thought”, OpenTelemetry logs show “what code executed”
- Save Important Sessions – Archive critical sessions to `.claude/artifacts/` for future reference
Common Scenarios
“Why did Claude do X?”
1. Find session file → list messages (`--list`)
2. Identify the message where the decision was made
3. View context at that point (`--message-index N`)
4. Read THINKING blocks to see the reasoning
5. Explain: “Claude did X because context included Y but not Z”
Token Usage Investigation
1. List messages to see token breakdown
2. Identify spikes in `cache_creation_input_tokens`
3. View context at spike points
4. Determine what caused cache invalidation
5. Suggest optimizations (batch edits, cache earlier)
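Step 2 above (finding the spikes) is easy to automate. The sketch below flags any message whose cache creation exceeds a threshold, reusing the assumed `message.usage` field names; treat the field names as an assumption to verify against your transcripts:

```python
import json

def cache_creation_spikes(jsonl_lines, threshold=10_000):
    """Return (message_index, tokens) pairs where
    cache_creation_input_tokens exceeds the threshold --
    candidate cache-thrashing points for --message-index drill-down."""
    spikes = []
    for i, line in enumerate(jsonl_lines):
        usage = json.loads(line).get("message", {}).get("usage", {})
        created = usage.get("cache_creation_input_tokens", 0)
        if created > threshold:
            spikes.append((i, created))
    return spikes
```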
Agent Delegation Analysis
1. List messages to find sidechains ([SIDE])
2. View sidechain message context
3. Trace back to parent in main thread
4. Compare context available in main vs sidechain
5. Explain what the agent could/couldn’t see
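Tracing back to the parent follows the parent-child links listed under “What it reveals”. A minimal sketch, assuming each transcript line carries `uuid` and `parentUuid` fields (as Claude Code transcripts do; verify against `references/reference.md`):

```python
import json

def delegation_chain(jsonl_lines, uuid):
    """Walk parentUuid links from a (sidechain) message back toward
    the root, returning the UUID chain in root-to-leaf order."""
    by_uuid = {m["uuid"]: m
               for m in map(json.loads, jsonl_lines) if "uuid" in m}
    chain = []
    while uuid in by_uuid:
        chain.append(uuid)
        uuid = by_uuid[uuid].get("parentUuid")
    return list(reversed(chain))
```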
Context Window Exhaustion
1. List messages to track context growth
2. Identify where total context approaches 200k
3. View context at the exhaustion point
4. Analyze what’s consuming space
5. Suggest context optimization (new session, prune messages)
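Steps 1-2 (tracking growth toward the 200k limit) reduce to summing the prompt-side token fields per message. A sketch with the same assumed `message.usage` field names; `first_near_limit` is a hypothetical helper:

```python
import json

def context_totals(jsonl_lines):
    """Total prompt context per message: input + cache creation
    + cache read tokens."""
    totals = []
    for line in jsonl_lines:
        u = json.loads(line).get("message", {}).get("usage", {})
        totals.append(u.get("input_tokens", 0)
                      + u.get("cache_creation_input_tokens", 0)
                      + u.get("cache_read_input_tokens", 0))
    return totals

def first_near_limit(totals, limit=200_000, frac=0.9):
    """Index of the first message above frac * limit, or None."""
    return next((i for i, t in enumerate(totals) if t > frac * limit), None)
```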
Integration with Other Skills
- analyze-logs – Combine session logs (what Claude thought) with execution logs (what code did)
- debug-test-failures – See what context was available when tests were written
- orchestrate-agents – Track actual delegation patterns vs expected
- check-progress-status – Understand how tasks were completed (decision points, delegations)
Supporting Files
- `references/reference.md` – Technical depth:
- Session file format specification (JSONL structure)
- Data models and content block types
- Parent-child chain reconstruction algorithm
- Cache token types and pricing details
- Performance characteristics and limits
- Tool implementation details
- Understanding output format (session structure, token breakdown, message chains)
- Advanced usage patterns (session comparison, pattern detection, metrics)
- Troubleshooting guide (common errors, solutions)
- Complete command reference
- `templates/response-template.md` – Response formatting:
- 6 response templates for different contexts
- List mode summary template
- Context window analysis template
- Agent delegation investigation template
- Token usage spike template
- Context exhaustion template
Requirements
Tools:
- Python 3.12+ (already in project)
- Session viewer: `.claude/tools/utils/view_session_context.py` (bundled)
- jq (optional, for advanced analysis): `brew install jq`
Session files:
- User-level: `~/.claude/projects/-Users-{username}-{project-path}/{session-uuid}.jsonl`
- Generated automatically by Claude Code
Verification:
```bash
# Verify tool exists
ls .claude/tools/utils/view_session_context.py

# Verify session files exist
ls -l ~/.claude/projects/-Users-$(whoami)-*/

# Test the tool
python3 .claude/tools/utils/view_session_context.py \
  $(ls -t ~/.claude/projects/-Users-$(whoami)-*/*.jsonl | head -1) --list
```
Quick Reference
| Goal | Command |
|---|---|
| List all messages | `python3 .claude/tools/utils/view_session_context.py <session.jsonl> --list` |
| View context at point | `python3 .claude/tools/utils/view_session_context.py <session.jsonl> --message-index 10` |
| View by UUID | `python3 .claude/tools/utils/view_session_context.py <session.jsonl> --uuid "abc123..."` |
| View raw JSON | `python3 .claude/tools/utils/view_session_context.py <session.jsonl> --message-index 10 --raw` |
| Find recent sessions | `ls -lt ~/.claude/projects/-Users-$(whoami)-*/*.jsonl \| head -5` |
Key Messages:
- This skill provides X-ray vision into Claude’s decision-making
- Use when behavior is unexpected or token usage is unclear
- Complements OpenTelemetry logs (code execution) with context window visibility (what Claude thought)
- Essential for debugging complex agent orchestration
- Start with `--list`, then drill down with `--message-index`
Remember: Session logs show what Claude had access to and how it reasoned. This is your forensic tool for understanding “why Claude did X.”