meta-analyst-agent
Total installs: 4
Weekly installs: 4
Site rank: #53029
Install command:
npx skills add https://github.com/psh355q-ui/szdi57465yt --skill meta-analyst-agent
Agent install distribution:
- claude-code: 3
- windsurf: 2
- trae: 2
- opencode: 2
- codex: 2
- antigravity: 2
Skill Documentation
Meta Analyst Agent – AI Self-Improvement Analyst
Role
Tracks and analyzes the AI's own mistakes and proposes system improvements: a meta-level Agent in which "AI analyzes AI".
Core Capabilities
1. Mistake Tracking
Types of Mistakes
# Prediction mistake
PREDICTION_ERROR = "Signal was BUY, but the price actually fell"
# Judgment mistake
JUDGMENT_ERROR = "Proceeded despite low War Room consensus"
# Timing mistake
TIMING_ERROR = "Entered too early, exited too late"
# Risk mistake
RISK_ERROR = "Stop loss too tight, position too large"
Mistake Database
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Mistake:
    mistake_id: str
    timestamp: datetime
    signal_id: str
    mistake_type: str
    description: str
    actual_loss: float
    root_cause: str
    affected_agents: List[str]
2. Pattern Analysis
Q: Which Agent is wrong most often?
A: News Agent, 48% win rate (the lowest)
Q: In which situations do mistakes occur?
A: When VIX > 25, the War Room win rate drops to 42% (vs. a 61% average)
Q: Which mistakes are repeated?
A: The "buy on an overbought signal" pattern occurred 5 times
Q: Is the constitution working properly?
A: 75% of constitutional rejections prevented an actual loss (effective)
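The VIX-conditioned win-rate comparison above can be sketched as follows. The `(won, vix_at_entry)` tuple shape is an illustrative assumption for this sketch, not the system's actual schema.

```python
from typing import List, Optional, Tuple

def win_rate(trades: List[Tuple[bool, float]],
             vix_above: Optional[float] = None) -> float:
    """Win rate over all trades, optionally restricted to high-VIX trades.

    Each trade is a hypothetical (won, vix_at_entry) pair.
    """
    if vix_above is not None:
        trades = [t for t in trades if t[1] > vix_above]
    if not trades:
        return 0.0
    return sum(1 for won, _ in trades if won) / len(trades)

# Compare the overall win rate against the high-volatility subset
trades = [(True, 15.0), (True, 18.0), (True, 26.0),
          (False, 30.0), (False, 28.0), (False, 27.0)]
overall = win_rate(trades)                    # 0.5
high_vix = win_rate(trades, vix_above=25.0)   # 0.25
```

A gap like this between the overall and conditional rates is exactly the signal that triggers a "which situations?" finding.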
3. Improvement Proposals
IF News Agent win rate < 50%:
→ Proposal: "Reduce News Agent weight or strengthen filtering"
IF win rate < 50% when VIX > 25:
→ Proposal: "Cut position size by 50% in high-volatility environments"
IF a specific Agent underperforms persistently:
→ Proposal: "Review and update that Agent's SKILL.md"
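The IF/THEN rules above can be expressed as a minimal lookup; the metric keys (`news_agent_win_rate`, `high_vix_win_rate`) are assumptions made for illustration, not the system's real names.

```python
from typing import Dict, List

def propose(metrics: Dict[str, float]) -> List[str]:
    """Map observed win-rate metrics to improvement proposals.

    Metric keys are hypothetical; a real system would pull them
    from the mistake database.
    """
    proposals = []
    if metrics.get("news_agent_win_rate", 1.0) < 0.50:
        proposals.append("Reduce News Agent weight or strengthen filtering")
    if metrics.get("high_vix_win_rate", 1.0) < 0.50:
        proposals.append("Cut position size by 50% in high-volatility environments")
    return proposals
```

Missing metrics default to 1.0 so that a rule only fires on evidence, never on absence of data.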
Decision Framework
Step 1: Collect Mistakes
- Losing trades
- Constitutional rejections (blocked trades)
- Gaps between expected and actual results
Step 2: Classify Mistakes
- Prediction Error
- Judgment Error
- Timing Error
- Risk Error
Step 3: Identify Root Causes
- Agent problem?
- Data problem?
- Strategy problem?
- Environment change?
Step 4: Pattern Recognition
- Repeated mistakes?
- Mistakes under specific conditions?
- A specific Agent at fault?
Step 5: Generate Proposals
- Adjust Agent parameters
- Update SKILL.md
- Add new rules
- Add or remove Agents
Step 6: Prioritize by Impact
- Frequency × loss magnitude
- Ease of improvement
- Risk
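Step 6's "frequency × loss magnitude" prioritization can be sketched as a single scoring function; the 1-to-3 effort scale is an illustrative assumption.

```python
def impact_score(frequency: int, avg_loss_usd: float, effort: int = 1) -> float:
    """Priority score: frequency times loss magnitude, discounted by effort.

    `effort` (1 = easy, 3 = hard) is a hypothetical scale for this sketch.
    """
    return (frequency * abs(avg_loss_usd)) / effort

# A frequent, costly, easy fix outranks a rarer, harder one
print(impact_score(5, -300.0))            # 1500.0
print(impact_score(3, -200.0, effort=3))  # 200.0
```

Sorting candidate proposals by this score descending yields the HIGH/MEDIUM ordering seen in the output format below.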
Output Format
{
  "agent": "meta_analyst",
  "analysis_period": {
    "start_date": "2025-11-21",
    "end_date": "2025-12-21",
    "days": 30
  },
  "mistake_summary": {
    "total_mistakes": 18,
    "total_loss_usd": 4500,
    "avg_loss_per_mistake": 250,
    "mistake_types": {
      "prediction_error": 8,
      "judgment_error": 5,
      "timing_error": 3,
      "risk_error": 2
    }
  },
  "agent_performance_issues": [
    {
      "agent": "news-agent",
      "issue": "Low win rate (48%)",
      "frequency": "High",
      "impact_usd": -2100,
      "root_cause": "Inaccurate news sentiment analysis",
      "recommendation": "Retrain news sentiment model or reduce weight"
    },
    {
      "agent": "trader-agent",
      "issue": "Buying in the overbought zone (RSI > 70)",
      "frequency": "Medium",
      "impact_usd": -800,
      "root_cause": "RSI threshold too lenient",
      "recommendation": "Add a rule forbidding BUY when RSI > 70"
    }
  ],
  "repeated_mistakes": [
    {
      "pattern": "Aggressive entries when VIX > 25",
      "occurrences": 5,
      "total_loss_usd": -1500,
      "recommendation": "Cut position size by 50% when VIX > 25"
    },
    {
      "pattern": "Proceeding despite War Room consensus < 70%",
      "occurrences": 3,
      "total_loss_usd": -600,
      "recommendation": "Enforce the minimum 70% consensus rule"
    }
  ],
  "constitutional_analysis": {
    "total_rejections": 12,
    "defensive_wins": 9,
    "defensive_win_rate": 0.75,
    "avoided_loss_usd": 3200,
    "verdict": "The constitution is working effectively"
  },
  "improvement_proposals": [
    {
      "priority": "HIGH",
      "category": "Agent Adjustment",
      "title": "Reduce News Agent weight",
      "rationale": "Lowest win rate at 48%",
      "action": "Lower the News Agent weight in the War Room from 1.0 to 0.7",
      "expected_improvement": "Overall win rate +3%p",
      "implementation_difficulty": "LOW"
    },
    {
      "priority": "HIGH",
      "category": "Risk Rule",
      "title": "VIX-based position reduction",
      "rationale": "42% win rate when VIX > 25",
      "action": "IF VIX > 25: position_size *= 0.5",
      "expected_improvement": "Max Drawdown -3%p",
      "implementation_difficulty": "LOW"
    },
    {
      "priority": "MEDIUM",
      "category": "SKILL.md Update",
      "title": "Tighten the Trader Agent RSI rule",
      "rationale": "5 losses from buying in the overbought zone",
      "action": "Add a rule forbidding BUY when RSI > 70",
      "expected_improvement": "Trader accuracy +5%p",
      "implementation_difficulty": "MEDIUM"
    }
  ],
  "learning_insights": [
    "Constitutional defense system is highly effective (75% accuracy)",
    "News Agent needs urgent improvement (largest loss driver)",
    "Rules for high-volatility environments are needed"
  ]
}
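The `constitutional_analysis` block above can be derived mechanically from rejection records. A hedged sketch, assuming each rejection stores whether the blocked trade would have lost and the dollar amount avoided:

```python
from typing import Dict, List, Tuple

def constitutional_analysis(rejections: List[Tuple[bool, float]]) -> Dict:
    """Summarize rejections given as (would_have_lost, avoided_usd) pairs.

    The tuple shape is an assumption made for this sketch.
    """
    wins = [r for r in rejections if r[0]]
    total = len(rejections)
    return {
        "total_rejections": total,
        "defensive_wins": len(wins),
        "defensive_win_rate": len(wins) / total if total else 0.0,
        "avoided_loss_usd": sum(usd for _, usd in wins),
    }

# Three of four rejections prevented a loss
summary = constitutional_analysis([(True, 100.0), (True, 250.0),
                                   (True, 150.0), (False, 0.0)])
# summary["defensive_win_rate"] == 0.75, summary["avoided_loss_usd"] == 500.0
```

Determining `would_have_lost` requires shadow-tracking the rejected trade, which is presumably what the `ShadowTrade` model in the Integration section records.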
Examples
Example 1: News Agent Problem Discovered
Observation:
- 23 News Agent signals
- 48% win rate (other Agents average 65%)
- Loss of -$2,100
Analysis:
- Root Cause: inaccurate news sentiment analysis
- Pattern: price frequently fell even on positive news
Proposal:
- Reduce the News Agent weight from 1.0 to 0.7
- Retrain the sentiment analysis model
Example 2: Repeated Timing Mistake
Observation:
- "Buying in the overbought (RSI > 70) zone" repeated 5 times
- Average loss of -3.2%
Analysis:
- The Trader Agent's RSI threshold is the problem
- Current: buying allowed while RSI < 75
- Improvement: tighten the threshold to RSI < 70
Proposal:
- Update the Trader Agent SKILL.md
- When RSI > 70, allow only HOLD or SELL
Example 3: Constitution Effectiveness Verification
Observation:
- 12 constitutional rejections
- 9 of them would have been actual losses (75%)
- Avoided losses of $3,200
Analysis:
- The constitution is working effectively
- Article 4 (Risk) is triggered most often
Proposal:
- Keep the constitution as-is
- Consider fine-tuning the Article 4 threshold
Guidelines
Do's ✅
- Base conclusions on objective data: no gut feelings
- Analyze root causes: identify the cause, not the symptom
- Make proposals actionable: concrete measures
- Prioritize clearly: impact vs. effort
Don'ts ❌
- Don't over-trust past performance
- Don't propose overfitted fixes (overreacting to one-time events)
- Don't shift blame (only faulting the Agents)
- Don't favor complex solutions (simpler is better)
Integration
Mistake Collection
from datetime import datetime, timedelta
from typing import List

from backend.database.models import TradingSignal, ShadowTrade

# `db` is assumed to be an existing SQLAlchemy session provided elsewhere.

def collect_mistakes(days: int = 30) -> List[Mistake]:
    """Collect recent mistakes from losing trades."""
    mistakes = []
    # Losing trades within the analysis window
    losing_trades = db.query(TradingSignal).filter(
        TradingSignal.created_at >= datetime.now() - timedelta(days=days),
        TradingSignal.actual_return < 0
    ).all()
    for trade in losing_trades:
        mistakes.append(Mistake(
            mistake_id=f"MST-{trade.signal_id}",
            timestamp=trade.created_at,
            signal_id=trade.signal_id,
            mistake_type="PREDICTION_ERROR",
            description=f"Expected {trade.action}, got loss {trade.actual_return:.2%}",
            actual_loss=trade.actual_pnl,
            root_cause="TBD",  # to be determined during analysis
            affected_agents=[trade.source]
        ))
    return mistakes
Pattern Analysis
from typing import Dict, List, Tuple

def analyze_agent_performance(mistakes: List[Mistake]) -> List[Tuple[str, Dict]]:
    """Group mistakes by agent and rank agents by total loss impact."""
    by_agent: Dict[str, Dict] = {}
    for mistake in mistakes:
        for agent in mistake.affected_agents:
            if agent not in by_agent:
                by_agent[agent] = {
                    'count': 0,
                    'total_loss': 0.0,
                    'mistakes': []
                }
            by_agent[agent]['count'] += 1
            by_agent[agent]['total_loss'] += mistake.actual_loss
            by_agent[agent]['mistakes'].append(mistake)
    # Losses are negative, so ascending order puts the worst impact first
    return sorted(
        by_agent.items(),
        key=lambda x: x[1]['total_loss']
    )
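A quick usage sketch of the grouping above. A condensed version of the function and a stand-in `Mistake` dataclass with only the fields the analysis touches are defined here so the snippet runs on its own.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Mistake:
    """Stand-in carrying only the fields the analysis touches."""
    actual_loss: float
    affected_agents: List[str]

def analyze_agent_performance(mistakes: List[Mistake]) -> List[Tuple[str, Dict]]:
    """Group mistakes by agent; worst total loss (most negative) first."""
    by_agent: Dict[str, Dict] = {}
    for m in mistakes:
        for agent in m.affected_agents:
            entry = by_agent.setdefault(agent, {"count": 0, "total_loss": 0.0})
            entry["count"] += 1
            entry["total_loss"] += m.actual_loss
    return sorted(by_agent.items(), key=lambda kv: kv[1]["total_loss"])

ranked = analyze_agent_performance([
    Mistake(-300.0, ["news-agent"]),
    Mistake(-100.0, ["trader-agent"]),
    Mistake(-200.0, ["news-agent"]),
])
# ranked[0] is ("news-agent", {"count": 2, "total_loss": -500.0})
```

Ascending sort on `total_loss` is deliberate: losses are negative, so the agent causing the most damage comes first.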
Proposal Generation
def generate_improvement_proposals(
    agent_issues: List[Dict],
    patterns: List[Dict]
) -> List[Dict]:
    """Generate actionable improvement proposals."""
    proposals = []
    # Agent performance issues
    for issue in agent_issues:
        if issue['frequency'] == 'High' and issue['impact_usd'] < -1000:
            proposals.append({
                'priority': 'HIGH',
                'category': 'Agent Adjustment',
                'title': f"Improve {issue['agent']}",
                'action': issue['recommendation'],
                'expected_improvement': "Win Rate +3-5%"
            })
    # Repeated patterns
    for pattern in patterns:
        if pattern['occurrences'] >= 3:
            proposals.append({
                'priority': 'MEDIUM',
                'category': 'Risk Rule',
                'title': f"Prevent repeated mistake: {pattern['pattern']}",
                'action': pattern['recommendation'],
                # total_loss_usd is negative, so take abs() for display
                'expected_improvement': f"Avoid ${abs(pattern['total_loss_usd']):.0f} loss"
            })
    return proposals
Performance Metrics
- Mistake Detection Recall: > 95% (capture every loss)
- Root Cause Accuracy: > 80%
- Proposal Adoption Rate: > 50% (proposals actually applied)
- Improvement Realized: average +3%p win rate after proposals are applied
Continuous Learning Loop
1. Trading → 2. Mistakes → 3. Analysis → 4. Proposals → 5. Implementation → back to 1. Trading (improved)
Version History
- v1.0 (2025-12-21): Initial release with mistake tracking and improvement proposals