symbolic-equation
Total installs: 9 · Installs this week: 9 · Site-wide rank: #32056
Install command:
npx skills add https://github.com/lingzhi227/claude-skills --skill symbolic-equation
Agent install distribution: codex (8), openclaw (7), gemini-cli (7), claude-code (7), github-copilot (7), kimi-cli (7)
Skill Documentation
Symbolic Equation Discovery
Discover interpretable scientific equations from data using LLM-guided evolutionary search.
Input
$0 → Dataset description, variable names, and physical context
References
- LLM-SR patterns (prompts, evolution, sampling):
~/.claude/skills/symbolic-equation/references/llmsr-patterns.md
Workflow (from LLM-SR)
Step 1: Define Problem Specification
Create a specification with:
- Input variables: Physical quantities with types (e.g., x: np.ndarray, v: np.ndarray)
- Output variable: Target quantity to predict
- Evaluation function: Fitness metric (typically negative MSE with parameter optimization)
- Physical context: Domain knowledge to guide equation discovery
```python
# Example specification
@equation.evolve
def equation(x: np.ndarray, v: np.ndarray, params: np.ndarray) -> np.ndarray:
    """Describe the acceleration of a damped nonlinear oscillator."""
    return params[0] * x
```
Step 2: Initialize Multi-Island Buffer
- Create N islands (default: 10) for population diversity
- Each island maintains independent clusters of equations
- Clusters group equations by performance signature
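The island/cluster structure above can be sketched as follows. This is an illustrative data structure, not LLM-SR's actual classes; the names (`Island`, `register`, `best`) and the rounded-score signature are assumptions.

```python
from collections import defaultdict

class Island:
    """One island of the buffer: equations grouped into clusters."""

    def __init__(self):
        # Clusters keyed by a performance signature (here: the rounded
        # score); each cluster holds (program_text, score) pairs.
        self.clusters = defaultdict(list)

    def register(self, program: str, score: float):
        # Equations with similar fitness land in the same cluster.
        self.clusters[round(score, 2)].append((program, score))

    def best(self):
        # Best (program, score) pair across all clusters on this island.
        all_programs = [p for c in self.clusters.values() for p in c]
        return max(all_programs, key=lambda p: p[1])

# N independent islands (default: 10)
islands = [Island() for _ in range(10)]
islands[0].register("params[0] * x", -1.23)
```

Because each island evolves independently, a bad local optimum on one island cannot drag down the whole population.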
Step 3: Evolutionary Search Loop
Repeat until convergence or max samples:
- Select island: Random island selection
- Build prompt: Sample top equations from clusters (softmax-weighted by score)
- LLM proposes: Generate new equation as improved version
- Evaluate: Execute on test data, compute fitness score
- Register: Add to island’s cluster if valid
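The five bullets above can be sketched as a single loop. Here `propose_fn` stands in for the LLM call and `evaluate_fn` for scoring on held-out data; both are assumed interfaces, and islands are simplified to plain lists of (program, score) pairs.

```python
import random

def search_loop(islands, propose_fn, evaluate_fn, max_samples=100):
    """Minimal sketch of the evolutionary search loop (assumed interface)."""
    for _ in range(max_samples):
        island = random.choice(islands)                    # select island
        elites = sorted(island, key=lambda p: -p[1])[:3]   # top equations for the prompt
        candidate = propose_fn(elites)                     # LLM proposes an improvement
        score = evaluate_fn(candidate)                     # fitness (e.g. negative MSE)
        if score is not None:                              # register only valid programs
            island.append((candidate, score))
    return islands

# Toy usage: "equations" are plain strings, scored by a dummy metric.
islands = [[("params[0] * x", -1.0)] for _ in range(3)]
search_loop(islands,
            propose_fn=lambda elites: elites[0][0] + " + v",
            evaluate_fn=lambda c: -len(c) * 0.01,
            max_samples=10)
```

In the real system, invalid candidates (syntax errors, timeouts) make `evaluate_fn` return nothing, so they never enter the buffer.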
Step 4: Prompt Construction
Present previous equations as versioned sequence:
```python
def equation_v0(x, v, params):
    """Initial version."""
    return params[0] * x

def equation_v1(x, v, params):
    """Improved version of equation_v0."""
    return params[0] * x + params[1] * v

def equation_v2(x, v, params):
    """Improved version of equation_v1."""
    # LLM completes this
```
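A helper that renders sampled equation bodies into this versioned sequence might look like the sketch below; `build_prompt` is a hypothetical name, and the input is assumed to be a list of (body, score) pairs ordered worst to best.

```python
def build_prompt(elites):
    """Hypothetical helper: render scored equation bodies as a versioned
    sequence, leaving the final version open for the LLM to complete."""
    parts = []
    for i, (body, _score) in enumerate(elites):
        doc = "Initial version." if i == 0 else f"Improved version of equation_v{i - 1}."
        parts.append(
            f'def equation_v{i}(x, v, params):\n'
            f'    """{doc}"""\n'
            f'    return {body}\n'
        )
    n = len(elites)
    # The open stub the LLM is asked to complete.
    parts.append(
        f'def equation_v{n}(x, v, params):\n'
        f'    """Improved version of equation_v{n - 1}."""\n'
    )
    return "\n".join(parts)

prompt = build_prompt([("params[0] * x", -2.0),
                       ("params[0] * x + params[1] * v", -0.5)])
```

Presenting ancestors as numbered versions frames the task as continuation rather than invention, which is what steers the LLM toward incremental refinement.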
Step 5: Island Reset (Diversity Maintenance)
Periodically (default: every 4 hours):
- Sort islands by best score
- Reset bottom 50% of islands
- Seed each reset island with best equation from a surviving island
- Restart cluster sampling temperature
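The reset policy above can be sketched as follows, again treating each island as a plain list of (program, score) pairs; the function name and signature are illustrative.

```python
import random

def reset_islands(islands, rng=None):
    """Sketch of the periodic reset: keep the top half of islands,
    reseed each bottom-half island with a survivor's best equation."""
    rng = rng or random.Random()
    ranked = sorted(islands, key=lambda isl: max(s for _, s in isl), reverse=True)
    survivors = ranked[: len(ranked) // 2]
    for island in ranked[len(ranked) // 2:]:
        donor = rng.choice(survivors)
        island.clear()
        island.append(max(donor, key=lambda p: p[1]))  # seed with donor's best
    return islands

# Toy usage: the weaker island is wiped and reseeded from the stronger one.
islands = [[("a", -1.0)], [("b", -5.0)]]
reset_islands(islands)
```

Reseeding from survivors keeps the reset islands competitive while still clearing out stale lineages.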
Step 6: Extract Best Equations
After search completes:
- Collect best equation from each island
- Rank by fitness score
- Simplify if possible (algebraic simplification)
- Report with physical interpretation
Cluster Sampling
Temperature-scheduled softmax over cluster scores:
```python
temperature = T_init * (1 - (num_programs % period) / period)
probabilities = softmax(cluster_scores / temperature)
```
- Higher temperature → more exploration
- Lower temperature → more exploitation of best clusters
- Within clusters: shorter programs are preferred (Occam’s razor)
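The schedule above is a direct transcription in the sketch below; the default values for `T_init` and `period` are illustrative, not necessarily LLM-SR's.

```python
import numpy as np

def cluster_probabilities(cluster_scores, num_programs, T_init=0.1, period=30_000):
    """Temperature-scheduled softmax over cluster scores (sketch)."""
    # Temperature decays linearly within each period, then resets.
    temperature = T_init * (1 - (num_programs % period) / period)
    logits = np.asarray(cluster_scores, dtype=float) / max(temperature, 1e-6)
    logits -= logits.max()          # shift for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Early in a period (high temperature): best cluster favored but not dominant.
p = cluster_probabilities([-1.0, -0.5, -2.0], num_programs=0)
```

As `num_programs` grows within a period, the temperature falls and the distribution sharpens onto the best-scoring clusters.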
Rules
- Equations must use only standard mathematical operations
- Parameter optimization via scipy BFGS or Adam
- Fitness = negative MSE (higher is better)
- Timeout protection for equation evaluation
- No recursive equations allowed
- Physical interpretability is preferred over pure fit
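Combining two of the rules above (BFGS parameter optimization, fitness as negative MSE), a minimal evaluator might look like this sketch; the function name and the `equation(x, v, params)` calling convention follow the skeleton from Step 1, but the rest is an assumption.

```python
import numpy as np
from scipy.optimize import minimize

def fitness(equation, inputs, y, n_params=2):
    """Optimize params with scipy BFGS, return negative MSE (higher is better)."""
    def mse(params):
        pred = equation(*inputs, params)
        return float(np.mean((pred - y) ** 2))
    result = minimize(mse, x0=np.ones(n_params), method="BFGS")
    return -result.fun

# Toy usage on a linear target a = -2x - 0.5v:
x = np.linspace(-1.0, 1.0, 50)
v = np.linspace(1.0, -1.0, 50)
y = -2.0 * x - 0.5 * v
score = fitness(lambda x, v, p: p[0] * x + p[1] * v, (x, v), y)
```

In practice this call would also be wrapped in the timeout protection the rules require, so a pathological candidate cannot stall the search.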
Related Skills
- Upstream: data-analysis, math-reasoning
- Downstream: paper-writing-section
- See also: algorithm-design