llm-security

📁 semgrep/skills 📅 Jan 20, 2026
Total installs: 93
Weekly installs: 93
Site-wide rank: #2482

Install command
npx skills add https://github.com/semgrep/skills --skill llm-security

Installs by agent

claude-code 75
gemini-cli 70
codex 65
opencode 65
cursor 40

Skill documentation

LLM Security Guidelines (OWASP Top 10 for LLM 2025)

Comprehensive security rules for building LLM applications, based on the OWASP Top 10 for Large Language Model Applications 2025 – the authoritative guide to LLM security risks.

How It Works

  1. When building or reviewing LLM applications, reference these security guidelines
  2. Each rule includes vulnerable patterns and secure implementations
  3. Rules cover the complete LLM application lifecycle: training, deployment, and inference

Categories

Critical Impact

  • LLM01: Prompt Injection – Prevent direct and indirect prompt manipulation
  • LLM02: Sensitive Information Disclosure – Protect PII, credentials, and proprietary data
  • LLM03: Supply Chain – Secure model sources, training data, and dependencies
  • LLM04: Data and Model Poisoning – Prevent training data manipulation and backdoors
  • LLM05: Improper Output Handling – Sanitize LLM outputs before downstream use (see the sketch after this list)
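
As a minimal illustration of LLM01/LLM05 – treating model output as untrusted before it reaches a browser or a database – the Python sketch below uses only the standard library. The helper names (render_answer, log_answer) are illustrative, not taken from the rules files:

import html
import sqlite3

def render_answer(llm_output: str) -> str:
    # Escape model output before embedding it in HTML, so
    # model-controlled text cannot inject markup or scripts (LLM05).
    return f"<p>{html.escape(llm_output)}</p>"

def log_answer(conn: sqlite3.Connection, llm_output: str) -> None:
    # Parameterized query: the output is bound as data, never
    # concatenated into the SQL statement.
    conn.execute("INSERT INTO answers (text) VALUES (?)", (llm_output,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE answers (text TEXT)")
hostile = "<script>alert(1)</script>'); DROP TABLE answers;--"
print(render_answer(hostile))  # printed with tags escaped, not executable
log_answer(conn, hostile)      # stored as inert text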

High Impact

  • LLM06: Excessive Agency – Limit LLM permissions, functionality, and autonomy (see the sketch after this list)
  • LLM07: System Prompt Leakage – Protect system prompts from disclosure
  • LLM08: Vector and Embedding Weaknesses – Secure RAG systems and embeddings
  • LLM09: Misinformation – Mitigate hallucinations and false outputs
  • LLM10: Unbounded Consumption – Prevent DoS, cost attacks, and model theft
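
A minimal sketch of LLM06-style agency control, assuming a simple dict-based tool registry (the tool names and the dispatch function are hypothetical): allowlist the tools a model may invoke, and require operator confirmation for high-impact ones.

READ_ONLY_TOOLS = {"search_docs", "get_weather"}
APPROVAL_REQUIRED = {"send_email", "delete_record"}

def dispatch(tool_name: str, args: dict, tools: dict) -> str:
    # Least privilege: anything outside the allowlist is refused outright.
    if tool_name not in READ_ONLY_TOOLS | APPROVAL_REQUIRED:
        raise PermissionError(f"tool not allowlisted: {tool_name}")
    # Human-in-the-loop: high-impact tools wait for operator approval.
    if tool_name in APPROVAL_REQUIRED:
        if input(f"Allow {tool_name}({args})? [y/N] ").strip().lower() != "y":
            return "action rejected by operator"
    return tools[tool_name](**args)

tools = {
    "search_docs": lambda query: f"results for {query!r}",
    "send_email": lambda to, body: f"sent to {to}",
}
print(dispatch("search_docs", {"query": "rate limits"}, tools))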

Usage

Reference the rules in the rules/ directory for detailed examples:

  • rules/prompt-injection.md – Prompt injection prevention (LLM01)
  • rules/sensitive-disclosure.md – Sensitive information protection (LLM02)
  • rules/supply-chain.md – Supply chain security (LLM03)
  • rules/data-poisoning.md – Data and model poisoning prevention (LLM04)
  • rules/output-handling.md – Output handling security (LLM05)
  • rules/excessive-agency.md – Agency control (LLM06)
  • rules/system-prompt-leakage.md – System prompt protection (LLM07)
  • rules/vector-embedding.md – RAG and embedding security (LLM08)
  • rules/misinformation.md – Misinformation mitigation (LLM09)
  • rules/unbounded-consumption.md – Resource consumption control (LLM10)
  • rules/_sections.md – Full index of all rules

Quick Reference

Each vulnerability with its key prevention measures:

  • Prompt Injection – Input validation, output filtering, privilege separation
  • Sensitive Disclosure – Data sanitization, access controls, encryption
  • Supply Chain – Verify models, SBOM, trusted sources only
  • Data Poisoning – Data validation, anomaly detection, sandboxing
  • Output Handling – Treat LLM as untrusted, encode outputs, parameterize queries
  • Excessive Agency – Least privilege, human-in-the-loop, minimize extensions
  • System Prompt Leakage – No secrets in prompts, external guardrails
  • Vector/Embedding – Access controls, data validation, monitoring
  • Misinformation – RAG, fine-tuning, human oversight, cross-verification
  • Unbounded Consumption – Rate limiting, input validation, resource monitoring (see the sketch below)
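
To make the Unbounded Consumption row concrete, here is a minimal standard-library sketch; the limits and the admit function are illustrative assumptions, not values from the rules. It caps prompt size and applies a per-user sliding-window rate limit before a request ever reaches the model.

import time
from collections import defaultdict, deque

MAX_INPUT_CHARS = 4_000     # illustrative cap on prompt size
MAX_REQUESTS_PER_MIN = 20   # illustrative per-user rate limit

_history: dict[str, deque] = defaultdict(deque)

def admit(user_id: str, prompt: str) -> bool:
    # Reject oversized prompts outright (cost and DoS control, LLM10).
    if len(prompt) > MAX_INPUT_CHARS:
        return False
    # Sliding one-minute window: drop timestamps older than 60 s.
    now = time.monotonic()
    window = _history[user_id]
    while window and now - window[0] > 60:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MIN:
        return False
    window.append(now)
    return True

print(admit("alice", "summarize this document"))  # True on first call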

Key Principles

  1. Never trust LLM output – Validate and sanitize all outputs before use (see the sketch after this list)
  2. Least privilege – Grant minimum necessary permissions to LLM systems
  3. Defense in depth – Layer multiple security controls
  4. Human oversight – Require approval for high-impact actions
  5. Monitor and log – Track all LLM interactions for anomaly detection
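
As one way to apply principle 1 to structured output, the sketch below parses a model's JSON strictly and rejects anything outside an expected shape before the application acts on it. The parse_llm_action helper and its action names are hypothetical:

import json

ALLOWED_ACTIONS = {"summarize", "translate"}

def parse_llm_action(raw: str) -> dict:
    # Parse strictly: malformed JSON is an error, never a best-effort guess.
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError("model did not return valid JSON") from exc
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object")
    # Allowlist the action and type-check every field the app will use.
    if data.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"unexpected action: {data.get('action')!r}")
    if not isinstance(data.get("text"), str):
        raise ValueError("missing or non-string 'text' field")
    return data

print(parse_llm_action('{"action": "summarize", "text": "..."}'))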

References

  • OWASP Top 10 for Large Language Model Applications 2025 – https://genai.owasp.org/llm-top-10/