# Agent Guardrails
```bash
npx skills add https://github.com/jzocb/agent-guardrails --skill agent-guardrails
```
Mechanical enforcement for AI agent project standards. Rules in markdown are suggestions. Code hooks are laws.
## Quick Start

```bash
cd your-project/
bash /path/to/agent-guardrails/scripts/install.sh
```

This installs the git pre-commit hook, creates a registry template, and copies check scripts into your project.
## Enforcement Hierarchy

- Code hooks (git pre-commit, pre/post-creation checks) → 100% reliable
- Architectural constraints (registries, import enforcement) → 95% reliable
- Self-verification loops (agent checks own work) → 80% reliable
- Prompt rules (AGENTS.md, system prompts) → 60-70% reliable
- Markdown rules → 40-50% reliable, degrades with context length
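To make the top of the hierarchy concrete, here is a minimal sketch of a blocking pre-commit hook. It is illustrative only: the shipped `pre-commit-hook` asset may scan for different things, and the bypass patterns shown (`quick_` helpers, `_v2` copies) are hypothetical placeholders.

```bash
#!/usr/bin/env bash
# Sketch: block commits of staged Python files containing bypass patterns.
staged=$(git diff --cached --name-only --diff-filter=ACM | grep '\.py$' || true)
if [ -n "$staged" ]; then
  # "quick_" and "_v2" are placeholder patterns; substitute your own.
  if echo "$staged" | xargs grep -nE 'def quick_|_v2' 2>/dev/null; then
    echo "Bypass pattern detected: import the validated module instead." >&2
    exit 1
  fi
fi
```

Because the hook exits non-zero, git refuses the commit; no amount of prompt drift can override it.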
## Tools Provided

### Scripts

| Script | When to Run | What It Does |
|---|---|---|
| `install.sh` | Once per project | Installs hooks and scaffolding |
| `pre-create-check.sh` | Before creating new `.py` files | Lists existing modules/functions to prevent reimplementation |
| `post-create-validate.sh` | After creating/editing `.py` files | Detects duplicates, missing imports, bypass patterns |
| `check-secrets.sh` | Before commits / on demand | Scans for hardcoded tokens, keys, passwords |
| `create-deployment-check.sh` | When setting up deployment verification | Creates `.deployment-check.sh`, checklist, and git hook template |
| `install-skill-feedback-loop.sh` | When setting up skill update automation | Creates detection, auto-commit, and git hook for skill updates |
### Assets

| Asset | Purpose |
|---|---|
| `pre-commit-hook` | Ready-to-install git hook blocking bypass patterns and secrets |
| `registry-template.py` | Template `__init__.py` for project module registries |
### References

| File | Contents |
|---|---|
| `enforcement-research.md` | Research on why code > prompts for enforcement |
| `agents-md-template.md` | Template `AGENTS.md` with mechanical enforcement rules |
| `deployment-verification-guide.md` | Full guide on preventing deployment gaps |
| `skill-update-feedback.md` | Meta-enforcement: automatic skill update feedback loop |
| `SKILL_CN.md` | Chinese translation of this document |
## Usage Workflow

### Setting up a new project

```bash
bash scripts/install.sh /path/to/project
```

### Before creating any new `.py` file

```bash
bash scripts/pre-create-check.sh /path/to/project
```

Review the output. If existing functions cover your needs, import them.
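At its core, a pre-create check is just an inventory of what already exists. A rough approximation of the idea (the actual script may do considerably more; `src/` is an assumed layout):

```bash
# List every top-level function and class already defined under src/,
# so the agent can import instead of reimplementing.
grep -rnE --include='*.py' '^(def|class) ' src/ | sort
```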
### After creating/editing a `.py` file

```bash
bash scripts/post-create-validate.sh /path/to/new_file.py
```

Fix any warnings before proceeding.
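Duplicate detection is mechanically checkable. A hedged sketch of the kind of check involved (the real `post-create-validate.sh` may differ; `src/` is again an assumed layout):

```bash
# Print function signatures that appear more than once across the codebase;
# each hit is a candidate reimplementation.
grep -rh --include='*.py' -E '^def ' src/ | sort | uniq -d
```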
### Setting up deployment verification

```bash
bash scripts/create-deployment-check.sh /path/to/project
```

This creates:

- `.deployment-check.sh` – Automated verification script
- `DEPLOYMENT-CHECKLIST.md` – Full deployment workflow
- `.git-hooks/pre-commit-deployment` – Git hook template

Then customize:

- Add tests to `.deployment-check.sh` for your integration points
- Document your flow in `DEPLOYMENT-CHECKLIST.md`
- Install the git hook

See `references/deployment-verification-guide.md` for the full guide, and the sketch below for what a customized check can look like.
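As a rough illustration, a customized `.deployment-check.sh` might assert that the production entry point actually references the new code. Everything here (paths, the cron entry, the `--dry-run` flag) is hypothetical and must be adapted to your project:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Fail unless the installed crontab calls the current entrypoint.
if ! crontab -l | grep -q 'scripts/notify.py'; then
  echo "FAIL: cron does not reference scripts/notify.py" >&2
  exit 1
fi

# Smoke-test the entrypoint without side effects (flag is hypothetical).
python scripts/notify.py --dry-run
echo "Deployment check passed."
```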
### Adding to AGENTS.md

Copy the template from `references/agents-md-template.md` and adapt it to your project.
## 中文文档 / Chinese Documentation

See `references/SKILL_CN.md` for the full Chinese translation of this skill.
## Common Agent Failure Modes

### 1. Reimplementation (Bypass Pattern)

Symptom: Agent creates a "quick version" instead of importing validated code.

Enforcement: `pre-create-check.sh` + `post-create-validate.sh` + git hook
### 2. Hardcoded Secrets

Symptom: Tokens/keys in code instead of environment variables.

Enforcement: `check-secrets.sh` + git hook
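A secrets scan reduces to pattern matching before anything reaches the repository. An illustrative fragment (the shipped `check-secrets.sh` may use different, broader patterns):

```bash
# Flag AWS-style access key IDs and quoted literal credentials.
# These two patterns are illustrative, not exhaustive.
if grep -rnE --include='*.py' \
     -e 'AKIA[0-9A-Z]{16}' \
     -e '(password|secret|token|api_key)[[:space:]]*=[[:space:]]*"[^"]+"' .; then
  echo "Potential hardcoded secret: move it to an environment variable." >&2
  exit 1
fi
```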
### 3. Deployment Gap

Symptom: A feature is built but never wired into production, so users never receive the benefit.

Example: `notify.py` was updated, but cron still calls the old version.

Enforcement: `.deployment-check.sh` + git hook

This is the hardest failure to catch because:

- The code runs fine when tested manually
- The agent marks the task "done" after writing code
- The problem only surfaces when a user complains

Solution: Mechanical end-to-end verification before allowing "done."
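Wiring the check into the hook is what makes the verification unavoidable. A sketch of the glue, using the file names that `create-deployment-check.sh` generates:

```bash
#!/usr/bin/env bash
# Pre-commit glue: refuse the commit until the end-to-end check passes.
if [ -x ./.deployment-check.sh ]; then
  ./.deployment-check.sh || {
    echo "Deployment check failed: the feature is not wired in yet." >&2
    exit 1
  }
fi
```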
### 4. Skill Update Gap (META – NEW)

Symptom: An enforcement improvement is built in one project, but the skill itself is never updated.

Example: Deployment verification was created for Project A, but other projects don't benefit because the skill wasn't updated.

Enforcement: `install-skill-feedback-loop.sh` → automatic detection + semi-automatic commit

This is a meta-failure mode because:

- It concerns the enforcement improvements themselves
- Without the fix, improvements stay siloed in one project
- With the fix, knowledge compounds automatically

Solution: Automatic detection of enforcement improvements, with task creation and semi-automatic commits.
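The detection half can be a small check over git state. A hedged sketch of the idea (the hook generated by `install-skill-feedback-loop.sh` will differ; the watched paths are assumptions):

```bash
# If enforcement scripts or hook templates changed, leave a reminder that
# the improvement should flow back into the agent-guardrails skill.
if ! git diff --quiet HEAD -- scripts/ .git-hooks/ 2>/dev/null; then
  echo "Enforcement tooling changed: update the agent-guardrails skill too." >&2
fi
```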
## Key Principle

Don't add more markdown rules. Add mechanical enforcement. If an agent keeps bypassing a standard, don't write a stronger rule; write a hook that blocks it.

Corollary: If an agent keeps forgetting integration, don't remind it; make it mechanically verify before commit.