create-agent-with-sanity-context
npx skills add https://github.com/sanity-io/agent-context --skill create-agent-with-sanity-context
Build an Agent with Sanity Context
Give AI agents intelligent access to your Sanity content. Unlike embedding-only approaches, Context MCP is schema-aware: agents can reason over your content structure, query with real field values, follow references, and combine structural filters with semantic search.
What this enables:
- Agents understand the relationships between your content types
- Queries use actual schema fields, not just text similarity
- Results respect your content model (categories, tags, references)
- Semantic search is available when needed, layered on structure
Note: Context MCP understands your schema structure but not your domain. You’ll provide domain context (what your content is for, how to use it) through the agent’s system prompt.
What You’ll Need
Before starting, gather these credentials:
| Credential | Where to get it |
|---|---|
| Sanity Project ID | Your sanity.config.ts or sanity.io/manage |
| Dataset name | Usually `production`; check your sanity.config.ts |
| Sanity API read token | Create at sanity.io/manage → Project → API → Tokens. See HTTP Auth docs |
| LLM API key | From your LLM provider (Anthropic, OpenAI, etc.); any provider works |
How Context MCP Works
An MCP server that gives AI agents structured access to Sanity content. The core integration pattern:
- MCP Connection: HTTP transport to the Context MCP URL
- Authentication: Bearer token using Sanity API read token
- Tool Discovery: Get available tools from MCP client, pass to LLM
- System Prompt: Domain-specific instructions that shape agent behavior
MCP URL formats:
- `https://api.sanity.io/:apiVersion/agent-context/:projectId/:dataset` gives access to all content in the dataset
- `https://api.sanity.io/:apiVersion/agent-context/:projectId/:dataset/:slug` gives access to filtered content (requires an agent context document with that slug)
The slug-based URL uses the GROQ filter defined in your agent context document to scope what content the agent can access. Use this for production agents that should only see specific content types.
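As a sketch, the two URL formats can be assembled with a small helper. The function name, project ID, and API version below are illustrative, not part of any Sanity SDK:

```typescript
// Build a Context MCP URL from project settings.
// With a slug, the URL is scoped by that agent context document's GROQ filter.
function contextMcpUrl(
  apiVersion: string,
  projectId: string,
  dataset: string,
  slug?: string,
): string {
  const base = `https://api.sanity.io/${apiVersion}/agent-context/${projectId}/${dataset}`;
  return slug ? `${base}/${slug}` : base;
}

// Unscoped (all content) for development vs. scoped for a production agent:
const devUrl = contextMcpUrl("v2025-01-01", "abc123", "production");
const scopedUrl = contextMcpUrl("v2025-01-01", "abc123", "production", "shop-assistant");
```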
The integration is simple: Connect to the MCP URL, get tools, use them. The reference implementation shows one way to do this; adapt it to your stack and LLM provider.
Available MCP Tools
| Tool | Purpose |
|---|---|
| `initial_context` | Get a compressed schema overview (types, fields, document counts) |
| `groq_query` | Execute GROQ queries with optional semantic search |
| `schema_explorer` | Get detailed schema for a specific document type |
For development and debugging: The general Sanity MCP provides broader access to your Sanity project (schema deployment, document management, etc.). Useful during development but not intended for customer-facing applications.
Before You Start: Understand the User’s Situation
A complete integration has three distinct components that may live in different places:
| Component | What it is | Examples |
|---|---|---|
| 1. Studio Setup | Configure the context plugin and create agent context documents | Sanity Studio (separate repo or embedded) |
| 2. Agent Implementation | Code that connects to Context MCP and handles LLM interactions | Next.js API route, Express server, Python service, or any MCP-compatible client |
| 3. Frontend (Optional) | UI for users to interact with the agent | Chat widget, search interface, CLI, or none for backend services |
Studio setup and agent implementation are required. Frontend is optional: many agents run as backend services or integrate into existing UIs.
Ask the user which part they need help with:
- Components in different repos (most common): You may only have access to one component. Complete what you can, then tell the user what steps remain for the other repos.
- Co-located components: All three in the same project; work through them one at a time (Studio → Agent → Frontend).
- Already on step 2 or 3: If you can’t find a Studio in the codebase, ask the user if Studio setup is complete.
Also understand:
- Their stack: What framework/runtime? (Next.js, Remix, Node server, Python, etc.)
- Their AI library: Vercel AI SDK, LangChain, direct API calls, etc.
- Their domain: What will the agent help with? (Shopping, docs, support, search, etc.)
The reference patterns use Next.js + Vercel AI SDK, but adapt to whatever the user is working with.
Workflow
Quick Validation (Optional)
Before building an agent, you can validate MCP access directly using the base URL (no slug required):
```sh
curl -X POST https://api.sanity.io/YOUR_API_VERSION/agent-context/YOUR_PROJECT_ID/YOUR_DATASET \
  -H "Authorization: Bearer $SANITY_API_READ_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "method": "tools/list", "id": 1}'
```
This confirms your token works and the MCP endpoint is reachable. The base URL gives access to all content, which is useful for testing before setting up content filters via agent context documents.
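The same validation can be done from Node 18+ (built-in fetch). This is a sketch mirroring the curl command; the helper name is illustrative, and the URL/token come from your own project:

```typescript
// Build the JSON-RPC tools/list request used to validate MCP access.
// Returning { url, init } keeps the construction separate from the network call.
function toolsListRequest(url: string, token: string) {
  return {
    url,
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ jsonrpc: "2.0", method: "tools/list", id: 1 }),
    },
  };
}

// Usage (requires a real URL and SANITY_API_READ_TOKEN):
// const { url, init } = toolsListRequest(mcpUrl, process.env.SANITY_API_READ_TOKEN!);
// const res = await fetch(url, init);
// console.log(await res.json()); // expect initial_context, groq_query, schema_explorer
```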
Step 1: Set up Sanity Studio
Configure the context plugin and create agent context documents to scope what content the agent can access.
See references/studio-setup.md
Step 2: Build the Agent (Adapt to user’s stack)
Already have an agent or MCP client? You just need to connect it to your Context MCP URL with a Bearer token. The tools will appear automatically.
Building from scratch? The reference implementation uses Next.js + Vercel AI SDK with Anthropic, but the pattern works with any LLM provider (OpenAI, local models, etc.). It's comprehensive, covering everything from basic chat to advanced patterns. Start with the basics and add advanced patterns as needed.
See references/nextjs-agent.md
The reference covers:
- Core setup (required): MCP connection, authentication, basic chat route
- System prompts (required): Domain-specific instructions for your agent
- Frontend (optional): React chat component
- Advanced patterns (optional): Client-side tools, auto-continuation, custom rendering
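Since Context MCP knows your schema but not your domain, the system prompt is where that domain knowledge goes. A minimal sketch of a prompt template (the wording, function name, and example domain are all illustrative):

```typescript
// Compose a domain-specific system prompt for a Context MCP agent.
// The tool names match the Available MCP Tools table; the rest is an assumption.
function buildSystemPrompt(domain: string, rules: string[]): string {
  return [
    `You are an assistant for ${domain}, backed by a Sanity dataset via Context MCP.`,
    "Call initial_context first to learn the schema, then use groq_query for lookups.",
    "Always include _id in projections so you can reference documents.",
    "Rules:",
    ...rules.map((r) => `- ${r}`),
  ].join("\n");
}

const prompt = buildSystemPrompt("an outdoor-gear shop", [
  "Never invent products; only cite documents returned by groq_query.",
  "Answer in short paragraphs, without markdown tables.",
]);
```

Being explicit about forbidden behaviors in the rules list matters more than prompt length; see the best practices below.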
GROQ with Semantic Search
Context MCP supports `text::embedding()` for semantic ranking:

```groq
*[_type == "article" && category == "guides"]
| score(text::embedding("getting started tutorial"))
| order(_score desc)
{ _id, title, summary }[0...10]
```
Always use order(_score desc) when using score() to get best matches first.
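When an agent or backend builds these queries programmatically, a small template helper keeps the score/order pairing consistent. A sketch (the helper is hypothetical; escaping of the search string is omitted for brevity):

```typescript
// Compose a GROQ semantic-search query like the example above:
// filter by type, rank by embedding similarity, order best-first, slice.
function semanticQuery(
  type: string,
  search: string,
  projection: string,
  limit = 10,
): string {
  return (
    `*[_type == "${type}"]` +
    ` | score(text::embedding("${search}"))` +
    ` | order(_score desc)` +
    ` { ${projection} }[0...${limit}]`
  );
}

const q = semanticQuery("article", "getting started tutorial", "_id, title, summary");
// Pass q as the query argument to the groq_query tool.
```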
Adapting to Different Stacks
The MCP connection pattern is framework- and LLM-agnostic. Whether you use Next.js, Remix, Express, or Python FastAPI, the HTTP transport works the same. Any LLM provider that supports tool calling will work.
See references/nextjs-agent.md for:
- Framework-specific route patterns (Express, Remix, Python)
- AI library integrations (LangChain, direct API calls)
- System prompt examples for different domains (e-commerce, docs, support)
Best Practices
- Start simple: Build the basic integration first, then add advanced patterns as needed
- Schema design: Use descriptive field names; agents rely on schema understanding
- GROQ queries: Always include `_id` in projections so agents can reference documents
- Content filters: Start broad, then narrow based on what the agent actually needs
- System prompts: Be explicit about forbidden behaviors and formatting rules
- Package versions: NEVER guess package versions. Always check the reference `package.json` files or use `npm info <package> version`. AI SDK and Sanity packages update frequently; outdated versions will cause errors.
Troubleshooting
Context MCP returns errors or no schema
Context MCP requires your schema to be available server-side. This happens automatically when your Studio runs, but if it’s not working:
- Check Studio version: Ensure you’re on Sanity Studio v5.1.0 or later
- Open your Studio: Simply opening the Studio in a browser triggers schema deployment
- Verify deployment: After opening Studio, retry the MCP connection
Escape hatch: Deploy schema via Sanity MCP
If you’re on a cloud-only platform (Lovable, v0, Replit) without a local Studio, or if local Studio schema deployment isn’t working, you can deploy schemas using the Sanity MCP server’s deploy_schema tool.
To install the Sanity MCP (if you don’t have it already):
npx sanity@latest mcp configure
This configures the MCP for your AI editor (Claude Code, Cursor, VS Code, etc.). Once connected, ask your AI assistant to use the deploy_schema tool to deploy your content types.
Recommended approach: If you have a local Sanity Studio, deploying via the Studio is preferred:
- Local schema files (in `schemaTypes/`) are the source of truth
- Using `deploy_schema` directly can create drift between your code and the deployed schema
- Edit your local schema files and run `npx sanity schema deploy` instead

Use this escape hatch when local deployment isn't an option or isn't working.
Other common issues
See references/nextjs-agent.md for:
- Token authentication errors
- Empty results / no documents found
- Tools not appearing