convex-agent
Total installs: 2
Weekly installs: 2
Site-wide rank: #67116
Install command:
npx skills add https://github.com/polarcoding85/convex-agent-skillz --skill convex-agent
Installs by agent:
amp: 2
antigravity: 2
mcpjam: 1
claude-code: 1
windsurf: 1
zencoder: 1
Skill Documentation
Convex Agent Component
Build AI agents with persistent message history, tool calling, real-time streaming, and durable workflows.
Installation
npm install @convex-dev/agent
// convex/convex.config.ts
import { defineApp } from 'convex/server';
import agent from '@convex-dev/agent/convex.config';
const app = defineApp();
app.use(agent);
export default app;
Run npx convex dev to generate component code before defining agents.
Core Concepts
Agent Definition
// convex/agents.ts
import { Agent } from '@convex-dev/agent';
import { openai } from '@ai-sdk/openai';
import { stepCountIs } from 'ai';
import { components } from './_generated/api';
export const agent = new Agent(components.agent, {
name: 'Support Agent',
languageModel: openai.chat('gpt-4o-mini'),
textEmbeddingModel: openai.embedding('text-embedding-3-small'), // For vector search
instructions: 'You are a helpful support assistant.',
tools: { lookupAccount, createTicket }, // defined elsewhere
stopWhen: stepCountIs(10) // Or use maxSteps: 10
});
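The lookupAccount and createTicket tools referenced above aren't defined in this doc. A minimal sketch of one, assuming the component's createTool helper and a hypothetical internal.users.getByEmail query (both the tool body and that query are illustrative, not part of the component):

```typescript
// convex/tools.ts — hedged sketch: lookupAccount's body and the
// internal.users.getByEmail query it calls are hypothetical examples.
import { createTool } from '@convex-dev/agent';
import { z } from 'zod';
import { internal } from './_generated/api';

export const lookupAccount = createTool({
  description: 'Look up a customer account by email address',
  args: z.object({ email: z.string().describe('Customer email') }),
  handler: async (ctx, { email }) => {
    // The tool ctx wraps the action context, so tools can run queries/mutations.
    return await ctx.runQuery(internal.users.getByEmail, { email });
  },
});
```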
Basic Usage (Two Approaches)
Approach 1: Direct generation (simpler)
import { createThread } from '@convex-dev/agent';
import { v } from 'convex/values';
import { action } from './_generated/server';
import { components } from './_generated/api';
import { agent } from './agents';
export const chat = action({
args: { prompt: v.string() },
handler: async (ctx, { prompt }) => {
const threadId = await createThread(ctx, components.agent);
const result = await agent.generateText(ctx, { threadId }, { prompt });
return result.text;
}
});
Approach 2: Thread object (more features)
export const chat = action({
args: { prompt: v.string() },
handler: async (ctx, { prompt }) => {
const { threadId, thread } = await agent.createThread(ctx);
const result = await thread.generateText({ prompt });
return { threadId, text: result.text };
}
});
Continue Existing Thread
export const continueChat = action({
args: { threadId: v.string(), prompt: v.string() },
handler: async (ctx, { threadId, prompt }) => {
// Message history included automatically
const result = await agent.generateText(ctx, { threadId }, { prompt });
return result.text;
}
});
Asynchronous Pattern (Recommended)
Best practice: save the user message in a mutation (so the UI can update optimistically), then generate the response asynchronously in a scheduled action.
import { saveMessage } from '@convex-dev/agent';
// Step 1: Mutation saves message and schedules generation
export const sendMessage = mutation({
args: { threadId: v.string(), prompt: v.string() },
handler: async (ctx, { threadId, prompt }) => {
const { messageId } = await saveMessage(ctx, components.agent, {
threadId,
prompt
});
await ctx.scheduler.runAfter(0, internal.chat.generateResponse, {
threadId,
promptMessageId: messageId
});
return messageId;
}
});
// Step 2: Action generates response
export const generateResponse = internalAction({
args: { threadId: v.string(), promptMessageId: v.string() },
handler: async (ctx, { threadId, promptMessageId }) => {
await agent.generateText(ctx, { threadId }, { promptMessageId });
}
});
// Shorthand for Step 2 (use instead of the handler above, not alongside it):
export const generateResponse = agent.asTextAction();
Generation Methods
// Text generation
const result = await agent.generateText(ctx, { threadId }, { prompt });
// Structured output
const result = await agent.generateObject(
ctx,
{ threadId },
{
prompt: 'Extract user info',
schema: z.object({ name: z.string(), email: z.string() })
}
);
// Stream text (see STREAMING.md)
const result = await agent.streamText(ctx, { threadId }, { prompt });
// Multiple messages
const result = await agent.generateText(
ctx,
{ threadId },
{
messages: [
{ role: 'user', content: 'Context message' },
{ role: 'user', content: 'Actual question' }
]
}
);
Querying Messages
import { listUIMessages } from '@convex-dev/agent';
import { paginationOptsValidator } from 'convex/server';
export const listMessages = query({
args: { threadId: v.string(), paginationOpts: paginationOptsValidator },
handler: async (ctx, args) => {
return await listUIMessages(ctx, components.agent, args);
}
});
React Hook:
import { useUIMessages } from '@convex-dev/agent/react';
const { results, status, loadMore } = useUIMessages(
api.chat.listMessages,
{ threadId },
{ initialNumItems: 20 }
);
Agent Configuration Options
const agent = new Agent(components.agent, {
name: 'Agent Name',
languageModel: openai.chat('gpt-4o-mini'),
textEmbeddingModel: openai.embedding('text-embedding-3-small'),
instructions: 'System prompt...',
tools: {
/* tools */
},
stopWhen: stepCountIs(10), // Or maxSteps: 10
// Context options (see CONTEXT.md)
contextOptions: {
recentMessages: 100,
excludeToolMessages: true,
searchOptions: { limit: 10, textSearch: false, vectorSearch: false }
},
// Storage options
storageOptions: { saveMessages: 'promptAndOutput' }, // 'all' | 'none'
// Handlers
usageHandler: async (ctx, { usage, model, provider, agentName }) => {},
contextHandler: async (ctx, { allMessages }) => allMessages,
rawRequestResponseHandler: async (ctx, { request, response }) => {},
// Call settings
callSettings: { maxRetries: 3, temperature: 1.0 }
});
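To illustrate what a usageHandler might do with the usage it receives, here is a pure helper that prices token counts. The per-million-token rates below are placeholders, not authoritative provider pricing, and the Usage shape is a local simplification:

```typescript
// Illustrative only: rates are invented placeholders, and Usage is a
// simplified local type, not the shape the component passes.
type Usage = { inputTokens: number; outputTokens: number };

const RATES_PER_MILLION: Record<string, { input: number; output: number }> = {
  'gpt-4o-mini': { input: 0.15, output: 0.6 } // placeholder numbers
};

function estimateCostUSD(model: string, usage: Usage): number {
  const rate = RATES_PER_MILLION[model];
  if (!rate) return 0; // unknown model: record zero rather than guess
  return (usage.inputTokens * rate.input + usage.outputTokens * rate.output) / 1_000_000;
}
```

A real usageHandler would typically write such an estimate to a table for per-user billing or quota tracking.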
Key References
- Streaming – Delta streaming, HTTP streaming, text smoothing
- Tools – Defining and using tools with Convex context
- Context – Customizing LLM context and RAG
- Threads – Thread management and deletion
- Messages – Message storage, ordering, UIMessage type
- Workflows – Durable multi-step workflows
- Human Agents – Mixing human and AI responses
- Files – Images and files in messages
- RAG – Retrieval-augmented generation patterns
- Rate Limiting – Controlling request rates
- Usage Tracking – Token usage and billing
- Debugging – Troubleshooting and playground
Best Practices
- Define agents at module level – Reuse across functions
- Use userId on threads – Enables cross-thread search and per-user data
- Set appropriate stopWhen/maxSteps – Prevents runaway tool loops
- Use promptMessageId for async – Enables safe retries without duplicates
- Save messages in mutations – Use optimistic updates, schedule actions
- Use textEmbeddingModel for RAG – Required for vector search
- Handle streaming via deltas – Better UX than HTTP streaming alone