pica-openai-agents
npx skills add https://github.com/picahq/skills --skill pica-openai-agents
PICA MCP Integration with the OpenAI Agents SDK
PICA provides a unified API platform that connects AI agents to third-party services (CRMs, email, calendars, databases, etc.) through MCP tool calling.
PICA MCP Server
PICA exposes its capabilities through an MCP server distributed as `@picahq/mcp`. It uses stdio transport, running as a local subprocess via `npx`.
MCP Configuration
```json
{
  "mcpServers": {
    "pica": {
      "command": "npx",
      "args": ["@picahq/mcp"],
      "env": {
        "PICA_SECRET": "your-pica-secret-key"
      }
    }
  }
}
```
- Package: `@picahq/mcp` (run via `npx`, no install needed)
- Auth: `PICA_SECRET` environment variable (obtain from the PICA dashboard: https://app.picaos.com/settings/api-keys)
- Transport: stdio (standard input/output)
Environment Variable
Always store the PICA secret in an environment variable, never hardcode it:
```bash
PICA_SECRET=sk_test_...
OPENAI_API_KEY=sk-...
```
Add them to .env.local (or equivalent) and document in .env.example.
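Since a missing `PICA_SECRET` only surfaces later as an opaque MCP subprocess failure, it can help to validate env vars at startup. A minimal sketch (the `requireEnv` helper is our own naming, not part of `@picahq/mcp` or `@openai/agents`):

```typescript
// Sketch: fail fast on missing env vars before spawning the MCP server.
// `requireEnv` is a hypothetical helper, not an SDK function.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage at startup:
// const picaSecret = requireEnv("PICA_SECRET");
```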
Using PICA with the OpenAI Agents SDK
The OpenAI Agents SDK (`@openai/agents`) has first-class MCP support via `MCPServerStdio`. No additional MCP client package is needed; the SDK handles tool discovery, conversion, and execution automatically.
Required packages
```bash
pnpm add @openai/agents zod
```
- `@openai/agents`: main SDK (includes `MCPServerStdio`, `Agent`, `run`)
- `zod`: required by the SDK (v4+)
Before implementing: look up the latest docs
The OpenAI Agents SDK API may change between versions. Always check the latest docs first:
- Docs: https://openai.github.io/openai-agents-js/
- MCP guide: https://openai.github.io/openai-agents-js/guides/mcp/
- GitHub: https://github.com/openai/openai-agents-js
Integration pattern
- Create an MCP server using `MCPServerStdio` with `command: "npx"`, `args: ["@picahq/mcp"]`
- Connect the server via `await mcpServer.connect()`
- Create an Agent with `mcpServers: [mcpServer]`; tools are discovered automatically
- Run the agent with `run(agent, input, { stream: true })`; the SDK handles the full agent loop (tool calls, execution, multi-step)
- Stream events by iterating the result; handle `raw_model_stream_event` for text deltas and `run_item_stream_event` for tool calls
- Close the MCP server when done via `await mcpServer.close()`
When passing environment variables, spread process.env so the subprocess inherits PATH and other system vars:
```typescript
env: {
  ...(process.env as Record<string, string>),
  PICA_SECRET: process.env.PICA_SECRET!,
}
```
Minimal example
```typescript
import { Agent, run, MCPServerStdio } from "@openai/agents";

const mcpServer = new MCPServerStdio({
  name: "PICA MCP Server",
  command: "npx",
  args: ["@picahq/mcp"],
  env: {
    ...(process.env as Record<string, string>),
    PICA_SECRET: process.env.PICA_SECRET!,
  },
});

await mcpServer.connect();

try {
  const agent = new Agent({
    name: "PICA Assistant",
    model: "gpt-4o-mini",
    instructions: "You are a helpful assistant.",
    mcpServers: [mcpServer],
  });

  // Non-streaming
  const result = await run(agent, "List my connected integrations");
  console.log(result.finalOutput);

  // Streaming
  const streamResult = await run(agent, "List my connected integrations", {
    stream: true,
  });
  for await (const event of streamResult) {
    if (event.type === "raw_model_stream_event") {
      const data = event.data as Record<string, unknown>;
      if (data.type === "response.output_text.delta") {
        process.stdout.write(data.delta as string);
      }
    }
  }
  await streamResult.completed;
} finally {
  await mcpServer.close();
}
```
Streaming SSE events for a chat UI
When building a Next.js API route, stream responses as SSE events using a `ReadableStream`. Emit events in this format for compatibility with the PythonChat frontend component:
- `{ type: "text", content: "..." }`: streamed text chunks
- `{ type: "tool_start", name: "tool_name", input: "..." }`: tool execution starting
- `{ type: "tool_end", name: "tool_name", output: "..." }`: tool execution result
- `{ type: "error", content: "..." }`: error messages
- `data: [DONE]`: stream finished
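The SSE framing for these events can be sketched as follows (the `sseEvent` helper and the `ChatEvent` type are our own naming for illustration, not part of either SDK):

```typescript
// Sketch of SSE framing for the chat events listed above.
// `ChatEvent` and `sseEvent` are hypothetical names, not SDK exports.
type ChatEvent =
  | { type: "text"; content: string }
  | { type: "tool_start"; name: string; input: string }
  | { type: "tool_end"; name: string; output: string }
  | { type: "error"; content: string };

// Encode one event as an SSE `data:` frame (JSON payload, blank-line terminated).
function sseEvent(event: ChatEvent): string {
  return `data: ${JSON.stringify(event)}\n\n`;
}

// Terminal frame signalling the stream is finished.
const DONE_FRAME = "data: [DONE]\n\n";

// In a route handler you would enqueue encoded frames, e.g.:
// controller.enqueue(new TextEncoder().encode(sseEvent({ type: "text", content: delta })));
```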
Handling streaming events
The SDK emits three event types when streaming:
| Event Type | Purpose | Key Fields |
|---|---|---|
| `raw_model_stream_event` | Raw model token deltas | `data.type`, `data.delta` |
| `run_item_stream_event` | Tool calls, outputs, messages | `item.rawItem.type`, `item.rawItem.*` |
| `agent_updated_stream_event` | Agent switched (handoff) | `agent.name` |
For text streaming, match `data.type === "response.output_text.delta"` and read `data.delta`.
For tool events, check `item.rawItem.type`:
- `"function_call"`: the tool was invoked (has `call_id`, `name`, `arguments`)
- `"function_call_output"`: the tool returned (has `call_id` and `output`, but no `name`; track names via a `Map<call_id, name>`)
Important: `run_item_stream_event` may fire multiple times for the same tool call (created, in-progress, completed). Use a `Set<call_id>` to deduplicate `tool_start` events.
Fallback: after the stream loop completes, check `result.finalOutput`; if no text deltas were streamed (e.g., the model returned a single non-streamed response), send `finalOutput` as a `text` event.
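The name-tracking and deduplication bookkeeping described above can be sketched as a small helper (the `ToolCallTracker` class is our own naming, not an SDK export):

```typescript
// Sketch of the dedup/name-tracking bookkeeping for tool events.
// `ToolCallTracker` is a hypothetical helper, not part of @openai/agents.
class ToolCallTracker {
  private names = new Map<string, string>(); // call_id -> tool name
  private started = new Set<string>();       // call_ids already reported

  // On "function_call": record the name; return it the first time this
  // call_id is seen, or null for duplicate events (created/in-progress/completed).
  onFunctionCall(callId: string, name: string): string | null {
    this.names.set(callId, name);
    if (this.started.has(callId)) return null; // already emitted tool_start
    this.started.add(callId);
    return name;
  }

  // On "function_call_output": look up the name recorded for this call_id,
  // since output events carry no `name` field.
  nameFor(callId: string): string | undefined {
    return this.names.get(callId);
  }
}
```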
Multi-turn input format
Pass conversation history as an array of message objects:
```typescript
const input = messages.map((m: { role: string; content: string }) => ({
  role: m.role as "user" | "assistant",
  content: m.content,
}));

const result = await run(agent, input, { stream: true });
```
Checklist
When setting up PICA MCP with the OpenAI Agents SDK:
- `@openai/agents` is installed
- `zod` (v4+) is installed
- `OPENAI_API_KEY` is set in `.env.local`
- `PICA_SECRET` is set in `.env.local`
- `.env.example` documents both `OPENAI_API_KEY` and `PICA_SECRET`
- `MCPServerStdio` uses `command: "npx"`, `args: ["@picahq/mcp"]`
- Full `process.env` is spread into the MCP server's `env` option
- `mcpServer.connect()` is called before creating the agent
- Agent has `mcpServers: [mcpServer]`; tools are auto-discovered
- `run()` is called with `{ stream: true }` for streaming responses
- `result.completed` is awaited after iterating the stream
- Fallback to `result.finalOutput` if no text deltas were streamed
- Tool call names are tracked by `call_id` (output events lack `name`)
- Tool start events are deduplicated with a `Set<call_id>`
- `mcpServer.close()` is called in a `finally` block