tanstack-ai-vue-skilld

📁 harlan-zw/vue-ecosystem-skills 📅 10 days ago
Total installs: 16
Weekly installs: 16
Site-wide rank: #21079

Install command:
npx skills add https://github.com/harlan-zw/vue-ecosystem-skills --skill tanstack-ai-vue-skilld

Agent install distribution

github-copilot 14
opencode 13
gemini-cli 13
amp 13
codex 13
kimi-cli 13

Skill documentation

TanStack/ai @tanstack/ai-vue

Version: 0.5.4 (Feb 2026) · Deps: @tanstack/ai-client@0.4.5 · Tags: latest: 0.5.4 (Feb 2026)

References: Docs — API reference, guides • GitHub Issues — bugs, workarounds, edge cases • GitHub Discussions — Q&A, patterns, recipes • Releases — changelog, breaking changes, new APIs

API Changes

This section documents version-specific API changes — prioritize recent major/minor releases.

  • BREAKING: Adapter functions split — v0.1.0 split monolithic adapters into activity-specific functions (e.g., openaiText('gpt-4o'), openaiImage()) to enable optimal tree-shaking source

  • BREAKING: Options flattened — common parameters like temperature, maxTokens, and topP moved from nested options object to top-level configuration since v0.1.0 source

  • BREAKING: modelOptions — providerOptions renamed to modelOptions in v0.1.0 for clarity; contains model-specific configurations and is fully type-safe source

  • BREAKING: toServerSentEventsStream — toResponseStream renamed in v0.1.0; now returns a ReadableStream instead of a Response, requiring manual response creation source
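A sketch of the manual Response creation this rename requires. The import path and the SSE headers shown are assumptions, not confirmed defaults:

```ts
// Hypothetical route handler: toServerSentEventsStream now yields a
// ReadableStream, so the Response must be constructed by hand.
import { chat, toServerSentEventsStream } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'

export async function POST(req: Request) {
  const { messages } = await req.json()
  const stream = chat({ adapter: openaiText('gpt-4o'), messages })
  // Headers below are typical SSE values, assumed here for illustration.
  return new Response(toServerSentEventsStream(stream), {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
    },
  })
}
```

If you do not need the raw stream, `toServerSentEventsResponse` (covered under Best Practices below) handles this wrapping for you.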

  • BREAKING: Embeddings removed — the embedding() function and associated adapters were removed in v0.1.0 to focus on chat and agentic workflows source

  • NEW: status property — useChat added a status ref in v0.4.0 to track the generation lifecycle: ready, submitted, streaming, or error source
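A sketch of driving UI state from the lifecycle values listed above. The `connection` adapter and the `@tanstack/ai-vue` import path are assumed from the rest of this document:

```ts
// Vue <script setup> sketch -- `connection` is a configured connection
// adapter as shown elsewhere in this document.
import { computed } from 'vue'
import { useChat } from '@tanstack/ai-vue'

const { status, messages, sendMessage } = useChat({ connection })

// status is a ref cycling through: ready -> submitted -> streaming -> ready
// (or error), so a "busy" flag for disabling the input is one computed away.
const isBusy = computed(
  () => status.value === 'submitted' || status.value === 'streaming'
)
```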

  • NEW: Multimodal support — v0.5.0 introduced support for multiple modalities (images, audio, video, documents) via the MultimodalContent type in sendMessage source

  • NEW: agentLoopStrategy — replaced maxIterations with a strategy pattern in v0.1.0, using helpers like maxIterations(n), untilFinishReason(), or combineStrategies() source
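A sketch combining the strategy helpers named above; the helper names come from the release notes, but the import path is an assumption:

```ts
import { chat, maxIterations, untilFinishReason, combineStrategies } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'

const stream = chat({
  adapter: openaiText('gpt-4o'),
  messages, // conversation history from the request
  // Stop after 5 tool-call iterations OR once the model reports a
  // finish reason -- whichever strategy fires first.
  agentLoopStrategy: combineStrategies(maxIterations(5), untilFinishReason()),
})
```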

  • NEW: chatCompletion() — added in v0.1.0 for promise-based results without the automatic tool execution loop used by chat() source
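A hedged sketch contrasting the two entry points; the shape of the resolved value is an assumption for illustration:

```ts
import { chat, chatCompletion } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'

// chat(): streaming, runs the automatic tool-execution loop
const stream = chat({ adapter: openaiText('gpt-4o'), messages })

// chatCompletion(): promise-based, no automatic tool loop.
// Tool calls, if any, come back for the caller to handle.
const result = await chatCompletion({ adapter: openaiText('gpt-4o'), messages })
```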

  • NEW: Tool Handling — useChat exposed addToolResult and addToolApprovalResponse for manual management of tool outputs and user approvals

  • NEW: toHttpStream — introduced in v0.1.0 to support newline-delimited JSON (NDJSON) streaming as an alternative to Server-Sent Events source

  • NEW: fetchHttpStream — connection adapter added to @tanstack/ai-client for consuming NDJSON streams in useChat source
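Pairing the two NDJSON pieces, `toHttpStream` on the server with `fetchHttpStream` on the client. The route URL, option shapes, and Content-Type are assumptions:

```ts
// --- server (route handler) ---
import { chat, toHttpStream } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'

export async function POST(req: Request) {
  const { messages } = await req.json()
  const stream = chat({ adapter: openaiText('gpt-4o'), messages })
  // Newline-delimited JSON instead of SSE; Content-Type assumed here.
  return new Response(toHttpStream(stream), {
    headers: { 'Content-Type': 'application/x-ndjson' },
  })
}

// --- client ---
import { fetchHttpStream } from '@tanstack/ai-client'
import { useChat } from '@tanstack/ai-vue'

const { messages } = useChat({
  connection: fetchHttpStream({ url: '/api/chat' }), // option shape assumed
})
```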

  • NEW: geminiSpeech (experimental) — experimental text-to-speech support for Google Gemini models added in v0.5.0 source

  • NEW: Video generation (experimental) — experimental support for video generation via openaiVideo and fal adapters introduced in v0.1.0 source

Also changed: standard-schema support v0.2.0 · useId integration (Vue 3.5+) · initialMessages option · ToolCallManager class · fetchServerSentEvents adapter

Best Practices

  • Import specific activity and adapter functions instead of entire namespaces to ensure optimal tree-shaking and minimize bundle size source
```ts
// Preferred
import { chat } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'

// Avoid - pulls in all activities and adapters
import * as ai from '@tanstack/ai'
```
  • Use toServerSentEventsResponse on the server to automatically handle SSE headers, protocol framing, and the “[DONE]” termination chunk source
```ts
export async function POST(req: Request) {
  const stream = chat({ adapter: openaiText('gpt-5.2'), messages })
  return toServerSentEventsResponse(stream)
}
```
  • Prefer fetchServerSentEvents or fetchHttpStream connection adapters in useChat for built-in protocol parsing and state synchronization source
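The SSE side of that advice might be wired up as follows; the `{ url }` option shape is an assumption for illustration:

```ts
// Sketch: a useChat client using the SSE connection adapter, which
// handles protocol parsing and message-state sync internally.
import { fetchServerSentEvents } from '@tanstack/ai-client'
import { useChat } from '@tanstack/ai-vue'

const { messages, sendMessage, status } = useChat({
  connection: fetchServerSentEvents({ url: '/api/chat' }),
})
```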

  • Define tools using toolDefinition with Zod schemas to enable full end-to-end TypeScript inference and runtime validation source

```ts
const getWeather = toolDefinition({
  name: 'get_weather',
  inputSchema: z.object({ city: z.string() }),
  outputSchema: z.object({ temp: z.number() })
})
```
  • Use .client() implementations for browser-only operations and pass the base toolDefinition to the server chat() call to trigger automatic execution source

  • Group client tools with clientTools() and createChatClientOptions() to enable precise type narrowing for tool names and schemas in messages source

```ts
const tools = clientTools(uiTool.client(fn), storageTool.client(fn))
const options = createChatClientOptions({ connection, tools })
const { messages } = useChat(options) // message parts are now narrowed!
```
  • Pass the model name directly to the adapter factory to enable model-specific type safety and autocomplete for modelOptions source
```ts
// TypeScript enforces options supported only by gpt-5
const stream = chat({
  adapter: openaiText('gpt-5'),
  modelOptions: { text: { type: 'json_schema', ... } }
})
```
  • Subscribe to aiEventClient with { withEventTarget: true } in production to capture internal events for observability and timeline reconstruction source

  • Pass all related tools to a single chat() call to allow the model to autonomously manage multi-step reasoning cycles (Agentic Cycle) source
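Putting the agentic-cycle advice together on the server; the `tools` option name is inferred from the tool bullets above, and the tool definitions are hypothetical:

```ts
// Sketch: one chat() call owning the full multi-step tool cycle.
// `getWeather` and `searchDocs` are hypothetical toolDefinitions.
const stream = chat({
  adapter: openaiText('gpt-4o'),
  messages,
  tools: [getWeather, searchDocs], // model picks which to call, and when
  agentLoopStrategy: maxIterations(8), // bound the cycle (helper from above)
})
```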

  • Leverage Vue’s reactivity by passing a reactive object to the body property of useChat to update request parameters without recreating the client

```ts
import { ref, computed } from 'vue'

const model = ref('gpt-5.2')
const { sendMessage } = useChat({
  connection,
  // A computed body keeps request parameters reactive without
  // recreating the client when `model` changes.
  body: computed(() => ({ model: model.value }))
})
```