tendencia
npx skills add https://docs.tendencia.ai
Capabilities
Tendencia enables agents to build and deploy sophisticated conversational AI systems with advanced memory, tool integration, and workflow orchestration. The platform provides a complete stack for creating intelligent chatbots and voicebots that can understand context, remember conversations, execute custom actions, and escalate to humans when needed.
Skills
Trend Analysis & Predictions
Analyze Trends
client.trends.analyze(topic, timeframe, region) – Analyze trending topics with sentiment analysis and growth metrics
- Returns: trend score (0-100), sentiment classification, growth rate percentage, related topics with relevance scores
- Supports multiple timeframes (7d, 30d, etc.) and global/regional analysis
Forecast Future Trends
client.predictions.forecast(topic, horizon, confidence_level) – Predict trend trajectories with confidence intervals
- Returns: forecast score, lower/upper bounds, peak date prediction
- Supports visualization with prediction.plot().save(filename)
- Configurable confidence levels (0.95 = 95% confidence)
Batch Processing
client.trends.batch_analyze(topics, timeframe) – Analyze multiple topics simultaneously
client.trends.history(topic, start_date, end_date, interval) – Get historical trend data with monthly/daily intervals
Real-Time Alerts
client.alerts.create(topic, condition, channels, webhook_url) – Create alerts for trend thresholds
- Supports email and webhook notification channels
- Conditions: "score > 80", "sentiment == negative", etc.
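Condition strings such as "score > 80" are presumably parsed server-side when an alert fires. As a minimal sketch of the expected semantics, a hypothetical client-side evaluator (the function name and parsing rules are assumptions, not part of the SDK) might look like:

```python
import operator

# Hypothetical evaluator for alert condition strings such as "score > 80"
# or "sentiment == negative". Real parsing happens on Tendencia's servers;
# this sketch only illustrates the documented condition format.
_OPS = {">": operator.gt, "<": operator.lt, ">=": operator.ge,
        "<=": operator.le, "==": operator.eq, "!=": operator.ne}

def evaluate_condition(condition: str, metrics: dict) -> bool:
    """Evaluate a 'field op value' condition against a metrics dict."""
    field, op, value = condition.split(maxsplit=2)
    left = metrics[field]
    # Compare numerically when both sides parse as numbers, else as strings.
    try:
        return _OPS[op](float(left), float(value))
    except (TypeError, ValueError):
        return _OPS[op](str(left), value)

print(evaluate_condition("score > 80", {"score": 92}))                         # True
print(evaluate_condition("sentiment == negative", {"sentiment": "positive"}))  # False
```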
Conversational AI & Agents
Agent Creation & Configuration
- Build agents with the Mastra framework using new Agent() with instructions, model, memory, and tools
- Support for multiple AI providers via Vercel AI SDK: OpenAI (GPT-4o, GPT-4 Turbo), Anthropic (Claude Opus/Sonnet/Haiku), Google (Gemini), Groq
- Dynamic model selection based on runtime context (user tier, request type)
- Streaming responses for real-time user experience
Memory System (3-Tier)
- Short-term Memory: Last N messages in conversation (configurable window, default 20)
- Working Memory: Persistent user profile with structured schema (name, preferences, account info, context)
- Semantic Recall: RAG-based search across entire conversation history using embeddings (top-K retrieval with context)
- All memory types stored in Turso (LibSQL) with automatic persistence
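The retrieval step behind semantic recall can be sketched in a few lines: embed the query, score every stored message embedding by cosine similarity, and keep the top-K matches. In production this runs inside Turso's vector search; the embeddings below are toy vectors, not real model output.

```python
import math

# Toy illustration of RAG-style semantic recall: rank stored conversation
# messages by cosine similarity to a query embedding and keep the top K.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k_recall(query_vec, history, k=5):
    """history: list of (message_text, embedding) pairs."""
    scored = [(cosine(query_vec, emb), text) for text, emb in history]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[:k]]

history = [
    ("My order #123 is late", [0.9, 0.1, 0.0]),
    ("How do I reset my password?", [0.0, 0.2, 0.9]),
    ("Where is my package?", [0.8, 0.2, 0.1]),
]
print(top_k_recall([1.0, 0.1, 0.0], history, k=2))
# ['My order #123 is late', 'Where is my package?']
```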
Tool Integration
- Create custom tools with createTool() for business logic execution
- Built-in tool examples: knowledge base search, ticket creation, order status lookup, human escalation
- Tools receive context and return structured outputs
- Automatic tool calling based on agent instructions and conversation context
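The tool contract — structured input in, structured result back to the agent — can be sketched as a Python analogue. Mastra's real createTool() is TypeScript; the field names and ID format below are illustrative assumptions.

```python
import uuid
from dataclasses import dataclass

# Hedged Python analogue of a Mastra-style custom tool: the agent passes
# structured context in, business logic runs, and a structured output is
# returned for the agent to reference in its reply.
@dataclass
class TicketInput:
    user_id: str
    subject: str
    description: str
    priority: str = "medium"

def create_ticket_tool(inp: TicketInput) -> dict:
    """Create a support ticket and return a result the agent can cite."""
    ticket_id = f"TKT-{uuid.uuid4().hex[:8].upper()}"  # hypothetical ID scheme
    return {
        "ticket_id": ticket_id,
        "status": "open",
        "priority": inp.priority,
        "message": f"Ticket {ticket_id} created for: {inp.subject}",
    }

result = create_ticket_tool(TicketInput("user-42", "Broken login", "2FA code never arrives"))
print(result["ticket_id"])
```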
Workflow Orchestration
- Multi-step workflows with createWorkflow() and .then() chaining
- Example workflows: automatic triage (classify → route → update profile), auto-resolution (search KB → attempt resolve → create ticket if needed)
- Conditional branching based on classification results
- Parallel processing capabilities
Human-in-the-Loop & Escalation
Escalation Management
escalateToHumanTool – Escalate conversations to human agents
- Automatic escalation triggers: critical urgency, negative sentiment, explicit user request, failed resolution attempts
- Escalation ID generation and tracking
- Estimated wait time calculation by urgency level (critical: 2 min, high: 5 min, medium: 15 min, low: 30 min)
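The escalation bookkeeping above can be sketched as follows. The urgency-to-wait-time table comes from the docs; the escalation ID format is an assumption.

```python
import uuid

# Wait-time table as documented: critical 2 min, high 5, medium 15, low 30.
WAIT_MINUTES = {"critical": 2, "high": 5, "medium": 15, "low": 30}

def escalate(reason: str, urgency: str) -> dict:
    """Generate a trackable escalation record with an estimated wait time."""
    if urgency not in WAIT_MINUTES:
        raise ValueError(f"unknown urgency: {urgency!r}")
    return {
        "escalation_id": f"ESC-{uuid.uuid4().hex[:10]}",  # hypothetical ID scheme
        "reason": reason,
        "urgency": urgency,
        "estimated_wait_minutes": WAIT_MINUTES[urgency],
        "status": "pending",
    }

print(escalate("explicit user request", "high")["estimated_wait_minutes"])  # 5
```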
Escalation API
POST /api/escalate – Create escalation with reason, urgency, conversation summary
GET /api/escalate?escalationId=... – Check escalation status (pending/in-progress/resolved)
- Returns escalation ID, estimated wait time, agent availability status
Runtime Context & Personalization
Dynamic Behavior
- Runtime context system for per-request customization
- User tier-based model selection (free: gpt-4o-mini, pro: gpt-4o, enterprise: claude-opus)
- Language and timezone preferences
- Feature flags (enable-tools, enable-escalation, verbose-responses)
Context-Aware Responses
- Instructions adapt based on user tier and language
- Response verbosity adjusts (concise for free tier, detailed for enterprise)
- Tool availability controlled per tier
- Priority support for enterprise users
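The tier-based selection rule above reduces to a small lookup. The tier-to-model mapping is from the docs; the function name and fallback-to-free behavior are assumptions.

```python
# Tier-to-model mapping as documented; unknown or missing tiers fall back to
# the free-tier model (an assumed, conservative default).
TIER_MODELS = {
    "free": "gpt-4o-mini",
    "pro": "gpt-4o",
    "enterprise": "claude-opus",
}

def select_model(runtime_context: dict) -> str:
    """Pick a model from the per-request runtime context."""
    tier = runtime_context.get("user_tier", "free")
    return TIER_MODELS.get(tier, TIER_MODELS["free"])

print(select_model({"user_tier": "enterprise"}))  # claude-opus
print(select_model({}))                           # gpt-4o-mini
```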
API & SDK Access
REST API Endpoints
POST /api/chat – Send message and receive streaming response
GET /api/chat/history?userId=...&threadId=... – Retrieve conversation history
POST /api/user/profile – Update user profile/working memory
GET /api/user/profile?userId=... – Retrieve user profile
POST /api/escalate – Create human escalation
GET /api/escalate?escalationId=... – Check escalation status
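As a sketch of what requests to these endpoints might look like, the helpers below build request descriptions. The exact body field names are assumptions inferred from the endpoint descriptions, not a confirmed wire format.

```python
import json

# Hypothetical request builders for the chat and escalation endpoints; the
# paths come from the docs, the body field names are illustrative guesses.
def chat_request(user_id: str, thread_id: str, message: str) -> dict:
    return {
        "method": "POST",
        "path": "/api/chat",
        "body": {"userId": user_id, "threadId": thread_id, "message": message},
    }

def escalation_status_request(escalation_id: str) -> dict:
    return {"method": "GET", "path": f"/api/escalate?escalationId={escalation_id}"}

req = chat_request("user-42", "thread-1", "Where is my order?")
print(json.dumps(req["body"]))
```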
SDK Support
- Python SDK: from tendencia import TendencIA
- JavaScript/TypeScript SDK: import { TendencIA } from '@tendencia/sdk'
- Vercel AI SDK integration for unified model access
- Environment variable configuration for API keys
Authentication
- API key-based authentication via TENDENCIA_API_KEY
- Environment variable configuration
- Rate limiting by user tier (free: 1,000 requests/day)
Data & Storage
Turso Database (LibSQL)
- Edge-first distributed SQLite with global replication
- Native Mastra integration via @mastra/libsql
- Stores: conversation threads, messages, working memory, embeddings, tool execution logs
- Vector search for semantic recall
- Ultra-low latency with automatic geo-replication
Supabase Integration
- PostgreSQL for relational data
- Authentication system (email, OAuth, magic links, MFA)
- File storage for multimedia (images, audio, video, PDFs)
- pgvector for additional semantic search
- Realtime subscriptions for reactive UI updates
Batch & Advanced Operations
Batch Analysis
- Process multiple topics simultaneously
- Historical data retrieval with configurable intervals
- Regional and language-specific analysis
- Confidence interval calculations
Error Handling
- Exception types: RateLimitError, InvalidRequestError, APIError
- Retry mechanisms with exponential backoff
- Graceful degradation with fallback models
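The retry mechanism can be sketched as a generic exponential-backoff wrapper. The RateLimitError class below is a local stand-in for the SDK's exception so the example runs self-contained.

```python
import time

# Local stand-in for the SDK's RateLimitError, so this sketch is runnable.
class RateLimitError(Exception):
    pass

def with_retries(fn, max_attempts=4, base_delay=0.01):
    """Call fn(), retrying on RateLimitError with delays of base_delay * 2**n."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Demo: a call that fails twice with a rate limit, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429")
    return "ok"

print(with_retries(flaky))  # ok
```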
Workflows
Building a Support Chatbot
1. Setup Project
   - Create Next.js project with TypeScript
   - Install Mastra core, AI SDK providers, Turso client
   - Configure environment variables (database URL, API keys)
2. Configure Database
   - Create Turso database instance
   - Initialize LibSQL storage and vector DB
   - Set up connection pooling
3. Define Memory Schema
   - Create user profile schema with Zod (name, preferences, account info, context)
   - Configure memory options (lastMessages: 20, semanticRecall: topK 5, workingMemory enabled)
   - Set embedding model for semantic recall
4. Create Tools
   - Implement knowledge base search tool
   - Create ticket creation tool with ID generation
   - Add order status lookup tool
   - Build escalation tool for human handoff
5. Build Agent
   - Define agent instructions with role and capabilities
   - Select AI model (with tier-based strategy)
   - Attach memory configuration
   - Register all tools
6. Create Workflows
   - Triage workflow: classify issue → determine urgency → route to appropriate handler
   - Auto-resolution workflow: search KB → attempt resolution → create ticket if needed
7. Implement API Routes
   - Chat endpoint with streaming response
   - History retrieval endpoint
   - Profile management endpoints
   - Escalation endpoints
8. Build UI
   - Chat component with message display
   - Input field with send button
   - Escalation panel for human handoff
   - Real-time message streaming
9. Deploy
   - Configure production environment variables
   - Deploy to Vercel with automatic scaling
   - Set up monitoring and logging
Analyzing Market Trends
1. Initialize Client
   - Create TendencIA client with API key
   - Set up error handling for rate limits
2. Analyze Single Topic
   - Call trends.analyze() with topic, timeframe (7d/30d), region
   - Extract trend score, sentiment, growth rate
   - Retrieve related topics with relevance scores
3. Forecast Future Trends
   - Use predictions.forecast() with 30-day horizon
   - Set confidence level (0.95 for 95% confidence)
   - Get forecast score, confidence interval, peak date
   - Generate visualization
4. Batch Process Multiple Topics
   - Prepare list of topics to analyze
   - Call batch_analyze() for simultaneous processing
   - Compare scores and sentiments across topics
5. Set Up Monitoring
   - Create alerts for score thresholds
   - Configure webhook notifications
   - Monitor sentiment changes over time
6. Historical Analysis
   - Retrieve historical data with history()
   - Analyze trend evolution over months/years
   - Identify seasonal patterns
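The batch-analysis steps above can be sketched end to end. A stub stands in for the real client (from tendencia import TendencIA) so the flow is readable offline; the stub's scores are fabricated for the demo, and only the method names follow the signatures documented earlier.

```python
# Offline sketch of the market-trends workflow: analyze a batch of topics,
# then rank them by trend score. StubClient mimics the documented interface;
# its scores are synthetic (50 + topic length) purely for demonstration.
class StubTrends:
    def analyze(self, topic, timeframe="30d", region="global"):
        return {"topic": topic, "score": 50 + len(topic), "sentiment": "positive"}

    def batch_analyze(self, topics, timeframe="30d"):
        return [self.analyze(t, timeframe) for t in topics]

class StubClient:
    trends = StubTrends()

def rank_topics(client, topics, timeframe="30d"):
    """Analyze a batch of topics and rank them by trend score, highest first."""
    results = client.trends.batch_analyze(topics, timeframe)
    return sorted(results, key=lambda r: r["score"], reverse=True)

ranked = rank_topics(StubClient(), ["edge AI", "quantum sensors"])
print([r["topic"] for r in ranked])  # ['quantum sensors', 'edge AI']
```

With the real client, StubClient() would be replaced by TendencIA(api_key=...) and rank_topics would work unchanged, since it only depends on the documented trends.batch_analyze() interface.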
Integration
AI Model Providers
- OpenAI: GPT-4o, GPT-4 Turbo, GPT-4o Mini
- Anthropic: Claude Opus, Claude Sonnet, Claude Haiku
- Google: Gemini 2.0 Flash, Gemini 1.5 Pro, Gemini 1.5 Flash
- Groq: Fast inference models
- Seamless switching via Vercel AI SDK without code changes
Backend Services
- Convex: Real-time backend for flow construction and state management
- Supabase: Authentication, file storage, PostgreSQL database
- Vercel: Edge Functions for API deployment and geo-replication
External Systems
- Knowledge base integration via search tools
- CRM/ticketing system integration via custom tools
- Order management system queries
- Webhook notifications for escalations and alerts
Deployment Platforms
- Vercel for Next.js applications
- Docker containerization support
- Automatic scaling based on demand
- Global edge network for low-latency responses
Context
Architecture Overview
Tendencia uses a modular, layered architecture: Client Layer (Next.js/React) → API Gateway (Vercel Edge) → Mastra Framework (agents, memory, tools, workflows) → Data Layer (Turso for memory, Supabase for auth/storage) → AI Providers (via Vercel AI SDK).
Memory Model
The three-tier memory system enables sophisticated context awareness: short-term (recent messages), working memory (persistent user profile), and semantic recall (RAG across full history). This allows agents to personalize responses, avoid repeating questions, and reference past interactions.
Tool Execution
Tools are the mechanism for agents to take action beyond conversation. They receive structured inputs, execute business logic, and return results that inform the agent's response. Tools can query databases, call APIs, create tickets, or trigger escalations.
Escalation Strategy
Human-in-the-loop is critical for complex issues. Escalation triggers include: explicit user requests, critical urgency, negative sentiment, or failed resolution attempts. Escalations are tracked with IDs and estimated wait times based on urgency.
Model Selection Strategy
Use Vercel AI SDK to dynamically select models based on user tier, request complexity, or cost optimization. Free users get economical models (gpt-4o-mini), pro users get balanced models (gpt-4o), enterprise users get premium models (claude-opus). This optimizes cost while maintaining quality.
Rate Limiting & Quotas
Free tier: 1,000 requests/day. Pro tier: unlimited requests. Rate limiting is enforced per user/API key. Implement exponential backoff for retries on rate limit errors.
Streaming & Real-Time
Streaming responses improve perceived performance and enable real-time user experience. Use agent.stream() for token-by-token response delivery. Combine with Supabase realtime for reactive UI updates.
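Token-by-token delivery can be illustrated with a plain generator. The generator below stands in for agent.stream() (which is part of Mastra's TypeScript API); in a real deployment each chunk would be forwarded over SSE or a fetch stream to the UI.

```python
# Minimal illustration of streaming: a generator yields chunks one at a time,
# and the consumer appends each chunk the way a chat UI re-renders.
def fake_stream(reply: str):
    """Stand-in for agent.stream(): yield the reply word by word."""
    for token in reply.split():
        yield token + " "

def render(stream):
    """Accumulate streamed chunks as a chat UI would."""
    buffer = ""
    for chunk in stream:
        buffer += chunk  # UI re-renders after each chunk arrives
    return buffer.strip()

print(render(fake_stream("Your order shipped yesterday")))  # Your order shipped yesterday
```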
Production Considerations
Deploy with environment-specific configuration. Use Turso for edge-optimized memory storage. Implement comprehensive logging and monitoring. Set up alerts for escalations and errors. Use Sentry for error tracking. Monitor API latency and token usage for cost optimization.
For additional documentation and navigation, see: https://docs.tendencia.ai/llms.txt