# LangChain Agent Starter Kit
npx skills add https://github.com/langchain-ai/langchain-skills --skill 'LangChain Agent Starter Kit'
Skill documentation
It answers two questions that every project must resolve upfront:
- Which framework should I use? → LangChain, LangGraph, or Deep Agents
- What do I need to install? → packages, versions, and environment setup
Load this skill first. Once you’ve made these decisions, invoke the framework-specific skill for implementation details.
## Step 1 – Pick Your Framework
The three frameworks are layered, not competing. Each builds on the one below:

    ┌────────────────────────────────────────────┐
    │ Deep Agents           ← batteries included │
    │ (planning, memory, skills, files)          │
    ├────────────────────────────────────────────┤
    │ LangGraph           ← custom orchestration │
    │ (nodes, edges, state, persistence)         │
    ├────────────────────────────────────────────┤
    │ LangChain                     ← foundation │
    │ (models, tools, prompts, RAG)              │
    └────────────────────────────────────────────┘

Answer these questions in order to land on the right choice:
| Question | Yes → | No ↓ |
|---|---|---|
| Does the user want or need planning, persistent memory, complex task management, long-running tasks, out-of-the-box file management, on-demand skills, or out-of-the-box middleware? | Deep Agents | ↓ |
| Need custom control flow and a higher degree of determinism – explicit loops, branching, predetermined parallel workers, or manually instrumented human-in-the-loop? | LangGraph | ↓ |
| Single-purpose agent with a fixed set of tools? | LangChain (create_agent) | ↓ |
| ReAct-style agent? | LangChain (LCEL / chain) | – |
Higher layers depend on lower ones only when necessary, which means you can mix them. A LangGraph graph can be a subagent inside Deep Agents; LangChain tools work inside both.
| | LangChain | LangGraph | Deep Agents |
|---|---|---|---|
| Control flow | Fixed (tool loop) | Custom (graph) | Managed (middleware) |
| Middleware layer | Callbacks only | ❌ None | ✅ Explicit, configurable |
| Planning | ❌ | Manual | ✅ TodoListMiddleware |
| File management | ❌ | Manual | ✅ FilesystemMiddleware |
| Persistent memory | ❌ | With checkpointer | ✅ MemoryMiddleware |
| Subagent delegation | ❌ | Manual | ✅ SubAgentMiddleware |
| On-demand skills | ❌ | ❌ | ✅ SkillsMiddleware |
| Human-in-the-loop | ❌ | Manual interrupt | ✅ HumanInTheLoopMiddleware |
| Custom graph edges | ❌ | ✅ Full control | Limited |
| Setup complexity | Low | Medium | Low |
| Next skill to load | langchain-agents | langgraph-fundamentals | deep-agents-core |
Middleware is a concept specific to LangChain (callbacks) and Deep Agents (explicit middleware layer). LangGraph has no middleware – behavior is wired directly into nodes and edges. If a user asks for built-in hooks, route to LangChain or Deep Agents.
### Deep Agents built-in middleware
Deep Agents ships with a built-in middleware layer – six components pre-wired out of the box, with the ability to add your own. The first three are always active; the rest are opt-in via configuration:
| Middleware | Always on? | What it gives the agent |
|---|---|---|
| TodoListMiddleware | ✅ | write_todos tool – breaks work into a tracked task list |
| FilesystemMiddleware | ✅ | ls, read_file, write_file, edit_file, glob, grep tools |
| SubAgentMiddleware | ✅ | task tool – delegates subtasks to named subagents |
| SkillsMiddleware | Opt-in | Loads SKILL.md files on demand from a configured skills directory |
| MemoryMiddleware | Opt-in | Long-term memory across sessions via a Store instance |
| HumanInTheLoopMiddleware | Opt-in | Pauses execution and requests human approval before specified tool calls |
You configure middleware – you don't implement it. See deep-agents-core for setup details.
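As one concrete case, SkillsMiddleware loads skills from a directory you point it at. A hypothetical layout (the skill names are illustrative; only the SKILL.md convention comes from the table above):

```
skills/
├── summarize-pr/
│   └── SKILL.md
└── triage-issues/
    └── SKILL.md
```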
## Step 2 – Set Up Your Dependencies
### Environment requirements
| | Python | TypeScript / Node |
|---|---|---|
| Runtime | Python 3.10+ | Node.js 20+ |
| LangChain | 1.0+ (LTS) | 1.0+ (LTS) |
| LangSmith SDK | >= 0.1.99 | >= 0.1.99 |
Always use LangChain 1.0+. LangChain 0.3 is maintenance-only until December 2026 – do not start new projects on it.
### Core packages – always required
**Python**

| Package | Role | Version |
|---|---|---|
| langchain | Agents, chains, retrieval | >=1.0,<2.0 |
| langchain-core | Base types & interfaces | >=1.0,<2.0 |
| langsmith | Tracing, evaluation, datasets | >=0.1.99 |

**TypeScript**

| Package | Role | Version |
|---|---|---|
| @langchain/core | Base types & interfaces (peer dep – install explicitly) | ^1.0.0 |
| langchain | Agents, chains, retrieval | ^1.0.0 |
| langsmith | Tracing, evaluation, datasets | ^0.1.99 |
### Orchestration – add based on your framework choice
| Framework | Python | TypeScript |
|---|---|---|
| LangGraph | langgraph>=1.0,<2.0 | @langchain/langgraph ^1.0.0 |
| Deep Agents | deepagents (depends on LangGraph; installs it as a transitive dep) | deepagents |
### Model providers – pick the one(s) you use
| Provider | Python | TypeScript |
|---|---|---|
| OpenAI | langchain-openai | @langchain/openai |
| Anthropic | langchain-anthropic | @langchain/anthropic |
| Google Gemini | langchain-google-genai | @langchain/google-genai |
| Mistral | langchain-mistralai | @langchain/mistralai |
| Groq | langchain-groq | @langchain/groq |
| Cohere | langchain-cohere | @langchain/cohere |
| AWS Bedrock | langchain-aws | @langchain/aws |
| Azure AI | langchain-azure-ai | @langchain/azure-openai |
| Ollama (local) | langchain-ollama | @langchain/ollama |
| Hugging Face | langchain-huggingface | ❌ |
| Fireworks AI | langchain-fireworks | ❌ |
| Together AI | langchain-together | ❌ |
### Common tools & retrieval – add as needed
| Package | Adds | Notes |
|---|---|---|
| langchain-tavily / @langchain/tavily | Tavily web search | Keep at latest; frequently updated for compatibility |
| langchain-text-splitters | Text chunking | Semver; keep current |
| langchain-chroma / @langchain/community | Chroma vector store | Dedicated integration package; keep at latest |
| langchain-pinecone / @langchain/pinecone | Pinecone vector store | Dedicated integration package; keep at latest |
| langchain-qdrant / @langchain/qdrant | Qdrant vector store | Dedicated integration package; keep at latest |
| faiss-cpu | FAISS vector store (Python only, local) | Via langchain-community |
| langchain-community / @langchain/community | 1000+ integrations fallback | Python: NOT semver – pin to minor series |
| langsmith[pytest] | pytest plugin | Requires langsmith>=0.3.4 |
Prefer dedicated integration packages over langchain-community when one exists – they are independently versioned and more stable. Keep tool packages (Tavily, vector stores) at latest, since they release compatibility fixes alongside core updates.
### Dependency templates
<ex-langgraph-python>
<python>
LangGraph project – provider-agnostic starting point.

Add your model provider:

langchain-openai | langchain-anthropic | langchain-google-genai | …

Add tools/retrieval as needed:

langchain-tavily | langchain-chroma | langchain-text-splitters | …
</python>
</ex-langgraph-python>
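The Python LangGraph template is assumed to start from a `requirements.txt` assembled from the version constraints in Step 2 – a sketch, not an official file:

```
langchain>=1.0,<2.0
langchain-core>=1.0,<2.0
langgraph>=1.0,<2.0
langsmith>=0.1.99
```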
<ex-langgraph-typescript>
<typescript>
LangGraph project – provider-agnostic starting point.
```json
{
"dependencies": {
"@langchain/core": "^1.0.0",
"langchain": "^1.0.0",
"@langchain/langgraph": "^1.0.0",
"langsmith": "^0.1.99"
}
}
```
</typescript>
</ex-langgraph-typescript>
<ex-deepagents-python>
<python>
Deep Agents project – provider-agnostic starting point.

Add your model provider:

langchain-openai | langchain-anthropic | langchain-google-genai | …
</python>
</ex-deepagents-python>
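The Python Deep Agents template is likewise assumed to start from a `requirements.txt` built from the tables in Step 2 (deepagents pulls in LangGraph transitively) – a sketch, not an official file:

```
deepagents
langchain>=1.0,<2.0
langchain-core>=1.0,<2.0
langsmith>=0.1.99
```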
<ex-deepagents-typescript>
<typescript>
Deep Agents project – provider-agnostic starting point.
```json
{
"dependencies": {
"deepagents": "latest",
"@langchain/core": "^1.0.0",
"langchain": "^1.0.0",
"langsmith": "^0.1.99"
}
}
```
</typescript>
</ex-deepagents-typescript>
## Step 3 – Set Your Environment Variables

<environment-variables>
### Model provider – set the one(s) you use

    OPENAI_API_KEY=
    ANTHROPIC_API_KEY=
    GOOGLE_API_KEY=
    MISTRAL_API_KEY=
    GROQ_API_KEY=
    COHERE_API_KEY=
    FIREWORKS_API_KEY=
    TOGETHER_API_KEY=
    HUGGINGFACEHUB_API_TOKEN=

### Common tool/retrieval services

    TAVILY_API_KEY=
    PINECONE_API_KEY=
</environment-variables>
---
## Step 4 – Load the Right Skill Next to Dive Deeper