The fastest way to build production-ready multi-agent apps with LangGraph.
7 patterns. 5 providers. One command.
Quick Start • Patterns • Providers • Examples • API • Contributing
Building multi-agent systems with LangGraph means writing the same boilerplate over and over — setting up supervisors, wiring handoff tools, configuring providers, adding persistence. This starter kit gives you all of that out of the box so you can focus on your agent logic, not infrastructure.
npx create-langgraph-app

What you get:
- Pick your LLM provider (OpenAI, Anthropic, Google, Groq, or local Ollama)
- Choose which agent patterns you need
- Get a ready-to-run project with tests, types, and a Fastify server
Or clone the full kit with all 7 patterns included.
graph TD
A[LangGraph Starter Kit] --> B["CLI Demo · npm run dev"]
A --> C["HTTP Server · npm run dev:http"]
A --> D[LangGraph Studio]
B --> P
C --> P
D --> P
subgraph P["7 Agent Patterns"]
P1[Supervisor]
P2[Swarm]
P3[HITL]
P4[Structured Output]
P5[Research Agent]
P6[RAG]
P7[Customer Support]
end
subgraph F["Agent Factory"]
F1["makeAgent()"]
F2["makeSupervisor()"]
F3["makeSwarm()"]
end
subgraph T["Tools"]
T1[Local]
T2[Web Search]
T3[RAG]
T4[Support]
T5[MCP]
end
P --> L
F --> L
T --> L
subgraph L["5 LLM Providers"]
L1[OpenAI]
L2[Anthropic]
L3[Google]
L4[Groq]
L5["Ollama · local"]
end
| Area | Feature | Description |
|---|---|---|
| Patterns | 7 Agent Patterns | Swarm, Supervisor, HITL, Structured Output, Research, RAG, Customer Support |
| CLI | Scaffolder | npx create-langgraph-app — interactive project generator |
| Providers | 5 LLM Providers | OpenAI, Anthropic, Google, Groq, Ollama — switch with one env var |
| Tools | MCP Integration | Connect external tools via Model Context Protocol |
| Server | HTTP + SSE | Fastify server with invoke, streaming, resume, and thread history |
| Debug | LangGraph Studio | langgraph.json included for visual graph debugging |
| Observe | LangSmith Tracing | Full observability with one env var |
| Persist | Memory + Postgres | In-memory for dev, PostgreSQL-ready for production |
| Deploy | Docker + CI | Docker Compose with Postgres, GitHub Actions CI |
| Test | 34+ Tests | Tools, config, agents — all tested with vitest |
npx create-langgraph-app

Interactive CLI — pick your provider, choose your patterns, get a project:
╔═══════════════════════════════════════╗
║ create-langgraph-app ║
╚═══════════════════════════════════════╝
Project name (my-langgraph-app): my-agents
LLM provider?
1. OpenAI (gpt-4o-mini)
2. Anthropic (Claude Sonnet)
3. Google (Gemini 2.0 Flash)
4. Groq (Llama 3.3 70B)
5. Ollama (local, no API key)
Which patterns?
1. Supervisor
2. Swarm
3. Human-in-the-Loop
4. Structured Output
5. RAG
Done! cd my-agents && npm run dev
git clone https://github.com/ac12644/langgraph-starter-kit.git
cd langgraph-starter-kit
npm install
cp .env.example .env # Add your API key
npm run dev # Run all 7 patterns
npm run dev:http # Start HTTP server on :3000

Switch providers with one env var. Each has a sensible default model:
| Provider | LLM_PROVIDER | Default Model | API Key |
|---|---|---|---|
| OpenAI | openai | gpt-4o-mini | OPENAI_API_KEY |
| Anthropic | anthropic | claude-sonnet-4-20250514 | ANTHROPIC_API_KEY |
| Google | google | gemini-2.0-flash | GOOGLE_API_KEY |
| Groq | groq | llama-3.3-70b-versatile | GROQ_API_KEY |
| Ollama | ollama | llama3.2 | None (runs locally) |
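The provider-to-model mapping above boils down to a small lookup. A minimal sketch of what that resolution might look like — illustrative names only; the kit's real implementation lives in src/config/llm.ts:

```typescript
// Illustrative provider → default-model resolution (see the table above).
// Hypothetical helper; the kit's actual logic is in src/config/llm.ts.
const DEFAULT_MODELS: Record<string, string> = {
  openai: "gpt-4o-mini",
  anthropic: "claude-sonnet-4-20250514",
  google: "gemini-2.0-flash",
  groq: "llama-3.3-70b-versatile",
  ollama: "llama3.2",
};

// Resolve the model for a provider, honoring an optional override.
export function resolveModel(provider: string, override?: string): string {
  const model = override ?? DEFAULT_MODELS[provider];
  if (!model) throw new Error(`Unknown LLM_PROVIDER: ${provider}`);
  return model;
}
```

Calling `resolveModel(process.env.LLM_PROVIDER!, process.env.LLM_MODEL)` is the kind of one-liner that makes the env-var switch below work.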
# .env — just two lines to switch
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...

A central coordinator routes tasks to specialized workers. Best for: structured workflows with clear task delegation.
curl -X POST http://localhost:3000/supervisor/invoke \
-H "Content-Type: application/json" \
-d '{"messages": [{"role": "user", "content": "sum 10 and 15, then write a summary"}]}'

Agents hand off to each other peer-to-peer using transfer tools. Best for: open-ended conversations where the right agent depends on context.
curl -X POST http://localhost:3000/swarm/invoke \
-H "Content-Type: application/json" \
-d '{"messages": [{"role": "user", "content": "talk to bob then add 5 and 7"}]}'

Pauses the graph for human approval before dangerous actions. Best for: high-stakes operations — deletions, payments, emails.
# Trigger an action that needs approval
curl -X POST http://localhost:3000/interrupt/invoke \
-H "Content-Type: application/json" \
-d '{"messages": [{"role": "user", "content": "delete record rec_2"}], "thread_id": "hitl-1"}'
# Approve it
curl -X POST http://localhost:3000/interrupt/resume \
-H "Content-Type: application/json" \
-d '{"thread_id": "hitl-1", "decision": "yes"}'

Returns typed JSON validated by Zod. Best for: extracting structured data — summaries, classifications, entities.
curl -X POST http://localhost:3000/analyst/invoke \
-H "Content-Type: application/json" \
-d '{"messages": [{"role": "user", "content": "Analyze: Revenue grew 25% but churn increased 8%"}]}'

Web search + URL scraping coordinated by a supervisor. Best for: gathering and synthesizing information from the web.
curl -X POST http://localhost:3000/researcher/invoke \
-H "Content-Type: application/json" \
-d '{"messages": [{"role": "user", "content": "Research multi-agent AI systems"}]}'

In-memory vector store with semantic search. Best for: answering questions about your own documents or knowledge base.
curl -X POST http://localhost:3000/rag/invoke \
-H "Content-Type: application/json" \
-d '{"messages": [{"role": "user", "content": "What is the supervisor pattern?"}]}'

Multi-agent support system with a router that delegates to billing, tech support, and returns specialists. Includes escalation to human operators. Best for: customer-facing products with different support domains.
curl -X POST http://localhost:3000/support/invoke \
-H "Content-Type: application/json" \
-d '{"messages": [{"role": "user", "content": "I am customer C-1002. I was charged $29.99 but my plan is free. Can you help?"}]}'

Every app supports SSE for real-time token streaming:
curl -N http://localhost:3000/supervisor/stream \
-H "Content-Type: application/json" \
-d '{"messages": [{"role": "user", "content": "what is 2+2?"}]}'

Extend your agents with external tools via the Model Context Protocol:
cp mcp-servers.example.json mcp-servers.json
# Edit mcp-servers.json with your MCP server configs
# Set MCP_SERVERS_PATH=./mcp-servers.json in .env

Supports both stdio (local) and http (remote) transports. Tools are auto-injected into swarm and supervisor apps.
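For orientation, MCP server configs commonly follow the Claude-style shape sketched below, with one entry per server. Treat this as an illustrative guess — the exact keys this kit's loader expects are in mcp-servers.example.json, so copy from that file rather than from here:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./data"]
    },
    "remote-tools": {
      "url": "https://example.com/mcp"
    }
  }
}
```

A `command`/`args` entry uses the stdio transport (a local process); a `url` entry uses the http transport (a remote server).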
langgraph dev # Visual graph debugging

LANGCHAIN_TRACING_V2=true
LANGSMITH_API_KEY=ls_...
LANGSMITH_PROJECT=langgraph-starter-kit

| Route | Method | Description |
|---|---|---|
| /:app/invoke | POST | Invoke agent, return final result |
| /:app/stream | POST | SSE token streaming |
| /:app/resume | POST | Resume paused graph (HITL) |
| /:app/threads/:id | GET | Get thread state |
| /:app/threads/:id/history | GET | Full state history |
| /health | GET | Health check |
Apps: swarm, supervisor, interrupt, analyst, researcher, rag, support
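If you're calling the server from TypeScript rather than curl, a minimal client for the routes above might look like this. The helper names are ours, not part of the kit, and it assumes Node 18+ for the global fetch:

```typescript
// Hypothetical client helpers for the kit's HTTP API (names are illustrative).
type Role = "user" | "assistant" | "system";
interface Message { role: Role; content: string; }

// Build a route from the table above, e.g. http://…/supervisor/invoke.
export function routeFor(base: string, app: string, action: string): string {
  return `${base}/${app}/${action}`;
}

// POST messages to an app and return the final JSON result.
export async function invokeApp(
  base: string,
  app: string,
  messages: Message[],
  threadId?: string,
): Promise<unknown> {
  const res = await fetch(routeFor(base, app, "invoke"), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(threadId ? { messages, thread_id: threadId } : { messages }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```

Usage mirrors the curl examples: `await invokeApp("http://localhost:3000", "supervisor", [{ role: "user", content: "what is 2+2?" }])`.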
Real-world agent apps with full documentation:
| Example | Description | Patterns |
|---|---|---|
| Customer Support Bot | Billing, tech support, returns routing with human escalation | Supervisor, HITL |
| Research Agent | Web search + report writing pipeline | Supervisor |
| RAG Agent | Document indexing + semantic retrieval | Supervisor, RAG |
Each example has its own README with architecture diagrams, tool reference, usage examples, and customization guide.
src/
├── config/
│ ├── env.ts # Environment + provider validation
│ ├── llm.ts # Multi-provider LLM factory
│ ├── embeddings.ts # Multi-provider embeddings factory
│ └── checkpointer.ts # Memory (dev) / Postgres (prod)
├── tools/
│ ├── local.ts # Built-in tools (add, multiply, echo)
│ ├── web.ts # Web search + URL scraping
│ ├── rag.ts # Vector store + retrieval
│ ├── support.ts # Customer support tools
│ └── mcp.ts # MCP external tool loader
├── agents/
│ ├── factory.ts # makeAgent() — agent builder
│ ├── supervisor.ts # makeSupervisor() wrapper
│ ├── swarm.ts # makeSwarm() wrapper
│ └── handoff.ts # createHandoffTool() — transfers
├── apps/
│ ├── supervisor.ts # Supervisor pattern
│ ├── swarm.ts # Swarm pattern
│ ├── interrupt.ts # Human-in-the-loop
│ ├── analyst.ts # Structured output
│ ├── researcher.ts # Research agent
│ ├── rag.ts # RAG agent
│ └── support.ts # Customer support bot
├── server/index.ts # Fastify HTTP server
└── index.ts # CLI demo
examples/
├── customer-support/ # Full customer support bot docs
├── research-agent/ # Research agent docs
└── rag-agent/ # RAG agent docs
# Docker Compose (includes Postgres)
docker compose up
# Standalone Docker
docker build -t langgraph-starter .
docker run -p 3000:3000 --env-file .env langgraph-starter

| Platform | How |
|---|---|
| Railway | Deploy with this repo URL |
| Render | Connect repo — uses render.yaml |
| Docker | docker compose up anywhere |
Create a file, wire it up, done:
// src/apps/my-agent.ts
import { llm } from "../config/llm";
import { makeAgent } from "../agents/factory";
import { makeSupervisor } from "../agents/supervisor";
export function createMyApp() {
const agent = makeAgent({
name: "my_agent",
llm,
tools: [/* your tools */],
system: "You are a helpful assistant.",
});
return makeSupervisor({
agents: [agent],
llm,
outputMode: "last_message",
supervisorName: "my_supervisor",
});
}

Register it in src/server/index.ts and you're live.
Contributions are welcome! Whether it's a new agent pattern, bug fix, documentation improvement, or just a typo — every bit helps.
See CONTRIBUTING.md for guidelines.
First time contributing? Look for issues labeled good first issue.
- Questions? Open a Discussion
- Bug? File an Issue
- Want to contribute? See CONTRIBUTING.md
- Like it? Give it a star — it helps others find the project
Apache License 2.0 — same license as LangChain. Use it freely in personal and commercial projects.