| layout | title | nav_order | has_children |
|---|---|---|---|
| default | AG2 Tutorial | 73 | true |
Build collaborative AI agent systems with AG2, the community-driven successor to AutoGen.
AG2 is the community-driven successor to Microsoft's AutoGen framework. It provides a powerful, open-source platform for building AI agents that collaborate to solve complex tasks through natural conversation.
| Aspect | AG2 | Microsoft AutoGen |
|---|---|---|
| Governance | Open community-driven | Microsoft-controlled |
| Development | Community contributions | Microsoft team |
| License | Apache 2.0 | MIT |
| Focus | Multi-agent collaboration | Part of Microsoft Agent Framework |
| Repository | github.com/ag2ai/ag2 | github.com/microsoft/autogen |
Note: The original AutoGen creators transitioned to AG2 to promote open governance. Microsoft continues developing AutoGen as part of their Agent Framework.
```mermaid
flowchart TD
    A[User Task] --> B[AssistantAgent]
    B <--> C[UserProxyAgent]
    B --> D{Needs Tools?}
    D -->|Yes| E[Code Execution]
    D -->|Yes| F[Function Calling]
    D -->|No| G[Direct Response]
    E --> H[Results]
    F --> H
    G --> H
    H --> I{Task Complete?}
    I -->|No| B
    I -->|Yes| J[Final Output]
    C --> K[Human Feedback]
    K --> B
    classDef agent fill:#e1f5fe,stroke:#01579b
    classDef decision fill:#f3e5f5,stroke:#4a148c
    classDef action fill:#fff3e0,stroke:#ef6c00
    classDef output fill:#e8f5e8,stroke:#1b5e20
    class A,B,C agent
    class D,I decision
    class E,F,G,K action
    class H,J output
```
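The control flow in the diagram can be sketched in plain Python, independent of AG2 itself. This is a conceptual sketch only: `needs_tools`, `run_tool`, and `respond` are hypothetical stand-ins for behavior that AG2's real agents encapsulate for you.

```python
# Conceptual sketch of the flowchart above: an agent loops, optionally
# routing through a tool, until the task is judged complete. All helper
# names here are hypothetical, not AG2 API.

def needs_tools(task: str) -> bool:
    """Pretend routing decision: 'compute' tasks go to the tool branch."""
    return "compute" in task

def run_tool(task: str) -> str:
    """Stand-in for code execution / function calling."""
    return f"tool result for: {task}"

def respond(task: str) -> str:
    """Stand-in for a direct LLM response."""
    return f"direct answer for: {task}"

def agent_loop(task: str, max_rounds: int = 5) -> str:
    result = ""
    for _ in range(max_rounds):
        result = run_tool(task) if needs_tools(task) else respond(task)
        if result:          # stand-in for a real completion check
            return result   # task complete -> final output
    return result

print(agent_loop("compute 2+2"))  # routed through the tool branch
```

In AG2 this loop is hidden inside the agents: `initiate_chat` drives the rounds, and tool routing happens via the agent's registered functions and executors.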
| Agent | Purpose | Key Features |
|---|---|---|
| AssistantAgent | AI-powered task solving | LLM-based reasoning, tool use |
| UserProxyAgent | Human interaction | Code execution, human feedback |
| GroupChatManager | Multi-agent coordination | Conversation flow, speaker selection |
| ConversableAgent | Base class for custom agents | Flexible conversation patterns |
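The GroupChatManager's core job is deciding who speaks next. One of the simplest strategies is round-robin, which can be sketched without AG2 at all (agents are plain strings here, purely for illustration):

```python
from itertools import cycle

# Minimal sketch of round-robin speaker selection, one of the strategies
# a GroupChatManager can apply. Real AG2 can also pick speakers with an
# LLM based on the conversation so far.
def round_robin(agents: list[str], rounds: int) -> list[str]:
    order = cycle(agents)
    return [next(order) for _ in range(rounds)]

speakers = round_robin(["Researcher", "Writer", "Critic"], 5)
print(speakers)  # ['Researcher', 'Writer', 'Critic', 'Researcher', 'Writer']
```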
- Two-Agent Chat - Simple back-and-forth between agents
- Sequential Chat - Ordered multi-agent workflows
- Group Chat - Multiple agents collaborating
- Nested Chat - Hierarchical agent structures
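The sequential pattern, in particular, is just a pipeline: each chat's summary becomes the next chat's input. A plain-Python sketch (stages are ordinary functions standing in for agents, not AG2 API):

```python
# Conceptual sketch of a sequential chat: each stage's output feeds the
# next stage, the way chat summaries are carried forward in AG2.

def research(topic: str) -> str:
    return f"notes on {topic}"

def write(notes: str) -> str:
    return f"draft based on {notes}"

def review(draft: str) -> str:
    return f"approved: {draft}"

def sequential_chat(topic: str) -> str:
    result = topic
    for stage in (research, write, review):
        result = stage(result)
    return result

print(sequential_chat("quantum computing"))
```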
- Chapter 1: Getting Started - Installation, setup, and your first AG2 agents
- Chapter 2: Agent Types - Understanding and configuring different agents
- Chapter 3: Conversation Patterns - Two-agent, sequential, and group chats
- Chapter 4: Code Execution - Secure code execution with Docker
- Chapter 5: Tool Integration - Function calling and external tools
- Chapter 6: Group Chat - Multi-agent collaboration and speaker selection
- Chapter 7: Advanced Patterns - Nested chats, caching, and optimization
- Chapter 8: Production Deployment - Scaling, monitoring, and best practices
- Chapter 9: Enterprise Operations - Governance, evals, safety, and cost/perf tuning
- Create Conversational Agents that collaborate through natural dialogue
- Implement Code Execution safely with Docker sandboxing
- Build Multi-Agent Systems with group chat coordination
- Integrate External Tools through function calling
- Design Complex Workflows with nested and sequential patterns
- Handle Human-in-the-Loop scenarios effectively
- Deploy Production Systems with proper scaling and monitoring
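For the human-in-the-loop point above: AG2's `UserProxyAgent` takes a `human_input_mode` of `"ALWAYS"`, `"NEVER"`, or `"TERMINATE"`. The gating logic those modes imply can be sketched as plain Python (a simplified model, not AG2's actual implementation):

```python
# Sketch of how the three human_input_mode settings gate when a human
# is asked for input. The real UserProxyAgent implements richer logic.
def should_ask_human(mode: str, chat_would_end: bool) -> bool:
    if mode == "ALWAYS":
        return True              # ask before every reply
    if mode == "TERMINATE":
        return chat_would_end    # ask only when the chat is about to end
    return False                 # "NEVER": fully automatic

print(should_ask_human("TERMINATE", chat_would_end=True))  # True
```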
- Python 3.9+
- OpenAI API key (or other LLM provider)
- Docker (for code execution)
- Basic understanding of LLMs
```bash
# Install AG2
pip install ag2

# Or with all extras (quotes needed in zsh)
pip install "ag2[all]"
```

```python
from ag2 import AssistantAgent, UserProxyAgent

# Create an assistant agent
assistant = AssistantAgent(
    name="assistant",
    llm_config={
        "model": "gpt-4",
        "api_key": "your-api-key"
    }
)

# Create a user proxy agent
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # Automatic mode
    code_execution_config={
        "work_dir": "coding",
        "use_docker": True
    }
)

# Start a conversation
user_proxy.initiate_chat(
    assistant,
    message="Write a Python function to calculate fibonacci numbers and test it."
)
```

```python
from ag2 import AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager

# Shared LLM configuration for all agents (use your own credentials)
llm_config = {"model": "gpt-4", "api_key": "your-api-key"}

# Create specialized agents
researcher = AssistantAgent(
    name="Researcher",
    system_message="You are a research assistant. Search for and analyze information.",
    llm_config=llm_config
)

writer = AssistantAgent(
    name="Writer",
    system_message="You are a technical writer. Create clear, well-structured content.",
    llm_config=llm_config
)

critic = AssistantAgent(
    name="Critic",
    system_message="You review content for accuracy and suggest improvements.",
    llm_config=llm_config
)

user_proxy = UserProxyAgent(
    name="User",
    human_input_mode="TERMINATE",
    code_execution_config=False
)

# Create group chat
group_chat = GroupChat(
    agents=[user_proxy, researcher, writer, critic],
    messages=[],
    max_round=10
)

manager = GroupChatManager(groupchat=group_chat, llm_config=llm_config)

# Start research task
user_proxy.initiate_chat(
    manager,
    message="Research and write a brief summary about quantum computing applications."
)
```

- Chapters 1-3: Setup, agent types, and basic conversations
- Build simple two-agent systems
- Chapters 4-6: Code execution, tools, and group chat
- Create collaborative multi-agent workflows
- Chapters 7-8: Advanced patterns and production deployment
- Master enterprise-grade agent systems
Ready to build multi-agent AI systems? Let's begin with Chapter 1: Getting Started!
Generated for Awesome Code Docs
- Start Here: Chapter 1: Getting Started with AG2
- Back to Main Catalog
- Browse A-Z Tutorial Directory
- Search by Intent
- Explore Category Hubs
- Chapter 1: Getting Started with AG2
- Chapter 2: Agent Types & Configuration
- Chapter 3: Conversation Patterns
- Chapter 4: Code Execution & Security
- Chapter 5: Tool Integration & Function Calling
- Chapter 6: Group Chat & Multi-Agent Collaboration
- Chapter 7: Advanced Patterns & Optimization
- Chapter 8: Production Deployment & Scaling
- Chapter 9: Enterprise Operations & Advanced Patterns
Generated by AI Codebase Knowledge Builder