Production-ready templates for building and deploying LLM-powered agents. Run them locally (e.g. with Ollama and Llama Stack) or deploy them to Red Hat OpenShift. Each agent ships with step-by-step documentation.
Agents are organized by framework. Pick one and follow its README:
| Framework | Agent | Description |
|---|---|---|
| LangGraph | ReAct Agent | General-purpose agent using a ReAct loop: it reasons and calls tools (e.g. search, math) step by step. Built with LangGraph and LangChain. |
| LangGraph | Agentic RAG | RAG agent that indexes documents in a vector store (Milvus) and retrieves relevant chunks to augment the LLM's answers with your own data. |
| LangGraph | ReAct + DB Memory | ReAct agent with PostgreSQL-backed conversation memory. Full chat history is persisted in the database while a FIFO sliding window keeps only the last N messages in the LLM context. |
| LangGraph | Human-in-the-Loop | ReAct agent with a human approval step. The agent pauses before executing tool calls and waits for user confirmation, enabling oversight of critical actions. |
| LlamaIndex | WebSearch Agent | Agent built on LlamaIndex that uses a web search tool to query the internet and use the results in its answers. |
| CrewAI | WebSearch Agent | CrewAI-based agent with a web search tool to query the internet and answer user questions. |
| Vanilla Python | OpenAI Responses Agent | Minimal agent with no framework: only the OpenAI Python client and an Action/Observation loop with tools. Use with OpenAI or any compatible API. |
| AutoGen | MCP Agent | AutoGen AssistantAgent that consumes MCP tools over SSE (e.g. churn prediction, math tools) and serves a FastAPI `/chat/completions` endpoint. |
| Google ADK | ADK Agent | General-purpose agent using Google ADK 2.0 with LiteLLM to route inference through a Llama Stack-compatible endpoint. |
| Langflow | Simple Tool Calling Agent | Tool-calling agent built with Langflow's visual flow builder. Calls external APIs as tools and reasons over results. Includes Langfuse v3 tracing. Runs locally via podman-compose. |
| A2A | LangGraph + CrewAI Agent | Multi-agent system using the Agent-to-Agent (A2A) protocol. A LangGraph orchestrator delegates tasks to a CrewAI worker agent. Uses a dedicated Helm chart. |
Agents in this repository support two deployment modes:

- Run agents on your local machine
  - Uses a Llama Stack server with Ollama for model serving
  - Ideal for development, testing, and experimentation
  - No cloud infrastructure required
- Deploy agents to a Red Hat OpenShift cluster
  - Containerized deployment with Kubernetes
  - Production-grade scaling and monitoring
  - CI/CD ready
```
agentic-starter-kits/
├── agents/
│   ├── langgraph/
│   │   ├── react_agent/                 # LangGraph ReAct agent
│   │   ├── agentic_rag/                 # LangGraph RAG agent with Milvus
│   │   ├── react_with_database_memory/  # LangGraph ReAct + PostgreSQL memory
│   │   └── human_in_the_loop/           # LangGraph Human-in-the-Loop agent
│   ├── crewai/
│   │   └── websearch_agent/             # CrewAI web search agent
│   ├── llamaindex/
│   │   └── websearch_agent/             # LlamaIndex web search agent
│   ├── vanilla_python/
│   │   └── openai_responses_agent/      # OpenAI Responses API (no framework)
│   ├── autogen/
│   │   └── mcp_agent/                   # AutoGen + MCP (SSE)
│   ├── google/
│   │   └── adk/                         # Google ADK 2.0 agent
│   ├── langflow/
│   │   └── simple_tool_calling_agent/   # Langflow tool-calling agent
│   └── a2a/
│       └── langgraph_crewai_agent/      # A2A multi-agent (LangGraph + CrewAI)
├── tests/
│   └── behavioral/                      # Behavioral eval suite (shared infra)
├── charts/
│   ├── agent/                           # Shared Helm chart for all standard agents
│   └── a2a-langgraph-crewai/            # Dedicated Helm chart for A2A agent
├── docs/                                # Guides: local dev, deployment, contributing
├── pyproject.toml                       # Test deps & pytest config
└── README.md
```
- Start Here: read this README to understand the overall structure and install core dependencies
- Choose an Agent: select an agent from the `agents/` directory based on your use case
- Follow the Agent README: navigate to the agent's directory and follow its specific README for:
  - Agent-specific dependency installation
  - Configuration and setup
  - Local development or OpenShift deployment
  - Usage examples and API endpoints
Run the following commands to set up the repository using uv and Python 3.12.

Download the repo:

```shell
git clone https://github.com/red-hat-data-services/agentic-starter-kits
```

Change into the root directory:

```shell
cd agentic-starter-kits
```

Install uv:

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```

The behavioral eval suite tests agents over HTTP against their shared OpenAI-compatible API. Tests are organized by capability so they apply to any agent.
Tests require a running agent. Set the target URL via environment variables:
| Env var | Test scope |
|---|---|
| `AGENT_URL` | Cross-agent tests (api_contract, adversarial) |
| `REACT_AGENT_URL` | LangGraph ReAct agent tests |
| `VANILLA_PYTHON_AGENT_URL` | Vanilla Python agent tests |
```shell
uv pip install -e ".[test]"
AGENT_URL=https://my-agent.example.com pytest tests/behavioral/ -v
```

See `tests/behavioral/` for full details.
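To illustrate the style of check the suite performs, here is a hypothetical api_contract-style smoke test (not a test from the suite; the real tests live under `tests/behavioral/` and use pytest). It assumes an OpenAI-style response body with a `choices[0].message.content` field:

```python
import json
import os
from urllib import request


def validate_chat_response(body: dict) -> bool:
    # OpenAI-compatible contract: at least one choice carrying a message
    # with a "content" field.
    choices = body.get("choices") or []
    return bool(choices) and "content" in choices[0].get("message", {})


def check_api_contract(base_url: str) -> None:
    # Hypothetical smoke check: POST one message, then validate the shape
    # of the agent's reply.
    payload = {"model": "test", "messages": [{"role": "user", "content": "Say OK"}]}
    req = request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=60) as resp:
        assert validate_chat_response(json.load(resp)), "response violates contract"


if __name__ == "__main__":
    check_api_contract(os.environ["AGENT_URL"])
```

Separating the pure shape check (`validate_chat_response`) from the network call keeps the contract logic reusable across agents, which mirrors how the suite applies the same capability tests to any agent behind the shared API.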
- Local Development → Ollama + Llama Stack setup
- OpenShift Deployment → Helm-based deployment guide
- Adding a New Agent → How to contribute a new agent template
- Adding Behavioral Tests → How to add test coverage for an agent
- Llama Stack Documentation: https://llama-stack.readthedocs.io/
- Ollama Documentation: https://docs.ollama.com/
- OpenShift Documentation: https://docs.openshift.com/
- Kubernetes: https://kubernetes.io/docs/
Contributions are welcome! Please see individual agent READMEs for specific guidelines.
MIT License
Copyright (c) 2026
