Agentic Starter Kits
Purpose

Production-ready agent templates to build and deploy LLM-powered agents. Run locally (e.g. with Ollama/Llama Stack) or deploy to Red Hat OpenShift. Each agent has step-by-step docs.

Agents

Agents are organized by framework. Pick one and follow its README:

| Framework | Agent | Description |
| --- | --- | --- |
| LangGraph | ReAct Agent | General-purpose agent using a ReAct loop: it reasons and calls tools (e.g. search, math) step by step. Built with LangGraph and LangChain. |
| LangGraph | Agentic RAG | RAG agent that indexes documents in a vector store (Milvus) and retrieves relevant chunks to augment the LLM's answers with your own data. |
| LangGraph | ReAct + DB Memory | ReAct agent with PostgreSQL-backed conversation memory. Full chat history is persisted in the database while a FIFO sliding window keeps only the last N messages in the LLM context. |
| LangGraph | Human-in-the-Loop | ReAct agent with a human approval step. The agent pauses before executing tool calls and waits for user confirmation, enabling oversight of critical actions. |
| LlamaIndex | WebSearch Agent | Agent built on LlamaIndex that uses a web search tool to query the internet and incorporate the results into its answers. |
| CrewAI | WebSearch Agent | CrewAI-based agent with a web search tool to query the internet and answer user questions. |
| Vanilla Python | OpenAI Responses Agent | Minimal agent with no framework: only the OpenAI Python client and an Action/Observation loop with tools. Works with OpenAI or any compatible API. |
| AutoGen | MCP Agent | AutoGen AssistantAgent with MCP tools over SSE (e.g. churn prediction, math tools), exposed via a FastAPI /chat/completions endpoint. |
| Google ADK | ADK Agent | General-purpose agent using Google ADK 2.0 with LiteLLM to route inference through a LlamaStack-compatible endpoint. |
| Langflow | Simple Tool Calling Agent | Tool-calling agent built with Langflow's visual flow builder. Calls external APIs as tools and reasons over results. Includes Langfuse v3 tracing. Runs locally via podman-compose. |
| A2A | LangGraph + CrewAI Agent | Multi-agent system using the Agent-to-Agent (A2A) protocol. A LangGraph orchestrator delegates tasks to a CrewAI worker agent. Uses a dedicated Helm chart. |
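The FIFO sliding-window memory described for the ReAct + DB Memory agent can be sketched roughly as follows. This is a minimal standard-library illustration of the pattern (full history persisted, only the last N messages in the LLM context), not the repository's actual implementation; the class and message shape are assumptions for the example.

```python
from collections import deque

class SlidingWindowMemory:
    """Keep the full conversation history (the real agent persists this in
    PostgreSQL) while exposing only the last N messages as LLM context."""

    def __init__(self, window_size: int):
        self.full_history: list[dict] = []        # persisted side (DB in the real agent)
        self.window = deque(maxlen=window_size)   # FIFO: oldest messages fall out

    def add(self, role: str, content: str) -> None:
        msg = {"role": role, "content": content}
        self.full_history.append(msg)
        self.window.append(msg)

    def llm_context(self) -> list[dict]:
        return list(self.window)

memory = SlidingWindowMemory(window_size=2)
for i in range(4):
    memory.add("user", f"message {i}")

print(len(memory.full_history))                      # 4 — everything is kept
print([m["content"] for m in memory.llm_context()])  # ['message 2', 'message 3']
```

The key design point is that trimming happens only on the context handed to the model; the durable store never loses messages, so the full conversation can still be replayed or audited.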

Deployment Options

Agents in this repository support two deployment modes:

πŸ–₯️ Local Development

  • Run agents on your local machine
  • Use Llama Stack server with Ollama for model serving
  • Ideal for development, testing, and experimentation
  • No cloud infrastructure required

☁️ Production Deployment

  • Deploy agents to Red Hat OpenShift Cluster
  • Containerized deployment with Kubernetes
  • Production-grade scaling and monitoring
  • CI/CD ready

Repository Structure

agentic-starter-kits/
β”œβ”€β”€ agents/
β”‚   β”œβ”€β”€ langgraph/
β”‚   β”‚   β”œβ”€β”€ react_agent/              # LangGraph ReAct agent
β”‚   β”‚   β”œβ”€β”€ agentic_rag/             # LangGraph RAG agent with Milvus
β”‚   β”‚   β”œβ”€β”€ react_with_database_memory/ # LangGraph ReAct + PostgreSQL memory
β”‚   β”‚   └── human_in_the_loop/       # LangGraph Human-in-the-Loop agent
β”‚   β”œβ”€β”€ crewai/
β”‚   β”‚   └── websearch_agent/         # CrewAI web search agent
β”‚   β”œβ”€β”€ llamaindex/
β”‚   β”‚   └── websearch_agent/         # LlamaIndex web search agent
β”‚   β”œβ”€β”€ vanilla_python/
β”‚   β”‚   └── openai_responses_agent/  # OpenAI Responses API (no framework)
β”‚   β”œβ”€β”€ autogen/
β”‚   β”‚   └── mcp_agent/               # AutoGen + MCP (SSE)
β”‚   β”œβ”€β”€ google/
β”‚   β”‚   └── adk/                     # Google ADK 2.0 agent
β”‚   β”œβ”€β”€ langflow/
β”‚   β”‚   └── simple_tool_calling_agent/ # Langflow tool-calling agent
β”‚   └── a2a/
β”‚       └── langgraph_crewai_agent/  # A2A multi-agent (LangGraph + CrewAI)
β”œβ”€β”€ tests/
β”‚   └── behavioral/                  # Behavioral eval suite (shared infra)
β”œβ”€β”€ charts/
β”‚   β”œβ”€β”€ agent/                       # Shared Helm chart for all standard agents
β”‚   └── a2a-langgraph-crewai/        # Dedicated Helm chart for A2A agent
β”œβ”€β”€ docs/                            # Guides: local dev, deployment, contributing
β”œβ”€β”€ pyproject.toml                   # Test deps & pytest config
└── README.md

How to Use This Repository

  1. Start Here: Read this README to understand the overall structure and install core dependencies
  2. Choose an Agent: Select an agent from the agents/ directory based on your use case
  3. Follow Agent README: Navigate to the agent's directory and follow its specific README for:
    • Agent-specific dependencies installation
    • Configuration and setup
    • Local development or OpenShift deployment
    • Usage examples and API endpoints

Prerequisites

Set up the repository with uv and Python 3.12 by running the commands below.

Clone the repository:

git clone https://github.com/red-hat-data-services/agentic-starter-kits

Change into the repository root:

cd agentic-starter-kits

Install uv:

curl -LsSf https://astral.sh/uv/install.sh | sh

Behavioral Tests

The behavioral eval suite tests agents over HTTP against their shared OpenAI-compatible API. Tests are organized by capability, so they apply to any agent.

Tests require a running agent. Set the target URL via environment variables:

| Env var | Test scope |
| --- | --- |
| AGENT_URL | Cross-agent tests (api_contract, adversarial) |
| REACT_AGENT_URL | LangGraph ReAct agent tests |
| VANILLA_PYTHON_AGENT_URL | Vanilla Python agent tests |

Install the test dependencies and run the suite:

uv pip install -e ".[test]"
AGENT_URL=https://my-agent.example.com pytest tests/behavioral/ -v

See tests/behavioral/ for full details.
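A request against the shared OpenAI-compatible endpoint might be built roughly like this. This is a standard-library sketch: the /chat/completions path follows the OpenAI chat format mentioned above, but the exact payload fields and helpers used in tests/behavioral/ may differ.

```python
import json
import os
import urllib.request

def build_chat_request(base_url: str, message: str) -> urllib.request.Request:
    """Build a POST request for an OpenAI-compatible chat endpoint.
    The payload shape mirrors the OpenAI chat format (an assumption here)."""
    payload = {"messages": [{"role": "user", "content": message}]}
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Target URL comes from the same env var the test suite uses.
req = build_chat_request(os.environ.get("AGENT_URL", "http://localhost:8000"),
                         "What is 2 + 2?")
print(req.full_url)  # e.g. http://localhost:8000/chat/completions
# Sending it requires a running agent:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```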


Documentation

Additional Resources

Contributing

Contributions are welcome! Please see individual agent READMEs for specific guidelines.

License

MIT License

Copyright (c) 2026
