Merged
2 changes: 1 addition & 1 deletion docs/decisions/0001-agent-run-response.md
@@ -498,7 +498,7 @@ We need to decide what AIContent types, each agent response type will be mapped
| Google ADK | **Approach 1** Both [input and output schemas can be specified for LLM Agents](https://google.github.io/adk-docs/agents/llm-agents/#structuring-data-input_schema-output_schema-output_key) at construction time. This option is specific to this agent type, and other agent types do not necessarily support it |
| AWS (Strands) | **Approach 2** Supports a special invocation method called [structured_output](https://strandsagents.com/latest/documentation/docs/api-reference/python/agent/agent/#strands.agent.agent.Agent.structured_output) |
| LangGraph | **Approach 1** Supports [configuring an agent](https://langchain-ai.github.io/langgraph/agents/agents/?h=structured#6-configure-structured-output) at agent construction time, and a [structured response](https://langchain-ai.github.io/langgraph/agents/run_agents/#output-format) can be retrieved as a special property on the agent response |
| Agno | **Approach 1** Supports [configuring an agent](https://docs.agno.com/examples/getting-started/structured-output) at agent construction time |
| Agno | **Approach 1** Supports [configuring an agent](https://docs.agno.com/input-output/structured-output/agent) at agent construction time |
| A2A | **Informal Approach 2** Doesn't formally support schema negotiation, but [hints can be provided via metadata](https://a2a-protocol.org/latest/specification/#97-structured-data-exchange-requesting-and-providing-json) at invocation time |
| Protocol Activity | Supports returning [Complex types](https://github.com/microsoft/Agents/blob/main/specs/activity/protocol-activity.md#complex-types) but no support for requesting a type |
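The two invocation shapes in the table can be sketched with stdlib-only stubs. The `StubAgent` class and its methods below are illustrative placeholders for where the schema is supplied, not any framework's real API:

```python
from dataclasses import dataclass


@dataclass
class WeatherReport:
    """Example output schema an agent is asked to produce."""

    city: str
    temperature_c: float


class StubAgent:
    """Toy agent showing *where* the schema is supplied, not how a real framework fulfills it."""

    def __init__(self, output_schema=None):
        # Approach 1: the schema is bound once, at construction time.
        self.output_schema = output_schema

    def structured_output(self, schema, prompt: str):
        # Approach 2: the schema is passed per invocation.
        return schema(city="Oslo", temperature_c=4.0)  # stubbed model reply


agent = StubAgent(output_schema=WeatherReport)                       # Approach 1
report = agent.structured_output(WeatherReport, "Weather in Oslo?")  # Approach 2
```

Either way, the caller ends up with a typed object rather than free-form text; the approaches differ only in when the schema is fixed.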

8 changes: 8 additions & 0 deletions python/samples/03-workflows/README.md
@@ -160,6 +160,14 @@ Sequential orchestration uses a few small adapter nodes for plumbing:
These may appear in event streams (executor_invoked/executor_completed). They're analogous to
concurrent’s dispatcher and aggregator and can be ignored if you only care about agent activity.

### AzureOpenAIResponsesClient vs AzureAIAgent

Workflow and orchestration samples use `AzureOpenAIResponsesClient` rather than the CRUD-style `AzureAIAgent` client. The key difference:

- **`AzureOpenAIResponsesClient`** — A lightweight client that uses the underlying Agent Service V2 (Responses API) for non-CRUD-style agents. Orchestrations use this client because agents are created locally and do not require server-side lifecycle management (create/update/delete). This is the recommended client for orchestration patterns (Sequential, Concurrent, Handoff, GroupChat, Magentic).

- **`AzureAIAgent`** — A CRUD-style client for server-managed agents. Use this when you need persistent, server-side agent definitions with features like file search, code interpreter sessions, or thread management provided by the Azure AI Agent Service.
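The choice reduces to whether the agent definition must live on the server. A minimal illustrative helper (not part of the framework) capturing that rule of thumb:

```python
def pick_client(server_managed: bool) -> str:
    """Return the client name suited to the agent's lifecycle.

    Orchestrations build agents locally, so the lightweight Responses
    client suffices; server-managed agents (file search, code interpreter
    sessions, thread management) need the CRUD-style client.
    """
    return "AzureAIAgent" if server_managed else "AzureOpenAIResponsesClient"


print(pick_client(False))  # AzureOpenAIResponsesClient
```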

### Environment Variables

Workflow samples that use `AzureOpenAIResponsesClient` expect:
@@ -18,10 +18,11 @@
"""

import asyncio
import os
from pathlib import Path
from typing import Any

from agent_framework.azure import AzureOpenAIChatClient
from agent_framework.azure import AzureOpenAIResponsesClient
from agent_framework.declarative import WorkflowFactory
from azure.identity import AzureCliCredential
from pydantic import BaseModel, Field
@@ -196,8 +197,12 @@ def format_order_confirmation(order_data: dict[str, Any], order_calculation: dic

async def main():
"""Run the agent to function tool workflow."""
# Create Azure OpenAI client
chat_client = AzureOpenAIChatClient(credential=AzureCliCredential())
# Create Azure OpenAI Responses client
chat_client = AzureOpenAIResponsesClient(
project_endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
deployment_name=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
credential=AzureCliCredential(),
)

# Create the order analysis agent with structured output
order_analysis_agent = chat_client.as_agent(
@@ -6,7 +6,7 @@ This sample demonstrates an agent with function tools responding to user queries

The workflow showcases:
- **Function Tools**: Agent equipped with tools to query menu data
- **Real Azure OpenAI Agent**: Uses `AzureOpenAIChatClient` to create an agent with tools
- **Real Azure OpenAI Agent**: Uses `AzureOpenAIResponsesClient` to create an agent with tools
- **Agent Registration**: Shows how to register agents with the `WorkflowFactory`

## Tools
@@ -72,7 +72,11 @@ Session Complete

```python
# Create the agent with tools
client = AzureOpenAIChatClient(credential=AzureCliCredential())
client = AzureOpenAIResponsesClient(
project_endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
deployment_name=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
credential=AzureCliCredential(),
)
menu_agent = client.as_agent(
name="MenuAgent",
instructions="You are a helpful restaurant menu assistant...",
4 changes: 4 additions & 0 deletions python/samples/03-workflows/orchestrations/README.md
@@ -92,6 +92,10 @@ from agent_framework.orchestrations import (

These may appear in event streams (executor_invoked/executor_completed). They're analogous to concurrent's dispatcher and aggregator and can be ignored if you only care about agent activity.

## Why AzureOpenAIResponsesClient?

Orchestration samples use `AzureOpenAIResponsesClient` rather than the CRUD-style `AzureAIAgent` client. Orchestrations create agents locally and do not require server-side lifecycle management (create/update/delete). `AzureOpenAIResponsesClient` is a lightweight client that uses the underlying Agent Service V2 (Responses API) for non-CRUD-style agents, which is ideal for orchestration patterns like Sequential, Concurrent, Handoff, GroupChat, and Magentic.
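Because the samples read their configuration from the environment via `os.environ[...]`, a fail-fast check like the sketch below gives a clearer error than a `KeyError` raised inside client construction. The helper itself is illustrative, not framework API:

```python
import os

# Both variables are read via os.environ[...] in the orchestration samples.
REQUIRED_VARS = ("AZURE_AI_PROJECT_ENDPOINT", "AZURE_AI_MODEL_DEPLOYMENT_NAME")


def missing_vars(env=os.environ) -> list[str]:
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]


if missing_vars():
    print("Set these environment variables first:", ", ".join(missing_vars()))
```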

## Environment Variables

Orchestration samples that use `AzureOpenAIResponsesClient` expect:
15 changes: 11 additions & 4 deletions python/samples/03-workflows/orchestrations/concurrent_agents.py
@@ -1,10 +1,11 @@
# Copyright (c) Microsoft. All rights reserved.

import asyncio
import os
from typing import Any

from agent_framework import Message
from agent_framework.azure import AzureOpenAIChatClient
from agent_framework.azure import AzureOpenAIResponsesClient
from agent_framework.orchestrations import ConcurrentBuilder
from azure.identity import AzureCliCredential
from dotenv import load_dotenv
@@ -26,14 +27,20 @@
- Workflow completion when idle with no pending work
Prerequisites:
- Azure OpenAI access configured for AzureOpenAIChatClient (use az login + env vars)
- AZURE_AI_PROJECT_ENDPOINT must be your Azure AI Foundry Agent Service (V2) project endpoint.
- Azure OpenAI configured for AzureOpenAIResponsesClient with required environment variables.
- Authentication via azure-identity. Use AzureCliCredential and run az login before executing the sample.
- Familiarity with Workflow events (WorkflowEvent)
"""


async def main() -> None:
# 1) Create three domain agents using AzureOpenAIChatClient
client = AzureOpenAIChatClient(credential=AzureCliCredential())
# 1) Create three domain agents using AzureOpenAIResponsesClient
client = AzureOpenAIResponsesClient(
project_endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
deployment_name=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
credential=AzureCliCredential(),
)

researcher = client.as_agent(
instructions=(
@@ -1,6 +1,7 @@
# Copyright (c) Microsoft. All rights reserved.

import asyncio
import os
from typing import Any

from agent_framework import (
@@ -12,7 +13,7 @@
WorkflowContext,
handler,
)
from agent_framework.azure import AzureOpenAIChatClient
from agent_framework.azure import AzureOpenAIResponsesClient
from agent_framework.orchestrations import ConcurrentBuilder
from azure.identity import AzureCliCredential
from dotenv import load_dotenv
@@ -29,21 +30,23 @@
ConcurrentBuilder API and the default aggregator.

Demonstrates:
- Executors that create their Agent in __init__ (via AzureOpenAIChatClient)
- Executors that create their Agent in __init__ (via AzureOpenAIResponsesClient)
- A @handler that converts AgentExecutorRequest -> AgentExecutorResponse
- ConcurrentBuilder(participants=[...]) to build fan-out/fan-in
- Default aggregator returning list[Message] (one user + one assistant per agent)
- Workflow completion when all participants become idle

Prerequisites:
- Azure OpenAI configured for AzureOpenAIChatClient (az login + required env vars)
- AZURE_AI_PROJECT_ENDPOINT must be your Azure AI Foundry Agent Service (V2) project endpoint.
- Azure OpenAI configured for AzureOpenAIResponsesClient with required environment variables.
- Authentication via azure-identity. Use AzureCliCredential and run az login before executing the sample.
"""


class ResearcherExec(Executor):
agent: Agent

def __init__(self, client: AzureOpenAIChatClient, id: str = "researcher"):
def __init__(self, client: AzureOpenAIResponsesClient, id: str = "researcher"):
self.agent = client.as_agent(
instructions=(
"You're an expert market and product researcher. Given a prompt, provide concise, factual insights,"
@@ -63,7 +66,7 @@ async def run(self, request: AgentExecutorRequest, ctx: WorkflowContext[AgentExe
class MarketerExec(Executor):
agent: Agent

def __init__(self, client: AzureOpenAIChatClient, id: str = "marketer"):
def __init__(self, client: AzureOpenAIResponsesClient, id: str = "marketer"):
self.agent = client.as_agent(
instructions=(
"You're a creative marketing strategist. Craft compelling value propositions and target messaging"
@@ -83,7 +86,7 @@ async def run(self, request: AgentExecutorRequest, ctx: WorkflowContext[AgentExe
class LegalExec(Executor):
agent: Agent

def __init__(self, client: AzureOpenAIChatClient, id: str = "legal"):
def __init__(self, client: AzureOpenAIResponsesClient, id: str = "legal"):
self.agent = client.as_agent(
instructions=(
"You're a cautious legal/compliance reviewer. Highlight constraints, disclaimers, and policy concerns"
@@ -101,7 +104,11 @@ async def run(self, request: AgentExecutorRequest, ctx: WorkflowContext[AgentExe


async def main() -> None:
client = AzureOpenAIChatClient(credential=AzureCliCredential())
client = AzureOpenAIResponsesClient(
project_endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
deployment_name=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
credential=AzureCliCredential(),
)

researcher = ResearcherExec(client)
marketer = MarketerExec(client)
@@ -1,10 +1,11 @@
# Copyright (c) Microsoft. All rights reserved.

import asyncio
import os
from typing import Any

from agent_framework import Message
from agent_framework.azure import AzureOpenAIChatClient
from agent_framework.azure import AzureOpenAIResponsesClient
from agent_framework.orchestrations import ConcurrentBuilder
from azure.identity import AzureCliCredential
from dotenv import load_dotenv
@@ -17,7 +18,7 @@

Build a concurrent workflow with ConcurrentBuilder that fans out one prompt to
multiple domain agents and fans in their responses. Override the default
aggregator with a custom async callback that uses AzureOpenAIChatClient.get_response()
aggregator with a custom async callback that uses AzureOpenAIResponsesClient.get_response()
to synthesize a concise, consolidated summary from the experts' outputs.
The workflow completes when all participants become idle.

@@ -28,12 +29,18 @@
- Workflow output yielded with the synthesized summary string

Prerequisites:
- Azure OpenAI configured for AzureOpenAIChatClient (az login + required env vars)
- AZURE_AI_PROJECT_ENDPOINT must be your Azure AI Foundry Agent Service (V2) project endpoint.
- Azure OpenAI configured for AzureOpenAIResponsesClient with required environment variables.
- Authentication via azure-identity. Use AzureCliCredential and run az login before executing the sample.
"""


async def main() -> None:
client = AzureOpenAIChatClient(credential=AzureCliCredential())
client = AzureOpenAIResponsesClient(
project_endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
deployment_name=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
credential=AzureCliCredential(),
)

researcher = client.as_agent(
instructions=(
@@ -1,14 +1,15 @@
# Copyright (c) Microsoft. All rights reserved.

import asyncio
import os
from typing import cast

from agent_framework import (
Agent,
AgentResponseUpdate,
Message,
)
from agent_framework.azure import AzureOpenAIChatClient
from agent_framework.azure import AzureOpenAIResponsesClient
from agent_framework.orchestrations import GroupChatBuilder
from azure.identity import AzureCliCredential
from dotenv import load_dotenv
@@ -25,7 +26,9 @@
- Coordinates a researcher and writer agent to solve tasks collaboratively

Prerequisites:
- OpenAI environment variables configured for OpenAIChatClient
- AZURE_AI_PROJECT_ENDPOINT must be your Azure AI Foundry Agent Service (V2) project endpoint.
- Azure OpenAI configured for AzureOpenAIResponsesClient with required environment variables.
- Authentication via azure-identity. Use AzureCliCredential and run az login before executing the sample.
"""

ORCHESTRATOR_AGENT_INSTRUCTIONS = """
@@ -39,8 +42,12 @@


async def main() -> None:
# Create a chat client using Azure OpenAI and Azure CLI credentials for all agents
client = AzureOpenAIChatClient(credential=AzureCliCredential())
# Create a Responses client using Azure OpenAI and Azure CLI credentials for all agents
client = AzureOpenAIResponsesClient(
project_endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
deployment_name=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
credential=AzureCliCredential(),
)

# Orchestrator agent that manages the conversation
# Note: This agent (and the underlying chat client) must support structured outputs.
@@ -2,14 +2,15 @@

import asyncio
import logging
import os
from typing import cast

from agent_framework import (
Agent,
AgentResponseUpdate,
Message,
)
from agent_framework.azure import AzureOpenAIChatClient
from agent_framework.azure import AzureOpenAIResponsesClient
from agent_framework.orchestrations import GroupChatBuilder
from azure.identity import AzureCliCredential
from dotenv import load_dotenv
@@ -38,15 +39,21 @@
- Doctor from Scandinavia (public health, equity, societal support)
Prerequisites:
- OpenAI environment variables configured for OpenAIChatClient
- AZURE_AI_PROJECT_ENDPOINT must be your Azure AI Foundry Agent Service (V2) project endpoint.
- Azure OpenAI configured for AzureOpenAIResponsesClient with required environment variables.
- Authentication via azure-identity. Use AzureCliCredential and run az login before executing the sample.
"""

# Load environment variables from .env file
load_dotenv()


def _get_chat_client() -> AzureOpenAIChatClient:
return AzureOpenAIChatClient(credential=AzureCliCredential())
def _get_chat_client() -> AzureOpenAIResponsesClient:
return AzureOpenAIResponsesClient(
project_endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
deployment_name=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
credential=AzureCliCredential(),
)


async def main() -> None:
@@ -1,14 +1,15 @@
# Copyright (c) Microsoft. All rights reserved.

import asyncio
import os
from typing import cast

from agent_framework import (
Agent,
AgentResponseUpdate,
Message,
)
from agent_framework.azure import AzureOpenAIChatClient
from agent_framework.azure import AzureOpenAIResponsesClient
from agent_framework.orchestrations import GroupChatBuilder, GroupChatState
from azure.identity import AzureCliCredential
from dotenv import load_dotenv
@@ -24,7 +25,9 @@
- Uses a pure Python function to control speaker selection based on conversation state
Prerequisites:
- OpenAI environment variables configured for OpenAIChatClient
- AZURE_AI_PROJECT_ENDPOINT must be your Azure AI Foundry Agent Service (V2) project endpoint.
- Azure OpenAI configured for AzureOpenAIResponsesClient with required environment variables.
- Authentication via azure-identity. Use AzureCliCredential and run az login before executing the sample.
"""


@@ -36,8 +39,12 @@ def round_robin_selector(state: GroupChatState) -> str:


async def main() -> None:
# Create a chat client using Azure OpenAI and Azure CLI credentials for all agents
client = AzureOpenAIChatClient(credential=AzureCliCredential())
# Create a Responses client using Azure OpenAI and Azure CLI credentials for all agents
client = AzureOpenAIResponsesClient(
project_endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
deployment_name=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
credential=AzureCliCredential(),
)

# Participant agents
expert = Agent(