
Commit 0cd4372

Merge pull request #39 from Azure-Samples/properasync
Properly move to azure.identity.aio async credential
2 parents ad92d05 + d8b62ff commit 0cd4372


44 files changed, +904 -1064 lines changed

.env.sample

Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@
+# API_HOST can be either azure, ollama, openai, or github:
+API_HOST=azure
+# Configure for Azure:
+AZURE_OPENAI_ENDPOINT=https://YOUR-AZURE-OPENAI-SERVICE-NAME.openai.azure.com/openai/v1
+AZURE_OPENAI_CHAT_DEPLOYMENT=YOUR-AZURE-DEPLOYMENT-NAME
+# Configure for Ollama:
+OLLAMA_ENDPOINT=http://localhost:11434/v1
+OLLAMA_MODEL=llama3.1
+# Configure for OpenAI.com:
+OPENAI_API_KEY=YOUR-OPENAI-KEY
+OPENAI_MODEL=gpt-3.5-turbo
+# Configure for GitHub models: (GITHUB_TOKEN already exists inside Codespaces)
+GITHUB_MODEL=gpt-4o
+GITHUB_TOKEN=YOUR-GITHUB-PERSONAL-ACCESS-TOKEN
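For orientation only (this snippet is not part of the commit): the example scripts load these variables with python-dotenv and branch on `API_HOST`. A minimal sketch using the plain `openai` package, covering just the `openai` and `ollama` hosts since those two are fully described by the variables above:

```python
# Illustrative sketch, not repository code: read .env and pick an OpenAI-compatible endpoint.
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv(override=True)
API_HOST = os.getenv("API_HOST", "github")

if API_HOST == "ollama":
    # Ollama exposes an OpenAI-compatible API locally; the key is required by the SDK but unused.
    client = OpenAI(base_url=os.environ["OLLAMA_ENDPOINT"], api_key="none")
    model = os.getenv("OLLAMA_MODEL", "llama3.1")
elif API_HOST == "openai":
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    model = os.getenv("OPENAI_MODEL", "gpt-3.5-turbo")
else:
    raise ValueError(f"This sketch only handles openai and ollama, got API_HOST={API_HOST}")

response = client.chat.completions.create(model=model, messages=[{"role": "user", "content": "Say hello."}])
print(response.choices[0].message.content)
```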

README.md

Lines changed: 117 additions & 65 deletions
@@ -22,9 +22,12 @@ This repository provides examples of many popular Python AI agent frameworks usi
 * [GitHub Codespaces](#github-codespaces)
 * [VS Code Dev Containers](#vs-code-dev-containers)
 * [Local environment](#local-environment)
+* [Configuring model providers](#configuring-model-providers)
+* [Using GitHub Models](#using-github-models)
+* [Using Azure OpenAI models](#using-azure-openai-models)
+* [Using OpenAI.com models](#using-openaicom-models)
+* [Using Ollama models](#using-ollama-models)
 * [Running the Python examples](#running-the-python-examples)
-* [Configuring GitHub Models](#configuring-github-models)
-* [Provisioning Azure AI resources](#provisioning-azure-ai-resources)
 * [Resources](#resources)
 
 ## Getting started
@@ -66,14 +69,14 @@ A related option is VS Code Dev Containers, which will open the project in your
 
 ```shell
 git clone https://github.com/Azure-Samples/python-ai-agent-frameworks-demos
-cd python-ai-agents-demos
+cd python-ai-agent-frameworks-demos
 ```
 
 3. Set up a virtual environment:
 
 ```shell
-python -m venv venv
-source venv/bin/activate # On Windows: venv\Scripts\activate
+python -m venv .venv
+source .venv/bin/activate # On Windows: .venv\Scripts\activate
 ```
 
 4. Install the requirements:
@@ -82,63 +85,11 @@ A related option is VS Code Dev Containers, which will open the project in your
 pip install -r requirements.txt
 ```
 
-## Running the Python examples
-
-You can run the examples in this repository by executing the scripts in the `examples` directory. Each script demonstrates a different AI agent pattern or framework.
-
-### Microsoft Agent Framework
-
-| Example | Description |
-| ------- | ----------- |
-| [agentframework_basic.py](examples/agentframework_basic.py) | Uses Agent Framework to build a basic informational agent. |
-| [agentframework_tool.py](examples/agentframework_tool.py) | Uses Agent Framework to build an agent with a single weather tool. |
-| [agentframework_tools.py](examples/agentframework_tools.py) | Uses Agent Framework to build a weekend planning agent with multiple tools. |
-| [agentframework_supervisor.py](examples/agentframework_supervisor.py) | Uses Agent Framework with a supervisor orchestrating activity and recipe sub-agents. |
-
-### Langchain v1 and LangGraph
-
-| Example | Description |
-| ------- | ----------- |
-| [langchainv1_basic.py](examples/langchainv1_basic.py) | Uses LangChain v1 to build a basic informational agent. |
-| [langchainv1_tool.py](examples/langchainv1_tool.py) | Uses LangChain v1 to build an agent with a single weather tool. |
-| [langchainv1_tools.py](examples/langchainv1_tools.py) | Uses LangChain v1 to build a weekend planning agent with multiple tools. |
-| [langchainv1_supervisor.py](examples/langchainv1_supervisor.py) | Uses LangChain v1 with a supervisor orchestrating activity and recipe sub-agents. |
-| [langchainv1_quickstart.py](examples/langchainv1_quickstart.py) | Uses LangChain v1 to build an assistant with tool calling, structured output, and memory. Based off official Quickstart docs. |
-| [langchainv1_mcp_github.py](examples/langchainv1_mcp_github.py) | Uses Langchain v1 agent with GitHub MCP server to triage repository issues. |
-| [langchainv1_mcp_http.py](examples/langchainv1_mcp_github.py) | Uses Langchain v1 agent with tools from local MCP HTTP server. |
-| [langgraph_agent.py](examples/langgraph_agent.py) | Builds LangGraph graph for an agent to play songs. |
-| [langgraph_mcp.py](examples/langgraph_mcp.py) | Builds Langgraph graph that uses tools from MCP HTTP server. |
-
-### OpenAI and OpenAI-Agents
-
-| Example | Description |
-| ------- | ----------- |
-| [openai_githubmodels.py](examples/openai_githubmodels.py) | Basic setup for using GitHub models with the OpenAI API. |
-| [openai_functioncalling.py](examples/openai_functioncalling.py) | Uses OpenAI Function Calling to call functions based on LLM output. |
-| [openai_agents_basic.py](examples/openai_agents_basic.py) | Uses the OpenAI Agents framework to build a single agent. |
-| [openai_agents_handoffs.py](examples/openai_agents_handoffs.py) | Uses the OpenAI Agents framework to handoff between several agents with tools. |
-| [openai_agents_tools.py](examples/openai_agents_tools.py) | Uses the OpenAI Agents framework to build a weekend planner with tools. |
-| [openai_agents_mcp_http.py](examples/openai_agents_mcp_http.py) | Uses the OpenAI Agents framework with an MCP HTTP server (travel planning tools). |
-
-### PydanticAI
-
-| Example | Description |
-| ------- | ----------- |
-| [pydanticai_basic.py](examples/pydanticai_basic.py) | Uses PydanticAI to build a basic single agent (Spanish tutor). |
-| [pydanticai_multiagent.py](examples/pydanticai_multiagent.py) | Uses PydanticAI to build a two-agent sequential workflow (flight + seat selection). |
-| [pydanticai_graph.py](examples/pydanticai_graph.py) | Uses PydanticAI with pydantic-graph to build a small question/answer evaluation graph. |
-| [pydanticai_tools.py](examples/pydanticai_tools.py) | Uses PydanticAI with multiple Python tools for weekend activity planning. |
-| [pydanticai_mcp_http.py](examples/pydanticai_mcp_http.py) | Uses PydanticAI with an MCP HTTP server toolset for travel planning (hotel search). |
-| [pydanticai_mcp_github.py](examples/pydanticai_mcp_github.py) | Uses PydanticAI with an MCP GitHub server toolset to triage repository issues. |
-
-### Other frameworks
+## Configuring model providers
 
-| Example | Description |
-| ------- | ----------- |
-| [llamaindex.py](examples/llamaindex.py) | Uses LlamaIndex to build a ReAct agent for RAG on multiple indexes. |
-| [smolagents_codeagent.py](examples/smolagents_codeagent.py) | Uses SmolAgents to build a question-answering agent that can search the web and run code. |
+These examples can be run with Azure OpenAI account, OpenAI.com, local Ollama server, or GitHub models, depending on the environment variables you set. All the scripts reference the environment variables from a `.env` file, and an example `.env.sample` file is provided. Host-specific instructions are below.
 
-## Configuring GitHub Models
+## Using GitHub Models
 
 If you open this repository in GitHub Codespaces, you can run the scripts for free using GitHub Models without any additional steps, as your `GITHUB_TOKEN` is already configured in the Codespaces environment.
 
@@ -158,9 +109,9 @@ If you want to run the scripts locally, you need to set up the `GITHUB_TOKEN` en
 export GITHUB_TOKEN=your_personal_access_token
 ```
 
-10. Optionally, you can use a model other than "gpt-4o" by setting the `GITHUB_MODEL` environment variable. Use a model that supports function calling, such as: `gpt-4o`, `gpt-4o-mini`, `o3-mini`, `AI21-Jamba-1.5-Large`, `AI21-Jamba-1.5-Mini`, `Codestral-2501`, `Cohere-command-r`, `Ministral-3B`, `Mistral-Large-2411`, `Mistral-Nemo`, `Mistral-small`
+10. Optionally, you can use a model other than "gpt-4o" by setting the `GITHUB_MODEL` environment variable. Use a model that supports function calling, such as: `gpt-5`, `gpt-5-mini`, `gpt-4o`, `gpt-4o-mini`, `o3-mini`, `AI21-Jamba-1.5-Large`, `AI21-Jamba-1.5-Mini`, `Codestral-2501`, `Cohere-command-r`, `Ministral-3B`, `Mistral-Large-2411`, `Mistral-Nemo`, `Mistral-small`
 
-## Provisioning Azure AI resources
+## Using Azure OpenAI models
 
 You can run all examples in this repository using GitHub Models. If you want to run the examples using models from Azure OpenAI instead, you need to provision the Azure AI resources, which will incur costs.
 
@@ -195,12 +146,113 @@ This project includes infrastructure as code (IaC) to provision Azure OpenAI dep
 azd down
 ```
 
+## Using OpenAI.com models
+
+1. Create a `.env` file by copying the `.env.sample` file and updating it with your OpenAI API key and desired model name.
+
+```bash
+cp .env.sample .env
+```
+
+2. Update the `.env` file with your OpenAI API key and desired model name:
+
+```bash
+API_HOST=openai
+OPENAI_API_KEY=your_openai_api_key
+OPENAI_MODEL=gpt-4o-mini
+```
+
+## Using Ollama models
+
+1. Install [Ollama](https://ollama.com/) and follow the instructions to set it up on your local machine.
+2. Pull a model, for example:
+
+```shell
+ollama pull qwen3:30b
+```
+
+Note that most models do not support tool calling to the extent required by agents frameworks, so choose a model accordingly.
+
+3. Create a `.env` file by copying the `.env.sample` file and updating it with your Ollama endpoint and model name.
+
+```bash
+cp .env.sample .env
+```
+
+4. Update the `.env` file with your Ollama endpoint and model name (any model you've pulled):
+
+```bash
+API_HOST=ollama
+OLLAMA_ENDPOINT=http://localhost:11434/v1
+OLLAMA_MODEL=llama3.1
+```
+
+## Running the Python examples
+
+You can run the examples in this repository by executing the scripts in the `examples` directory. Each script demonstrates a different AI agent pattern or framework.
+
+### Microsoft Agent Framework
+
+| Example | Description |
+| ------- | ----------- |
+| [agentframework_basic.py](examples/agentframework_basic.py) | Uses Agent Framework to build a basic informational agent. |
+| [agentframework_tool.py](examples/agentframework_tool.py) | Uses Agent Framework to build an agent with a single weather tool. |
+| [agentframework_tools.py](examples/agentframework_tools.py) | Uses Agent Framework to build a weekend planning agent with multiple tools. |
+| [agentframework_supervisor.py](examples/agentframework_supervisor.py) | Uses Agent Framework with a supervisor orchestrating activity and recipe sub-agents. |
+| [agentframework_magenticone.py](examples/agentframework_magenticone.py) | Uses Agent Framework to build a MagenticOne agent. |
+| [agentframework_workflow.py](examples/agentframework_workflow.py) | Uses Agent Framework to build a workflow-based agent. |
+
+### Langchain v1 and LangGraph
+
+| Example | Description |
+| ------- | ----------- |
+| [langchainv1_basic.py](examples/langchainv1_basic.py) | Uses LangChain v1 to build a basic informational agent. |
+| [langchainv1_tool.py](examples/langchainv1_tool.py) | Uses LangChain v1 to build an agent with a single weather tool. |
+| [langchainv1_tools.py](examples/langchainv1_tools.py) | Uses LangChain v1 to build a weekend planning agent with multiple tools. |
+| [langchainv1_supervisor.py](examples/langchainv1_supervisor.py) | Uses LangChain v1 with a supervisor orchestrating activity and recipe sub-agents. |
+| [langchainv1_quickstart.py](examples/langchainv1_quickstart.py) | Uses LangChain v1 to build an assistant with tool calling, structured output, and memory. Based off official Quickstart docs. |
+| [langchainv1_mcp_github.py](examples/langchainv1_mcp_github.py) | Uses Langchain v1 agent with GitHub MCP server to triage repository issues. |
+| [langchainv1_mcp_http.py](examples/langchainv1_mcp_http.py) | Uses Langchain v1 agent with tools from local MCP HTTP server. |
+| [langgraph_agent.py](examples/langgraph_agent.py) | Builds LangGraph graph for an agent to play songs. |
+| [langgraph_mcp.py](examples/langgraph_mcp.py) | Builds Langgraph graph that uses tools from MCP HTTP server. |
+
+### OpenAI and OpenAI-Agents
+
+| Example | Description |
+| ------- | ----------- |
+| [openai_githubmodels.py](examples/openai_githubmodels.py) | Basic setup for using GitHub models with the OpenAI API. |
+| [openai_functioncalling.py](examples/openai_functioncalling.py) | Uses OpenAI Function Calling to call functions based on LLM output. |
+| [openai_agents_basic.py](examples/openai_agents_basic.py) | Uses the OpenAI Agents framework to build a single agent. |
+| [openai_agents_handoffs.py](examples/openai_agents_handoffs.py) | Uses the OpenAI Agents framework to handoff between several agents with tools. |
+| [openai_agents_tools.py](examples/openai_agents_tools.py) | Uses the OpenAI Agents framework to build a weekend planner with tools. |
+| [openai_agents_mcp_http.py](examples/openai_agents_mcp_http.py) | Uses the OpenAI Agents framework with an MCP HTTP server (travel planning tools). |
+
+### PydanticAI
+
+| Example | Description |
+| ------- | ----------- |
+| [pydanticai_basic.py](examples/pydanticai_basic.py) | Uses PydanticAI to build a basic single agent (Spanish tutor). |
+| [pydanticai_multiagent.py](examples/pydanticai_multiagent.py) | Uses PydanticAI to build a two-agent sequential workflow (flight + seat selection). |
+| [pydanticai_supervisor.py](examples/pydanticai_supervisor.py) | Uses PydanticAI with a supervisor orchestrating multiple agents. |
+| [pydanticai_graph.py](examples/pydanticai_graph.py) | Uses PydanticAI with pydantic-graph to build a small question/answer evaluation graph. |
+| [pydanticai_tools.py](examples/pydanticai_tools.py) | Uses PydanticAI with multiple Python tools for weekend activity planning. |
+| [pydanticai_mcp_http.py](examples/pydanticai_mcp_http.py) | Uses PydanticAI with an MCP HTTP server toolset for travel planning (hotel search). |
+| [pydanticai_mcp_github.py](examples/pydanticai_mcp_github.py) | Uses PydanticAI with an MCP GitHub server toolset to triage repository issues. |
+
+### Other frameworks
+
+| Example | Description |
+| ------- | ----------- |
+| [llamaindex.py](examples/llamaindex.py) | Uses LlamaIndex to build a ReAct agent for RAG on multiple indexes. |
+| [smolagents_codeagent.py](examples/smolagents_codeagent.py) | Uses SmolAgents to build a question-answering agent that can search the web and run code. |
+
 ## Resources
 
-* [LangGraph Documentation](https://langchain-ai.github.io/langgraph/tutorials/introduction/)
+* [Agent Framework Documentation](https://learn.microsoft.com/agent-framework/)
+* [Langchain v1 Documentation](https://docs.langchain.com/oss/python/langchain/overview)
+* [LangGraph Documentation](https://docs.langchain.com/oss/python/langgraph/overview)
 * [LlamaIndex Documentation](https://docs.llamaindex.ai/en/latest/)
 * [OpenAI Agents Documentation](https://openai.github.io/openai-agents-python/)
 * [OpenAI Function Calling Documentation](https://platform.openai.com/docs/guides/function-calling?api-mode=chat)
-* [PydanticAI Documentation](https://ai.pydantic.dev/multi-agent-applications/)
-* [Semantic Kernel Documentation](https://learn.microsoft.com/semantic-kernel/overview/)
+* [Pydantic AI Documentation](https://ai.pydantic.dev/multi-agent-applications/)
 * [SmolAgents Documentation](https://huggingface.co/docs/smolagents/index)

examples/agentframework_basic.py

Lines changed: 12 additions & 6 deletions
@@ -3,19 +3,22 @@
 
 from agent_framework import ChatAgent
 from agent_framework.openai import OpenAIChatClient
-from azure.identity import DefaultAzureCredential
-from azure.identity.aio import get_bearer_token_provider
+from azure.identity.aio import DefaultAzureCredential, get_bearer_token_provider
 from dotenv import load_dotenv
 from rich import print
 
+# Configure OpenAI client based on environment
 load_dotenv(override=True)
 API_HOST = os.getenv("API_HOST", "github")
 
+async_credential = None
 if API_HOST == "azure":
+    async_credential = DefaultAzureCredential()
+    token_provider = get_bearer_token_provider(async_credential, "https://cognitiveservices.azure.com/.default")
     client = OpenAIChatClient(
-        base_url=os.environ.get("AZURE_OPENAI_ENDPOINT") + "/openai/v1/",
-        api_key=get_bearer_token_provider(DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"),
-        model_id=os.environ.get("AZURE_OPENAI_CHAT_DEPLOYMENT"),
+        base_url=f"{os.environ['AZURE_OPENAI_ENDPOINT']}/openai/v1/",
+        api_key=token_provider,
+        model_id=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
     )
 elif API_HOST == "github":
     client = OpenAIChatClient(
@@ -30,7 +33,7 @@
         model_id=os.environ.get("OLLAMA_MODEL", "llama3.1:latest"),
     )
 else:
-    client = OpenAIChatClient(api_key=os.environ.get("OPENAI_API_KEY"), model_id=os.environ.get("OPENAI_MODEL", "gpt-4o"))
+    client = OpenAIChatClient(api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4o"))
 
 agent = ChatAgent(chat_client=client, instructions="You're an informational agent. Answer questions cheerfully.")
 
@@ -39,6 +42,9 @@ async def main():
     response = await agent.run("Whats weather today in San Francisco?")
     print(response.text)
 
+    if async_credential:
+        await async_credential.close()
+
 
 if __name__ == "__main__":
     asyncio.run(main())
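This file shows the core of the commit: the old code paired the synchronous `azure.identity.DefaultAzureCredential` with the async `get_bearer_token_provider`, while the new code takes both from `azure.identity.aio` and closes the credential before `main()` returns. As a standalone sketch of the same pattern (assuming `agent-framework`, `azure-identity`, and an authenticated Azure sign-in such as `az login`; here the credential is scoped inside `main()` with try/finally instead of at module level, a small variation on the diff):

```python
# Sketch of the async Azure OpenAI pattern from this commit, with the credential scoped to main().
import asyncio
import os

from agent_framework import ChatAgent
from agent_framework.openai import OpenAIChatClient
from azure.identity.aio import DefaultAzureCredential, get_bearer_token_provider


async def main():
    async_credential = DefaultAzureCredential()
    # The provider is an async callable that fetches Entra ID tokens on demand for each request.
    token_provider = get_bearer_token_provider(async_credential, "https://cognitiveservices.azure.com/.default")
    client = OpenAIChatClient(
        base_url=f"{os.environ['AZURE_OPENAI_ENDPOINT']}/openai/v1/",
        api_key=token_provider,
        model_id=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
    )
    agent = ChatAgent(chat_client=client, instructions="You're an informational agent. Answer questions cheerfully.")
    try:
        response = await agent.run("What's the weather today in San Francisco?")
        print(response.text)
    finally:
        # Close the async credential so its underlying HTTP session is released cleanly.
        await async_credential.close()


if __name__ == "__main__":
    asyncio.run(main())
```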

examples/agentframework_magenticone.py

Lines changed: 16 additions & 10 deletions
@@ -13,24 +13,25 @@
     MagenticOrchestratorMessageEvent,
     WorkflowOutputEvent,
 )
-from agent_framework.azure import AzureOpenAIChatClient
 from agent_framework.openai import OpenAIChatClient
-from azure.identity import DefaultAzureCredential
+from azure.identity.aio import DefaultAzureCredential, get_bearer_token_provider
 from dotenv import load_dotenv
 from rich.console import Console
 from rich.markdown import Markdown
 from rich.panel import Panel
 
-# Setup the client to use either Azure OpenAI or GitHub Models
+# Configure OpenAI client based on environment
 load_dotenv(override=True)
 API_HOST = os.getenv("API_HOST", "github")
 
+async_credential = None
 if API_HOST == "azure":
-    client = AzureOpenAIChatClient(
-        credential=DefaultAzureCredential(),
-        deployment_name=os.environ.get("AZURE_OPENAI_CHAT_DEPLOYMENT"),
-        endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
-        api_version=os.environ.get("AZURE_OPENAI_VERSION"),
+    async_credential = DefaultAzureCredential()
+    token_provider = get_bearer_token_provider(async_credential, "https://cognitiveservices.azure.com/.default")
+    client = OpenAIChatClient(
+        base_url=f"{os.environ['AZURE_OPENAI_ENDPOINT']}/openai/v1/",
+        api_key=token_provider,
+        model_id=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
     )
 elif API_HOST == "github":
     client = OpenAIChatClient(
@@ -45,15 +46,18 @@
         model_id=os.environ.get("OLLAMA_MODEL", "llama3.1:latest"),
     )
 else:
-    client = OpenAIChatClient(api_key=os.environ.get("OPENAI_API_KEY"), model_id=os.environ.get("OPENAI_MODEL", "gpt-4o"))
+    client = OpenAIChatClient(api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4o"))
 
 # Initialize rich console
 console = Console()
 
 # Create the agents
 local_agent = ChatAgent(
     chat_client=client,
-    instructions=("You are a helpful assistant that can suggest authentic and interesting local activities " "or places to visit for a user and can utilize any context information provided."),
+    instructions=(
+        "You are a helpful assistant that can suggest authentic and interesting local activities "
+        "or places to visit for a user and can utilize any context information provided."
+    ),
     name="local_agent",
     description="A local assistant that can suggest local activities or places to visit.",
 )
@@ -135,6 +139,8 @@ async def main():
             padding=(1, 2),
         )
     )
+    if async_credential:
+        await async_credential.close()
 
 
 if __name__ == "__main__":
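Both converted scripts keep the credential at module level and close it explicitly at the end of `main()`. Where the credential can instead be scoped to `main()`, an `async with` block is a common alternative that guarantees the close even if the run raises. The sketch below is not repository code; it assumes azure-identity's aio `get_bearer_token_provider` (available in recent releases) and also shows that the provider itself is awaitable, which is why it can stand in for an API key:

```python
# Sketch of an async-with variation on the cleanup done in this commit; not repository code.
import asyncio

from azure.identity.aio import DefaultAzureCredential, get_bearer_token_provider


async def main():
    # The context manager closes the credential automatically, even if an exception is raised.
    async with DefaultAzureCredential() as credential:
        token_provider = get_bearer_token_provider(credential, "https://cognitiveservices.azure.com/.default")
        token = await token_provider()  # the awaitable provider returns a bearer token string
        print(f"Fetched an Entra ID token ({len(token)} characters)")


if __name__ == "__main__":
    asyncio.run(main())
```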
