The Agent Framework provides a unified interface for interacting with various AI models and services. This tutorial demonstrates how to connect to four different AI solutions—Azure OpenAI Service, GitHub Models, Azure AI Foundry, and Foundry Local—using the framework's implementations in both .NET (C#) and Python.
Before running any of the examples, you need to configure your environment variables. The framework uses these variables to connect to the different AI endpoints. Create a .env file in the root of your project and populate it with the necessary credentials and endpoints.
You can use the .env.examples file as a template:
```
# For GitHub Models
GITHUB_TOKEN="Your GitHub Models Token"
GITHUB_ENDPOINT="Your GitHub Models Endpoint"
GITHUB_MODEL_ID="Your GitHub Model ID"

# For Azure OpenAI Service
AZURE_OPENAI_ENDPOINT="Your Azure OpenAI Endpoint"
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME="Your Azure OpenAI Model Deployment Name"

# For Foundry Local
FOUNDRYLOCAL_ENDPOINT="http://localhost:5272/v1"
FOUNDRYLOCAL_MODEL_DEPLOYMENT_NAME="Your Local Model Name (e.g., Qwen3-0.6b-cpu)"

# For Azure AI Foundry (Azure AI Studio)
AZURE_AI_PROJECT_ENDPOINT="Your Azure AI Foundry Project Endpoint"
AZURE_AI_MODEL_DEPLOYMENT_NAME="Your Azure AI Foundry Project Deployment Name"
```

This section shows how to use the Agent Framework to connect to models deployed in Azure OpenAI Service.
The framework uses a dedicated AzureOpenAIClient to handle authentication and communication with the Azure OpenAI endpoint.
```mermaid
graph TD
    A[Your Application] --> B{Agent Framework};
    B --> C[AzureOpenAIClient];
    C --> D[Azure OpenAI Service Endpoint];
```
The .NET example uses the Azure.AI.OpenAI and Microsoft.Agents.AI libraries to create a client and run the agent.
- Dependencies: NuGet packages like `Azure.AI.OpenAI`, `Azure.Identity`, and the local `Microsoft.Agents.AI.dll` are required.
- Client Initialization: The code initializes an `AzureOpenAIClient`, providing the service endpoint and credentials (e.g., `AzureCliCredential`).
- Agent Creation: The `CreateAIAgent` extension method is called on the chat client to get an agent instance with system instructions.
- Execution: The agent is run using `RunAsync` or `RunStreamingAsync`.
Key Code Snippet:
```csharp
// Load endpoint and model ID from environment variables
var aoai_endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");
var aoai_model_id = Environment.GetEnvironmentVariable("AZURE_OPENAI_CHAT_DEPLOYMENT_NAME");

// Create the agent
AIAgent agent = new AzureOpenAIClient(
        new Uri(aoai_endpoint),
        new AzureCliCredential())
    .GetChatClient(aoai_model_id)
    .CreateAIAgent("You are a helpful assistant.");

// Run the agent and get a response
Console.WriteLine(await agent.RunAsync("Write a haiku about Agent Framework."));
```

The Python example uses the `agent_framework.azure` library to achieve the same result.
- Dependencies: The key libraries are `agent_framework` and `azure-identity`.
- Agent Creation: The `AzureOpenAIChatClient` is instantiated with a credential object. The `create_agent` method is then called to configure the agent with its instructions.
- Execution: The `agent.run_stream()` method is used to invoke the agent and stream the response.
Key Code Snippet:
```python
from azure.identity import AzureCliCredential
from agent_framework.azure import AzureOpenAIChatClient

# Create an agent with Azure CLI credentials
agent = AzureOpenAIChatClient(credential=AzureCliCredential()).create_agent(
    instructions="You are a helpful weather agent."
)

query = "Write a haiku about Agent Framework."

# Stream the agent's response
async for chunk in agent.run_stream(query):
    if chunk.text:
        print(chunk.text, end="", flush=True)
```

GitHub Models provides an OpenAI-compatible API endpoint. This allows the Agent Framework to use its standard `OpenAIClient` to connect.
The connection is made to the GitHub Models endpoint, which behaves like an OpenAI API.
```mermaid
graph TD
    A[Your Application] --> B{Agent Framework};
    B --> C[OpenAIClient];
    C --> D[GitHub Models Endpoint];
```
The approach is similar to connecting to any OpenAI-compatible service.
- Dependencies: This uses the `Microsoft.Extensions.AI.OpenAI` package and local Agent Framework assemblies.
- Client Initialization: An `OpenAIClient` is created. The `OpenAIClientOptions` are configured with the GitHub Models endpoint URL, and an `ApiKeyCredential` is used to pass the GitHub token.
- Agent Creation: The `CreateAIAgent` method is used to instantiate the agent with its system prompt.
Key Code Snippet:
```csharp
// Load connection details from environment variables
var github_endpoint = Environment.GetEnvironmentVariable("GITHUB_ENDPOINT");
var github_model_id = Environment.GetEnvironmentVariable("GITHUB_MODEL_ID");
var github_token = Environment.GetEnvironmentVariable("GITHUB_TOKEN");

// Configure client options with the endpoint
var openAIOptions = new OpenAIClientOptions()
{
    Endpoint = new Uri(github_endpoint)
};

// Create the agent with an API key credential
AIAgent agent = new OpenAIClient(new ApiKeyCredential(github_token), openAIOptions)
    .GetChatClient(github_model_id)
    .CreateAIAgent(instructions: "You are a helpful assistant.");

// Run the agent
Console.WriteLine(await agent.RunAsync("Write a haiku about Agent Framework."));
```

The Python example uses the `OpenAIChatClient` from the framework.
- Dependencies: The `agent_framework.openai` and `python-dotenv` libraries are used.
- Client Initialization: The `OpenAIChatClient` is initialized with the `base_url` pointing to the GitHub endpoint and the `api_key` set to the GitHub token.
- Execution: A list of `ChatMessage` objects is created and passed to the `client.get_response` method to get a completion.
Key Code Snippet:
```python
import os
from dotenv import load_dotenv
from agent_framework import ChatMessage, Role
from agent_framework.openai import OpenAIChatClient

load_dotenv()

# Initialize the client with GitHub endpoint and token
client = OpenAIChatClient(
    base_url=os.environ.get("GITHUB_ENDPOINT"),
    api_key=os.environ.get("GITHUB_TOKEN"),
    ai_model_id=os.environ.get("GITHUB_MODEL_ID")
)

# Prepare the messages and get a response
messages = [
    ChatMessage(role=Role.SYSTEM, text="You are a helpful assistant."),
    ChatMessage(role=Role.USER, text="Write a haiku about Agent Framework.")
]
response = await client.get_response(messages)
print(response.messages[0].text)
```

Azure AI Foundry allows you to create and manage persistent agents. The Agent Framework provides a client to interact with these stateful agents.
The framework communicates with the Azure AI Project endpoint to interact with specific, persistent agents by name or ID.
```mermaid
graph TD
    A[Your Application] --> B{Agent Framework};
    B --> C[PersistentAgentsClient];
    C --> D[Azure AI Project Endpoint];
```
This example uses the Azure.AI.Agents.Persistent client library.
- Dependencies: The primary dependency is the `Azure.AI.Agents.Persistent` NuGet package.
- Client Initialization: A `PersistentAgentsClient` is created using the Azure AI Project endpoint and a credential.
- Agent Creation/Retrieval: The code first creates a new persistent agent with `CreateAgentAsync`. The resulting agent metadata is then used to get a runnable `AIAgent` instance with `GetAIAgentAsync`.
- Execution: Agents in AI Foundry are stateful and operate on threads. A new thread is created with `agent.GetNewThread()`, and the `RunAsync` method is called with both the prompt and the thread.
Key Code Snippet:
```csharp
// Load endpoint and model ID from environment variables
var azure_foundry_endpoint = Environment.GetEnvironmentVariable("AZURE_AI_PROJECT_ENDPOINT");
var azure_foundry_model_id = Environment.GetEnvironmentVariable("AZURE_AI_MODEL_DEPLOYMENT_NAME");

// Create the client
var persistentAgentsClient = new PersistentAgentsClient(azure_foundry_endpoint, new AzureCliCredential());

// Create a new persistent agent
var agentMetadata = await persistentAgentsClient.Administration.CreateAgentAsync(
    model: azure_foundry_model_id,
    name: "Agent-Framework",
    instructions: "You are an AI assistant that helps people find information.");

// Get the agent instance and a new thread for conversation
AIAgent agent = await persistentAgentsClient.GetAIAgentAsync(agentMetadata.Value.Id);
AgentThread thread = agent.GetNewThread();

// Run the agent within the context of the thread
Console.WriteLine(await agent.RunAsync("Write a haiku about Agent Framework", thread));
```

The Python example uses `AzureAIAgentClient` to interact with persistent agents.
- Dependencies: `agent_framework.azure` and `azure-identity` are required.
- Client Initialization: The `AzureAIAgentClient` is initialized with an async credential.
- Agent and Thread Management: Inside an `async with` block, `client.create_agent` defines a persistent agent. A new conversational context is created with `agent.get_new_thread()`.
- Execution: `agent.run` is called with the query and the thread object.
Key Code Snippet:
```python
from azure.identity.aio import AzureCliCredential
from agent_framework.azure import AzureAIAgentClient

async with AzureCliCredential() as credential, AzureAIAgentClient(async_credential=credential) as client:
    # Create a persistent agent
    agent = client.create_agent(
        name="AgentDemo",
        instructions="You are a helpful assistant."
    )

    # Start a new conversation thread
    thread = agent.get_new_thread()
    query = "Write a haiku about Agent Framework."

    # Get the agent's response
    result = await agent.run(query, thread=thread)
    print(f"Agent: {result}\n")
```

Foundry Local serves AI models via a local, OpenAI-compatible API. This makes it easy to test agents with local models without needing cloud resources.
The connection pattern is identical to that of GitHub Models, but the endpoint is a local server address.
```mermaid
graph TD
    A[Your Application] --> B{Agent Framework};
    B --> C[OpenAIClient];
    C --> D["Foundry Local Endpoint (e.g., http://localhost:5272/v1)"];
```
The code is nearly identical to the GitHub Models example, with changes only to the endpoint, model ID, and API key.
- Client Initialization: The `OpenAIClient` is configured to point to the `FOUNDRYLOCAL_ENDPOINT` (e.g., `http://localhost:5272/v1`).
- Authentication: Since Foundry Local does not require authentication by default, a dummy API key like `"nokey"` is used.
Key Code Snippet:
```csharp
// Get local endpoint and model ID
var foundrylocal_endpoint = Environment.GetEnvironmentVariable("FOUNDRYLOCAL_ENDPOINT");
var foundrylocal_model_id = Environment.GetEnvironmentVariable("FOUNDRYLOCAL_MODEL_DEPLOYMENT_NAME");

// Configure options to point to the local server
var openAIOptions = new OpenAIClientOptions()
{
    Endpoint = new Uri(foundrylocal_endpoint)
};

// Create the agent with a dummy API key
AIAgent agent = new OpenAIClient(new ApiKeyCredential("nokey"), openAIOptions)
    .GetChatClient(foundrylocal_model_id)
    .CreateAIAgent(instructions: "You are a helpful assistant.");

// Run the agent
Console.WriteLine(await agent.RunAsync("Can you introduce yourself?"));
```

The Python code is also very similar to the GitHub example, differing only in the client's connection parameters.
- Client Initialization: The `OpenAIChatClient` is initialized with the `base_url` set to the `FOUNDRYLOCAL_ENDPOINT` from the environment variables.
- Authentication: The `api_key` is set to `"nokey"`.
Key Code Snippet:
```python
import os
from dotenv import load_dotenv
from agent_framework import ChatMessage, Role
from agent_framework.openai import OpenAIChatClient

load_dotenv()

# Initialize client for Foundry Local
client = OpenAIChatClient(
    base_url=os.environ.get("FOUNDRYLOCAL_ENDPOINT"),
    api_key="nokey",
    ai_model_id=os.environ.get("FOUNDRYLOCAL_MODEL_DEPLOYMENT_NAME")
)

# Prepare messages and execute
messages = [
    ChatMessage(role=Role.SYSTEM, text="You are a helpful assistant."),
    ChatMessage(role=Role.USER, text="Can you introduce yourself?")
]
response = await client.get_response(messages)
print(response.messages[0].text)
```
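Because GitHub Models and Foundry Local both expose OpenAI-compatible endpoints, the only thing that changes between the two Python examples is the trio of connection parameters. A small helper (hypothetical, not part of the framework) can centralize that choice:

```python
import os

def connection_settings(provider, env=None):
    """Return OpenAI-compatible connection parameters for a named provider.

    Hypothetical helper: the keys mirror the OpenAIChatClient arguments used
    in the examples above (base_url, api_key, ai_model_id).
    """
    env = os.environ if env is None else env
    if provider == "github":
        return {
            "base_url": env.get("GITHUB_ENDPOINT"),
            "api_key": env.get("GITHUB_TOKEN"),
            "ai_model_id": env.get("GITHUB_MODEL_ID"),
        }
    if provider == "foundry-local":
        return {
            "base_url": env.get("FOUNDRYLOCAL_ENDPOINT"),
            "api_key": "nokey",  # Foundry Local needs no real key by default
            "ai_model_id": env.get("FOUNDRYLOCAL_MODEL_DEPLOYMENT_NAME"),
        }
    raise ValueError(f"Unknown provider: {provider!r}")
```

The returned dict can then be splatted into the client, e.g. `OpenAIChatClient(**connection_settings("github"))`, so switching between a cloud model and a local one becomes a one-word change.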