Azure AI Foundry Agent Service simplifies building, deploying, and scaling autonomous AI agents for enterprise workflows. It handles orchestration, safety, and observability, enabling agents to automate tasks like data analysis or process coordination.
What is Azure AI Foundry Agent Service?
Azure AI Foundry Agent Service forms the core of Azure AI Foundry, a unified platform for intelligent agents. Agents combine large language models (LLMs) like GPT-4o with custom instructions and tools to reason, retrieve data, and act autonomously.
Each agent consists of three components: an LLM for reasoning, instructions defining goals and behavior, and tools for actions like code execution or API calls. Unlike chatbots, agents complete goals independently or collaborate with other agents and humans, supporting production-scale automation.
Key benefits include full thread visibility for debugging, built-in multi-agent coordination, server-side tool orchestration with retries, and enterprise-grade trust features like content filters and Microsoft Entra ID integration.
Prerequisites for Deployment
Start with an active Azure subscription. Assign the Azure AI Account Owner role at the subscription level for project creation, or Contributor/Cognitive Services Contributor roles as alternatives.
Team members who create agents need the Azure AI User role at the project scope, which grants read, action, and delete permissions on agents. Sign in with az login so the SDK can authenticate through DefaultAzureCredential.
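To confirm credentials are wired up before touching the SDK, here is a minimal sketch; the token scope shown is an assumption for Azure AI Foundry project endpoints, so adjust it if your tenant uses a different audience:

```python
from azure.identity import DefaultAzureCredential

# Assumes you have already run `az login`; DefaultAzureCredential picks up that session.
credential = DefaultAzureCredential()

# The scope below is an assumption for Foundry project endpoints.
token = credential.get_token("https://ai.azure.com/.default")
print("Authenticated; token expires at", token.expires_on)
```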
Provision an Azure AI Foundry project, which auto-deploys GPT-4o and creates a default agent. For business continuity and disaster recovery, bring your own Azure Cosmos DB account so agent state is preserved across regional outages.
Setting Up Your Environment
Access the Azure AI Foundry portal and click "Create an agent" for the fastest setup. Enter a project name, optionally customize advanced options, and select Create to provision resources, including the account, project, and GPT-4o deployment.
Locate your project endpoint in the portal under Overview > Libraries > Foundry, formatted as https://<AIFoundryResourceName>.services.ai.azure.com/api/projects/<ProjectName>. Note the model deployment name from Models + Endpoints, typically "gpt-4o".
Set environment variables: PROJECT_ENDPOINT for the endpoint and MODEL_DEPLOYMENT_NAME for the model. Install SDKs like Azure.AI.Agents.Persistent for .NET or azure-ai-projects for Python.
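A quick sanity check (a minimal Python sketch, assuming the variable names above) saves debugging time later:

```python
import os

# Fail fast if the configuration copied from the portal is missing.
for name in ("PROJECT_ENDPOINT", "MODEL_DEPLOYMENT_NAME"):
    if not os.getenv(name):
        raise SystemExit(f"Set the {name} environment variable before running the samples.")
print("Environment looks good.")
```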
Creating Your First Agent
Agents gain intelligence from deployed models, behavior from instructions, and capabilities from tools like CodeInterpreterToolDefinition for data visualization or analysis.
.NET Example
Create a console app and install packages:
dotnet new console
dotnet add package Azure.AI.Agents.Persistent
dotnet add package Azure.Identity
Use this code to build an agent that handles math with code interpretation:
```csharp
using Azure.AI.Agents.Persistent;
using Azure.Identity;
var projectEndpoint = Environment.GetEnvironmentVariable("PROJECT_ENDPOINT");
var modelDeploymentName = Environment.GetEnvironmentVariable("MODEL_DEPLOYMENT_NAME");
PersistentAgentsClient client = new(projectEndpoint, new DefaultAzureCredential());
PersistentAgent agent = client.Administration.CreateAgent(
    model: modelDeploymentName,
    name: "MathAgent",
    instructions: "You politely help with math questions. Use the code interpreter tool when asked to visualize numbers.",
    tools: [new CodeInterpreterToolDefinition()]
);
PersistentAgentThread thread = client.Threads.CreateThread();
client.Messages.CreateMessage(thread.Id, MessageRole.User, "Draw a graph for a line with slope 4 and y-intercept 9.");
ThreadRun run = client.Runs.CreateRun(thread.Id, agent.Id, additionalInstructions: "Address user as Jane Doe.");
// Poll for completion
do
{
    Thread.Sleep(500);
    run = client.Runs.GetRun(thread.Id, run.Id);
}
while (run.Status == RunStatus.Queued || run.Status == RunStatus.InProgress || run.Status == RunStatus.RequiresAction);
foreach (var msg in client.Messages.GetMessages(thread.Id, order: ListSortOrder.Ascending))
{
    foreach (var content in msg.ContentItems)
    {
        if (content is MessageTextContent text) Console.WriteLine($"[{msg.Role}]: {text.Text}");
        // Handle images from code interpreter
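        // (Hedged addition: MessageImageFileContent and FileId follow current SDK samples
        //  and may differ across versions.)
        else if (content is MessageImageFileContent imageFile)
        {
            Console.WriteLine($"[{msg.Role}]: generated image file, ID = {imageFile.FileId}");
        }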
    }
}
```
This creates an agent, starts a thread (a conversation session), adds a user message, runs the agent, and prints the text responses; images generated by the code interpreter are surfaced as file references on the message content.
Python Example
Install packages:
pip install azure-ai-projects azure-identity
```python
import os
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from azure.ai.agents.models import CodeInterpreterTool
project_client = AIProjectClient(endpoint=os.getenv("PROJECT_ENDPOINT"), credential=DefaultAzureCredential())
with project_client:
    code_interpreter = CodeInterpreterTool()
    agent = project_client.agents.create_agent(
        model=os.getenv("MODEL_DEPLOYMENT_NAME"),
        name="MathAgent",
        instructions="You politely help with math questions. Use the Code Interpreter tool when asked to visualize numbers.",
        tools=code_interpreter.definitions,
        tool_resources=code_interpreter.resources
    )
    thread = project_client.agents.threads.create()
    project_client.agents.messages.create(thread_id=thread.id, role="user", content="Draw a graph for a line with slope 4 and y-intercept 9.")
    run = project_client.agents.runs.create_and_process(thread_id=thread.id, agent_id=agent.id)
    messages = project_client.agents.messages.list(thread_id=thread.id)
    for message in messages:
        print(f"Role: {message.role}, Content: {message.content}")
```
Charts produced by the Code Interpreter are returned as image file attachments on the thread messages; download them through the SDK to save the visualized output locally.
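A hedged sketch of that download step, meant to run inside the with project_client: block above after the run completes (the image_contents property and files.save helper follow current azure-ai-agents samples and may differ across SDK versions):

```python
    # Save any charts the code interpreter produced (hypothetical continuation of the example above).
    for message in project_client.agents.messages.list(thread_id=thread.id):
        for image in getattr(message, "image_contents", []):
            file_id = image.image_file.file_id
            project_client.agents.files.save(file_id=file_id, file_name=f"{file_id}.png")
            print(f"Saved chart to {file_id}.png")
```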
Adding Tools and Multi-Agent Coordination
Equip agents with tools for retrieval (Azure AI Search, Bing), actions (Azure Functions, Logic Apps), or analysis (Code Interpreter). Tools execute server-side with retries and logging.
For multi-agent setups, connect agents so that one orchestrator delegates to sub-agents, for example data querying followed by visualization. Configure this via the SDK by assigning tools and instructions to each agent.
Example: give one agent a database query tool, then chain its output to a code interpreter agent that renders charts; Application Insights captures full traces for observability.
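As a hedged sketch of that pattern in Python (the ConnectedAgentTool name and arguments follow current samples and may differ across SDK versions; chart_agent is assumed to be an agent created earlier with the Code Interpreter tool, and project_client comes from the example above):

```python
import os
from azure.ai.agents.models import ConnectedAgentTool

# Expose the charting agent as a callable tool for the orchestrator.
connected_chart_agent = ConnectedAgentTool(
    id=chart_agent.id,
    name="chart_agent",
    description="Renders charts from numeric data using the code interpreter.",
)

orchestrator = project_client.agents.create_agent(
    model=os.getenv("MODEL_DEPLOYMENT_NAME"),
    name="ReportOrchestrator",
    instructions=(
        "Answer data questions. When the user asks for a visualization, "
        "delegate the charting step to the chart_agent tool."
    ),
    tools=connected_chart_agent.definitions,
)
```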
Deploying to Production
Deploy agents within Azure AI Foundry projects for managed scaling. Integrate into apps via REST API or SDKs, using endpoints for threads, messages, and runs.
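For SDK-based integration, here is a minimal sketch of a reusable helper that wraps the thread/message/run cycle against an already-deployed agent (the helper name and agent ID placeholder are illustrative):

```python
import os

from azure.ai.agents.models import ListSortOrder
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential


def ask_agent(client: AIProjectClient, agent_id: str, prompt: str) -> str:
    """Send one prompt to an existing agent and return the final text reply."""
    thread = client.agents.threads.create()
    client.agents.messages.create(thread_id=thread.id, role="user", content=prompt)
    run = client.agents.runs.create_and_process(thread_id=thread.id, agent_id=agent_id)
    if run.status == "failed":
        raise RuntimeError(f"Agent run failed: {run.last_error}")

    reply = ""
    for message in client.agents.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING):
        if message.text_messages:
            reply = message.text_messages[-1].text.value  # last text in the thread is the agent's reply
    return reply


if __name__ == "__main__":
    project_client = AIProjectClient(
        endpoint=os.environ["PROJECT_ENDPOINT"],
        credential=DefaultAzureCredential(),
    )
    with project_client:
        print(ask_agent(project_client, agent_id="<your-agent-id>", prompt="Summarize the latest run results."))
```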
For web apps, deploy to Azure App Service. Containerize complex agents with Docker and push to Azure Container Registry, then deploy to AKS or Container Instances. Use Azure DevOps CI/CD for automation.
Secure with RBAC, Managed Identities, and Azure Key Vault for secrets. Enable autoscaling and Azure Monitor for latency/error alerts. Bring-your-own VNet and storage ensure compliance.
Monitoring, Observability, and Scaling
Threads provide structured logs of user-agent and agent-agent interactions. Poll run status (Queued, InProgress, Completed) and retrieve messages for full visibility.
Integrate Application Insights for telemetry. Customize with fine-tuning or distillation using thread data. Content safety filters mitigate risks like prompt injection.
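A minimal sketch of wiring traces to Application Insights, assuming the project has an Application Insights resource connected and the azure-monitor-opentelemetry package is installed (helper names may differ across SDK versions):

```python
import os
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from azure.monitor.opentelemetry import configure_azure_monitor

project_client = AIProjectClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)

# Route OpenTelemetry traces and metrics to the Application Insights resource
# attached to the project, if one is configured.
connection_string = project_client.telemetry.get_connection_string()
if connection_string:
    configure_azure_monitor(connection_string=connection_string)
```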
Provisioned deployments give predictable latency at scale. Keyless, on-behalf-of (OBO) authentication enables seamless enterprise integration.
Best Practices and Troubleshooting
Start simple: Test in the playground before the SDK.
Poll runs efficiently with 500ms intervals.
Cleanup: Delete threads/agents post-testing.
Handle failures: Check run.last_error and retry; see the sketch after this list.
Compliance: Use customer-managed Cosmos DB for BCDR.
Common issues: Permission errors (verify Azure AI User role); endpoint mismatches (check portal); auth failures (run az login).
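For the failure-handling item, a hedged retry sketch that continues from the Python example above (status strings follow the SDK's run states; tune the backoff for your workload):

```python
import time

MAX_ATTEMPTS = 3

for attempt in range(1, MAX_ATTEMPTS + 1):
    run = project_client.agents.runs.create_and_process(thread_id=thread.id, agent_id=agent.id)
    if run.status == "completed":
        break
    print(f"Attempt {attempt} did not complete: {run.status}, {run.last_error}")
    time.sleep(2 ** attempt)  # simple exponential backoff before retrying
else:
    raise RuntimeError("Agent run did not complete after retries.")
```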
With this approach, you can take robust, autonomous agents from prototype to enterprise scale, ready for business automation.