AutoGen AgentChat Tracing
Auto-instrument your AgentChat application for seamless observability
AutoGen AgentChat is the framework within Microsoft's AutoGen that enables robust multi-agent applications.
Sign up for Phoenix:
Sign up for an Arize Phoenix account at https://app.phoenix.arize.com/login
Click Create Space, then follow the prompts to create and launch your space.
Install packages:
pip install arize-phoenix-otel
Set your Phoenix endpoint and API Key:
From your new Phoenix Space
Create your API key from the Settings page
Copy your Hostname from the Settings page.
In your code, set your endpoint and API key:
import os
os.environ["PHOENIX_API_KEY"] = "ADD YOUR PHOENIX API KEY"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "ADD YOUR PHOENIX HOSTNAME"
# If you created your Phoenix Cloud instance before June 24th, 2025,
# you also need to set the API key as a header:
# os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={os.getenv('PHOENIX_API_KEY')}"
Install
pip install openinference-instrumentation-autogen-agentchat autogen-agentchat autogen_ext
Setup
Connect to your Phoenix instance using the register function.
from phoenix.otel import register
# configure the Phoenix tracer
tracer_provider = register(
project_name="agentchat-agent", # Default is 'default'
auto_instrument=True # Auto-instrument your app based on installed OI dependencies
)
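If you prefer not to rely on auto_instrument=True, the same OpenInference instrumentor can be attached explicitly. A minimal sketch, assuming the AutogenAgentChatInstrumentor exposed by the openinference-instrumentation-autogen-agentchat package installed above:
from phoenix.otel import register
from openinference.instrumentation.autogen_agentchat import AutogenAgentChatInstrumentor

# Register a tracer provider pointed at your Phoenix project
tracer_provider = register(project_name="agentchat-agent")

# Explicitly instrument AgentChat against that provider
AutogenAgentChatInstrumentor().instrument(tracer_provider=tracer_provider)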
Run AutoGen AgentChat
We’re going to run an AgentChat example using a multi-agent team. To get started, install the required packages to use your LLMs with AgentChat. In this example, we’ll use OpenAI as the LLM provider.
pip install autogen_ext openai
import asyncio
import os
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient
os.environ["OPENAI_API_KEY"] = "your-api-key"
async def main():
model_client = OpenAIChatCompletionClient(
model="gpt-4",
)
# Create two agents: a primary and a critic
primary_agent = AssistantAgent(
"primary",
model_client=model_client,
system_message="You are a helpful AI assistant.",
)
critic_agent = AssistantAgent(
"critic",
model_client=model_client,
system_message="""
Provide constructive feedback.
Respond with 'APPROVE' when your feedback has been addressed.
""",
)
# Termination condition: stop when the critic says "APPROVE"
text_termination = TextMentionTermination("APPROVE")
# Create a team with both agents
team = RoundRobinGroupChat(
[primary_agent, critic_agent],
termination_condition=text_termination
)
# Run the team on a task
result = await team.run(task="Write a short poem about the fall season.")
await model_client.close()
print(result)
if __name__ == "__main__":
asyncio.run(main())
Observe
Phoenix provides visibility into your AgentChat operations by automatically tracing all interactions.
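Once the team has run, the resulting traces can be pulled back into a DataFrame for inspection or evaluation. A minimal sketch, assuming the arize-phoenix client package is installed and the endpoint/API key environment variables are set as above:
import phoenix as px

# Connect to the Phoenix instance configured via PHOENIX_COLLECTOR_ENDPOINT
client = px.Client()

# Fetch the spans recorded for the project used in register()
spans_df = client.get_spans_dataframe(project_name="agentchat-agent")
print(spans_df.head())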