Agent Graph & Path
Understanding agent-based LLM applications is hard. Agentic systems are often designed to make autonomous decisions and chain together dozens—sometimes hundreds or even thousands—of operations. Each operation results in one or more spans. This level of complexity makes it nearly impossible to trace an agent's behavior by manually inspecting individual spans or traces.
Why Agent Visualization Matters
Instead of wading through hundreds of spans, stakeholders like developers and AI PMs want to quickly answer questions like:
- What are the most common execution paths?
- Where are things failing?
- Which agents are calling which?
- Is there any self-looping behavior?
These kinds of questions are not easily answered by trace lists alone.
That’s why we built Agent Graph and Path Visualization—a way to abstract individual spans into node-based graphs. These visualizations help map your application flow in a human-understandable way, bridging the gap between granular telemetry and system-level insight.
Implementing Agent Visualization for Arize
Agent and node visualization is designed to track high-level workflow components and their relationships, not every individual operation. Think of it as a "logical flow map" rather than a detailed trace view.
This is powered by span metadata that identifies agents and defines the transitions between them. We automatically track these attributes for popular frameworks, so no additional implementation is needed. For other frameworks, custom agents, or hybrid systems, you can use custom implementation.
Frameworks Supported: LangGraph, AutoGen, CrewAI, OpenAI Agents, Agno
Frameworks with Built-In Support
The following frameworks have built-in support for agent metadata through their auto-instrumentors: LangGraph, AutoGen, CrewAI, OpenAI Agents, and Agno.
Custom Implementation
When Custom Implementation Is Needed
Custom agent metadata tracking is required when:

- Using frameworks without built-in support:
  - Vanilla OpenAI / Anthropic calls
  - Custom agent implementations
  - LangChain without agent components
  - Other unsupported frameworks
- Using hybrid instrumentation:
  - Mixing auto-instrumented frameworks with custom code
  - Building custom agents that interact with instrumented frameworks
Required Metadata Attributes
To enable agent and node visualization, include the following metadata:
| Attribute | Description | Required |
| --- | --- | --- |
| `graph.node.id` | Unique name for the agent/node | ✅ Required |
| `graph.node.parent_id` | ID of the parent node (if applicable) | Optional, but recommended. If omitted, the graph is inferred from the parent/child relationships of the spans within the trace. |
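When `graph.node.parent_id` is omitted, the parent node can be inferred from the span tree itself: walk up from each annotated span to the nearest ancestor span that also carries a `graph.node.id`. The sketch below illustrates that inference over plain dictionaries standing in for exported spans; the field names (`span_id`, `parent`, `node_id`) are illustrative only, not the exporter's actual wire format.

```python
# Each "span" is a plain dict: its id, its parent span's id, and an
# optional graph.node.id attribute. Field names are illustrative only.
spans = [
    {"span_id": "s1", "parent": None, "node_id": "orchestrator"},
    {"span_id": "s2", "parent": "s1", "node_id": None},  # un-annotated helper span
    {"span_id": "s3", "parent": "s2", "node_id": "parser"},
]

by_id = {s["span_id"]: s for s in spans}

def inferred_parent_node(span):
    """Return the graph.node.id of the nearest annotated ancestor span."""
    cur = span["parent"]
    while cur is not None:
        ancestor = by_id[cur]
        if ancestor["node_id"]:
            return ancestor["node_id"]
        cur = ancestor["parent"]
    return None  # no annotated ancestor: this node is a root

print(inferred_parent_node(by_id["s3"]))  # orchestrator
```

Note how the un-annotated helper span (`s2`) is skipped: `parser` is attached directly to `orchestrator` in the inferred graph.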
Example Hierarchy
| Attribute | Example values |
| --- | --- |
| `graph.node.id` | `"input_parser"`, `"research_agent"` |
| `graph.node.parent_id` | `"main_orchestrator"` |
The example creates this structure:
```
main_orchestrator
├── input_parser
├── content_generator
│   ├── research_agent
│   └── writer_agent
└── quality_checker
```

You do not need to annotate every span. Only annotate those components you want represented in the graph.
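Conceptually, the tree above is just the set of `(graph.node.id, graph.node.parent_id)` pairs grouped by parent. This small sketch rebuilds and prints the same structure from those pairs, which can help when reasoning about what your annotations will produce:

```python
# The (graph.node.id, graph.node.parent_id) pairs behind the tree above;
# None marks the root node.
nodes = [
    ("main_orchestrator", None),
    ("input_parser", "main_orchestrator"),
    ("content_generator", "main_orchestrator"),
    ("research_agent", "content_generator"),
    ("writer_agent", "content_generator"),
    ("quality_checker", "main_orchestrator"),
]

# Group children under each parent, preserving insertion order.
children = {}
for node, parent in nodes:
    children.setdefault(parent, []).append(node)

def render(node, depth=0):
    """Render the subtree rooted at `node` as indented lines."""
    lines = ["  " * depth + node]
    for child in children.get(node, []):
        lines.extend(render(child, depth + 1))
    return lines

print("\n".join(render(children[None][0])))
```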
Example Implementation
Basic Pattern
```python
# 1. Add graph attributes to your spans
def add_graph_attributes(span, node_id: str, parent_id: str = None):
    span.set_attribute("graph.node.id", node_id)
    if parent_id:
        span.set_attribute("graph.node.parent_id", parent_id)
    span.set_attribute("graph.node.display_name", node_id.replace("_", " ").title())

# 2. Use in your tracing code (tracer is your OpenTelemetry tracer,
#    e.g. trace.get_tracer(__name__))
with tracer.start_as_current_span("my_operation") as span:
    add_graph_attributes(span, "my_component", "parent_component")
    # Your logic here...
```

Multi-Level Hierarchy
```python
# Root level (no parent)
with tracer.start_as_current_span("main_workflow") as main_span:
    add_graph_attributes(main_span, "orchestrator")

    # Child level
    with tracer.start_as_current_span("parse_input") as parse_span:
        add_graph_attributes(parse_span, "parser", "orchestrator")

        # Grandchild level
        with tracer.start_as_current_span("validate_input") as validate_span:
            add_graph_attributes(validate_span, "validator", "parser")
```