Make traces useful for your app. Auto-instrumentation captures the basics — inputs, outputs, tokens, latency. But your app has context that matters: customer tier, A/B test variant, prompt template version, error details. This page covers the ways to add that context.
Start with the standard attribute names Arize expects:
Semantic Conventions
OpenInference Semantic Conventions are the standardized attribute names that Arize AX uses to render your trace data correctly — model name, messages, token counts, span kinds, and more. When you use these attributes, your data shows up in the right places in the UI.
Install the semantic conventions package:
pip install openinference-semantic-conventions
Use SpanAttributes to set standardized attribute names on your spans:
from openinference.semconv.trace import SpanAttributes, MessageAttributes

span.set_attribute(SpanAttributes.OUTPUT_VALUE, response)
span.set_attribute(
    f"{SpanAttributes.LLM_OUTPUT_MESSAGES}.0.{MessageAttributes.MESSAGE_ROLE}",
    "assistant",
)
span.set_attribute(
    f"{SpanAttributes.LLM_OUTPUT_MESSAGES}.0.{MessageAttributes.MESSAGE_CONTENT}",
    response,
)
Install the semantic conventions package:
npm install --save @arizeai/openinference-semantic-conventions
Use SemanticConventions to set standardized attributes:
import { SpanStatusCode } from "@opentelemetry/api";
import {
  MimeType, OpenInferenceSpanKind, SemanticConventions,
} from "@arizeai/openinference-semantic-conventions";

tracer.startActiveSpan("chat chain", async (span) => {
  span.setAttributes({
    [SemanticConventions.OPENINFERENCE_SPAN_KIND]: OpenInferenceSpanKind.CHAIN,
    [SemanticConventions.INPUT_VALUE]: message,
    [SemanticConventions.INPUT_MIME_TYPE]: MimeType.TEXT,
    [SemanticConventions.METADATA]: JSON.stringify({
      userId: req.query.userId,
    }),
  });
  span.setStatus({ code: SpanStatusCode.OK });
  span.end();
});
In Java, use the attribute strings directly:
singleAttrSpan.setAttribute("openinference.span.kind", "CHAIN");
singleAttrSpan.setAttribute("input.value", input);
singleAttrSpan.setAttribute("output.value", output);
Beyond attribute names, every span has built-in primitives for signaling outcome and marking moments during execution:
Status, Events, and Exceptions
Set Status
Signal whether a span succeeded or failed. Every span carries a status — OK, ERROR, or UNSET.
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode
current_span = trace.get_current_span()
current_span.set_status(Status(StatusCode.OK))
# or Status(StatusCode.ERROR) on failure
import { trace, SpanStatusCode } from "@opentelemetry/api";
const currentSpan = trace.getActiveSpan();
if (currentSpan) {
  currentSpan.setStatus({ code: SpanStatusCode.OK });
}
Add Events
Span Events are lightweight log messages attached to a span at a point in time.
from opentelemetry import trace
current_span = trace.get_current_span()
current_span.add_event("Doing something")
current_span.add_event("some log", {
    "log.severity": "error",
    "log.message": "Data not found",
    "request.id": request_id,
})
import { trace } from "@opentelemetry/api";
const currentSpan = trace.getActiveSpan();
if (currentSpan) {
  currentSpan.addEvent("Gonna try it!");
  currentSpan.addEvent("Did it!");
}
singleAttrSpan.addEvent("Doing Something");
singleAttrSpan.addEvent("Doing another thing");
singleAttrSpan.end();
Record Exceptions
Capture exception details and mark the span as failed in one flow:
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode
current_span = trace.get_current_span()
try:
    # something that might fail
    pass
except Exception as ex:
    current_span.set_status(Status(StatusCode.ERROR))
    current_span.record_exception(ex)
import { trace, SpanStatusCode } from "@opentelemetry/api";
const currentSpan = trace.getActiveSpan();
try {
  // something that might fail
} catch (error) {
  if (currentSpan) {
    currentSpan.setStatus({ code: SpanStatusCode.ERROR });
    currentSpan.recordException(error);
  }
}
With outcome primitives covered, the next layer is how inputs and outputs are captured with structure:
Log Structured Inputs and Outputs
Set input.value / output.value for the table view, and llm.input_messages / llm.output_messages for structured chat messages:
from openinference.semconv.trace import MessageAttributes, SpanAttributes
from opentelemetry.trace import Span
def set_input_attrs(span: Span, messages: list) -> None:
    span.set_attribute(SpanAttributes.INPUT_VALUE, messages[-1].get("content", ""))
    for idx, msg in enumerate(messages):
        span.set_attribute(
            f"{SpanAttributes.LLM_INPUT_MESSAGES}.{idx}.{MessageAttributes.MESSAGE_ROLE}",
            msg["role"],
        )
        span.set_attribute(
            f"{SpanAttributes.LLM_INPUT_MESSAGES}.{idx}.{MessageAttributes.MESSAGE_CONTENT}",
            msg.get("content", ""),
        )

def set_output_attrs(span: Span, response_message: dict) -> None:
    span.set_attribute(SpanAttributes.OUTPUT_VALUE, response_message.get("content", ""))
    span.set_attribute(
        f"{SpanAttributes.LLM_OUTPUT_MESSAGES}.0.{MessageAttributes.MESSAGE_ROLE}",
        response_message["role"],
    )
    span.set_attribute(
        f"{SpanAttributes.LLM_OUTPUT_MESSAGES}.0.{MessageAttributes.MESSAGE_CONTENT}",
        response_message.get("content", ""),
    )
Semantic conventions and structured I/O cover standard LLM data. But your app has its own context that doesn’t fit any standard attribute:
Custom Attributes
Customer tier, environment, feature flags, A/B test variants — custom attributes let you attach this app-specific data to spans so you can filter, group, and analyze by it in Arize AX.
Best practice: vendor your attributes (e.g., mycompany.) so they don’t clash with semantic conventions.
Get the current span and set your custom attributes:
from opentelemetry import trace
current_span = trace.get_current_span()
current_span.set_attribute("mycompany.customer_tier", "enterprise")
current_span.set_attribute("mycompany.ab_variant", "v2")
current_span.set_attribute("mycompany.feature_flag", "new-retrieval")
Set attributes when creating a span or on an active span:
tracer.startActiveSpan(
  'app.new-span',
  { attributes: { 'mycompany.customer_tier': 'enterprise' } },
  (span) => {
    span.setAttribute('mycompany.ab_variant', 'v2');
    span.end();
  },
);
Set attributes directly on the span:
singleAttrSpan.setAttribute("mycompany.customer_tier", "enterprise");
singleAttrSpan.end();
When to use a custom attribute vs. metadata:
- Custom attribute — attached to a single span, each one a distinct filterable field in the UI. Use for values you filter or group by: customer tier, A/B variant, feature flag.
- Metadata (via using_metadata) — propagates to every child span in a context and is stored as a single JSON field. Use for request-wide context: request ID, experiment name, pipeline version.
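The difference is visible in how each option is stored on a span. Here is a minimal sketch of the resulting attribute shapes (the mycompany.* keys and the values are illustrative, not standard names):

```python
import json

# A custom attribute lands on one span as its own distinct, filterable key.
span_attributes = {"mycompany.customer_tier": "enterprise"}

# Metadata set via using_metadata is serialized into a single JSON string
# stored under the "metadata" key on every child span in the context.
request_context = {"request_id": "req-123", "experiment": "checkout-v2"}
child_span_attributes = {"metadata": json.dumps(request_context)}

print(span_attributes["mycompany.customer_tier"])  # enterprise
print(child_span_attributes["metadata"])
```

So a query like mycompany.customer_tier = "enterprise" filters directly on the custom attribute, while metadata is one JSON blob shared by the whole request.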
Propagate Attributes to All Child Spans
Set attributes once on OpenTelemetry Context, and tracing integrations will propagate them to all child spans automatically.
pip install openinference-instrumentation
from openinference.instrumentation import using_metadata
with using_metadata({"key-1": "value_1", "key-2": "value_2"}):
    # All child spans get: "metadata" = '{"key-1": "value_1", ...}'
    ...

from openinference.instrumentation import using_tags

with using_tags(["tag_1", "tag_2"]):
    # All child spans get: "tag.tags" = '["tag_1","tag_2"]'
    ...
using_attributes
Convenience — combines using_session, using_user, using_metadata, using_tags, and using_prompt_template:
from openinference.instrumentation import using_attributes

with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
    metadata={"key-1": "value_1"},
    tags=["tag_1", "tag_2"],
    prompt_template="Please describe the weather forecast for {city} on {date}",
    prompt_template_version="v1.0",
    prompt_template_variables={"city": "Johannesburg", "date": "July 11"},
):
    ...
get_attributes_from_context
Read context attributes and attach them to manually created spans:
from openinference.instrumentation import get_attributes_from_context
span.set_attributes(dict(get_attributes_from_context()))
npm install --save @arizeai/openinference-core @opentelemetry/api
import { context } from "@opentelemetry/api"
import { setMetadata } from "@arizeai/openinference-core"
context.with(
  setMetadata(context.active(), { key1: "value1", key2: "value2" }),
  () => { /* spans get: "metadata" = '{"key1": "value1", ...}' */ }
)
import { context } from "@opentelemetry/api"
import { setTags } from "@arizeai/openinference-core"
context.with(
  setTags(context.active(), ["value1", "value2"]),
  () => { /* spans get: "tag.tags" = '["value1", "value2"]' */ }
)
setAttributes
Combine with other setters:
import { context } from "@opentelemetry/api"
import { setAttributes, setSession } from "@arizeai/openinference-core"

context.with(
  setAttributes(
    setSession(context.active(), { sessionId: "session-id" }),
    { myAttribute: "test" }
  ),
  () => { /* spans get both attributes */ }
)
getAttributesFromContext
import { getAttributesFromContext } from "@arizeai/openinference-core";
import { context, trace } from "@opentelemetry/api"
const contextAttributes = getAttributesFromContext(context.active())
const span = trace.getTracer("example").startSpan("example span")
span.setAttributes(contextAttributes)
span.end();
using_tags / setTags set tag.tags on spans. For project- or dataset-level tags (a separate platform feature), see Tags.
Prompt templates have their own dedicated propagation helper:
Prompt Templates and Variables
Instrument prompt templates so you can experiment with changes in the Prompt Playground.
Recommended for LLM spans only.
from openinference.instrumentation import using_prompt_template
from openai import OpenAI
client = OpenAI()
prompt_template = "Please describe the best activity for me to do in {city} on {date}"
prompt_template_variables = {"city": "Johannesburg", "date": "July 11"}
with using_prompt_template(
    template=prompt_template,
    variables=prompt_template_variables,
    version="v1.0",
):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt_template.format(**prompt_template_variables)}],
    )
import { context } from "@opentelemetry/api"
import { setPromptTemplate } from "@arizeai/openinference-core"
context.with(
  setPromptTemplate(context.active(), {
    template: "Please describe the best activity for me to do in {{city}}",
    variables: { city: "Johannesburg" },
    version: "v1.0"
  }),
  () => { /* spans get prompt template attributes */ }
)
Prompt templates are set at span-creation time. For data that arrives later — review status, corrections, labels added after generation — patch the span after it’s been ingested:
Log Latent Metadata
Useful when your system enriches data after generation time — for example, adding review status, corrections, or labels that weren’t available when the trace was created.
from arize import ArizeClient
import pandas as pd
client = ArizeClient(api_key="your-arize-api-key")
metadata_df = pd.DataFrame({
    "context.span_id": ["span1"],
    "patch_document": [{"status": "reviewed"}],
})
response = client.spans.update_metadata(
    space_id="your-arize-space-id",
    project_name="your-project-name",
    dataframe=metadata_df,
)
| Input Type | Behavior |
|---|---|
| string, int, float | Fully supported |
| bool | Converted to string ("true" / "false") |
| Objects / Arrays | Serialized to JSON strings |
| None / null | Stored as JSON null (does not remove the field) |
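The conversions in this table can be mirrored in plain Python with json. The normalize_patch_value helper below is only a sketch of the documented behavior, not part of the Arize SDK:

```python
import json

def normalize_patch_value(value):
    """Sketch of how patch values are stored, per the table above."""
    if isinstance(value, bool):  # check bool before int: bool is an int subclass
        return "true" if value else "false"
    if isinstance(value, (str, int, float)):
        return value  # fully supported, passed through as-is
    return json.dumps(value)  # objects/arrays -> JSON strings; None -> JSON null

print(normalize_patch_value(True))                    # true
print(normalize_patch_value({"status": "reviewed"}))  # {"status": "reviewed"}
print(normalize_patch_value(None))                    # null
```

Note the bool check comes first: in Python, isinstance(True, int) is also true, so checking int first would pass booleans through unchanged.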
Next step
Group multi-turn conversations together with sessions: