Make traces useful for your app. Auto-instrumentation captures the basics — inputs, outputs, tokens, latency. But your app has context that matters: customer tier, A/B test variant, prompt template version, error details. This page covers all the ways to add it. Start with the standard attribute names Arize expects:

Semantic Conventions

OpenInference Semantic Conventions are the standardized attribute names that Arize AX uses to render your trace data correctly — model name, messages, token counts, span kinds, and more. When you use these attributes, your data shows up in the right places in the UI.
Install the semantic conventions package:
pip install openinference-semantic-conventions
Use SpanAttributes to set standardized attribute names on your spans:
from openinference.semconv.trace import SpanAttributes, MessageAttributes

span.set_attribute(SpanAttributes.OUTPUT_VALUE, response)

span.set_attribute(
    f"{SpanAttributes.LLM_OUTPUT_MESSAGES}.0.{MessageAttributes.MESSAGE_ROLE}",
    "assistant",
)
span.set_attribute(
    f"{SpanAttributes.LLM_OUTPUT_MESSAGES}.0.{MessageAttributes.MESSAGE_CONTENT}",
    response,
)
Beyond attribute names, every span has built-in primitives for signaling outcome and marking moments during execution:

Status, Events, and Exceptions

Set Status

Signal whether a span succeeded or failed. Every span carries a status — OK, ERROR, or UNSET.
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

current_span = trace.get_current_span()
current_span.set_status(Status(StatusCode.OK))
# or Status(StatusCode.ERROR) on failure

Add Events

Span Events are lightweight log messages attached to a span at a point in time.
from opentelemetry import trace

current_span = trace.get_current_span()
current_span.add_event("Doing something")

current_span.add_event("some log", {
    "log.severity": "error",
    "log.message": "Data not found",
    "request.id": request_id,
})

Record Exceptions

Capture exception details and mark the span as failed in one flow:
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

current_span = trace.get_current_span()

try:
    # something that might fail
    pass
except Exception as ex:
    current_span.set_status(Status(StatusCode.ERROR))
    current_span.record_exception(ex)
With outcome primitives covered, the next layer is how inputs and outputs are captured with structure:

Log Structured Inputs and Outputs

Set input.value / output.value for the table view, and llm.input_messages / llm.output_messages for structured chat messages:
from openinference.semconv.trace import MessageAttributes, SpanAttributes
from opentelemetry.trace import Span

def set_input_attrs(span: Span, messages: list) -> None:
    span.set_attribute(SpanAttributes.INPUT_VALUE, messages[-1].get("content", ""))
    for idx, msg in enumerate(messages):
        span.set_attribute(
            f"{SpanAttributes.LLM_INPUT_MESSAGES}.{idx}.{MessageAttributes.MESSAGE_ROLE}",
            msg["role"],
        )
        span.set_attribute(
            f"{SpanAttributes.LLM_INPUT_MESSAGES}.{idx}.{MessageAttributes.MESSAGE_CONTENT}",
            msg.get("content", ""),
        )

def set_output_attrs(span: Span, response_message: dict) -> None:
    span.set_attribute(SpanAttributes.OUTPUT_VALUE, response_message.get("content", ""))
    span.set_attribute(
        f"{SpanAttributes.LLM_OUTPUT_MESSAGES}.0.{MessageAttributes.MESSAGE_ROLE}",
        response_message["role"],
    )
    span.set_attribute(
        f"{SpanAttributes.LLM_OUTPUT_MESSAGES}.0.{MessageAttributes.MESSAGE_CONTENT}",
        response_message.get("content", ""),
    )
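To make the flattening concrete, here is a minimal stdlib-only sketch using a stub span that records attributes into a dict. The literal key strings mirror the OpenInference constants used above (e.g., SpanAttributes.LLM_INPUT_MESSAGES resolves to "llm.input_messages"), but the stub span itself is illustrative, not part of any SDK:

```python
class StubSpan:
    """Illustrative stand-in for an OTel Span; records attributes in a dict."""
    def __init__(self):
        self.attributes = {}

    def set_attribute(self, key, value):
        self.attributes[key] = value

def set_input_attrs(span, messages):
    # Same flattening as above, with the semconv constants spelled out literally
    span.set_attribute("input.value", messages[-1].get("content", ""))
    for idx, msg in enumerate(messages):
        span.set_attribute(f"llm.input_messages.{idx}.message.role", msg["role"])
        span.set_attribute(f"llm.input_messages.{idx}.message.content", msg.get("content", ""))

span = StubSpan()
set_input_attrs(span, [
    {"role": "system", "content": "Be concise."},
    {"role": "user", "content": "What is a span?"},
])
# span.attributes now holds flat keys like "llm.input_messages.1.message.role"
```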
Semantic conventions and structured I/O cover standard LLM data. But your app has its own context that doesn’t fit any standard attribute:

Custom Attributes

Customer tier, environment, feature flags, A/B test variants — custom attributes let you attach this app-specific data to spans so you can filter, group, and analyze by it in Arize AX. Best practice: namespace your attributes with a vendor prefix (e.g., mycompany.) so they don’t clash with semantic conventions.
Get the current span and set your custom attributes:
from opentelemetry import trace

current_span = trace.get_current_span()

current_span.set_attribute("mycompany.customer_tier", "enterprise")
current_span.set_attribute("mycompany.ab_variant", "v2")
current_span.set_attribute("mycompany.feature_flag", "new-retrieval")
When to use a custom attribute vs. metadata:
  • Custom attribute — attached to a single span, each one a distinct filterable field in the UI. Use for values you filter or group by: customer tier, A/B variant, feature flag.
  • Metadata (via using_metadata) — propagates to every child span in a context and is stored as a single JSON field. Use for request-wide context: request ID, experiment name, pipeline version.
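The difference in storage shape can be sketched with plain dicts (illustrative only — the real propagation and serialization is handled by the SDK):

```python
import json

# Custom attributes: each value is its own filterable span field
span_attrs = {
    "mycompany.customer_tier": "enterprise",
    "mycompany.ab_variant": "v2",
}

# Metadata: the whole dict is serialized into a single "metadata" JSON field
request_context = {"request_id": "req-123", "experiment": "exp-7"}
span_attrs["metadata"] = json.dumps(request_context)

print(span_attrs["metadata"])  # one JSON string, not separate fields
```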

Propagate Attributes to All Child Spans

Set attributes once on OpenTelemetry Context, and tracing integrations will propagate them to all child spans automatically.
pip install openinference-instrumentation

using_metadata

from openinference.instrumentation import using_metadata

with using_metadata({"key-1": "value_1", "key-2": "value_2"}):
    # All child spans get: "metadata" = '{"key-1": "value_1", ...}'
    ...

using_tags

from openinference.instrumentation import using_tags

with using_tags(["tag_1", "tag_2"]):
    # All child spans get: "tag.tags" = '["tag_1","tag_2"]'
    ...

using_attributes

Convenience — combines using_session, using_user, using_metadata, using_tags, and using_prompt_template:
from openinference.instrumentation import using_attributes

with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
    metadata={"key-1": "value_1"},
    tags=["tag_1", "tag_2"],
    prompt_template="Please describe the weather forecast for {city} on {date}",
    prompt_template_version="v1.0",
    prompt_template_variables={"city": "Johannesburg", "date": "July 11"},
):
    ...

get_attributes_from_context

Read context attributes and attach them to manually created spans:
from openinference.instrumentation import get_attributes_from_context

span.set_attributes(dict(get_attributes_from_context()))
using_tags / setTags set tag.tags on spans. For project- or dataset-level tags (a separate platform feature), see Tags.
Prompt templates have their own dedicated propagation helper:

Prompt Templates and Variables

Instrument prompt templates so you can experiment with changes in the Prompt Playground.
Recommended for LLM spans only.
from openinference.instrumentation import using_prompt_template
from openai import OpenAI

client = OpenAI()
prompt_template = "Please describe the best activity for me to do in {city} on {date}"
prompt_template_variables = {"city": "Johannesburg", "date": "July 11"}

with using_prompt_template(
    template=prompt_template,
    variables=prompt_template_variables,
    version="v1.0",
):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt_template.format(**prompt_template_variables)}],
    )
Prompt templates are set at span-creation time. For data that arrives later — review status, corrections, labels added after generation — patch the span after it’s been ingested:

Log Latent Metadata

Useful when your system enriches data after generation time — for example, adding review status, corrections, or labels that weren’t available when the trace was created.
from arize import ArizeClient
import pandas as pd

client = ArizeClient(api_key="your-arize-api-key")

metadata_df = pd.DataFrame({
    "context.span_id": ["span1"],
    "patch_document": [{"status": "reviewed"}],
})

response = client.spans.update_metadata(
    space_id="your-arize-space-id",
    project_name="your-project-name",
    dataframe=metadata_df,
)
Value types in the patch document are handled as follows:

Input Type            Behavior
string, int, float    Fully supported
bool                  Converted to string ("true" / "false")
Objects / Arrays      Serialized to JSON strings
None / null           Stored as JSON null (does not remove the field)
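The conversions in the table can be sketched as a normalization helper (hypothetical, stdlib-only — the actual client performs its own serialization):

```python
import json

def normalize_patch_value(value):
    """Mirror the table above: bools become strings, objects/arrays
    become JSON strings, None stays as JSON null."""
    if isinstance(value, bool):  # check bool before int: bool is an int subclass
        return "true" if value else "false"
    if isinstance(value, (str, int, float)):
        return value             # fully supported as-is
    if isinstance(value, (dict, list)):
        return json.dumps(value) # serialized to a JSON string
    if value is None:
        return None              # stored as JSON null; the field is not removed
    raise TypeError(f"unsupported type: {type(value).__name__}")
```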

Next step

Group multi-turn conversations together with sessions:

Next: Set Up Sessions