Manual Instrumentation
Fully customize your traces with manual instrumentation.
We recommend starting with one of our supported integrations if one is available. If not, Arize AX is OTEL compliant, so you can instrument your traces manually. For the best experience, instrument your traces following the OpenInference semantic conventions.
Installation
Ensure you have OpenInference and OpenTelemetry installed:
pip install openinference-semantic-conventions opentelemetry-api opentelemetry-sdk arize-otel
npm install @opentelemetry/api @opentelemetry/exporter-trace-otlp-proto @opentelemetry/resources @opentelemetry/sdk-trace-base @opentelemetry/sdk-trace-node @opentelemetry/semantic-conventions @arizeai/openinference-semantic-conventions openai
Set Up Tracer Provider
Set up your tracer provider using the Arize register function.
from arize.otel import register

# Set up the tracer provider
tracer_provider = register(
    space_id="YOUR_SPACE_ID",
    api_key="YOUR_API_KEY",
    project_name="YOUR_PROJECT_NAME",
)
tracer = tracer_provider.get_tracer(__name__)

import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { resourceFromAttributes } from "@opentelemetry/resources";
import { SimpleSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { ATTR_SERVICE_NAME } from "@opentelemetry/semantic-conventions";
import { SEMRESATTRS_PROJECT_NAME } from "@arizeai/openinference-semantic-conventions";
const COLLECTOR_ENDPOINT = "https://otlp.arize.com";
const SERVICE_NAME = "manual-js-app";
const provider = new NodeTracerProvider({
  resource: resourceFromAttributes({
    [ATTR_SERVICE_NAME]: SERVICE_NAME,
    [SEMRESATTRS_PROJECT_NAME]: SERVICE_NAME,
  }),
  spanProcessors: [
    new SimpleSpanProcessor(
      new OTLPTraceExporter({
        url: `${COLLECTOR_ENDPOINT}/v1/traces`,
        headers: {
          "space_id": "your-arize-space-id",
          "api_key": "your-arize-api-key",
        },
      })
    ),
  ],
});
provider.register();
console.log("Provider registered");

Use Tracer to Create Spans
You can use your tracer object in multiple ways: as a decorator, which is useful for tracing an entire function, or as a context manager, which traces a specific block of code.
from opentelemetry.trace import Status, StatusCode

@tracer.chain
def run_agent(user_input: str) -> str:
    # This span is kind=CHAIN by default and captures input & output
    response = call_llm(user_input)
    return response

def call_llm(prompt: str) -> str:
    with tracer.start_as_current_span("llm-completion") as span:
        span.set_attribute("openinference.span.kind", "LLM")
        span.set_attribute("input.value", prompt)
        span.set_attribute("llm.model_name", "gpt-3.5-turbo")
        result = external_llm_client.chat(prompt)
        span.set_attribute("output.value", result)
        span.set_status(Status(StatusCode.OK))
        return result

import { trace, SpanStatusCode } from "@opentelemetry/api";
import {
  SemanticConventions,
  OpenInferenceSpanKind,
} from "@arizeai/openinference-semantic-conventions";
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Get tracer
const tracer = trace.getTracer(SERVICE_NAME);

// Example: Manual span creation
async function callLLM(prompt: string): Promise<string> {
  return tracer.startActiveSpan("call-llm", async (span) => {
    span.setAttribute(SemanticConventions.OPENINFERENCE_SPAN_KIND, OpenInferenceSpanKind.LLM);
    span.setAttribute(SemanticConventions.INPUT_VALUE, prompt);
    span.setAttribute(SemanticConventions.LLM_MODEL_NAME, "gpt-5");
    const response = await openai.chat.completions.create({
      model: "gpt-5",
      messages: [{ role: "user", content: prompt }],
    });
    const result = response.choices[0].message.content || "";
    span.setAttribute(SemanticConventions.OUTPUT_VALUE, result);
    span.setStatus({ code: SpanStatusCode.OK });
    span.end();
    return result;
  });
}

How to Instrument Individual Spans
When instrumenting your application, choose the appropriate span kind for each step your application takes, not just the top-level chain span. You set the kind on each span with the openinference.span.kind attribute.
Check out our guide on Spans to see all the span kinds you can configure and how to set them up.
FAQs:
Q: Do I have to use an SDK that supports OpenInference? A: No; you can use any OpenTelemetry-compatible tracer. But if you instrument using the OpenInference schema (span kinds + attributes) you’ll get better integration (analytics, visualisation) in Arize AX and across tools.
Q: I don't think I'm doing this correctly. It looks off.
A: It depends on what looks off, but the most common cause is missing conventions. We expect spans to follow the OpenInference semantic conventions for the best experience: every span should set the openinference.span.kind attribute (e.g., LLM, TOOL, CHAIN) and include the recommended attributes for that kind (for example: input.value, output.value, llm.model_name). You may also add your own custom attributes alongside those.
Q: What if I’m capturing sensitive data (PII) in spans or attributes? A: When using manual instrumentation, if you include sensitive or PII data as span attributes, you must handle masking, redaction, or encryption as appropriate. Arize AX supports attribute-level redaction and you should follow your organisation’s compliance policies.
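One lightweight approach, sketched below with a hypothetical redact helper, is to mask obvious PII patterns before attaching values as span attributes. The regexes here are illustrative only, not a complete PII filter:

```python
import re

# Illustrative patterns only; real PII detection should follow
# your organisation's compliance policy and tooling.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
]

def redact(value: str) -> str:
    """Mask known PII patterns before the value is set as a span attribute."""
    for pattern, replacement in PII_PATTERNS:
        value = pattern.sub(replacement, value)
    return value

prompt = "Email jane.doe@example.com about SSN 123-45-6789"
print(redact(prompt))  # Email <EMAIL> about SSN <SSN>
```

You would then call redact(...) on any user-supplied text before passing it to span.set_attribute("input.value", ...) or span.set_attribute("output.value", ...).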
Q: Can I combine auto-instrumentation and manual instrumentation? A: Yes! Many users enable auto-instrumentation for the bulk of spans, then add manual spans where they need extra detail (custom tool calls, business logic, domain-specific flows). Make sure when you add manual spans you still follow the OpenInference schema so traces remain consistent.