Vercel AI SDK Tracing


This package provides a set of utilities to ingest Vercel AI SDK (>= 3.3) spans into platforms like Arize and Phoenix.

Note: This package requires you to be using the Vercel AI SDK version 3.3 or higher.

Getting Started

Installation

You will need to install the @arizeai/openinference-vercel package as well as a few OpenTelemetry packages.

npm i @arizeai/openinference-vercel @opentelemetry/exporter-trace-otlp-proto @opentelemetry/api

Important!

If you are using the registerOTel function from @vercel/otel (see Runtimes below), you must ensure that your OpenTelemetry package versions match those used by @vercel/otel: use OpenTelemetry packages at version 1.x (or 0.1.x for experimental packages) with @vercel/otel 1.x, and at version 2.x (or 0.2.x) with @vercel/otel 2.x. Keep this in mind during your installs.

@vercel/otel    OpenTelemetry packages
1.x             1.x (0.1.x)
2.x             2.x (0.2.x)

AI SDK Setup

In order to trace calls to the AI SDK, you must set the experimental_telemetry flag on each call.

import { openai } from "@ai-sdk/openai";
import { convertToModelMessages, streamText, type UIMessage } from "ai";

// `messages` is the UIMessage[] received from the client request
const result = streamText({
  model: openai("gpt-4o"),
  messages: convertToModelMessages(messages),
  // Set this flag on each call to the SDK
  experimental_telemetry: { isEnabled: true },
});

Runtimes

Depending on your runtime, you may need to set up instrumentation differently. If you're using Next.js and are tracing AI SDK calls in the Edge runtime, you will need to use the registerOTel function from @vercel/otel. Otherwise, in Node runtimes, you can use the NodeTracerProvider or the NodeSDK.

Additional Dependencies

npm i @vercel/otel

Add the following instrumentation.ts at the top level of your src directory.

import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";
import { registerOTel } from "@vercel/otel";
import {
  isOpenInferenceSpan,
  OpenInferenceSimpleSpanProcessor,
} from "@arizeai/openinference-vercel";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";

// Optionally set logging for debugging, remove in production
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);

export function register() {
  registerOTel({
    serviceName: "next-app",
    attributes: {
      model_id: "my-ai-app",
      model_version: "1.0.0",
    },
    spanProcessors: [
      new OpenInferenceSimpleSpanProcessor({
        exporter: new OTLPTraceExporter({
          url: "https://otlp.arize.com/v1/traces",
          headers: {
            space_id: process.env.ARIZE_SPACE_ID || "",
            api_key: process.env.ARIZE_API_KEY || "",
          },
        }),
        // Optionally filter to only include AI-related spans. Because AI
        // spans are nested below http spans, filtering out the http root
        // spans means no traces will appear on the traces tab in AX; see
        // the Span Filter section below.
        spanFilter: isOpenInferenceSpan,
      }),
    ],
  });
}
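For Node runtimes, where @vercel/otel is not required, a similar setup can be sketched with the NodeTracerProvider. This is a sketch assuming the same Arize exporter configuration as above and the OpenTelemetry JS SDK 2.x constructor API (with SDK 1.x, pass the processor via provider.addSpanProcessor instead); adjust the endpoint and headers for your collector.

```typescript
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import {
  isOpenInferenceSpan,
  OpenInferenceSimpleSpanProcessor,
} from "@arizeai/openinference-vercel";

const provider = new NodeTracerProvider({
  spanProcessors: [
    new OpenInferenceSimpleSpanProcessor({
      exporter: new OTLPTraceExporter({
        url: "https://otlp.arize.com/v1/traces",
        headers: {
          space_id: process.env.ARIZE_SPACE_ID || "",
          api_key: process.env.ARIZE_API_KEY || "",
        },
      }),
      // Optionally filter to only AI-related spans (see Span Filter below)
      spanFilter: isOpenInferenceSpan,
    }),
  ],
});

// Register the provider as the global tracer provider
provider.register();
```

Import this module once at application startup, before any AI SDK calls are made, so the processor is registered before spans are produced.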

Further Documentation

Span Filter

In some environments, enabling telemetry on the AI SDK will produce traces for more than just AI-related calls. Most commonly, you will see http spans for POST and GET requests.

The @arizeai/openinference-vercel package exports the isOpenInferenceSpan helper function to filter out these non-AI spans.

It is included in the examples above. Note that filtering spans this way may leave the remaining spans in AX orphaned (with no parent trace), because the root http spans are filtered out. As a result, you won't see any spans on the traces tab; if you find yourself in this situation, navigate to the spans tab to view your AI SDK spans.
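If you want to keep the http root spans as well, so traces retain their parents, you can supply your own predicate in place of isOpenInferenceSpan. The sketch below assumes spanFilter receives a ReadableSpan and returns a boolean, and the name-based http check is a hypothetical heuristic; adjust it to match the spans you actually see.

```typescript
import type { ReadableSpan } from "@opentelemetry/sdk-trace-base";
import { isOpenInferenceSpan } from "@arizeai/openinference-vercel";

// Keep AI (OpenInference) spans plus http request spans so that traces
// keep their root spans. The name prefixes are an assumption -- inspect
// your exported spans and adjust accordingly.
const spanFilter = (span: ReadableSpan): boolean =>
  isOpenInferenceSpan(span) ||
  span.name.startsWith("GET") ||
  span.name.startsWith("POST");
```

Pass this function as the spanFilter option of OpenInferenceSimpleSpanProcessor in place of isOpenInferenceSpan.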
