Vercel AI SDK Tracing (JS)
OpenInference Vercel
This package provides a set of utilities to ingest Vercel AI SDK (>= 3.3) spans into platforms like Arize and Phoenix.
Note: This package requires Vercel AI SDK version 3.3 or higher.
Installation
npm i --save @arizeai/openinference-vercel
You will also need to install OpenTelemetry packages into your project.
npm i --save @arizeai/openinference-semantic-conventions @opentelemetry/api @opentelemetry/exporter-trace-otlp-proto @opentelemetry/resources @opentelemetry/sdk-trace-node @opentelemetry/semantic-conventions
Usage
@arizeai/openinference-vercel provides a set of utilities to help you ingest Vercel AI SDK spans into OpenTelemetry-compatible platforms and works in conjunction with Vercel's AI SDK OpenTelemetry support. It works in typical Node.js projects as well as Next.js projects. This page describes usage within a Node.js project; for detailed usage instructions in Next.js, follow Vercel's guide on instrumenting Next.js.
To process your Vercel AI SDK spans, set up a typical OpenTelemetry instrumentation boilerplate file and add an OpenInferenceSimpleSpanProcessor or OpenInferenceBatchSpanProcessor to your OpenTelemetry configuration.
Note: These span processors do not handle the exporting of spans on their own, so you will need to pass an exporter as a parameter.
Here are two example instrumentation configurations:
1. Manual instrumentation config for a Node v23+ application.
2. A Next.js register function utilizing @vercel/otel (a sketch follows the Node example below).
// instrumentation.ts
// Node environment instrumentation
// Boilerplate imports
import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { resourceFromAttributes } from "@opentelemetry/resources";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { ATTR_SERVICE_NAME } from "@opentelemetry/semantic-conventions";
// OpenInference Vercel imports
import { SEMRESATTRS_PROJECT_NAME } from "@arizeai/openinference-semantic-conventions";
import { OpenInferenceSimpleSpanProcessor } from "@arizeai/openinference-vercel";
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.ERROR);
// e.g. http://localhost:6006
// e.g. https://app.phoenix.arize.com/s/<your-space>
const COLLECTOR_ENDPOINT = process.env.PHOENIX_COLLECTOR_ENDPOINT;
// The project name that may appear in your collector's interface
const SERVICE_NAME = "phoenix-vercel-ai-sdk-app";
export const provider = new NodeTracerProvider({
  resource: resourceFromAttributes({
    [ATTR_SERVICE_NAME]: SERVICE_NAME,
    [SEMRESATTRS_PROJECT_NAME]: SERVICE_NAME,
  }),
  spanProcessors: [
    // In production-like environments it is recommended to use
    // OpenInferenceBatchSpanProcessor instead
    new OpenInferenceSimpleSpanProcessor({
      exporter: new OTLPTraceExporter({
        url: `${COLLECTOR_ENDPOINT}/v1/traces`,
        // (optional) if connecting to a collector with authentication enabled
        headers: { Authorization: `Bearer ${process.env.PHOENIX_API_KEY}` },
      }),
    }),
  ],
});
provider.register();
console.log("Provider registered");
// Run this file before the rest of your program executes
// e.g. node --import ./instrumentation.ts index.ts
// or import it at the top of your application's entrypoint
// e.g. import "./instrumentation.ts";
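For the Next.js case, a minimal sketch of the register function is shown below. It assumes @vercel/otel's registerOTel config accepts custom span processors, and it reuses the same collector endpoint and API key environment variables as the Node example above:
// instrumentation.ts (Next.js)
import { registerOTel } from "@vercel/otel";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { OpenInferenceSimpleSpanProcessor } from "@arizeai/openinference-vercel";

export function register() {
  registerOTel({
    // Appears as the service/project name in your collector's interface
    serviceName: "phoenix-vercel-ai-sdk-app",
    spanProcessors: [
      new OpenInferenceSimpleSpanProcessor({
        exporter: new OTLPTraceExporter({
          url: `${process.env.PHOENIX_COLLECTOR_ENDPOINT}/v1/traces`,
          // (optional) if connecting to a collector with authentication enabled
          headers: { Authorization: `Bearer ${process.env.PHOENIX_API_KEY}` },
        }),
      }),
    ],
  });
}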
Now enable telemetry in your AI SDK calls by setting the isEnabled flag of the experimental_telemetry parameter to true.
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
const result = await generateText({
  model: openai("gpt-4o"),
  prompt: "Write a short story about a cat.",
  experimental_telemetry: { isEnabled: true },
});
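The AI SDK's telemetry options also accept a functionId and arbitrary metadata, which can help you distinguish call sites in your collector. A small example follows; the id and metadata values here are illustrative:
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const tagged = await generateText({
  model: openai("gpt-4o"),
  prompt: "Write a short story about a cat.",
  experimental_telemetry: {
    isEnabled: true,
    // Identifies this call site in the emitted spans
    functionId: "generate-cat-story",
    // Arbitrary key-value pairs attached as span attributes
    metadata: { userId: "user-123" },
  },
});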
Ensure your installed version of @opentelemetry/api matches the version installed by ai, otherwise the AI SDK will not emit traces to the TracerProvider that you configure. If you install ai before the other packages, dependency resolution in your package manager should install the correct version.
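To verify that only a single version of @opentelemetry/api is resolved in your project, you can inspect the dependency tree (shown here with npm; yarn and pnpm have equivalent commands):
npm ls @opentelemetry/api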
For details on Vercel AI SDK telemetry see the Vercel AI SDK Telemetry documentation.
Examples
To see an example go to the Next.js OpenAI Telemetry Example in the OpenInference repo.
For more information on Vercel OpenTelemetry support see the Vercel AI SDK Telemetry documentation.