OpenAI Node.js SDK
Instrument and observe OpenAI calls
This module provides automatic instrumentation for the OpenAI Node.js SDK, which may be used in conjunction with @opentelemetry/sdk-trace-node.
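Under the hood this is standard OpenTelemetry. For context, a minimal sketch of pairing OpenAIInstrumentation with a plain @opentelemetry/sdk-trace-node setup (without @arizeai/phoenix-otel) might look like the following; the OTLP endpoint and the SDK 2.x constructor shape (`spanProcessors` option) are assumptions:

```typescript
// Sketch: wire OpenAIInstrumentation into a plain OTel Node SDK setup.
// Assumes OTel JS SDK 2.x and a local Phoenix instance receiving OTLP
// traces at http://localhost:6006/v1/traces (both are assumptions).
import {
  NodeTracerProvider,
  SimpleSpanProcessor,
} from "@opentelemetry/sdk-trace-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

// Build a tracer provider that exports spans to Phoenix over OTLP
const provider = new NodeTracerProvider({
  spanProcessors: [
    new SimpleSpanProcessor(
      new OTLPTraceExporter({ url: "http://localhost:6006/v1/traces" })
    ),
  ],
});
provider.register();

// Attach the OpenAI auto-instrumentation to the global provider
registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation()],
});
```

The @arizeai/phoenix-otel `register` helper shown below wraps this boilerplate for you.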
Install
```shell
npm install --save @arizeai/openinference-instrumentation-openai \
  @arizeai/phoenix-otel \
  openai
```

Setup
To instrument your application, import and enable OpenAIInstrumentation.
Create the instrumentation.ts (or .js) file:
```typescript
// instrumentation.ts
import { register, registerInstrumentations } from "@arizeai/phoenix-otel";
import OpenAI from "openai";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

// Register Phoenix OTEL with automatic configuration
const provider = register({
  projectName: "openai-app",
});

// Manually instrument OpenAI for ESM projects
const instrumentation = new OpenAIInstrumentation();
instrumentation.manuallyInstrument(OpenAI);

registerInstrumentations({
  instrumentations: [instrumentation],
});

console.log("✅ OpenAI instrumentation registered");
```

Alternatively, pass the instrumentation directly to register and let it handle registration for you:

```typescript
// instrumentation.ts
import { register } from "@arizeai/phoenix-otel";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

// Register Phoenix OTEL with automatic instrumentation
const provider = register({
  projectName: "openai-app",
  instrumentations: [new OpenAIInstrumentation()],
});

console.log("✅ OpenAI instrumentation registered");
```

Configuration
The register function automatically reads from environment variables:
PHOENIX_COLLECTOR_ENDPOINT - Your Phoenix instance URL (defaults to http://localhost:6006)
PHOENIX_API_KEY - Your Phoenix API key for authentication
You can also configure these directly:
```typescript
const provider = register({
  projectName: "openai-app",
  url: "https://app.phoenix.arize.com",
  apiKey: process.env.PHOENIX_API_KEY,
});
```

Run OpenAI
Import the instrumentation.ts file first, then use OpenAI as usual.
```typescript
// main.ts
import "./instrumentation.ts";
import OpenAI from "openai";

// Set OPENAI_API_KEY in environment, or pass it in arguments
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

openai.chat.completions
  .create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Write a haiku." }],
  })
  .then((response) => {
    console.log(response.choices[0].message.content);
  })
  // Keep process alive for BatchSpanProcessor to flush traces
  .then(() => new Promise((resolve) => setTimeout(resolve, 6000)));
```

Run your application:
```shell
# Set your API keys
export OPENAI_API_KEY='your-openai-api-key'
export PHOENIX_COLLECTOR_ENDPOINT='http://localhost:6006' # or your Phoenix URL
export PHOENIX_API_KEY='your-phoenix-api-key'

# Run the application
node main.ts

# Or using --require flag
node --require ./instrumentation.ts main.ts
```

Observe
After setting up instrumentation and running your OpenAI application, traces will appear in the Phoenix UI for visualization and analysis.
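The example above keeps the process alive with a fixed 6-second timeout so buffered spans can be exported. An alternative sketch is to flush and shut down the tracer provider explicitly before exit; this assumes the provider returned by register() exposes the standard OpenTelemetry forceFlush() and shutdown() methods (an assumption about @arizeai/phoenix-otel's return type):

```typescript
// Sketch: flush spans explicitly instead of sleeping. Assumes
// `provider` (from register() in instrumentation.ts) implements the
// standard OTel TracerProvider forceFlush()/shutdown() methods.
async function shutdownTracing(): Promise<void> {
  await provider.forceFlush(); // export any spans still buffered
  await provider.shutdown();   // release exporter resources
}

// Call after your last OpenAI request completes, e.g.:
// await shutdownTracing();
```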