BeeAI

Automatic instrumentation for JS/TS applications built with BeeAI

This module provides automatic instrumentation for the BeeAI framework. It integrates with OpenTelemetry's Node tracing packages (@opentelemetry/sdk-node, which wraps @opentelemetry/sdk-trace-node) to collect and export telemetry data.

Installation

npm install --save @arizeai/openinference-instrumentation-beeai beeai-framework @ai-sdk/openai 

npm install --save @opentelemetry/sdk-node @opentelemetry/exporter-trace-otlp-http @opentelemetry/exporter-trace-otlp-grpc @grpc/grpc-js @opentelemetry/semantic-conventions @arizeai/openinference-semantic-conventions

Usage

To instrument your application, import and enable BeeAIInstrumentation.

Create an instrumentation.js file:

import { NodeSDK, node } from "@opentelemetry/sdk-node";
import { resourceFromAttributes } from "@opentelemetry/resources";
import { ATTR_SERVICE_NAME } from "@opentelemetry/semantic-conventions";
import { BeeAIInstrumentation } from "@arizeai/openinference-instrumentation-beeai";
import * as beeaiFramework from "beeai-framework";
import { Metadata } from "@grpc/grpc-js";
import { OTLPTraceExporter as GrpcOTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-grpc";

// Initialize the instrumentation manually
const beeAIInstrumentation = new BeeAIInstrumentation();

// Arize-specific: create gRPC metadata and add your auth headers
const metadata = new Metadata();
// Your Arize Space ID and API key, which can be found in the Arize UI
metadata.set('space_id', 'ARIZE SPACE ID');
metadata.set('api_key', 'ARIZE API KEY');

const provider = new NodeSDK({
  resource: resourceFromAttributes({
    [ATTR_SERVICE_NAME]: "beeai",
    "model_id": "ARIZE PROJECT NAME HERE"
  }),
  spanProcessors: [
    // Use BatchSpanProcessor for production use cases
    new node.SimpleSpanProcessor(
      new GrpcOTLPTraceExporter({
        url: "https://otlp.arize.com/v1",
        metadata,
      }),
    ),
  ],
  instrumentations: [beeAIInstrumentation],
});

await provider.start();

// Manually patch the BeeAI framework. This is needed when the module is loaded
// as ESM, since automatic patching only hooks CommonJS require() calls.
console.log("🔧 Manually instrumenting BeeAgent...");
beeAIInstrumentation.manuallyInstrument(beeaiFramework);
console.log("✅ BeeAgent manually instrumented.");

// eslint-disable-next-line no-console
console.log("👀 OpenInference initialized");
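Hardcoding credentials as above is fine for a quick test, but in practice you will usually read them from the environment. A minimal sketch of such a helper; the ARIZE_SPACE_ID and ARIZE_API_KEY variable names are assumptions for illustration, not names the SDK itself reads:

```typescript
// Sketch: pull Arize credentials from environment variables instead of
// hardcoding them. ARIZE_SPACE_ID / ARIZE_API_KEY are hypothetical names.
function arizeCredentials(env: Record<string, string | undefined> = process.env) {
  const spaceId = env.ARIZE_SPACE_ID;
  const apiKey = env.ARIZE_API_KEY;
  if (!spaceId || !apiKey) {
    throw new Error("Set ARIZE_SPACE_ID and ARIZE_API_KEY before starting the SDK");
  }
  return { spaceId, apiKey };
}

// Usage with the gRPC metadata from the example above:
// const { spaceId, apiKey } = arizeCredentials();
// metadata.set("space_id", spaceId);
// metadata.set("api_key", apiKey);
```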

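If you would rather avoid the gRPC dependency, the same spans can be sent over OTLP/HTTP with the @opentelemetry/exporter-trace-otlp-http package. A sketch of the exporter configuration; the exact HTTP endpoint path and header names here are assumptions, so confirm them against your Arize space settings:

```typescript
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

// Sketch: OTLP/HTTP exporter as an alternative to the gRPC exporter above.
// The "/v1/traces" path and the header names are assumptions; verify them
// in your Arize configuration before relying on this.
const httpExporter = new OTLPTraceExporter({
  url: "https://otlp.arize.com/v1/traces",
  headers: {
    space_id: "ARIZE SPACE ID",
    api_key: "ARIZE API KEY",
  },
});
// Pass httpExporter to the span processor in place of GrpcOTLPTraceExporter.
```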
Import the instrumentation file, then use the BeeAI framework as usual

import "./instrumentation.js"; // import the file above 
import { ReActAgent } from "beeai-framework/agents/react/agent";
import { TokenMemory } from "beeai-framework/memory/tokenMemory";
import { DuckDuckGoSearchTool } from "beeai-framework/tools/search/duckDuckGoSearch";
import { OpenMeteoTool } from "beeai-framework/tools/weather/openMeteo";
import { OpenAIChatModel } from "beeai-framework/adapters/openai/backend/chat";

const llm = new OpenAIChatModel(
  "gpt-4o",
  {},
  // {
  //   baseURL: "OPENAI_BASE_URL",
  //   apiKey: "OPENAI_API_KEY",
  //   organization: "OPENAI_ORGANIZATION",
  //   project: "OPENAI_PROJECT",
  // },
);

const agent = new ReActAgent({
  llm: llm,
  memory: new TokenMemory(),
  tools: [new DuckDuckGoSearchTool(), new OpenMeteoTool()],
});

const response = await agent.run({
  prompt: "What's the current weather in Amsterdam?",
});

console.log(`Agent 🤖 : `, response.result.text);
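Spans are exported asynchronously, so a short-lived script can exit before buffered spans reach Arize. A sketch of a clean shutdown, assuming you add `export { provider };` to instrumentation.js so the NodeSDK instance is importable (the original file does not export it):

```typescript
// Sketch: flush pending spans before the process exits. Assumes
// instrumentation.js exports the NodeSDK instance as `provider`.
import { provider } from "./instrumentation.js";

// NodeSDK.shutdown() flushes registered span processors and shuts down
// the exporters, so nothing buffered is lost on exit.
await provider.shutdown();
```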
