Mastra

Instrument AI agents and workflows built with Mastra's TypeScript framework

Mastra is an open-source TypeScript agent framework that provides the primitives you need to build AI applications and features. With Mastra, you can build AI agents with memory and tool-calling capabilities, and chain LLM calls into deterministic workflows.

Tracing Mastra Applications with Arize

Create your Project

If you haven't already, create a project with Mastra:

npm create mastra@latest
# answer the prompts, choosing to include agents, tools, and the example when asked

cd chosen-project-name

Installation

Install the OpenInference instrumentation package for Mastra:

npm install @arizeai/openinference-mastra

Environment Variable Configuration

Add your Arize Space ID and API Key, along with any model API keys you're using, to your project's .env file:

ARIZE_SPACE_ID=YOUR_ARIZE_SPACE_ID
ARIZE_API_KEY=YOUR_ARIZE_API_KEY
OPENAI_API_KEY=....

Basic Configuration

Configure Mastra with telemetry settings to send traces directly to Arize:

import { Mastra } from "@mastra/core/mastra";
import { createLogger } from "@mastra/core/logger";
import { LibSQLStore } from "@mastra/libsql";
import {
  isOpenInferenceSpan,
  OpenInferenceOTLPTraceExporter,
} from "@arizeai/openinference-mastra";

import { weatherAgent } from "./agents/weather-agent";

export const mastra = new Mastra({  
  agents: { weatherAgent },
  storage: new LibSQLStore({
    url: ":memory:",
  }),
  logger: createLogger({
    name: "Mastra",
    level: "info",
  }),
  telemetry: {
    enabled: true,
    serviceName: "your-agent-name",
    export: {
      type: "custom",
      exporter: new OpenInferenceOTLPTraceExporter({
        url: "https://otlp.arize.com/v1/traces",
        headers: {
          "space_id": process.env.ARIZE_SPACE_ID!,
          "api_key": process.env.ARIZE_API_KEY!,
        },
        spanFilter: isOpenInferenceSpan,
      }),
    },
  },
});
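
The configuration above imports a weatherAgent from ./agents/weather-agent. If you included the example when scaffolding the project, that agent is generated for you; otherwise, a minimal sketch of such an agent might look like the following (the model and instructions here are illustrative assumptions, not part of the Arize setup):

import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";

// Example agent; the name, instructions, and model are placeholder choices.
export const weatherAgent = new Agent({
  name: "Weather Agent",
  instructions:
    "You are a helpful assistant that answers questions about the weather.",
  model: openai("gpt-4o-mini"),
});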

Run your Mastra App

npm run dev
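
With the dev server running, interact with your agent to generate traces, for example through Mastra's local playground or by calling the local API directly. A sketch of such a call is below; the port (4111) and route reflect Mastra's defaults and may differ in your setup:

curl -X POST http://localhost:4111/api/agents/weatherAgent/generate \
  -H "Content-Type: application/json" \
  -d '{"messages": ["What is the weather in Paris?"]}'

Traces from the resulting agent run should then appear in your Arize space under the serviceName you configured.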

What Gets Automatically Traced

Once telemetry is enabled, Mastra's tracing automatically captures:

  • Agent Operations: All agent generation, streaming, and interaction calls

  • LLM Interactions: Complete model calls with input/output messages and metadata

  • Tool Executions: Function calls made by agents with parameters and results

  • Workflow Runs: Step-by-step workflow execution with timing and dependencies

  • Memory Operations: Agent memory queries, updates, and retrieval operations

All traces follow OpenTelemetry standards and include relevant metadata such as model parameters, token usage, execution timing, and error details.
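
As a concrete example, a tool attached to an agent produces a tool execution span with its parameters and result on every call. Here is a minimal sketch of such a tool, assuming Mastra's createTool helper and a zod input schema; the tool's id, schema, and logic are hypothetical:

import { createTool } from "@mastra/core/tools";
import { z } from "zod";

// Hypothetical tool for illustration; each call shows up as a tool
// execution span (with input and output) in the exported trace.
export const weatherTool = createTool({
  id: "get-weather",
  description: "Get the current weather for a location",
  inputSchema: z.object({
    location: z.string().describe("City name"),
  }),
  execute: async ({ context }) => {
    // Replace with a real weather API call.
    return { location: context.location, temperatureC: 21 };
  },
});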
