Available in @arizeai/openinference-tanstack-ai 0.1.0+

Phoenix now ships an OpenInference middleware for TanStack AI. Plug openInferenceMiddleware() into any chat() call to capture an AGENT span for the run, an LLM span for each model turn, and a TOOL span for every executed tool call. This works across both streaming and non-streaming flows and with any TanStack AI provider adapter.
[Screenshot: TanStack AI traces in Phoenix]
npm install --save @arizeai/openinference-tanstack-ai @tanstack/ai @tanstack/ai-openai
import { chat } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";
import { openInferenceMiddleware } from "@arizeai/openinference-tanstack-ai";

const stream = chat({
  adapter: openaiText("gpt-4o-mini"),
  messages: [{ role: "user", content: "What is OpenInference?" }],
  middleware: [openInferenceMiddleware()],
});
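The middleware records spans through OpenTelemetry, so a tracer provider with an exporter pointed at Phoenix must be registered before traces show up in the UI. Below is a minimal Node sketch of the usual OpenInference JS setup, assuming the middleware picks up the global tracer provider and that Phoenix is listening on its default collector endpoint; see the tracing docs linked below for the supported configuration.

import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { SimpleSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";

// Assumption: a local Phoenix instance is accepting OTLP traces at its
// default endpoint. Point the URL at your own collector if it differs.
const provider = new NodeTracerProvider({
  spanProcessors: [
    new SimpleSpanProcessor(
      new OTLPTraceExporter({ url: "http://localhost:6006/v1/traces" })
    ),
  ],
});
provider.register(); // register as the global tracer provider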
This integration is brand new. If you run into issues or have ideas for improvements, please reach out via the OpenInference repo — we’d love your feedback.

TanStack AI Tracing Docs

Setup, usage, and a tool-calling example.

TanStack AI

Learn more about TanStack AI.