Available in Phoenix 11.0+
Phoenix can now track token-based costs for LLM runs automatically. Costs are calculated from token counts and model pricing data, then rolled up to the trace and project levels for comprehensive analysis.
New Features:
Automatic calculation of token-based costs using Phoenix’s built-in model pricing table.
Support for custom pricing configurations in Settings > Models when needed.
Token counts and model information are captured automatically when using OpenInference auto-instrumentation with OpenAI, Anthropic, and other supported SDKs (see the first example below).
For manual instrumentation, token count attributes can be included on spans to enable cost tracking.
OpenTelemetry users can use the OpenInference semantic conventions to include token counts in LLM spans (see the second example below).
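As a first example, here is a minimal sketch of auto-instrumenting the OpenAI SDK so Phoenix receives token counts and model names automatically. It assumes the `arize-phoenix-otel` and `openinference-instrumentation-openai` packages are installed, a Phoenix instance is reachable at the default collector endpoint, and the project name is illustrative.

```python
from openai import OpenAI
from openinference.instrumentation.openai import OpenAIInstrumentor
from phoenix.otel import register

# Register an OpenTelemetry tracer provider that exports spans to Phoenix.
# "cost-tracking-demo" is an illustrative project name.
tracer_provider = register(project_name="cost-tracking-demo")

# Instrument the OpenAI SDK so the model name and token counts are
# recorded on each LLM span automatically.
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
# Phoenix derives the cost of this call from the captured token counts
# and its model pricing table.
```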
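As a second example, this sketch shows manual instrumentation: an LLM span annotated by hand with OpenInference semantic-convention attributes so Phoenix can compute its cost. It assumes a tracer provider has already been registered with Phoenix (for instance via `phoenix.otel.register` as above); the span name, model name, and token numbers are placeholders.

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# Manually annotate an LLM span with OpenInference attributes.
# The numeric values are placeholders; in practice they come from
# the provider's usage data for the call.
with tracer.start_as_current_span("llm_call") as span:
    span.set_attribute("openinference.span.kind", "LLM")
    span.set_attribute("llm.model_name", "gpt-4o-mini")
    span.set_attribute("llm.token_count.prompt", 512)
    span.set_attribute("llm.token_count.completion", 128)
    span.set_attribute("llm.token_count.total", 640)
```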