Groq Tracing
Instrument LLM applications built with Groq using OpenInference and view traces in Arize.
Groq provides low-latency, lightning-fast inference for AI models. OpenInference supports instrumenting Groq API calls, including role types such as system, user, and assistant messages, as well as tool use. Traces can be viewed in Arize. To get started, create a free GroqCloud account and generate a Groq API key in the Groq console.
Launch Arize
To get started, sign up for a free Arize account and get your Space ID and API Key.
Install
pip install openinference-instrumentation-groq groq arize-otel
API Key Setup
Set the GROQ_API_KEY environment variable. You can obtain your API key from the Groq console.
export GROQ_API_KEY='your_groq_api_key'
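If you are working in a notebook rather than a shell, you can set the variable in-process instead. The key value below is a placeholder, not a real credential:

```python
import os

# Set the API key for the current process only (placeholder value).
# The Groq client will pick this up automatically.
os.environ["GROQ_API_KEY"] = "your_groq_api_key"
```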
Setup
Connect to Arize using the register function and instrument the Groq client.
# Import open-telemetry dependencies
from arize.otel import register

# Setup OTel via our convenience function
tracer_provider = register(
    space_id="your-space-id",  # in app space settings page
    api_key="your-api-key",  # in app space settings page
    project_name="your-project-name",  # name this whatever you would like
)

# Import the instrumentor from OpenInference
from openinference.instrumentation.groq import GroqInstrumentor

# Instrument the Groq client
GroqInstrumentor().instrument(tracer_provider=tracer_provider)
Run Groq
Here is a simple Groq application, now instrumented. The client reads GROQ_API_KEY from your environment.
import os
from groq import Groq

client = Groq(
    # This is the default and can be omitted if GROQ_API_KEY is set
    api_key=os.environ.get("GROQ_API_KEY"),
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Explain the importance of low latency LLMs",
        }
    ],
    model="mixtral-8x7b-32768",
)
print(chat_completion.choices[0].message.content)
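Since the instrumentor also captures system and assistant role messages (as noted above), a multi-turn request needs no extra code; the conversation below is an illustrative sketch, and the API call is guarded so it only runs when a key is available:

```python
import os

# Multi-turn conversation: system, user, and assistant role messages
# are all captured on the LLM span by the instrumentor.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What makes low latency inference useful?"},
]

# The request itself is unchanged -- instrumentation is transparent.
if os.environ.get("GROQ_API_KEY"):
    from groq import Groq

    client = Groq()
    chat_completion = client.chat.completions.create(
        messages=messages,
        model="mixtral-8x7b-32768",
    )
    print(chat_completion.choices[0].message.content)
```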
Observe
Now that you have tracing set up, all invocations will be streamed to your Arize account for observability and evaluation.