VertexAI Tracing
Instrument LLM calls made using VertexAI's SDK via the VertexAIInstrumentor.
The VertexAI SDK can be instrumented using the openinference-instrumentation-vertexai package.
Launch Phoenix
Sign up for Phoenix:
Sign up for an Arize Phoenix account at https://app.phoenix.arize.com/login
Click Create Space, then follow the prompts to create and launch your space.
Install packages:
pip install arize-phoenix-otel
Set your Phoenix endpoint and API key:
From your new Phoenix Space:
Create your API key from the Settings page.
Copy your Hostname from the Settings page.
In your code, set your endpoint and API key:
import os
os.environ["PHOENIX_API_KEY"] = "ADD YOUR PHOENIX API KEY"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "ADD YOUR PHOENIX HOSTNAME"
# If you created your Phoenix Cloud instance before June 24th, 2025,
# you also need to set the API key as a header:
# os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={os.getenv('PHOENIX_API_KEY')}"
Install
pip install openinference-instrumentation-vertexai vertexai
Setup
See Google's guide on setting up your environment for the Google Cloud AI Platform. You can also store your Project ID in the CLOUD_ML_PROJECT_ID
environment variable.
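Reading the Project ID from the environment keeps project configuration out of your source code. A minimal sketch of that pattern follows; the `get_project_id` helper is illustrative, not part of the VertexAI SDK:

```python
import os

def get_project_id(default=None):
    """Return the GCP Project ID from CLOUD_ML_PROJECT_ID, or a fallback."""
    project_id = os.environ.get("CLOUD_ML_PROJECT_ID", default)
    if project_id is None:
        raise RuntimeError("Set the CLOUD_ML_PROJECT_ID environment variable")
    return project_id

# Later in your app:
# vertexai.init(project=get_project_id(), location="us-central1")
```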
Use the register function to connect your application to Phoenix:
from phoenix.otel import register
# configure the Phoenix tracer
tracer_provider = register(
    project_name="my-llm-app",  # Default is 'default'
    auto_instrument=True,  # Auto-instrument your app based on installed OI dependencies
)
Run VertexAI
import vertexai
from vertexai.generative_models import GenerativeModel
vertexai.init(location="us-central1")
model = GenerativeModel("gemini-1.5-flash")
print(model.generate_content("Why is the sky blue?").text)
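Vertex calls can fail transiently (rate limits, momentary service errors), and each retried attempt will appear as its own trace in Phoenix. A generic retry wrapper, sketched below under illustrative backoff settings (the `with_retries` helper is not part of either SDK), can make the call above more robust:

```python
import time

def with_retries(call, attempts=3, backoff=1.0):
    """Invoke `call`, retrying with exponential backoff on any exception."""
    for i in range(attempts):
        try:
            return call()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts; surface the error
            time.sleep(backoff * (2 ** i))  # wait 1s, 2s, ... between tries

# Usage with the model above:
# print(with_retries(lambda: model.generate_content("Why is the sky blue?")).text)
```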
Observe
Now that tracing is set up, all invocations of Vertex models will be streamed to your Phoenix instance for observability and evaluation.