Arize Prompt Hub SDK

The Arize Prompt Hub SDK provides a Python interface for managing and using prompts with the Arize AI platform. This SDK allows you to create, retrieve, update, and use prompts with various LLM providers.
This API is currently in Early Access. While we’re excited to share it with you, please be aware that it may undergo significant changes, including breaking changes, as we continue development. We’ll do our best to minimize disruptions, but cannot guarantee long-term backward compatibility during this phase. We value your feedback as we refine and improve the API experience.

Overview

Prompt Hub enables you to:
  • Create and store prompt templates in your Arize space
  • Retrieve prompts for use in your applications
  • Update existing prompts with new versions
  • Track prompt versions and changes

Quick Start

pip install "arize[PromptHub]"

OpenAI Example

from arize.experimental.prompt_hub import ArizePromptClient, Prompt, LLMProvider
from openai import OpenAI

# Initialize the client with your Arize credentials
prompt_client = ArizePromptClient(
    space_id='YOUR_SPACE_ID',
    api_key='YOUR_API_KEY'
)

# Create a prompt template
new_prompt = Prompt(
    name='customer_service_greeting',
    messages=[
        {
            "role": "system",
            "content": "You are a helpful customer service assistant."
        },
        {
            "role": "user",
            "content": "Customer query: {query}"
        }
    ],
    provider=LLMProvider.OPENAI,
    model_name="gpt-4o"
)

# Save the prompt to Arize Prompt Hub
prompt_client.push_prompt(new_prompt)

# Use the prompt with an LLM
oai_client = OpenAI(api_key="YOUR_OPENAI_API_KEY")
prompt_vars = {"query": "When will my order arrive?"}
formatted_prompt = new_prompt.format(prompt_vars)
response = oai_client.chat.completions.create(**formatted_prompt)
print(response.choices[0].message.content)

Vertex AI Example

from arize.experimental.prompt_hub import ArizePromptClient, Prompt
import vertexai
from vertexai.generative_models import GenerativeModel
from google.oauth2 import service_account

# Load credentials from the downloaded file
credentials = service_account.Credentials.from_service_account_file('path_to_your_creds.json')

# Initialize Vertex AI
project_id = "my-ai-project"  # Your GCP project ID, from the service account JSON file
vertexai.init(project=project_id, location="us-central1", credentials=credentials)

prompt_client = ArizePromptClient(
    space_id='YOUR_SPACE_ID',
    api_key='YOUR_API_KEY'
)

# Pull the prompt created in the OpenAI example above
prompt = prompt_client.pull_prompt("customer_service_greeting")
prompt_vars = {"query": "Where is my order?"}  # Must match the {query} placeholder in the template
vertex_prompt = prompt.format(prompt_vars)
model = GenerativeModel(vertex_prompt.model_name)
response = model.generate_content(vertex_prompt.messages)
print(response.text)

Error Handling and Fallback Strategies

When working with the Prompt Hub API in production environments, it’s important to implement fallback mechanisms in case the API becomes temporarily unavailable.

Local Cache Fallback

You can implement a local cache of your prompts to ensure your application continues to function even if the Prompt Hub API is unreachable:
import json
import os
from arize.experimental.prompt_hub import ArizePromptClient, Prompt, LLMProvider

class PromptManager:
    def __init__(self, space_id, api_key, cache_dir=".prompt_cache"):
        self.client = ArizePromptClient(space_id=space_id, api_key=api_key)
        self.cache_dir = cache_dir
        os.makedirs(cache_dir, exist_ok=True)
    
    def get_prompt(self, prompt_name):
        cache_path = os.path.join(self.cache_dir, f"{prompt_name}.json")
        
        try:
            # First try to pull the prompt from the API
            prompt = self.client.pull_prompt(prompt_name)
            
            # If successful, update the cache
            self._save_to_cache(prompt, cache_path)
            return prompt
            
        except Exception as e:
            print(f"Error accessing Prompt Hub API: {e}")
            print("Attempting to use cached prompt...")
            
            # Fall back to cached version if available
            if os.path.exists(cache_path):
                return self._load_from_cache(cache_path)
            else:
                raise ValueError(f"No cached version of prompt '{prompt_name}' available")
    
    def _save_to_cache(self, prompt, cache_path):
        # Serialize the prompt to JSON; store the provider enum by value so json.dump succeeds
        data = dict(prompt.__dict__)
        data["provider"] = prompt.provider.value
        # Note: other enum-valued fields (e.g. input_variable_format) may need the same treatment
        with open(cache_path, 'w') as f:
            json.dump(data, f)

    def _load_from_cache(self, cache_path):
        # Load and rebuild the prompt, restoring the provider enum from its stored value
        with open(cache_path, 'r') as f:
            prompt_data = json.load(f)
        prompt_data["provider"] = LLMProvider(prompt_data["provider"])
        return Prompt(**prompt_data)

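A minimal usage sketch of the PromptManager above, reusing the prompt from the Quick Start:

manager = PromptManager(space_id='YOUR_SPACE_ID', api_key='YOUR_API_KEY')

# Served from the API when reachable, from the local cache otherwise
prompt = manager.get_prompt("customer_service_greeting")
formatted_prompt = prompt.format({"query": "When will my order arrive?"})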
Best Practices for Resilient Applications

  1. Always cache prompts after retrieval: Update your local cache whenever you successfully retrieve a prompt.
def get_and_cache_prompt(prompt_manager, prompt_name):
    prompt = prompt_manager.get_prompt(prompt_name)
    
    # Additional logic to ensure the prompt is cached
    cache_path = os.path.join(prompt_manager.cache_dir, f"{prompt_name}.json")
    prompt_manager._save_to_cache(prompt, cache_path)
    
    return prompt
  2. Implement exponential backoff: When the API is unavailable, retry with exponential backoff and jitter:
import time
import random

def get_prompt_with_retry(client, prompt_name, max_retries=3):
    for attempt in range(max_retries):
        try:
            return client.pull_prompt(prompt_name)
        except Exception as e:
            if attempt == max_retries - 1:
                # On last attempt, re-raise the exception
                raise
            
            # Calculate backoff time with jitter
            backoff_time = (2 ** attempt) + random.uniform(0, 1)
            print(f"Error accessing API: {e}. Retrying in {backoff_time:.2f} seconds...")
            time.sleep(backoff_time)
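For example:

client = ArizePromptClient(space_id='YOUR_SPACE_ID', api_key='YOUR_API_KEY')
prompt = get_prompt_with_retry(client, "customer_service_greeting")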
  3. Periodically sync your cache: Implement a background job to periodically sync your cache with the latest prompts from the API.
import threading

class PromptSyncManager:
    def __init__(self, prompt_manager, sync_interval=3600):  # Default: sync every hour
        self.prompt_manager = prompt_manager
        self.sync_interval = sync_interval
        self.prompt_names = []
        self.stop_event = threading.Event()

    def start_sync(self, prompt_names):
        self.prompt_names = prompt_names
        threading.Thread(target=self._sync_job, daemon=True).start()

    def stop_sync(self):
        self.stop_event.set()

    def _sync_job(self):
        while not self.stop_event.is_set():
            for prompt_name in self.prompt_names:
                try:
                    self.prompt_manager.get_prompt(prompt_name)  # This will update the cache
                except Exception as e:
                    print(f"Failed to sync prompt '{prompt_name}': {e}")
            # Event.wait() responds immediately to stop_sync(), unlike time.sleep()
            self.stop_event.wait(self.sync_interval)
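For example, you might start syncing the prompts your application depends on at startup and signal the thread to stop on shutdown:

manager = PromptManager(space_id='YOUR_SPACE_ID', api_key='YOUR_API_KEY')
sync_manager = PromptSyncManager(manager, sync_interval=1800)  # Sync every 30 minutes
sync_manager.start_sync(["customer_service_greeting", "product_recommendation"])

# ... application runs ...

sync_manager.stop_sync()  # Signal the background thread to exit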

Core Components

ArizePromptClient

The main client for interacting with the Arize Prompt Hub.
client = ArizePromptClient(
    space_id='YOUR_SPACE_ID',
    api_key='YOUR_API_KEY',
    base_url='https://app.arize.com'  # Optional, defaults to this value
)

Prompt

Represents a prompt template with associated metadata.
prompt = Prompt(
    name='prompt_name',                  # Required: Name of the prompt
    messages=[...],                      # Required: List of message dictionaries
    provider=LLMProvider.OPENAI,         # Required: LLM provider
    model_name="gpt-4o",                 # Required: Model name
    description="Description",           # Optional: Description of the prompt
    tags=["tag1", "tag2"],              # Optional: Tags for categorization
    input_variable_format=PromptInputVariableFormat.F_STRING  # Optional: Format for variables
)

LLMProvider

Enum for supported LLM providers:
  • LLMProvider.OPENAI: OpenAI models
  • LLMProvider.AZURE_OPENAI: Azure OpenAI models
  • LLMProvider.AWS_BEDROCK: AWS Bedrock models
  • LLMProvider.VERTEX_AI: Google Vertex AI models
  • LLMProvider.CUSTOM: Custom provider
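The same Prompt constructor shown above accepts any of these providers. A sketch targeting Vertex AI (the prompt and model names here are illustrative, not prescribed by the SDK):

vertex_prompt = Prompt(
    name='vertex_summarizer',       # Hypothetical prompt name
    messages=[{"role": "user", "content": "Summarize: {text}"}],
    provider=LLMProvider.VERTEX_AI,
    model_name="gemini-1.5-pro"     # Use any model available in your project
)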

PromptInputVariableFormat

Enum for specifying how input variables are formatted in prompts:
  • PromptInputVariableFormat.F_STRING: Single curly braces ({variable_name})
  • PromptInputVariableFormat.MUSTACHE: Double curly braces ({{variable_name}})
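Both formats resolve against the same variables dictionary passed to format(); only the placeholder syntax in the message content changes:

# F_STRING template:  "Hello {name}"
# MUSTACHE template:  "Hello {{name}}"
# Either way, prompt.format({"name": "Alice"}) fills the placeholder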

API Reference

ArizePromptClient Methods

pull_prompts()

Retrieves all prompts in the space.
prompts = client.pull_prompts()
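Each entry is a Prompt object, so you can, for example, list the available names (the name attribute is shown in the constructor above):

for prompt in prompts:
    print(prompt.name)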

pull_prompt(prompt_name, version_id=None, version_label=None)

Retrieves a specific prompt by name, version ID, or version label. Returns the prompt with the id field populated, which is REQUIRED for updating the prompt. Parameters:
  • prompt_name: The name of the prompt to retrieve
  • version_id: (Optional) Specific version ID to retrieve
  • version_label: (Optional) Version label to retrieve (e.g., “production”, “staging”)
Examples:
# Get the latest version by name
prompt = client.pull_prompt("my_prompt_name")
# After pulling, prompt.id will be populated (not None)

# Get a specific version by label (e.g., "production")
prompt = client.pull_prompt(prompt_name="my_prompt_name", version_label="production")

# Get a specific version by ID
prompt = client.pull_prompt(version_id="version_id_here")
Version labels are tags that can be applied to prompt versions (like “production”, “staging”, “v1.0”) to make them easier to identify and retrieve. Version labels are currently managed through the Arize UI, not the Python SDK.

push_prompt(prompt, commit_message=None)

Creates a new prompt or updates an existing one by creating a new version. Behavior:
  • If prompt.id is None: Creates a NEW prompt (even if name matches an existing prompt)
  • If prompt.id is set: Creates a NEW VERSION of the existing prompt (keeps same name, preserves version history)
# Create new prompt (id is None)
new_prompt = Prompt(name="my_prompt", messages=[...], provider=..., model_name="...")
client.push_prompt(new_prompt)

# Update existing prompt - MUST pull first to get the id!
existing_prompt = client.pull_prompt("my_prompt")  # This populates existing_prompt.id
existing_prompt.messages = [...]  # Modify
client.push_prompt(existing_prompt, commit_message="Updated system message")  # Creates new version

Prompt Methods

format(variables)

Formats the prompt with the given variables for use with an LLM provider.
variables = {"query": "Where is my order?", "customer_name": "John"}
formatted_prompt = prompt.format(variables)
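As in the Quick Start, the formatted result can be passed straight to the provider's client; for OpenAI it unpacks directly into the chat completion call:

response = oai_client.chat.completions.create(**formatted_prompt)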

Examples

Creating and Using a Prompt

from arize.experimental.prompt_hub import ArizePromptClient, Prompt, LLMProvider
from openai import OpenAI

# Initialize clients
prompt_client = ArizePromptClient(
    space_id='YOUR_SPACE_ID',
    api_key='YOUR_API_KEY'
)
oai_client = OpenAI(api_key="YOUR_OPENAI_API_KEY")

# Create a prompt
new_prompt = Prompt(
    name='product_recommendation',
    description="Recommends products based on user preferences",
    messages=[
        {
            "role": "system",
            "content": "You are a product recommendation assistant."
        },
        {
            "role": "user",
            "content": "Customer preferences: {preferences}\nBudget: {budget}"
        }
    ],
    provider=LLMProvider.OPENAI,
    model_name="gpt-4o",
    tags=["recommendation", "e-commerce"]
)

# Save to Prompt Hub
prompt_client.push_prompt(new_prompt)

# Use the prompt
prompt_vars = {
    "preferences": "I like outdoor activities and photography",
    "budget": "$500"
}
formatted_prompt = new_prompt.format(prompt_vars)
response = oai_client.chat.completions.create(**formatted_prompt)
print(response.choices[0].message.content)

Updating an Existing Prompt (Creating a New Version with the Same Name)

IMPORTANT: To update a prompt and create a new version while keeping the same name, you must follow these steps:
  1. Pull the existing prompt first - This populates the id field, which is required for versioning
  2. Modify the prompt object - Change messages, model, description, or any other properties
  3. Push the updated prompt - This creates a new version with the same name
Here’s why this workflow is necessary:
  • When you call push_prompt() with a prompt that has id=None, it creates a NEW prompt (even if the name matches)
  • When you call push_prompt() with a prompt that has an id set, it creates a NEW VERSION of the existing prompt (keeping the same name)
  • The pull_prompt() method is what populates the id field from the Prompt Hub
Complete Example:
from arize.experimental.prompt_hub import ArizePromptClient, Prompt, LLMProvider

prompt_client = ArizePromptClient(
    space_id='YOUR_SPACE_ID',
    api_key='YOUR_API_KEY'
)

# Step 1: Pull the existing prompt (CRITICAL: This populates the id field)
existing_prompt = prompt_client.pull_prompt(prompt_name="customer_service_greeting")

# Verify the id is populated (for debugging)
print(f"Prompt ID: {existing_prompt.id}")  # Should not be None

# Step 2: Modify the prompt content, model, or other properties
existing_prompt.messages = [
    {
        "role": "system",
        "content": "You are a helpful and friendly customer service assistant."
    },
    {
        "role": "user",
        "content": "Customer query: {query}"
    }
]
existing_prompt.model_name = "gpt-4o-mini"  # Change model
existing_prompt.description = "Updated greeting prompt with friendlier tone"

# Step 3: Push the updated prompt (creates a new version with same name)
prompt_client.push_prompt(
    existing_prompt,
    commit_message="Improved friendliness and switched to gpt-4o-mini"
)

# The prompt name "customer_service_greeting" remains the same
# A new version is created in the Prompt Hub
# All previous versions remain accessible
Note on Version Labels: Version labels (like “production”, “staging”, “v1.0”) are tags that can be applied to prompt versions to make them easier to identify and retrieve. You can retrieve a prompt by version label using:
# Retrieve a prompt version by its label
prompt = prompt_client.pull_prompt(
    prompt_name="customer_service_greeting",
    version_label="production"
)
Version labels are currently managed through the Arize UI. After creating a new version with push_prompt(), you can add or update version labels in the Prompt Hub interface.
Common Mistakes to Avoid:
# ❌ WRONG: This creates a NEW prompt (not a new version)
new_prompt = Prompt(
    name="customer_service_greeting",  # Same name, but id is None
    messages=[...],
    provider=LLMProvider.OPENAI,
    model_name="gpt-4o"
)
prompt_client.push_prompt(new_prompt)  # Creates duplicate prompt!

# ✅ CORRECT: Pull first, then modify, then push
existing_prompt = prompt_client.pull_prompt(prompt_name="customer_service_greeting")
existing_prompt.messages = [...]  # Modify
prompt_client.push_prompt(existing_prompt)  # Creates new version

Using Different Variable Formats

from arize.experimental.prompt_hub import PromptInputVariableFormat

# Using Mustache format (double curly braces)
mustache_prompt = Prompt(
    name="mustache_example",
    messages=[
        {
            "role": "user",
            "content": "Hello {{name}}, how can I help you today?"
        }
    ],
    provider=LLMProvider.OPENAI,
    model_name="gpt-4o",
    input_variable_format=PromptInputVariableFormat.MUSTACHE
)

# Format and use the prompt
formatted = mustache_prompt.format({"name": "Alice"})
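# The rendered user message should read: "Hello Alice, how can I help you today?"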

Troubleshooting Common Issues

  1. Authentication Errors: Ensure your space_id and api_key are correct
  2. Prompt Not Found: Check that the prompt name matches exactly (case-sensitive); you can list the available names with pull_prompts(), as sketched after this list
  3. Formatting Errors: Verify that your variables match the placeholders in the prompt
  4. Update Not Working / Creating Duplicate Prompts:
    • Make sure you called pull_prompt() first to populate the id field
    • If prompt.id is None when you push, it will create a NEW prompt instead of a new version
    • Always follow the workflow: pull → modify → push
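A quick way to diagnose a “prompt not found” error is to list what actually exists in the space, using pull_prompts() from the API reference above:

client = ArizePromptClient(space_id='YOUR_SPACE_ID', api_key='YOUR_API_KEY')
prompts = client.pull_prompts()
print([p.name for p in prompts])  # Compare against the exact (case-sensitive) name you requested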