The Stream Client is used for real-time logging of model predictions, providing lower latency than batch logging. In v7 it is initialized as:

from arize.api import Client
client = Client(...)

log()

In v8, the v7 client.log() call is replaced by client.ml.log_stream(), as sketched below.
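
A minimal sketch of the change in call shape. It assumes a client object has already been constructed for each version (how the v8 client is built is not covered by this guide) and that ModelTypes and Environments are imported as in the example further down.

# v7: space_id was supplied when the client was created
future = client.log(
    model_id="my-model",
    model_type=ModelTypes.BINARY_CLASSIFICATION,
    environment=Environments.PRODUCTION,
)

# v8: space_id is passed on every call and model_id becomes model_name
client.ml.log_stream(
    space_id="your-space-id",
    model_name="my-model",
    model_type=ModelTypes.BINARY_CLASSIFICATION,
    environment=Environments.PRODUCTION,
)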

Parameter Reference

This table maps every parameter between v7 and v8, noting which were renamed, added, or left unchanged.
| Parameter | v7 | v8 | Changes |
| --- | --- | --- | --- |
| space_id | Client init | Required per call | Must pass explicitly |
| model_id | Required | Required | Renamed to model_name |
| model_type | Required | Required | |
| environment | Required | Required | |
| model_version | Optional | Optional | |
| prediction_id | Optional | Optional | |
| prediction_timestamp | Optional | Optional | |
| prediction_label | Optional | Optional | |
| actual_label | Optional | Optional | |
| features | Optional | Optional | |
| embedding_features | Optional | Optional | |
| shap_values | Optional | Optional | |
| tags | Optional | Optional | |
| batch_id | Optional | Optional | |
| prompt | Optional | Optional | |
| response | Optional | Optional | |
| prompt_template | Optional | Optional | |
| prompt_template_version | Optional | Optional | |
| llm_model_name | Optional | Optional | |
| llm_params | Optional | Optional | |
| llm_run_metadata | Optional | Optional | |
| timeout | N/A | ✅ Optional | New parameter for request timeout |

Side-by-Side Comparison

The v7 example below shows both client initialization and streaming a model prediction; a sketch of the v8 equivalent follows it.
from arize.api import Client
from arize.utils.types import Environments, ModelTypes

# v7 client initialization (space_id is set here)
client = Client(
    api_key="your-api-key",
    space_id="your-space-id"
)

# Streaming a prediction with v7 client.log()
future = client.log(
    model_id="my-model",
    model_type=ModelTypes.BINARY_CLASSIFICATION,
    environment=Environments.PRODUCTION,
    model_version="v1.0",
    prediction_id="pred-123",
    prediction_timestamp=1609459200,
    prediction_label=1,
    features={"feature1": 0.5, "feature2": "value"},
    tags={"user_id": "user-456"},
    batch_id="batch-789"
)

# Get the result (blocks until complete)
response = future.result()
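
For comparison, here is a hedged sketch of the v8 equivalent. The method path and parameter names follow the table above; the v8 client construction, import paths, and return-value handling are not covered by that table, so a ready-made client object is assumed and the result-handling step is omitted.

# v8 sketch: space_id moves to the call, model_id becomes model_name,
# and timeout is a new optional parameter
client.ml.log_stream(
    space_id="your-space-id",
    model_name="my-model",
    model_type=ModelTypes.BINARY_CLASSIFICATION,
    environment=Environments.PRODUCTION,
    model_version="v1.0",
    prediction_id="pred-123",
    prediction_timestamp=1609459200,
    prediction_label=1,
    features={"feature1": 0.5, "feature2": "value"},
    tags={"user_id": "user-456"},
    batch_id="batch-789",
    timeout=10,  # new in v8: optional request timeout
)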