The Arize client method to log model data to the platform record by record.
log( 
    model_id: str,
    model_type: ModelTypes,
    environment: Environments,
    model_version: Optional[str] = None,
    prediction_id: Optional[Union[str, int, float]] = None,
    prediction_timestamp: Optional[int] = None,
    prediction_label: Optional[PredictionLabelTypes] = None,
    actual_label: Optional[ActualLabelTypes] = None,
    features: Optional[Dict[str, Union[str, bool, float, int, List[str], TypedValue]]] = None,
    embedding_features: Optional[Dict[str, Embedding]] = None,
    shap_values: Optional[Dict[str, float]] = None,
    tags: Optional[Dict[str, Union[str, bool, float, int, TypedValue]]] = None,
    batch_id: Optional[str] = None,
    prompt: Optional[Union[str, Embedding]] = None,
    response: Optional[Union[str, Embedding]] = None,
    prompt_template: Optional[str] = None,
    prompt_template_version: Optional[str] = None,
    llm_model_name: Optional[str] = None,
    llm_params: Optional[Dict[str, Union[str, bool, float, int]]] = None,
    llm_run_metadata: Optional[LLMRunMetadata] = None,
)
Parameters

model_id (str)
    (Required) A unique name to identify your model in the Arize platform.

model_type (arize.utils.types.ModelTypes)
    (Required*) Declares what model type this prediction/actual is for *(as of v.5.X.X).

environment (arize.utils.types.Environments)
    (Required*) The environment that this prediction/actual is for: Production, Training, or Validation *(as of v.5.X.X).

model_version (str)
    (Optional) Used to group together a subset of predictions and actuals for a given model_id. Defaults to no_version.

prediction_id (str, int, or float)
    (Optional) A unique value to identify a prediction event. Important: this value matches a prediction to an actual label or SHAP feature importance in the Arize platform. If not provided, Arize may generate a random prediction ID server-side.

prediction_timestamp (int)
    (Optional) Unix epoch time in seconds. If None, defaults to the current timestamp. Important: future and historical predictions are supported up to 1 year from the current wall clock time.

prediction_label (one of str, bool, int, float, Tuple[str, float])
    (Optional) The predicted value for a given model input. Ingest ranking predictions as a ranking object.

actual_label (one of str, bool, int, float, Tuple[str, float])
    (Optional) The actual/ground-truth value for a given model input. Important: matched to the prediction with the same prediction_id. Ingest ranking actuals as a ranking object.

features (Dict[str, Union[str, bool, float, int, List[str], TypedValue]])
    (Optional) Dictionary containing human-readable and debuggable model features. Keys must be str; values must be one of str, bool, float, int, List[str], or TypedValue.

embedding_features (Dict[str, Embedding])
    (Optional) Dictionary containing human-readable embedding features. Keys must be str; values must be Embedding objects.

shap_values (Dict[str, float])
    (Optional) Dictionary containing feature keys and their SHAP importance values. Keys must be str; values must be float.

tags (Dict[str, Union[str, bool, float, int, TypedValue]])
    (Optional) Dictionary containing metadata added to a prediction ID. Keys must be str; values must be one of str, bool, float, int, or TypedValue.

batch_id (str)
    (Optional) Only applicable to Validation datasets. Distinguishes different batches under the same model_id and model_version.

prompt (str or Embedding)
    (Optional) The input text your GENERATIVE_LLM model acts on: either raw text (str) or an Embedding object containing the vector (required) and raw text (optional).

response (str or Embedding)
    (Optional) The text generated by your GENERATIVE_LLM model: either raw text (str) or an Embedding object containing the vector (required) and raw text (optional).

prompt_template (str)
    (Optional) Template used to construct the prompt. Can include variables using double braces, e.g., Given the context {{context}}, answer the following question {{user_question}}.

prompt_template_version (str)
    (Optional) The version of the template used.

llm_model_name (str)
    (Optional) The name of the LLM used, e.g., gpt-4.

llm_params (Dict[str, Union[str, bool, float, int]])
    (Optional) Invocation hyperparameters passed to the LLM, e.g.,
    {"temperature": 0.7, "stop": [".", "?"], "frequency_penalty": 0.2}

llm_run_metadata (LLMRunMetadata)
    (Optional) Run metadata for LLM calls, e.g.,
    LLMRunMetadata(total_token_count=400, prompt_token_count=300, response_token_count=100, response_latency_ms=2000)
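The one-year prediction_timestamp window described above can be sketched as a quick validity check. This helper is ours for illustration only, not part of the SDK:

```python
import time

# prediction_timestamp must be Unix epoch *seconds*, accepted up to
# one year before or after the current wall clock time.
ONE_YEAR_S = 365 * 24 * 60 * 60

def within_ingestion_window(ts, now=None):
    """True if ts (epoch seconds) is within +/- 1 year of now."""
    now = time.time() if now is None else now
    return abs(ts - now) <= ONE_YEAR_S

assert within_ingestion_window(int(time.time()))  # current time: accepted
assert not within_ingestion_window(0)             # 1970: far outside the window
```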

Code Example

from arize.api import Client
from arize.utils.types import ArizeTypes, Environments, ModelTypes, TypedValue

# arize_client is an instantiated arize.api.Client; record is a dict
# holding one row of your model's data.
future = arize_client.log(
    prediction_id=record["prediction_id"],
    features={
        "f1": 7,
        "f2": TypedValue(value=5, type=ArizeTypes.FLOAT),
    },
    prediction_label=record["predicted_label"],
    actual_label=record["actual_label"],
    model_id="binary-classification-metrics-only-single-record-ingestion-tutorial",
    model_type=ModelTypes.BINARY_CLASSIFICATION,
    model_version="1.0.0",
    environment=Environments.PRODUCTION,
)

# log() is asynchronous; block until the record has been sent.
result = future.result()
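Because log() returns a concurrent.futures.Future, a common follow-up is to block on result() and inspect the response status. The status_code attribute shown here is an assumption based on requests-style responses, and the executor below is only a stand-in so the sketch runs without a live Arize client:

```python
from concurrent.futures import ThreadPoolExecutor

# A stub response standing in for what the real log() call resolves to.
class _FakeResponse:
    status_code = 200

def _send_record():
    return _FakeResponse()  # stand-in for arize_client.log(...)

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(_send_record)
    response = future.result()  # blocks until the record is sent

if response.status_code == 200:
    print("record logged successfully")
else:
    print(f"logging failed with status {response.status_code}")
```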