Arize class to map up to 4 columns: total_token_count_column_name, prompt_token_count_column_name, response_token_count_column_name, and response_latency_ms_column_name
class LLMRunMetadataColumnNames:
    total_token_count_column_name: Optional[str] = None
    prompt_token_count_column_name: Optional[str] = None
    response_token_count_column_name: Optional[str] = None
    response_latency_ms_column_name: Optional[str] = None
| Parameter | Data Type | Expected Type in Column | Description |
| --- | --- | --- | --- |
| total_token_count_column_name | str | integers | Column name for the total number of tokens used in the inference, both in the prompt sent to the LLM and in its response |
| prompt_token_count_column_name | str | integers | Column name for the number of tokens used in the prompt sent to the LLM |
| response_token_count_column_name | str | integers | Column name for the number of tokens used in the response returned by the LLM |
| response_latency_ms_column_name | str | integers or floats | Column name for the latency (in ms) experienced during the LLM run |

Code Example

| Index | total_token_count | prompt_token_count | response_token_count | response_latency |
| --- | --- | --- | --- | --- |
| 0 | 4325 | 2325 | 2000 | 20000 |
from arize.utils.types import LLMRunMetadataColumnNames

# Declare LLM run metadata columns
llm_run_metadata = LLMRunMetadataColumnNames(
    total_token_count_column_name="total_token_count",  # column containing the number of tokens in the prompt and response
    prompt_token_count_column_name="prompt_token_count",  # column containing the number of tokens in the prompt
    response_token_count_column_name="response_token_count",  # column containing the number of tokens in the response
    response_latency_ms_column_name="response_latency",  # column containing the latency of the LLM run
)
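A quick way to sanity-check that the declared column names line up with your data is sketched below. Note the dataclass here is a local stand-in mirroring the fields of the Arize class, so the sketch runs without the SDK installed; the row values come from the example table above.

```python
from dataclasses import dataclass
from typing import Optional

# Local stand-in for arize.utils.types.LLMRunMetadataColumnNames
# (same field names), used so this sketch is self-contained.
@dataclass
class LLMRunMetadataColumnNames:
    total_token_count_column_name: Optional[str] = None
    prompt_token_count_column_name: Optional[str] = None
    response_token_count_column_name: Optional[str] = None
    response_latency_ms_column_name: Optional[str] = None

# Row 0 from the example table, as column name -> list of values.
data = {
    "total_token_count": [4325],
    "prompt_token_count": [2325],
    "response_token_count": [2000],
    "response_latency": [20000],
}

llm_run_metadata = LLMRunMetadataColumnNames(
    total_token_count_column_name="total_token_count",
    prompt_token_count_column_name="prompt_token_count",
    response_token_count_column_name="response_token_count",
    response_latency_ms_column_name="response_latency",
)

# Every mapped token-count column must exist and contain integers.
for col in (
    llm_run_metadata.total_token_count_column_name,
    llm_run_metadata.prompt_token_count_column_name,
    llm_run_metadata.response_token_count_column_name,
):
    assert col in data
    assert all(isinstance(v, int) for v in data[col])

# Consistency check: total tokens = prompt tokens + response tokens.
total = data[llm_run_metadata.total_token_count_column_name][0]
prompt = data[llm_run_metadata.prompt_token_count_column_name][0]
response = data[llm_run_metadata.response_token_count_column_name][0]
assert total == prompt + response  # 4325 == 2325 + 2000
```

In a real pipeline the mapped column names would refer to columns of the pandas DataFrame you log to Arize alongside your Schema.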