`total_token_count_column_name`, `prompt_token_count_column_name`, `response_token_count_column_name`, and `response_latency_ms_column_name`
| Parameter | Data Type | Description |
|---|---|---|
| total_token_count | int | The total number of tokens used in the inference, across both the prompt sent to the LLM and its response |
| prompt_token_count | int | The number of tokens used in the prompt sent to the LLM |
| response_token_count | int | The number of tokens used in the response returned by the LLM |
| response_latency_ms | int or float | The latency (in milliseconds) of the LLM run |
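As a minimal sketch of how these `*_column_name` parameters map your own column names onto the fields above, the following uses a hypothetical `InferenceSchema` class (illustrative only, not a real SDK API) and row dicts standing in for a DataFrame:

```python
from dataclasses import dataclass


@dataclass
class InferenceSchema:
    """Hypothetical schema mapping user column names to standard fields."""
    total_token_count_column_name: str
    prompt_token_count_column_name: str
    response_token_count_column_name: str
    response_latency_ms_column_name: str


# Inference records keyed by our own (non-standard) column names.
rows = [
    {"tokens_total": 150, "tokens_prompt": 100,
     "tokens_response": 50, "latency_ms": 420.5},
]

schema = InferenceSchema(
    total_token_count_column_name="tokens_total",
    prompt_token_count_column_name="tokens_prompt",
    response_token_count_column_name="tokens_response",
    response_latency_ms_column_name="latency_ms",
)

# Look up each field through the schema and sanity-check that
# prompt tokens + response tokens equals the total token count.
for row in rows:
    total = row[schema.total_token_count_column_name]
    prompt = row[schema.prompt_token_count_column_name]
    response = row[schema.response_token_count_column_name]
    assert total == prompt + response
```

Indirecting every lookup through the schema means the logging code never hard-codes your column names, so renaming a DataFrame column only requires updating the schema.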