Sessions
Trace and analyze multi-turn conversations
What is a Session?
A session is a grouping of related traces. For example, in a chatbot application, each conversation generates multiple traces — one for each human message and AI response. By organizing these related traces into a single session, you can easily view and analyze the full conversation flow between the human and the AI.
Sessions allow you to track context across turns and understand how your application behaves over the course of an interaction.
Why are Sessions Important?
By grouping your traces into sessions, you can:
Find exactly where a conversation "breaks" or goes off the rails. This can help identify if a user becomes progressively more frustrated or if a chatbot is not helpful.
Identify broader performance issues across sessions by running session-level evaluations.
Construct custom metrics based on evals, using session.id or user.id to find your best and worst performing sessions and users (see the sketch below).
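For example, once spans and their eval scores are exported to a dataframe (the export mechanism depends on your tracing backend), a per-session metric is a simple group-by away. A minimal sketch, using hypothetical column names that follow the OpenInference attribute conventions:

import pandas as pd

# Hypothetical export of spans: "session.id" and "user.id" follow the
# OpenInference conventions; "eval.score" stands in for whatever evaluation
# score you attach to each trace.
spans = pd.DataFrame(
    {
        "session.id": ["s-1", "s-1", "s-2", "s-2"],
        "user.id": ["u-1", "u-1", "u-2", "u-2"],
        "eval.score": [0.9, 0.2, 0.8, 0.7],
    }
)

# Rank sessions from worst to best by mean eval score.
session_scores = spans.groupby("session.id")["eval.score"].mean().sort_values()
print(session_scores)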
Configure Sessions
Adding a session.id and/or user.id attribute from your application enables back-and-forth interactions to be grouped into a session.
Session and user IDs can be added to a span using OpenInference auto instrumentation or manual instrumentation. Any LLM call within the context (the with block in the examples below) will carry the corresponding session.id or user.id as a span attribute. Both session.id and user.id must be non-empty strings.
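If you are instrumenting manually, the equivalent is to set the attributes on the span yourself. A minimal sketch using the OpenTelemetry API directly (the span name and call site are placeholders):

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# Set the OpenInference attributes directly on a manually created span.
with tracer.start_as_current_span("chat_turn") as span:
    span.set_attribute("session.id", "my-session-id")  # must be a non-empty string
    span.set_attribute("user.id", "my-user-id")        # must be a non-empty string
    ...  # make your LLM call here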
With auto instrumentation, you can instead pass the session ID through the context managers shown below.
using_session
Context manager to add a session ID to the current OpenTelemetry Context. OpenInference auto instrumentors will read this Context and pass the session ID as a span attribute, following the OpenInference semantic conventions. Its input, the session ID, must be a non-empty string.
from openinference.instrumentation import using_session
with using_session(session_id="my-session-id"):
    # Calls within this block will generate spans with the attributes:
    # "session.id" = "my-session-id"
    ...

It can also be used as a decorator:
@using_session(session_id="my-session-id")
def call_fn(*args, **kwargs):
    # Calls within this function will generate spans with the attributes:
    # "session.id" = "my-session-id"
    ...

using_user
Context manager to add a user ID to the current OpenTelemetry Context. OpenInference auto instrumentors will read this Context and pass the user ID as a span attribute, following the OpenInference semantic conventions. Its input, the user ID, must be a non-empty string.
from openinference.instrumentation import using_user
with using_user("my-user-id"):
    # Calls within this block will generate spans with the attributes:
    # "user.id" = "my-user-id"
    ...

It can also be used as a decorator:
@using_user("my-user-id")
def call_fn(*args, **kwargs):
    # Calls within this function will generate spans with the attributes:
    # "user.id" = "my-user-id"
    ...

We provide a setSession function which allows you to set a sessionId on the context. You can use this utility in conjunction with context.with to set the active context. OpenInference auto instrumentations will then pick up these attributes and add them to any spans created within the context.with callback.
npm install --save @arizeai/openinference-core @opentelemetry/api

import { context } from "@opentelemetry/api"
import { setSession } from "@arizeai/openinference-core"

context.with(
  setSession(context.active(), { sessionId: "session-id" }),
  () => {
    // Calls within this block will generate spans with the attributes:
    // "session.id" = "session-id"
  }
)

We also provide a setUser function which allows you to set a userId on the context. You can use this utility in conjunction with context.with to set the active context. OpenInference auto instrumentations will then pick up these attributes and add them to any spans created within the context.with callback.
import { context } from "@opentelemetry/api"
import { setUser } from "@arizeai/openinference-core"

context.with(
  setUser(context.active(), { userId: "user-id" }),
  () => {
    // Calls within this block will generate spans with the attributes:
    // "user.id" = "user-id"
  }
)

Additional Examples
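The two TypeScript helpers above also compose, since each returns a new Context. A minimal sketch setting both IDs at once, assuming the same imports as before:

import { context } from "@opentelemetry/api"
import { setSession, setUser } from "@arizeai/openinference-core"

context.with(
  // Each helper returns a new Context, so they can be nested to set both IDs.
  setUser(setSession(context.active(), { sessionId: "session-id" }), { userId: "user-id" }),
  () => {
    // Calls within this block will generate spans with the attributes:
    // "session.id" = "session-id"
    // "user.id" = "user-id"
  }
)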
Once you define your OpenAI client, any call inside our context managers will attach the corresponding attributes to the spans.
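Note that these examples assume an OpenInference auto-instrumentor has already been registered against your tracer provider; without one, no spans are created for the context managers to annotate. A minimal setup sketch for the OpenAI instrumentor, exporting spans to the console for illustration:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from openinference.instrumentation.openai import OpenAIInstrumentor

# Register a tracer provider; swap ConsoleSpanExporter for your backend's exporter.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# Auto-instrument the OpenAI client library.
OpenAIInstrumentor().instrument()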
import openai
from openinference.instrumentation import using_attributes
client = openai.OpenAI()
# Defining a Session
with using_attributes(session_id="my-session-id"):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Write a haiku."}],
        max_tokens=20,
    )

# Defining a User
with using_attributes(user_id="my-user-id"):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Write a haiku."}],
        max_tokens=20,
    )

# Defining a Session AND a User
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Write a haiku."}],
        max_tokens=20,
    )

Alternatively, if you wrap your calls inside functions, you can use them as decorators:
import openai
from openinference.instrumentation import using_attributes

client = openai.OpenAI()

# Defining a Session
@using_attributes(session_id="my-session-id")
def call_fn(client, *args, **kwargs):
    return client.chat.completions.create(*args, **kwargs)

# Defining a User
@using_attributes(user_id="my-user-id")
def call_fn(client, *args, **kwargs):
    return client.chat.completions.create(*args, **kwargs)

# Defining a Session AND a User
@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
)
def call_fn(client, *args, **kwargs):
    return client.chat.completions.create(*args, **kwargs)

Once you define your LangChain client, any call inside our context managers will attach the corresponding attributes to the spans.
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI
from openinference.instrumentation import using_attributes
prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
llm = LLMChain(llm=OpenAI(), prompt=prompt, metadata={"category": "jokes"})
# Defining a Session
with using_attributes(session_id="my-session-id"):
    response = llm.predict(adjective="funny")

# Defining a User
with using_attributes(user_id="my-user-id"):
    response = llm.predict(adjective="funny")

# Defining a Session AND a User
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
):
    response = llm.predict(adjective="funny")

Alternatively, if you wrap your calls inside functions, you can use them as decorators:
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI
from openinference.instrumentation import using_attributes
prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
llm = LLMChain(llm=OpenAI(), prompt=prompt, metadata={"category": "jokes"})
# Defining a Session
@using_attributes(session_id="my-session-id")
def call_fn(llm, *args, **kwargs):
    return llm.predict(*args, **kwargs)

# Defining a User
@using_attributes(user_id="my-user-id")
def call_fn(llm, *args, **kwargs):
    return llm.predict(*args, **kwargs)

# Defining a Session AND a User
@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
)
def call_fn(llm, *args, **kwargs):
    return llm.predict(*args, **kwargs)

Once you define your LlamaIndex client, any call inside our context managers will attach the corresponding attributes to the spans.
from llama_index.core.chat_engine import SimpleChatEngine
from openinference.instrumentation import using_attributes
chat_engine = SimpleChatEngine.from_defaults()
# Defining a Session
with using_attributes(session_id="my-session-id"):
    response = chat_engine.chat(
        "Say something profound and romantic about the Fourth of July"
    )

# Defining a User
with using_attributes(user_id="my-user-id"):
    response = chat_engine.chat(
        "Say something profound and romantic about the Fourth of July"
    )

# Defining a Session AND a User
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
):
    response = chat_engine.chat(
        "Say something profound and romantic about the Fourth of July"
    )

Alternatively, if you wrap your calls inside functions, you can use them as decorators:
from llama_index.core.chat_engine import SimpleChatEngine
from openinference.instrumentation import using_attributes
chat_engine = SimpleChatEngine.from_defaults()
# Defining a Session
@using_attributes(session_id="my-session-id")
def call_fn(chat_engine, *args, **kwargs):
    return chat_engine.chat(
        "Say something profound and romantic about the Fourth of July"
    )

# Defining a User
@using_attributes(user_id="my-user-id")
def call_fn(chat_engine, *args, **kwargs):
    return chat_engine.chat(
        "Say something profound and romantic about the Fourth of July"
    )

# Defining a Session AND a User
@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
)
def call_fn(chat_engine, *args, **kwargs):
    return chat_engine.chat(
        "Say something profound and romantic about the Fourth of July"
    )
Once you define your boto3 session client, any call inside our context managers will attach the corresponding attributes to the spans.
import boto3
from openinference.instrumentation import using_attributes
session = boto3.session.Session()
client = session.client("bedrock-runtime", region_name="us-west-2")
# Defining a Session
with using_attributes(session_id="my-session-id"):
    response = client.invoke_model(
        modelId="anthropic.claude-v2",
        body=b'{"prompt": "Human: Hello there, how are you? Assistant:", "max_tokens_to_sample": 1024}',
    )

# Defining a User
with using_attributes(user_id="my-user-id"):
    response = client.invoke_model(
        modelId="anthropic.claude-v2",
        body=b'{"prompt": "Human: Hello there, how are you? Assistant:", "max_tokens_to_sample": 1024}',
    )

# Defining a Session AND a User
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
):
    response = client.invoke_model(
        modelId="anthropic.claude-v2",
        body=b'{"prompt": "Human: Hello there, how are you? Assistant:", "max_tokens_to_sample": 1024}',
    )

Alternatively, if you wrap your calls inside functions, you can use them as decorators:
import boto3
from openinference.instrumentation import using_attributes
session = boto3.session.Session()
client = session.client("bedrock-runtime", region_name="us-west-2")
# Defining a Session
@using_attributes(session_id="my-session-id")
def call_fn(client, *args, **kwargs):
    return client.invoke_model(*args, **kwargs)

# Defining a User
@using_attributes(user_id="my-user-id")
def call_fn(client, *args, **kwargs):
    return client.invoke_model(*args, **kwargs)

# Defining a Session AND a User
@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
)
def call_fn(client, *args, **kwargs):
    return client.invoke_model(*args, **kwargs)

Once you define your Mistral client, any call inside our context managers will attach the corresponding attributes to the spans.
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage
from openinference.instrumentation import using_attributes

client = MistralClient()

# Defining a Session
with using_attributes(session_id="my-session-id"):
    response = client.chat(
        model="mistral-large-latest",
        messages=[
            ChatMessage(
                content="Who won the World Cup in 2018?",
                role="user",
            )
        ],
    )

# Defining a User
with using_attributes(user_id="my-user-id"):
    response = client.chat(
        model="mistral-large-latest",
        messages=[
            ChatMessage(
                content="Who won the World Cup in 2018?",
                role="user",
            )
        ],
    )

# Defining a Session AND a User
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
):
    response = client.chat(
        model="mistral-large-latest",
        messages=[
            ChatMessage(
                content="Who won the World Cup in 2018?",
                role="user",
            )
        ],
    )

Alternatively, if you wrap your calls inside functions, you can use them as decorators:
from mistralai.client import MistralClient
from openinference.instrumentation import using_attributes
client = MistralClient()
# Defining a Session
@using_attributes(session_id="my-session-id")
def call_fn(client, *args, **kwargs):
    return client.chat(*args, **kwargs)

# Defining a User
@using_attributes(user_id="my-user-id")
def call_fn(client, *args, **kwargs):
    return client.chat(*args, **kwargs)

# Defining a Session AND a User
@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
)
def call_fn(client, *args, **kwargs):
    return client.chat(*args, **kwargs)

Once you define your DSPy predictor, any call inside our context managers will attach the corresponding attributes to the spans.
import dspy
from openinference.instrumentation import using_attributes
class BasicQA(dspy.Signature):
    """Answer questions with short factoid answers."""

    question = dspy.InputField()
    answer = dspy.OutputField(desc="often between 1 and 5 words")

turbo = dspy.OpenAI(model="gpt-3.5-turbo")
dspy.settings.configure(lm=turbo)
predictor = dspy.Predict(BasicQA)  # Define the predictor.

# Defining a Session
with using_attributes(session_id="my-session-id"):
    response = predictor(question="What is the capital of the United States?")

# Defining a User
with using_attributes(user_id="my-user-id"):
    response = predictor(question="What is the capital of the United States?")

# Defining a Session AND a User
with using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
):
    response = predictor(question="What is the capital of the United States?")

Alternatively, if you wrap your calls inside functions, you can use them as decorators:
import dspy
from openinference.instrumentation import using_attributes
# Defining a Session
@using_attributes(session_id="my-session-id")
def call_fn(predictor, *args, **kwargs):
    return predictor(*args, **kwargs)

# Defining a User
@using_attributes(user_id="my-user-id")
def call_fn(predictor, *args, **kwargs):
    return predictor(*args, **kwargs)

# Defining a Session AND a User
@using_attributes(
    session_id="my-session-id",
    user_id="my-user-id",
)
def call_fn(predictor, *args, **kwargs):
    return predictor(*args, **kwargs)