Semantic Kernel Tracing & Observability

Microsoft’s Semantic Kernel is an SDK that lets developers combine LLMs with conventional code in C#, Python, and Java, integrating services and data to speed up the development of robust AI applications. This guide shows how to instrument a Semantic Kernel application with the OpenLIT and OpenInference packages for comprehensive LLM tracing and monitoring.

Note: This documentation focuses on the Python implementation of Semantic Kernel. However, the underlying OpenTelemetry (OTel) principles apply equally to the other languages Semantic Kernel supports, including C# and Java.

Quick Start: Semantic Kernel Integration

Installation & Setup

Install the required packages for Semantic Kernel tracing:

pip install arize-otel opentelemetry-sdk opentelemetry-exporter-otlp openlit semantic-kernel openinference-instrumentation-openlit

Instrumentation Setup

Configure the OpenInferenceSpanProcessor and OpenLIT tracer to send traces to Arize for LLM observability:

# Import OpenTelemetry dependencies
from arize.otel import register

from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from openinference.instrumentation.openlit import OpenInferenceSpanProcessor

import openlit

# Setup OTel via our convenience function
tracer_provider = register(
    space_id="your-space-id",          # found in your Arize space settings page
    api_key="your-api-key",            # found in your Arize space settings page
    project_name="your-project-name",  # any name you like
    set_global_tracer_provider=True,
)

tracer_provider.add_span_processor(OpenInferenceSpanProcessor())
tracer_provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="otlp.arize.com:443",
            headers={
                "api_key": "your-api-key",
                "space_id": "your-space-id",
            }
        )
    )
)

tracer = tracer_provider.get_tracer(__name__)

# Enable OpenLIT instrumentation, routing its spans through the tracer above
openlit.init(tracer=tracer)
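
Optionally, you can emit a manual test span through the same tracer to confirm that spans reach Arize before instrumenting your application. This is a minimal sketch; the span name and attribute below are illustrative, not required by OpenLIT or Arize:

# Emit a one-off span to verify the export pipeline end to end
with tracer.start_as_current_span("setup-smoke-test") as span:
    span.set_attribute("example.note", "tracing pipeline check")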

Example: Basic Semantic Kernel Usage

Test your Semantic Kernel integration with the following example and confirm that the traces appear in Arize:

import asyncio
import os

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.prompt_template import InputVariable, PromptTemplateConfig

kernel = Kernel()

kernel.add_service(
    # Reads your OpenAI key from the OPENAI_API_KEY environment variable
    OpenAIChatCompletion(ai_model_id="gpt-4o", api_key=os.getenv("OPENAI_API_KEY")),
)

prompt = """
{{$input}}
Given the input above, answer the question to the best of your knowledge.
"""

prompt_template_config = PromptTemplateConfig(
    template=prompt,
    name="summarize",
    template_format="semantic-kernel",
    input_variables=[
        InputVariable(name="input", description="user input", is_required=True),
    ],
)

summarize = kernel.add_function(
    function_name="summarizeFunc",
    plugin_name="summarizePlugin",
    prompt_template_config=prompt_template_config,
)

async def main():
    input_text = "Summarize Arize AI platform in 50 words"

    # kernel.invoke is a coroutine, so it must be awaited inside an async context
    summary = await kernel.invoke(summarize, input=input_text)
    print(summary)

asyncio.run(main())
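
If you want the kernel call and its downstream LLM spans grouped under a single parent trace, you can open a manual span around the invocation inside the async context. This is a minimal sketch using the tracer from the setup section; the span name "summarize-request" is illustrative:

    # Group the invocation and any child LLM spans under one parent span
    with tracer.start_as_current_span("summarize-request"):
        summary = await kernel.invoke(summarize, input=input_text)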

Start using your LLM application and monitor traces in Arize.
