LangChain4j Tracing
How to use OpenInference instrumentation with LangChain4j and export traces to Arize AX.
Prerequisites
- Java 11 or higher
- An Arize AX account
Add Dependencies
Add the following dependencies to your `build.gradle`:
```groovy
dependencies {
    // OpenInference instrumentation
    implementation project(path: ':instrumentation:openinference-instrumentation-langchain4j')

    // LangChain4j
    implementation "dev.langchain4j:langchain4j:${langchain4jVersion}"
    implementation "dev.langchain4j:langchain4j-open-ai:${langchain4jVersion}"

    // OpenTelemetry
    implementation "io.opentelemetry:opentelemetry-sdk"
    implementation "io.opentelemetry:opentelemetry-exporter-otlp"
    implementation "io.opentelemetry:opentelemetry-exporter-logging"
}
```
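The OpenTelemetry artifacts above are declared without versions, which only works if something else supplies them. If your build does not already manage OpenTelemetry versions, one common approach is to import the OpenTelemetry BOM (the version below is illustrative; use the release you actually target):

```groovy
dependencies {
    // Align all io.opentelemetry artifact versions via the BOM,
    // so the version-less declarations above resolve consistently.
    implementation platform("io.opentelemetry:opentelemetry-bom:1.40.0")
}
```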
Set Up Arize Credentials

Set your Arize AX API key and space ID as environment variables:

```shell
export ARIZE_API_KEY="your-arize-api-key"
export ARIZE_SPACE_ID="your-arize-space-id"
```
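Since missing credentials otherwise surface later as export failures, it can help to fail fast at startup. `EnvCheck` below is a hypothetical stdlib-only helper (not part of the instrumentation) sketching that check:

```java
// EnvCheck.java - fail fast when a required environment variable is absent.
// Hypothetical helper; names and structure are illustrative.
public class EnvCheck {
    static String require(String name) {
        String value = System.getenv(name);
        if (value == null || value.isEmpty()) {
            throw new IllegalStateException("Missing environment variable: " + name);
        }
        return value;
    }

    public static void main(String[] args) {
        // Throws with a clear message if either credential is unset.
        String apiKey = require("ARIZE_API_KEY");
        String spaceId = require("ARIZE_SPACE_ID");
        System.out.println("Credentials loaded for space " + spaceId);
    }
}
```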
Configuration for Arize Tracing
```java
private static SdkTracerProvider tracerProvider;

private static void initializeOpenTelemetry() {
    // Create a resource with the service name and Arize project name.
    // SEMRESATTRS_PROJECT_NAME is the OpenInference resource attribute
    // Arize uses to group traces by project.
    Resource resource = Resource.getDefault()
            .merge(Resource.create(Attributes.of(
                    AttributeKey.stringKey("service.name"), "langchain4j",
                    AttributeKey.stringKey(SEMRESATTRS_PROJECT_NAME), "langchain4j-project",
                    AttributeKey.stringKey("service.version"), "0.1.0")));

    String apiKey = System.getenv("ARIZE_API_KEY");
    String spaceId = System.getenv("ARIZE_SPACE_ID");

    // Set up the OTLP exporter with the Arize endpoint and auth headers
    OtlpGrpcSpanExporter otlpExporter = OtlpGrpcSpanExporter.builder()
            .setEndpoint("https://otlp.arize.com/v1")
            .setHeaders(() -> Map.of(
                    "api_key", apiKey,
                    "arize-space-id", spaceId))
            .build();

    tracerProvider = SdkTracerProvider.builder()
            .addSpanProcessor(BatchSpanProcessor.builder(otlpExporter)
                    .setScheduleDelay(Duration.ofSeconds(1))
                    .build())
            .addSpanProcessor(SimpleSpanProcessor.create(LoggingSpanExporter.create()))
            .setResource(resource)
            .build();

    // Build the OpenTelemetry SDK and register it globally
    OpenTelemetrySdk.builder()
            .setTracerProvider(tracerProvider)
            .setPropagators(ContextPropagators.create(W3CTraceContextPropagator.getInstance()))
            .buildAndRegisterGlobal();

    System.out.println("OpenTelemetry initialized. Traces will be sent to Arize");
}
```
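Because `BatchSpanProcessor` exports on a schedule, a short-lived application can exit before buffered spans are flushed. A minimal sketch, assuming the `tracerProvider` used in `initializeOpenTelemetry()` is kept in a field, is to flush it from a JVM shutdown hook:

```java
import java.util.concurrent.TimeUnit;

// Flush any buffered spans before the JVM exits; shutdown() flushes
// registered processors. Assumes tracerProvider is the field set in
// initializeOpenTelemetry().
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    if (tracerProvider != null) {
        tracerProvider.shutdown().join(10, TimeUnit.SECONDS);
    }
}));
```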
Run LangChain4j

Once your application is instrumented, spans are created every time it runs and are exported to Arize for collection.
```java
import io.openinference.instrumentation.langchain4j.LangChain4jInstrumentor;
import dev.langchain4j.model.openai.OpenAiChatModel;

initializeOpenTelemetry();

// Auto-instrument LangChain4j
LangChain4jInstrumentor.instrument();

// Use LangChain4j as normal - traces will be created automatically
OpenAiChatModel model = OpenAiChatModel.builder()
        .apiKey("your-openai-api-key")
        .modelName("gpt-4")
        .build();

String response = model.generate("What is the capital of France?");
```
Observe
Once configured, your traces are sent automatically to Arize, where you can:

- **Monitor Performance**: Track latency and errors
- **Analyze Usage**: View token usage, model performance, and cost metrics
- **Debug Issues**: Trace request flows and identify bottlenecks
- **Evaluate Quality**: Run evaluations on your LLM outputs