Arconia Tracing
How to use OpenInference instrumentation with Arconia and export traces to Arize Phoenix.
Prerequisites
Java 21 or higher
(Optional) Phoenix API key if using auth
Add Dependencies
Gradle
Add the following dependencies to your build.gradle:
dependencies {
    implementation 'io.arconia:arconia-openinference-semantic-conventions'
    implementation 'io.arconia:arconia-opentelemetry-spring-boot-starter'
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.springframework.ai:spring-ai-starter-model-mistral-ai'

    developmentOnly 'org.springframework.boot:spring-boot-devtools'
    testAndDevelopmentOnly 'io.arconia:arconia-dev-services-phoenix'

    testImplementation 'org.springframework.boot:spring-boot-starter-test'
    testRuntimeOnly 'org.junit.platform:junit-platform-launcher'
}
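The Mistral AI starter expects an API key at runtime. A minimal src/main/resources/application.properties sketch: spring.ai.mistralai.api-key is the standard Spring AI property, while the MISTRAL_AI_API_KEY environment variable name is just a placeholder to replace with your own setup:
# Supply the Mistral AI API key via an environment variable (placeholder name).
spring.ai.mistralai.api-key=${MISTRAL_AI_API_KEY}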
Set Up Phoenix Tracing
Pull the latest Phoenix image from Docker Hub:
docker pull arizephoenix/phoenix:latest
Run your containerized instance:
docker run -p 6006:6006 -p 4317:4317 arizephoenix/phoenix:latest
This command:
Exposes port 6006 for the Phoenix web UI
Exposes port 4317 for the OTLP gRPC endpoint (where traces are sent)
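If you prefer Docker Compose, the same container can be described with a minimal sketch (the service name is illustrative):
services:
  phoenix:
    image: arizephoenix/phoenix:latest
    ports:
      - "6006:6006"   # Phoenix web UI
      - "4317:4317"   # OTLP gRPC endpoint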
For more information on running Phoenix with Docker, see the Phoenix Docker documentation.
If you are using Phoenix Cloud, adjust the collector endpoint in your configuration accordingly.
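For example, the OTLP exporter can be pointed at Phoenix through the standard OpenTelemetry environment variables. This is a sketch only: it assumes the Arconia OpenTelemetry starter honors these variables, and the Phoenix Cloud endpoint and header value below are placeholders to replace with the values from your Phoenix settings:
# Local Phoenix (default OTLP gRPC endpoint)
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
# Phoenix Cloud: use your space's collector endpoint and API key instead, e.g.
# export OTEL_EXPORTER_OTLP_ENDPOINT=https://app.phoenix.arize.com
# export OTEL_EXPORTER_OTLP_HEADERS="api_key=<your-phoenix-api-key>"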
Run Arconia
When your application is instrumented with Arconia, spans are created automatically whenever your AI models (e.g., via Spring AI) are invoked, and they are sent to the Phoenix server for collection. Arconia plugs into Spring Boot and Spring AI with minimal code changes.
package io.arconia.demo;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class ArconiaTracingApplication {

    public static void main(String[] args) {
        SpringApplication.run(ArconiaTracingApplication.class, args);
    }

}

@RestController
class ChatController {

    private static final Logger logger = LoggerFactory.getLogger(ChatController.class);

    // Calls made through this ChatClient are traced automatically.
    private final ChatClient chatClient;

    ChatController(ChatClient.Builder chatClientBuilder) {
        this.chatClient = chatClientBuilder.clone().build();
    }

    @GetMapping("/chat")
    String chat(String question) {
        logger.info("Received question: {}", question);
        return chatClient
                .prompt(question)
                .call()
                .content();
    }

}
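With Phoenix running and the application started (for example with ./gradlew bootRun), you can trigger a traced request; the example below assumes Spring Boot's default port 8080:
curl "http://localhost:8080/chat?question=What+is+OpenInference%3F"
Each request should produce spans for the chat model call, which you can inspect in the Phoenix UI at http://localhost:6006.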
Observe
Once configured, your OpenInference traces are automatically sent to Phoenix, where you can:
Monitor Performance: Track latency, throughput, and error rates
Analyze Usage: View token usage, model performance, and cost metrics
Debug Issues: Trace request flows and identify bottlenecks
Evaluate Quality: Run evaluations on your LLM outputs