Arconia Tracing

Prerequisites

  • Java 21 or higher

  • Arize AX account

Add Dependencies

Gradle

Add the following dependencies to your build.gradle. The arconia-opentelemetry-spring-boot-starter auto-configures OpenTelemetry for Spring Boot, and arconia-openinference-semantic-conventions enriches the resulting AI spans with OpenInference attributes that Arize understands:

dependencies {
    implementation 'io.arconia:arconia-openinference-semantic-conventions'
    implementation 'io.arconia:arconia-opentelemetry-spring-boot-starter'

    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.springframework.ai:spring-ai-starter-model-mistral-ai'

    developmentOnly 'org.springframework.boot:spring-boot-devtools'
    testAndDevelopmentOnly 'io.arconia:arconia-dev-services-phoenix'

    testImplementation 'org.springframework.boot:spring-boot-starter-test'
    testRuntimeOnly 'org.junit.platform:junit-platform-launcher'
}
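
The coordinates above omit versions; Arconia publishes a BOM that keeps its modules aligned. A minimal sketch, assuming the io.arconia:arconia-bom artifact (replace <arconia-version> with the release you are using):

dependencies {
    // Import the Arconia BOM so the Arconia modules above resolve to matching versions.
    implementation platform('io.arconia:arconia-bom:<arconia-version>')
}

The arconia-dev-services-phoenix dependency is optional but useful: during development and tests it starts a local Phoenix instance, so you can inspect traces before wiring up Arize.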

Set Up Arize Credentials

Export your Arize space credentials so the trace exporter can authenticate:

export ARIZE_API_KEY="your-arize-api-key"
export ARIZE_SPACE_ID="your-arize-space-id"
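
Arconia's OpenTelemetry starter supports the standard OTLP exporter settings, so one way to route traces to Arize is through the usual OpenTelemetry environment variables. A minimal sketch; the endpoint and header names below are assumptions based on Arize's OTLP ingestion, so verify them against the Arize docs:

# Assumed Arize OTLP endpoint and auth headers; verify against the Arize docs.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://otlp.arize.com"
export OTEL_EXPORTER_OTLP_HEADERS="space_id=${ARIZE_SPACE_ID},api_key=${ARIZE_API_KEY}"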

Run Arconia

When you instrument your application with Arconia, spans are created automatically whenever your AI models are invoked (e.g., via Spring AI) and exported to Arize for collection. Arconia plugs into Spring Boot and Spring AI with minimal code changes.

For background on the underlying instrumentation, you can also reference the Spring AI Tracing docs. A minimal application looks like this:

package io.arconia.demo;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class ArconiaTracingApplication {
    public static void main(String[] args) {
        SpringApplication.run(ArconiaTracingApplication.class, args);
    }
}

@RestController
class ChatController {

    private static final Logger logger = LoggerFactory.getLogger(ChatController.class);
    private final ChatClient chatClient;

    ChatController(ChatClient.Builder chatClientBuilder) {
        // Spring AI auto-configures a ChatClient.Builder for the configured model
        // (Mistral AI here); calls made through the resulting ChatClient are
        // traced automatically by Arconia.
        this.chatClient = chatClientBuilder.clone().build();
    }

    @GetMapping("/chat")
    String chat(String question) {
        logger.info("Received question: {}", question);
        return chatClient
                .prompt(question)
                .call()
                .content();
    }
}
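
With credentials for your configured model provider in place (Mistral AI in this example), start the application and send a test request. A sketch using the standard Spring Boot Gradle task; the port assumes Spring Boot's default of 8080:

./gradlew bootRun

# In another terminal:
curl "http://localhost:8080/chat?question=What+is+OpenTelemetry%3F"

Each request produces a trace that spans the HTTP endpoint and the downstream model call.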

Observe

Once configured, your OpenInference traces will be automatically sent to Arize, where you can:

  • Monitor Performance: Track latency and errors

  • Analyze Usage: View token usage, model performance, and cost metrics

  • Debug Issues: Trace request flows and identify bottlenecks

  • Evaluate Quality: Run evaluations on your LLM outputs

