# Arize AI | Arize Docs

URL: https://docs.arize.com/arize

## Overview

The Arize AI documentation provides comprehensive guidance for developers to trace, evaluate, and monitor AI applications, particularly those built on large language models, traditional machine learning, and computer vision. It also covers tools for performance analysis, data quality, and bias detection.

## Documentation Pages

- [Arize AI](https://docs.arize.com/arize): Arize AI documentation provides comprehensive guides and resources to assist AI engineers in developing, monitoring, and evaluating applications using various AI technologies like large language models, machine learning, and computer vision.
- [User Guide](https://docs.arize.com/arize/what-is-llm-observability): The Arize AI User Guide provides comprehensive instructions on curating datasets, running and tracking experiments, evaluating outcomes, and monitoring and protecting LLM-powered applications in both development and production environments.
- [✨Arize Copilot](https://docs.arize.com/arize/arize-copilot): Arize Copilot is an AI-powered assistant designed to help engineers build and enhance AI applications by offering over 30 skills for generative applications and tabular/computer vision use cases, while ensuring data privacy and compliance through Azure OpenAI.
- [👩‍🏫AI Research](https://docs.arize.com/arize/ai-research): The documentation page provides insights into various AI research topics, including benchmarking models for time series evaluations, the performance of prompt caching, and comparisons of agent frameworks and their implementations.
- [Cookbooks](https://docs.arize.com/arize/examples): The "Cookbooks" section of the Arize Docs provides a collection of tutorials and example notebooks on how to effectively use Arize for various tasks, including LLM tracing, evaluations, experiments, and deploying machine learning and computer vision applications.
- [Trace and Evaluate Agents](https://docs.arize.com/arize/examples/tracing-an-agent): This documentation page provides a comprehensive guide on how to trace and evaluate agents using Arize, focusing on creating a customer support agent, auto-instrumentation for function calls, and setting up evaluation templates to assess agent performance.
- [Trace and Evaluate RAG](https://docs.arize.com/arize/examples/trace-and-evaluate-rag): This documentation page provides a comprehensive guide for tracing and evaluating retrieval-augmented generation (RAG) applications using Arize, including auto-instrumentation for various frameworks and detailed evaluation templates for assessing performance.
- [Trace Voice Applications](https://docs.arize.com/arize/examples/trace-voice-applications): This documentation page provides guidelines on how to instrument voice applications to send events and traces to Arize, detailing key event mapping, span creation for session management, audio input handling, and response generation, as well as implementation considerations for observability with the OpenAI Realtime API.
- [Evaluate Voice Applications](https://docs.arize.com/arize/examples/evaluate-voice-applications): This documentation page outlines the steps for evaluating voice applications using OpenAI models within the Phoenix framework, detailing model setup, template definition, optional data processing, and evaluation execution.
- [Overview: Tracing](https://docs.arize.com/arize/llm-tracing/tracing): The Arize Docs page on LLM Tracing provides an overview of how tracing works in LLM applications using OpenTelemetry and OpenInference to improve performance and troubleshoot issues like latency and errors, offering guidance on setup and customization.
- [What are Traces?](https://docs.arize.com/arize/llm-tracing/tracing/what-are-traces): The documentation page explains that a trace represents a single request and consists of multiple spans, which document the path of the request through various operations, making it easier to debug and understand application flow; it also details span types and their attributes.
- [How does Tracing Work?](https://docs.arize.com/arize/llm-tracing/tracing/how-does-tracing-work): The "How does Tracing Work?" documentation page explains the five key components of tracing in Arize, including instrumentation, span processing, exporting, the OpenTelemetry Protocol, and the collector, all designed to help troubleshoot applications and visualize tracing data in real-time.
- [What is OpenTelemetry?](https://docs.arize.com/arize/llm-tracing/tracing/what-is-opentelemetry): OpenTelemetry (OTel) is an open-source framework that facilitates the collection, processing, and exporting of application traces, metrics, and logs, enabling enhanced observability and performance insights, and is utilized by Arize through their arize-otel package for simplified integration.
- [What is OpenInference?](https://docs.arize.com/arize/llm-tracing/tracing/what-is-openinference): OpenInference is an open-source package that enhances tracing of AI applications by providing standardized conventions and plugins, facilitating monitoring and instrumentation across various models, frameworks, and vendors, and can be utilized with any OpenTelemetry-compatible backend.
- [Openinference Semantic Conventions](https://docs.arize.com/arize/llm-tracing/tracing/semantic-conventions): The OpenInference Semantic Conventions documentation outlines standardized span attributes for tracing across models, frameworks, and vendors, providing examples of attributes like input values, output values, and metadata to facilitate consistent telemetry in LLM applications.
- [Quickstart: LLM Tracing](https://docs.arize.com/arize/llm-tracing/quickstart-llm): The Quickstart: LLM Tracing documentation page provides a step-by-step guide on how to trace your LLM applications using Arize, including installation of tracing packages, API key setup, integration of tracing code, and examples for various LLM providers (a minimal setup sketch appears after this list).
- [Tracing Integrations (Auto)](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto): The "Tracing Integrations (Auto)" documentation page provides an overview of automated tracing integrations available in Arize, including support for various AI frameworks and tools like OpenAI, Vertex AI, AWS Bedrock, and more, along with instructions on setting up and using these integrations effectively.
- [OpenAI](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/openai): The documentation provides guidelines on how to instrument OpenAI API calls for tracing using the Arize platform, leveraging the OpenInference framework to facilitate the collection and analysis of input and output messages, function calls, and associated metadata.
- [Vertex AI (Gemini)](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/vertex-ai-gemini): The Vertex AI (Gemini) integration documentation outlines how to instrument the VertexAI SDK to send traces to Arize using OpenTelemetry for performance monitoring and analysis.
- [AWS Bedrock](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/aws-bedrock): The AWS Bedrock documentation page provides guidance on instrumenting LLM calls to AWS Bedrock using OpenInference and OpenTelemetry for enhanced observability and traceability of applications leveraging foundation models.
- [MistralAI](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/mistralai): This documentation page provides a quickstart guide on how to instrument LLM calls made using MistralAI's SDK for tracing and evaluation with Arize AI, utilizing the openinference instrumentation package.
- [Anthropic](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/anthropic): This documentation page provides guidelines on instrumenting LLM calls made with Anthropic's SDK using the `openinference-instrumentation-anthropic` package to enable tracing of interactions in Arize.
- [LlamaIndex](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/llamaindex): Arize AI provides comprehensive support for LlamaIndex applications, enabling users to instrument their LLM applications for detailed tracing of inputs, embeddings, retrievals, and outputs using OpenTelemetry with the arize-otel package.
- [Langchain](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/langchain): The Langchain documentation outlines how to integrate Arize's tracing capabilities into LangChain applications, allowing for comprehensive monitoring of LLM interactions by utilizing OpenTelemetry and the arize-otel package for effective performance analysis and debugging.
- [LangGraph](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/langgraph): The LangGraph documentation page provides an overview and guidance on instrumenting LangGraph for tracing applications, noting that it uses similar instrumentation methods as LangChain.
- [Haystack](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/haystack): This documentation page provides a quickstart guide on how to instrument LLM applications built with Haystack for tracing and sending trace data to Arize using OpenTelemetry.
- [LiteLLM](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/litellm): The LiteLLM documentation provides guidance on instrumenting the LiteLLM framework for tracing with OpenTelemetry integration, allowing users to send and view traces in Arize's platform.
- [CrewAI](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/crewai): This documentation page provides instructions for instrumenting LLM applications using the CrewAI framework to trace and evaluate multi-agent automation while sending the data to Arize for analysis.
- [Groq](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/groq): The Groq documentation page provides guidance on how to instrument LLM applications built using Groq for tracing calls and sending traces to an Arize model endpoint, including code examples and setup instructions.
- [DSPy](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/dspy): DSPy is a framework that automates the prompting and fine-tuning of language models while enabling observability through the visualization of application calls by integrating with OpenTelemetry and Arize for tracing and monitoring.
- [Autogen](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/autogen): AutoGen is a Microsoft agent framework that enables the creation of complex, collaborative agents, with capabilities for automatic instrumentation using OpenAI.
- [Guardrails AI](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/guardrails-ai): This documentation page provides a guide for setting up instrumentation for applications using the Guardrails AI framework to trace LLM calls and send traces to Arize using OpenInference.
- [Prompt flow](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/prompt-flow): The documentation page explains how to create flows using Microsoft PromptFlow and send their traces to the Arize platform, utilizing OpenTelemetry for trace compatibility and management.
- [Vercel AI SDK](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/vercel-ai-sdk): The Vercel AI SDK documentation page provides instructions on how to integrate Vercel AI SDK spans with Arize using OpenTelemetry, requiring version 3.3 or higher, along with necessary setup steps for processing and exporting spans.
- [Llama](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/llama): The documentation page provides instructions for setting up and tracing an open-source Llama model using Ollama and integrating it with Arize for monitoring and analysis of LLM applications.
- [Together AI](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/together-ai): This documentation page provides guidance on integrating Together AI with Arize AI for OpenTelemetry tracing in LLM applications, including setup instructions and code examples.
- [OpenTelemetry (arize-otel)](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/opentelemetry-arize-otel): The OpenTelemetry (arize-otel) documentation provides a comprehensive guide for setting up tracing in LLM applications with simplified instrumentation code that integrates Arize-specific configurations and makes it easy to send traces to the Arize platform.
- [LangFlow](https://docs.arize.com/arize/llm-tracing/tracing-integrations-auto/langflow): LangFlow is an open-source visual framework that allows developers to design, prototype, and deploy applications using large language models (LLMs), enhanced by Arize's observability platform for performance tracking and debugging.
- [How to: Tracing (Manual)](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual): This documentation page provides a comprehensive guide on manual tracing for LLMs, detailing how to set up tracing, integrate with OpenTelemetry, and log various attributes and metadata for effective evaluation and analysis.
- [Set up Tracing](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual/set-up-tracing): The "Set up Tracing" documentation page provides guidance on utilizing Arize's tracing capabilities through auto-instrumentation or manual configuration with OpenTelemetry for effective logging and monitoring of application performance.
- [Instrument with OpenInference Helpers](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual/instrument-with-openinference-helpers): This documentation page provides guidance on using OpenInference helpers for manual instrumentation of functions, chains, agents, and tools with OpenTelemetry, simplifying the tracing process through various decorators and methods.
- [How to Send to a Specific Project and Space ID](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual/how-to-send-to-a-specific-model-project-and-space-id): This documentation page explains how to configure tracing in Arize by sending spans to a specified project and space ID using OpenTelemetry, detailing the necessary imports and setup procedures.
- [Get the Current Span/Context and Tracer](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual/get-the-current-span-context-and-tracer): This documentation page explains how to retrieve the current span and context, as well as how to access the tracer in OpenTelemetry for enriching span information and managing spans during execution.
- [Trace Function Calls](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual/trace-function-calls): The documentation explains how to trace function calls in OpenAI applications using Arize, allowing users to log chat histories and function calls for debugging and optimization through a single line of code, and access a user-friendly interface to review and refine parameters in the prompt playground.
- [Log Prompt Templates & Variables](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual/instrumenting-prompt-templates-and-prompt-variables): The documentation page provides guidance on how to instrument prompt templates and variables in Arize's tracing system to enhance logging and experimentation without the need to deploy new versions.
- [Add Attributes, Metadata and Tags to Span](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual/hybrid-instrumentation): The documentation page outlines the methods for adding attributes, metadata, and tags to spans in LLM tracing, emphasizing the importance of enhancing span information for improved debugging and analysis within Arize, and explaining how to utilize semantic conventions and context management for effective data tracking.
- [Add Events, Exceptions and Status to Spans](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual/add-events-exceptions-and-status-to-spans): The documentation page explains how to add events, exceptions, and status to spans in OpenTelemetry, detailing methods for logging key events, handling errors, and recording exceptions to enhance traceability and debugging in applications.
- [Set Session ID and User ID](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual/sessions-and-users): The documentation page explains how to set a Session ID and User ID for tracing in applications, allowing for better analysis and debugging of interactions by grouping spans with these identifiers (see the manual span sketch after this list).
- [Configure OTEL Tracer](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual/customize-auto-instrumentation): The "Configure OTEL Tracer" documentation page provides guidance on utilizing the OpenTelemetry SDK to customize trace configurations for sending data to Arize, including setting up authentication, resource attributes, and span processing.
- [Create LLM, Retriever and Tool Spans](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual/instrumenting-span-types): This documentation page provides detailed guidance on manually creating spans for LLMs, retrievers, and tool calls, describing how to log various attributes and handle tracing in a structured way for performance analysis and monitoring.
- [Create Tool Spans](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual/creating-tool-spans): This documentation page provides a guide on creating tool spans for tracing functions in Arize, detailing how to set up the tracer, log inputs and outputs, and capture key events related to tool and LLM operations.
- [Log Input](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual/input): This documentation page provides a guide on manually logging input for traces in Arize AI, detailing how to set attributes for span inputs, particularly focusing on LLM-type spans and their associated message formats.
- [Log Outputs](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual/outputs): This documentation page describes how to manually log output values in LLM tracing using OpenTelemetry attributes to ensure proper visibility of response messages in the span information.
- [Mask Span Attributes](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual/masking-span-attributes): The Mask Span Attributes documentation explains how to modify the observability settings of tracing in Arize by using environment variables or configuration code to control the visibility of sensitive data and reduce payload sizes during logging.
- [Redact Sensitive Data from Traces](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual/redact-sensitive-data-from-traces): This documentation page explains how to redact sensitive personal identifiable information (PII) from traces in OpenTelemetry using a custom span processor that employs regex patterns, and also introduces an option to use Microsoft Presidio for more advanced PII detection and redaction.
- [Send Traces from Phoenix -> Arize](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual/send-traces-from-phoenix-greater-than-arize): This documentation page outlines how to collect traces in Phoenix, query specific traces, and export the data to Arize using the Python SDK, emphasizing the simplicity of schema mapping due to OpenInference integration.
- [Log as Inferences](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual/log-as-inferences): The documentation page explains how to log LLM application data as inferences in Arize, detailing the usage of prompts and responses, both with and without embedding vectors, and offers code examples to facilitate this process.
- [Advanced Tracing (OTEL) Examples](https://docs.arize.com/arize/llm-tracing/how-to-tracing-manual/advanced-tracing-otel-examples): The "Advanced Tracing (OTEL) Examples" documentation provides detailed guidance on implementing manual context propagation, creating custom decorators, and applying advanced sampling techniques in OpenTelemetry to enhance observability in asynchronous and multi-service applications.
- [How to: Query Traces](https://docs.arize.com/arize/llm-tracing/how-to-query-traces): This documentation page explains how to query traces in Arize, including filtering and exporting traces, as well as utilizing AI-powered search and analysis tools.
- [Filter Traces](https://docs.arize.com/arize/llm-tracing/how-to-query-traces/filter-traces): The "Filter Traces" documentation page provides guidance on constructing queries in the filter bar to effectively filter and retrieve specific trace data using various syntax rules and operators.
- [Export Traces](https://docs.arize.com/arize/llm-tracing/how-to-query-traces/export-traces-from-arize): The "Export Traces" documentation page provides guidance on how to export trace data from Arize to both a notebook and CSV format for purposes such as evaluation and data augmentation.
- [✨AI Powered Search & Filter](https://docs.arize.com/arize/llm-tracing/how-to-query-traces/filter-traces-1): The documentation page explains how to utilize AI-powered search and filter functionalities in Arize to efficiently search through data, either using natural language or by directly applying filter syntax.
- [✨AI Powered Trace Analysis](https://docs.arize.com/arize/llm-tracing/how-to-query-traces/ai-powered-trace-analysis): The AI Powered Trace Analysis documentation provides guidance on utilizing advanced analysis techniques to identify patterns and generate insights from trace data in AI applications.
- [✨AI Span Analysis & Evaluation](https://docs.arize.com/arize/llm-tracing/how-to-query-traces/ai-span-analysis-and-evaluation): The AI Span Analysis & Evaluation documentation page details the use of Copilot Span Chat for intuitive analysis and evaluation of span data through natural language queries and evaluations.
- [Overview: LLM Evaluation](https://docs.arize.com/arize/llm-evaluation-and-annotations/how-does-evaluation-work): This documentation page provides an overview of LLM evaluation, detailing the importance of assessing the performance of LLM applications through various metrics and processes to ensure reliability and improvement in outputs.
- [Evaluation Basics](https://docs.arize.com/arize/llm-evaluation-and-annotations/how-does-evaluation-work/evaluation-basics): The "Evaluation Basics" documentation outlines essential strategies for effectively evaluating AI applications by defining key metrics, selecting measurement approaches, choosing evaluation methods and frequencies, and establishing benchmarks for continuous performance assessment.
- [LLM as a Judge](https://docs.arize.com/arize/llm-evaluation-and-annotations/how-does-evaluation-work/best-practices): The "LLM as a Judge" documentation explains how to utilize large language models to evaluate the outputs of other LLMs by defining criteria, crafting evaluation prompts, and interpreting scores, thereby enhancing the efficiency and scalability of the evaluation process.
- [Evaluation Templates](https://docs.arize.com/arize/llm-evaluation-and-annotations/how-does-evaluation-work/evaluation-templates): The "Evaluation Templates" documentation page outlines how to construct effective evaluation templates for assessing LLM performance by selecting the appropriate model, creating clear prompt templates, and benchmarking against a ground truth dataset.
- [Agent Evaluation](https://docs.arize.com/arize/llm-evaluation-and-annotations/how-does-evaluation-work/agent-evaluation): The Agent Evaluation documentation provides a comprehensive overview of evaluating various components of agents, including their decision-making processes, planning efficiency, skills performance, memory management, and reflection on task completion, all aimed at improving agent functionality and debugging capabilities.
- [Retrieval Evaluations](https://docs.arize.com/arize/llm-evaluation-and-annotations/how-does-evaluation-work/troubleshoot-retrieval-with-vector-stores): The "Retrieval Evaluations" documentation page outlines how to effectively log and evaluate search and retrieval systems using Arize AI, including troubleshooting common issues and the importance of understanding the corpus dataset in enhancing response quality.
- [Embedding Cluster Summarization](https://docs.arize.com/arize/llm-evaluation-and-annotations/how-does-evaluation-work/troubleshoot-retrieval-with-vector-stores/embedding-cluster-summarization): The "Embedding Cluster Summarization" documentation page explains how Arize AI leverages large language models to automatically group and summarize related prompts or responses from embeddings, enabling efficient data analysis and pattern identification while highlighting areas for improvement in model responses.
- [Quickstart: Evaluation](https://docs.arize.com/arize/llm-evaluation-and-annotations/quickstart-evaluation): This documentation page provides a step-by-step guide for setting up an online evaluation task in the Arize platform, enabling automatic evaluation of LLM outputs against selected data and evaluators.
- [How To: Evaluations](https://docs.arize.com/arize/llm-evaluation-and-annotations/catching-hallucinations): The "How To: Evaluations" documentation page provides a comprehensive framework for assessing the performance of LLM applications across various dimensions such as correctness and relevance, with features like customizable evaluation criteria, speed optimization for large data volumes, and seamless integration with popular LLM frameworks.
- [Online Evaluations (Tasks)](https://docs.arize.com/arize/llm-evaluation-and-annotations/catching-hallucinations/tasks-for-online-evals): The Online Evaluations (Tasks) feature in Arize allows users to automate the tagging of new spans with evaluation labels upon data arrival, streamlining the management of production logs and enhancing the evaluation process for large-scale applications.
- [Offline Evaluations](https://docs.arize.com/arize/llm-evaluation-and-annotations/catching-hallucinations/log-evaluations-to-arize): The "Offline Evaluations" documentation page provides a guide on how to execute offline evaluations of machine learning models using the Arize SDK, including importing spans, running custom evaluators, and logging results back to Arize.
- [Code Evaluations](https://docs.arize.com/arize/llm-evaluation-and-annotations/catching-hallucinations/code-evaluations): The "Code Evaluations" documentation page provides a guide on how to set up code-based evaluations as an alternative to using LLM as a judge, detailing the process of creating managed code evaluators and customizing them with specific parameters for various validation checks.
- [Custom Eval Templates](https://docs.arize.com/arize/llm-evaluation-and-annotations/catching-hallucinations/custom-evaluators): The "Custom Eval Templates" documentation page explains how to create and implement custom evaluation criteria and prompt templates for LLM evaluations in Arize, utilizing functions from the Phoenix library to support various evaluation types such as LLM classification and numeric scoring.
- [Arize Evaluators as Objects](https://docs.arize.com/arize/llm-evaluation-and-annotations/catching-hallucinations/arize-evaluators-as-objects): The documentation page explains how to use Arize Evaluators as objects, outlining steps for selecting an evaluator, setting up the evaluation library, preparing data, and running evaluations with various built-in evaluators for tasks such as hallucination detection, QA correctness, relevance assessment, toxicity evaluation, and summarization, while supporting multiple AI models.
- [✨AI Powered Eval Builder](https://docs.arize.com/arize/llm-evaluation-and-annotations/catching-hallucinations/ai-powered-eval-builder): The AI Powered Eval Builder enables users to create custom evaluations tailored to specific application needs by specifying goals or allowing the system to analyze data for suggestions.
- [✨AI Powered Eval Analysis](https://docs.arize.com/arize/llm-evaluation-and-annotations/catching-hallucinations/ai-powered-eval-analysis): The AI Powered Eval Analysis feature in Arize allows users to automate the summarization of evaluation metrics for their LLM applications while providing targeted suggestions for performance improvement.
- [✨AI Powered RAG Analysis](https://docs.arize.com/arize/llm-evaluation-and-annotations/catching-hallucinations/ai-powered-rag-analysis): The "AI Powered RAG Analysis" documentation page provides guidance on evaluating and refining the retrieval process in RAG applications by analyzing responses for relevance and accuracy while offering improvement strategies.
- [Arize Eval Templates](https://docs.arize.com/arize/llm-evaluation-and-annotations/arize-evaluators-llm-as-a-judge): The Arize Eval Templates documentation provides an overview of how to use various evaluation prompts for assessing the performance of language models against benchmark datasets, offering details on setup, supported models, and specific evaluators like the Hallucination and QA Evaluators to measure accuracy and relevance of outputs.
- [Hallucinations](https://docs.arize.com/arize/llm-evaluation-and-annotations/arize-evaluators-llm-as-a-judge/hallucinations): The Hallucinations documentation page provides guidelines on using the LLM Evaluation template to detect hallucinations in AI-generated answers by comparing them against reference texts, along with instructions on how to run the evaluation using specific coding examples (a runnable sketch appears after this list).
- [Q&A on Retrieved Data](https://docs.arize.com/arize/llm-evaluation-and-annotations/arize-evaluators-llm-as-a-judge/q-and-a-on-retrieved-data): The "Q&A on Retrieved Data" documentation page outlines the evaluation process for determining if a Q&A system's answer is correct based on the provided question and context, requiring a simple "correct" or "incorrect" response.
- [Summarization](https://docs.arize.com/arize/llm-evaluation-and-annotations/arize-evaluators-llm-as-a-judge/summarization): The Summarization Eval Template is designed to evaluate the quality of text summaries by comparing them to their original documents and categorizing the summaries as "good" or "bad" based on criteria of comprehensiveness, conciseness, coherence, and independence.
- [Code Generation](https://docs.arize.com/arize/llm-evaluation-and-annotations/arize-evaluators-llm-as-a-judge/code-generation): The documentation page provides guidance on using the Code Generation Eval Template to evaluate the correctness and readability of code generated by a coding query, detailing how to run the evaluation and presenting benchmark results.
- [Toxicity](https://docs.arize.com/arize/llm-evaluation-and-annotations/arize-evaluators-llm-as-a-judge/toxicity): The Toxicity documentation page provides guidance on evaluating text for toxicity, detailing how to classify text as "toxic" or "non-toxic" based on specific criteria, alongside a code example for running the evaluation using the OpenAI model and presenting benchmark results for different AI models.
- [AI vs Human (Groundtruth)](https://docs.arize.com/arize/llm-evaluation-and-annotations/arize-evaluators-llm-as-a-judge/ai-vs-human-groundtruth): The "AI vs Human (Groundtruth)" documentation outlines a method for evaluating AI-generated answers against human expert responses to ensure quality and accuracy in retrieval-augmented generation (RAG) systems.
- [Citation](https://docs.arize.com/arize/llm-evaluation-and-annotations/arize-evaluators-llm-as-a-judge/citation-evals): The documentation page provides a guide on how to run citation evaluations within Arize's LLM evaluation framework, including code examples and benchmark results for various models.
- [User Frustration](https://docs.arize.com/arize/llm-evaluation-and-annotations/arize-evaluators-llm-as-a-judge/user-frustration): The User Frustration documentation provides a template and implementation details for evaluating whether users interacting with conversation bots feel frustrated, indicating their emotional response through a binary classification of "frustrated" or "ok."
- [SQL Generation](https://docs.arize.com/arize/llm-evaluation-and-annotations/arize-evaluators-llm-as-a-judge/sql-generation-eval): The SQL Generation documentation provides an overview of evaluating the correctness of SQL generated by a language model based on a human-provided description of the query, including a template for assessing the SQL's alignment with the intended question and example code for implementation.
- [Agent Tool Calling](https://docs.arize.com/arize/llm-evaluation-and-annotations/arize-evaluators-llm-as-a-judge/tool-calling-eval): The "Agent Tool Calling" documentation page explains how to evaluate an agent's tool selection, parameter extraction, and code generation for tool calls by analyzing input questions and the tools chosen, along with providing a prompt template for conducting such evaluations.
- [Agent Tool Selection](https://docs.arize.com/arize/llm-evaluation-and-annotations/arize-evaluators-llm-as-a-judge/agent-tool-selection): The Agent Tool Selection page provides guidance on evaluating how effectively an agent selects the appropriate tool to use based on specific inputs and criteria, focusing on whether the generated function call can adequately answer a given question.
- [Agent Parameter Extraction](https://docs.arize.com/arize/llm-evaluation-and-annotations/arize-evaluators-llm-as-a-judge/agent-parameter-extraction): The Agent Parameter Extraction documentation explains how to evaluate a model's accuracy in extracting parameters from user queries for tool calls, emphasizing the importance of matching function call parameters to a specified JSON schema and providing a grading template for evaluations.
- [Agent Path Convergence](https://docs.arize.com/arize/llm-evaluation-and-annotations/arize-evaluators-llm-as-a-judge/agent-path-convergence): The "Agent Path Convergence" documentation explains how to evaluate the efficiency of agent decision-making by calculating a convergence score based on the number of steps taken for similar queries, aiming for a consistent and optimal pathway.
- [Agent Planning](https://docs.arize.com/arize/llm-evaluation-and-annotations/arize-evaluators-llm-as-a-judge/agent-planning): The Agent Planning documentation provides a template for evaluating AI-generated plans based on their validity, efficiency, and suitability for accomplishing specific user tasks using available tools.
- [Agent Reflection](https://docs.arize.com/arize/llm-evaluation-and-annotations/arize-evaluators-llm-as-a-judge/agent-reflection): The documentation page outlines the "Agent Reflection" feature in Arize AI, which provides a prompt template for evaluating an agent's response by reflecting on its correctness, identifying errors, and generating instructions for future improvements.
- [How To: Labeling Queues](https://docs.arize.com/arize/llm-evaluation-and-annotations/how-to-labeling-queues): The "How To: Labeling Queues" documentation outlines the process for creating and managing labeling queues for data annotation by subject matter experts, enabling the collection of expert annotations to generate golden datasets and evaluate LLM outputs against human assessments.
- [How To: Annotating Spans](https://docs.arize.com/arize/llm-evaluation-and-annotations/annotations): The documentation page describes how to annotate spans in LLM applications by adding custom labels to traces, which enables AI engineers to categorize data, log human feedback, and enhance evaluation processes, with functionality accessible via both the user interface and an API.
- [Overview: Datasets & Experiments](https://docs.arize.com/arize/llm-datasets-and-experiments/datasets-and-experiments): The "Overview: Datasets & Experiments" documentation page explains how to systematically test and validate changes in LLM applications by creating datasets, defining tasks, and implementing evaluators for assessing output performance.
- [Quickstart: Datasets & Experiments](https://docs.arize.com/arize/llm-datasets-and-experiments/quickstart): This documentation page provides a quickstart guide for creating datasets and running experiments in Arize AI, including steps to install dependencies, define tasks and evaluation criteria, and view experiment results (a code sketch appears after this list).
- [How To: Create Datasets](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-datasets): This documentation page provides guidance on creating datasets in Arize, including methods to generate datasets from spans, CSV files, code, or synthetically with LLMs, as well as adding prompt templates and variables.
- [Create a dataset from your spans](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-datasets/create-a-dataset-from-your-spans): This documentation page explains how to create datasets in Arize by utilizing spans from your application, allowing for the curation of datasets using tracing filters and AI search.
- [Create a dataset from CSV](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-datasets/create-a-dataset-from-csv): This documentation page explains how to create a dataset in Arize AI from a CSV file, emphasizing that the CSV must include an id column to properly format the data for experiments or prompt playground use.
- [Create a dataset with code](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-datasets/create-a-dataset-with-code): This documentation page provides a guide on how to create datasets programmatically using the Arize Python SDK, including examples of simple datasets, datasets with prompts, and datasets with prompt variables.
- [Create a synthetic dataset using LLMs](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-datasets/create-a-synthetic-dataset-using-llms): The documentation page explains how to create a synthetic dataset using LLMs by generating examples based on prompts and uploading them to the Arize platform.
- [Add prompt templates and variables to your dataset](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-datasets/add-prompt-templates-and-variables-to-your-dataset): This documentation page explains how to add prompt templates and variables to your dataset for LLM applications, allowing for effective evaluation and testing by using structured JSON strings that conform to the OpenInference semantic conventions.
- [How to: Create Experiments](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-experiments): The documentation page provides a comprehensive guide on creating experiments in Arize, detailing processes for task creation, experiment evaluation, and result logging.
- [Create a Task](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-experiments/create-a-task): This documentation page explains how to create a task for use in experiments within the Arize AI platform, detailing the structure of a task function and providing code examples for retrieving dataset attributes and using an LLM to generate answers.
- [Create an Experiment Evaluator](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-experiments/create-an-experiment-evaluator): This documentation page describes how to create an experiment evaluator in Arize by writing functions to evaluate task outputs, detailing the types of evaluators available and the parameters and outputs supported.
- [LLM Eval Experiment](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-experiments/create-an-experiment-evaluator/llm-eval-experiment): The LLM Eval Experiment documentation provides guidance on utilizing LLM evaluators to assess experiment outputs, including the ability to customize evaluation templates and utilize built-in functionalities to check for hallucinations in generated responses.
- [Code Based Eval Experiment](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-experiments/create-an-experiment-evaluator/code-based-eval-experiment): The documentation page describes how to create and use code evaluators to assess the outputs of experiments in Arize, including the option to implement custom evaluators or utilize prebuilt functions for specific validation criteria.
- [Advanced: Create an Evaluator Class](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-experiments/create-an-experiment-evaluator/advanced-create-an-evaluator-class): The documentation page outlines how to create an evaluator class in Arize's Python SDK, allowing users to define custom evaluation logic for experiments by inheriting from the base Evaluator class and returning an EvaluationResult based on various input parameters.
- [Run an Experiment](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-experiments/run-an-experiment): The documentation page describes how to run an experiment using the Arize platform's Python SDK, outlining the necessary parameters and options for executing the experiment and logging results for further analysis.
- [Log Experiment Results](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-experiments/log-experiment-results): The "Log Experiment Results" documentation page instructs users on how to record and log experiment outcomes in the Arize platform using the Python SDK by mapping columns from their existing data and integrating evaluation results.
- [How To: Use Experiments](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-use-experiments): The "How To: Use Experiments" documentation page provides guidance on creating and managing experiments within Arize, including tasks such as comparing, filtering, downloading results, tracing experiments, and setting up asynchronous and automated experiments.
- [Compare & Filter Experiments](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-use-experiments/filter-experiments): The "Compare & Filter Experiments" documentation page details how users can view and compare different experiment results side by side, apply filters to target specific iterations, and construct queries in the filter bar to refine their comparisons.
- [Download Experiment Results](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-use-experiments/download-experiment-results): This documentation page explains how to download experiment results as a DataFrame object using the Arize Python client with relevant parameters for accessing specific experiments.
- [Trace an Experiment](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-use-experiments/trace-an-experiment): The "Trace an Experiment" documentation page provides guidance on how to instrument and trace experiments using Arize's tools, including explicit spans and auto-instrumentation with OpenTelemetry, enabling users to monitor and evaluate their machine learning experiments effectively.
- [Sample a Dataset for an Experiment](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-use-experiments/sample-a-dataset-for-an-experiment): The documentation page explains how to sample a dataset for experiments in Arize, allowing users to apply various sampling methods to a dataframe before running experiments, including random, stratified, and systematic sampling techniques.
- [Setup Asynchronous Experiments](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-use-experiments/run-async-vs-sync-tasks-and-evals): The "Setup Asynchronous Experiments" documentation page describes how to efficiently configure experiments in Arize AI to run asynchronously for faster execution compared to synchronous runs, providing code examples for both approaches.
- [CI/CD for Automated Experiments](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-use-experiments/ci-cd-for-automated-experiments): This documentation page details how to set up CI/CD pipelines for automated experiments in Arize, allowing for efficient testing and validation of changes to models and prompts by integrating a defined experiment file and workflow with platforms like GitHub Actions and GitLab CI/CD.
- [Github Action Basics](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-use-experiments/ci-cd-for-automated-experiments/github-action-basics): The "GitHub Action Basics" documentation outlines how to automate workflows in GitHub repositories using YAML-defined workflow files, detailing key concepts such as workflows, jobs, and steps, and how to set up events that trigger these workflows.
- [Gitlab CI/CD Basics](https://docs.arize.com/arize/llm-datasets-and-experiments/how-to-use-experiments/ci-cd-for-automated-experiments/gitlab-ci-cd-basics): The GitLab CI/CD Basics documentation provides an overview of how to automate workflows in a GitLab repository by defining pipelines, stages, jobs, and scripts, alongside instructions for creating a `.gitlab-ci.yml` file to implement these features.
- [Use Cases: Experiments](https://docs.arize.com/arize/llm-datasets-and-experiments/use-cases-experiments): The "Use Cases: Experiments" documentation page provides insights and guidance on utilizing Arize AI for creating, managing, and evaluating experiments with various datasets and metrics.
- [Summarization](https://docs.arize.com/arize/llm-datasets-and-experiments/use-cases-experiments/summarization): This documentation page provides a guide on creating a dataset and running an experiment for summarization tasks using the Arize SDK, including steps for summarizing articles, defining evaluators, and evaluating output with ROUGE metrics.
- [Text 2 SQL](https://docs.arize.com/arize/llm-datasets-and-experiments/use-cases-experiments/text-2-sql): The Text2SQL documentation outlines a comprehensive guide for implementing a Text-to-SQL solution using the NBA dataset, including setup instructions for datasets and evaluators, generating SQL queries from natural language questions, and running experiments with evaluation metrics to validate query correctness.
- [Quickstart: Prompt Engineering](https://docs.arize.com/arize/prompt-engineering/prompt-playground): The Quickstart: Prompt Engineering guide provides step-by-step instructions for using the Prompt Playground to debug and optimize LLM prompts through iterative testing, integrating datasets, and saving improved prompts as experiments for team collaboration.
- [How To: Prompt Playground](https://docs.arize.com/arize/prompt-engineering/how-to-prompt-playground): The "How To: Prompt Playground" documentation page provides guidance on utilizing the Prompt Playground by detailing features such as loading datasets, using tools, saving outputs as experiments, and enhancing prompts with images.
- [Production Replay](https://docs.arize.com/arize/prompt-engineering/how-to-prompt-playground/production-replay): The Production Replay feature allows users to load spans from LLM Tracing into the Prompt Playground for iterative testing and refinement of prompts, automatically populating relevant LLM parameters and settings for precise replay.
- [Load a Dataset into Playground](https://docs.arize.com/arize/prompt-engineering/how-to-prompt-playground/load-a-dataset-into-playground): The "Load a Dataset into Playground" documentation page explains how to utilize curated datasets for testing and evaluating prompts in the Arize Playground, allowing users to compare outputs and ensure improved performance without regressing on key scenarios.
- [Using Tools in Playground](https://docs.arize.com/arize/prompt-engineering/how-to-prompt-playground/using-tools-in-playground): The "Using Tools in Playground" documentation provides guidance on leveraging the playground interface for debugging LLM (Large Language Model) tool calls, enabling users to define, modify, and test functions while facilitating rapid iteration and comparison of outputs in a structured JSON format.
- [✨AI Powered Prompt Builder](https://docs.arize.com/arize/prompt-engineering/how-to-prompt-playground/ai-powered-prompt-builder): The AI Powered Prompt Builder documentation provides guidance on optimizing and fine-tuning prompts to enhance response quality in various applications.
- [Save Playground Outputs as an Experiment](https://docs.arize.com/arize/prompt-engineering/how-to-prompt-playground/save-playground-outputs-as-an-experiment): The documentation page explains how to save outputs from the Prompt Playground as an experiment in Arize AI for further analysis and comparison, enabling efficient collaboration and decision-making based on both qualitative and quantitative metrics.
- [Adding Images in the Playground](https://docs.arize.com/arize/prompt-engineering/how-to-prompt-playground/adding-images-in-the-playground): The documentation page explains how to add images to your playground runs by inserting a user prompt block, adding an image variable with a URL or base64 string, and running the prompt to process the image alongside text input.
- [Prompt Hub](https://docs.arize.com/arize/prompt-engineering/prompt-hub): The Prompt Hub is a centralized repository within the Arize platform that allows users to manage, iterate, and deploy prompt templates for various applications, featuring version control, collaboration, and evaluation capabilities to enhance prompt workflows.
- [Integrations: Playground](https://docs.arize.com/arize/prompt-engineering/integrations-playground): The Integrations: Playground documentation provides guidance on integrating various AI models and services, including OpenAI, Azure, AWS Bedrock, and custom LLM models, into the Arize ecosystem for enhanced experimentation and evaluation.
- [OpenAI](https://docs.arize.com/arize/prompt-engineering/integrations-playground/openai): This documentation page provides guidance on integrating OpenAI models with the Arize platform, including adding your OpenAI key and details on the supported models for various actions within Arize.
- [Azure OpenAI](https://docs.arize.com/arize/prompt-engineering/integrations-playground/azure-openai): This documentation page outlines the process for integrating Azure OpenAI with the Arize platform, including instructions for entering API keys and configuring settings for the Prompt Playground.
- [AWS Bedrock](https://docs.arize.com/arize/prompt-engineering/integrations-playground/aws-bedrock): The AWS Bedrock documentation outlines the steps to integrate Arize AI with AWS Bedrock by creating an IAM role that allows Arize to assume permissions for invoking Bedrock models, while ensuring data security and compliance.
- [VertexAI](https://docs.arize.com/arize/prompt-engineering/integrations-playground/vertexai): This documentation page provides a comprehensive guide for integrating Google VertexAI with Arize, detailing steps to create an integration key, set up IAM roles, and configure necessary project settings to enable API access for model usage.
- [Custom LLM Models](https://docs.arize.com/arize/prompt-engineering/integrations-playground/custom-llm-models): The "Custom LLM Models" documentation page provides instructions on how to add custom model endpoints to Arize AI's prompt playground, allowing users to utilize any model with an OpenAI-compatible API.
- [Overview: LLM Monitoring](https://docs.arize.com/arize/llm-monitoring-and-guardrails/production-monitoring): The LLM Monitoring documentation on Arize provides guidelines for creating dashboards to track key metrics and setting up monitors to receive notifications about potential issues in LLM applications.
- [Monitoring Token Counts](https://docs.arize.com/arize/llm-monitoring-and-guardrails/token-counting): The "Monitoring Token Counts" documentation page explains how to use token counts in Arize AI to identify problematic traces, analyze long-running conversations, and monitor prompt variable usage, with an emphasis on computing and aggregating token counts at both the span and trace levels.
- [Guardrails](https://docs.arize.com/arize/llm-monitoring-and-guardrails/guardrails): The documentation page for Arize's Guardrails outlines the essential features, types of guards, and implementation details for ensuring the safety and compliance of Large Language Models (LLMs) in real-time by monitoring and correcting inappropriate content.
- [LLM Red Teaming](https://docs.arize.com/arize/llm-monitoring-and-guardrails/llm-red-teaming): The LLM Red Teaming documentation details a systematic approach to identifying and addressing vulnerabilities in AI systems through the use of simulated adversarial inputs, guiding users on implementing comprehensive red teaming using the Arize AI platform.
- [Integrations: Monitors](https://docs.arize.com/arize/llm-monitoring-and-guardrails/integrations-monitors): The "Integrations: Monitors" page in the Arize documentation describes how to set up and utilize various monitoring integrations, including Slack, OpsGenie, and PagerDuty, to effectively track and manage model performance and alerts.
- [Slack](https://docs.arize.com/arize/llm-monitoring-and-guardrails/integrations-monitors/slack): The Arize AI documentation page outlines how to set up and manage Slack integrations for alerting, enabling streamlined troubleshooting workflows by sending alert notifications directly to selected Slack channels.
- [Manual Setup](https://docs.arize.com/arize/llm-monitoring-and-guardrails/integrations-monitors/slack/onprem): This documentation page provides a manual setup guide for integrating Arize AI model monitor notifications with Slack, allowing automatic alerts to be sent to designated Slack channels.
- [OpsGenie](https://docs.arize.com/arize/llm-monitoring-and-guardrails/integrations-monitors/opsgenie): Arize's OpsGenie integration allows for streamlined alert management by sending comprehensive metadata to the OpsGenie platform, enabling quicker model troubleshooting and customizable alerting setups at both the organization and model levels.
- [PagerDuty](https://docs.arize.com/arize/llm-monitoring-and-guardrails/integrations-monitors/pagerduty): The documentation page outlines how to integrate Arize AI with PagerDuty, detailing the benefits, setup process, and configurations for sending alerts based on machine learning model monitors.
- [Custom Metrics](https://docs.arize.com/arize/llm-monitoring-and-guardrails/custom-metrics-api): The Custom Metrics documentation page explains how to programmatically query, create, and update tailored metrics using Arize Query Language (AQL) to evaluate and monitor specific performance aspects of applications.
- [Arize Query Language Syntax](https://docs.arize.com/arize/llm-monitoring-and-guardrails/custom-metrics-api/custom-metric-syntax): The Arize Query Language Syntax documentation provides a detailed overview of the syntax and components for creating custom metrics using a SQL-like query language in Arize, including SELECT statements, expressions, and dimensions.
- [Conditionals and Filters](https://docs.arize.com/arize/llm-monitoring-and-guardrails/custom-metrics-api/custom-metric-syntax/conditionals-and-filters): The "Conditionals and Filters" documentation page provides an overview of how to utilize conditional logic and filtering expressions within the Arize Query Language, including examples of CASE statements and WHERE clauses to refine data queries.
- [All Operators](https://docs.arize.com/arize/llm-monitoring-and-guardrails/custom-metrics-api/custom-metric-syntax/all-operators): The "All Operators" section of the Arize documentation provides an overview of numeric and comparison operators available in the Arize Query Language, detailing their functions and applications to assist users in performing operations on values within the platform.
- [All Functions](https://docs.arize.com/arize/llm-monitoring-and-guardrails/custom-metrics-api/custom-metric-syntax/all-functions): This documentation page provides a comprehensive reference of all aggregation and metric functions available in Arize, detailing their syntax, descriptions, and use cases for customizing metrics in machine learning evaluations.
- [Custom Metric Examples](https://docs.arize.com/arize/llm-monitoring-and-guardrails/custom-metrics-api/custom-metric-examples): The "Custom Metric Examples" documentation page provides guidance on creating and utilizing custom metrics to assess various aspects of LLM applications, including performance evaluation, cost analysis, and tracking errors, along with specific SQL query examples for implementation (an illustrative AQL sketch appears after this list).
- [✨ArizeQL Generator](https://docs.arize.com/arize/llm-monitoring-and-guardrails/custom-metrics-api/arizeql-generator): The ArizeQL Generator documentation provides guidance on creating custom metrics using Arize Query Language (AQL) with support for natural language input and code translation for metrics development.
- [Dashboards](https://docs.arize.com/arize/llm-monitoring-and-guardrails/dashboards): The Dashboards documentation page outlines how to create and utilize customizable dashboards in Arize AI to monitor key metrics such as token usage, performance, and experiment results for LLM applications, offering various widget types for enhanced visualization and insights.
- [Dashboard Widgets](https://docs.arize.com/arize/llm-monitoring-and-guardrails/dashboards/widgets): Dashboard widgets in Arize allow users to customize their dashboards with interactive tiles for tracking experiments, visualizing metrics over time, analyzing data distributions, monitoring key performance statistics, and adding contextual text.
- [✨Dashboard Widget Creation](https://docs.arize.com/arize/llm-monitoring-and-guardrails/dashboards/dashboard-widget-creation): The Dashboard Widget Creation documentation page provides guidance on using Arize's Copilot feature to easily create customizable dashboard widgets by selecting from suggestions or describing the desired widget, with options for real-time adjustments and multiple plot generation.
- [How to: CV](https://docs.arize.com/arize/computer-vision-cv/how-to-cv): The "How to: CV" documentation page provides guidance on generating and analyzing embeddings in computer vision applications, including features such as similarity search and drift analysis.
- [Generate Embeddings](https://docs.arize.com/arize/computer-vision-cv/how-to-cv/7.-troubleshoot-embedding-data): The "Generate Embeddings" documentation page from Arize explains the importance of embeddings in deep learning, outlines two methods for generating embeddings (bringing your own or having Arize generate them), and highlights their role in analyzing unstructured data and measuring data drift.
- [How to Generate Your Own Embedding](https://docs.arize.com/arize/computer-vision-cv/how-to-cv/7.-troubleshoot-embedding-data/how-to-generate-your-own-embedding): The documentation page provides a guide on generating your own embeddings using pre-trained models, detailing processes for various use cases in computer vision and natural language processing.
- [Let Arize Generate Your Embeddings](https://docs.arize.com/arize/computer-vision-cv/how-to-cv/7.-troubleshoot-embedding-data/let-arize-generate-your-embeddings): The documentation page explains how to use Arize's Auto-Embeddings feature within the Python SDK to automatically generate and manage embeddings from various input data types, leveraging pre-trained models to facilitate seamless integration into your workflows.
- [Embedding & Cluster Analyzer](https://docs.arize.com/arize/computer-vision-cv/how-to-cv/embedding-and-cluster-analyzer): The Embedding & Cluster Analyzer documentation provides an overview of using UMAP for dimensionality reduction and clustering to visualize embeddings, identify patterns in data, and improve model performance through analysis of clusters based on various metrics.
- [✨Embedding Summarization](https://docs.arize.com/arize/computer-vision-cv/how-to-cv/embedding-summarization): The Embedding Summarization feature in Arize AI's Copilot simplifies the analysis of unstructured data by automatically identifying patterns in embeddings, allowing users to visualize clusters and generate summary reports without extensive manual exploration.
- [Similarity Search](https://docs.arize.com/arize/computer-vision-cv/how-to-cv/similarity-search): The Similarity Search feature in Arize allows users to find items similar to a set of reference embeddings using cosine similarity, supporting both image and text embeddings through API integration and a user-friendly interface (illustrative sketch below).
- [Embedding Drift](https://docs.arize.com/arize/computer-vision-cv/how-to-cv/embedding-drift): The "Embedding Drift" documentation explains how Arize AI calculates changes in unstructured data relationships by comparing embedding vectors over time to detect drift, utilizing Euclidean distance for analysis, and provides guidance on setting up automated drift monitors.
- [Embeddings FAQ](https://docs.arize.com/arize/computer-vision-cv/how-to-cv/embeddings-faq): The "Embeddings FAQ" documentation page from Arize AI provides essential information on generating, sending, and monitoring embeddings, including explanations on embedding dimensionality, usage guidelines for vectors, and methods for drift detection using Euclidean distance.
- [Use Cases: CV](https://docs.arize.com/arize/computer-vision-cv/use-cases-cv): The "Use Cases: CV" documentation page from Arize AI outlines various applications of computer vision technology, specifically focusing on image classification and object detection.
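
Both measures named above are standard vector math. A minimal NumPy sketch, where the random 768-dimensional vectors stand in for real embeddings:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity, the measure the Similarity Search page is built on."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance, which the Embedding Drift page uses to compare
    embedding vectors between a baseline and a current window."""
    return float(np.linalg.norm(a - b))

baseline_centroid = np.random.rand(768)  # e.g. mean embedding of a baseline set
current_centroid = np.random.rand(768)   # mean embedding of recent production data

print(cosine_similarity(baseline_centroid, current_centroid))
print(euclidean_distance(baseline_centroid, current_centroid))
```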
- [Image Classification](https://docs.arize.com/arize/computer-vision-cv/use-cases-cv/computer-vision-cv): The documentation page provides an overview of image classification models in Arize, including how to log image data and embedding features, as well as performance metrics for evaluating model accuracy.
- [Object Detection](https://docs.arize.com/arize/computer-vision-cv/use-cases-cv/object-detection): The documentation page provides an overview of how to log data for object detection models in Arize, including details on required schemas, supported metrics, and example code for integration.
- [Integrations: CV](https://docs.arize.com/arize/computer-vision-cv/integrations-cv): The "Integrations: CV" documentation page outlines how Arize integrates with various platforms across the MLOps toolchain, enabling functionalities such as model observability and monitoring for different machine learning applications.
- [API Reference: CV](https://docs.arize.com/arize/computer-vision-cv/api-reference-cv): The API Reference: CV documentation page provides detailed information on functionalities related to computer vision, including generating embeddings, similarity searches, and integrations with various platforms.
- [User Guide: ML](https://docs.arize.com/arize/machine-learning/what-is-ml-observability): The ML User Guide from Arize provides comprehensive resources and best practices for monitoring, evaluating, and improving machine learning models throughout their lifecycle, emphasizing the importance of observability for understanding model performance and addressing issues effectively.
- [Quickstart: ML](https://docs.arize.com/arize/machine-learning/quickstart): The "Quickstart: ML" documentation provides a step-by-step guide on how to install Arize, log data, visualize model performance, and set up monitoring for machine learning models.
- [Concepts: ML](https://docs.arize.com/arize/machine-learning/concepts-ml): The Arize ML documentation provides comprehensive guidance on logging ML model inferences, integrating observability into existing workflows, and emphasizes its platform-agnostic capabilities to enhance model performance analysis in training, validation, and production environments.
- [What Is A Model Schema](https://docs.arize.com/arize/machine-learning/concepts-ml/model-schema-reference): The Arize documentation page on Model Schema explains the structure used to organize model data, including essential fields like model name, version, environment, features, predictions, timestamps, and tags, facilitating effective data ingestion and management within machine learning workflows.
- [Delayed Actuals and Tags](https://docs.arize.com/arize/machine-learning/concepts-ml/how-to-send-delayed-actuals): The "Delayed Actuals and Tags" documentation page outlines how to connect model predictions to delayed ground truth data by using the Arize joiner, which matches delayed actuals to predictions based on a unique prediction ID, along with guidelines for handling delayed tags to provide additional context for model evaluations.
- [ML Glossary](https://docs.arize.com/arize/machine-learning/concepts-ml/glossary): The ML Glossary page provides definitions and explanations of key terminology related to machine learning, data science, and model performance monitoring.
- [How To: ML](https://docs.arize.com/arize/machine-learning/how-to-ml): The documentation page provides comprehensive guidance on machine learning activities within the Arize AI platform, including data upload, monitor configuration, drift tracing, and various ML use cases.
- [Upload Data to Arize](https://docs.arize.com/arize/machine-learning/how-to-ml/upload-data-to-arize): The documentation page explains various methods for uploading inference data to Arize, including using the Pandas SDK, local file uploads, and various cloud storage options.
- [Pandas SDK Example](https://docs.arize.com/arize/machine-learning/how-to-ml/upload-data-to-arize/log-directly-via-sdk-api): The Pandas SDK Example documentation explains how to log model inference data to Arize using the Python Pandas SDK, covering setup, schema attributes, and the logging process (illustrative sketch below).
- [Local File Upload](https://docs.arize.com/arize/machine-learning/how-to-ml/upload-data-to-arize/ui-drag-and-drop): The Local File Upload documentation describes the process for users to upload files in CSV, Parquet, or Avro formats to Arize by dragging and dropping or selecting files, configuring model schema parameters, and monitoring job status.
- [File Upload FAQ](https://docs.arize.com/arize/machine-learning/how-to-ml/upload-data-to-arize/ui-drag-and-drop/ui-drag-and-drop-faq): The File Upload FAQ page provides information on the validation process, troubleshooting details, and error reporting for uploading files to Arize AI.
- [Table Ingestion Tuning](https://docs.arize.com/arize/machine-learning/how-to-ml/upload-data-to-arize/table-ingestion-tuning): The "Table Ingestion Tuning" documentation page outlines parameters for optimizing data ingestion from tables in Arize, including query cadence, query window size, and row limits to control data frequency and volume during ingestion.
- [Wildcard Paths for Cloud Storage](https://docs.arize.com/arize/machine-learning/how-to-ml/upload-data-to-arize/wildcard-paths-for-cloud-storage): The documentation explains how to use wildcard paths in cloud storage for flexible file and directory matching, detailing placement rules, limitations on layers, and examples of valid and invalid usage.
- [Troubleshoot Data Upload](https://docs.arize.com/arize/machine-learning/how-to-ml/upload-data-to-arize/faq-and-troubleshoot-data-upload): The "Troubleshoot Data Upload" documentation provides step-by-step guidance for diagnosing and resolving common issues related to data ingestion in Arize, including verification of data volume, checking for errors in features and columns, and ensuring proper mappings for predictions and actuals.
- [Sending Data FAQ](https://docs.arize.com/arize/machine-learning/how-to-ml/upload-data-to-arize/sending-data-faq): The "Sending Data FAQ" documentation page provides guidance on handling delayed actuals, resolving data schema issues, managing timestamps, and best practices for uploading data to the Arize platform, ensuring accurate model performance tracking and data quality management.
- [Monitors](https://docs.arize.com/arize/machine-learning/how-to-ml/monitors): The Arize documentation on monitors provides guidance on various ML monitor types, how to configure them, programmatically create monitors, and best practices for effective monitoring.
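
A minimal sketch of the batch-logging flow the Pandas SDK Example page walks through; credentials, model names, and columns are placeholders, and the constructor arguments should be verified against the SDK reference for your installed version.

```python
import pandas as pd
from arize.pandas.logger import Client, Schema
from arize.utils.types import ModelTypes, Environments

# Placeholder credentials; real values come from the Arize space settings page.
client = Client(space_id="YOUR_SPACE_ID", api_key="YOUR_API_KEY")

df = pd.DataFrame({
    "prediction_id": ["a1", "a2"],
    "prediction_label": ["fraud", "not_fraud"],
    "actual_label": ["fraud", "fraud"],
    "merchant_type": ["online", "retail"],
})

# The schema maps DataFrame columns to the roles Arize expects.
schema = Schema(
    prediction_id_column_name="prediction_id",
    prediction_label_column_name="prediction_label",
    actual_label_column_name="actual_label",
    feature_column_names=["merchant_type"],
)

response = client.log(
    dataframe=df,
    schema=schema,
    model_id="fraud-model",
    model_version="v1",
    model_type=ModelTypes.BINARY_CLASSIFICATION,
    environment=Environments.PRODUCTION,
)
assert response.status_code == 200, response.text
```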
- [ML Monitor Types](https://docs.arize.com/arize/machine-learning/how-to-ml/monitors/setup): The documentation page outlines the various monitor types available in Arize, including performance, drift, data quality, and custom metrics, along with detailed descriptions of the metrics applicable to different model types and their use cases.
- [Configure Monitors](https://docs.arize.com/arize/machine-learning/how-to-ml/monitors/configure-monitors): The "Configure Monitors" documentation page provides guidance on how to set up and customize managed and custom monitors in Arize, detailing aspects like evaluation windows, alert thresholds, baseline settings, and notification integration to effectively manage model performance and detect issues.
- [Notifications Providers](https://docs.arize.com/arize/machine-learning/how-to-ml/monitors/configure-monitors/notifications-and-integrations): The documentation page provides a comprehensive overview of configuring monitor statuses, setting up alerting systems, and scheduling downtime for model evaluations in Arize, including integrations with various alerting tools.
- [Programmatically Create Monitors](https://docs.arize.com/arize/machine-learning/how-to-ml/monitors/monitors-api): The "Programmatically Create Monitors" documentation page outlines how to use the Arize GraphQL API to create, query, update, and manage monitors that track model performance and data quality, enabling automated monitoring workflows tailored to specific use cases.
- [Best Practices for Monitors](https://docs.arize.com/arize/machine-learning/how-to-ml/monitors/choosing-your-metrics): The "Best Practices for Monitors" documentation outlines the essential strategies for effectively monitoring machine learning models, emphasizing the importance of tracking performance, drift, data quality, and custom metrics to ensure predictive accuracy over time.
- [Dashboards](https://docs.arize.com/arize/machine-learning/how-to-ml/dashboards): The Dashboards documentation page provides guidance on using dashboards in Arize AI to visualize, share, and analyze key model health metrics, including instructions on creating dashboards through the UI or GraphQL and exporting dashboard data for further analysis.
- [Dashboard Widgets](https://docs.arize.com/arize/machine-learning/how-to-ml/dashboards/widgets): The Dashboard Widgets documentation page provides guidance on customizing dashboards in Arize AI using various widget types for visualizing model performance, feature analysis, and monitoring metrics.
- [Dashboard Templates](https://docs.arize.com/arize/machine-learning/how-to-ml/dashboards/templates): The "Dashboard Templates" section of Arize documentation provides quick and easy-to-use templates for creating dashboards that track model performance, pre-production data issues, feature impacts, and drift analysis.
- [Model Performance](https://docs.arize.com/arize/machine-learning/how-to-ml/dashboards/templates/model-performance): The "Model Performance" documentation page provides templates and guidance for tracking the health of machine learning models through performance dashboards, highlighting key metrics and comparison tools for regression, classification, and ranking models.
- [Pre-Production Performance](https://docs.arize.com/arize/machine-learning/how-to-ml/dashboards/templates/pre-production-performance): The Pre-Production Performance documentation provides templates and guidance for creating dashboards to analyze model training and validation data, including key statistics and visualizations for evaluating performance.
- [Feature Analysis](https://docs.arize.com/arize/machine-learning/how-to-ml/dashboards/templates/feature-analysis): The Feature Analysis documentation provides guidance on creating feature-oriented dashboards that help in analyzing model features, detecting issues, and identifying opportunities for improvements and retraining.
- [Drift](https://docs.arize.com/arize/machine-learning/how-to-ml/dashboards/templates/drift): The Drift Templates documentation provides guidance on creating dashboards to proactively monitor model health, showcasing recent updates and alerts through specific drift metrics and visualizations.
- [Programmatically Create Dashboards](https://docs.arize.com/arize/machine-learning/how-to-ml/dashboards/programmatically-create-dashboards): This documentation page provides guidance on programmatically creating dashboards in Arize, detailing methods for setting up various types of widgets using mutations, such as distribution, time series, drift, monitor, statistic, and text widgets, while highlighting the advantages of programmatic creation for scalability and customization.
- [Performance Tracing](https://docs.arize.com/arize/machine-learning/how-to-ml/performance-tracing): The Performance Tracing documentation provides guidance on troubleshooting performance monitors, visualizing model performance metrics over time, and analyzing specific performance issues using various views such as slice, table, and embeddings projector.
- [✨AI Powered Performance Insights](https://docs.arize.com/arize/machine-learning/how-to-ml/performance-tracing/ai-powered-performance-insights): The "AI Powered Performance Insights" documentation provides high-level analysis tools for evaluating model performance metrics, including trends, prediction volumes, and performance across specific data segments or cohorts.
- [Drift Tracing](https://docs.arize.com/arize/machine-learning/how-to-ml/drift-tracing): The Drift Tracing documentation provides guidance on troubleshooting model drift by analyzing prediction and feature drift, utilizing baseline comparisons, and addressing issues such as delayed actuals to improve model performance.
- [✨AI Powered Drift Insights](https://docs.arize.com/arize/machine-learning/how-to-ml/drift-tracing/ai-powered-drift-insights): The "AI Powered Drift Insights" documentation provides guidance on detecting and analyzing data drift in your models by comparing current input distributions to a baseline to identify significant shifts.
- [Data Distribution Visualization](https://docs.arize.com/arize/machine-learning/how-to-ml/drift-tracing/data-distribution-visualization): The "Data Distribution Visualization" documentation provides guidance on configuring data visualization for distribution comparisons, offering various binning methods for numeric features and outlining their appropriate use cases to enhance drift monitoring and PSI calculations (illustrative sketch below).
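
The binning configured on the distribution-visualization page ultimately feeds a drift statistic such as the Population Stability Index. A rough NumPy sketch of that calculation, using synthetic data:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index over a shared binning: a baseline
    distribution versus a current production distribution."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # A small floor keeps empty bins from producing log(0) or division by zero.
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_pct = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

baseline = np.random.normal(0.0, 1.0, 10_000)   # training-time feature values
production = np.random.normal(0.3, 1.0, 10_000) # shifted production values
print(psi(baseline, production))
```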
- [Embeddings for Tabular Data (Multivariate Drift)](https://docs.arize.com/arize/machine-learning/how-to-ml/drift-tracing/embeddings-for-tabular-data-multivariate-drift): The documentation page explains how to generate and use embeddings from tabular data for monitoring multivariate drift, detailing the steps to select data columns, choose a model, and log the generated embeddings to Arize for further analysis.
- [Custom Metrics](https://docs.arize.com/arize/machine-learning/how-to-ml/custom-metrics-api): The documentation page provides detailed instructions on how to create, update, and query custom metrics within Arize using GraphQL, along with guidelines for creating performance monitors for these metrics.
- [Arize Query Language Syntax](https://docs.arize.com/arize/machine-learning/how-to-ml/custom-metrics-api/custom-metric-syntax): The Arize Query Language Syntax documentation provides an overview of the SQL-like syntax used to create custom metrics, detailing the structure of SELECT statements, filters, dimensions, and expressions within the query language.
- [Conditionals and Filters](https://docs.arize.com/arize/machine-learning/how-to-ml/custom-metrics-api/custom-metric-syntax/conditionals-and-filters): The "Conditionals and Filters" documentation page provides an overview of how to use conditional expressions like CASE and WHERE clauses in Arize Query Language to specify logic and filter subsets of data effectively.
- [All Operators](https://docs.arize.com/arize/machine-learning/how-to-ml/custom-metrics-api/custom-metric-syntax/all-operators): The "All Operators" documentation page provides an overview of operators in the Arize Query Language, detailing numeric and comparison operators that can be applied to various data dimensions and types.
- [All Functions](https://docs.arize.com/arize/machine-learning/how-to-ml/custom-metrics-api/custom-metric-syntax/all-functions): This documentation page provides an overview and reference for all available aggregation and metric functions used in Arize AI for custom metric creation, detailing their syntax and functionality.
- [Custom Metric Examples](https://docs.arize.com/arize/machine-learning/how-to-ml/custom-metrics-api/custom-metric-examples): The "Custom Metric Examples" documentation page provides a guide on creating various customized metrics for evaluating machine learning models, showcasing specific examples for business and performance metrics that can be tailored to specific needs.
- [Custom Metrics Query Language](https://docs.arize.com/arize/machine-learning/how-to-ml/custom-metrics-api/12.-custom-metrics): The Custom Metrics Query Language documentation provides guidelines for defining tailored metrics for machine learning needs using an SQL-like syntax, allowing users to create and utilize custom metrics in dashboards, monitors, and performance tracing within the Arize platform.
- [✨ArizeQL Generator](https://docs.arize.com/arize/machine-learning/how-to-ml/custom-metrics-api/arizeql-generator): The ArizeQL Generator documentation page provides guidance on creating custom metrics by allowing users to describe their desired metrics or provide existing code, which the Copilot then translates into Arize Query Language (AQL) for application and saving.
- [Troubleshoot Data Quality](https://docs.arize.com/arize/machine-learning/how-to-ml/data-quality-troubleshooting): The "Troubleshoot Data Quality" documentation page provides guidance on identifying and resolving data quality issues that can negatively impact machine learning model performance by analyzing metrics related to data integrity and common root causes.
- [✨AI Powered Data Quality Insights](https://docs.arize.com/arize/machine-learning/how-to-ml/data-quality-troubleshooting/ai-powered-data-quality-insights): The "AI Powered Data Quality Insights" documentation page provides an overview of tools and prompts to analyze data quality, identify missing data, assess feature performance, evaluate distribution shifts, and review cardinality trends to help machine learning engineers debug potential data quality issues.
- [Explainability](https://docs.arize.com/arize/machine-learning/how-to-ml/explainability): The Explainability section of Arize's documentation provides tools and methodologies for users to understand and analyze feature importance in their models, leveraging SHAP values and other techniques to ensure transparency, improve model trustworthiness, and facilitate performance refinement.
- [Interpreting & Analyzing Feature Importance Values](https://docs.arize.com/arize/machine-learning/how-to-ml/explainability/interpreting-and-analyzing-feature-importance-values): The "Interpreting & Analyzing Feature Importance Values" documentation page explains how to use SHAP values to assess and visualize the importance of features in machine learning models, including methods for analyzing global, cohort, and local feature importance to troubleshoot performance and drift.
- [SHAP](https://docs.arize.com/arize/machine-learning/how-to-ml/explainability/shap): The SHAP documentation page explains the use of Shapley Additive Explanations for interpreting individual predictions of complex models, detailing methods like TreeSHAP for tree-based models and KernelSHAP for broader application, along with practical code examples for implementation (illustrative sketch below).
- [Surrogate Model](https://docs.arize.com/arize/machine-learning/how-to-ml/explainability/surrogate-model): The Surrogate Model documentation describes how to use interpretable surrogate models to approximate the predictions of black box models and generate SHAP values, thereby facilitating model explainability by logging relevant feature importance values via the Arize Python SDK.
- [Explainability FAQ](https://docs.arize.com/arize/machine-learning/how-to-ml/explainability/explainability-faq): The Explainability FAQ page provides insights into the workings of the surrogate explainer for classification and regression models, detailing its reliance on LightGBM to generate SHAP values that indicate feature importance, while also addressing the impact of data volume on the reliability of these values.
- [Model Explainability](https://docs.arize.com/arize/machine-learning/how-to-ml/explainability/explainability): The Arize documentation on Model Explainability outlines how the platform helps users understand model predictions through feature importance visualization and provides methods for ingesting this data.
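
The open-source `shap` package (used here directly, not via Arize's ingestion path) illustrates the TreeSHAP workflow the SHAP page covers. A minimal sketch on a synthetic classifier:

```python
import shap
import xgboost
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = xgboost.XGBClassifier(n_estimators=50).fit(X, y)

# TreeExplainer implements TreeSHAP for tree-based models;
# shap.KernelExplainer is the model-agnostic alternative the page mentions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Each row of SHAP values explains one prediction (local importance);
# column-wise mean absolute values give a rough global importance ranking.
print(abs(shap_values).mean(axis=0))
```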
- [Bias Tracing (Fairness)](https://docs.arize.com/arize/machine-learning/how-to-ml/11.-bias-tracing-fairness): The Bias Tracing documentation provides a comprehensive overview of how to analyze and address model bias, focusing on fairness metrics, the four-fifths rule, and tools for comparing bias across datasets and sensitive groups within machine learning models (illustrative sketch below).
- [Export Data to Notebook](https://docs.arize.com/arize/machine-learning/how-to-ml/export-data-to-notebook): The "Export Data to Notebook" documentation page explains how users can easily share and analyze data from Arize in a notebook environment to facilitate further investigation and retraining workflows, utilizing either a one-click export feature or direct queries with the Arize Python export client.
- [Automate Model Retraining](https://docs.arize.com/arize/machine-learning/how-to-ml/automate-model-retraining): The "Automate Model Retraining" documentation page provides a guide on how to set up and configure automated model retraining in Arize, including when to use it, the steps involved in the process, and information on supported integrations.
- [ML FAQ](https://docs.arize.com/arize/machine-learning/how-to-ml/product): The ML FAQ documentation page provides answers to common questions about the Arize AI platform, including supported data types, model types, outlier detection, performance metrics, and how to monitor feature impact and drift.
- [Use Cases: ML](https://docs.arize.com/arize/machine-learning/use-cases-ml): The "Use Cases: ML" documentation page provides an overview of various applications of machine learning, including binary classification, multi-class classification, regression, time series forecasting, ranking, and natural language processing, along with insights into industry-specific use cases.
- [Binary Classification](https://docs.arize.com/arize/machine-learning/use-cases-ml/binary-classification): The "Binary Classification" documentation page provides guidelines and code examples for logging and evaluating binary classification models in Arize, detailing different cases based on the metrics used, such as classification accuracy, AUC, and log loss.
- [Fraud](https://docs.arize.com/arize/machine-learning/use-cases-ml/binary-classification/fraud): This documentation page provides a comprehensive guide on how to use the Arize platform to monitor and improve the performance of fraud detection models, including setting up baselines, monitoring key performance metrics, detecting data drift, ensuring data quality, and customizing dashboards for effective analysis.
- [Insurance](https://docs.arize.com/arize/machine-learning/use-cases-ml/binary-classification/insurance): The documentation page provides an overview of using Arize for monitoring and analyzing machine learning models in the insurance domain, focusing on automatic setup of monitors for drift, data quality, and performance issues, as well as visualizing model and feature drift to enhance model insights and iteration.
- [Multi-Class Classification](https://docs.arize.com/arize/machine-learning/use-cases-ml/multiclass-classification): The Multi-Class Classification documentation provides guidance on how to log and evaluate models with more than two classes, including methods for single-label and multi-label use cases, supported metrics, and examples using the Arize AI platform.
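
The four-fifths rule named in the Bias Tracing entry reduces to a simple ratio check between group selection rates. A worked sketch with hypothetical numbers:

```python
def disparate_impact(rate_sensitive: float, rate_reference: float) -> float:
    """Ratio of positive-outcome rates between a sensitive group and a
    reference group; the four-fifths rule flags values below 0.8."""
    return rate_sensitive / rate_reference

# Hypothetical numbers: 30% approval for the sensitive group versus 50% for
# the reference group gives a ratio of 0.6, which the rule would flag.
ratio = disparate_impact(0.30, 0.50)
print(f"ratio={ratio:.2f}, violates four-fifths rule: {ratio < 0.8}")
```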
- [Regression](https://docs.arize.com/arize/machine-learning/use-cases-ml/regression): The Regression documentation page provides an overview of regression models, performance metrics, and code examples for logging regression model data using the Arize AI platform.
- [Lending](https://docs.arize.com/arize/machine-learning/use-cases-ml/regression/lending): This documentation page provides a comprehensive guide on using the Arize platform for monitoring and analyzing the performance of lending prediction models, including setting up baselines, monitoring for drift, assessing data quality, and customizing dashboards for key performance metrics.
- [Customer Lifetime Value](https://docs.arize.com/arize/machine-learning/use-cases-ml/regression/customer-lifetime-value): This documentation page provides an overview of using Arize for customer lifetime value (LTV) models, including setting up monitoring baselines, analyzing model performance and drift with visual tools, and insights for troubleshooting low performing cohorts.
- [Click-Through Rate](https://docs.arize.com/arize/machine-learning/use-cases-ml/regression/click-through-rate): This documentation page provides a comprehensive guide on using the Arize platform to monitor and analyze the performance of Click-Through Rate (CTR) models, including steps for setting up baselines, monitoring performance, detecting data drift, and utilizing actionable insights to improve model effectiveness.
- [Timeseries Forecasting](https://docs.arize.com/arize/machine-learning/use-cases-ml/timeseries-forecasting): The Timeseries Forecasting documentation page provides a comprehensive guide on how to establish a time series forecasting model within the Arize platform, including code examples, performance metrics, and common observability data related to model predictions.
- [Demand Forecasting](https://docs.arize.com/arize/machine-learning/use-cases-ml/timeseries-forecasting/demand-forecasting): This documentation page provides an overview of using Arize for demand forecasting, including the setup of customizable performance dashboards, monitoring prediction bias, investigating feature drift, and analyzing model performance to aid in model troubleshooting and improvement.
- [Churn Forecasting](https://docs.arize.com/arize/machine-learning/use-cases-ml/timeseries-forecasting/churn-forecasting): This documentation page outlines how to effectively monitor and improve the performance of churn forecasting models using the Arize platform, detailing steps for setting baselines, monitoring key metrics, detecting drift, conducting performance analysis, and creating custom dashboards for better insights.
- [Ranking](https://docs.arize.com/arize/machine-learning/use-cases-ml/ranking): The documentation page provides an overview of ranking models in Arize AI, detailing their use cases, schema requirements, performance metrics, and examples for logging ranking predictions with relevance scores or labels to facilitate monitoring and evaluation.
- [Collaborative Filtering](https://docs.arize.com/arize/machine-learning/use-cases-ml/ranking/collaborative-filtering-recommendation-engine): The documentation page provides a comprehensive overview of setting up a collaborative filtering ranking model on the Arize platform, detailing its use in recommendation engines, the observability data tracked during predictions, and common performance metrics utilized to evaluate model effectiveness.
- [Search Ranking](https://docs.arize.com/arize/machine-learning/use-cases-ml/ranking/search-ranking): This documentation page provides a comprehensive overview of how to use the Arize platform to troubleshoot search ranking models, focusing on the evaluation of model performance using the NDCG metric, particularly in a hotel booking use case, while addressing common challenges and steps for improvement (illustrative sketch below).
- [Natural Language Processing (NLP)](https://docs.arize.com/arize/machine-learning/use-cases-ml/natural-language-processing-nlp): The documentation page provides an overview of Natural Language Processing (NLP) model functionalities in Arize, specifically focusing on text classification use cases, performance metrics, and the integration of embedding features for effective logging and evaluation.
- [Common Industry Use Cases](https://docs.arize.com/arize/machine-learning/use-cases-ml/use-cases): The "Common Industry Use Cases" page in Arize documentation provides specific examples of how to utilize Arize for troubleshooting and enhancing machine learning models across various applications.
- [Integrations: ML](https://docs.arize.com/arize/machine-learning/integrations-ml): The Arize documentation page outlines various integrations, tools, and features for monitoring and evaluating machine learning models across different platforms within the MLOps toolchain.
- [Google BigQuery](https://docs.arize.com/arize/machine-learning/integrations-ml/google-bigquery): This documentation page provides a step-by-step guide on setting up an import job using Google BigQuery in Arize, including instructions on granting access, configuring schemas, and troubleshooting import jobs.
- [GBQ Views](https://docs.arize.com/arize/machine-learning/integrations-ml/google-bigquery/gbq-views): This documentation page provides guidelines for creating efficient Google BigQuery views to optimize data delivery to Arize, including recommendations on partitioning, unique timestamps, data filtering, and handling historical data ingestion.
- [Google BigQuery FAQ](https://docs.arize.com/arize/machine-learning/integrations-ml/google-bigquery/google-bigquery-faq): The Google BigQuery FAQ page provides answers to common questions regarding data ingestion, table support, change tracking, handling updated or deleted rows, and debugging query failures in Arize AI.
- [Snowflake](https://docs.arize.com/arize/machine-learning/integrations-ml/snowflake): This documentation page provides a step-by-step guide on how to set up an import job using Snowflake to sync data with the Arize platform, covering essential configurations, permissions, and troubleshooting tips.
- [Snowflake Permissions Configuration](https://docs.arize.com/arize/machine-learning/integrations-ml/snowflake/snowflake-permissions-configuration): The Snowflake Permissions Configuration documentation outlines the necessary permissions and setup steps to establish the Snowflake connector for Arize, including creating user roles, setting warehouse defaults, and granting access to specific schemas and tables.
- [Databricks](https://docs.arize.com/arize/machine-learning/integrations-ml/databricks): This documentation page provides a step-by-step guide on setting up an import job for Databricks in the Arize platform, including generating access tokens, granting permissions, configuring model schemas, and troubleshooting potential ingestion issues.
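
NDCG, the metric the Search Ranking entry centers on, can be computed directly. A small NumPy sketch using the common 2^rel − 1 gain (other gain conventions exist):

```python
import numpy as np

def dcg(relevances: np.ndarray) -> float:
    """Discounted cumulative gain with the common 2^rel - 1 gain."""
    ranks = np.arange(1, len(relevances) + 1)
    return float(np.sum((2.0 ** relevances - 1) / np.log2(ranks + 1)))

def ndcg(relevances: np.ndarray) -> float:
    """NDCG: DCG of the ranking as served, normalized by the ideal ordering."""
    ideal = dcg(np.sort(relevances)[::-1])
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Relevance labels for one search-result list, in the order it was shown
# (e.g. hotel listings in the booking use case the page describes).
print(ndcg(np.array([3, 2, 3, 0, 1])))
```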
- [Google Cloud Storage (GCS)](https://docs.arize.com/arize/machine-learning/integrations-ml/gcs-example): The Google Cloud Storage (GCS) documentation for Arize provides a step-by-step guide on setting up an import job to ingest data from GCS into Arize, including selecting files, configuring access permissions, defining model schemas, and troubleshooting import issues.
- [Azure Blob Storage](https://docs.arize.com/arize/machine-learning/integrations-ml/azure-example): This documentation page provides step-by-step instructions for setting up an import job in Arize AI to ingest data from Azure Blob Storage, including tasks like configuring storage containers, adding the necessary Azure service principal permissions, and troubleshooting import jobs.
- [AWS S3](https://docs.arize.com/arize/machine-learning/integrations-ml/aws-s3-example): This documentation page provides detailed instructions on how to set up an import job in Arize using AWS S3, including steps to configure access permissions, define and validate model schemas, troubleshoot import jobs, and apply bucket policies and tags via Terraform.
- [Private Image Link Access Via AWS S3](https://docs.arize.com/arize/machine-learning/integrations-ml/aws-s3-example/aws-s3-example): This documentation page outlines the steps required to enable access to private images stored in an AWS S3 bucket for use in Arize AI, including ingesting data links, setting AWS permissions, and configuring encryption key access.
- [Kafka](https://docs.arize.com/arize/machine-learning/integrations-ml/connecting-to-kafka): The documentation page outlines how to use the Arize Pandas SDK to stream predictions from Kafka into the Arize platform, enabling real-time observability through micro-batching and proper offset management (illustrative sketch below).
- [Airflow Retrain](https://docs.arize.com/arize/machine-learning/integrations-ml/airflow-retrain): This documentation page provides a guide on setting up an AWS Lambda function to trigger an Airflow model retraining process via SES (Simple Email Service) notifications, detailing the necessary steps and code examples for implementation.
- [Amazon EventBridge Retrain](https://docs.arize.com/arize/machine-learning/integrations-ml/amazon-eventbridge): The Amazon EventBridge Retrain documentation provides a step-by-step guide on integrating Arize AI's model monitoring with AWS EventBridge to automate ML training workflows triggered by drifting model predictions.
- [MLOps Partners](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations): The MLOps Partners documentation page provides an overview of various integrations with Arize AI's platform to enhance model observability, explainability, and monitoring across multiple machine learning operations tools and services.
- [Algorithmia](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/algorithmia): The documentation page provides a guide on integrating Arize AI with the Algorithmia MLOps platform to enhance model observability, explainability, and monitoring while detailing steps to upload a model, implement Arize tracking, and visualize performance metrics.
- [Anyscale](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/anyscale): The documentation page provides an overview of the integration tutorial for Anyscale's LLM Endpoints, detailing the process for developers to set up the Arize client, define a chat model, test LLM responses, and log results into Arize.
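
The micro-batching and offset-management pattern the Kafka entry describes looks roughly like the following sketch (using the kafka-python client). The topic name is hypothetical, and `log_batch_to_arize` is a hypothetical stub standing in for the `Client.log` call sketched earlier; committing offsets only after a successful upload is what guarantees at-least-once delivery to Arize.

```python
import json
import pandas as pd
from kafka import KafkaConsumer  # kafka-python

def log_batch_to_arize(df: pd.DataFrame) -> None:
    """Hypothetical wrapper around the arize.pandas Client.log call sketched
    earlier; defined as a stub here only so the example is self-contained."""
    ...

consumer = KafkaConsumer(
    "predictions",                      # hypothetical topic name
    bootstrap_servers="localhost:9092",
    enable_auto_commit=False,           # commit only after a successful upload
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

BATCH_SIZE = 500
buffer = []
for message in consumer:
    buffer.append(message.value)
    if len(buffer) >= BATCH_SIZE:
        log_batch_to_arize(pd.DataFrame(buffer))  # micro-batch upload
        consumer.commit()  # offsets advance only once the batch is logged
        buffer.clear()
```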
- [Azure & Databricks](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/azure-and-databricks-python): This documentation page provides a tutorial on integrating Arize AI within Databricks workflows and deploying the trained model to an Azure Workspace.
- [BentoML](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/bentoml): The documentation page details the integration between BentoML and Arize AI, outlining how to build, deploy, and monitor machine learning models effectively within a production environment using their respective tools.
- [CML (DVC)](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/ci-cd-cml): This documentation page provides a tutorial on integrating Arize into a Continuous Integration and Continuous Deployment (CI/CD) workflow for machine learning models, detailing the steps for running validation data and logging metrics in Arize upon model check-ins.
- [Deepnote](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/deepnote): The documentation page provides an overview of integrating Arize with Deepnote, highlighting its capabilities for model observability, explainability, and monitoring within a collaborative cloud-based data science notebook environment.
- [Feast](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/feast): The documentation page details the integration process between Arize and Feast, outlining four simple steps for logging features, training data, and troubleshooting to enhance machine learning workflows.
- [Google Cloud ML](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/google-cloud-ml-python): The Google Cloud ML documentation provides a comprehensive guide on integrating and using machine learning services with Arize AI, covering aspects like data upload, monitoring, performance evaluation, and deployment alongside various tools and SDKs.
- [Hugging Face](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/hugging-face): The documentation page explains how to integrate Hugging Face's Inference API with Arize for visualizing model performance and troubleshooting, providing examples for fine-tuning sentiment classification and named entity recognition models, along with steps for setting up the integration and logging model outputs.
- [LangChain 🦜🔗](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/langchain): The LangChain documentation provides an overview of a framework designed for developing applications powered by Large Language Models (LLMs), emphasizing agentic behavior and data awareness while integrating with Arize for monitoring and optimizing LLM performance.
- [MLflow](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/mlflow): The documentation page outlines the integration of MLflow with Arize AI for managing the machine learning lifecycle, enabling users to train, manage, and register models while monitoring performance, data quality, and troubleshooting through lightweight integrations at various stages.
- [Neptune](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/neptune): This documentation page outlines the integration between Arize and Neptune, two MLOps tools, enabling enhanced tracking of experiment metadata, visualization of model performance, and management of model validation processes.
- [Paperspace](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/paperspace): This documentation page provides comprehensive guidance on using Arize AI's various features, including LLM tracing, evaluation, datasets, prompt engineering, machine learning monitoring, and integrations, aimed at enhancing the performance and analysis of AI models and applications.
- [PySpark](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/pyspark): This documentation page provides guidance on how to leverage PySpark to send events to Arize, including an example in a Colab notebook.
- [Ray Serve (Anyscale)](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/anyscale-ray-serve): The documentation provides a quickstart guide for integrating Arize AI with Ray Serve to facilitate scalable model serving and production logging by outlining three essential steps to implement the integration.
- [SageMaker](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/sagemaker-python): The SageMaker documentation provides tutorials for integrating various model types with SageMaker Notebook Instances, including code examples for uploading and executing Arize's API to facilitate model evaluation and tracing.
- [Batch](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/sagemaker-python/batch): This documentation page guides users on implementing a SageMaker Batch Transformer architecture that utilizes a Lambda function to process model inputs and predictions while managing actuals in the system.
- [RealTime](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/sagemaker-python/realtime): This documentation page provides an overview of a SageMaker inference pipeline that utilizes a Lambda function to process real-time HTTP calls, returning data by calling the SageMaker endpoint, along with instructions for uploading associated files to a Notebook Instance.
- [Notebook Instance with Greater than 20GB of Data](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/sagemaker-python/notebook-instance-with-greater-than-20gb-of-data): The documentation provides guidance on using the Arize SDK to handle data transfers exceeding 20GB on a SageMaker Notebook instance by modifying the SDK API call to serialize data efficiently to the local file system before uploading.
- [Spell](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/spell): The documentation page provides a comprehensive guide on integrating Arize AI with the Spell ML platform for logging, training, and deploying machine learning models, including detailed steps and necessary commands for setup and usage.
- [UbiOps](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/ubiops): The documentation provides an overview of how to integrate the Arize platform with UbiOps for deploying and serving machine learning models, enhancing them with tracking, observability, explainability, and monitoring capabilities.
- [Weights & Biases](https://docs.arize.com/arize/machine-learning/integrations-ml/integrations/weights-and-biases): The documentation explains how to integrate Weights & Biases with Arize AI to enhance model performance tracking, logging, and visualization both before and during production stages.
- [API Reference: ML](https://docs.arize.com/arize/machine-learning/api-reference-ml): The Arize API Reference documentation provides comprehensive guidance on utilizing various SDKs, integrating tracing and evaluation for LLMs, managing datasets and experiments, implementing monitoring, and leveraging machine learning functionalities with practical examples and advanced features.
- [Python SDK](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk): The Arize Python SDK documentation provides guidance on monitoring machine learning models by allowing users to log predictions, features, and evaluation metrics with minimal code, while also offering advanced functionality for LLM tracing, embedding extraction, and explainability.
- [Pandas Batch Logging](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.pandas): The Pandas Batch Logging documentation provides a guide for using the Arize Python library to efficiently send batches of machine learning prediction data from a Pandas DataFrame to Arize for monitoring and analysis.
- [Client](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.pandas/client): The documentation page provides a guide on how to import and initialize the Arize Client from the Pandas logging library to log predictions and actuals from a Pandas DataFrame.
- [log](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.pandas/log): The documentation page describes the `log` method in the Arize Python SDK, which allows users to log model inferences from a Pandas DataFrame to the Arize platform via a POST request, detailing its required parameters, optional settings, and the process for verifying successful data delivery.
- [Schema](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.pandas/schema): The Schema documentation for Arize provides an overview of how to organize and map column names containing model data in a Pandas dataframe to the Arize platform, including required and optional parameters for setting up schemas for various model types.
- [TypedColumns](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.pandas/typedcolumns): The TypedColumns class in Arize's Python SDK allows for the specification of data types for feature or tag columns when initializing a Schema, facilitating data validation and type casting during ingestion.
- [EmbeddingColumnNames](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.pandas/embeddingcolumnnames): The `EmbeddingColumnNames` class in the Arize AI Python SDK maps up to three columns (vector, data, and link_to_data) to a single embedding feature, allowing for the integration of vector data and associated metadata such as text or image links (illustrative sketch below).
- [ObjectDetectionColumnNames](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.pandas/objectdetectioncolumnnames): The `ObjectDetectionColumnNames` class in the Arize AI documentation defines the structure for mapping object detection predictions or actual columns, specifying the required fields for bounding box coordinates, category labels, and optional confidence scores.
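
A minimal sketch of declaring an embedding feature on a Schema, assuming the dict-of-`EmbeddingColumnNames` form described above; all column names are placeholders to adapt to your DataFrame.

```python
from arize.pandas.logger import Schema
from arize.utils.types import EmbeddingColumnNames

# EmbeddingColumnNames groups up to three columns (vector, raw data, link to
# data) into one named embedding feature; here an image embedding plus the
# URL of the image it was computed from.
schema = Schema(
    prediction_id_column_name="prediction_id",
    prediction_label_column_name="prediction_label",
    embedding_feature_column_names={
        "image_embedding": EmbeddingColumnNames(
            vector_column_name="image_vector",
            link_to_data_column_name="image_url",
        ),
    },
)
```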
- [PromptTemplateColumnNames](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.pandas/prompttemplatecolumnnames): The PromptTemplateColumnNames documentation details how to define columns for prompt templates and their versions within the Arize AI framework, specifically requiring the prompt template column to contain strings and the version column to be convertible to strings.
- [LLMConfigColumnNames](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.pandas/llmconfigcolumnnames): The LLMConfigColumnNames documentation page outlines how to define and utilize two columns—model_column_name and params_column_name—specifically for tracking LLM model names and their associated invocation parameters in the Arize platform.
- [LLMRunMetadataColumnNames](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.pandas/llmrunmetadatacolumnnames): The LLMRunMetadataColumnNames documentation page outlines the necessary column mappings for ingesting metadata about LLM inferences, specifying the expected integer or float types for total token count, prompt token count, response token count, and response latency in milliseconds (illustrative sketch below).
- [NLP_Metrics](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.pandas/llm_evaluation): The NLP_Metrics documentation page provides guidelines for installing necessary dependencies and using various evaluation metrics such as BLEU, ROUGE, and METEOR to assess the quality of natural language processing outputs in Python.
- [AutoEmbeddings](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.pandas/autoembeddings): The AutoEmbeddings documentation page for Arize AI outlines how to use the EmbeddingGenerator class to generate embedding vectors for various use cases, including image classification and sequence classification, and provides installation instructions and code examples.
- [utils.types.ModelTypes](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.pandas/utils.types.modeltypes): The documentation page for `utils.types.ModelTypes` in Arize outlines the various model types recognized by the platform, such as regression, binary classification, ranking, multi-class, and different categories for NLP and computer vision tasks, providing examples of how to specify these types when logging model predictions.
- [utils.types.Metrics](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.pandas/utils.types.metrics): The documentation page for `utils.types.Metrics` in Arize provides enumerations of metrics for validating schema columns in logging calls, categorized into regression, classification, and ranking metrics, along with code examples for implementation.
- [utils.types.Environments](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.pandas/utils.types.environments): The `utils.types.Environments` documentation provides an enumeration for specifying different model environments in the Arize platform, including Training, Validation, Production, and Corpus, along with code examples for logging in each environment.
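
A sketch of the four-field mapping the LLMRunMetadataColumnNames entry describes. The keyword names below follow the fields the page lists but are assumptions to verify against the reference for your SDK version; the column names on the right are placeholders.

```python
from arize.utils.types import LLMRunMetadataColumnNames

# Maps DataFrame columns holding token counts (ints) and latency in
# milliseconds (int or float) onto the metadata fields the page describes.
llm_run_metadata = LLMRunMetadataColumnNames(
    total_token_count_column_name="total_tokens",
    prompt_token_count_column_name="prompt_tokens",
    response_token_count_column_name="completion_tokens",
    response_latency_ms_column_name="latency_ms",
)
```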
- [Single Record Logging](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.log): The Single Record Logging documentation provides detailed instructions on how to individually log prediction data, including features, tags, and actuals, using the Arize client in Python, along with example code snippets for different logging scenarios (illustrative sketch below).
- [Client](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.log/client): The documentation page provides an overview of the Arize AI Python SDK, detailing how to initialize the Arize Client for logging predictions and actuals using the required API key and space ID, along with optional parameters for customization.
- [log](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.log/log): The Arize AI documentation page for the logging method describes how to record model data for individual predictions, providing detailed parameter options such as model identifiers, environment settings, predicted and actual labels, features, and optional metadata.
- [TypedValue](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.log/typedvalue): The TypedValue class in Arize's Python SDK is utilized for logging features or tags with explicit data types, allowing users to define values and their corresponding types when creating features or tags dictionaries for model logging.
- [Ranking](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.log/ranking): The documentation page provides an overview of the `RankingPredictionLabel` and `RankingActualLabel` classes used in the Arize Python SDK to define prediction and ground truth arguments for ranking models, including required and optional fields for logging model predictions.
- [Multi-Class](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.log/multi-class): The documentation page outlines how to use the MultiClassPredictionLabel and MultiClassActualLabel classes in the Arize Python SDK for logging multi-class model predictions and actual outcomes, including the structure of input parameters and a code example for implementation.
- [Object Detection](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.log/objectdetection): The documentation page explains the `ObjectDetectionLabel` class in the Arize Python SDK, which is used to define prediction and actual arguments for object detection models, including bounding box coordinates, categories, and confidence scores.
- [Embedding](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.log/embedding): The "Embedding" documentation page details the structure and usage of the Embedding class in the Arize Python SDK, enabling the mapping of vector data and associated metadata for Natural Language Processing and Computer Vision applications.
- [LLMRunMetadata](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.log/llmrunmetadata): The LLMRunMetadata class in the Arize Python SDK is designed to ingest metadata about LLM inferences, capturing key metrics such as total token count, prompt and response token counts, and response latency.
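
A minimal single-record sketch, in contrast to the DataFrame batch flow above; credentials and identifiers are placeholders, and the constructor arguments should be checked against the Client reference for your SDK version.

```python
from arize.api import Client
from arize.utils.types import ModelTypes, Environments

client = Client(space_id="YOUR_SPACE_ID", api_key="YOUR_API_KEY")

# One prediction logged on its own rather than in a DataFrame batch.
response = client.log(
    model_id="fraud-model",
    model_version="v1",
    model_type=ModelTypes.BINARY_CLASSIFICATION,
    environment=Environments.PRODUCTION,
    prediction_id="txn-001",
    prediction_label="fraud",
    features={"merchant_type": "online", "amount": 129.99},
)
result = response.result()  # single-record logging resolves asynchronously
```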
- [utils.types.ModelTypes](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.log/utils.types.modeltypes): The documentation page defines various model types used in the Arize platform, illustrating how to specify a model type when logging predictions and providing examples for different categories like regression, binary classification, and object detection.
- [utils.types.Metrics](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.log/utils.types.metrics): The `utils.types.Metrics` documentation page provides an enumeration of various metrics used for validating schema columns in log calls within the Arize AI Python SDK, including regression, classification, and ranking metrics.
- [utils.types.Environments](https://docs.arize.com/arize/machine-learning/api-reference-ml/python-sdk/arize.log/utils.types.environments): The `utils.types.Environments` documentation page describes the different environment enums available in the Arize platform for logging ML model activities, including Training, Validation, Production, and Corpus environments.
- [Java SDK](https://docs.arize.com/arize/machine-learning/api-reference-ml/java-sdk): The Java SDK documentation for Arize AI provides guidelines to instrument production services for monitoring and understanding machine learning models and their performance over time with a few lines of code, requiring Java 8 LTS or higher.
- [Constructor](https://docs.arize.com/arize/machine-learning/api-reference-ml/java-sdk/constructor): The Arize Java SDK constructor documentation provides instructions on initializing the Arize client using an API key and space key to enable logging of prediction and actual records.
- [log](https://docs.arize.com/arize/machine-learning/api-reference-ml/java-sdk/log): The documentation page explains how to use the Arize client to log predictions along with their features, tags, and SHAP values, enabling monitoring, analysis, and explainability of machine learning models.
- [bulkLog](https://docs.arize.com/arize/machine-learning/api-reference-ml/java-sdk/bulklog): The `bulkLog` method in the Arize Java SDK allows users to send multiple predicted labels, actual observations, and SHAP values for a given model in bulk, enabling effective monitoring and analysis of model performance.
- [logValidationRecords](https://docs.arize.com/arize/machine-learning/api-reference-ml/java-sdk/logvalidationrecords): The `logValidationRecords` function allows users to log predicted and actual labels along with their features for specific model batches in Arize, providing essential data for visualizing and analyzing model performance.
- [logTrainingRecords](https://docs.arize.com/arize/machine-learning/api-reference-ml/java-sdk/logtrainingrecords): The `logTrainingRecords` documentation outlines how to use the Arize AI client to log training inference data, including predicted and actual labels, features, and metadata, for analysis and evaluation within the Arize platform.
- [R SDK](https://docs.arize.com/arize/machine-learning/api-reference-ml/r-sdk): The Arize R SDK documentation provides guidance on installing the R package and using it to monitor machine learning predictions for model performance and explainability with minimal code.
- [Client$new()](https://docs.arize.com/arize/machine-learning/api-reference-ml/r-sdk/clientusdnew): The `Client$new()` function in the Arize R SDK initializes a client object for logging predictions and actual records, requiring the organization's API key and organization key, and must be called once per session.
- [Client$log()](https://docs.arize.com/arize/machine-learning/api-reference-ml/r-sdk/clientusdlog): The Client$log() documentation describes how to log batches of inference data in R using the Arize AI SDK, detailing the necessary parameters, schema creation, and providing examples for proper implementation in both training and production environments.
- [Rest API](https://docs.arize.com/arize/machine-learning/api-reference-ml/rest-api): The Arize ML Rest API documentation provides details on how to authenticate and send prediction and actual records to Arize's log API, along with request examples and supported label types.
- [On-Premise and VPC](https://docs.arize.com/arize/deployment/on-premise): The "On-Premise and VPC" documentation page details Arize's on-premise deployment options, highlighting its scalable infrastructure that can be installed within a company's cloud environment, ensuring data security and integration with internal authentication systems.
- [On-Premise Requirements](https://docs.arize.com/arize/deployment/on-premise/requirements): The "On-Premise Requirements" documentation page outlines the necessary cluster and server configurations, cloud storage options, permissions, and firewall settings required for installing Arize AI in an on-premise environment.
- [On-Premise Installation](https://docs.arize.com/arize/deployment/on-premise/installation): The "On-Premise Installation" documentation provides detailed instructions on setting up the Arize platform on private infrastructure using a provided TAR file, including necessary pre-deployment configurations, deployment scripts, and post-deployment verification steps.
- [On-Premise SDK Usage](https://docs.arize.com/arize/deployment/on-premise/on-premise-sdk-usage): The "On-Premise SDK Usage" documentation page provides guidelines for deploying the Arize SDK, Arize OTEL SDK, and Arize Flight Server within a private VPC, including how to handle private endpoints and self-signed certificates for secure communication.
- [Arize PrivateConnect](https://docs.arize.com/arize/deployment/arize-privateconnect): Arize PrivateConnect is a secure connectivity solution for SaaS deployments that enables private and seamless integration with cloud infrastructures while ensuring data privacy and reducing operational complexity, making it the preferred choice for many customers.
- [SSO & RBAC](https://docs.arize.com/arize/admin-and-settings/1.-setting-up-your-account): The "SSO & RBAC" documentation page provides guidance on configuring Single Sign-On (SSO) via SAML2 and implementing Role-Based Access Control (RBAC) for managing user access across organizations and spaces in the Arize platform.
- [Whitelisting](https://docs.arize.com/arize/admin-and-settings/whitelisting): The Whitelisting documentation provides a list of IP addresses to be whitelisted for various Arize services to ensure uninterrupted access, including instructions for confirming the latest addresses needed for UI access, SDK ingestion, and data imports/exports.
- [Compliance](https://docs.arize.com/arize/admin-and-settings/compliance): The Compliance documentation page for Arize outlines the company's certifications, including SOC 2 Type II, PCI DSS 4.0, HIPAA compliance, and CSA Star Level 1, while also directing users to the Arize Trust Center for more information on security and privacy.
- [GraphQL API](https://docs.arize.com/arize/resources/graphql-api): The Arize GraphQL API allows users to programmatically access and modify entities within the Arize platform using a flexible and precise query language, enabling integration with internal systems and the automation of processes.
- [How To Use GraphQL](https://docs.arize.com/arize/resources/graphql-api/how-to-use-graphql): This documentation page provides an overview of key concepts and terminology for using the Arize GraphQL API, highlighting its advantages over REST and explaining the structure of schemas, fields, arguments, implementations, connections, edges, and nodes.
- [Forming Calls](https://docs.arize.com/arize/resources/graphql-api/how-to-use-graphql/forming-calls): This documentation page provides guidance on authenticating and forming queries and mutations using the Arize GraphQL API, including information on endpoints, request structures, and example queries (see the sketch after this list).
- [Using Global Node ID's](https://docs.arize.com/arize/resources/graphql-api/how-to-use-graphql/using-global-node-ids): The documentation page explains how to use global node IDs in Arize's GraphQL API to efficiently query and mutate account objects, outlining steps for locating a node's ID, identifying the object type, and performing a direct node lookup.
- [Querying Nested Data](https://docs.arize.com/arize/resources/graphql-api/how-to-use-graphql/querying-nested-data): This documentation page provides a tutorial on how to query nested data and collections in Arize using GraphQL, illustrating how to access space, model connections, and triggered monitors efficiently.
- [Mutations](https://docs.arize.com/arize/resources/graphql-api/how-to-use-graphql/mutations): This documentation page explains how to use GraphQL mutations to create, update, and delete objects in the Arize API, demonstrating the process through an example of updating a monitor's threshold.
- [Getting Started With Programmatic Access](https://docs.arize.com/arize/resources/graphql-api/getting-started-with-programmatic-access): The "Getting Started With Programmatic Access" documentation provides instructions on how to enable developer permissions, access the API Explorer, and retrieve your API key for using the Arize GraphQL API.
- [Notebook Examples](https://docs.arize.com/arize/resources/graphql-api/example-use-cases): The "Notebook Examples" documentation page provides practical examples of using the Arize GraphQL API for observability, including how to query data and create or patch monitors for drift, performance, and data quality.
- [Online Tasks API](https://docs.arize.com/arize/resources/graphql-api/online-tasks-api): The Online Tasks API documentation provides instructions for programmatically creating, updating, and running evaluation tasks for large language models, including necessary mutations, parameters, and expected responses.
- [Annotations API](https://docs.arize.com/arize/resources/graphql-api/annotations-api): The Annotations API documentation provides a guide on how to programmatically export annotated data to the Arize platform using GraphQL mutations, allowing non-technical users to label LLM outputs for further analysis.
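Forming a call reduces to POSTing a JSON body with a query string and variables to the GraphQL endpoint. A minimal Python sketch of the direct node lookup described above; the endpoint and the API-key header name shown are assumptions to verify against the Forming Calls page:

```python
import requests

GRAPHQL_URL = "https://app.arize.com/graphql"  # confirm against the docs

# A node lookup by global ID, as in "Using Global Node ID's".
query = """
query GetNode($id: ID!) {
  node(id: $id) {
    __typename
    id
  }
}
"""

resp = requests.post(
    GRAPHQL_URL,
    json={"query": query, "variables": {"id": "YOUR_NODE_ID"}},
    headers={"x-api-key": "YOUR_DEVELOPER_API_KEY"},  # assumed header name
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["data"]["node"])
```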
- [Monitors API](https://docs.arize.com/arize/resources/graphql-api/monitors-api): The Monitors API documentation provides detailed guidance on querying, creating, updating, deleting, and muting monitors for model performance tracking on the Arize platform, enabling users to manage monitoring workflows programmatically.
- [Models API](https://docs.arize.com/arize/resources/graphql-api/models-api): The Models API documentation page provides instructions on querying models, managing model versions, accessing model and tracing schemas, configuring dimensions, setting model baselines, and deleting data using GraphQL queries and mutations.
- [Metrics API](https://docs.arize.com/arize/resources/graphql-api/metrics-api): The Metrics API documentation provides detailed information on querying model metrics, including average metric values, metrics over time, drift analysis, and global feature importance, using GraphQL-based structures and queries.
- [File Importer API](https://docs.arize.com/arize/resources/graphql-api/file-importer-api): The File Importer API documentation provides instructions on how to create, query, and delete import jobs between your cloud storage and the Arize platform, including detailed examples and usage tips for effective implementation.
- [Table Importer API](https://docs.arize.com/arize/resources/graphql-api/table-importer-api): The Table Importer API documentation provides guidance on how to create, manage, and update table import jobs for integrating data from Snowflake, Databricks, or BigQuery into the Arize platform programmatically using GraphQL.
- [Custom Metrics API](https://docs.arize.com/arize/resources/graphql-api/custom-metrics-api): The Custom Metrics API documentation provides guidance on programmatically creating, updating, querying, and monitoring custom metrics tailored for specific business use cases using GraphQL.
- [Dashboards API](https://docs.arize.com/arize/resources/graphql-api/programmatically-create-dashboards): The Dashboards API allows users to programmatically create and customize dashboards in Arize by adding widgets such as distribution, time series, and statistic widgets, facilitating the visualization of model performance and data quality metrics.
- [Admin API](https://docs.arize.com/arize/resources/graphql-api/admin-api): This documentation page provides instructions for managing user spaces within an organization through the Admin API, including creating spaces, querying users, adding users to spaces, and removing users from spaces.
- [Resource Limitations](https://docs.arize.com/arize/resources/graphql-api/resource-limitations): The "Resource Limitations" documentation page for Arize outlines the restrictions on the GraphQL API, including pagination enforcement, rate limits (100 queries per minute and 300 mutations per minute), and complexity limits for queries to prevent excessive server load; the pagination loop sketched after this list works within these limits.
- [Export Data API](https://docs.arize.com/arize/resources/api-reference): The Export Data API documentation explains how to export data from Arize to a Jupyter notebook or Phoenix using either a simple export button or programmatically with the Arize Python export client, detailing the required code snippets and parameters.
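Because the API enforces pagination and the rate limits above, programmatic consumers walk connections cursor by cursor. A sketch of that loop in Python; the query shape (monitors under a model node, with edges and pageInfo) follows generic GraphQL connection conventions rather than the verified Arize schema, so treat the field names as assumptions:

```python
import requests

GRAPHQL_URL = "https://app.arize.com/graphql"
HEADERS = {"x-api-key": "YOUR_DEVELOPER_API_KEY"}  # assumed header name

# Illustrative connection query -- edges/nodes/pageInfo are standard
# GraphQL pagination fields; confirm the exact Arize schema in the docs.
QUERY = """
query Monitors($modelId: ID!, $after: String) {
  node(id: $modelId) {
    ... on Model {
      monitors(first: 50, after: $after) {
        pageInfo { hasNextPage endCursor }
        edges { node { id name } }
      }
    }
  }
}
"""

def fetch_all_monitors(model_id: str) -> list:
    """Collect every monitor page by page, staying under rate limits."""
    monitors, cursor = [], None
    while True:
        resp = requests.post(
            GRAPHQL_URL,
            json={"query": QUERY,
                  "variables": {"modelId": model_id, "after": cursor}},
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()
        page = resp.json()["data"]["node"]["monitors"]
        monitors.extend(edge["node"] for edge in page["edges"])
        if not page["pageInfo"]["hasNextPage"]:
            return monitors
        cursor = page["pageInfo"]["endCursor"]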
- [Python reference](https://docs.arize.com/arize/resources/api-reference/python-reference): The documentation page provides an overview of the `ArizeExportClient` class, detailing how to initialize it with an API key and use its `export_model_to_df` method to export model data from Arize to a Pandas DataFrame, including required parameters and options for filtering the export (see the sketch after this list).
- [Changelog](https://docs.arize.com/arize/resources/changelog): The Changelog page for Arize Docs outlines updates and changes made across the Arize API, SDK, and product features to keep users informed of new functionalities and improvements.
- [Python SDK Changelog](https://docs.arize.com/arize/resources/changelog/python-sdk-changelog): The Python SDK Changelog page provides a record of updates and changes made to the Arize Python SDK, highlighting new features, bug fixes, and enhancements.
- [GraphQL API Changelog](https://docs.arize.com/arize/resources/changelog/api-changelog): The GraphQL API Changelog provides a detailed record of updates and changes to the Arize GraphQL API, including breaking changes, new feature additions, and enhancements aimed at improving functionality and user experience.
- [Key features](https://docs.arize.com/arize#key-features): Arize AI provides a comprehensive user guide for monitoring and improving AI applications, including features for tracing, evaluating models, and managing datasets across various AI and machine learning frameworks.
- [Get started with our guides](https://docs.arize.com/arize#get-started-with-our-guides): Arize AI documentation provides comprehensive guidance for developers on tracing, evaluating, and monitoring AI applications using large language models and traditional machine learning, covering setup, integrations, and best practices.
- [See how it works](https://docs.arize.com/arize#see-how-it-works): Arize AI provides a comprehensive platform for AI engineers to enhance the performance and observability of large language models, traditional machine learning, and computer vision applications through features like tracing, evaluations, prompt engineering, monitoring, and data quality insights.
- [Learn More](https://docs.arize.com/arize#learn-more): Arize AI provides a comprehensive user guide for developers to effectively build, evaluate, and monitor AI applications utilizing large language models, traditional machine learning, and computer vision, covering topics from tracing and evaluations to datasets and performance insights.
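The Python reference above documents the two pieces the sketch below combines: constructing `ArizeExportClient` with an API key and calling `export_model_to_df`. A minimal sketch; the space and model IDs and the time window are placeholders, and the parameter list may vary by SDK version:

```python
from datetime import datetime, timedelta
from arize.exporter import ArizeExportClient
from arize.utils.types import Environments

# Placeholder API key; space and model IDs are illustrative.
client = ArizeExportClient(api_key="YOUR_API_KEY")

# Export the last week of production inferences to a Pandas DataFrame.
df = client.export_model_to_df(
    space_id="YOUR_SPACE_ID",
    model_id="fraud-detection",
    environment=Environments.PRODUCTION,
    start_time=datetime.now() - timedelta(days=7),
    end_time=datetime.now(),
)
print(df.head())
```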
- [ML User Guide](https://docs.arize.com/arize/machine-learning/what-is-ml-observability): The Arize ML User Guide provides comprehensive documentation on machine learning observability, including concepts, data upload processes, model performance monitoring, explainability, evaluation metrics, and the integration of various tools within the ML lifecycle.
- [LLM Quickstart: Trace and evaluate your large language model application](https://docs.arize.com/arize/llm-tracing/quickstart-llm): The "Quickstart: LLM Tracing" documentation provides a step-by-step guide for integrating LLM tracing into applications using Arize by installing the necessary packages, obtaining API keys, adding tracing code, and executing LLM queries to collect and analyze traces; the register-and-instrument pattern is sketched after this list.
- [ML Quickstart: Log inferences to monitor and debug your machine learning models](https://docs.arize.com/arize/machine-learning/quickstart): The "Quickstart: ML" documentation page provides a step-by-step guide for setting up and using Arize AI for machine learning model observability, including installation, data logging, performance visualization, monitoring setup, and creating custom dashboards.
- [Tracing](https://docs.arize.com/arize/llm-tracing/tracing): The Arize documentation on tracing explains how to monitor and analyze the performance of LLM applications using OpenTelemetry and OpenInference, offering easy setup through auto-instrumentation and the option for customized tracking of spans and metadata.
- [Evaluations](https://docs.arize.com/arize/llm-evaluation-and-annotations/catching-hallucinations): The "How To: Evaluations" documentation page provides an overview of Arize's evaluation framework for LLM applications, detailing how to assess performance across metrics like correctness and latency while integrating with popular frameworks.
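The LLM Quickstart above follows a register-then-instrument pattern: the `arize-otel` package's `register` helper sets up an OpenTelemetry tracer provider pointed at Arize, and an OpenInference instrumentor turns LLM calls into spans. A minimal sketch, assuming the `arize-otel` and `openinference-instrumentation-openai` packages are installed and placeholder credentials:

```python
from arize.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

# Placeholder credentials; project_name groups traces in the Arize UI.
tracer_provider = register(
    space_id="YOUR_SPACE_ID",
    api_key="YOUR_API_KEY",
    project_name="quickstart-demo",
)

# Auto-instrument the OpenAI client so each completion becomes a span.
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

# From here, any OpenAI call is traced automatically, e.g.:
# from openai import OpenAI
# OpenAI().chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": "hello"}],
# )
```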
- [Prompt playground](https://docs.arize.com/arize/prompt-engineering/prompt-playground): The Quickstart: Prompt Engineering guide provides an overview of using the Prompt Playground for refining prompt templates and LLM models to reduce inaccuracies in outputs, allowing users to save optimized prompts as experiments for future evaluation and application.
- [Dataset & experiments](https://docs.arize.com/arize/llm-datasets-and-experiments/datasets-and-experiments): The documentation page provides an overview of Datasets and Experiments in Arize AI, detailing how to create datasets, define tasks, and set up evaluators to systematically test and evaluate changes in LLM applications.
- [Guardrails](https://docs.arize.com/arize/llm-monitoring-and-guardrails/guardrails): The Arize documentation on Guardrails outlines safety mechanisms for Large Language Models (LLMs) designed to ensure compliance and manage user interactions by applying corrective actions when inappropriate content is identified in real time.
- [Production monitoring](https://docs.arize.com/arize/llm-monitoring-and-guardrails/production-monitoring): The LLM Monitoring overview in Arize Docs provides guidance on creating dashboards and monitors to track key metrics and receive notifications for issues related to language models.
- [Performance tracing](https://docs.arize.com/arize/machine-learning/how-to-ml/performance-tracing): The Performance Tracing documentation provides a guide for troubleshooting performance monitors in machine learning models by analyzing performance metrics, comparing datasets, and using visualizations such as performance heat maps, confusion matrices, and table views to identify and resolve performance issues.
- [Drift detection](https://docs.arize.com/arize/machine-learning/how-to-ml/drift-tracing): The Drift Tracing documentation provides guidance on how to diagnose and troubleshoot model drift by analyzing prediction and feature drift, setting baselines for comparison, and using proxy metrics for performance evaluation, ultimately aiding in maintaining model accuracy and reliability.
- [Model explainability](https://docs.arize.com/arize/machine-learning/how-to-ml/explainability): The Explainability documentation page outlines tools and methodologies for analyzing feature importance in machine learning models, using SHAP values to show how individual features influence predictions and to improve model trustworthiness through transparency (see the SHAP sketch after this list).
- [Bias detection](https://docs.arize.com/arize/machine-learning/how-to-ml/11.-bias-tracing-fairness): The Bias Tracing documentation in Arize AI provides tools to analyze and mitigate algorithmic bias in models by comparing fairness metrics across sensitive attributes like race and sex, utilizing the four-fifths rule to evaluate potential biases.
- [Data troubleshooting](https://docs.arize.com/arize/machine-learning/how-to-ml/data-quality-troubleshooting): The "Troubleshoot Data Quality" documentation page provides guidance on identifying and resolving data quality issues that negatively impact machine learning model performance, detailing common root causes, data quality metrics, and their implications for model processing pipelines.
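The explainability workflow above is built on SHAP values, which are computed on your side before logging. A minimal sketch using the shap and scikit-learn packages with an illustrative model and data; the resulting per-feature attributions are the values you would then log alongside each prediction:

```python
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# Illustrative tabular data and model.
X = pd.DataFrame({"amount": [120.0, 42.5, 300.0, 8.0],
                  "age": [34, 51, 23, 67]})
y = [1, 0, 1, 0]
model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# One set of per-feature attributions per prediction; these are the
# values the Arize SDK accepts for explainability analysis.
print(shap_values)
```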
- [Monitors & alerts](https://docs.arize.com/arize/machine-learning/how-to-ml/monitors/setup): The documentation page outlines the various types of monitors available in Arize, detailing performance, drift, data quality, and custom metrics, along with their corresponding metrics for different model types and use cases.
- [Dynamic dashboards](https://docs.arize.com/arize/machine-learning/how-to-ml/dashboards): The Dashboards documentation page provides guidance on using Arize's dashboard feature to visualize, share, and analyze model health metrics, including instructions for creating dashboards via the UI or GraphQL and exporting data from widgets.
- [Retraining workflows](https://docs.arize.com/arize/machine-learning/how-to-ml/automate-model-retraining): The "Automate Model Retraining" documentation page outlines the process for setting up automated model retraining in Arize, including when to automate retraining, how the process works with configured monitors, and supported integration options.
- [Embeddings analyzer](https://docs.arize.com/arize/computer-vision-cv/how-to-cv/embedding-and-cluster-analyzer): The Embedding & Cluster Analyzer documentation provides guidance on using UMAP for dimensionality reduction, visualizing embeddings, and employing clustering techniques to identify patterns and improve model performance through detailed analysis of clustered data points.
- [Similarity search](https://docs.arize.com/arize/computer-vision-cv/how-to-cv/similarity-search): The Similarity Search feature in Arize allows users to identify similar items based on reference embeddings using cosine similarity, supporting both image and text embeddings while providing options to set thresholds and select multiple reference embeddings (both embedding techniques are sketched after this list).
- [Object detection tracing](https://docs.arize.com/arize/computer-vision-cv/use-cases-cv/object-detection): The documentation page outlines the process for logging and declaring schemas for object detection models in Arize, including examples for managing prediction and actual values along with associated embedding features.
- [Image classification tracing](https://docs.arize.com/arize/computer-vision-cv/use-cases-cv/computer-vision-cv): The Image Classification documentation on Arize AI provides an overview of how to input images for model predictions, log relevant metadata and performance metrics, and utilize embedding features for enhanced image classification capabilities.
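The two embedding entries above rest on standard techniques: UMAP for projecting high-dimensional embeddings down to a plottable space, and cosine similarity for ranking items against a reference embedding. A minimal sketch with numpy and umap-learn, using random placeholder embeddings:

```python
import numpy as np
import umap  # pip install umap-learn

# Placeholder embeddings: 1,000 vectors of dimension 768.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 768))

# UMAP reduces the vectors to 2D for cluster visualization.
coords_2d = umap.UMAP(n_components=2, random_state=0).fit_transform(embeddings)

# Cosine similarity of every embedding against a chosen reference,
# as in similarity search over a reference embedding.
reference = embeddings[0]
norms = np.linalg.norm(embeddings, axis=1) * np.linalg.norm(reference)
similarity = embeddings @ reference / norms
top_matches = np.argsort(similarity)[::-1][:5]  # indices of closest items
print(top_matches)
```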