
This document assumes you have (or will create) a Kubernetes cluster in GCP, AWS, Azure, or another supported environment: a supported control plane, worker nodes sized for your profile, two object-storage buckets for Gazette and ArizeDB, and a block storage class for volumes (non-NFS). For GKE-specific steps (node labels, buckets, IAM), see GKE cluster and resources and the GCP installation hub.
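For the two object-storage buckets, a hedged sketch of the corresponding gcloud commands is below. The project ID, location, and bucket names are illustrative placeholders, and the commands are printed rather than executed so you can review them before running anything against your project.

```shell
# Sketch: print gcloud commands for the two required buckets (Gazette and
# ArizeDB). Project, location, and bucket names are illustrative placeholders;
# review and run the printed commands yourself.
project="my-gcp-project"
location="us-central1"
gazette_bucket="gs://${project}-arize-gazette"
arizedb_bucket="gs://${project}-arize-arizedb"
for bucket in "$gazette_bucket" "$arizedb_bucket"; do
  echo gcloud storage buckets create "$bucket" --project="$project" --location="$location"
done
```

The same pattern applies to the AWS and Azure CLIs; see your release's bundled docs for provider-specific IAM and bucket settings.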
Terraform modules and offline guides: Each Arize AX distribution (release tarball) includes terraform/ with modules and docs/ with the full Advanced → Helm and operational guides (open docs/index.html after extract). Use those paths for cluster layout, variables, and examples that match your release.
Key requirements:
  • Kubernetes cluster
  • Two storage buckets
  • Storage class for the creation of volumes (use block storage, non-NFS)
Arize AI recommends a minimum of two node pools to get started: a Base pool for core functions and ingestion components, and an ArizeDB pool for historicals, which serve the data queried by the Arize AX UI. If you cannot use separate pools, Arize AX can be deployed on a shared set of nodes by setting historicalNodePoolEnabled: false in values.yaml.
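As a minimal sketch of the shared-pool option, the snippet below writes historicalNodePoolEnabled: false (the key documented above) into a Helm values override file. The file name and the Helm invocation shown in the comment are illustrative, not prescribed by the distribution.

```shell
# Write a minimal Helm values override that disables the dedicated ArizeDB
# (historical) node pool. The file name is illustrative; the key
# historicalNodePoolEnabled is the one documented above.
cat > arize-overrides.yaml <<'EOF'
historicalNodePoolEnabled: false
EOF
# You would then pass it alongside your main values file, e.g.:
#   helm upgrade --install arize <chart> -f values.yaml -f arize-overrides.yaml
cat arize-overrides.yaml
```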
Sizing profiles (all pools use the node labels listed below):

Profile: nonha (High Availability: no)
  • Node pool: Base; min nodes: 1; 16 vCPU and 128 GB RAM per node
  • Example instance types: n2d-highmem-16 (GCP), r5a.4xlarge (AWS), Standard_E16s_v5 (AKS)
  • Example applications: integration testing, staging environments

Profile: small1b (High Availability: yes)
  • Node pools: Base (min nodes: 3) and ArizeDB (min nodes: 2); 8 vCPU and 64 GB RAM per node
  • Example instance types: n2d-highmem-8 (GCP), r5a.2xlarge (AWS), Standard_E8s_v5 (AKS)
  • Example application: hundreds of millions of traces or inferences

Profile: medium2b (High Availability: yes)
  • Node pools: Base (min nodes: 2) and ArizeDB (min nodes: 2); 16 vCPU and 128 GB RAM per node
  • Example instance types: n2d-highmem-16 (GCP), r5a.4xlarge (AWS), Standard_E16s_v5 (AKS)
  • Example application: several billions of traces or inferences
Node labels (searchable copy) — match the diagram and your cloud’s node pool UI:
  • Base pool: arize=true, arize-base=true
  • ArizeDB pool: arize=true, druid-historical=true
The base pool can be configured for autoscaling.
[Diagram: node pools and autoscaling configuration]
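As a sketch, the node labels above map to kubectl commands like the following. Node names are placeholders for your real pool members, and the commands are echoed rather than executed so you can review them first.

```shell
# Print (not run) the kubectl label commands for each pool. Node names are
# placeholders; the label keys and values are the ones documented above.
base_labels="arize=true arize-base=true"
arizedb_labels="arize=true druid-historical=true"
for node in base-pool-node-0 base-pool-node-1; do
  echo kubectl label node "$node" $base_labels
done
for node in arizedb-pool-node-0; do
  echo kubectl label node "$node" $arizedb_labels
done
```

On managed Kubernetes (GKE, EKS, AKS), you can usually set these labels on the node pool itself so new nodes inherit them automatically.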

Workstation for install scripts

The distribution’s arize.sh install flows are documented and tested on macOS and Linux with Docker available. Windows is not documented for these scripts; use a Linux VM or WSL if your policy allows. The Arize AX distribution documentation (under docs/ in the tarball) lists the tools expected on the machine you use to deploy. In summary you need Helm, kubectl, curl, openssl, and Docker (recommended for image workflows). Use Terraform when you provision with the bundled modules; use your cloud’s CLI for day-one cluster and IAM tasks. Release-specific notes may appear on On-premise releases.

Tool version guidance

These are general floors; your distribution’s docs/ HTML for the exact AX release is authoritative if it differs.
  • Helm: 3.x only (Helm 2 is not supported).
  • kubectl: Use a client whose minor version is within about one minor of the cluster API server (standard Kubernetes client skew guidance). Run kubectl version and compare Client Version to Server Version.
  • Docker: A current stable Docker Engine or Docker Desktop build for image mirror/push workflows.
  • Terraform: 1.x CLI when using the bundled modules; match any required_version declared under terraform/ in your extract.
  • Cosign (optional): to verify signed OCI images.
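The kubectl skew rule above can be checked mechanically. A small shell sketch follows; the version strings are illustrative, so substitute the client and server minor versions reported by kubectl version.

```shell
# Return success if two Kubernetes versions (vMAJOR.MINOR.PATCH) are within
# one minor version of each other, per the client skew guidance above.
minor() { echo "$1" | cut -d. -f2; }
skew_ok() {
  c=$(minor "$1"); s=$(minor "$2")
  d=$((c - s))
  [ "${d#-}" -le 1 ]
}
# Illustrative inputs: a v1.29 client against a v1.30 server is within skew.
skew_ok v1.29.4 v1.30.2 && echo "client/server skew OK" || echo "replace kubectl"
```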
macOS: Prefer a currently Apple-supported macOS release on the machine that runs arize.sh and Docker; an unsupported macOS release may lack the TLS or security updates your enterprise endpoints require.

Verify required tools

Run the following from a terminal. Compare output to Tool version guidance above and to the distribution bundle’s internal docs for your release.
docker --version
helm version
curl --version
openssl version
kubectl version
# Optional
cosign version
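The one-off commands above can be bundled into a single presence check. A sketch follows; the tool list comes from this section, with cosign left out as optional.

```shell
# Report which required tools are on PATH before running the install scripts.
required="docker helm kubectl curl openssl"
missing=""
for tool in $required; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool ($(command -v "$tool"))"
  else
    missing="$missing $tool"
  fi
done
if [ -n "$missing" ]; then
  echo "missing:$missing"
else
  echo "all required tools present"
fi
```

This only confirms the tools exist; still compare each reported version against the guidance above.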

Cloud CLIs (GCP shown as an example; use the CLI for your provider)

gcloud --version
gcloud auth login