

Before you start

  • Complete the Download and extract the distribution steps. You will work from the extracted folder that contains arize.sh, arize-operator-chart.tgz, and (after you create it) values.yaml.
  • Prepare your cluster per AKS cluster and resources (for an existing cluster), or provision one with Terraform.
  • Contact Arize AI for clusterSizing. It must match your cluster capacity (for example small1b or medium2b).
  • Keep the example values.yaml (quick start) open as a template to compare against when you finish the steps below.
  • Secrets: Treat values.yaml as sensitive. Store generated passwords and keys in your secret manager / vault, not in git history. Rotate anything that was ever pasted into a ticket or chat.
For every field that must be stored base64-encoded in values.yaml, use (replace placeholders with your own values—do not reuse literals from documentation):
echo -n "<plain-text-value>" | base64 | tr -d '\n'
To base64-encode a whole file (for example a key export):
base64 < "<path-to-file>" | tr -d '\n'
For random material, you can generate candidates with openssl rand -base64 32 (or your security team's standard). Always match what values.schema.json says about encoding and length for each field.
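The encoding commands above can be sanity-checked locally before anything goes into values.yaml. A minimal sketch (the plain-text value below is a placeholder, not a real secret):

```shell
# Encode a placeholder secret the same way the guide does, then verify the
# round trip by decoding it again. "s3cr3t-placeholder" is not a real value.
plain="s3cr3t-placeholder"
encoded=$(printf '%s' "$plain" | base64 | tr -d '\n')
decoded=$(printf '%s' "$encoded" | base64 -d)
if [ "$decoded" = "$plain" ]; then
  echo "round-trip OK"
fi
```

printf '%s' behaves like echo -n without the portability caveats of echo, so it is safe to substitute in any of the commands below.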

1. Verify cluster access and clusterName

Configure kubectl for your AKS cluster (use the Azure portal Connect flow or Azure CLI if needed):
az aks get-credentials --resource-group <resource-group> --name <cluster-name>
Confirm the API server is reachable:
kubectl -n kube-system get deployment
Set clusterName in values.yaml to the AKS cluster name (the same value as --name above). Example shape: my-production-aks.

2. Seed hubJwt (license JWT)

You need the JWT string Arize AI provided for downloads. Store it base64-encoded under hubJwt. You can write it as the first line of a new values.yaml:
echo -n "<JWT-from-Arize-AI>" | base64 | tr -d '\n' | sed 's/^/hubJwt: /' > values.yaml
Or edit values.yaml manually and set hubJwt to the output of the echo -n ... | base64 command.
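Before installing, it is worth confirming that the stored value decodes back to a three-part header.payload.signature token. A sketch using a fake placeholder token (your real token comes from Arize AI):

```shell
# Placeholder with the standard JWT header.payload.signature shape;
# substitute the token Arize AI provided.
jwt="eyJhbGc.eyJzdWI.c2ln"
hub_jwt=$(printf '%s' "$jwt" | base64 | tr -d '\n')
decoded=$(printf '%s' "$hub_jwt" | base64 -d)
parts=$(printf '%s' "$decoded" | awk -F. '{print NF}')
if [ "$parts" -eq 3 ]; then
  echo "hubJwt decodes to a JWT-shaped string"
fi
```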

3. Set cloud

Add:
cloud: "azure"

4. Point to blob containers (gazetteBucket, druidBucket)

List containers in your storage account and copy the names you created for Gazette and ArizeDB (see the cluster guide):
az storage container list --account-name "<storage-account-name>" -o table
Set:
gazetteBucket: "<gazette-container-name>"
druidBucket: "<ArizeDB-container-name>"

5. Set postgresPassword and cipherKey

Choose a strong Postgres password and cipher key material (Arize AI documents the cipher length expectations in the bundled chart docs). Store both base64-encoded in values.yaml as postgresPassword and cipherKey. Example for a random 32-character cipher source, then base64 (adjust to your security process):
LC_ALL=C tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 32 | base64 | tr -d '\n'
Use echo -n '<your-password>' | base64 | tr -d '\n' for postgresPassword if you pick the password yourself. Add the encoded values to values.yaml (generate your own secrets; do not paste example strings from documentation—see Before you start):
postgresPassword: "<POSTGRES_PASSWORD_BASE64>"
cipherKey: "<CIPHER_KEY_BASE64>"
Confirm lengths and encoding against values.schema.json in arize-operator-chart.tgz if your security team has extra constraints.
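One way to double-check the result locally, assuming the schema expects a 32-byte decoded value (confirm the actual length requirement against values.schema.json):

```shell
# Generate 32 random alphanumeric characters and base64-encode them, as in
# the step above, then confirm the encoded value decodes back to 32 bytes.
cipher_b64=$(LC_ALL=C tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 32 | base64 | tr -d '\n')
decoded_len=$(printf '%s' "$cipher_b64" | base64 -d | wc -c)
echo "decoded length: $decoded_len"
```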

6. Set organizationName and clusterSizing

organizationName: "<company-or-org-name>"
clusterSizing: "<value-from-Arize-AI>"
clusterSizing must match what Arize AI approved for this environment.

7. Storage account name (azureStorageAccountName)

Use the Azure Storage account that holds the Gazette and ArizeDB containers (see cluster requirements). Set azureStorageAccountName to the plain storage account name (do not base64-encode it):
azureStorageAccountName: "mystorageacct01"
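Azure storage account names are 3-24 characters, lowercase letters and digits only, so a quick local format check of the value you plan to use can catch typos early (the name below is the placeholder from this step):

```shell
# Validate the planned azureStorageAccountName against Azure's naming rules:
# 3-24 characters, lowercase letters and digits only.
name="mystorageacct01"
if printf '%s' "$name" | grep -Eq '^[a-z0-9]{3,24}$'; then
  echo "storage account name format OK"
else
  echo "invalid storage account name: $name" >&2
fi
```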

8. Workload Identity vs storage account key

Default (recommended): Azure AD Workload Identity is enabled on the cluster and federated credentials are in place.
azureWorkloadIdentityEnabled: true
azureWorkloadIdentityTenantId: "<tenant-id-guid>"
azureWorkloadIdentityClientId: "<user-assigned-identity-client-id>"
collectNodeMetrics: true
Omit azureStorageAccountKey or leave it unset when using Workload Identity.

Fallback: if Workload Identity cannot be used, copy an access key from the storage account in the Azure portal (Security + networking → Access keys), then base64-encode the key string and set:
azureWorkloadIdentityEnabled: false
azureStorageAccountKey: "<BASE64_OF_KEY_STRING>"

9. Set appBaseUrl

Set the URL users will use for the Arize AX UI after ingress and DNS exist. You can refine it later.
appBaseUrl: "https://<arize-app.your-domain>"

10. Optional — private registry (pushRegistry, pullRegistry)

If the cluster must pull images from your registry, set pushRegistry and pullRegistry to the same hostname. Helm and the operator use pullRegistry for workload image references; arize.sh uses pushRegistry as the destination when pushing mirrored images. Both should point at the registry your cluster will use (for example <registry-name>.azurecr.io). See Connected, Deployment type — Air-gapped, or Semi-restricted for when mirroring applies. Authenticate Docker on the machine that will push images, for example:
az acr login --name <registry-name>

11. Optional — mirror images before install

Optionally, before mirroring, verify the images with cosign using the public key signature-key.pub bundled in the distribution tarball:
# Uses cosign under the hood
./arize.sh verify-images -- --key signature-key.pub
If step 10 applies, run the image workflow before install (from the same directory as arize.sh and values.yaml). Typical connected-bastion flow:
./arize.sh load-remote-images
For split networks or full offline transfer, follow Deployment type — Air-gapped.

12. Install with arize.sh or Helm

Using arize.sh (recommended default): the script reads values.yaml in the current directory (or pass -f /path/to/values.yaml). Non-interactive automation can use -y.
./arize.sh help
./arize.sh install
Using Helm directly (equivalent chart install):
helm upgrade --install -f values.yaml arize-op arize-operator-chart.tgz
Both approaches install the operator chart using the same values.yaml.

Why use arize.sh?

  • Image workflows: load-remote-images, pull-images / push-images, and related commands wrap Docker and registry auth the way Arize AI tests them.
  • Local smoke tests: open-ports reproduces the port-forward set the team expects after install.
  • Less typing: one entrypoint for help text and flags (./arize.sh help).
Helm alone is appropriate when your pipeline already renders manifests, manages secrets out-of-band, and you only need the operator chart apply step.

13. Post-install: optional port-forward (before ingress)

Use port-forwarding only for early checks, not for production traffic. Configure ingress for real users (see step 14 and the links at the end of that section).

14. Local access with port forwarding

Use arize.sh open-ports (easiest)

  1. After ./arize.sh install, the script starts the same forwards as ./arize.sh open-ports and prints local URLs.
  2. To start them again later, run ./arize.sh open-ports from the distribution directory with kubectl pointed at the right cluster.

Port-forward internalendpoints-app

internalendpoints-app is the main in-cluster entry point for user-facing Arize AX traffic; it is the same service that ingress targets on 443 in a normal setup. You will usually open the web UI here, but the service handles more than the UI (for example, APIs and other app paths routed behind it). For a quick local check without TLS, forward local port 4040 to service port 80 (plain HTTP, not https:// on 443):
kubectl port-forward -n arize service/internalendpoints-app 4040:80
Use your namespaceArize value if it is not the default arize. Then open http://localhost:4040.

Other services from open-ports

These match the bundled script (operator HTTP runs in the operator namespace; the rest use your Arize AX workload namespace unless you changed namespaceArize):
| Local URL | Service |
| --- | --- |
| http://localhost:4040 | internalendpoints-app (main app entry; UI and other routed traffic) |
| http://localhost:3001 | operator-http (operator namespace) |
| http://localhost:50050 | receiver |
| http://localhost:3000 | grafana |
| http://localhost:9090 | prometheus |
| http://localhost:8888 | adb-router |
| http://localhost:9093 | alertmanager |
| http://localhost:9001 | minio (only when cloud is minio) |
Run ./arize.sh help for the open-ports operation description.

Production next steps

15. Compare your file to the template

When finished, your values.yaml should match the shape of the example in the quick start. Use the quick start Minimum fields to verify table as a final checklist.
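For reference, a values.yaml assembled from the steps above has roughly this shape. Every value here is a placeholder taken from this guide; your real file must use your own generated secrets and names, and the authoritative field list is values.schema.json in the chart:

```yaml
clusterName: "my-production-aks"                  # step 1
hubJwt: "<JWT_BASE64>"                            # step 2
cloud: "azure"                                    # step 3
gazetteBucket: "<gazette-container-name>"         # step 4
druidBucket: "<ArizeDB-container-name>"           # step 4
postgresPassword: "<POSTGRES_PASSWORD_BASE64>"    # step 5
cipherKey: "<CIPHER_KEY_BASE64>"                  # step 5
organizationName: "<company-or-org-name>"         # step 6
clusterSizing: "<value-from-Arize-AI>"            # step 6
azureStorageAccountName: "mystorageacct01"        # step 7 (plain, not base64)
azureWorkloadIdentityEnabled: true                # step 8
azureWorkloadIdentityTenantId: "<tenant-id-guid>"
azureWorkloadIdentityClientId: "<user-assigned-identity-client-id>"
collectNodeMetrics: true
appBaseUrl: "https://<arize-app.your-domain>"     # step 9
```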

16. Full parameter set and long-form narrative

  • Every supported key is described in values.schema.json inside arize-operator-chart.tgz (unpack or inspect the chart).
  • For multi-cloud variants, storage class defaults, node selectors, tolerations, and extended ordering, use Advanced → Helm in the offline HTML documentation under docs/ in the tarball (open docs/index.html locally).

See also