What’s New
May 9, 2022
Automatic Thresholds for Monitors
The Arize platform now automatically populates monitoring thresholds for both **Drift Monitors** and **Performance Monitors**. A monitor's threshold is the value compared against your model's current calculated metric value; an alert is triggered when the metric's current value rises above or falls below that threshold.
Automatic thresholds help ML teams scale their ML needs, reduce time to resolution, and increase overall workflow efficiency.
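The threshold comparison described above can be sketched in a few lines. This is an illustrative sketch only; the `should_alert` function and its parameters are hypothetical and not part of the Arize SDK:

```python
def should_alert(current_value: float, threshold: float, direction: str = "above") -> bool:
    """Hypothetical sketch of a monitor's alert check.

    direction="above": alert when the metric rises above the threshold
                       (e.g., a drift score exceeding its automatic threshold)
    direction="below": alert when the metric falls below the threshold
                       (e.g., accuracy dropping under its automatic threshold)
    """
    if direction == "above":
        return current_value > threshold
    return current_value < threshold


# Drift score of 0.12 exceeds an automatic threshold of 0.10 -> alert
print(should_alert(0.12, 0.10))            # True
# Accuracy of 0.85 falls below an automatic threshold of 0.90 -> alert
print(should_alert(0.85, 0.90, "below"))   # True
```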
Drift Monitors
Arize sets automatic drift thresholds for both prediction drift and feature drift. An automatic threshold is determined when there is sufficient production data to determine a trend.
Learn more about automatic baselines here, drift monitors here, and how automatic thresholds for drift monitors are calculated here.
Performance Monitors
Arize sets an automatic threshold for performance monitors when there is sufficient production data to determine a trend. This capability intelligently alerts ML teams when the performance metric of your choosing is not behaving as expected.
Learn more about automatic baselines here, performance tracing here, and how automatic thresholds for performance monitors are calculated here.
Enhancements
May 23, 2022
New Accuracy Metric: Symmetric Mean Absolute Percentage Error (sMAPE)
Arize users can now use Symmetric Mean Absolute Percentage Error (sMAPE) as an accuracy metric for performance tracing. sMAPE is useful when your model is prone to over-forecasting and the shortcomings of MAPE become prohibitive for evaluating accuracy.
Learn how sMAPE is calculated here.
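For intuition, here is a minimal Python sketch of one common sMAPE definition (the mean of absolute errors divided by the average of the absolute actual and forecast values, scaled to a percentage). The `smape` helper is illustrative, not part of the Arize SDK; see the linked docs for Arize's exact calculation:

```python
def smape(actuals, forecasts):
    """One common sMAPE definition: 100/n * sum(|F - A| / ((|A| + |F|) / 2))."""
    n = len(actuals)
    total = 0.0
    for a, f in zip(actuals, forecasts):
        denom = (abs(a) + abs(f)) / 2
        if denom == 0:
            continue  # skip pairs where actual and forecast are both zero
        total += abs(f - a) / denom
    return 100 * total / n


# Forecasts that consistently overshoot by 10%
print(smape([100, 200, 300], [110, 220, 330]))  # ~9.52
```

Because the denominator averages the actual and forecast values, sMAPE penalizes over- and under-forecasting more symmetrically than plain MAPE, which divides by the actual alone.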
Support Latent Tags On Actuals
Arize users can now log latent actuals with tags. This enables users with delayed actuals to group, monitor, slice, and investigate the performance of delayed cohorts. Latent tags broaden the scope of our existing metadata monitoring features for more comprehensive ML observability.
Learn more about how to use tags here.
Additional Object Store Support - Google Cloud Storage
Arize users can now automatically upload model inference data to the Arize platform via Google Cloud Storage. With this addition, Arize users can use the File Importer feature to easily ingest their data directly from GCS.
Learn more about our File Importer and supported Object Stores here.
New Performance Metric: Weighted Average Percentage Error (WAPE)
We’ve added a new accuracy metric, Weighted Average Percentage Error — also known as MAD/Mean ratio — for more comprehensive performance tracing. WAPE is useful when your model is prone to outlier events as its calculations are based on absolute error instead of squared error.
Learn how to calculate WAPE here.
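For intuition, the MAD/Mean ratio can be sketched in a few lines of Python (the `wape` helper is illustrative, not part of the Arize SDK; see the linked docs for Arize's exact calculation):

```python
def wape(actuals, forecasts):
    """WAPE (MAD/Mean ratio): total absolute error over total absolute actuals, as a percentage."""
    abs_error = sum(abs(a - f) for a, f in zip(actuals, forecasts))
    abs_actual = sum(abs(a) for a in actuals)
    return 100 * abs_error / abs_actual


# Forecasts that overshoot by 10% on each point
print(wape([100, 200, 300], [110, 220, 330]))  # 10.0
```

Because errors are summed before dividing, large-volume points dominate the metric and a single outlier error is dampened relative to squared-error metrics like RMSE.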
In the News
May 9, 2022
Arize AI Named To Forbes AI 50 List For Second Consecutive Year
Forbes debuted its AI 50 list earlier this month, with Arize recognized for the second consecutive year! Arize is the only machine learning observability platform to make the cut and is named alongside category leaders and heavyweights like 6sense, Anyscale, Databricks, Dataiku, Generate Biomedicines, Hugging Face, and others. Read the release.
The Seven Habits of Highly Effective ML Engineers
While there are a wealth of articles and resources geared toward helping people prepare for software engineering jobs, there are relatively few guides to help prospective founding engineers. Arize founding engineer, Manisha Sharma, set out to change that in her latest piece titled “The Seven Habits of Highly Effective Founding Engineers.” Read more.
Rise of the ML Engineer: Elizabeth Hutton, Cisco
Elizabeth Hutton is the lead machine learning engineer on the Cisco Webex Contact Center AI team, where her work is relied on to provide good customer experiences across billions of monthly calls. In this wide-ranging Q&A, Hutton talks about how she got into the industry, best practices for NLP models, and the company's ML tech stack. Read it.
Arize AI Launches Bias Tracing, a Tool for Uprooting Algorithmic Bias
In today’s world, it has become all too common to read about AI acting in discriminatory ways — often with tragic consequences. Thus, we launched Bias Tracing, a tool designed to help monitor and take action on model fairness metrics. Arize Bias Tracing enables teams to make multidimensional comparisons, uncovering the features and cohorts contributing to algorithmic bias in production without time-consuming SQL querying or painful troubleshooting workflows. Learn more here.
How To Know When It’s Time To Leave Your Big Tech Software Engineering Job
In “How To Know When It’s Time To Leave Your Big Tech Software Engineering Job,” Arize AI founding engineer and Forbes 30 Under 30 honoree Tsion Behailu shares why she bet on Arize (and herself) after nearly five years at Google — and couldn’t be happier. Read more.
Building the Future of AI-Powered Retail Starts With Trust
“If customers don’t trust the model, it’s useless.” So says Jiazhen Zhu, Senior Data Engineer / Machine Learning Engineer and Tech Lead at Walmart Global Tech, who doesn’t pull any punches in this wide-ranging interview on MLOps, leadership, and the importance of ML monitoring and explainability. Read it here.
The Rise of AI Risk Disclosure
Three years ago, Alphabet and Microsoft made waves when they disclosed the use of AI as a potential risk factor in their annual financial reports. Given the rapid growth of AI in nearly every industry since then, it’s worth asking: how many companies followed their lead? This brief report from Arize AI outlines:
- The growth in AI risk disclosure by industry
- Examples of AI risk disclosures and responsible AI approaches from Fortune 500 companies
- Recommendations on what executives should consider when assessing an AI risk management and disclosure strategy
Download the White Paper here.