Arize AI Listed In Gartner Market Guide for AI Trust, Risk, and Security Management (AI TRiSM) For Second Year In a Row
As the pace of innovation in AI accelerates and breakthroughs in generative AI and new model architectures capture the public imagination, it feels like the dawn of a new era. Enterprises are investing billions to build and deploy models that power everything from improved detection of brain bleeds and credit card fraud to better demand forecasts – often moving with new urgency to tap AI for productivity gains amid a challenging economic environment.
Unfortunately, needed investments in the infrastructure and processes to effectively manage fairness, trust, risk, and security for AI systems have not kept pace. Nearly one in ten (9.4%) Fortune 500 companies cite AI and machine learning as a risk factor in their most recent financial reports – a number that is increasing 20.5% year-over-year – and over half (50.8%) of data scientists and machine learning engineers say they need better capabilities for monitoring models in production.
In a nutshell, this is why Arize exists – to offer an end-to-end ML observability platform that improves model performance and ensures that AI is used fairly, securely, and ethically. Every day, Arize provides deep introspection across billions of model decisions – offering insights to ML teams on the impact models have on both businesses and people. For thousands of users and a growing array of enterprise clients, Arize is emerging as an indispensable part of their machine learning stack.
With that in mind, we are proud to share that Gartner listed Arize as a Representative Explainability/Model Monitoring Vendor in their latest Market Guide for AI Trust, Risk and Security Management (AI TRiSM) for the second year in a row.
As the report notes, “AI brings new trust, risk and security management challenges that conventional controls do not address” — necessitating new capabilities to “improve model reliability, trustworthiness, fairness, privacy and security.” To that end, “monitoring AI production data for drift, bias, attacks, data entry and process mistakes is key to achieving optimal AI performance, and for protecting organizations from malicious attacks,” the report continues.
If you’re a Gartner client, you can access the full report here.
Disclaimer: Gartner Market Guide for AI Trust, Risk and Security Management (AI TRiSM), Avivah Litan, Jeremy D’Hoinne, Bart Willemsen, Sumit Agarwal, January 16, 2023. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation.