Webinar

Best Practices in ML Observability for Monitoring, Mitigating and Preventing Fraud

On-Demand Webinar

Every year, fraud costs the global economy over $5 trillion. AI practitioners are on the front lines of this battle, building and deploying sophisticated ML models to detect fraud and saving organizations billions of dollars in the process. It is a challenging task: fraud takes many forms and attack vectors across industries, so counter-abuse ML teams need an approach that is both reactive in monitoring key metrics and proactive in measuring drift.

In this webinar, Reah Miyara, Arize’s Head of Product and former Google AI lead for algorithms and optimization, will cover best practices in ML observability for fraud models.

You’ll learn how to:

  • Account for model, feature, and actuals drift to ensure your models stay relevant (see the drift-check sketch after this list)
  • Troubleshoot performance degradation across various cohorts
  • Avoid common pitfalls, from misleading evaluation metrics to imbalanced datasets
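
For a flavor of the kind of drift measurement discussed in the session, below is a minimal, illustrative sketch of the population stability index (PSI), a common statistic for comparing a feature's production distribution against its training baseline. This is not Arize's implementation; the function name, bin count, thresholds, and example data are all assumptions for illustration.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between a baseline sample (e.g. training data) and a current
    sample (e.g. production traffic) of one feature or model score.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major drift."""
    # Bin edges come from the baseline so both samples are compared on the same grid.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range production values

    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(current, bins=edges)

    # Convert counts to proportions; epsilon avoids log(0) and division by zero.
    eps = 1e-6
    expected_pct = np.clip(expected / expected.sum(), eps, None)
    actual_pct = np.clip(actual / actual.sum(), eps, None)

    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Hypothetical example: transaction amounts shift upward in production.
rng = np.random.default_rng(0)
train_amounts = rng.lognormal(mean=3.0, sigma=1.0, size=50_000)
prod_amounts = rng.lognormal(mean=3.4, sigma=1.1, size=50_000)
print(f"PSI: {population_stability_index(train_amounts, prod_amounts):.3f}")
```

A statistic like this is the "proactive" half of the approach described above: it flags distribution shift in features, predictions, or actuals before it shows up as a drop in fraud-detection performance.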

Access Recording

Speakers

Reah Miyara

Reah Miyara is Head of Product at Arize AI, a machine learning observability company. He was previously at Google AI, where he led product development for research, tools, and infrastructure related to graph-based machine learning, data-driven large-scale optimization, and market economics. Reah has extensive experience as a team and product leader, building and growing products across a broad cross-section of the AI landscape, including pivotal roles in ML and AI initiatives at IBM Watson, Intuit, and NASA Jet Propulsion Laboratory. He also co-led Google Research’s Responsible AI initiative, confronting the risks of AI being misused and taking steps to minimize AI’s negative influence on the world. Reah holds a bachelor’s degree in Electrical Engineering and Computer Science from UC Berkeley, where he founded and served as president of the Cal UAV team in 2014.

Ready to level up your ML observability game?

Request A Demo