
Introducing Claire Longo, Arize’s New Customer Success Lead

Arize AI is currently hiring for more than ten open positions. Join us!

Claire Longo is Arize’s new Customer Success Lead. Before joining Arize, Longo helped lead ML engineering and data science teams at both Opendoor and Twilio. She is based in Denver, Colorado.

Can you briefly introduce yourself and share your career background?

I’m a data scientist by training, with a background in math and statistics. Early in my career, I was training a lot of machine learning models and struggling to get them into production. As a result, I started teaching myself more engineering skills and became very passionate about MLOps and ML infrastructure. Before joining the Arize team, I worked at Opendoor – a real estate disruptor using ML to price homes – as an engineering manager for the company’s ML team. Similarly, at Twilio I led a machine learning team that I got to establish from the ground up, defining ML platform infrastructure and staffing the team.

Why did you decide to get into machine learning?

With my master’s degree in statistics, it felt like a pretty natural transition. I first started working as a statistician and then people began calling it data science. I did have to learn all the machine learning skills and teach myself Python because I was coding in the programming language R at the time. So there was a lot of self-teaching, but it was mostly a seamless transition and was where the field was heading.

How would you describe your role and responsibilities at Arize?

As Customer Success Lead at Arize, my goal is just that: to make our customers successful. Whether it’s hands-on or educational, my hope is to pair with customers in any way that’s helpful for them. Prior to joining the team, I was also an Arize customer. At Twilio, we were one of the first adopters. Then at Opendoor we evaluated Arize as a potential vendor for our monitoring and observability needs. So I’ve very much been in the customer’s shoes, and I do think that helps me come into a role like this because I deeply understand the customer’s pain points. I’m very opinionated and passionate about MLOps and machine learning infrastructure, and I like to partner with central ML teams and data science teams to help them establish best practices.

Why is ML observability so important?

ML observability is important for a lot of reasons. It’s the final step of the perfect ML stack. A lot of companies have good infrastructure around the research and development of the models, ML model hosting, and maybe they have a feature store somewhere in the mix. But once you’ve gone through all of that and you’ve got multiple machine learning models in production serving use cases that the business is relying heavily on, then that brings up the question of how well those models are actually doing. Getting those models into production is not the last step. The last step is really maintaining the quality of them, responding to issues in a timely and automated fashion and really troubleshooting with a good, standardized workflow. That’s where Arize comes in; I think it’s the icing on the cake of the perfect machine learning tech stack.

Going back to your client days, why did you choose Arize?

I liked Arize for a lot of reasons. One is the ease of integration. It really is just a few lines of code in your serving layer if you’re using something like the Python SDK. You can get value out right away and start troubleshooting your models. Some of the other things that stood out to me were the quality and elegance of the interface. It’s not disorienting or hard to onboard. We were able to dig into it and see the different views of our data that we really wanted to see.
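To illustrate the "few lines of code in the serving layer" pattern, here is a minimal sketch. The `ObservabilityClient` below is a hypothetical stand-in that collects records in memory, not the actual Arize SDK API; the point is only where the logging call sits relative to the model call.

```python
# Hypothetical sketch: logging predictions from a serving function.
# ObservabilityClient is a stand-in, NOT the real Arize SDK interface.
import uuid
from datetime import datetime, timezone


class ObservabilityClient:
    """Stand-in client that collects prediction records in memory."""

    def __init__(self):
        self.records = []

    def log(self, prediction_id, features, prediction, timestamp):
        # A real client would ship this record to the observability platform.
        self.records.append({
            "prediction_id": prediction_id,
            "features": features,
            "prediction": prediction,
            "timestamp": timestamp,
        })


client = ObservabilityClient()


def serve(features):
    # The existing model call stays unchanged...
    prediction = sum(features.values())  # placeholder "model"
    # ...and the integration is one extra call alongside it.
    client.log(
        prediction_id=str(uuid.uuid4()),
        features=features,
        prediction=prediction,
        timestamp=datetime.now(timezone.utc),
    )
    return prediction


serve({"sqft": 1200.0, "beds": 3.0})
print(len(client.records))  # → 1
```

Because the logging call only wraps data the serving layer already has (features, prediction, an ID, a timestamp), the integration cost stays at a few lines regardless of the model behind it.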

You deployed models in a volatile and evolving housing market. Are there any lessons about drift or outlier events that stand out?

This was a really interesting use case, and a lot of the learnings can be generalized. One thing that stands out is that we had a human-in-the-loop system, so we weren’t completely automating everything with machine learning. We would have machine learning spit out a prediction, and a human would interact with it and adjust it as needed. We’d get a feedback loop on the quality of our models through that process, and that created a complex workflow where ML observability and model monitoring became really important. Obviously, what we want to do in a situation like that is to maintain the quality of our models and catch any issues upstream before the end user sees them. And so we needed to build a lot of automation to trigger alerts and help us troubleshoot when there’s something wrong with our model. Because our goal is to maintain trust in our models, that means catching issues upstream as soon as they happen.

You’ve written about and published a library of metrics for recommendation systems. Can you tell us a little more?

I was inspired to do this when I was working on recommendation systems for Trunk Club, which is owned by Nordstrom. We were doing a lot of work around building outfits (what pairs well with what), so it was a really fun application of machine learning. I found myself writing the same code over and over to evaluate the recommender systems, both creating plots and creating customized metrics to dig deeper into how a model was performing. These models are kind of unique because we’re targeting how well they personalize, which is slightly different from accuracy. And this felt like a gap. There weren’t a lot of Python libraries where I could just import these metrics and run them.
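One common metric of this kind is personalization: how much recommendation lists differ across users, independent of whether the recommendations are accurate. A minimal sketch (an illustration of the idea, not the library's actual API) computes it as one minus the mean pairwise cosine similarity of users' recommended-item sets:

```python
# Sketch of a personalization metric for recommender systems:
# 1 - mean pairwise cosine similarity between users' recommendation
# lists, treated as binary item-indicator vectors.
from itertools import combinations
from math import sqrt


def personalization(rec_lists):
    """Higher = more personalized; 0.0 means every user
    received the same recommendations."""
    sims = []
    for a, b in combinations(rec_lists, 2):
        sa, sb = set(a), set(b)
        # Cosine similarity of binary indicator vectors reduces to
        # overlap size divided by the geometric mean of list sizes.
        sims.append(len(sa & sb) / sqrt(len(sa) * len(sb)))
    return 1 - sum(sims) / len(sims)


# Identical lists: no personalization at all.
print(personalization([["a", "b"], ["a", "b"]]))  # → 0.0
# Disjoint lists: fully personalized.
print(personalization([["a", "b"], ["c", "d"]]))  # → 1.0
```

A metric like this captures exactly the gap described above: a model could score well on accuracy while serving every user the same popular items, and only a personalization-style metric would surface that.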

So I ended up carrying around code I’d written on my own for a while and reusing it. Eventually, I figured why not make this a library and put it out there? Then I could pip install it and use it, but it would also allow other people to contribute. It was kind of born out of a personal need to reuse this code, but then it became more of an open-source community project.

A fun lesson from this was that managing an open-source project is a job. It takes real time, so the library isn’t maintained as much as I’d like. But I do appreciate any issues that people submit, and I try to look at those. It was great to see the community that came together around this. It’s small, but there are people helping me improve this code over time. The same people collaborate with me consistently, and it’s been really amazing to meet people across the world who are working with me to make this library better.

What’s one thing that has surprised you since joining Arize?

I was pleasantly surprised by how fast we iterate. Our engineering team has a really fast release cycle. We get feature requests from customers and we turn them around very quickly. We’re releasing new features almost every week, improving our product in an iterative way with a seamless feedback loop. I think this is an amazing way to build out a product—starting simple, getting MVPs out there, getting it in front of customers, getting the feedback, going back to engineering, and building it out even better.

What’s one app on your phone that you can’t live without (bonus points if you can name how machine learning is likely underpinning the app)?

I’ve been using Poshmark for maybe a decade. It’s similar to eBay but for resale clothing, so it’s kind of like a global thrift store in an app. It’s very easy to sell your clothing on the app and it’s very easy to buy. It’s definitely driven by machine learning, probably very similar to Trunk Club with the same kind of algorithms, recommender systems, and all those learning-to-rank models that drive personalization.

Since you’re from Los Alamos, New Mexico, I have to ask you the official state question: do you prefer red or green Hatch chile – or both (often called “Christmas”)?

This is an easy question. The answer is always Christmas.