Introducing Aman Khan, Arize’s Newest Product Manager

Arize is currently hiring for over ten open positions. Join us!

Aman Khan is the latest addition to Arize’s growing product team. Responsible for helping drive product development of Arize’s rapidly growing ML observability platform, Aman brings a wealth of experience from prior roles pioneering ML infrastructure and tooling at Cruise and Spotify.

How would you describe your role with Arize?

Aman: All things product – working with the engineering team to define requirements based on customer feedback, and thinking ahead about innovation and the future of ML observability. I plan to partner closely with the marketing team to make sure the language we present is clear to our customers, decision-makers, and other stakeholders. I also hope to help streamline the sales and customer success process so there’s a very tight feedback loop between what customers are asking for and how the engineering team interprets those requests.

Walk us through your career journey up to this point.

Aman: I started my journey as a mechanical engineering student at UC Berkeley with a passion for building, coding, and spending a lot of time in the machine shop. While there, I took a semester off to work on product at Apple, with the goal of learning the product development process and how to take a fuzzy idea and turn it into something concrete.

Several years later, I found my calling in the rapidly growing machine learning space. As a systems test engineer at Cruise, I had a multi-disciplinary role figuring out how to test models that were making their way onto a safety-critical product – a self-driving car. I found the challenge intriguing and started writing individual test cases, then code to automate my own job – and eventually found myself writing more documents than code. Ultimately, that’s what led me to become a product manager helping to build early products that resembled ML observability and evaluation tooling.

Most recently, I was a product manager at Spotify where I had the privilege of working on feature infrastructure with an incredible team.

Why make the jump to Arize? 

Aman: Arize is the perfect blend: an interesting problem, an incredible team, and a product that people love. It’s also a rare opportunity to contribute to technology with high potential for impact.

Society is reaching an inflection point with AI, where applications touch our lives in more ways than ever – from ultrasound diagnostics to self-driving cars. That means the bar for deployed AI has to be extremely high, and ML observability is indispensable because there is always a point where a model’s performance starts to decay. I think the role Arize is playing in the future of the industry is critical to ensuring we scale AI responsibly.
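To make that decay concrete: one common way observability tooling catches it before ground-truth labels arrive is by measuring how far the live prediction distribution has drifted from a training baseline. Below is a minimal sketch of that idea using the Population Stability Index; the bin count, the 0.2 threshold, and the synthetic score data are illustrative assumptions, not Arize’s implementation.

```python
import numpy as np

def psi(baseline: np.ndarray, production: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two score distributions."""
    # Bin edges come from the baseline so both distributions are compared
    # on the same grid.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    # Guard against log(0) in empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=10_000)    # stand-in for training-time scores
production_scores = rng.beta(2, 3, size=10_000)  # stand-in for live traffic scores

if psi(baseline_scores, production_scores) > 0.2:  # 0.2 is a common rule of thumb
    print("Significant drift: investigate before performance degrades further.")
```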

As someone who has worked on in-house infrastructure in ML, what do you think the build-versus-buy calculus should be for companies looking at model monitoring and observability today?

Aman: My answer today is different from what it would have been three years ago. Back then, to work on a new machine learning problem you had to architect all of these components on your own – figuring out how to manage your model, manage your features, build your own model monitoring, and a lot more. Today, there is a suite of products servicing the full ML lifecycle, from open source solutions to products like Arize that are incentivized to make sure you have a great product experience.

When you are at a large enterprise, the reflex to build rather than buy is natural – there is always a temptation to think the problem you are solving is highly specific and to spin up resources around it. But I would think hard about how much value you actually provide by dedicating a team to a specific ML platform problem. At Arize, we have close to 50 people working on a specialized ML observability product that has been years in development. That scale means we see far more problems across customer teams, and we can allocate more people to the hard problems than most companies could internally without a massive opportunity cost. Teams can instead spend that energy translating what’s off the shelf into something usable by the rest of the enterprise at scale.

In a recent presentation, you talked about scale and the unique challenges of handling millions of prediction requests per second. Do you think load testing – verifying that a platform can handle analytic workloads at that volume – should be an important part of assessing an ML observability platform?

Aman: Absolutely. ML is becoming more real-time in general, which means making decisions on fresh data faster. That’s only possible if your database and your ML monitoring solution can keep up. As companies look at the success of something like TikTok – which learns, within a single session and in near real time, how user behavior changes – you need an ML observability product that can actually keep up with that request volume and latency.
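As a rough illustration of what “keeping up” means in practice, the sketch below generates synthetic prediction records against a hypothetical HTTP ingest endpoint and reports latency percentiles. The URL, record schema, and model name are stand-ins for whatever your platform exposes, not a real Arize API.

```python
import random
import statistics
import time
import uuid

import requests  # assumes an HTTP ingest path; real SDKs vary by vendor

INGEST_URL = "https://example.com/v1/log"  # placeholder, not a real endpoint

def log_prediction(session: requests.Session) -> float:
    """Send one synthetic prediction record and return round-trip latency."""
    record = {
        "prediction_id": str(uuid.uuid4()),
        "model_id": "recs-v3",       # hypothetical model name
        "score": random.random(),    # synthetic model output
        "timestamp": time.time(),
    }
    start = time.perf_counter()
    session.post(INGEST_URL, json=record, timeout=5)
    return time.perf_counter() - start

def run_load_test(n_requests: int = 1_000) -> None:
    """Fire n_requests sequentially and print p50/p99 ingest latency."""
    with requests.Session() as session:
        latencies = sorted(log_prediction(session) for _ in range(n_requests))
    print(f"p50: {statistics.median(latencies) * 1000:.1f} ms")
    print(f"p99: {latencies[int(0.99 * len(latencies))] * 1000:.1f} ms")

if __name__ == "__main__":
    run_load_test()
```

A real harness would fan these requests out across many concurrent workers to approach production volume; the sequential version here just keeps the measurement logic easy to follow.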

This isn’t dependent on just one model or use case. What enterprises will find as they adopt ML is that many internal functions will also want to use ML tools – so being able to scale up internally is important, too.

Do you have any advice for those starting out in their careers who want to transition into ML/PM roles?

Aman: I think you just have to be really honest with yourself about what you’re looking for and what your strengths are, and play to them. It may be obvious, but whether you’re better at programming or at understanding customers’ needs, lean into those strengths to find the right fit – both in terms of the role and in terms of an organization with leaders and mentors who support your growth. That’s what enabled me to transition internally within a company to a role that’s the right fit for me today.