Six Takeaways From CDAO (Fall) On AI ROI
The last 18 months have introduced unforeseen challenges and opportunities for data science and machine learning organizations. Due to the global impacts of the COVID-19 pandemic, data science and ML engineering teams have been forced to navigate uncertainty, evolve their infrastructure investments and prioritize ML monitoring and model observability in an increasingly unpredictable world.
In a recent panel discussion at Chief Data & Analytics Officer (CDAO) Fall, Arize’s co-founder and Chief Product Officer Aparna Dhinakaran led a group of industry experts on a deep dive into one of the most pressing issues facing organizations as they seek to implement ML and AI initiatives: ROI.
Aparna was joined by: Besa Bauta, chief data officer, MercyFirst; Ram Singh, chief analytics officer, Performics; Korri Jones, senior ML engineer, Chick-fil-A; and Claire Gubian, global head of business transformation, Dataiku.
Here are six key takeaways from the panel:
1. Buy-in for AI and ML projects is tied directly to the ability to deliver agile insights and business value at scale.
MercyFirst’s Bauta notes that as a practitioner in the healthcare sector, one way her organization is utilizing artificial intelligence is to surface insights that help clinicians provide better care.
“When we deploy a model and the model identifies the potential healthcare risk, we need to verify that the physicians are seeing the same thing that the model is seeing. If the margin of error is low and you can demonstrate the ability to readjust and tweak the model for continuous improvement, that goes a long way towards showing you can scale and secure buy-in to grow the project.”
2. The COVID-19 pandemic has fundamentally changed the ROI conversation.
Performics’ Ram Singh highlights that due to COVID, his organization had to do things like look at alternate data patterns and run multiple time series computations.
“The extreme nature of the pandemic created variations across time periods that were unbelievably large. COVID forced the entire industry to re-examine how exogenous factors, at both a global and regional scale, can impact ML models and now we have been running simulation models to prepare for unexpected events moving forward,” he commented.
“The industry is now hyper-aware that there will be more situations where you will have to go back and iterate, and in some cases rebuild the model behind the scenes.”
Chick-fil-A’s Jones added, “The entire ecosystem was affected. It nuked a lot of stuff, but it also forced us to reassess how we manage organization-wide disasters. And I think this was the big thing that a lot of folks learned during this season.”
3. Business stakeholders’ expectations are high due to the AI hype, creating the challenge of quickly proving the real value of initiatives.
According to Dataiku’s Gubian, measuring ROI is difficult and far from standard practice in the industry, but it remains the clearest way to prove value.
“Often we ask, ‘what’s the ROI, what’s the business impact?’ and we don’t have any answers, or we have people who tell us, ‘Actually, we don’t know how to quantify the impact.’”
“The reason is that not that many models make it into production and a lot of it is still very exploratory,” she added. “To get more buy-in, more momentum, you need to demonstrate value. What we encourage customers to do, and what we have seen in very successful companies that are actually driving ROI every single day, is to spend time defining a framework to qualify use cases.”
4. The ability to strike a balance between tools and technologies and measuring incremental value is critical.
The panel pointed out that it’s important to first spend time qualifying candidate use cases for models and then to prioritize them along two dimensions: impact and feasibility.
According to Gubian, both dimensions factor into decisions about tools and technologies and into evaluating the success of an initiative.
“At the end of the day, striking the balance is all about being able to answer how much value is going to be created and how complicated it is going to be to design and put models into production. If you can answer those questions, you have the right framework in place to make sound decisions about AI initiatives.”
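To make that framework concrete, here is a minimal, hypothetical sketch of ranking candidate ML use cases along the two dimensions of impact and feasibility; the use cases, scores, and scoring rule below are illustrative assumptions, not figures shared by the panel:

```python
# Hypothetical use-case prioritization sketch: score each candidate ML
# use case on impact and feasibility (1-5), then rank by a combined score.
from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    impact: float       # expected business value, scored 1-5
    feasibility: float  # data readiness, complexity, time to production, scored 1-5

    @property
    def priority(self) -> float:
        # A simple product of the two dimensions; a weighted sum also works.
        return self.impact * self.feasibility


candidates = [
    UseCase("demand forecasting", impact=4.5, feasibility=3.0),
    UseCase("churn prediction", impact=4.0, feasibility=4.0),
    UseCase("document auto-tagging", impact=2.5, feasibility=4.5),
]

for uc in sorted(candidates, key=lambda u: u.priority, reverse=True):
    print(f"{uc.name}: impact={uc.impact}, feasibility={uc.feasibility}, priority={uc.priority:.1f}")
```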
5. Ways to categorize ROI vary.
The panelists all agreed that some models have a direct impact on the business and are revenue-generating, while some may be more efficiency-based.
Singh sees two broad ROI categories in the performance agency space, where agencies help customers extract performance out of every penny they spend: efficiency-based solutions and revenue-based solutions. There are distinctly different ways of measuring the impact of each.
“If you’re looking at an efficiency-based solution, you typically will have conversations around: how much cost are you saving? How much are you saving in terms of team resources? How much logistical improvement are you seeing? These are all efficiency-based solutions.”
He continued: “When you are looking at revenue-driving applications, which is our emphasis, ROI is closely tied to choosing the right application. So for example, in our industry we’ll often talk about shortening the consumer journey. Even if you can take a couple of touchpoints off, that directly translates into a faster acquisition. Where you are trying to acquire a customer and it was taking you X days, now it is X minus something. That’s a faster acquisition process. And you can clearly demonstrate ROI from that calculation.”
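As a back-of-the-envelope illustration of the calculation Singh describes, shaving time off the acquisition journey can be turned into an ROI estimate. Every number below is a made-up assumption, not a figure from the panel:

```python
# Hypothetical ROI estimate for a model that shortens the customer
# acquisition journey. All inputs are illustrative assumptions.
baseline_days = 10.0        # X: average days to acquire a customer before the model
improved_days = 7.5         # X minus something: average days with the model
acquisitions_per_year = 20_000
value_per_day_saved = 3.0   # assumed dollar value of acquiring one customer one day sooner
model_cost_per_year = 100_000.0

days_saved = baseline_days - improved_days
annual_benefit = days_saved * value_per_day_saved * acquisitions_per_year
roi = (annual_benefit - model_cost_per_year) / model_cost_per_year

print(f"Days saved per acquisition: {days_saved:.1f}")
print(f"Annual benefit: ${annual_benefit:,.0f}")
print(f"ROI: {roi:.0%}")    # (benefit - cost) / cost = 50% with these assumptions
```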
6. Balancing short-term wins with long-term success helps sustain initiatives and deliver ROI.
Bauta adds that initially, when you create a model and deploy it into production, it should be viewed as just a tool like any other. “You’re going to have tools not operating or working the way they should,” she cautions. “This is because the inputs can vary due to the business changing. So the initial problem that the model was trying to solve may have changed over time. Ultimately, the model may need to have different specifications if it’s addressing a slightly different variant of the issue or business use case that has evolved.”
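One way to catch the kind of input shift Bauta describes is a routine check of a production feature’s distribution against its training baseline. The sketch below is a minimal, assumed example using the population stability index; the synthetic data and the commonly cited 0.2 alert threshold are illustrative, not something prescribed by the panel:

```python
# Minimal drift check: compare a production feature's distribution to the
# training baseline with the population stability index (PSI). Illustrative only.
import numpy as np


def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two samples of one feature."""
    edges = np.histogram_bin_edges(np.concatenate([baseline, current]), bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Guard against empty bins before taking the log.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))


rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_feature = rng.normal(loc=0.6, scale=1.2, size=5_000)  # the business has shifted

score = psi(training_feature, production_feature)
if score > 0.2:  # common rule of thumb for a significant shift
    print(f"PSI = {score:.2f}: inputs have drifted; revisit the model's specification.")
```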