Sparking ML-Powered Innovation In the Telecommunications Industry
Habib Baluwala is Domain Chapter Lead for Commercial Data at Spark New Zealand, the country’s largest telecommunications and digital services company. There, he works with Aadil Dowlut, a fellow data scientist and the company’s Chapter Lead for AI. In this wide-ranging interview, the two colleagues talk about Spark New Zealand’s machine learning use cases, how they monitor and troubleshoot model performance, and advice for those starting their careers.
Can you briefly introduce yourselves and outline what brought you to your current roles at Spark New Zealand?
Habib Baluwala: I am the Domain Chapter Lead for Commercial Data at Spark New Zealand, which means I am charged with expanding the number of machine learning (ML) use cases across the company. I’m also responsible for recruitment and talent management. Before joining Spark, I worked in academia in biomedical engineering trying to solve some of the pressing problems within the medical industry. I have also worked in financial services and consulting.
Aadil Dowlut: I have been at Spark New Zealand for nearly a year. My career journey spans several industries. Upon completing my PhD, I worked in the consulting industry when the data science field was still nascent and data scientists were still often called analysts or statisticians. At one of my first jobs, a group of us decided to create a small data science club to discuss new findings and research. Inspired by a famous paper by University of Cambridge Professor Zoubin Ghahramani called “The Automatic Statistician,” we had a lot of great conversations and it led me down my current career path.
Can you tell us more about Spark New Zealand?
Aadil: Spark New Zealand is a telecommunications company, so we provide mobile, broadband, and digital services.
Habib: Spark is constantly pushing beyond what a traditional telecommunications provider might offer. As a part of that journey, Spark New Zealand launched several new digital services over the past decade. We offer Spark Health, which supports the digital transformation of the health sector.
When I joined Spark New Zealand three years ago, our senior leaders set an ambitious goal: To help all of New Zealand win big in a digital world. In order for us to excel in this task, we need to understand what New Zealanders want and what their needs are at a granular level. That’s where machine learning comes in.
What are some of your ML use cases?
Aadil: We started our journey in machine learning trying to both better predict churn and understand customer preferences. Later, we built models touching many aspects of the business including wireless broadband and fiber broadband. We also have a model that looks at whether your phone is nearing the end of its life or whether you are in the market for a new phone and – if so – what type of phone you might like, so we can proactively make that easier for the customer.
Habib: That understanding of the customer is core. Since Spark’s main business is to provide network connections to the end user, understanding the network improvements and investments that are required is quite important. For example, we look at the network experience each customer is getting from our wireless broadband or mobile services, and at the same time at where we may need to invest in the future to deliver the best returns for both Spark and the customer. Applying ML in this way ultimately helps us deliver a better customer experience.
We are also using machine learning models to improve business processes. Since we work with a lot of business-to-business (B2B) customers, we can do things like spend classification. Within finance, we use machine learning to help our finance team classify line items automatically to help them save time and have a much better view of what’s happening across the business. We initially didn’t think that ML might play a role in these areas but are pleased to see it adding value.
Why is ML important and what value has it delivered to the business?
Habib: Telecommunications is a very competitive market. Any investments within the ML or data space need to be coupled with tangible commercial benefits. As with any project at Spark, the ML team regularly presents to stakeholders. At the same time, we publish results on an aggregated level as a part of the company’s annual financial report. For example, in the last annual report, we showed that our in-house data capability improved marketing efficiency by almost sixteen percent year-on-year. We also published how many models we have in production along with expected improvements. That keeps us accountable.
How many models do you currently have in production and how many team members manage those?
Aadil: We have over 50 models in production running on a weekly or monthly basis forming our pipeline for campaign delivery or business process improvements. In terms of team size, we currently have over 20 data scientists and machine learning engineers. We are looking to expand by hiring new data scientists and are working closely with graduate schools to recruit new talent.
Can you tell us a bit about your ML tech stack and what new hires need to familiarize themselves with when they join Spark New Zealand?
Aadil: The majority of our work is done in Python, including model development. One good thing about Spark New Zealand is when we adopt a technology, we go all in. We build our models using the Azure ML platform, spanning end-to-end model development and the productionization of those models, and we use Arize for ML observability.
We are also privileged to have a fantastic data layer. Our data team works hard to provide our data layer using Snowflake, and we also have a feature store in Snowflake where we have those features stored for us across different models. It makes the whole thing more repeatable and process-friendly.
How has your approach changed over the past several years?
Habib: When we started the journey with ML, we were focused on getting early models working on a laptop. As we expanded the number of use cases and the size of the team, our needs evolved and we wanted to stay ahead of potential issues. As a team, you never want to hear statements like “results have dropped off, can you rerun the model” or “we are not seeing the same benefits from ML compared to two months ago.”
Data is a very dynamic asset, and it’s continuously changing. You need to add services or tools which can help monitor those changes more effectively while being more proactive in the way you approach the output of models. By taking a more data and product-led approach, we’ve shifted to operating on a much more continuous basis with high standards for performance.
On that point, what are some common challenges you deal with after deploying models into production (e.g., performance degradation or model drift)?
Aadil: Getting to production itself is a challenge because you want the results to be consistent – you don’t want to have nulls suddenly appear in a table, for example. Most of our models at the moment are batch models that run every week on the data, though we are beginning to venture into real-time prediction models. Previously, there was a lack of visibility into how each model was performing from week to week, how the data was coming downstream, what was broken, and whether anything had changed within the data that might impact our predictions.
Checking 50 models every week was a tedious and time-consuming task. The challenge was to find a way to surface the things that were critical while also ensuring the models were performing according to our expectations and, if not, quickly get to the root cause.
Why did you select an ML observability platform like Arize to help alleviate some of these stresses?
Aadil: We created a framework to evaluate ML observability platforms. As a data scientist, you want to measure against everything. Several things stood out about Arize.
First is the ease of use. Once the predictions are done, how much effort does it require for us to push the data to Arize? And how much effort does it take Arize to consume the data? When we started working, it was as simple as pointing an API to the dataset itself with a few clicks. It was also fast. You had the data there and ready to be observed on the Arize platform.
Second, we looked at drift. Anyone with a few toolkits can calculate data drift, but what matters is how you visualize and understand what’s happening – you need to compare a baseline against what’s happening in production. The ease with which we can explore our data in Arize is impressive, along with how responsive the platform is while being used.
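The baseline-versus-production comparison Aadil describes is commonly done with a metric like the population stability index (PSI). A minimal, self-contained sketch follows; the binning scheme and the usual 0.1/0.25 thresholds are industry heuristics, not Spark’s actual implementation:

```python
import math

def psi(baseline, production, bins=10, eps=1e-6):
    """Population Stability Index between two numeric feature samples.

    Bin edges come from the baseline. A common heuristic: PSI < 0.1 is
    stable, 0.1-0.25 is a moderate shift, > 0.25 is significant drift.
    """
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0  # avoid zero width on constant data

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = int((x - lo) / width)
            i = max(0, min(i, bins - 1))  # clamp out-of-range values
            counts[i] += 1
        return [c / len(sample) for c in counts]

    p, q = fractions(baseline), fractions(production)
    # eps guards against log(0) when a bin is empty in one sample
    return sum((pi - qi) * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q))
```

An identical distribution gives a PSI near zero, while a shifted production sample pushes the score well past the drift threshold.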
Third, once in production, we wanted to be able to monitor and have alerts. Arize provides these things.
We also looked at Arize’s tools for fairness and bias, performance tracing, and explainability and how these worked together. With fairness and bias, for example, we looked at mitigating bias by removing certain features and measuring things like recall parity at a cohort level.
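Recall parity at a cohort level, as mentioned above, can be illustrated with a small sketch. The record format, cohort labels, and the 0.8 threshold (borrowed from the common four-fifths rule) are illustrative assumptions, not Spark’s methodology:

```python
def recall_by_cohort(records):
    """Recall per cohort from (cohort, y_true, y_pred) records with 0/1 labels."""
    tp, fn = {}, {}
    for cohort, y_true, y_pred in records:
        if y_true == 1:
            if y_pred == 1:
                tp[cohort] = tp.get(cohort, 0) + 1
            else:
                fn[cohort] = fn.get(cohort, 0) + 1
    cohorts = set(tp) | set(fn)
    return {c: tp.get(c, 0) / (tp.get(c, 0) + fn.get(c, 0)) for c in cohorts}

def recall_parity(recalls, base_cohort):
    """Ratio of each cohort's recall to a base cohort's recall.

    A common fairness heuristic flags ratios below 0.8 as potential bias.
    """
    base = recalls[base_cohort]
    return {c: r / base for c, r in recalls.items()}
```

Comparing these ratios before and after removing a suspect feature is one simple way to measure whether a mitigation actually reduced bias.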
Using Arize, it’s easy for us to go back to a stakeholder and show what bias looks like in the dataset and how it impacts the performance of the model. One of our data scientists actually did a presentation on fairness and bias to a wider group of stakeholders recently; the material was quite technical, but the Arize platform made it very easy for the audience to understand and follow along.
All of these factors convinced us that Arize was the right choice, both in terms of checking the right boxes and bringing new innovations to the team like monitoring embeddings.
The industry is moving fast within observability and it’s very hard to keep on top of everything. Having a specialized platform and a team that prioritizes innovation is helpful.
Habib: One more point to add: as a large corporation, partner reliability and shared values matter. Arize really values understanding and helping the data scientist – a platform built by and for tech people – which resonates with our team.
How did you approach the build versus buy decision for ML observability?
Habib: Initially we did consider an internal solution. We focused on trying to understand what kind of time investment and team expertise would be required, while also considering how quickly the field might change. We asked ourselves if this would be a one-year setup with a team building a product or if we needed a team on a more continuous basis to develop the product to meet the changing needs. In looking at all those different components, it wasn’t worth the cost for us to hire a data scientist, an ML engineer, and front-end developers to build a customizable solution in-house. It’s not part of our core business and does not make sense when there are already good solutions available.
We touched a bit on the ability to drill down into issues, which is really the difference between ML monitoring and ML observability. How are you speeding up time-to-resolution when encountering model performance issues?
Aadil: With so many models and a focus on always building new ones, it’s easy to miss model performance degradation. What tends to happen at companies is the business will come back to say, “hey, the last six months we had conversion based on this prediction of Y%, and in the last two months it has dropped significantly and response to our campaign is low. What’s happening?”
So you start pulling the training data and looking at graphs. You look at the predictions over the last six months and more specifically the two months that are not performing well — what changed? Is it concept drift? Or is it a problem further downstream, where the aggregation of the features is impacted by a new setting? This is very time consuming, involving several teams and several datasets. If the person that worked on the first model moved on, then a new person needs to get up to speed with the model and familiarize themselves with the data. Just getting to the bottom of it and doing the analysis can take at least a month per model for one individual – all while you’re not making the revenue that you’re expecting to make.
At Spark, we moved completely away from that strategy in favor of a more proactive approach. One of the ways we are using Arize at the moment is to not just monitor models, but also better understand and improve them. Take churn, for example. Is a customer leaving because of the service, or some other reason? Do we have features that capture that behavior? If not, why do we not have those features? If yes, why are those features not at the top for some customers when we are trying to predict churn? Maybe we need to do some feature engineering to focus our attention on ways of improving the model based on feedback, while also observing and monitoring what the model is doing within a given cohort. For us, Arize as a tool for model improvement and answering these types of questions has been quite successful.
How do you collaborate with business and product leads to tie model metrics to business results?
Aadil: We look at different metrics depending on the model. One thing we often look at is volume. Let’s say we have a model that predicts the number of customers who are in the market to buy a new phone. The tendency is to look at the conversion rate. However, there are a few elements that influence conversion so we separate things like volume and money spent rather than just having a single metric.
For churn, it’s slightly different. You send the customer an offer, but how many offers before the cost is larger than the revenue gained? The ML team is tasked with optimizing the volume of customers we want to reach, how precisely we should target them, and when to send offers since seasonality (e.g., back to school) can also influence consumer behavior. The more data we have to observe over time, the more we can tell about these things. Our priority is always to tie model prediction to business return on investment or other key performance indicators (KPIs).
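The offer-cost trade-off Aadil describes can be framed as a simple breakeven calculation. The parameter names and any figures used with it are illustrative assumptions, not Spark’s numbers:

```python
def retention_roi(n_offers, cost_per_offer, save_rate, value_per_save):
    """Net return of a churn-save campaign under simple assumptions.

    save_rate is the fraction of contacted customers who stay because
    of the offer; value_per_save is the retained revenue per such
    customer. The campaign breaks even when
    save_rate * value_per_save == cost_per_offer.
    """
    expected_saves = n_offers * save_rate
    return expected_saves * value_per_save - n_offers * cost_per_offer
```

In this framing, increasing targeting precision raises the effective save rate, which is what lets the team justify the volume of offers sent.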
In terms of ML infrastructure adoption, Spark New Zealand is ahead of the curve regarding ML observability. A lot of teams see it as a luxury to have until it becomes a necessity. Where do you see the MLOps space going, and what might be the next priorities for you?
Habib: For us, ML is now a core part of the business in many areas. If a model is creating business value, then it needs to be observed and there needs to be visibility on performance. That’s why the MLOps pipeline is at the heart of model development and automation. It’s not just the ML team that is interested, it’s also our leaders who are constantly asking us how our models are performing. Tools like Arize, which help us with model observability, play an important role in performance tracing and model improvement.
Aadil: As we bring on new tools and new data models, integrating them seamlessly is also critical. Automation, monitoring, and observability are at the heart of everything. The only way to consistently increase performance is to really understand what worked, what’s going wrong, and how you can improve.
What advice do you have for people going into their first ML or data science role?
Aadil: I like the fundamentals. With data science being closer to academia than most other disciplines, I recommend taking your time. No one is expected to know every single machine learning technique under the sun; instead, get familiar with the statistics and the math behind several techniques. It’s also important to understand the business problem first. Is ML the right solution? When in doubt, ask questions. Do not skip any steps and go straight for the modeling. Data cleansing, exploration, and visualization are important steps. Understand the data first, then look at what model fits for what you’re trying to solve. Then once a model is deployed, look at how it is behaving and why it is making the predictions it does.
Habib: I’ll just add one thing: when you’re trying to decide on which company to work for, do a bit of investigation. What is the company’s vision for machine learning? What is the potential future for ML at the company? Is it something they are playing around with or is it something where they really understand its value and therefore want to invest more and grow? Do they have senior leadership invested in it? Another recommendation is to find out if they have a data product manager position. If so, attempt to have a conversation with that person and try to understand how they view the commercial value coming from data. The last thing you want to do is go into an organization and build models that are not used.