Most CPG brands, distributors, outlet stores, and other participants in the supply chain have already seen the need for a supply chain control center. Effective management of the many moving pieces requires real-time data: location, quantity, environment, and other relevant signals. Even the smallest misstep in the supply chain can result in lost revenue from unfulfilled demand, or in lost customers who switch to a competing brand.
Having a single unified view into the status of the end-to-end supply chain enables manufacturers to adjust production capacity and enables outlets to stock up on critical materials, such as semiconductors, before shortages become acute. This single, unified view is known as the Supply Chain Control Tower, and it enables monitoring and resolution of critical issues in the supply chain.
Part of the complexity of supply chain management stems from the fact that the "forecast" for a product drives everything: acquisition of raw materials and parts, manufacturing, distribution, and finally getting the product into the hands of customers. To create an accurate forecast, retail manufacturers are turning to artificial intelligence (AI) and building machine learning (ML) models. An accurate forecast allows them to shape the supply chain proactively instead of merely managing it reactively.
CPG brands are also realizing that they must create many models for each product they sell globally.
Many of the attributes influencing a forecast cannot be quantified and cannot be captured in a single ML model. For example, a country's culture shapes the demand for the lipstick colors sold in that country, so a global model will not suffice: data engineers must build separate models for lipstick demand in different countries.
As a consequence, CPG brands are ending up with many models to manage, each of which is extremely complex to operate. Hence, they are asking the question: We need a supply chain control tower; do we also now need a control tower for machine learning models?
First, let us step back and consider: what is an ML control tower?
An ML control tower is the single dashboard that data scientists and ML engineers can utilize to oversee all their models in development, testing, and production. Most enterprises don’t think they need an ML control tower until they start handling more than a dozen models in production simultaneously. Up to that point, ML engineers and data scientists can manage manually, perhaps using a standard ML model repository.
However, as we’ve started working with more enterprise teams, we see that it’s not just the number of models that drives a need for an ML control tower, but also the complexity of operating these models at the same time.
There are four factors at play:
1) First is the complexity of getting models online and offline. How are data scientists handing models over to ML engineers for testing? How are they managing versioning to deploy the latest and greatest model while decommissioning outdated ones, especially when the model is part of a larger chain (for example, updating a segmentation model that feeds a larger dynamic pricing model)?
2) Next is the complexity around all the permutations of a single model. How easily can a data scientist search all the models to find the one they're looking for, check its status, and move it from the testing environment to production?
3) Then there is the complexity of optimizing compute resources across data scientists and across different use cases. An ML control tower enables different data scientists and use cases to share resources, instead of standing up dedicated pipelines that can consume resources even when they aren't in use.
4) Finally, there is ongoing monitoring, testing, and troubleshooting to optimize the performance of live models. A proper ML control tower provides full model observability to ensure you have the best performing model in production.
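The bookkeeping these four factors describe can be made concrete with a small sketch. The class and field names below are illustrative assumptions, not any vendor's actual API; the sketch shows a minimal in-memory registry handling version promotion (factor 1) and search across models and stages (factor 2).

```python
# Minimal sketch of the bookkeeping an ML control tower centralizes.
# All names and fields are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str          # e.g. "lipstick-demand-fr" (hypothetical model name)
    version: int
    stage: str         # "dev", "test", "prod", or "retired"
    metrics: dict = field(default_factory=dict)

class ControlTower:
    def __init__(self):
        self.models = []

    def register(self, record):
        self.models.append(record)

    def promote(self, name, version, to_stage):
        """Move one version forward; retire older prod versions (factor 1)."""
        for m in self.models:
            if m.name == name:
                if m.version == version:
                    m.stage = to_stage
                elif to_stage == "prod" and m.stage == "prod":
                    m.stage = "retired"

    def find(self, **filters):
        """Search across all registered models and stages (factor 2)."""
        return [m for m in self.models
                if all(getattr(m, k) == v for k, v in filters.items())]

tower = ControlTower()
tower.register(ModelRecord("lipstick-demand-fr", 1, "prod"))
tower.register(ModelRecord("lipstick-demand-fr", 2, "test"))
tower.promote("lipstick-demand-fr", 2, "prod")
print([(m.version, m.stage) for m in tower.find(name="lipstick-demand-fr")])
# → [(1, 'retired'), (2, 'prod')]
```

A real control tower would back this with persistent storage, access control, and observability hooks, but the core value is the same: one place where every model's version, stage, and status can be seen and changed.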
Demand Forecasting And The Dangers Of “Drift”
Take for example demand forecasting. It is probably not an exaggeration to say that 60-80% of a manufacturer or retailer’s success depends on accurate forecasting. Forecasts determine the products and quantities produced on the manufacturing floor, which then determine what gets distributed to the different stores and stocked on shelves months later. If the forecasts are wrong then, months later, consumers will not have the product they want, leading to slow-moving inventory and, more importantly, the potential loss of customers as they substitute one product with a competitor’s product.
CPGs have embraced AI and ML to build more precise models, integrating signals such as weather, hyper-regional economic conditions, and other hard-to-quantify elements. While these AI-based models can be far more precise than human analysts, they are also prone to hidden biases that can put them out of step with current customer preferences. And since CPG companies can easily have millions of such ML models, given the number of products and markets they operate in, the risk gets multiplied millions of times.
Recently I spoke to the head of supply chain analytics at a global CPG brand. He mentioned how, last quarter, one of their AI models predicted demand that was off by more than 50% from the previous year's, causing the manufacturing floor to question the model. When they dug deeper, they saw that the model had been thrown off by the massive shift in demand during COVID-19. Now that consumption had started to shift back to normal, the model had failed to weigh demand signals from the past few weeks as heavily as overall demand from the last two-and-a-half years. This discrepancy is known as "drift." If drift goes undetected long enough, it can have disastrous consequences, particularly for core functions like demand forecasting.
The supply chain analyst understood that typical software bugs lead to obvious failures: for example, the software stops working altogether. AI and ML models are very different. They are often not wrong because they stop working, but because they stop being accurate. That is, as long as incoming data has the same structure, the model will keep producing inferences as if nothing has changed. Businesses must figure out how to detect when their models are treating the present as if it were the past, even after the environment has changed.
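One common way to detect this kind of drift is to compare the distribution of recent data against a historical baseline. The sketch below uses the Population Stability Index (PSI), a standard drift metric; the function name, sample data, and the 0.25 threshold are illustrative assumptions, not a specific product's implementation.

```python
# Sketch of drift detection via the Population Stability Index (PSI).
# PSI compares a recent window of data against a historical baseline;
# a large value signals the model is seeing data unlike what it was trained on.
import math

def psi(baseline, recent, bins=10):
    """Population Stability Index between two samples of a numeric signal."""
    lo = min(min(baseline), min(recent))
    hi = max(max(baseline), max(recent))
    width = (hi - lo) / bins or 1.0
    def frac(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-4) for c in counts]
    b, r = frac(baseline), frac(recent)
    return sum((ri - bi) * math.log(ri / bi) for bi, ri in zip(b, r))

# Illustrative weekly demand: a stable baseline vs. a shifted recent window.
baseline = [100, 102, 98, 101, 99, 103, 97, 100]
recent   = [150, 155, 148, 152, 160, 149, 151, 154]

score = psi(baseline, recent)
# A common rule of thumb treats PSI > 0.25 as significant drift.
if score > 0.25:
    print("drift detected")   # prints here: the distributions diverge sharply
```

In the COVID-19 example above, a check like this running on weekly demand signals would have flagged the divergence between recent weeks and the multi-year baseline long before a 50% forecast miss reached the manufacturing floor.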
For the analyst, as with other enterprises looking to generate value from AI/ML investments, there is a need for a different approach to ML model operations — one that focuses on generating ongoing value instead of a one-time software release.
Moving forward, enterprises will ask not just for a deployment solution for their ML models, but also for the full visibility and management capabilities of an ML Control Tower.
Manish Sinha is a special adviser to Wallaroo Labs.