Once you understand your data landscape, the application of artificial intelligence becomes much more manageable, says Roger Counihan, chief revenue officer at CognitOps.
A warehouse may run many systems, but each has limitations that create challenges, Counihan says. “The existing software stack in the distribution center is heavily reliant on outdated technologies and very focused on individual transactional data sets — inventory sets, lists of orders, pick tickets or individual machine control tasks — but there's a massive limitation on how users and operators can actually digest that information.”
That forces operations managers to act as “data super users,” yet supply chain complexity, speed and customer demands have exceeded the cognitive capacity of human operators within a distribution center.
Counihan believes the volume of data generated in a DC is well suited to data science applications. In real time, data of different types and distributions can be combined into time-series streams that summarize everything happening in the facility. “We can use the massive computing power of the cloud to digest this stream of information, build predictions about what will happen next in the distribution center, and then simply present that to the operator.”
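To make the time-series idea concrete, here is a minimal sketch of one-step-ahead forecasting over a stream of per-interval warehouse counts. The metric (picks per 15-minute interval), the values and the smoothing factor are all hypothetical; this illustrates the general technique, not CognitOps' actual models.

```python
def exp_smooth_forecast(series, alpha=0.3):
    """One-step-ahead forecast via simple exponential smoothing.

    series: per-interval counts, e.g. picks completed per 15 minutes
            (hypothetical example data).
    alpha:  smoothing factor in (0, 1]; higher weights recent intervals more.
    """
    level = series[0]
    for x in series[1:]:
        # Blend the newest observation with the running level.
        level = alpha * x + (1 - alpha) * level
    return level

# Hypothetical stream of picks completed per 15-minute interval.
picks = [120, 130, 125, 160, 170, 165]
print(round(exp_smooth_forecast(picks)))  # → 151
```

A production system would use richer models, but the shape is the same: ingest a stream of interval counts, compute a prediction, and surface it to the operator.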
That prediction should drive the next step or decision: readjusting resources, realigning robotic systems or configuring order releases differently to maximize the performance of the entire building. Unlike systems focused on a specific functional area, which optimize only a single part of the operation, artificial intelligence and other data science models can assess a broad and changing dynamic.
Identifying interconnections across buildings is invaluable. Such connections are hard for people to spot in a complex environment, but they are well within the reach of data science, machine learning and artificial intelligence.