For companies looking to simulate and optimize their supply chains, technology is getting ahead of organizations' ability to manage it. Bill Benton, chief executive officer of GAINSystems, has a solution.
SCB: What do we mean when we talk about matching skills with technology? In what sense have skills not kept up with the pace and sophistication of current technology?
Benton: This is a chronic challenge. It’s also a great opportunity. Decision sciences, machine learning and artificial intelligence are driving a lot of decisions today. But most people aren't literate in how those algorithms are deriving their recommendations.
SCB: How do you use that technology to simulate supply chains?
Benton: There’s talk about things like digital twins and other methodologies for trying before you buy, but most practitioners aren't familiar with them. One plausible trend is that rather than the traditional selling of tools to people who need to be trained to use them, you provide those services to the end users. They become consumers of information and recommendations, as opposed to users of particular tools.
SCB: So the analysts, the data scientists and I.T. experts no longer are in the organization — they’re housed within the vendor that’s providing those services.
Benton: Precisely. It's a logical corollary to cloud computing. We’ve taken tools from behind the user’s firewall and moved them into the cloud, where you might have 100 people who are skilled in operating them, as opposed to a handful.
SCB: So you don’t have to acquire the skills and talent in the first place — you get them right away from the outside provider.
Benton: That's a good way to look at it. Technology is changing faster than people can retrain. And these aren't tasks they've done before, unlike taking a routine accounting function and sending it out to a third party.
SCB: I would imagine that some organizations would be a little reluctant to go that far. They end up without anybody in-house who can be trusted to analyze the data, make sense of it, and engage in the proper levels of optimization. Is that a hard sell, to get an organization to come to the realization that they need to look outside for this expertise?
Benton: The answer is yes if you try to do it too fast. It’s a bit of a cliché, but the “think big, start small” idea applies here. You pick one particular function for a subset of the company's operations, and build confidence through that. If it's done incrementally, I think it's quite plausible.
SCB: Is the inevitable end state the absence of human beings from the decision-making process, replaced by artificial intelligence?
Benton: No. There's a branch of cognitive computing called explainable A.I., or X.A.I., which uses machine-learning algorithms to explain to humans what other algorithms are using as inputs. That's something that I think will become more prevalent. There's a great book called Race Against the Machine by Erik Brynjolfsson, a professor at MIT. He talks about how people will augment and update these models. One of the issues that industry hasn't come to grips with yet is that a machine-learning algorithm has a shelf life, and needs to be constantly updated.
SCB: In what sense does it have a shelf life? What happens that causes it to expire at a certain point?
Benton: There are new factors or features that may affect the model. Each machine-learning dataset looks over a certain time series of events. It might not capture all the types of events that can occur in the universe of potential influencers. Then there are things like changing weather patterns, political climates and tariff regulations. They all need to be incorporated into updates. One of the trickiest things — and it’s more art than science — is knowing how frequently you need to refresh it. The shelf life isn't stamped onto the algorithm.
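The "shelf life" Benton describes is what practitioners often call model drift: the model's error on fresh data creeps beyond what was measured at training time. A minimal sketch of one way to watch for it — all names, numbers, and the tolerance threshold here are illustrative, not part of any GAINSystems product:

```python
# Sketch: flagging an expired model by comparing recent forecast error
# to the error observed on the training window. Hypothetical example.

def mean_abs_error(predictions, actuals):
    return sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(actuals)

def needs_retraining(baseline_error, recent_preds, recent_actuals, tolerance=1.5):
    """Flag a refresh when recent error drifts beyond a tolerance band
    around the error measured at training time."""
    recent_error = mean_abs_error(recent_preds, recent_actuals)
    return recent_error > baseline_error * tolerance

# Example: a demand model trained when weekly demand hovered near 100 units.
baseline = 4.0                  # error observed on the training window
preds    = [100, 102, 101, 99]  # the stale model keeps predicting ~100
actuals  = [110, 114, 112, 118] # a tariff change shifted real demand upward

print(needs_retraining(baseline, preds, actuals))  # → True
```

As Benton notes, the hard part is not the check itself but choosing the refresh cadence and tolerance — the "more art than science" piece.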
SCB: X.A.I. seems to be an answer to the fear of the black box — the idea that the data goes in one end and the answer comes out the other, and we don't know what happened to generate the decision. To what degree should humans in the organization be in the loop about how and why a decision was made by the A.I. system?
Benton: I think that's evolving. One factor is shared accountability — the extent to which the provider of this information shares in the result, good or bad. That alleviates some of the concern about having to understand every decision that was made and how. X.A.I. aspires to help with this, too. It's still a developing part of cognitive computing, and it might not get all the way to where we're hoping.
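One common flavor of the explanations X.A.I. aims to automate is feature importance: measuring how much a model's output moves when one input is scrambled. The toy model and feature names below are hypothetical, and a fixed rotation stands in for the random shuffle a real permutation-importance test would use:

```python
# Sketch: a crude permutation-importance check on a toy demand model,
# illustrating the kind of "which inputs drove this?" answer X.A.I.
# is meant to surface. Model and features are invented for the example.

def demand_model(features):
    # Toy model: demand driven mostly by price, a little by temperature.
    return 200 - 3.0 * features["price"] + 0.5 * features["temperature"]

def permutation_importance(model, rows, feature):
    """Average prediction shift when one input column is scrambled
    (rotated here, for determinism). A larger shift means the model
    leans on that feature more."""
    baseline = [model(r) for r in rows]
    vals = [r[feature] for r in rows]
    rotated = vals[1:] + vals[:1]  # stand-in for a random shuffle
    perturbed = [model({**r, feature: v}) for r, v in zip(rows, rotated)]
    return sum(abs(b - p) for b, p in zip(baseline, perturbed)) / len(rows)

rows = [{"price": p, "temperature": t}
        for p, t in [(10, 60), (20, 70), (30, 65), (40, 75)]]
for f in ("price", "temperature"):
    print(f, round(permutation_importance(demand_model, rows, f), 1))
# → price 45.0
# → temperature 5.0
```

The larger score for price tells a human reader what the model is actually relying on — without requiring them to read the model itself.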