Artificial intelligence has the potential to radically change just about everything for the better. Its impact is likely to match that of electricity and other general-purpose technologies that have enriched our lives. Early signs of its impact are already emerging in test cases with autonomous vehicles, including ships and planes, and the benefits go well beyond reduced manpower.
For example, with up to 90 percent of seafaring accidents attributed to human error, AI could reduce such incidents significantly. Enabled by vehicle-to-vehicle (V2V) communication, vehicles can share data such as position, speed, and heading. Autonomous systems and human operators can use this information to anticipate threats and collisions, even those that are out of sight, and to spot less visible road users such as motorcycles and bicycles. Combined with AI, V2V can cut collisions and congestion by proactively alerting drivers, pilots, and captains to anticipated conflicts with other vehicles, as well as to other traffic hazards.
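The alerting idea can be sketched as a closest-point-of-approach calculation on the position, speed, and heading data that V2V makes available. This is a toy illustration, not a real V2V protocol; the two-ship scenario and the 200-metre alert threshold are assumptions for the example.

```python
import math

def cpa(p1, v1, p2, v2):
    """Time and separation at closest point of approach for two
    constant-velocity vehicles (positions in metres, velocities in m/s)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    dv2 = dvx * dvx + dvy * dvy
    # Parallel courses at equal speed: separation never changes
    t = 0.0 if dv2 == 0 else max(0.0, -(dx * dvx + dy * dvy) / dv2)
    dist = math.hypot(dx + dvx * t, dy + dvy * t)
    return t, dist

def heading_to_velocity(speed, heading_deg):
    """Convert speed and compass heading (0 = north) to an (east, north) vector."""
    rad = math.radians(heading_deg)
    return (speed * math.sin(rad), speed * math.cos(rad))

# Two ships on crossing courses, each broadcasting over a V2V link
own_pos, own_vel = (0.0, 0.0), heading_to_velocity(10.0, 90.0)        # eastbound
other_pos, other_vel = (600.0, -600.0), heading_to_velocity(10.0, 0.0)  # northbound

t, dist = cpa(own_pos, own_vel, other_pos, other_vel)
if dist < 200.0:  # illustrative alert threshold
    print(f"collision risk in {t:.0f}s, miss distance {dist:.0f} m")
```

An onboard system running such a check against every broadcast it receives can warn a crew long before the other vessel is visible.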
The advantages of AI in terms of productivity, innovation, and global economic growth are equally significant. McKinsey & Company estimates that the adoption of AI “has the potential to deliver additional global economic activity of around $13 trillion by 2030, or about 16 percent higher cumulative GDP compared with today. This amounts to 1.2 percent additional GDP growth per year.” PwC’s estimate is even higher, exceeding $15 trillion.
Given this potential, it’s easy to get carried away with AI and ignore the major problems that companies face in adopting and using it effectively. Here are some of the challenges and possible solutions.
Lack of big, clean data. All computational processes need good data, and artificial intelligence is no exception. Machine learning (ML) in particular requires huge volumes of accurate data in order to train algorithms and develop predictive models. However, most companies have neither the quality nor quantity of data to accomplish this.
Companies need to improve the quality of their data through effective master data management, and by incorporating real-time data into processes and systems as much as possible. Real-time, multi-party digital business networks maintain a “single version of the truth” while continuously synchronizing external systems, ensuring that companies are running on the most complete and up-to-date information possible.
Organizations should also consider using solutions with pre-trained, ML-based algorithms that draw on large volumes of data from similar scenarios and companies. Because of their huge volumes of transactions, digital business networks can quickly hone well-trained algorithms and intelligent agents which new members of the network can leverage.
Compartmentalized AI is unintelligent AI. Supply chains are inherently cross-functional and cross-enterprise, and the data needed to operate them is scattered among internal and external partners. Companies attempting to implement AI in a fragmented fashion, while ignoring the big picture, will get poor results. Without access to all relevant data, algorithms will continue to have blind spots and miss opportunities for optimization and execution.
Companies should seek to include as many relevant systems, operations, and trading partners as possible to strengthen the accuracy, context, and completeness of data. The goal should be to connect the entire supply chain to a real-time network, from source to end customer. Only a supply chain-wide solution can fully optimize critical operations such as inventory levels and logistics management, by monitoring the full picture of demand and supply.
Black box versus explainable AI. Certain ML techniques, such as scorecards and decision trees, are easy to understand. But neural networks are more complex and opaque. If we don’t know how a system arrived at a decision, should we act on its recommendation, or allow it to act autonomously?
Amazon’s experiment in using AI to recruit talent went awry when researchers noticed that the system systematically favored male candidates. This was because its algorithms had been trained on data drawn predominantly from men. The AI consequently downgraded graduates of two women’s colleges, and made other inappropriate decisions based purely on gender.
AI needs to be transparent in its inputs, processes, and decisions. Companies need to know, at least in essential terms, how algorithms work, how they arrive at decisions, and how they create and distribute value. Ideally, the system should make the reasons behind its decisions explicit, allowing users to view, approve, and override the decisions of autonomous agents. Companies should also be able to adapt the algorithms to meet their particular needs.
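A minimal sketch of this kind of transparency is a rule-based scorecard in which every decision carries explicit reason codes that a human can review, approve, or override. The supplier-risk features, weights, and threshold below are hypothetical, chosen purely for illustration.

```python
# Each rule: (feature name, trigger condition, points, human-readable reason).
# Features and weights are illustrative, not from any real model.
RULES = [
    ("late_shipments_90d", lambda v: v > 3,    -20, "more than 3 late shipments in 90 days"),
    ("on_time_rate",       lambda v: v > 0.95,  15, "on-time rate above 95%"),
    ("open_disputes",      lambda v: v > 0,    -10, "has open disputes"),
]

def score_supplier(features, base=50, threshold=50):
    """Score a supplier and record the explicit reason for every adjustment."""
    score, reasons = base, []
    for name, triggered, points, why in RULES:
        if triggered(features[name]):
            score += points
            reasons.append(f"{points:+d}: {why}")
    return {"score": score, "approved": score >= threshold, "reasons": reasons}

decision = score_supplier(
    {"late_shipments_90d": 5, "on_time_rate": 0.97, "open_disputes": 0}
)
print(decision["score"], decision["approved"])
for reason in decision["reasons"]:
    print(reason)
```

Unlike a neural network’s weights, this audit trail tells a reviewer exactly why the supplier was rejected, and the rules can be adapted to a company’s particular needs.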
Short-sighted optimization. Every process and change carries a cost. When that cost is not factored into decision-making, the outcome can be worse than if nothing had been done at all. In supply chains, which consist of many partners and systems, it’s easy to lose sight of the long-term consequences of an action. Many solutions fall into this trap by re-planning the entire supply chain, creating “nervousness” in the system and incurring major, unnecessary change and cost when a minor, more local adjustment would suffice.
To avoid this problem, optimizations should be continual rather than occasional, and should be constrained to affect the fewest entities possible in order to minimize disruption to the network. As with an aircraft on autopilot, continual minor adjustments compensate for shifting conditions while keeping the plane precisely on course; the alternative is a single major course correction near the end of the journey, once the plane is far off track. Continuous small adjustments add up to big improvements without sending shockwaves through the supply chain.
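The autopilot analogy can be made concrete in a toy one-dimensional model, assuming a constant drift and an arbitrary cap on the size of each correction; all of the numbers are illustrative.

```python
# Toy comparison: continual small corrections vs. one big re-route.
# A constant crosswind pushes a vehicle off its intended track (y = 0).
def max_deviation(steps=100, drift=1.0, correct_each_step=True):
    """Return the worst off-track distance reached during the journey."""
    y, worst = 0.0, 0.0
    for _ in range(steps):
        y += drift                      # wind pushes off course
        worst = max(worst, abs(y))
        if correct_each_step:
            y -= min(abs(y), 1.0)       # small nudge back, capped per step
    # Without per-step corrections, the only fix is one large
    # re-route at the end, after drifting the whole way.
    return worst

print(max_deviation(correct_each_step=True))   # 1.0: never far off track
print(max_deviation(correct_each_step=False))  # 100.0: huge correction needed
```

The same logic applies to supply chain re-planning: many small, localized adjustments keep the system near its plan, while a single sweeping re-plan arrives only after the deviation has become costly.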
Over-enthusiastic AI vendors. Many software vendors have jumped on the AI bandwagon. In a sense, this is understandable, given how loosely AI is defined and how sprawling its domain has become. “Machine learning” is a better-defined term, and is often what people actually mean when they say “artificial intelligence.”
Nevertheless, vendors need to clearly explain what they mean when they use terms such as “artificial intelligence,” “machine learning,” “neural networks,” “deep learning,” and the like. Most importantly, they need to show how their AI delivers more business value than traditional, heuristic algorithms. How does it work? Does it span systems and enterprises to embrace the entire network and all its conditions and constraints? Or is it limited to a few functions or domains? Who is using it, and what results have they achieved?
The AI skills gap. Many companies are being caught short by the rapid evolution and growing viability of AI, because it demands new skills: unfamiliar languages, frameworks, and ways of thinking. Few companies are equipped to handle the transition and fully exploit this fast-developing field. A 2018 survey by O’Reilly suggests that the skills gap is the largest barrier to AI adoption.
In the long run, the market will take care of the skills deficit, but until then, companies should begin identifying their needs and potential new hires. They should also look at training existing employees, and offering incentives and new career paths to support the shift to ML and AI technologies.
Another option is to partner with a technology firm that has the network, resources, and expertise to advise on, implement, and maintain an AI solution or AI-enabled platform. This route lets companies get started, and begin realizing gains, much more quickly.
Nigel Duckworth is a senior strategist at One Network Enterprises, provider of an AI-enabled business network.