“Numbers” as in algorithms – the mathematical formulas that are increasingly coming to dominate all facets of supply-chain management. What’s more, they can prove especially valuable in helping companies to decide which suppliers are the most trusted partners.
The age of big data is both blessing and curse, depending on a company’s ability to make sense of the flood of information with which it’s being inundated today. That’s especially true in the case of supplier evaluations. Is that nominal partner really stable? Can it deliver on its promises? Will it still be around in a year or two? All are questions that manufacturers have grappled with for years.
If they weren’t already aware of the importance of strong supplier vetting, the Great Recession delivered the message in spades. Countless suppliers went out of business following the global economic slump that began at the end of 2007. And while things are steadier today, there’s a growing awareness of the need to be prepared for the next crisis.
So where do companies get the necessary data, and how much can they trust it? Potential sources of information are richer and more plentiful than ever before, thanks to the globalization of supply chains. At the same time, that trend has lengthened the distance between buyer and supplier, making it tougher to assess the reliability of key intelligence.
Often companies are willing, if not forced, to take suppliers at their word. Between 60 and 70 percent of supplier data is self-reported, according to Jared Smith, co-founder of audit and compliance specialist Avetta. (The name is a recent rebranding of PICS Auditing.) Even a lot of so-called secondary data ends up being traced back to the supplier under review. For example, financial data supplied by a trusted source such as Dun & Bradstreet often comes from the supplier itself. In the case of many smaller, private companies, unburdened by requirements for public disclosure, that’s all the information that D&B might have to go on.
Optimists point to the internet as an invaluable tool for dredging up supplier information that might once have gone undiscovered. The continuing maturity of that all-embracing medium has opened up even more sources of data (if not dirt). “It’s a hundred times easier than 10 years ago,” says Smith. At the same time, a flurry of trade agreements and the penetration by multinationals of new markets have raised new obstacles to making sense of the resulting information chaos.
Multi-sourced, verifiable secondary data therefore becomes more important than ever to obtain. Manufacturers can avail themselves of a host of government agencies, court records, private consultants and trade organizations. “What we focus on,” says Smith, “is trying to take advantage of as many secondary data sources as we can.”
Even that strategy doesn’t guarantee a team of completely reliable partners, though. Which is where algorithms come into the picture – the mathematical rules that generate hard numbers relating to supplier dependability on multiple fronts.
Avetta will typically gather around 1,000 data points for each supplier, derived from primary data, and an equal number from secondary sources. It will then “scrub” that data and generate a score.
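The article doesn't disclose how the scrubbing and scoring actually work, but the general idea can be sketched as a weighted pass over cleaned data points. The categories, weights, and validity rules below are illustrative assumptions, not Avetta's methodology:

```python
# A minimal sketch of "scrubbing" supplier data points and rolling them up
# into a single score. Categories, weights, and the 0-100 scale are
# hypothetical assumptions for illustration.

def scrub(points):
    """Drop data points that are missing or fall outside the assumed 0-100 range."""
    return {k: v for k, v in points.items()
            if v is not None and 0 <= v <= 100}

def score(points, weights):
    """Weighted average over whatever scrubbed data points remain."""
    cleaned = scrub(points)
    total_weight = sum(weights[k] for k in cleaned)
    if total_weight == 0:
        return None  # nothing usable survived scrubbing
    return sum(cleaned[k] * weights[k] for k in cleaned) / total_weight

weights = {"financial": 0.4, "safety": 0.35, "delivery": 0.25}
supplier = {"financial": 72, "safety": 90, "delivery": None}  # missing data point

print(round(score(supplier, weights), 1))  # scored on the two usable categories
```

Note that the sketch re-normalizes the weights when a data point is scrubbed away, so a supplier with a missing field is scored on what remains rather than penalized with a zero.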
The exercise isn’t as cold and unfeeling as it might seem. Smith stresses that hard numbers need to be supplemented by people. “Human insight will never go away,” he says. It takes flesh-and-blood individuals to interpret responses to questions that might be posed in slightly different ways in multiple places on the questionnaire. Results, including any anomalies that might emerge, are then forwarded to technical specialists.
What they’re looking for is the ultimate validity of the information being provided, especially as it relates to critical elements such as health and safety. They can then go back to the contractor under review to clarify its answers and expose any discrepancies.
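The cross-checking of questions "posed in slightly different ways" lends itself to automation before a human ever looks at the answers. As a toy illustration (the question IDs and pairings below are hypothetical, not drawn from any real questionnaire):

```python
# A toy consistency check: the same fact asked two ways should yield
# the same answer. Question IDs and the pairing are hypothetical.

QUESTION_PAIRS = [
    # (question asked early in the form, same question reworded later)
    ("q12_recordable_incidents", "q47_osha_recordables"),
]

def flag_anomalies(responses, pairs, tolerance=0):
    """Return pairs of question IDs whose numeric answers disagree beyond tolerance."""
    flags = []
    for a, b in pairs:
        if a in responses and b in responses:
            if abs(responses[a] - responses[b]) > tolerance:
                flags.append((a, b))
    return flags

answers = {"q12_recordable_incidents": 3, "q47_osha_recordables": 7}
# Any flagged pair would be routed to a technical specialist for follow-up.
print(flag_anomalies(answers, QUESTION_PAIRS))
```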
The use of advanced algorithms for supplier vetting is relatively new, Smith says. “Before us, or organizations like us, there wasn’t enough data in one spot to be able to score it appropriately.” Avetta’s first generation of algorithms dates back to 2008, and the firm is preparing to roll out a new generation, which he vows will be “bigger and better.”
Computers will handle nearly all of that heavy lifting. Even granting the continued need for living, breathing expertise, “a computer is able to check patterns of bad information better than humans,” says Smith.
Oddly, however, the increasing sophistication of algorithms has little to do with computers, he says. The main driver is the emergence of big data, which by its very nature demands a more advanced method for collection and organization. An especially valuable tool in this regard is the NoSQL database, which dispenses with the traditional relational database in favor of one giant table, a better means of interpreting the mass of data generated by developments such as the internet of things. According to Smith, it is “transforming the world of big data and causing people with high-intensive apps like ours to work orders of magnitude faster.”
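The “one giant table” Smith describes is the denormalized, document-style storage typical of NoSQL systems: rather than joining separate relational tables for suppliers, audits and incidents, each supplier’s full record lives in one self-contained document. A minimal sketch, using a plain dictionary to stand in for a NoSQL collection (the fields shown are illustrative, not any vendor’s schema):

```python
import json

# A dict standing in for a NoSQL document collection, keyed by supplier ID.
# Each value is one self-contained document holding everything about that
# supplier -- the "one giant table" style, versus joined relational tables.
document_store = {}

document_store["supplier-001"] = {
    "name": "Acme Fabrication",          # illustrative data only
    "audits": [
        {"year": 2015, "score": 88},
        {"year": 2016, "score": 91},
    ],
    "incidents": [
        {"date": "2016-03-02", "type": "near-miss"},
    ],
}

# Reading the entire record is a single key lookup -- no joins required.
record = document_store["supplier-001"]
print(json.dumps(record["audits"][-1]))
```

The trade-off is classic: reads are fast because everything arrives in one fetch, but the same fact may be duplicated across documents, which is acceptable for the read-heavy, high-volume workloads big data tends to produce.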
There’s plenty of work to be done to fine-tune critical intelligence. When it comes to accurately assessing suppliers, Smith believes, companies are “decades away from having good, reliable secondary data.” What’s needed is further investment from trade organizations, government entities and companies willing to share information. Some are adamantly opposed to doing so, in the belief that their supplier intelligence is proprietary and a source of competitive advantage.
So be it. In the meantime, mathematics will continue to insinuate itself into the everyday world of supply-chain management, especially in the case of supplier relations. For the foreseeable future, however, don’t count the human expert out.