Lee Coulter reached a point where he recognized in himself a chronic, “near-medical-level” case of frustration. After decades as a shared-services executive, steeped in that function’s accelerating technology requirements, he concluded that he no longer knew what anyone was talking about.
“I don’t know what or whom to believe,” says Coulter, CEO of the shared services subsidiary of Ascension, a large health services provider. “You can’t have a meaningful conversation about [artificial intelligence] with two different people.”
Coulter is doing something about the problem. In 2016 he successfully pitched to the IEEE Standards Association the idea of assembling a working group that would start by settling on definitions for the various categories of robotic and “smart” technologies that reside at the leading edge of corporate automation.
The IEEE Working Group on Standards in Intelligent Process Automation, chaired by Coulter, is aiming to finalize the first phase of its effort — called “Terms, Nomenclature, and Concepts” — in 2018. And work has already begun on a second phase, “Technology, Taxonomy, and Classification.”
In the meantime, though, the lack of a common, clear definition of AI is a potentially troublesome issue for CFOs deciding on new technology purchases, especially given the current proliferation of vendors pushing AI or machine-learning solutions. “So many vendors are out there saying they have AI within their products. Most of the time, that’s a load of bull,” says Coulter, echoing many other enterprise technology experts. But the confusion over AI could also make finance chiefs too cautious, causing them to underrate the long-term value of steadily adopting such technologies, even while they’re still untested and imperfect.
How are AI and machine learning best defined? In general, both are used to classify things and predict outcomes based on crunching big data. Here’s a short definition of AI offered in a June 2017 report by McKinsey: “the ability of machines to exhibit human-like intelligence — for example, solving a problem without the use of hand-coded software containing detailed instructions.”
As for machine learning, which many consider a subset of or stepping stone to AI, the IEEE group’s working definition may be a bit harder to grasp: “detection, correlation, and pattern recognition generated through machine-based observation of human operation of software systems along with ongoing self-informing algorithms … leading to useful predictive or prescriptive analytics.”
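To make those definitions a bit more concrete, here is a minimal sketch (my own illustration, not drawn from the article or the IEEE working group) of the core idea both definitions share: a program that classifies new cases by finding patterns in past observations, rather than following hand-coded if/then rules. The data and labels here are hypothetical.

```python
# A 1-nearest-neighbor classifier: it labels a new data point by finding
# the most similar past example. No explicit business rules are coded;
# the "knowledge" lives entirely in the observed training data.

def distance(a, b):
    # Euclidean distance between two feature tuples
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(training_data, point):
    # training_data: list of (features, label) pairs observed in the past
    nearest = min(training_data, key=lambda pair: distance(pair[0], point))
    return nearest[1]

# Hypothetical history: (invoice amount, days outstanding) -> payment risk
history = [
    ((100.0, 5.0), "low"),
    ((120.0, 10.0), "low"),
    ((5000.0, 90.0), "high"),
    ((4500.0, 120.0), "high"),
]

print(predict(history, (110.0, 7.0)))     # near the "low" cluster -> low
print(predict(history, (4800.0, 100.0)))  # near the "high" cluster -> high
```

Adding more labeled history improves the predictions without anyone rewriting the program’s logic — which is the sense in which the McKinsey definition says such systems solve problems “without the use of hand-coded software containing detailed instructions.”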
At least 1,000 startups with products that include purported AI and machine-learning capabilities have launched in the past two years, Coulter says. CIOs bent on playing with “shiny new objects” might be taking the bait, but few CFOs are. Acquiring technology billed as AI is a particularly dubious proposition. While the concept of AI is at least 60 years old, only in recent years have computational horsepower and dataset sizes reached levels that make it practical for a wide spectrum of use cases.
What are those use cases? Insurers and claims adjusters are deploying advanced “machine vision” technology to triage accident damage to automobiles based on photos submitted by clients. Credit-card issuers are using programs that scour the public Facebook pages of cardholders to understand how events like marriages and births impact spending habits. Manufacturers are using machine-learning technologies to more easily spot defects like wrong colors, shapes, and packaging. And many companies with customer-service operations are using website chat-bots based on natural language processing.
The fact is, though, that most solutions hyped as AI-driven are very unlikely to solve companies’ most profound problems. A recent publication of the Shared Services & Outsourcing Network, called “Global Intelligent Automation Market Report” and co-authored by Coulter, notes that most things labeled as AI are “limited cognitive solutions centered around a very narrowly defined knowledge domain.”
Weston Jones, global robotics and intelligent process automation leader for Ernst & Young, puts it differently: “Innovation gets meetings, but improvements get funded,” he says. “A lot of this AI stuff can’t be broadly applied just yet so as to provide the payback CFOs want. A vast majority of them don’t see the value.”
Giant technology companies are the exceptions. They spent between $20 billion and $30 billion on AI technology last year, according to the McKinsey report. For example, Google uses “reinforcement learning,” an AI-related capability, to reduce power consumption in its data centers by more than 10%. Facebook and other social media firms use automatic language translation on their sites, vastly improving customer engagement.
Still, McKinsey writes, overall there is only “tepid” demand for AI in the business world. In a recent survey of 3,073 “AI-aware” C-level executives, the management consulting firm found that not only are many business leaders uncertain what AI can do for them, they also struggle with “where to obtain AI-powered applications, how to integrate them into their companies, and how to assess the return on investment in the technology.”
Only 20% of the survey respondents said they were using any AI-related technology at scale or in a core part of their business, according to McKinsey. But it’s unclear what that means, in terms of truly quantifying the extent of such technologies’ penetration into the corporate world. “AI covers a broad range of technologies and applications, some of which are merely extensions of earlier techniques [while] others … are wholly new,” the report acknowledges.
Further, the McKinsey report notes, “There are several ways to categorize AI technologies, but it is difficult to draft a list that is mutually exclusive and collectively exhaustive because people often mix and match several technologies to create solutions for individual problems.”
But if companies aren’t yet willing to make investments in AI, at least their interest level is rising. Whit Andrews, lead artificial intelligence analyst for Gartner, says the number of AI-related inquiries to the firm shot up by 200% in 2016 and is up a further 100% in 2017.
Just about all Fortune 1000 companies are at least looking into machine learning, according to John Parkinson, a longtime technology strategist and an affiliate partner with Waterstone Management Group. Among the relative few that are actually using or nearly ready to employ some applications, some are using off-the-shelf software, some are renting machine-learning capabilities in the cloud, and some are developing their own systems.
Parkinson, for his part, declines to discuss companies’ use of “AI” at all. In fact, he suggests that “intelligence” is a misleading word in this context, one that could trip up technology buyers. Nobody actually knows how human intelligence works, he says, so the idea that software could be written to mimic it is “farcical.”
“I don’t care what IBM says about Watson,” he adds, referring to the tech heavyweight’s positioning of its high-profile knowledge system as an AI platform. “It’s not cognitive at all. It’s very clever software that’s been trained using a mathematical model.”
CFOs might begin to appreciate the potential value of AI and machine learning if they can get over the idea that automation of this kind has to have a large, immediate payoff. “We’re telling [clients] not to approach this as a traditional IT project,” says Gartner’s Andrews. “For many organizations, whatever they’re going to do with AI, it may not pay back in the classic formation. Characterize your ROI as the demonstration of the lessons you will be learning that are unique to you.”
To begin, Andrews counsels that companies should look for individual AI solutions to address problems they’ve never had enough people to overcome. In all likelihood, fixing such problems will not be transformative. Indeed, trying for a moonshot in the early days of using AI may be fraught with danger. “When organizations begin by trying to transform themselves, they place themselves in a position where they face a tremendous amount of risk,” Andrews says.
If a company were to apply “intelligent” automation to, say, 10% of its operations, it could see some benefits relatively quickly, according to Parkinson. But the real payoff would come much later, from steadily chipping away at processes that will become more efficient when automated. “More and more routine business work will get subsumed into automatic systems,” he says.
Jones of EY advises that companies do a complete assessment of existing and foreseeable projects in order to identify and rank the business cases for addressing them with advanced automation. It’s a crucial step, he says, because EY estimates that, at this point, 30% to 50% of AI-type projects fail. But it’s important to understand that advanced automation cannot overcome a lack of data or data that can’t be trusted.
“AI is not a magic way to reveal value within data,” Andrews says.
That means companies will face ever-greater demand for data scientists, and a severe shortage of them is widely expected to materialize in the next few years. But many companies that do fill such positions are destined to be disappointed, according to Coulter. “Graduating with a degree in data science does not mean you’re a data scientist,” he says. “If you’ve successfully applied machine learning to something for 5 or 10 years, you can start to call yourself a data scientist.”
Companies in general, and finance departments in particular, may also be significantly influenced by external parties’ use of AI and machine learning. For example, several major accounting firms reportedly are working to transform the auditing process so that it examines 100% of transactions rather than small random samples.
Also, one Big Four firm is working on a product based on IBM Watson technology that could radically reduce the time and effort companies spend in the merger and acquisition due-diligence process. The product, said to be on tap for release within two years, is being designed to consume all available information about any particular company — structured and unstructured data alike — and produce a precise valuation estimate with a high degree of confidence.
Such a capability may not, as Parkinson notes, be evidence of intelligence. At the same time, an interesting thing about technology classified as machine learning-enabled is that it actually works better than the theory on which it’s based says it should. “And we don’t know why,” Parkinson says.
At a base level, it’s trial and error that allows the development of technology that no one fully understands. “We, the people who work in this field, are always looking at better ways to work at the edge of what software systems can do and [identify] the exceptions that will drive us to build better and better core systems,” says Parkinson. “That will continue to accelerate. And as we hook those systems up to each other, we’re going to discover things they do in combination that we haven’t predicted.”
However someone defines the capabilities driving the cutting edge of corporate automation, those capabilities are only going to grow more powerful. CFOs have to decide whether to start tapping into the well of opportunity.
David McCann is a deputy editor of CFO.