It would be impossible to predict what artificial intelligence will look like 10 or even 5 years from now. Even a modest goal like making a 12-month forecast could prove challenging.
However, PricewaterhouseCoopers has done just that in a new paper, released in mid-January, that offers up several predictions for how AI will unfold in 2018.
- AI will impact employers before it impacts employment.
While forecasts of advancing AI technology displacing workers are commonplace, it hasn’t really happened yet. Rather, new jobs offset those that are lost. That will continue this year, PwC says. “People will still work, but they’ll work more efficiently with the help of AI,” the professional services firm contends.
Advanced chess programs routinely beat top grandmasters. Yet it turns out that a human player who takes advice from an AI but retains the ability to override that counsel usually defeats an AI playing on its own.
“Consider how AI is enhancing the product design process: A human engineer defines a part’s materials, desired features, and various constraints, and puts [all of that] into an AI system, which generates a number of simulations,” PwC writes. “Engineers then can choose one of the options, or refine their inputs and ask the AI to try again.”
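The design loop PwC describes can be sketched in a few lines. This is a toy stand-in, not a real CAD or generative-design API: the "AI" randomly proposes candidate parts, filters them against the engineer's constraints, and ranks the survivors so the engineer can pick one or tighten the inputs and run it again. All parameter names and the weight/strength formulas are invented for illustration.

```python
import random

def propose_designs(max_weight_kg, min_strength, n_candidates=1000, seed=0):
    """Generate candidate part designs, keep only those meeting the
    engineer's constraints, and return the lightest few for review."""
    rng = random.Random(seed)
    viable = []
    for _ in range(n_candidates):
        thickness_mm = rng.uniform(1.0, 10.0)
        density = rng.uniform(2.0, 8.0)        # g/cm^3, stand-in for material choice
        weight = thickness_mm * density * 0.1  # crude weight model
        strength = thickness_mm ** 1.5 * density  # crude strength model
        if weight <= max_weight_kg and strength >= min_strength:
            viable.append({"thickness_mm": round(thickness_mm, 2),
                           "density": round(density, 2),
                           "weight_kg": round(weight, 2),
                           "strength": round(strength, 1)})
    # Lightest designs first: the engineer reviews the top options,
    # or adjusts the constraints and asks the system to try again.
    return sorted(viable, key=lambda d: d["weight_kg"])[:5]

options = propose_designs(max_weight_kg=3.0, min_strength=20.0)
for opt in options:
    print(opt)
```

The point of the sketch is the division of labor: the human sets the constraints and makes the final choice; the machine does the brute-force exploration in between.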
While there have been other, more dire forecasts, a PwC survey across 29 countries, to be released next month, will report that only 1% of jobs will be at high risk of automation by 2020.
- AI will come down to earth — and get to work.
The immediate future of AI will not be fleets of autonomous cars that never crash or encounter traffic jams or “robot doctors that diagnose illness in milliseconds,” PwC says.
The near-term benefits will be more modest but still valuable. That value “lies not in creating entire new industries (that’s for the next decade), but rather in empowering current employees to add more value to existing enterprises,” the firm writes.
That empowerment is coming in three main ways:
- Automating processes too complex for older technologies
- Identifying trends in historical data
- Providing forward-looking intelligence to strengthen human decisions
This kind of practical AI is often “sneaking in through the back door,” with enterprise application suites from Salesforce, SAP, Workday, and others increasingly incorporating AI capabilities, according to PwC.
- AI will help answer the big question about data.
For all the excitement about big data in recent years, companies typically haven’t yet seen a big payoff. “The learning curve was steep, tools were immature, and [corporate executives] faced considerable organizational challenges,” the paper says.
Now, some companies are rethinking their data strategy and “asking the right questions.” For example: How can we make our processes more efficient? What do we need to do to automate data extraction?
Such straightforward questions will lead to what PwC calls “the right approach to data.” That is, it’s rarely a good idea to start with a decision to clean up data, the firm says. It’s almost always better to start with a business case and then evaluate options for how to achieve success in that specific case.
Say, for example, that a health-care provider aims to improve patient outcomes. Before beginning to develop an automated system to facilitate that, the provider would quantify the benefits that AI could bring. The provider would next look at what data is needed — electronic medical records, relevant journal articles, and clinical trials data, among others — and the costs of acquiring and cleansing this data.
“Enterprises that have already addressed data governance for one application will have a head start on the next initiative,” PwC writes.
- Functional specialists, not techies, will decide the AI talent race.
AI increasingly will require knowledge and skill sets that data scientists and AI specialists usually lack, according to the paper.
Consider a team of computer scientists creating an AI application to support asset management systems. “The AI specialists probably aren’t experts on markets,” says PwC. “They’ll need economists, analysts, and traders working at their side to identify where AI can best support the human asset manager.”
And, since the financial world is in constant flux, once the AI is up and running it will need continual customizing and tweaking. For that, too, functional specialists, not programmers, will have to lead the way.
- Cyberattacks will be more powerful because of AI — but so will cyberdefense.
AI has already shown superiority over humans when it comes to hacking. For example, machine learning, often considered a subset of AI, can enable a malicious actor to follow a person’s behavior on social media, then customize phishing tweets or emails just for them, PwC says. And the more AI advances, the more its potential for mounting cyberattacks will grow.
On the other hand, “scalable machine learning techniques combined with cloud technology are analyzing enormous amounts of data and powering real-time threat detection and analysis,” PwC points out. AI capabilities can also quickly identify “hot spots” where cyberattacks are surging and provide intelligence reports.
Still, “cyber-wars won’t simply be two sets of computers battling it out,” the firm adds. “Even in cybersecurity, some things only people can do. Humans are better at absorbing context and thinking imaginatively.”
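The machine-assisted threat detection described above can be illustrated with a deliberately minimal sketch. Real systems apply scalable learning to enormous telemetry streams; this toy merely flags hosts whose failed-login counts sit far outside a historical baseline, using a z-score. The host names and counts are hypothetical.

```python
import statistics

def flag_anomalies(baseline, current, threshold=3.0):
    """Return hosts whose current count is more than `threshold`
    standard deviations above the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [host for host, count in current.items()
            if (count - mean) / stdev > threshold]

# Hypothetical telemetry: failed logins per hour on normal days, then now.
baseline = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]
current = {"web-01": 6, "web-02": 5, "db-01": 48}  # db-01 is being brute-forced

print(flag_anomalies(baseline, current))  # → ['db-01']
```

A production system would learn far richer baselines per user and per host, but the principle is the same: the machine surfaces the "hot spots," and a human analyst decides what they mean.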
- Opening AI’s black box will become a priority.
Many AI algorithms are beyond human comprehension, the paper notes. And some AI vendors will not reveal how their programs work, to protect intellectual property. “In both cases, when AI produces a decision, its end users don’t know how it arrived there. Its functioning is a black box,” PwC writes.
Such realities are part of what gives rise to horrific scenarios of humans being wiped out or enslaved by machines. The risks posed by opaque AI are real, but manageable, according to the firm.
“Here’s the secret about AI that many of its proponents don’t like to mention: It’s not that smart — at least not yet,” the paper notes. It’s “still just following rules that humans devised.”
But we may not have to look far into the future to envision scenarios like these: What happens when AI-powered software turns down a mortgage application for reasons that the bank can’t explain? How about when an AI trader, for mysterious reasons, makes a leveraged bet on the stock market?
AI manifested as “black boxes” may meet a wave of mistrust that limits its use, according to the paper. Vendors and users of AI likely will face growing pressure to deploy AI that is explainable and transparent. Doing so will reduce risks and help establish stakeholder trust, PwC opines.
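One way to picture the explainable alternative for the mortgage example above is a linear scorecard, where every factor contributes a visible amount to the decision, so the bank can state exactly why an application was declined. The weights, threshold, and applicant fields below are invented purely for illustration, not real lending criteria.

```python
# Hypothetical scorecard: each factor's contribution is weight * value,
# so the decision decomposes into parts a loan officer can read off.
WEIGHTS = {"credit_score": 0.01, "income_to_debt": 2.0, "years_employed": 0.5}
APPROVAL_THRESHOLD = 12.0  # invented cutoff

def decide(applicant):
    """Score an application and return the decision with reason codes:
    the factors that contributed least, i.e. the likely grounds for denial."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    reasons = sorted(contributions, key=contributions.get)  # weakest first
    return {"approved": score >= APPROVAL_THRESHOLD,
            "score": round(score, 2),
            "weakest_factors": reasons[:2]}

applicant = {"credit_score": 640, "income_to_debt": 1.8, "years_employed": 2}
print(decide(applicant))
```

A deep neural network would likely score more accurately, but it could not produce the `weakest_factors` list. That trade-off between accuracy and transparency is precisely the tension the paper says vendors and users will be pressured to resolve.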