That machines are destined to become smarter and smarter as the years roll by, and that businesses will increasingly use them in attempts to differentiate products and boost profits, are not matters of debate.
The real question is whether change wrought by artificial intelligence will, all things considered, ultimately prove to be helpful or harmful, and to what degree.
A paper released on Thursday by Allianz, the world’s largest insurer and a manager of some $2.2 trillion in assets, offers a stark portrayal of how the development of AI could go spectacularly wrong.
Some of the potential eventualities are socially calamitous on a global scale. But even keeping the focus strictly on business risks, there are hefty concerns. The paper cites a 2017 PricewaterhouseCoopers survey in which 67% of CEOs said they believed AI and automation would have a negative impact on stakeholder trust over the next five years.
And, according to the Allianz Risk Barometer 2018, the impact of AI and other new technologies already ranks as the seventh-biggest business risk — ahead of political risk and climate change.
“AI exposes businesses to threats that could easily counterbalance the huge benefits of a revolutionary technology,” Allianz writes.
Signs are evident already. Look, for example, at Microsoft’s 2016 AI experiment, in which a bot named Tay was kicked off Twitter the same day it launched for becoming “a sexist, racist monster,” as TechRepublic put it.
Indeed, one of the leading current uses of AI is powering customer-service chatbots. “Autonomous chatbots trained on language texts are prone to learn and perpetuate human prejudices and unfairness,” observes Allianz.
The specter of AI agents that get smarter on their own also presents confounding implications for legal liability. Even after AI products have been tested and are in the market, it’s difficult to identify what exactly might go wrong before damages take place, Allianz says.
“AI decisions that are not directly related to design or manufacturing, but are taken by an AI agent because of its interpretation of reality, would have no explicit liable parties under current law,” the paper states. “Leaving the decision to courts may be expensive and inefficient if the number of AI-generated damages starts increasing.”
Perhaps even more worrisome from a liability standpoint, AI is unable, at least currently, to comprehend abstract concepts such as loyalty, happiness, hurt, or values. That could lead an AI agent to act against human interests.
Consider an example given in the paper. Say an AI robotic agent is trained to maximize the well-being of elder-care patients with dementia, and is specifically designed to pay continuous attention to avoiding risky situations. To reduce the risk of falls, the agent starts restricting a patient's opportunities to leave their living quarters. That would reduce social contact and potentially lead to a spiral of depression.
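The failure mode in that scenario — an objective that captures one dimension of well-being while silently ignoring another — can be made concrete with a toy sketch. The code below is purely illustrative (it is not from the Allianz paper, and the reward functions, state fields, and weights are all hypothetical): a reward that penalizes only fall risk prefers confining the patient, while a reward that also values social contact does not.

```python
# Hypothetical illustration of a misspecified objective, not from the
# Allianz paper. States describe a patient's situation on two dimensions.

def misspecified_reward(state):
    # Rewards only fall avoidance; social contact carries no weight,
    # so the agent has no reason to preserve it.
    return -state["fall_risk"]

def broader_reward(state):
    # Also values social contact (0.5 is an arbitrary illustrative weight).
    return -state["fall_risk"] + 0.5 * state["social_contact"]

# Two candidate outcomes the agent could steer toward (made-up numbers):
confined = {"fall_risk": 0.05, "social_contact": 0.1}  # kept indoors
free     = {"fall_risk": 0.20, "social_contact": 0.9}  # free to go out

# Under the narrow objective, confinement scores higher...
assert misspecified_reward(confined) > misspecified_reward(free)
# ...while the broader objective prefers letting the patient move freely.
assert broader_reward(free) > broader_reward(confined)
```

The point of the sketch is that both objectives are "correct" as optimization targets; the harm comes from what the narrow one leaves out, which is exactly the gap the paper describes between optimizing a proxy and acting in patients' actual interests.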
Such scenarios may pose bewildering hurdles for companies developing AI technology. “The challenge when developing AI agents is to instill [them] with a distinction between good and bad,” Allianz says.
There are also frightening implications for information security. While AI can be used to detect and prevent cyber-attacks, the opposite is also possible.
“AI could facilitate serious incidents by lowering the cost of devising new tools and weapons to launch attacks,” the paper points out. “AI could even be used in the future to weaken cyber defense mechanisms by utilizing social engineering to psychologically manipulate people into performing [certain] actions or divulging confidential information.”
Even a slight, unintentional AI error in an internal system could quickly escalate into a major problem that damages a company’s reputation and bottom line. A programming error could be replicated on any number of machines, leading to an unforeseen accumulation of losses.
Further, AI raises concerns around the use of personal data to increase the intelligence of agents. Data-protection regulation in Europe already contains conspicuous limitations on the adoption of AI systems.
“Businesses will need to reduce, hedge, or financially cover themselves from the risks of non-compliance with new data protection regulations in the future,” the paper states.
With respect to climate change, AI is already helping to combat its impact by reducing emissions through the use of smart technology and sensors. However, the paper points out, AI is also “a key component in the development of nanobots, which could have dangerous environmental impacts by invisibly modifying substances at nanoscale.”
Of course, as an insurance company, Allianz profits from risks faced by businesses and individuals. So it’s not altogether surprising to hear the firm warn of the business risk posed by AI.
But, judging by Allianz's paper, the ramifications of insurers' own use of AI are uniformly positive.
“AI applications will improve the insurance transaction process, with many benefits already apparent,” Allianz writes. “Customer needs can be better identified. Policies can be issued, and claims processed, faster and more cheaply. Large corporate risks, such as business interruptions, cybersecurity threats, or macroeconomic crises can be better predicted.”
Additionally, the insurer says, insights gained from AI-powered analytics “could expand the boundaries of insurability, extending existing products as well as giving rise to new risk-transfer solutions.”
Such solutions could become available for things like non-damage business interruption and reputational damage, the paper suggests.