On Friday we published an article, based on an Allianz report, that explored the business risks emanating from the continued development of artificial intelligence tools. Here, attorney Coleman Watson shares his views on legal issues related to AI.
As noted in last week’s report from Allianz, AI offers both great potential and an assortment of business risks. Some of the latter are legal in nature and suggest a number of questions that corporate executives may reasonably ask.
For example, what types of new laws or regulations should there be for companies that create or sell artificial intelligence software? Are there any current laws in place that govern the latest advances? And if so, are they keeping pace with this rapidly growing field?
The fact is, law is light-years behind rapid advancements in technology such as AI. That’s because enacting laws necessarily requires a majority of the members of Congress or state legislatures to come to a common agreement on any given issue. That alone is difficult to achieve in our current political environment.
As a result, no current laws apply solely to AI. Instead, courts attempt to apply general laws to AI that in many instances were written decades before AI development began. For example, plaintiffs have brought a number of cases against robotics manufacturers in recent years, and courts have resolved them by applying product-liability law.
The issue always turns on whether the robot or underlying software was dangerous or defective when it left the manufacturer’s hands. The problem is that AI reinforces itself by learning from its own past experiences. In other words, AI will make an adjustment, on its own, to make itself more efficient. That means that at the time of any injury, the robot might not be the “same robot” that left the manufacturer’s hands.
Another question might be, how advanced can the development of AI technologies become, from a legal standpoint?
To me, the answer lies in connecting three facts related to AI. First, AI systems are already everywhere. They are in apps that allow you to deposit checks into your bank account by using the camera on your cell phone. They power Snapchat filters that you use to make entertaining photos to share with friends.
Second, as noted above, AI reinforces itself by learning from its own past experiences, generally without much guidance from humans.
Third, our legal system is currently capable of interacting with AI only to the extent of imposing liability on the company or person who developed or manufactured it. And even then liability is not automatic, because the company or person may well have fully complied with all applicable regulations when developing or manufacturing the AI.
Based on those three facts, my opinion is that presently there is no legal outer limit to how far AI development can go.
Our legal system consists of criminal and civil law. Criminal law turns on mens rea (i.e., criminal intent), and an artificial system is incapable of forming criminal intent.
In the civil context, because AI makes adjustments without human intervention, no fault can be ascribed to humans for injuries resulting from such adjustments. As a matter of tort law, it is unlikely that foreseeability of injury could be established.
A related danger is that our legal system is designed only to regulate human behavior. If a bee stings you or a dog bites you, we do not haul the bee or dog into court to answer to legal claims. AI is no different in the sense that it, too, is non-human.
I think the only workable solution is to develop a system whereby AI developers and manufacturers agree to adhere to certain ethical guidelines governing AI. That could form a framework courts could use to resolve legal claims in which AI is implicated.
Coleman Watson is managing partner of law firm Watson LLP. He is a registered patent attorney who specializes in advising clients on technology-related issues.