We know artificial intelligence will remake — is already in the process of remaking — both business and the broader world beyond. What we don’t know yet is what unintended consequences AI will wreak as it becomes more advanced and commonplace.
One hindrance to envisioning that future is that AI is not “a technology,” in the same sense that ERP, for example, is a technology. While there are different flavors of ERP, with differing sets of capabilities, we generally understand that it’s software designed to integrate an organization’s operational and financial processes into a unified system.
Artificial intelligence, though, is “a diverse set of methods and tools continuously evolving in tandem with advances in data science, chip design, cloud services, and end-user adoption,” as Ernst & Young (EY) put it in a recent paper.
That points to a future — after some period of continuing evolution and increasing end-user adoption — in which a majority of companies’ key technology systems could be under the control of artificial autonomous agents.
Because such agents will be designed to learn on their own, they could develop capabilities allowing them to also decide and act on their own. The potential perils attendant to that development are discussed widely today, of course.
EY’s paper, released in September, is something of a call to action.
On the one hand, it urges companies to get moving on digital transformation by building up their usage of robotic, intelligent, and autonomous systems. “Enterprises that are further along in their digital journey will be able to more quickly adopt and realize benefits from AI,” EY reasons.
On the other hand, such systems “can malfunction, be deliberately corrupted, and acquire (and codify) human biases in ways that may or may not be immediately obvious,” warns EY.
The solution to these concerns, according to EY, is to build “trust in AI.”
What does that mean? “It is increasingly important for designers, architects, and developers of such systems to be fully aware of downstream and adjacent implications, including social, regulatory, and reputational issues,” EY says.
That applies to commercial developers as well as to companies building proprietary tools. Companies that buy AI tools from vendors should be alert to the risks too.
According to EY, before starting an artificial intelligence project, a company should ensure that four conditions have been met:
Ethics: The AI system needs to comply with ethical and social norms, including corporate values. “This condition, more than any other, introduces considerations that have historically not been mainstream for traditional technology, including moral behavior, respect, fairness, bias, and transparency.”
Social responsibility: The AI system’s potential societal impact should be carefully considered, including its impact on the financial, physical, and mental well-being of humans and the natural environment. Societal impacts might include workforce disruption, the need for skills retraining, discrimination, and environmental effects.
Accountability and explainability: The AI system should have a clear line of accountability to an individual, who should be able to explain the system’s decision framework and how it works. “This is about demonstrating a clear grasp of how AI uses and interprets data, how it makes decisions, how it evolves as it learns, and the consistency of its decisions across sub-groups.”
Reliability: This involves testing the AI system’s functionality and decision framework to detect unintended outcomes, system degradation, or operational shifts — “not just during the initial training or modeling, but throughout its ongoing operation.”
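In practice, the four conditions above amount to a pre-project checklist: a project should not proceed until each has been signed off. A minimal sketch in Python (the field names are illustrative, not taken from EY's paper):

```python
from dataclasses import dataclass, fields

@dataclass
class TrustConditions:
    """Illustrative pre-project checklist for the four conditions above."""
    ethics_reviewed: bool = False          # complies with ethical/social norms and corporate values
    social_impact_assessed: bool = False   # societal impact considered (workforce, environment, bias)
    accountable_owner_named: bool = False  # a named individual can explain the decision framework
    reliability_tested: bool = False       # tested for unintended outcomes and degradation

    def ready_to_start(self) -> bool:
        # The project may begin only when every condition is met.
        return all(getattr(self, f.name) for f in fields(self))

checklist = TrustConditions(ethics_reviewed=True, social_impact_assessed=True)
print(checklist.ready_to_start())  # False: accountability and reliability still unmet
```

The point of encoding the checklist rather than tracking it informally is that the gate becomes auditable: the record of which condition blocked a project is preserved.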
And all of that is just to get started. Going forward with the AI project, in order to achieve and sustain “trust in AI,” the company must “understand, govern, fine-tune, and protect all components embedded within and around the AI system,” EY stresses.
These components include data sources, sensors, firmware, software, hardware, user interfaces, networks, and human operators and users.
Unfortunately, the paper notes, the development of artificial intelligence functionality is outpacing developers’ ability to ensure that systems are transparent, unbiased, secure, accurate, and auditable. That creates a need for an AI governance model.
EY recommends the following best practices for establishing a trusted AI ecosystem:
AI ethics board: This is a multi-disciplinary body providing independent advice and guidance on ethical considerations. Advisers should be drawn from disciplines including ethics, law, philosophy, technology, privacy, regulations, and science. The ethics board should report to the board of directors.
AI design standards: These should include an AI ethical code of conduct and AI design principles. The standards should define the governance and accountability mechanisms that safeguard users, follow social norms, and comply with laws and regulations.
AI inventory and impact assessment: This is an inventory of all algorithms and key details of the AI, generated using software discovery tools. The risks involved in developing and employing each algorithm should be assessed.
Validation tools and techniques: These should ensure that algorithms are performing as intended and producing accurate, fair, and unbiased outcomes. These tools can also be used to monitor changes to each algorithm’s decision framework.
Awareness training: This involves educating executives and AI developers on the potential legal and ethical considerations of AI development and their responsibility to safeguard impacted users’ rights, freedoms, and interests.
Independent audits: These are ethical and design audits performed by a third party, assessing internal AI policies and standards as well as compliance with international standards. The audit would evaluate the sufficiency and effectiveness of the governance model and controls across the artificial intelligence lifecycle, from problem identification to model training and operation.
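To make the validation idea concrete, here is a minimal sketch (not from EY's paper) of one common check: comparing an algorithm's positive-decision rate across sub-groups and flagging any group whose rate falls well below the best-treated group's. The 0.8 threshold follows the "four-fifths rule" used in employment-discrimination review; the data and names are hypothetical.

```python
from collections import defaultdict

def subgroup_rates(decisions):
    """Positive-decision rate per sub-group.

    decisions: iterable of (group_label, approved: bool) pairs.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparity_flags(rates, max_ratio_gap=0.8):
    """Flag groups whose rate falls below max_ratio_gap times the best rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < max_ratio_gap * best]

# Hypothetical data: group A approved 8 of 10 times, group B only 4 of 10.
sample = ([("A", True)] * 8 + [("A", False)] * 2 +
          [("B", True)] * 4 + [("B", False)] * 6)
rates = subgroup_rates(sample)
print(disparity_flags(rates))  # ['B']: 0.4 is below 0.8 * 0.8
```

Run periodically against production decisions rather than only at training time, a check like this also serves the monitoring goal the paper raises: a group newly appearing in the flag list signals that the algorithm's decision framework has shifted.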