Finance has in many ways spearheaded business automation in the past few years — but will that necessarily happen with artificial intelligence, particularly generative AI?
Vendors are pushing that theme, but uncertainty around AI governance, controls, output accuracy, and security means history may not repeat itself.
In a session, “AI in the Audit — What Finance Executives Need to Know,” at the FEI’s virtual Corporate Financial Reporting Insights conference earlier this week, panelists revealed some of the early adoption strategies in finance and auditing.
Ed Moran, managing director of innovation at KPMG and panel moderator, said that the Big Four firm’s generative AI was filling “real capacity gaps,” including drafting policies and procedures and performing simple audit tests. “I think this is going to be as ubiquitous as spreadsheets,” he commented during a separate session on auditing regulation.
But finance teams, especially their controllers, will be weighing finance’s need for completeness and accuracy against the potential benefits of quick implementations of generative AI.
Controls and Governance
Payments company PayPal Holdings has been using generative AI and exploring use cases in functions like compliance, customer service, fraud, human resources, and credit risk decision-making. But finance lags. “[Finance’s] beginning point is just to be skeptical and [ask] what are the controls necessary to realize it within our function, which, of course, [handles] lots of material nonpublic information,” said Hasitha Verma, PayPal’s chief accounting officer.
Companywide, PayPal is taking a centralized approach to governance. The payments company has set up an AI center of excellence to guide and train employees and develop and approve use cases. It also drafted a responsible AI framework. But in these early days, the most crucial piece might be the written guidelines for individual employee use of generative AI tools.
“If you’re submitting confidential information, the AI tool may produce an output that substantially resembles one of those proprietary inputs.”
— Hasitha Verma, chief accounting officer, PayPal
“One of the distinctions we're making is using free public tools [like ChatGPT] versus subscription-based ones or vendor-provided services,” said Verma. When using free public AI tools, the guidelines tell employees never to upload sensitive, confidential corporate information. For a payments company, that includes its financials and “high-stakes” data on consumers, merchants, bank instruments, and credit and debit cards.
“Generative AI works by taking all the submitted data points and using them to generate some output. If you’re submitting confidential information, the AI tool may produce an output that substantially resembles one of those proprietary inputs,” Verma said.
Early Auditing Uses
The other panelist, audit partner Mike Sawyer of KPMG, said the Big Four firm is in the vanguard of companies using generative AI but is proceeding cautiously to ensure it is used responsibly.
KPMG has been trying algorithmic approaches in auditing for years, Sawyer said, but they required substantial upfront investment and programming. This past summer, KPMG developed its own generative AI tool, which reduces administrative burdens and assists with audit quality, he said. Alongside the tool, KPMG released a “prompt library.” The first tier of the library explains large language models and describes how to best interact with KPMG’s tool.
A higher-level tier of the library helps enhance audit quality. A chatbot prompts engagement team members about documentation and other aspects of an engagement and compares the inputs to the relevant audit standards. The output is an observation like, “Hey, your documentation is great, but maybe think about these two or three things,” said Sawyer.
Generative AI “can get things wrong,” said Sawyer, “but just interacting with it, reading prompts that have standards listed, those are huge triggers to think about everything that needs to go into the audit file.”
“One of the challenges of AI tools is there’s a bit of opaqueness in the output, in the processes that generate the output.”
Somewhat ironically, a key barrier to the use of generative AI in finance could be auditability. “One of the challenges of AI tools is there’s a bit of opaqueness in the output, in the processes that generate the output,” said Verma. “How would our auditors look at it?”
Sawyer agreed. “We’re thinking about how do we apply the same kind of lens to completeness and accuracy of data and how management is controlling the data,” Sawyer said. “We’re seeing a lot of use of generative AI in low-risk areas,” he said, helping KPMG develop a model of how to audit it.
Wait or Move Forward?
With the shortened development lifecycles in generative AI — OpenAI announced a customizable version of ChatGPT last week — it may seem like finance will be at a disadvantage. But because of the speed of development, companies that quickly adopt the most recent innovations in generative AI and roll them into the business may not gain a sustainable competitive advantage.
Organizations have to assess what’s going to be available two, three, or four months down the road, said Sawyer. “All of a sudden there will be an off-the-shelf solution.” The cost-benefit analysis, said Sawyer, will be, “How much time do we invest now” in a custom tool “versus waiting a couple of months” for an enterprise solution?
On the other hand, standing still won’t be a realistic option when service providers adopt AI. Microsoft, for example, is embedding AI into its software and platforms.
“When onboarding a vendor, there’s [now] another question you must ask,” said Verma. “Do they use AI in any of the services they provide you? And have they covered all the bases you would cover if you were the direct user?”