How artificial intelligence will operate within workflows continues to be a much-discussed topic among finance leaders looking to upgrade their tech stacks and navigate a digital transformation. However, the use of AI comes with risks, as highlighted at CFO networking events last year.
These risks, like leaking proprietary information or having employees pass off GenAI work as their own, can be detrimental. Many companies have begun limiting autonomy around the usage of GenAI tools. So much so that Cisco's 2024 Data Privacy Benchmark Report, an annual survey of 2,600 privacy and security professionals, found that 27% of organizations surveyed banned GenAI use altogether.
While other surveys show most finance executives haven't begun using GenAI, privacy and security professionals have expressed concerns about its impact should it be adopted by employees at all levels. Regardless of where in a workflow GenAI is applied, those surveyed suggest it can expose organizations to major risks if not managed.
Nearly seven in 10 of those surveyed cited intellectual property rights (69%), unwanted sharing of information (68%), and incorrect results (68%) as major concerns for companies that allow employees to use GenAI. Just under two-thirds (63%) said artificial intelligence could be detrimental to humanity, and 61% said it could replace employees.
There are also concerns about the types of information being input into GenAI tools. Sixty-two percent of respondents said information about internal processes is being shared. Just under half said nonpublic company information (48%) and employee names or personal information (45%) are among the data being used to prompt GenAI.
To take action, professionals have begun placing limitations on GenAI use. Just under two-thirds (63%) have implemented data limitations around the technologies, and 61% have restricted which GenAI tools employees can access. Over a third have also implemented data verification requirements.
At least half of those surveyed said they have taken some company-level step toward creating a process and approach for GenAI use. Fifty percent say they explain how their GenAI programs work, and an identical number say they ensure a human stays involved in the process. Nearly half (49%) said they have implemented an AI ethics program.
As for the data security professionals themselves, despite their concerns, they, too, have found value in supplementing their work with GenAI tools. Seventy-nine percent of those surveyed said these kinds of tools bring significant or very significant value to their work.
To showcase these efforts to executive management and the board of directors, data and security professionals recommend business leaders create privacy metrics. Nearly all (98%) of respondents report one or more privacy metrics to the board, and over half report three or more. The top privacy metrics include audit results (44%), data breaches (43%), data subject requests (31%), and incident response (29%). Privacy metrics around employee training, which seemed like a focus of respondents in previous data points, garnered only 22% of responses.