In a February editorial about the buildup of cyberattacks between the United States and Iran, The New York Times quoted President Obama’s observation that, compared with conventional weaponry, cyberweapons provide “no clear line between offense and defense.” Thus, getting into the enemy’s networks to exploit its weaknesses and disable its ability to attack you is both offense and defense. Citing breaches at “major banks, Sony Pictures Entertainment, [and] an electrical utility,” the newspaper observed that even corporate computer systems once considered impregnable are vulnerable to attack.
In the borderless world of information technology, in fact, computer-security specialists and corporate risk managers have begun working on the assumption that it’s impossible for companies to keep their networks completely free from penetration. Given that reality, they’re zeroing in on the need to detect hackers once they’re inside the system and respond to the attack, rather than just focusing on sealing networks from every possible breach.
“Traditionally, cybersecurity has been focused on the front protection piece,” including internal controls, employee training, and firewalls, according to Heather Crofford, CFO of shared services at Northrop Grumman, the big aerospace and defense contractor. For Northrop and many other companies, however, “detection, response, and recovery are where the increasing investment needs to be,” she says.
But what kinds of solutions should companies invest in, and how much should they invest? The answers to those questions are complicated by companies’ relative lack of experience with cyber risk and the lack of widely available information about the impact that hacker attacks are having on company balance sheets.
Even more challenging is the problem of keeping up with the ever-shifting techniques of hackers. “Once hackers get in, the attack becomes automated. They’re using a machine-written automation system that’s going at a speed at which they can shut down your system and exploit and change data extremely quickly,” says Crofford.
Yet however complex the issues involved in deciding how much to invest in mitigating cyber risk, risk management experts agree on the first step finance chiefs should take: assess the risk—first by gaining a firm grasp of the company’s data assets, then by ranking them in order of their value to the company.
“Look at the relative importance that different information or applications have to the business in terms of the amount they contribute to revenue or the cost that would be incurred if that asset were not available,” advises James Cebula, technical manager of cyber risk management at the CERT Division of the Software Engineering Institute at Carnegie Mellon University. “This allows you to make informed decisions about your overall level of risk and how much you’re comfortable with as an organization.”
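A first pass at the ranking Cebula describes can be as simple as scoring each data asset by what it contributes to revenue and what its loss would cost, then sorting. The asset names and dollar figures below are hypothetical, purely to illustrate the approach:

```python
# Rank data assets by their value to the business, per Cebula's advice:
# revenue contributed plus the cost incurred if the asset were unavailable.
# All asset names and figures here are illustrative placeholders.

assets = [
    {"name": "customer database", "revenue_contribution": 40_000_000, "cost_if_lost": 25_000_000},
    {"name": "design documents",  "revenue_contribution": 15_000_000, "cost_if_lost": 60_000_000},
    {"name": "marketing site",    "revenue_contribution": 5_000_000,  "cost_if_lost": 2_000_000},
]

def asset_value(asset):
    # A simple additive score; a real assessment would weight these
    # components and fold in the likelihood of attack.
    return asset["revenue_contribution"] + asset["cost_if_lost"]

ranked = sorted(assets, key=asset_value, reverse=True)
for asset in ranked:
    print(f'{asset["name"]}: ${asset_value(asset):,}')
```

The point of the exercise is the ordering, not the exact dollar figures: it gives management a defensible basis for deciding how much risk it is comfortable carrying on each asset.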
That, however, may be the easy part of the assessment. Much tougher is gauging the likelihood that a company’s most valued data will be threatened—that is, how alluring a target is it for cybercriminals? To answer that question, a growing number of insurance-related firms are developing models of the risk fed by data on corporate cyber break-ins. The models represent a chance for finance chiefs and risk managers to get a better sense of their companies’ risks of being hacked.
The history of cyberattacks is short, however, and information on what such events can cost a company is hard to come by. Relative to hazards like earthquakes, fires, and lawsuits—not to mention economic perils like capital-market volatility—the threat of bad actors breaking into computer systems and stealing vital information or creating havoc is in its infancy. Unlike, say, the knowledge amassed about hurricanes in the United States, which have been tracked for about 165 years, data about corporate cybercrime has only been recorded since the emergence of the Internet as a vehicle of commerce in the mid-1990s.
But the short history of cybercrime has been partly balanced by its intensifying frequency in recent years. Scott Stransky, manager and principal scientist at AIR Worldwide, a catastrophe-modeling software firm that has developed a prototype model aimed at measuring cyber risks, notes that hurricanes have been relatively infrequent. Between 1900 and 2010, only 284 hurricanes struck the U.S. mainland, according to the National Oceanic and Atmospheric Administration. In contrast, AIR’s prototype cybermodel is based on 4,500 events.
By the time AIR’s full model hits the market in the next two or three years, the firm is “hoping to go an order of magnitude higher in the number of events” used, Stransky says. “These events are happening very frequently, which of course has its disadvantages. But it also means that there’s a lot of data out there,” he adds. “It’s just hard to get hold of.”
Unlike data on hurricanes, which can be downloaded for free from the National Hurricane Center website, nobody produces such a full, readily available data set for cyberattacks, according to Stransky. Thus the firm and other risk modelers must accumulate and tailor raw data from various insurance industry sources. AIR’s model currently uses only publicly available information, but the firm hopes to tap the far more robust private insurance industry databases.
Those databases are a particularly apt source for cyber incidents, because insurers collect vast portfolios of property and liability claims information that run the gamut of corporate clients and industries. For risk modelers’ purposes, the data is especially useful because it can reveal risks that overlap many corporations.
Further, insurance companies have a powerful incentive to trace those overlaps. For example, notes Stransky, an insurer might be writing cyberinsurance for many clients that happen to be using the same cloud-computing provider. If a hacker hit that cloud provider, the attack could “cause many different [corporate insurance] accounts to pay out all at once,” representing a kind of “Hurricane Andrew of cyber,” he says, referring to the 1992 storm that cost the insurance industry $15.5 billion in claims payouts.
While most large companies have some involvement with cloud providers, they don’t tend to rely on them for every aspect of their IT infrastructures. But a small or midsize enterprise, which may have its entire online operation situated on a single cloud application, tends to run a much greater risk if the application is disabled by hackers, Stransky says, adding that this creates a “big common vulnerability” for underwriters covering a portfolio of SMEs.
Similarly, Stransky points out, a single insurance carrier could write insurance for many clients that still use Windows XP on their computers. According to Microsoft, which no longer provides support for the operating system, such users are “five times more vulnerable to security risks and viruses.” That could mean a big payout for the insurer in the event of an attack on the system.
Adding to insurers’ worries about such correlated risks is their current inability to quantify them on their own, contends Stransky, thereby creating a demand for the risk modeling his firm hopes to market.
AIR isn’t the only organization modeling the threat of data breaches. Willis, the large insurance broker, has for the last five years or so been using its own PRISM data-privacy risk model “to project the likelihood that an organization will suffer a loss, as well as the magnitude of the loss should it occur,” according to a marketing brochure. The model employs about 10 years of data, which it gets via subscription from Risk Based Security, an information provider.
The database obtained by Willis consists of unstructured information on about 15,000 to 20,000 breach events, including the number of records compromised in each breach, the industry in which it happened, and the method of the break-in, according to Neeraj Sahni, vice president for cyber and errors and omissions for Willis North America.
The broker’s analytics staff proceeds to segment the data by industry and remove the information on companies in industries it doesn’t do business with, such as pornography and gambling, Sahni says.
For each corporate cyberinsurance client, Willis calculates how many records are likely to be breached over a given period of time—that is, the company’s probable frequency of cyberlosses. The brokerage bases that calculation on the number of breaches that have happened within the company’s industry group, the total number of companies within that industry group, and the relative quality of the company’s controls.
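The frequency calculation Sahni describes amounts to a base rate for the client’s industry group, adjusted for the quality of the client’s controls. The sketch below uses hypothetical numbers and a simple linear adjustment factor, which are assumptions for illustration rather than Willis’s actual method:

```python
def expected_breach_frequency(industry_breaches, industry_companies,
                              control_quality, years=10.0):
    """Estimate a company's expected annual breach frequency.

    industry_breaches:  breaches observed in the industry group over `years`
    industry_companies: number of companies in that industry group
    control_quality:    0.0 (poor controls) to 1.0 (excellent); used here
                        as a hypothetical linear discount on the base rate
    """
    # Base rate: breaches per company per year for this industry group
    base_rate = industry_breaches / industry_companies / years
    # Better-than-average controls push frequency below the base rate
    return base_rate * (1.5 - control_quality)

# e.g., 800 breaches among 2,000 retailers over 10 years, average controls
rate = expected_breach_frequency(800, 2000, control_quality=0.5)
print(f"{rate:.3f} expected breaches per year")  # → 0.040
```

This yields the model’s frequency input: an expected number of breaches per year for the client, which then feeds the severity simulation.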
The modelers then use Monte Carlo simulations to calculate the probable severity of a breach occurring during different amounts of time. Unlike standard property or liability losses, which happen frequently but aren’t too damaging when they do happen, “in cyber, when [a breach] happens, the assumption is that it will be a high-severity loss,” says Sahni. For that reason, the firm’s brokers advise clients not to bother with developing strategies to mitigate routinely occurring small-scale breaches, which tend not to cost much, but to focus on preparing for rare but devastating major attacks. Such mental triage enables companies to put their greatest efforts into planning for the worst.
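The loss profile Sahni describes—years of little or nothing punctuated by a rare, severe event—can be explored with a small Monte Carlo simulation. The sketch below assumes a Poisson-distributed breach count and a heavy-tailed lognormal severity; both distributions and every parameter are illustrative assumptions, not Willis’s actual model:

```python
import math
import random

def poisson_draw(rng, lam):
    # Knuth's algorithm: draw a Poisson-distributed count (fine for small lam)
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_annual_loss(freq, mu, sigma, trials=100_000, seed=42):
    """Monte Carlo estimate of annual cyber-loss percentiles.

    freq:      expected breaches per year (Poisson-distributed count)
    mu, sigma: parameters of a lognormal severity per breach --
               heavy-tailed, matching the rare-but-devastating profile
    Returns the median and 99th-percentile simulated annual loss.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        n = poisson_draw(rng, freq)          # breaches this simulated year
        total = sum(rng.lognormvariate(mu, sigma) for _ in range(n))
        losses.append(total)
    losses.sort()
    return losses[int(0.50 * trials)], losses[int(0.99 * trials)]

# Rare but severe: ~0.04 breaches/year, median severity around $3 million
median, p99 = simulate_annual_loss(freq=0.04, mu=15.0, sigma=1.5, trials=20_000)
print(f"median annual loss: ${median:,.0f}, 99th percentile: ${p99:,.0f}")
```

With these inputs the median year shows no loss at all while the 99th percentile runs into the millions, which is exactly the shape that argues for planning around the tail rather than around routine small breaches.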
Yet while the models and the available data can help finance chiefs with internal assessments of their companies’ risks, they fall short on providing CFOs with support in determining how much they need to budget to mitigate the damage caused by a breach. There is, in short, still much to be learned about the potential costs of a cyberattack.
To be sure, information about the number of records and individuals affected by major breaches at organizations like Home Depot, Sony Pictures, and Anthem has been widely reported, notes David Heppen, a managing director at Marsh, which last July launched Cyber IDEAL, a breach modeling tool to help clients manage risks and make insurance-buying decisions. “On the other hand, there isn’t very much data out there on what [breaches] cost—even with the well-publicized ones,” says Heppen. “And we’re not going to know for some time how much they cost.”
Willis’s Sahni seems to agree. Concerning the information his firm’s model can generate about a company’s cyber risk, he says, “the missing piece is the cost of the breach.” To help corporate clients grapple with the question of how much expense they should plan for, brokers use an “in-house data breach calculator,” plugging in the company’s industry, number of digital records at risk, and potential regulatory and legal costs if a breach happens. “It’s part science and part art,” says Sahni.
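A calculator of the kind Sahni describes can be approximated as a per-record cost that varies by industry, plus fixed regulatory and legal line items. Every figure below is a hypothetical placeholder, not Willis’s actual data:

```python
# Hypothetical per-record breach costs by industry (dollars);
# heavily regulated industries tend to cost more per record.
COST_PER_RECORD = {"healthcare": 360, "financial": 250, "retail": 160}

def estimate_breach_cost(industry, records_at_risk,
                         regulatory_cost=0, legal_cost=0):
    """Rough breach-cost estimate: per-record cost plus fixed exposures."""
    per_record = COST_PER_RECORD.get(industry, 200)  # default for other industries
    return per_record * records_at_risk + regulatory_cost + legal_cost

cost = estimate_breach_cost("retail", 1_000_000,
                            regulatory_cost=2_000_000, legal_cost=5_000_000)
print(f"Estimated breach cost: ${cost:,}")  # → $167,000,000
```

The “science” is the per-record rate drawn from breach data; the “art” is judging the regulatory and legal exposure for a particular client, which is why such estimates remain planning figures rather than forecasts.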
In any case, for a company with a great deal to lose if it’s attacked by hackers, cost may not be the factor it usually is for a CFO in drawing up a risk management budget. At Northrop Grumman, notes Crofford, “we do a lot of classified work in national security, and our reputational risk is huge. For me the issue isn’t trying to figure out what the right amount to invest [in security] is, but knowing that the downside is so significant that you can’t afford to cut corners in this area.”