When it comes to deciding how much property-casualty insurance to buy for their companies, CFOs and corporate risk managers face an essential dilemma. On one hand, they don’t want to buy more insurance than their companies need because the cost depletes capital that could be used for other business purposes. On the other hand, if they buy too little insurance, they could put their company’s balance sheet at risk if a significant loss occurs.
Deciding how much insurance to buy is difficult enough; it can be even harder when a company has little or no history of large losses on which to base the decision. The use of analytics, however, can help executives make more informed decisions about how much insurance coverage is appropriate to buy during their spring renewal.
Although it may be tempting for CFOs and risk managers who have never experienced a large loss at their firms to discount the possibility of one occurring in the future, the exposure still exists. Just as insurers must price coverage to match their exposure to risk to succeed in the long term, executives should consider the possibility of a large event when evaluating the appropriate amounts of coverage to buy from insurers. One need look no further than the aftereffects of recent large retail data breaches and catastrophes like Superstorm Sandy to know why.
Quantifying a Big Exposure
Quantifying a company’s exposure to large losses involves two steps: estimating the likelihood of a large-scale event occurring, and estimating the cost of such an event if it does occur.
For a company with little to no loss history, it’s useful to consult large losses experienced by other companies in the same industry. That requires a robust source of industry loss data. A threshold representing a large event needs to be selected for this exercise, such as all industry loss events above $10 million.
By considering the number of such events across the industry (on an annual basis), in combination with the company’s size relative to the size of the industry, one can estimate the company’s likelihood of a large event occurring. These initial estimates can be further refined based on company-specific considerations that include loss control measures, nature of exposures and business models.
Using that data, analytics can then help provide a reasonable basis for estimating the potential costs associated with such an event.
It’s important to recognize, however, that although industry losses serve as a useful guide, they’re not a definitive statement of loss potential. For example, if the largest industry event is $100 million, you shouldn’t assume that this is the maximum loss the company could experience. Instead, CFOs and their staffs should consider a range of potential loss outcomes if a large loss occurs, over and above the range suggested by the individual losses.
One way to accomplish this is through loss distributions. Each individual large loss in a data set can be viewed as a result drawn at random from an underlying loss distribution. By fitting a curve to its losses, a company can estimate the potential distribution of costs above and beyond what has already been seen from the available data.
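As a sketch of the curve-fitting idea, the snippet below fits a lognormal distribution (a common severity model, assumed here for illustration) to a small hypothetical sample of large losses by taking the mean and standard deviation of the log-losses, then reads off a fitted 99th-percentile loss that extends beyond anything actually observed. The loss figures are invented for illustration, not drawn from the article's data.

```python
import math
from statistics import NormalDist, fmean, pstdev

# Hypothetical industry large-loss sample, in $ millions (illustrative only).
losses = [10, 12, 15, 20, 25, 30, 40, 60, 90, 125]

# Fit a lognormal: the mean and standard deviation of the log-losses are the
# maximum-likelihood estimates of the lognormal's parameters.
log_losses = [math.log(x) for x in losses]
mu = fmean(log_losses)
sigma = pstdev(log_losses)

def loss_quantile(p):
    """Lognormal quantile: exp(mu + sigma * z_p), where z_p is the
    standard-normal quantile at probability p."""
    return math.exp(mu + sigma * NormalDist().inv_cdf(p))

print(f"Largest observed loss: ${max(losses)}M")
print(f"Fitted 1-in-100 loss (99th percentile): ${loss_quantile(0.99):.0f}M")
# The fitted tail reaches past the largest loss in the data set, which is
# exactly the point: the curve suggests outcomes the sample hasn't shown yet.
```

Because the fitted distribution has a continuous tail, it assigns meaningful probability to losses larger than the biggest one on record, which is what lets a company reason about outcomes "over and above" its available data.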
Consider the following example: Company XYZ, which has a 1 percent market share of its industry, is beginning its insurance-renewal discussions. Let’s say the company, which has never experienced a loss above $10 million, wants to figure out if $75 million in general liability insurance is the best amount for it to buy.
Upon further investigation, the finance department learns that 50 general liability losses of more than $10 million have occurred in the industry over the last five years. Insured losses from those events range from $10 million to $125 million, with an average loss of $50 million.
With 50 industry losses of more than $10 million over a five-year period, the number of industry events above $10 million can be estimated as 10 per year (50/5). Since Company XYZ has a 1 percent market share, its likelihood of a loss above $10 million can be estimated at 10 percent per year: (10 losses per year)(1%) = 0.10, or 10% per year.
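The frequency arithmetic above can be sketched in a few lines of Python, using the example's illustrative figures:

```python
# Estimate Company XYZ's annual likelihood of a loss above $10 million,
# using the illustrative industry figures from the example.

industry_events = 50   # industry losses above $10M observed
years_observed = 5     # observation window, in years
market_share = 0.01    # Company XYZ's 1 percent share of the industry

# Industry frequency: events per year across the whole industry.
events_per_year = industry_events / years_observed   # 10 per year

# Company likelihood: scale the industry frequency by market share.
company_likelihood = events_per_year * market_share  # 0.10, i.e. 10% per year

print(f"Industry events per year: {events_per_year:.0f}")
print(f"Company XYZ likelihood of a >$10M loss: {company_likelihood:.0%}")
```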
In this example, we assume that the distribution that best fits the industry data yields the following results:
- Average loss = $50 million.
- 1-in-5 loss (80th percentile) = $75 million.
- 1-in-10 loss (90th percentile) = $100 million.
- 1-in-100 loss (99th percentile) = $150 million.
Given these results, how should Company XYZ evaluate what buying $75 million in total coverage would mean? The thought process may go as follows:
- If we have a large loss, the likelihood that that amount of coverage would be enough is 80 percent (a 1-in-5 loss).
- But there’s a 10 percent chance that the loss will be at least $25 million above those coverage limits ($100 million – $75 million), and a 1 percent chance that it will be at least $75 million above our limits ($150 million – $75 million).
- Our likelihood of experiencing a large loss is 10 percent.
- By combining the 10 percent likelihood of a large loss with the 80 percent likelihood that our current coverage limits would contain a large loss, our limits appear to be adequate 98 percent of the time: 100 percent – (10 percent x (100 percent – 80 percent)) = 98 percent.
- If we want limits to be sufficient 99 percent of the time, we would need to increase them to the 90th percentile of the loss distribution, or $100 million: 100 percent – (10 percent x (100 percent – 90 percent)) = 99 percent.
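The adequacy arithmetic in the bullets above can be sketched as follows; all inputs are the example's illustrative figures:

```python
# Probability that coverage limits are adequate in a given year:
#   P(adequate) = 1 - P(large loss) * (1 - P(loss <= limit | large loss))

p_large_loss = 0.10  # annual likelihood of a loss above $10M

def adequacy(percentile_of_limit):
    """Chance the limits hold up over a year, given that the limit sits at
    this percentile of the large-loss distribution."""
    return 1 - p_large_loss * (1 - percentile_of_limit)

# A $75M limit sits at the 80th percentile -> adequate 98% of the time.
print(f"$75M limit:  {adequacy(0.80):.0%}")
# A $100M limit sits at the 90th percentile -> adequate 99% of the time.
print(f"$100M limit: {adequacy(0.90):.0%}")
```

Reading the formula from right to left: the only way the limits fail is for a large loss to occur (10 percent chance) and then exceed the limit (20 percent chance at the 80th percentile), so the failure probability is 10% x 20% = 2%, leaving 98% adequacy.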
Aside from insurance limits, there are other important considerations concerning the best use of risk transfer. A company's risk-bearing capacity and appetite, along with its market-based insurance premiums, are important factors when deciding how much risk to retain and how much to transfer. Sophisticated techniques can be used to ensure all relevant factors are part of the decision-making framework.
Analytics can provide the credible supporting documentation needed for insurance limit and other risk transfer discussions at the executive and board levels. Particularly with company boards being more demanding and specific about the risks facing companies today, the better prepared executives are with data heading into those discussions, the more satisfied a company’s stakeholders will be.
Claude Yoder is head of Marsh Global Analytics and Dave Heppen leads the Marsh Global Analytics North American unit.