Risk Management

Risk Psychology Invites Exposure to Black Swans

Why your company is likely to be unprotected against occurrences that rarely happen but cause extreme pain when they do.
Frank Licata, September 11, 2018

With weather events swirling and cyber-hackers swarming, CFOs need to keep a clear head and have a coherent risk strategy.

Too often, however, companies fail to take proper precautions against Black Swans — outlier events, like last year’s extreme flooding in Houston. Their occurrence isn’t likely, but their effects are vastly greater than those caused by the run-of-the-mill events that happen constantly.

Risk managers have always been concerned with severity vs. frequency. But while the rare-but-severe event is the more important, business managers and insurance brokers alike tend to focus more on frequency. That’s understandable, but a mistake nonetheless.

Very frequent loss-causing events are a cost of doing business. They are not insurable on a basis that makes business sense. An insurer won’t offer coverage unless the premium is at least 165% of the average annual loss.
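The arithmetic behind that claim is worth making concrete. Here is a minimal sketch using the article's 165% figure; the loss amount is a hypothetical placeholder, not data from the article.

```python
# Why very frequent, predictable losses aren't worth insuring:
# per the article, an insurer's premium runs at least 165% of the
# average annual loss. The loss figure below is hypothetical.

PREMIUM_LOADING = 1.65            # premium = 165% of expected annual loss

avg_annual_loss = 100_000         # hypothetical: frequent, predictable losses
premium = PREMIUM_LOADING * avg_annual_loss

# Expected overpayment per year if you insure a loss you could budget for:
overpayment = premium - avg_annual_loss
print(f"Premium: ${premium:,.0f}")
print(f"Expected cost of retaining the risk: ${avg_annual_loss:,.0f}")
print(f"Expected annual overpayment: ${overpayment:,.0f}")
```

For a loss that recurs predictably, the 65% loading is pure cost; retaining the risk and budgeting for it is the rational choice, which is why such losses are treated as a cost of doing business.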

Somewhat frequent losses are the ones that receive the most attention from CFOs and CEOs. These don’t occur every day but do happen enough and cost enough to be a concern. Your insurance broker will make sure these losses are covered.

What about rare but truly severe events — the ones that can bring a company to its knees? These typically go unmanaged, unless there’s a focused risk management culture. Company leaders have a way of brushing aside concern for such events, except, perhaps, for some vague unease in the back of their minds.

Why the Lack of Urgency?

First, there are some psychological phenomena that trigger faulty judgments. We tend to assess the probability of an event happening in the future by how readily we can bring it to mind (“availability bias”) or by how recently it has happened (“hindsight bias”). The “bystander apathy effect” allows us to wave off concern if no one else in a group raises it.

There’s also the “problem of induction.” With inductive reasoning we project the future based on events we’ve observed in the past. If it hasn’t happened to us, we assume it won’t happen.

Next, consider the motivations of insurance brokers. They want to sell policies, and to do that they need happy customers. They don’t want to get bogged down in what would seem to buyers like irrelevant talk about events that hardly ever happen. And buyers certainly don’t want to pay the high premiums that insurers would demand for covering such events.

Brokers can’t be expected to critique the terms and conditions of their own products, except with respect to losses they know are bound to happen in the short term, which they emphasize in their proposals.

Finally, Black Swans happen so rarely that if one does happen, a broker often will lose just one customer. Here’s a good example from the commercial property insurance space; the insurance products described in the article behind the link epitomize the “don’t worry about it — it’ll never happen” syndrome.

Note that this isn’t a criticism of brokers; their behavior is simply a product of the way the insurance marketplace is structured.

Let’s look at a case study. Our firm has extensively researched the 2010 BP oil disaster in the Gulf of Mexico, which cost $60 billion (to date) and took 11 lives. We’ve reviewed all the government investigative reports, those by industry groups, and BP’s own analysis.

Days before the event, BP received a safety award (for activities aboard that very same rig) from the Minerals Management Service, the U.S. government agency that was in charge of oversight at the time of the disaster.

The award wasn’t just an odd quirk. It reflected something that happens constantly in businesses of all kinds — the aforementioned risk focus on somewhat frequent events, while managers are oblivious to the weak signals of much bigger problems brewing beneath the surface.

Frequency is easier to manage than severity because it is visible, and there is immediate feedback as to whether it’s being managed. BP actually had abundant warning that the well was getting out of control, but the culture was focused so much on cost and speed, and so little on risk management, that the company was somehow able to ignore one sign after another. (See my article on the spill.)

Seeing the Signals

Managing severity isn’t really all that difficult; it does take a risk management culture, though.

Severe events don’t happen suddenly without warning. It just seems that way because low-volume signals aren’t recognized and acted upon.

There is lots of noise in the operations of any organization. But some of what seems like noise is actually a weak signal of trouble ahead. Being mindful enough to see the difference is the essence of managing severe risk.

We know that a faint wisp of smoke is the first warning of fire, and not many of us ignore it. Busy executives, though, may pay no mind to such signals until it’s too late.

Here’s another phenomenon: Companies far and wide have instituted safety rules mandated or recommended by OSHA, other government agencies, insurance companies, and loss-control experts. Almost all of these rules call for redundancies and safety margins in all operations.

But disasters happen anyway. Why? In practice, the margins are not always completely observed. Cheating goes on, in the interest of speed and cost, and usually nothing happens. If cheating on the tolerances caused a disaster every time, the cheating would stop.

Workers know they can hedge a bit — they know the margins are there. But sometimes, on a particular job that requires safety measures, a second margin will get shaved, and maybe a third. The defects are additive and/or multiplicative, and the cumulative effect can be a disaster.

For example, despite heavy safety oversight, cranes continue to collapse. For discussion purposes, assume three safety factors: a weight capacity on the material being lifted, a level base, and wind speed. Slightly exceeding the limit on any one of those can be tolerated, but all three at the same time will cause a collapse.
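The compounding logic in the crane example can be sketched with a small simulation. The probabilities below are hypothetical, chosen only to illustrate the shape of the problem: each margin is shaved fairly often, yet a collapse requires all three to be exceeded on the same lift.

```python
import random

# Hypothetical illustration of the crane example: three independent
# safety margins (load weight, base level, wind speed). Shaving any
# single margin is usually tolerated; collapse needs all three at once.

random.seed(42)

P_SHAVE = 0.05          # assumed chance a given margin is exceeded per lift
N_LIFTS = 1_000_000     # number of lifts simulated

collapses = 0
for _ in range(N_LIFTS):
    weight_over = random.random() < P_SHAVE
    base_unlevel = random.random() < P_SHAVE
    wind_over = random.random() < P_SHAVE
    if weight_over and base_unlevel and wind_over:
        collapses += 1

# Joint probability is 0.05 ** 3 = 1 in 8,000 lifts: rare enough that
# cheating almost always goes unpunished, common enough to be certain
# across a fleet and a decade.
print(f"Collapses: {collapses:,} in {N_LIFTS:,} lifts")
print(f"Observed rate: {collapses / N_LIFTS:.6f}")
```

Each individual shave fails to cause a disaster roughly 99.99% of the time, which is exactly the feedback loop the article describes: the cheating is reinforced right up until the margins line up.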

Frequency-vs.-severity thinking should apply to the purchase of insurance also. Those who aren’t risk managers put severity way in the back of their minds, and their insurance brokers are more than happy to go along.

The psychological phenomena described above are dangerous. For example, because of hindsight bias, people take comfort from the fact that a certain kind of event “hasn’t happened here in [X number of] years.” That’s faulty logic. Severe events don’t happen to any single company with that kind of frequency.

Only insurance companies — and depending on an event’s severity, only the larger ones — have the critical mass to create models that incorporate both frequency and severity. For an individual company, the faulty logic serves as an excuse to ignore the potential problem, or a defense mechanism if a loss has already happened.
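The kind of model the article credits insurers with is, in broad strokes, the standard collective risk model: annual loss is a random number of claims (frequency) times random claim sizes (severity). Here is a hedged sketch with entirely hypothetical parameters, showing why the tail year dwarfs the average year.

```python
import math
import random

# Sketch of a collective risk model combining frequency and severity.
# All parameters are hypothetical, for illustration only.

random.seed(0)

LAMBDA = 2.0             # average number of losses per year (Poisson)
MU, SIGMA = 10.0, 1.5    # lognormal severity parameters

def poisson(lam):
    # Knuth's algorithm for drawing a Poisson-distributed count
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_year():
    # Annual loss = sum of a random number of random-sized claims
    n = poisson(LAMBDA)
    return sum(random.lognormvariate(MU, SIGMA) for _ in range(n))

years = sorted(simulate_year() for _ in range(10_000))
mean_loss = sum(years) / len(years)
tail_99 = years[int(0.99 * len(years))]   # roughly a 1-in-100-year loss

print(f"Average annual loss: {mean_loss:,.0f}")
print(f"99th-percentile year: {tail_99:,.0f}")
```

The gap between the average year and the 99th-percentile year is the severity problem in miniature: a company reasoning from its own short loss history sees only the middle of this distribution, while the insurer's model sees the tail.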

Understand the Black Swan problem, its causes, and how to properly manage it, and you’ll be in the top 20% of businesses.

Frank Licata is president of Licata Risk Advisors, a risk and insurance advisory and management firm.

© 2018 Licata Risk & Insurance Advisors, Inc.