Catastrophe modeling is growing, propelled by IT and a disastrous first quarter.
David Rosenbaum, CFO.com | US
May 2, 2011
Modeling quantifies exposures in a given array; that is all it is intended to produce and all it does. As such modeling has its place in the measurement phase of any risk management exercise. Issues arise when non-risk management people "assume" the results predict an outcome. Use of modeling to predict results will generally lead to an unfortunate ending.
Posted by Gregory Sosbee | May 12, 2011 11:05 am
There is a fundamental rule of statistics: you can't get more out of a data set than what is in it. You can't get 10,000 years of information out of 100 years of data. "At best, we have 100 years of historical data," says Jayanta Guin, AIR's senior vice president of research and modeling. "That's not enough." So instead of simply using the data from those 100 years to make a model, AIR runs 10,000 possible permutations of that historic data to provide what Guin calls "the full universe of possible outcomes."

This will increase precision but not accuracy. AIR only has 100 years of data, with all of the quirks of that period. Do those 100 years adequately represent all of the past? How could you know? Are there no serial correlations between years? Only by assuming no serial correlations can you use years randomly to create an alternate future that looks a lot like the last 100 years, but is more stable.

We need to be skeptical about statistical methods, much as we might not like the results of the restrictions. There is entropy in statistics -- we never get more information than the data holds. The benefit of that humility is that corporations will not make as many mistakes over uncertain events. Better that a corporation be less certain, and take defensive measures, than it be improperly certain, and do nothing.
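The resampling idea the comment describes can be made concrete with a minimal sketch. This is not AIR's actual methodology, and the loss figures below are invented for illustration; it simply resamples observed years with replacement (a bootstrap), which is only valid under the no-serial-correlation assumption the comment questions:

```python
import random

def simulate_histories(annual_losses, n_sims=10_000, seed=42):
    """Resample observed years with replacement to build many simulated
    histories of the same length. Assumes years are serially independent --
    exactly the assumption being challenged above."""
    rng = random.Random(seed)
    horizon = len(annual_losses)
    return [
        [rng.choice(annual_losses) for _ in range(horizon)]
        for _ in range(n_sims)
    ]

# Hypothetical annual catastrophe losses (in $bn), for illustration only.
observed = [1.2, 0.4, 8.9, 2.1, 0.7, 15.3, 3.0, 0.9, 5.5, 1.8]
sims = simulate_histories(observed, n_sims=1000)

# No simulated "worst year" can ever exceed the worst observed year:
# resampling adds precision, not information.
worst_years = [max(history) for history in sims]
assert max(worst_years) <= max(observed)
```

Note that the final assertion holds by construction: however many permutations you run, the simulated universe contains only values that already appeared in the data, which is the "entropy in statistics" point in code form.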
Posted by David Merkel | May 03, 2011 01:00 am

© CFO Publishing Corporation 2009. All rights reserved.