The first generation of catastrophe modeling technology, based on proprietary “black box” models produced by a few vendors, is 25 years old. It’s now generally known that despite significant investment, these models will never be accurate. At least for the foreseeable future, the models will provide only rough estimates of catastrophe losses because there is so little reliable data in most peril regions.
Executive Summary
Open platforms enable insurers and reinsurers to incorporate and test the impacts of new scientific research much faster than the traditional vendor models allow. Karen Clark explains how these new advanced tools are already being used for greater visibility into the key drivers of profit and loss and to gain competitive advantage.
Of course, the model vendors know this fact, which is one reason they’re actively encouraging companies to “own the risk” and not rely on their models. Because model differences and volatility are driven primarily by changing assumptions and not new science, insurers and reinsurers are expected to develop their own views of risk that they fully understand, control and can explain to internal and external stakeholders, such as regulators and rating agencies.
The closed, proprietary vendor models may be reaching their limit in terms of value per investment dollar. As companies become smarter about catastrophe risk, they’re demanding newer, open platforms that make it more efficient and cost effective to build their own customized views of risk. Given the growing sophistication of model users, the logical next step in the evolution of this important technology is models that are fully transparent and readily customizable.
The Power of an Open Platform
When it comes to catastrophe modeling, many companies still think they have only two choices: either license external third-party models or build models from scratch. But there is a third option based on open platforms that is already being adopted by insurers, reinsurers and ILS investors.
An open platform starts with “reference” models for combinations of peril and region. All of the components of the reference models are fully transparent and accessible by the user. As one example, the author’s company, Karen Clark & Co., has developed the RiskInsight open platform, which comes with reference models that can be used as is or be efficiently customized with built-in tools, such as WindfieldBuilder and DamageRatesManager. (See related article, “RiskInsight Model Tools Explained,” for more information on WindfieldBuilder and DamageRatesManager.)
Users do not start from scratch but rather begin with robust models that are fully transparent and for which the assumptions are easily accessible. With an open platform, you don’t need teams of specialized computer programmers to customize your assumptions. You can refine your models with your own in-house knowledge experts.
Even if you don’t want to modify the reference models, you should know what your model assumptions are. The real power of an open platform is enabling you to see the assumptions driving your loss estimates and to test alternative assumptions more quickly and scientifically as new research becomes available.
An Earthquake Example
Recent earthquake activity has caused the scientific community to rethink a longstanding assumption about how faults rupture. Since magnitude is related to rupture area, scientists generally assumed that earthquake magnitudes were bounded by the “locked” fault segments; the more stable, slowly moving, so-called “creeping” segments were thought to act as barriers to larger magnitude events. According to that theory, the magnitude 9.0 Tohoku earthquake of 2011 should not have been possible.
New studies undertaken since Tohoku postulate that the more stable fault segments, instead of blocking adjacent ruptures, can actually join in to create larger magnitude events under certain circumstances. These newer studies suggest, for example, that the northern and southern ends of the San Andreas fault system could rupture in the same event, causing a larger magnitude than anticipated by the USGS Seismic Hazard Maps. While allowing for larger magnitude events than the 2008 report, the most recent 2014 USGS report still assumes that a creeping segment separating the two ends of the San Andreas fault will stop an earthquake from impacting both San Francisco and Los Angeles.
No one knows the correct answer when it comes to the probability of a large magnitude event in a specific geographical region. New research typically produces theories rather than facts. Most model assumptions rest on expert judgment, and experts often disagree, which is a major source of the differences between models.
An open platform enables you to use a catastrophe model as it should be used—as a tool that lets you scientifically test different sets of credible assumptions and come to your own conclusions.
For example, using an open risk modeling platform, you can create the intensity footprints for larger magnitude earthquakes on selected faults, such as the San Andreas, and superimpose these new hypothetical events onto your exposed property values to estimate the impacts to your portfolio losses. Once you’ve analyzed the potential impacts, you can make a fully informed decision as to whether or not you want to base any risk management decisions on this new research. You no longer have to wait for a vendor model update that may or may not give you intuitive information or reflect how you think about the risk.
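The scenario test described above can be sketched in a few lines: superimpose a hypothetical event footprint on portfolio exposures and apply a vulnerability curve. Everything here is illustrative; the intensity values, exposure figures and `damage_ratio` curve are hypothetical placeholders, not assumptions from any actual model.

```python
# Sketch: estimate portfolio loss from a hypothetical earthquake footprint.
# All intensities, exposures and the damage curve are illustrative only.

def damage_ratio(mmi: float) -> float:
    """Toy vulnerability curve: fraction of property value lost at a
    given Modified Mercalli Intensity (not a calibrated curve)."""
    if mmi < 6.0:
        return 0.0
    return min(1.0, 0.02 * (mmi - 6.0) ** 2)

# Hypothetical event footprint: location -> shaking intensity (MMI)
footprint = {"San Francisco": 9.0, "San Jose": 8.0, "Los Angeles": 7.5}

# Hypothetical portfolio: location -> total insured value (USD)
exposures = {"San Francisco": 500e6, "San Jose": 300e6, "Los Angeles": 800e6}

# Superimpose the footprint on the exposures to estimate ground-up loss
loss = sum(exposures[loc] * damage_ratio(mmi)
           for loc, mmi in footprint.items() if loc in exposures)
print(f"Estimated ground-up loss: ${loss:,.0f}")
```

In a real platform the footprint would be a gridded wind or shaking field and the exposures geocoded at location level, but the core calculation is this event-by-event overlay.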
A Hurricane Example
Climate change is another area of wide uncertainty, and scientists do not yet have the skill to quantify how a warming climate will ultimately impact hurricane frequency and severity. Scientists can’t give you definitive numbers, but they can give you credible ranges of estimates. For example, according to the Intergovernmental Panel on Climate Change (IPCC), the most likely scenario is for hurricane wind speeds to increase by 2-11 percent over the next several decades.
An open platform enables you to create a new catalog of hurricanes with intensity footprints reflecting the new wind speed assumptions. Once you’ve created the new events, you can run them on your portfolio to test the impacts on your loss estimates. What you’ll find is that while 2-11 percent doesn’t sound like a lot, losses grow disproportionately with wind speed, so even a modest increase in intensity produces a much larger increase in loss.
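A back-of-the-envelope sensitivity test makes the point. The exponent of 4 below is an illustrative assumption; published studies suggest hurricane damage grows with a high power of wind speed (estimates range from roughly the third to the ninth power), and any value in that range yields the same qualitative conclusion.

```python
# Toy sensitivity test: if loss scales as wind speed raised to a power K,
# how do the IPCC's 2-11 percent wind speed increases translate into
# loss increases? K = 4 is an illustrative assumption.

K = 4  # assumed loss-vs-wind-speed exponent (illustrative)

for pct in (2, 11):
    factor = (1 + pct / 100) ** K
    print(f"+{pct}% wind speed -> ~{(factor - 1) * 100:.0f}% higher loss")
```

With this assumed exponent, a 2 percent wind speed increase raises losses by about 8 percent, and an 11 percent increase raises them by more than 50 percent.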
These examples clearly highlight that no catastrophe model, your own or a third party’s, can give you definitive answers. The models give you loss estimates based on sets of assumptions, and only with full visibility into each assumption and its impact on your loss estimates can you properly interpret the model. The ability to test different assumptions and the ability to control them are two of the most powerful benefits of an open platform.
Using an Open Platform for Unmodeled Peril Regions
There are many peril regions for which vendor models don’t yet exist. Open platforms can fill this gap.
Most of the hazard data underlying the catastrophe models is publicly available and can be gathered without too much difficulty. Chances are, for an unmodeled region, there will not be a lot of reliable data. Using the available data and scientific studies, you can create a catalog by defining the locations, magnitudes and frequencies of hypothetical future events.
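At its core, a catalog of hypothetical events is just a structured list of locations, magnitudes and annual frequencies. A minimal sketch, with placeholder values for an arbitrary region:

```python
# Sketch of a stochastic event catalog for an unmodeled region.
# Each hypothetical event has a location, magnitude and annual rate
# of occurrence. All values are illustrative placeholders.
catalog = [
    {"id": 1, "lat": 13.7, "lon": 100.5, "magnitude": 6.5, "annual_rate": 0.010},
    {"id": 2, "lat": 13.9, "lon": 100.7, "magnitude": 7.0, "annual_rate": 0.004},
    {"id": 3, "lat": 14.1, "lon": 100.9, "magnitude": 7.5, "annual_rate": 0.001},
]

# A frequency check against the historical record and scientific studies:
# combined annual rate of events at or above magnitude 7.0
rate_m7 = sum(e["annual_rate"] for e in catalog if e["magnitude"] >= 7.0)
print(f"Annual rate of M7+ events: {rate_m7}")
```

A production catalog would contain thousands of events, but the principle is the same: the magnitudes and rates should reproduce the frequency statistics implied by the available data and studies.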
The process is similar to what many companies already do to evaluate a vendor model. You can’t properly test a third-party model without detailed knowledge of the historical data and scientific information for that peril region. How else do you benchmark the model output?
For unmodeled perils, you simply collect the scientific information in advance and use it to build your own set of events in an open platform.
Once you have a catalog of events, you can assign intensities that relate to a set of damage functions. Because there are very few peril regions for which there is actual loss data from recent events, chances are there will not be much claims data for the construction of the vulnerability curves. Instead, you can extrapolate from other regions that may be better studied, have had recent events or have similar types of properties. The financial model built into the open platform will apply the appropriate policy conditions to convert the damages into insured loss estimates.
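The financial model step can be illustrated with simplified per-risk policy terms. Real policy structures (layers, coinsurance, reinstatements) are more involved; the function below is only a sketch of the basic deductible-and-limit logic.

```python
def insured_loss(ground_up: float, deductible: float, limit: float) -> float:
    """Apply simplified per-risk policy terms: the insurer pays the
    ground-up loss above the deductible, capped at the limit."""
    return max(0.0, min(ground_up - deductible, limit))

# Example: $1M ground-up damage, $100K deductible, $500K limit
print(insured_loss(1_000_000, 100_000, 500_000))  # -> 500000
```

The same function applied across every location and event in the catalog converts modeled damages into the insured loss estimates used for pricing and portfolio management.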
Once you’ve parameterized the model components, you can run event scenarios to estimate the losses. You can compare your model loss estimates to actual loss data for validation purposes and fine-tune your model assumptions as necessary.
While you can’t build a perfect model, with an open platform you can build a robust model that is fully transparent and for which you have complete understanding of and control over the important assumptions.
Why Open Platforms Make Sense
Whenever there are significant advances in technology, there are naysayers who can tell you all the reasons the new approach won’t work and list all of its potential negatives. There may be comfort in the status quo of simply adjusting vendor model output or blending models to develop your view of risk, despite the inefficiencies and shortcomings of these approaches. Taking ownership of the modeling process with an open platform can sound daunting, so why bother?
Other than the fact that peer companies are getting a head start, the benefits of an open platform are many:
- Full transparency on model assumptions.
- Control over model assumptions.
- More efficient modeling processes.
- Ability to test and incorporate new scientific research as soon as it’s published.
- More complete understanding of key drivers of large losses.
- More consistent risk management decisions.
- Better retention of the most profitable clients.
Open platforms give you a better line of sight into your most profitable business so you can identify and nurture your best accounts.
Open platforms are not in the distant future. Insurers and reinsurers are already using these new tools to better leverage their own data and knowledge for competitive advantage. The naysayers will be left behind, because open platforms simply make too much sense for anyone who understands catastrophe risk, and it is only logical that they represent the next evolution of catastrophe modeling technology.
More Information on Catastrophe Risk Models
Carrier Management has published a series of articles written by Karen Clark about risk models.
- What Rating Agencies Really Want to Know About Your Catastrophe Risk
- Next Generation Cat Modeling: Multimodel, Open Source or Open Platform—What’s the Difference?
- The Current Scientific Consensus on Climate Change and Hurricanes—It May Surprise You
- Flooding and Flood Models Explained: Karen Clark Reviews the Basics
- Understanding Hurricanes: The Basic Facts for Carrier CEOs