Executive Summary
Insurance companies have been challenged in the past to develop consistent risk management strategies—using opaque catastrophe models producing output that changes frequently and significantly. The situation is now changing, according to Karen Clark.

The new year is ushering in change with respect to catastrophe risk management. New tools and technology are available, providing insurance companies with:
- Open platforms for transparency and flexibility
- Stable risk metrics for individual account underwriting
- Efficient frontier analytics for developing an optimal property portfolio
For the past two decades, insurance companies have relied on catastrophe models for their risk management infrastructure, building important pricing and underwriting processes around the models and the model output. While the models are very valuable tools, they have certain shortcomings, such as a lack of transparency. Newer technologies provide more sophisticated, open, and robust platforms for catastrophe risk management.
Open platforms for estimating and managing catastrophe losses are fully transparent and flexible. This means insurance companies are able to see all of the assumptions underlying the loss calculations and can fine-tune those assumptions to better reflect their own beliefs and books of business.
Damage functions—a primary model component—provide a good illustration of the efficiencies and benefits of an open platform. It takes years after significant events occur for the model vendors to collect and analyze claims data from the insurance companies willing to share this information. The updated damage functions are averaged across these companies and not specific to any one company.
This is less than ideal because insurance companies have distinct claims-handling practices, different policy conditions, and other individual nuances that set them apart from the average. An open platform enables companies to use their own detailed claims data and internal experts to fine-tune and customize their damage functions, thereby leading to more credible loss estimates for their specific books of business and, in turn, competitive advantage.
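The customization described above can be sketched in a few lines. A damage function maps a hazard intensity (here, peak gust wind speed) to a mean damage ratio; a company can blend the industry-average curve with one fitted to its own claims, weighted by how credible its claims volume is. All function names, curves, and numbers below are illustrative assumptions, not taken from any actual model.

```python
# Sketch of customizing a damage function with company claims data.
# All curves and numbers are illustrative placeholders.

def blend_damage_ratio(intensity, vendor_curve, company_curve, credibility):
    """Credibility-weighted blend of an industry-average damage function
    and one fitted to a company's own claims experience.

    intensity     -- hazard intensity (e.g., peak gust in mph)
    vendor_curve  -- callable: intensity -> mean damage ratio (industry average)
    company_curve -- callable: intensity -> mean damage ratio (own claims fit)
    credibility   -- weight in [0, 1] given to the company's own curve
    """
    return (credibility * company_curve(intensity)
            + (1.0 - credibility) * vendor_curve(intensity))

# Toy curves: damage ratio rises linearly with wind speed, capped at 1.0.
def vendor(v):
    return min(1.0, max(0.0, (v - 50.0) / 150.0))

def company(v):  # hypothetical fit to the company's own claims data
    return min(1.0, max(0.0, (v - 60.0) / 140.0))

ratio = blend_damage_ratio(120.0, vendor, company, credibility=0.3)
```

The credibility weight is the key judgment call: a company with deep claims experience in a region might push it toward 1.0, while one relying mostly on industry data would keep it low.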
(Editor’s Note: RiskInsight®, a framework developed by the author’s company, Karen Clark & Company, is one of two catastrophe risk tools offering an “open global platform for catastrophe risk management.” Oasis Loss Modelling Framework provides another. Oasis is a not-for-profit company supported by Lloyd’s and funded by a community of well-known insurers and brokers in the UK, Bermuda, Zurich and the United States. KCC is an associate member.)
Even companies that do not have a large volume of claims data from actual events may have specific expertise about certain types of risks. Actual site inspections, familiarity with management, and engineering reports provide valuable and credible information that should be reflected in the loss estimates. New approaches are not “either/or,” but rather allow underwriting and engineering expertise to be combined with scientific assumptions to enhance the quality and reliability of the risk assessment process.
Open platforms also enable insurance companies to more efficiently utilize the expertise of many scientific organizations around the world. While the model vendors employ very knowledgeable experts, these experts have their own subjective opinions and biases. Open platforms enable sophisticated insurance companies to pick and choose the most credible assumptions for their books of business and to build their own proprietary views of risk.
Stable risk metrics
A key challenge facing insurance companies is developing consistent risk management strategies when the model output changes so frequently and significantly. Much of the model volatility is a reflection of the wide uncertainty driven by the lack of data supporting many of the model assumptions.
Because scientific research can be seductive, it’s easy to forget that most of this research produces new theories and not new facts. The actual facts—the data—underlying the most fundamental model assumptions, such as the frequency and severity of events by peril region, are very sparse with only a few exceptions.
For example, a major hurricane has not made landfall in the Northeast since 1938, and scientists don’t know if the 1938 event was a Category 2 or 3 storm. Even in the Southeast region north of Florida, there have been fewer than a handful of major hurricanes since 1900.
While it is well known that the central United States can experience a large magnitude earthquake, the last series of major events occurred in 1811-1812, and scientists don’t know if the largest magnitude event was a 7.4 or 8.1, or if the return period for another major event is 500 or 1,000 years. Even in California, much remains unknown about the frequency-magnitude relationships on the most studied faults.
The new tools available to insurance companies do not eliminate the uncertainty around catastrophe risk, but they do provide better methods for making decisions in light of this uncertainty. Rather than producing opaque information that changes frequently, the new tools provide stable and transparent risk metrics—representing event frequency and severity assumptions—to guide pricing and underwriting decisions. Stable risk metrics enable companies to implement more consistent risk management strategies—strategies that don’t have to change every time there is a model update.
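To make the idea of transparent frequency and severity metrics concrete, the sketch below derives two standard quantities from an event catalog: the average annual loss (sum of rate times loss over all events) and the annual rate at which a given loss level is exceeded. The events, rates, and losses are invented for illustration only.

```python
# Minimal sketch of transparent risk metrics from an event catalog.
# Rates and losses are illustrative placeholders, not real model output.

events = [
    # (annual occurrence rate, loss to the portfolio in $M)
    (0.04, 250.0),
    (0.01, 900.0),
    (0.002, 2500.0),
]

# Average annual loss: expected loss per year across all catalog events.
aal = sum(rate * loss for rate, loss in events)

def exceedance_rate(threshold):
    """Annual rate at which portfolio loss meets or exceeds the threshold."""
    return sum(rate for rate, loss in events if loss >= threshold)

# A "100-year" loss level is one exceeded at a rate of 0.01 per year.
rate_at_900 = exceedance_rate(900.0)
```

Because every rate and loss in the catalog is visible, a company can trace exactly which frequency and severity assumptions drive a metric, and those assumptions need not change with every model release.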
Efficient frontier analytics
Once a company has in place a platform that provides stable risk metrics for measuring and monitoring risk, it can test alternative property portfolios to determine the trade-off between large loss potential and profitability. In general, one takes on more risk to increase returns, but many companies have suboptimal portfolios—meaning for the same amount of risk, profits could be higher, or for the same amount of profit, the large loss potential could be lower.
New tools enable companies to calculate, for a given region and type of business, the maximum amount of expected profit available for different levels of loss. These tools allow companies to proactively test alternative growth strategies, and to see how large loss potential would be affected, before selecting the desired strategies for implementation. Figure 1 is a hypothetical efficient frontier showing the maximum amount of profit available for a range of 100-year event loss levels.
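The frontier logic itself is simple dominance screening: a candidate portfolio sits on the efficient frontier if no other candidate offers at least as much profit for no more large-loss potential. The portfolio names and figures below are hypothetical.

```python
# Sketch of identifying the efficient frontier among candidate portfolios.
# Each candidate is (100-year loss, expected annual profit), both in $M;
# all values are invented for illustration.

portfolios = {
    "A": (500.0, 40.0),
    "B": (500.0, 55.0),   # dominates A: same loss, more profit
    "C": (800.0, 70.0),
    "D": (900.0, 65.0),   # dominated by C: more loss, less profit
}

def efficient_frontier(candidates):
    """Return the candidates not dominated by any other candidate
    (one with no more loss and no less profit, strictly better in one)."""
    frontier = {}
    for name, (loss, profit) in candidates.items():
        dominated = any(
            l2 <= loss and p2 >= profit and (l2 < loss or p2 > profit)
            for other, (l2, p2) in candidates.items()
            if other != name
        )
        if not dominated:
            frontier[name] = (loss, profit)
    return frontier

best = efficient_frontier(portfolios)
```

In practice the candidates would be generated by perturbing the current book (growing or shrinking lines by region), with losses and profits computed from the same stable risk metrics used for underwriting, so results are comparable across strategies.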
The new generation
Along with senior management and boards of directors, external stakeholders, such as rating agencies and investors, have growing expectations with respect to how companies develop their own proprietary views of risk. Only a few of the largest companies have the internal resources to build their own models. Because new technology is open, efficient, and cost-effective, it empowers more companies to evaluate the relevant scientific information and embed their own conclusions into their day-to-day decision making.
The catastrophe modeling methodology was developed 25 years ago, and it has served the industry well, but insurance companies are now embracing a new generation of tools and approaches. These new tools, already available, do not replace the models, but rather complement and extend the capabilities available to insurers and reinsurers. Today’s generation of tools enables much more sophisticated risk management frameworks that are informed by the models, but not based on the models.