
Beginning with my academic training in earthquake engineering and continuing throughout my 25-year career in property/casualty insurance, I have been driven by the challenge of understanding, addressing and mitigating the impact of large-scale natural catastrophes.

Through the years, as natural and man-made catastrophes have become more frequent, widespread and dangerous, my efforts, along with those of colleagues and competitors, have been informed by numerous lessons learned—both from the successful application of modeling to save lives and mitigate damage, and from inaccurate models that failed to prevent devastating losses.

Today, perceptions about catastrophe modeling range from ardent belief in its power to mitigate disasters to skepticism that it can consistently deliver the results intended. This extreme disparity may stem from a combination of unrealistic expectations about what even the best catastrophe models can accomplish and a lack of understanding of what's needed to optimize their application and results. In fact, the current state of catastrophe modeling lies somewhere in the middle.

With that controversy as a backdrop, here are six insights to help clarify misunderstandings about why catastrophe models succeed or fail, as well as strategies for members of the risk and insurance industry to get the most from their efforts to model catastrophic risk.

  1. Effective modeling goes beyond the models. What makes one modeling approach better than another, or effective in one scenario and not in another? It often boils down to three key factors: (1) the quality and precision of the model itself; (2) the availability and timely input of quality data (discussed in more detail in the following point); and (3) the application of robust analytics to effectively interpret and leverage the model's output. Without all three elements, it's nearly impossible to differentiate one model from another or to have any sustainable success with a modeling exercise.
  1. Garbage in, garbage out: the undeniable value of good data. The lack of quality data is often the biggest impediment to effective modeling. Even the most rigorously calculated scientific assumptions require good inputs. For instance, most hurricane models can differentiate between various roof types, the presence of hurricane shutters and other construction factors, generating different outcomes based on a dozen or more construction variables. However, it is still common for these data elements to be missing or inaccurate. Ultimately, if you put garbage in, it doesn't matter how good the model is—you're going to get garbage out. If you're modeling flood risk and looking at rooftops instead of street level, you're already off by 10 meters, and you'll get answers that lead to poor decisions.
  1. Adjusting expectations for the "flaw" of averages. One factor contributing to the skepticism surrounding catastrophe modeling is that all models inherently contain a degree of uncertainty. Even the best model cannot predict outcomes with 100 percent confidence. It helps to understand that models are probabilistic, built on simulations of many possible outcomes. Consequently, even with the best data and analytics, a model estimates a distribution of outcomes; it cannot reliably pinpoint any individual event as it unfolds.
  1. Understand the difference between precision and accuracy. With steady advances in computational power in recent years, models have become increasingly precise. Many widely available flood risk models are precise to within 10 meters, which was unthinkable just a decade ago. Meanwhile, traditional hurricane and earthquake models are delivering higher resolution than ever. The rub is that greater precision doesn't automatically translate into better accuracy, for some of the same reasons discussed earlier. These are still simulations—and greater precision in fact places even more onus on data quality and downstream analytics.
  1. Modeling and the catastrophe wild card: addressing the impact of climate change. Even as some aspects of climate change continue to be debated, others already are affecting catastrophe models and compromising their accuracy. Sea-level rise, the global proliferation of wildfire events and the increasing frequency of what were once considered 100-year events are breaking historical paradigms: event frequency is now hard to predict, and severity keeps evolving. Catastrophe modeling of weather-related events is still ultimately founded on historical data, but climate change is making this a moving target. New specialist modeling firms are emerging in different areas of the world, focusing on climate change and emerging perils such as wildfire. It's worth monitoring their progress and how effectively they're able to model weather-related incidents in the coming years.
  1. Taking advantage of the proliferation of catastrophe models. Right now, the catastrophe modeling industry is on the cusp of a new era. There are more entrants coming into the industry, including a growing number of firms offering robust capabilities and high-quality models for assessing regional perils. Faced with increasing competition and the need to drive results, the insurance industry is continuing to evolve. The growing interest in parametric insurance is changing how models are constructed and applied, since model output can now directly determine actual insurance payouts. Given all these dynamics, the pace of model development will accelerate. Today, the modeling industry is moving in new directions as insurers, risk executives and other buyers pressure these firms to collaborate on a common set of standards, which will make it much easier to work with multiple models.
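The "flaw of averages" point above can be made concrete with a toy simulation. The sketch below is purely illustrative—real catastrophe models are far more sophisticated, and every parameter here (event frequency, the heavy-tailed severity draw, the dollar figures) is a hypothetical assumption, not drawn from any actual model. It simply shows why the average annual loss alone is a misleading summary of a probabilistic model: the 1-in-100-year outcome dwarfs the mean.

```python
import random

random.seed(42)

def simulate_year():
    """Simulate one year's total catastrophe loss in $M.

    All parameters are hypothetical: roughly a 26% chance of at least
    one major event per year, with a heavy-tailed (Pareto-like) severity
    so that rare years produce very large losses.
    """
    # Crude frequency draw: ten independent 3% chances of an event
    n_events = sum(1 for _ in range(10) if random.random() < 0.03)
    total = 0.0
    for _ in range(n_events):
        u = random.random()
        total += 10.0 / (u ** 0.7)  # $10M minimum, occasional huge losses
    return total

# Simulate 100,000 years and summarize the distribution
years = sorted(simulate_year() for _ in range(100_000))
mean_loss = sum(years) / len(years)
p99 = years[int(0.99 * len(years))]  # approximate 1-in-100-year loss

print(f"Average annual loss:      ${mean_loss:,.1f}M")
print(f"1-in-100-year annual loss: ${p99:,.1f}M")
```

Most simulated years produce no loss at all, yet the tail years are severe—so a portfolio priced or capitalized against the average alone would be badly exposed. This is the sense in which a probabilistic model describes a distribution of outcomes rather than predicting any single event.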

A final word of caution: Don't approach the purchase of catastrophe models as you would other technology. Some buyers choose modeling companies and their solutions using the same approaches they use for selecting technology vendors—for example, picking a single modeling firm to assess all their risks across every region of the world, simply to avoid integration hassles.

Instead, buyers should start by understanding their own risk profile. Know which perils you're exposed to and which regions concern you most. Certain models are more credible at predicting certain perils in specific regions, so there is considerable value in completing the necessary due diligence and knowing exactly what you need. If the optimal solution involves a multimodel technology setup, the initial costs may be higher, but the return and results are likely to be worthwhile.