Executive Summary

Catastrophe modeling is the antithesis of big data analytics. Hurricanes and earthquakes that produce major losses are rare phenomena, so there is not a wealth of scientific data for estimating the frequencies of events of different magnitudes in specific locations. But when a significant event occurs, the tens of thousands of resulting claims provide the big data surrounding catastrophes, and this data is very valuable for improving catastrophe models. Here, Karen Clark explains how insurers can leverage their own claims data for more credible catastrophe loss estimates and for competitive advantage.

Big data for catastrophes sounds like an oxymoron. It is precisely because of the lack of historical data that actuaries have left catastrophe modeling and loss estimation to external third parties specializing in model development. The catastrophe model vendors have developed proprietary techniques for extrapolating from the limited data using statistical analysis, scientific opinion and expert judgment.

But when a major event occurs, there is a wealth of claims information that can be used to improve the models. In particular, claims data can be analyzed to fine-tune the damage functions in the vulnerability module, one of the four primary model components. To fully leverage this valuable information, insurers need the right tools and processes in place.
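A damage function maps hazard intensity (for example, peak gust wind speed) to a damage ratio, i.e., loss as a fraction of replacement value. As a minimal sketch of the idea, an insurer could bin its claims by wind speed and compute the aggregate damage ratio in each bin; all field names and figures below are hypothetical, and a real calibration would involve far more data and careful statistical treatment:

```python
from collections import defaultdict

def empirical_damage_function(claims, bin_width=10):
    """Bin claims by hazard intensity and compute the aggregate damage
    ratio (total loss / total replacement value) in each bin -- a
    simple empirical damage function.

    claims: iterable of (wind_speed_mph, paid_loss, replacement_value)
    Returns {bin_lower_bound_mph: damage_ratio}.
    """
    totals = defaultdict(lambda: [0.0, 0.0])  # bin -> [sum_loss, sum_value]
    for wind_speed, loss, value in claims:
        b = int(wind_speed // bin_width) * bin_width
        totals[b][0] += loss
        totals[b][1] += value
    return {b: round(s_loss / s_val, 3)
            for b, (s_loss, s_val) in sorted(totals.items())}

# Hypothetical claims: (wind speed in mph, paid loss, replacement value)
claims = [
    (72, 5_000, 250_000), (78, 12_500, 250_000),
    (95, 50_000, 200_000), (98, 30_000, 300_000),
    (118, 150_000, 300_000), (122, 90_000, 200_000),
]
print(empirical_damage_function(claims))
# -> {70: 0.035, 90: 0.16, 110: 0.5, 120: 0.45}
```

Curves like this, estimated from an insurer's own portfolio, are what would replace or adjust a vendor's generic "industry average" damage functions.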

Historically, insurers have not placed much emphasis on the efficient collection and analysis of their catastrophe claims data for modeling purposes because it was not possible to leverage this information directly in the third-party models. At best, insurers could give their loss data to the model vendors to help the vendors improve their generic or “industry average” damage functions. In general, the data available to the modeling companies has been limited in scope, and its resolution and quality have varied widely.
