As an insurance carrier executive, how often have you asked for an internal data report only to be told it will take weeks or months and thousands of dollars to develop and cannot be easily modified once it’s done?

Executive Summary

Data reporting and analysis are often hampered in insurance organizations by siloed functional systems that effectively maintain “disconnects” between data reported for the same policy but for different purposes. A companywide approach to defining and reporting premium and claims for discrete coverages can make it much easier for executives to request and receive data reports for strategic analysis.

For all the advances made in gathering and analyzing data, insurers all too often find themselves constrained by outmoded data structures embedded in legacy systems. The problem is compounded in companies that have multiple systems, which are commonly found in merged organizations and in companies that have grown in an ad hoc fashion.

When policy administration systems reflect functional silos, carriers are often left with two less-than-desirable options for regulatory reporting and strategic analysis:

  • Manually extracting data into numerous spreadsheets, where managers and specialists work to balance and reconcile data into compatible formats.
  • Writing lines and lines of code to query, organize and report the requested data—a costly, cumbersome and time-consuming process that produces results that are not easy to maintain or modify.

Executives (other than CIOs) generally don’t want to “get into the weeds” of data architecture, but they are well advised to understand that their exposures and losses are not only expressed in the words and numbers of policy forms, rating manuals and declarations.

Exposures and losses are also expressed through an underlying data architecture, principally embedded in a policy administration system, that could (but often doesn’t) relate the elements listed above in a manner that allows for fast and efficient reporting and analysis.

To that end, executives are also well advised to develop a basic understanding of the principles of what’s known as “relational database management.”

What Is a “Coverage”?

Suppose you asked your colleagues and subordinates to define a “coverage.” Would they define it the same way you do? Or would their responses reflect different orientations depending on whether they worked in underwriting, claims, regulatory compliance or some other function?

Even if your staff members across different functions have a similar idea of what constitutes a coverage, you may find that your policy administration system treats each coverage differently from one function to another. That would depend on, among other things, the structure of a particular system (underwriting, claims, regulatory reporting, etc.) and the purpose of a data record.

As a result, staff members rarely share a common perspective regarding data elements. Underwriting staff often does not know how a claim will be coded for statistical reporting purposes, claims staff is probably unaware of how premium is derived and allocated in the rating algorithm, and so on.

Unless directly addressed by strategic management, this “data disconnect” makes it virtually impossible to establish the comprehensive, enterprisewide data structure needed to maintain and update systems efficiently and to provide powerful reporting capabilities.

Relational Keys

To make this process efficient and cost-effective, developers need to define the “relational keys” among data elements and records.

Whether done by carrier staff, software vendor staff or consultants, the central task of insurance IT development is to understand how a legal contract, the insurance policy, translates into specific data elements.

We all know that a premium is paid in exchange for agreements made in the contract, and then, as fortune dictates, claims are paid. These agreements, commonly referred to as coverages, provide the links from data on the premium side to data on the claims side.

That seems pretty straightforward, but you would be surprised at the variety of responses you get when you ask several insurance professionals—in the same organization—to define an insurance coverage and identify its elements. The key to defining a “coverage” for purposes of optimum reporting and analysis lies in understanding and tracking how the premium paid relates to the coverage provided.
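The linking role that coverages play can be made concrete with a small relational sketch. The schema and values below are entirely hypothetical (no vendor’s actual data model), but they illustrate the principle: when premium records and claim records both carry a coverage key, premium and losses for the same coverage can be reported together with a single query instead of spreadsheet reconciliation.

```python
import sqlite3

# Hypothetical minimal schema: a coverage row links a policy's premium-side
# data to its claims-side data through shared relational keys.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE policy   (policy_id INTEGER PRIMARY KEY, insured TEXT);
CREATE TABLE coverage (coverage_id INTEGER PRIMARY KEY,
                       policy_id   INTEGER REFERENCES policy(policy_id),
                       peril       TEXT,
                       premium     REAL);
CREATE TABLE claim    (claim_id    INTEGER PRIMARY KEY,
                       coverage_id INTEGER REFERENCES coverage(coverage_id),
                       paid_loss   REAL);
""")
con.execute("INSERT INTO policy VALUES (1, 'Acme Bakery')")
con.executemany("INSERT INTO coverage VALUES (?, ?, ?, ?)",
                [(10, 1, 'fire', 1200.0), (11, 1, 'wind', 300.0)])
con.execute("INSERT INTO claim VALUES (100, 10, 450.0)")

# Because each claim carries the coverage key, premium and losses for the
# same coverage line up in one join -- no manual reconciliation.
rows = con.execute("""
    SELECT c.peril, c.premium, COALESCE(SUM(cl.paid_loss), 0) AS losses
    FROM coverage c LEFT JOIN claim cl USING (coverage_id)
    WHERE c.policy_id = 1
    GROUP BY c.coverage_id
    ORDER BY c.coverage_id
""").fetchall()
print(rows)  # [('fire', 1200.0, 450.0), ('wind', 300.0, 0)]
```

The same join, run across a whole book of business, is what turns a weeks-long report request into a routine query.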


Data is not captured and utilized by insurance organizations in a straight-through, linear chain from a beginning to an end. Rather, it passes through a series of relationships and is recycled as it is gathered from various sources and utilized for different purposes.

Consider the typical insurance policy workflow.

Usually, that workflow begins with an application. Application data can come from an agent, through a portal, directly from an insured, from a web channel or from an underwriter. The application data is then augmented by risk information from third-party services such as geo-based risk reports, credit risk reports, loss history reports and reports on estimated replacement costs.

Next, the resulting data is fed through a rating algorithm that cross-references the augmented application data against tables of rates and factors loaded into the policy administration system. This, in turn, generates new data elements: the premium charges.

As a simple example, consider a location address entered into the system through an application. That address may be fed into a third-party verification service to produce a more precise, standardized address. The optimized address is fed into a rating system to determine the risk’s rating territory, then passed along to the policy issuance system to be entered onto a declarations page and onto the billing system for invoicing. Later, it may be passed along to a reinsurer to measure geographic concentrations of risk exposures.
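The address example above can be sketched in a few lines. Every function and table name here is a hypothetical stand-in (a real carrier would call an external verification service and load actual territory tables), but the flow is the point: one verified value feeds rating, issuance and billing rather than being re-keyed in each system.

```python
# Hypothetical pipeline for a single address field: verification, then
# territory rating, then reuse by downstream systems.

def standardize(raw_address: str) -> str:
    """Stand-in for a third-party address verification service."""
    return raw_address.strip().upper()

# Illustrative territory table; a real one is loaded into the policy
# administration system from rating filings.
TERRITORY_TABLE = {"MIAMI": "T-09", "TAMPA": "T-04"}

def rating_territory(address: str) -> str:
    city = address.split(",")[1].strip()
    return TERRITORY_TABLE.get(city, "T-00")

raw = " 123 main st, Miami, FL "
addr = standardize(raw)             # one verified value...
territory = rating_territory(addr)  # ...consumed by rating,
declarations_line = f"Location: {addr} (territory {territory})"  # issuance,
billing_record = {"address": addr}                               # and billing.
print(territory)  # T-09
```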


We define a coverage as a specific agreement to pay compensation for a certain type of loss. Coverages, thus defined, serve as the data building blocks of an insurance policy.

For purposes of data architecture, a coverage is represented by a cell in a table that designates the promise to pay and the covered exposure, subject to several well-known “boundaries” that limit the scope of the promise to pay, including:

  • The application of coverage (named insureds, insureds by definition, additional insureds, scheduled locations, etc.).
  • Restrictions on coverage (excluded perils, activities, persons/organizations, premises/operations/products, etc.).
  • The time period covered (policy periods, reporting periods, income coverage periods after expiration, etc.).
  • The geographic extent of coverage (coverage locations, coverage territory, transit coverage limitations, etc.).
  • The dollar amount of recovery (limits, sublimits, deductibles, coinsurance, loss settlement, etc.).
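Represented as a data record, a coverage is simply the promise to pay plus those boundary attributes. The sketch below uses hypothetical field names to show one way the five boundaries listed above could be captured as typed fields on a single record.

```python
from dataclasses import dataclass, field

# Hypothetical coverage record: the promise to pay (the peril) plus the
# boundary attributes that limit its scope.
@dataclass
class Coverage:
    peril: str                                      # covered cause of loss
    insureds: list = field(default_factory=list)    # application of coverage
    exclusions: list = field(default_factory=list)  # restrictions on coverage
    period: tuple = ("2024-01-01", "2025-01-01")    # time period covered
    territory: str = "USA"                          # geographic extent
    limit: float = 0.0                              # dollar amount of recovery
    deductible: float = 0.0

fire = Coverage(peril="fire",
                insureds=["Acme Bakery"],
                exclusions=["arson by insured"],
                limit=500_000.0,
                deductible=1_000.0)
print(fire.limit - fire.deductible)  # 499000.0
```

Because every coverage carries the same boundary fields, downstream systems (claims, reporting, reinsurance) can interrogate any coverage the same way.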


Having established and designed a data element we call a coverage, we can now link it to the policy administration process, starting with premium calculations.

To calculate premiums, values are captured from product rating tables and fed through a rating algorithm. These values typically include class codes, base rates and factors. Of course, all of this data also needs to be stored and cataloged, both in its raw form and as it applies to each specific policy.

The rating algorithm is typically composed of premium segments, from which the final premium is derived. For example, there may be one rate calculation for the fire peril for the building, another for the extended coverage perils for the building, a third for fire for personal property, and so on.

In the case of extended perils coverage for the building, the premium segment is not configured on a one-to-one basis with the coverage. The single extended coverage perils premium pays for losses resulting from many perils: wind, hail, water and so on.

Therefore, a policy administration system will need to identify each coverage for each extended peril separately, then map them all back to the single extended perils coverage premium segment.
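That many-to-one mapping can be expressed directly in data. The codes below are hypothetical, but the structure is the one just described: each extended peril is a separate coverage for claims purposes, while all of them roll up to a single premium segment for rating purposes.

```python
# Hypothetical premium segments produced by the rating algorithm.
PREMIUM_SEGMENTS = {
    "BLDG-FIRE": 1200.0,  # one segment for the building fire coverage...
    "BLDG-EC":    300.0,  # ...but ONE segment for ALL extended perils
}

# Each individually adjudicated coverage maps back to its segment.
COVERAGE_TO_SEGMENT = {
    "fire":  "BLDG-FIRE",
    "wind":  "BLDG-EC",
    "hail":  "BLDG-EC",
    "water": "BLDG-EC",
}

def segment_premium(peril: str) -> float:
    """Trace an individual coverage back to the premium that paid for it."""
    return PREMIUM_SEGMENTS[COVERAGE_TO_SEGMENT[peril]]

# A wind claim and a hail claim both trace to the same premium dollars:
print(segment_premium("wind"), segment_premium("hail"))  # 300.0 300.0
```

With this mapping stored explicitly, a loss under any extended peril can be reported against the premium segment that funded it.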

Claims and Reporting

The relationship of coverages to premium is one side of the equation; the other side, of course, is losses.

An effective claims processing system should be able to record the coverages included in a particular policy and provide the boundaries and limitations of each coverage. This allows claims staff to adjust claims accurately, according to the coverage intended by the policy. It also allows for more streamlined statistical reporting, which leads us to our next linkage.
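Boundary-aware adjudication can be sketched as a small function. The rules below are deliberately simplified and hypothetical (real loss settlement involves coinsurance, sublimits and more), but they show the mechanism: the claims system reads the coverage record’s boundaries and applies them before paying.

```python
# Hypothetical, simplified adjudication: apply the coverage record's
# deductible and limit to an incurred loss.
def adjudicate(loss_amount: float, covered: bool,
               deductible: float, limit: float) -> float:
    """Payable amount after applying the coverage boundaries."""
    if not covered:          # loss falls outside the coverage's scope
        return 0.0
    return max(0.0, min(loss_amount - deductible, limit))

print(adjudicate(10_000.0, True, deductible=1_000.0, limit=5_000.0))   # 5000.0
print(adjudicate(10_000.0, False, deductible=1_000.0, limit=5_000.0))  # 0.0
```

Because the payable amount is computed from the stored coverage record, the same record that supports adjusting also supports statistical reporting.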

How often have you seen management and regulatory reporting become an afterthought in system design—something that is considered only after the quoting, issuing and billing components have been deployed?

A well-designed system will consider annual statement and statistical reporting, as well as management reporting for operating results and performance metrics, as the underlying data structure is designed and built. With this approach, a carrier can implement and execute sophisticated reporting with relative ease because data already is being recorded in a condition that “relates” it to different carrier functions.

For immediate purposes, better definition, mapping and linking of data allows for much easier statutory reporting and better business intelligence. Beyond that, however, attention to data architecture vastly improves understanding of how the different components of an insurance product perform. (See sidebar “Data Architecture: Transforming Bridged Siloes Into Data Connectivity.”)

Who’s Going to Do This?

It is truly rare to find an individual who understands both cross-functional insurance concepts and the data architecture that undergirds them.

When a carrier implements a new IT system, somebody—from either the insurance side or the IT side—needs to bridge this gap. Often, a carrier will pay its IT vendor to develop this understanding.

However, if the resources are available, an investment in developing the cross-functional understanding needed to establish data linkages is certainly worthwhile.

Once your data connections are defined and understood, your implementation can be effectively tested and validated, your reporting can be streamlined, and the resulting insights and metrics from a well-designed system can be acted upon.