The controversy over whether to use price optimization in insurance continues even though many states have banned the practice, panelists said during the Casualty Actuarial Society’s 2018 Ratemaking and Product Management Seminar, held in Chicago March 19-21.

Price optimization is the systematic use of demand elasticity—the relative tolerance of buyers for rate increases—to maintain satisfactory levels of policyholder retention while instituting rate increases and decreases to meet underwriting goals.

In other words, price optimization is a scientific way to establish caps and floors on rate changes. The practice has been criticized by consumer advocates and some regulators for departing from traditional ratemaking principles based on the cost of risk and introducing new “demand” factors. Some view it as targeting consumers according to their tolerance for rate hikes, essentially charging them as much as they will bear regardless of their risk profile.
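Stripped to its mechanics, the caps-and-floors framing amounts to clamping a cost-based indication to upper and lower bounds. The sketch below is a minimal, hypothetical illustration; the bounds and sample indications are invented and do not come from any rating plan or panelist.

```python
# Hypothetical illustration of caps and floors on rate changes.
# The bounds below are invented for the example, not taken from
# any filed rating plan discussed at the seminar.

def select_rate_change(indicated_change, cap=0.15, floor=-0.10):
    """Clamp a cost-based indicated rate change so that large increases
    are cushioned for policyholders and large decreases for earnings."""
    return max(floor, min(cap, indicated_change))

# A +22% indication is tempered to +15%; a -14% indication to -10%.
for indicated in (0.22, 0.04, -0.14):
    selected = select_rate_change(indicated)
    print(f"indicated {indicated:+.0%} -> selected {selected:+.0%}")
```

The criticism summarized above is aimed not at bounding as such, which is longstanding practice, but at letting those bounds vary with each policyholder’s measured willingness to pay rather than with the cost of risk.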

In light of that concern, panelist Wanchin Chou, chief actuary of the Connecticut Department of Insurance, reminded attendees that a task force of the National Association of Insurance Commissioners (NAIC) determined that certain practices associated with price optimization are incompatible with prevailing statutory requirements on insurance ratemaking.

These practices include factors in rating plans that reflect a policyholder’s propensity to ask questions, file complaints or shop for coverage. (One fear in this regard is that relatively passive consumers may be charged more than their more price- and value-sensitive counterparts.)

Deviations

For many insurers and industry consultants, however, price optimization is, for the most part, a more systematic and objective way of doing something insurers have always done: deviating from cost-based rate indications in order to cushion the impact of rate increases on policyholders and of rate decreases on an insurer’s earnings.

“[With price optimization], we’re selecting a price that deviates from cost-based indications,” said panelist Serhat Guven, regional director of Willis Towers Watson’s consulting division for the Americas. “That’s a standard, accepted practice,” he added. “We’ve always selected prices that deviated from the cost-based indications.”

“Price optimization is a more scientific process that leverages elasticity models to make adjustments from the cost-based price,” Guven said. “These adjustments are designed to be systematically aligned with business objectives.”

Elasticity models are already incorporated in rating plans and rate filings, Guven said, but they generally reflect a company’s competitive rating position and the likely impact of rate changes—factors long used and accepted in insurance rating.

“The idea of deviating from indications is not a new idea,” Guven said. “Price optimization creates a more scientific framework. It leverages information that we gather from an elasticity model to make more scientifically appropriate deviations from the indications.”
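As a rough illustration of what Guven described (a hypothetical sketch, not his or Willis Towers Watson’s method; the logistic retention curve, elasticity values, margin target and bounds are all invented), an elasticity model can be used to pick, within caps and floors, the deviation from the indication that best balances retention against rate adequacy:

```python
# Hypothetical sketch of an elasticity-informed deviation from the
# cost-based indication. All parameters below are invented; Guven
# described the concept, not an implementation.
import math

def retention_prob(rate_change, elasticity, base_retention=0.90):
    """Toy logistic retention curve: renewal probability falls as the
    rate change grows, more steeply for more elastic policyholders."""
    base_logit = math.log(base_retention / (1 - base_retention))
    return 1 / (1 + math.exp(-(base_logit - elasticity * rate_change)))

def select_change(indicated, elasticity, target_margin=0.05,
                  floor=-0.10, cap=0.15):
    """Pick the bounded rate change maximizing expected margin, trading
    retention against the shortfall versus the cost-based indication."""
    def expected_margin(c):
        margin = target_margin - (indicated - c)  # shortfall vs. cost need
        return retention_prob(c, elasticity) * margin
    steps = int(round((cap - floor) / 0.005))
    candidates = [floor + i * 0.005 for i in range(steps + 1)]
    return max(candidates, key=expected_margin)

# Less elastic segments absorb the capped increase; the most
# price-sensitive segment is tempered to preserve retention.
for e in (5.0, 15.0, 30.0):
    print(f"elasticity {e:>4}: selected {select_change(0.10, e):+.1%}")
```

Run as written, the less elastic segments take the full capped increase while the most price-sensitive segment receives a smaller selected increase, which is the systematic alignment with business objectives Guven described.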

Big Data Concerns

While proponents of price optimization see it as a refinement of longstanding practice, consumer advocates see it as another innovation arising from opaque mining and analysis of “big data” collected about consumers without their fully informed consent.

“The advent of big data in insurance raises real challenges for regulators and changes the dynamic between insurers and consumers,” said panelist Birny Birnbaum, executive director of the Center for Economic Justice. “Big data has huge implications for affordability of insurance and for regulators’ ability to keep up with the changes. The current regulatory framework doesn’t provide regulators with the tools to respond effectively.”

As Birnbaum sees it, “opaque algorithms” can use data elements that are not subject to statutory protections or regulatory oversight, leaving consumers with no practical ability to verify the accuracy or completeness of the data.

“The concept of [preventing] unfair discrimination becomes meaningless when you have rate filings with millions of rate classes,” he said. “Also, new [data-defined] risk classifications can serve as proxies for protected classes (race, ethnicity, religion and national origin). These new risk classifications would have the effect of discriminating against those [protected] classes of people.”

To illustrate his point, Birnbaum cited the example of a service promoting a score developed for individuals based on official records of citations against them by public authorities. Birnbaum asked attendees to consider the impact of that type of scoring on African-Americans in Ferguson, Mo., where the U.S. Department of Justice determined that African-Americans were disproportionately cited for offenses that turn on an officer’s perception of behavior.

“Data mining can inherit the prejudices of prior decision-makers or reflect widespread biases that persist in society at large,” Birnbaum said. “Often the patterns discovered are pre-existing societal patterns of inequality and exclusion. Unthinking reliance on data mining can deny members of multiple groups full participation in society.”

Birnbaum had a rather surprising proposal for the group: allow, or even require, insurers to develop rating plans using control variables for the generally protected classes of race, ethnicity, religion and national origin, then disqualify the use of any rating variable found to replicate the results of those controls.

“When you develop a model, you would include race, explicitly,” he said. “When you deploy the model, you would exclude these prohibited factors.”
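One literal reading of that proposal is sketched below, purely as a hypothetical (Birnbaum proposed the principle, not an implementation; the data, the proxy test and its threshold are all invented for illustration): screen each candidate rating variable against the protected-class control, disqualify any that replicates it, develop the model with the control included, then deploy it without the prohibited factor.

```python
# Hypothetical sketch of the develop-with/deploy-without idea.
# Data, thresholds and the proxy test are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

protected = rng.integers(0, 2, n)                    # control (e.g., race)
territory = 0.8 * protected + rng.normal(0, 0.5, n)  # candidate that proxies it
credit    = rng.normal(0, 1, n)                      # candidate that does not
loss_cost = 1.0 + 0.3 * credit + rng.normal(0, 1, n)

def proxies_for_control(candidate, control, r2_threshold=0.30):
    """Flag a candidate rating variable that replicates the control:
    here, if regressing candidate on control explains too much variance."""
    X = np.column_stack([np.ones(len(control)), control])
    beta, *_ = np.linalg.lstsq(X, candidate, rcond=None)
    resid = candidate - X @ beta
    r2 = 1 - resid.var() / candidate.var()
    return r2 > r2_threshold

candidates = {"territory": territory, "credit": credit}
allowed = {name: x for name, x in candidates.items()
           if not proxies_for_control(x, protected)}

# Develop: fit with the control included, isolating its effect.
X_dev = np.column_stack([np.ones(n), protected] + list(allowed.values()))
beta_dev, *_ = np.linalg.lstsq(X_dev, loss_cost, rcond=None)

# Deploy: drop the prohibited factor; keep only non-proxy variables.
X_dep = np.column_stack([np.ones(n)] + list(allowed.values()))
beta_dep, *_ = np.linalg.lstsq(X_dep, loss_cost, rcond=None)

print("allowed variables:", sorted(allowed))
print("control's coefficient in development fit:", round(beta_dev[1], 3))
print("deployed coefficients (intercept first):", np.round(beta_dep, 2))
```

In this toy data set the territory variable is flagged as a proxy for the control while the credit variable survives; a real screen would require far more careful testing than a single variance-explained threshold.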