As environmental, social and governance (ESG) concerns permeate corporate agendas, businesses of all types need to be especially sensitive about how they use customer data.

Executive Summary

The way insurers store and manage customers’ data will be critical when consumers, suppliers and potential new employees assess their ESG credentials, notes Xceedance VP Michael Parcelli, who provides a six-point checklist for successful and transparent AI implementation.

Insurance companies are increasingly using artificial intelligence (AI) to handle manual, low-complexity workflows such as data ingestion, processing and entry into enterprise systems, automation that dramatically increases operational efficiency. However, AI is not a neutral technology; it can be used in ways that reinforce its creators’ biases. As a result, insurance organizations need to be especially careful to develop and use AI ethically and to manage their customers’ data with watertight controls.

In June 2021, the European Insurance and Occupational Pensions Authority published a report on digital ethics, including principles on the ethical use of AI in the European insurance sector. Multiple frameworks for the ethical use of AI are being developed in the United States, covering issues such as the ways businesses extract and store data and how they engage with their customers. More and more enterprises are recognizing the concerns around AI and are actively explaining to customers how they use the technology rather than being reluctantly dragged into the process under threat of regulation. Further, the insurance industry can make its data management declarations more transparent and less complicated. The way insurers store and manage customers’ data will be critical when consumers, suppliers and potential new employees assess their ESG credentials.

While companies must be open and honest about the way they use AI and other technologies, there will always be a commercial tension between being transparent on the one hand and protecting their “secret sauce” on the other. Insurers must carefully manage the balance between openness and commercial confidentiality while maintaining the trust of customers.

In property insurance, this balance is generally straightforward to strike. Insurers can use AI and machine learning to scrape data from photographs to help evaluate the scale of damage at an accident or catastrophe scene. With a comprehensive suite of reliable data, loss adjusters can avoid significant manual work and leverage their experience as the final arbiter of whether a claim should be paid and the value of the settlement. If AI can reduce variability in the claims process by around 50 percent, it is a massive step in the right direction.
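
As a minimal sketch of how such an AI output might feed an adjuster’s workflow, the estimate can be routed so the adjuster always remains the final arbiter. The model fields, confidence threshold and settlement formula below are hypothetical illustrations, not a description of any particular insurer’s system:

```python
from dataclasses import dataclass

@dataclass
class DamageEstimate:
    severity: float      # 0.0 (no damage) to 1.0 (total loss), from an image model (hypothetical)
    confidence: float    # the model's confidence in its own estimate

def route_claim(estimate: DamageEstimate, insured_value: float) -> dict:
    """Turn a model estimate into a recommendation; a human adjuster decides."""
    if estimate.confidence < 0.7:
        # Low confidence: no automated suggestion, full manual review.
        return {"action": "manual_review", "suggested_settlement": None}
    suggested = round(estimate.severity * insured_value, 2)
    return {"action": "adjuster_approval", "suggested_settlement": suggested}

# Example: a photo-derived estimate for a $250,000 insured property.
print(route_claim(DamageEstimate(severity=0.35, confidence=0.9), 250_000))
# {'action': 'adjuster_approval', 'suggested_settlement': 87500.0}
```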

Executing a transparent AI plan requires not only meticulous tactical and process steps but also sensitivity to audience perceptions. Here are six steps to consider for a thoughtful and successful AI implementation:

  1. Identify those specific processes that can be improved—on both execution and output—through the application of AI technology.
  2. Clearly narrate the benefits to both the business and consumers in uncomplicated terms.
  3. Demonstrate how data will be collected, exactly what data points will be used and why.
  4. Ensure and attest that the AI implementation will be under constant assessment for the material value and principled use of applicable data.
  5. In non-regulated locations, afford transparency of findings for specific data points (such as individual credit reports provided by credit reporting agencies).
  6. Avoid or discontinue AI-augmented processes that do not add tangible value for consumers.

When it comes to using AI, there can be ethical complexities, such as facial recognition technology used as a diagnostic tool. Today, AI is mostly used to extract data from photographs and forms; the next stage in the technological journey is to use machine learning to process that information against established business rules, in other words, to automate what happens to the information extracted from the original data source. To that extent, we are still at the first level of maturity with AI in insurance.
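
To illustrate what that next stage could look like, here is a minimal sketch of extracted form data being checked against business rules, with failures routed back to a human. The field names, rule thresholds and exclusions are invented purely for illustration:

```python
# Hypothetical fields extracted from a claim form by an OCR/AI step.
extracted = {"policy_active": True, "claim_amount": 4_200.0, "loss_type": "water_damage"}

# Simple, auditable business rules; each returns (passed, reason).
RULES = [
    lambda c: (c["policy_active"], "policy must be active"),
    lambda c: (c["claim_amount"] <= 5_000, "amounts above 5,000 need an adjuster"),
    lambda c: (c["loss_type"] != "flood", "flood losses are excluded"),
]

def evaluate(claim: dict) -> tuple[str, list[str]]:
    """Apply the rules to extracted data; any failure refers the claim to a human."""
    failures = [reason for passed, reason in (rule(claim) for rule in RULES) if not passed]
    return ("auto_process" if not failures else "refer_to_adjuster", failures)

print(evaluate(extracted))  # ('auto_process', [])
```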

The increased use of parametric-based insurance in areas such as travel is an instructive paradigm for forthright AI deployment. Parametric insurance, where an agreed event triggers a claim, is generally a simple insurance arrangement. For example, if a train or a plane is delayed beyond a certain amount of time, the policy pays out immediately without the policyholder having to make a claim. The simplicity of this type of insurance can help the industry build “technological confidence” with consumers.
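
Part of that simplicity is that the trigger logic can be written down in a few lines. A minimal sketch follows, where the two-hour threshold and the fixed payout amount are assumptions for illustration rather than terms of any real policy:

```python
from datetime import datetime, timedelta

DELAY_THRESHOLD = timedelta(hours=2)   # assumed delay trigger agreed in the policy
PAYOUT = 150.00                        # assumed fixed payout per delayed trip

def parametric_payout(scheduled: datetime, actual: datetime) -> float:
    """Pay automatically once the agreed delay is exceeded; no claim is filed."""
    delay = actual - scheduled
    return PAYOUT if delay >= DELAY_THRESHOLD else 0.0

# Example: a flight scheduled for 09:00 that departs at 11:30.
print(parametric_payout(datetime(2021, 6, 1, 9, 0), datetime(2021, 6, 1, 11, 30)))  # 150.0
```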

Let’s also keep in mind that attitudes toward data privacy differ markedly by age. To young people who have grown up surrounded by social media, privacy carries less weight; they feel comfortable sharing most aspects of their lives online and on social platforms. Older people tend to be more skeptical about what companies may do with their data, as seen in the backlash against social media providers for selling their users’ information to advertisers. Businesses, including insurers, need to reassure all consumers, and especially those in older generations, that their data is being collected and used ethically.

There is widespread recognition that businesses will increasingly use AI to boost efficiency and improve the customer experience as large segments of the economy transition to digital interactions. With ESG concerns so important to consumers now, this is an ideal opportunity for insurers to re-evaluate data strategies and assess initiatives, including in the use of AI technology, to establish and retain customer confidence.