Artificial intelligence? Machine learning? Thinking big is great, but before insurers pursue such advanced big data projects, they must not lose sight of the fundamentals. They must first improve their data entry. After all, big data technology requires a firm base of underlying data quality to function correctly. It's the old adage: "garbage in, garbage out."

Executive Summary

Many major insurers still rely on manual or semi-automated data quality checks, which are time-consuming and costly and leave a high margin for error. As a result, before they pursue advanced data projects such as artificial intelligence or machine learning, they need to have confidence in their data basics, writes Nick Mair, CEO and co-founder of Atticus DQPro, a London-based data monitoring and compliance platform for global re/insurers.

The current debate in InsurTech centers on how data generated by emerging technologies such as artificial intelligence (AI) and the Internet of Things (IoT) can be harnessed alongside existing carrier data to produce new insights on risk, pricing and customer engagement.

Insurance is transitioning from a data-generating market to a data-powered market. Clearly, data is no longer a by-product of selling or administering insurance; it is now the key driver of business development and operation. And it will play an even greater role once data-hungry technologies such as AI and machine learning become mainstream analytical and operational tools for the global re/insurance industry.

But are we getting ahead of ourselves? It's one thing to fire out buzzwords like AI or predictive analytics, or to debate the applications and ramifications of new technology, but it's quite another to implement these technologies in practice.

It would surely be fruitless to apply an advanced machine learning algorithm to analyze datasets and look for trends and opportunities if you are not confident in the quality of the data fundamentals that it will be learning from. Just one data point entered incorrectly at the start of a process can skew the results and ultimately cast doubt on an entire model or program.
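To make that concrete, here is a minimal sketch in Python, using entirely made-up premium figures, of how a single mis-keyed value can distort even the simplest aggregate a model would learn from:

```python
from statistics import mean

# Hypothetical gross written premium figures (GBP) keyed in by hand.
# One record has been mis-keyed: 12,500 entered as 1,250,000.
clean = [9_800, 11_200, 12_500, 10_400, 13_100]
with_typo = [9_800, 11_200, 1_250_000, 10_400, 13_100]

print(f"Average premium (clean data):    {mean(clean):>12,.0f}")
print(f"Average premium (one bad entry): {mean(with_typo):>12,.0f}")
# The single mis-keyed value inflates the average more than twentyfold,
# and any model trained on that feed inherits the distortion.
```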

Manual Data Entry Still Common

High-quality data fundamentals are the backbone of every modernization and technology initiative, both within individual companies and for the market as a whole. Yet at present, many leading specialty carriers still struggle to maintain a clear view of what data is stored, and actually used, across their operations.

In fact, and rather embarrassingly, far too many major carriers still rely on manual or semi-automated data quality checks, which not only consume time and money but also leave an unacceptably high margin for error.

This ad hoc approach to data will not help insurers build efficient processes, which they must do or face disruption from companies able to adopt the best InsurTech developments to improve the customer experience, both when customers buy coverage and when they submit a claim. After all, insureds have come to expect the kind of personalized digital services that banks and e-commerce companies offer online. It is therefore not enough for insurers to maintain the status quo.

Data-Powered Compliance

Ultimately, those companies and individuals tasked with overseeing the implementation of any AI or similar project will be responsible for the technology's output. As such, we can expect data quality to come under greater regulatory scrutiny, particularly if decisions that directly affect clients, such as whether to challenge a claim, are being made by AI rather than by humans.

Data quality is central to compliance, both present and future. Once again, a data-powered approach, rather than a merely data-generating one, is what enables an insurer to evidence a quality dataset with the correct controls in place. Becoming compliant, and staying compliant in a shifting regulatory landscape, need not be a complex process for insurers, even for those operating across multiple territories, such as carriers with U.S. operations underwriting in different U.S. states.

Move Away From Manual Entry

Clearly, global carriers must move away from manual, ad hoc processes and establish automated data checks to monitor and evidence their compliance: for example, verifying all incoming policy coding, checking that business is not being written in territories subject to sanctions, or providing a Solvency II audit trail and evidence of compliance with other regimes such as Lloyd's standards in the specialty London Market.
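As a rough illustration of what such automated checks could look like, the sketch below runs hypothetical incoming policy records through two illustrative rules, a sanctioned-territory check and a policy-coding check, and prints an audit-style log line for each record. Every name, code and rule here is an assumption made for the example, not a description of any particular carrier's platform or of DQPro.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sanctioned-territory codes; a real carrier would source these
# from its compliance team or a maintained reference dataset.
SANCTIONED_TERRITORIES = {"XX", "YY"}
VALID_CLASS_CODES = {"PROP", "MAR", "AV"}  # illustrative policy coding only


@dataclass
class PolicyRecord:
    policy_id: str
    territory: str
    class_code: str


def check_policy(record: PolicyRecord) -> list[str]:
    """Return the list of rule failures for one incoming policy record."""
    failures = []
    if record.territory in SANCTIONED_TERRITORIES:
        failures.append("written in sanctioned territory")
    if record.class_code not in VALID_CLASS_CODES:
        failures.append(f"unrecognised class code '{record.class_code}'")
    return failures


def run_checks(records: list[PolicyRecord]) -> None:
    """Run every record through the rules and emit an audit-style log line."""
    timestamp = datetime.now(timezone.utc).isoformat()
    for record in records:
        failures = check_policy(record)
        status = "PASS" if not failures else "FAIL: " + "; ".join(failures)
        print(f"{timestamp} {record.policy_id} {status}")


if __name__ == "__main__":
    run_checks([
        PolicyRecord("P-1001", "GB", "PROP"),
        PolicyRecord("P-1002", "XX", "MAR"),   # trips the sanctions rule
        PolicyRecord("P-1003", "US", "HULL"),  # trips the coding rule
    ])
```

Keeping each rule as a small, self-contained function is one way to make the check set easy to extend, for instance with a field-completeness rule to support a Solvency II audit trail, without reworking the surrounding pipeline.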

Poor-quality data not only affects an individual insurance company but can also cause contagion further downstream for partner companies when databases are shared or integrated. Applying the same data integrity checks across multiple platforms helps to identify issues upstream, before they cause errors or incur cost in secondary systems.

The global insurance market is generating data at an exponential rate, yet a very high proportion of that data is still duplicated or entered manually and unnecessarily, leaving an unacceptable margin for error. The daily back-office cost of the resulting disorganized, remedial workflows is significant.

Data Confidence

Surely it is time to question the value of continuing to use manual, people-intensive methods and spreadsheets to check for data errors that were often created by other people and spreadsheets. It is time the industry recognized the need to act and invest now to put these essential issues right, before ushering in the big data technology era, at mind-boggling cost, and coming to rely on its outputs.

It is entirely within carriers' control to get these fundamentals right, with minimal cost and disruption, and to move into an InsurTech future with data confidence.

*This story was originally published by Insurance Journal