Mind Foundry Ltd., an artificial intelligence startup that helps insurers decide which drivers should be covered, has raised $22 million in funding, the latest sign of growing demand to deploy AI in critical sectors where there’s little room for error.

The startup’s AI tools are being used to detect cognitive decline in older drivers in Japan, helping Asian insurance giant Aioi Nissay Dowa Insurance Co. predict and prevent accidents. Aioi invested in the funding round along with Parkwalk Advisors and the University of Oxford, said Brian Mullins, Mind Foundry’s chief executive officer. The Series B round brings the startup’s total funding to $44 million.

Traditionally, insurers have relied heavily on details such as the type of car and the driver’s age to predict who is more likely to be involved in serious accidents — and to set insurance premiums. In recent years, some insurers have leaned on AI software both to expedite settling claims and to analyze driving data to come up with more precise risk assessments for customers. AI’s increasing influence on insurance coverage decisions may worry some, but Mullins said there’s an upside when it comes to seniors.

“If a person is over a certain age, their insurance policy is gone, their independence is gone,” Mullins said. “Not all individuals age the same way. Now data can tell us that the person is driving safely. That’s a responsible use of AI.”

Mind Foundry was founded in 2016 by two University of Oxford computer scientists with the goal of designing “AI for high-stakes applications,” including defense and public infrastructure. It began working on the cognitive-decline product earlier this year and now wants to take its insurance offering to other geographies.

The Oxford-based startup’s data scientists collaborated with insurance experts to build an AI system that finds direct links between cognitive decline and road accidents. It was developed with enormous datasets, including 9 billion miles of telematics trip data, sensor measurements and dashcam accident footage, to help understand and underwrite risk. The AI system studied behavior patterns such as harsh braking, erratic swerving and sudden acceleration among drivers who had been in the most severe, large-loss accidents: incidents involving catastrophic personal injury and high costs. It then looked for similar patterns among older drivers to flag cognitive decline before it became dangerous.

“Large loss cases are not just the biggest risk in an insurance portfolio but also the most difficult to model and predict,” Mullins said.

Aioi is currently piloting Mind Foundry’s system with 2 million older drivers in Japan who’ve consented to let it analyze their telematics data, including dashcam footage. That could lead to more personalized insurance plans with lower premiums for those whose driving doesn’t exhibit risky patterns, Mullins said. The startup is exploring ways to inform drivers, through an app or other means, about their own AI-gleaned driving patterns and whether those patterns are safe or risky.

“We believe the output of this pilot can potentially lead to enhanced safety and extended driving longevity for seniors,” said Jun Ikegami, who heads Aioi’s R&D laboratory. Societies “must tackle the challenges presented by aging populations” to ensure equity, he said. In Japan, more than 10% of the population is age 80 or older.

There has been a frenzy around all things AI this year, but with it has come added public pressure to make AI services more transparent and accountable, particularly as these technologies are applied in health care, criminal justice and insurance — industries where biases baked into AI systems can carry far greater risks.

“It’s one thing for AI to misdiagnose cancer, another thing to play a tune I don’t like,” said Vincent Muller, professor of philosophy and ethics of AI at the University of Erlangen–Nuremberg. With insurance, there’s also the question of accountability and remedies if technology leads to an error. “Humans make mistakes, sometimes they can be trusted less than the machines, but at least they can be held responsible,” Muller said.

Humans are involved in the process, according to Mind Foundry, including by providing feedback on the data used to develop its AI systems. The company said AI does not make the decisions; rather, it informs insurance experts about a potential customer’s risks before they determine the coverage and pricing for that individual.

“We go through a responsible AI process before we choose an AI technology to develop a product,” said Alessandra Tosi, a senior scientist at the startup. The startup works in highly regulated industries where it’s necessary to help regulators understand how its technology works to ensure it isn’t unfair or discriminatory.

Mind Foundry has also built AI tools to assess other driving issues. In its first 30 days of deployment, one of its AI models helped Aioi’s European unit detect more than 40,000 off-policy delivery trips and identify 100 policyholders using personal vehicles for commercial deliveries, helping underwriters review such claims case by case.

The company applies AI in other segments too, working to help evaluate risks in public infrastructure such as railways and bridges so governments or cities can prioritize maintenance budget allocations. It builds AI systems to study risks from climate change and has deployed AI for the UK’s Oxfordshire County Council to optimize the placement of EV charging infrastructure in complex road networks.

Its toughest task, however, may be trying to use AI to insure AI. Mind Foundry is working to assess the risks of AI for autonomous vehicles, shifting from the current system of looking at drivers’ individual liability to looking at risk from the vehicle and its software across the fleet. “Understanding the risks associated with AVs is the first step towards creating an insurance product to safeguard against that risk,” Mullins said.