Much of the discussion around how to manage the advanced forms of artificial intelligence—machine learning, generative AI, large language models—deals with them only as technologies. This is a mistake.

Executive Summary

"Like any employee, AI must be onboarded to learn 'how we do things around here,'" says SAS Institute Insurance Industry Advisor Mike Fitzgerald. He notes that advanced forms of AI have characteristics that set them apart from other technologies—including the ability to make recommendations and decisions on their own that may not reflect corporate values—and suggests that insurers address this by applying standard human resource processes to advanced AI to keep it in line.

These tools have characteristics that require insurers to apply some of their traditional human resources practices to ensure adequate governance and maintain an acceptable risk exposure.

Advanced AI is Different

The fundamental problem in treating advanced AI as only another technology is that these tools can:

- Learn on their own.
- Generate output on their own.
- Make recommendations or decisions on their own, which may—or may not—reflect corporate values, and which may create—or destroy—trust with customers and employees.

Unlike traditional technologies, AI can perform these activities without the direct involvement of a human. There is no programmer or manager acting as a stopgap to ensure that corporate guidelines are followed, that bias and discrimination are absent, and that reputational damage does not occur.
