TLDR:
- Explainable AI is important for the insurance industry
- Insurtech companies should embrace AI and promote their efforts to investors
Generative artificial intelligence (AI) made significant inroads into fintech in 2023, spurred by the release of OpenAI’s ChatGPT. However, explainable AI, which helps humans understand how AI models arrive at their predictions and decisions, is just as crucial for the insurance industry. Insurance companies were discussing AI well before ChatGPT’s release in November 2022. Insurtech companies should fully embrace AI and showcase their efforts to investors. Advancements in explainable AI are essential for addressing regulatory concerns and have more practical applications in the industry than generative AI.
Improving customer experience is a popular goal among insurance companies. Companies like Lemonade Inc. and Reliance Global Group Inc. are using AI-powered systems to handle customer inquiries and offer smart coaching to agents. AI is also used in sales and marketing for market segmentation and product recommendations. In back-office operations, AI has various applications, including risk assessment and claims handling.
By sector, property and casualty insurers accounted for half of the companies analyzed in the study, followed by managed care companies and life and health insurers. Surveys conducted by the National Association of Insurance Commissioners (NAIC) indicate that life insurers lag behind auto and home insurers in adopting AI and machine learning.
Regulation is a critical factor in the future of AI in insurance. While certain AI use cases may attract little scrutiny, policy pricing driven by AI models is likely to face heightened regulatory attention. State regulators demand detailed explanations from actuaries of how they arrived at their decisions, a requirement that is difficult to satisfy with some AI models, such as neural networks. Explaining the inner workings of these models is complex because they often find intricate relationships between data points that are hard for humans to interpret. However, explainable AI techniques, such as Shapley values and partial dependence plots, are emerging as potential solutions.
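To make those techniques concrete, here is a minimal sketch, assuming the open-source shap and scikit-learn Python packages, that fits a toy gradient-boosted claims-severity model on synthetic policyholder data and then derives both kinds of explanations. The feature names (driver_age, vehicle_value, prior_claims), the data, and the model choice are illustrative assumptions, not taken from any insurer mentioned above.

```python
# Illustrative sketch: Shapley values and a partial dependence plot
# for a toy claims-severity model (synthetic data, assumed features).
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)

# Synthetic policyholder features: driver_age, vehicle_value, prior_claims.
X = np.column_stack([
    rng.integers(18, 80, 500),         # driver_age (years)
    rng.uniform(5_000, 60_000, 500),   # vehicle_value (USD)
    rng.poisson(0.3, 500),             # prior_claims (count)
])
# Synthetic claim severity with a nonlinear age effect.
y = (
    2_000
    + 0.02 * X[:, 1]
    + 500 * X[:, 2]
    + 30 * ((X[:, 0] - 45) ** 2) / 45
    + rng.normal(0, 300, 500)
)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Shapley values: attribute each individual prediction to per-feature
# contributions, i.e. a case-level explanation of a specific price.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values.shape)  # (5, 3): one contribution per feature per policy

# Partial dependence: average predicted severity as driver_age (feature 0)
# varies, showing the model's overall relationship with one rating variable.
PartialDependenceDisplay.from_estimator(model, X, features=[0])
```

The Shapley values give the kind of case-level attribution an actuary could present to a regulator for a single policy, while the partial dependence plot shows how the model’s average prediction moves as one rating variable changes.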
The article suggests that advancements in explainable AI are more likely than regulators growing comfortable with opaque neural networks. It also emphasizes the importance of generative AI being able to explain its conclusions, or at least cite the sources of its information.