Explainable AI for Actuaries: Concepts, Techniques and Case Studies
The increasing use of artificial intelligence (AI) and machine learning (ML) in the insurance industry in general, and in actuarial work in particular, presents both opportunities and risks. Acceptance of complex methods requires, among other things, a degree of transparency and explainability of the underlying models and of the decisions based on them.
Welcome to this four-part training. The first part offers a qualitative discussion of the concept of explainable artificial intelligence (XAI). In addition to characterising model complexity itself, we discuss questions such as when a model can be considered sufficiently explained and who needs to be able to review and understand such models. Aspects of actuarial diligence are also addressed. The first part concludes with an illustrative and comprehensive overview of explainability techniques and a compilation of helpful notebooks.
In the second part, we will focus on selected standard methods for XAI. We explain how the model-agnostic methods “Individual Conditional Expectation”, “Partial Dependence Plot”, “Counterfactual Explanations” and “Local Interpretable Model-Agnostic Explanations” work and refer to well-known Python packages. Additionally, we examine the model-specific, tree-based feature importance provided by the Python package “scikit-learn”, as sketched below. Throughout this part, we also discuss aspects of actuarial diligence and the limitations of the methods considered.
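To make these references concrete, the following minimal sketch illustrates two of the techniques above with “scikit-learn”: partial dependence and individual conditional expectation curves via PartialDependenceDisplay, and the model-specific, impurity-based feature importance of a tree ensemble. The synthetic data and the gradient boosting model are illustrative assumptions, not part of the seminar material.

```python
# A minimal sketch, assuming a synthetic regression task and a
# gradient boosting model; both are illustrative choices only.
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

# Illustrative data: 1000 observations, 5 numeric features
X, y = make_regression(n_samples=1000, n_features=5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# kind="both" overlays the ICE curves (one per observation) on the
# partial dependence curve (their average) for features 0 and 1
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1], kind="both")
plt.show()

# Model-specific, impurity-based feature importance of the tree ensemble
print(model.feature_importances_)
```

Note that the impurity-based importance is computed from the training data alone and is known to overstate the relevance of high-cardinality features, one of the limitations discussed in this part.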
The third part will introduce participants to variable importance methods. These methods aim to answer the question: “Which inputs are the most important for my model?”. We will provide a general overview of variable importance methods and introduce selected methods in depth. In addition to providing examples and use cases, we will cover enough of the underlying theory to ensure that users have a good understanding of the methods' applicability and limitations. Throughout, we will also discuss practical aspects of actuarial diligence, such as how to interpret and communicate results from these methods.
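As a flavour of this part, the sketch below applies one widely used variable importance method, permutation importance, with “scikit-learn”. The data, model and parameter choices are illustrative assumptions rather than seminar material.

```python
# A minimal sketch of permutation importance on illustrative data.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data and an illustrative random forest model
X, y = make_regression(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance: the drop in score when a single feature is
# shuffled, averaged over n_repeats independent permutations
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: "
          f"{result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")
```

Evaluating the importances on a held-out test set, as above, reflects each feature's contribution to generalisation performance rather than to the fit on the training data.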
The last part of the seminar provides an interactive, hands-on experience with explainable AI using a Jupyter notebook designed around an actuarial use case. Participants will be guided through a comprehensive machine learning workflow before delving into the implementation of various XAI techniques. In exploring selected XAI methods, we will focus on their mathematical foundations in order to critically assess their applicability and suitability within the actuarial field. The interactive segment concludes with an additional case study for participants to tackle, applying the XAI methods they have learned to deepen their understanding.