Explainable Machine Learning

Monday 27th April 2020 12 PM BST




Machine learning techniques have become increasingly popular in the financial industry, mainly because of:

  • their potential to capture complex interactions in data
  • their potential to deliver better predictive models than traditional statistical models
  • their ability to capture non-linear relationships across a range of inputs

Machine learning techniques have been viewed as useful additions to the actuary’s modelling toolkit, enabling insurers to process and learn from more data.

Nevertheless, these models - sometimes viewed as ‘black box’ models - can be hard to interpret, audit and debug, which in turn makes it harder to trust and act on their predictions.

Building on Actuartech’s previous webinar on Interpretable Machine Learning, we are pleased to bring you another insights session on the topic of explainable machine learning, presented by Reacfin.

We will discuss some of the concerns around delegating decisions to machines and how to overcome these challenges, and touch on the trade-off between predictive power and explainability.


We will also go beyond the technical concepts and explore how to design and build these models so that they can be used in stakeholder communication, meet professional conduct requirements, and support fairness in insurance pricing.


During this webinar we will expand on the techniques highlighted previously and introduce additional techniques for better understanding and interpreting machine learning models and their results, showing why they need not be viewed as ‘black box’ models.

We will build on this to identify ways to gain sufficient comfort in these models to make business decisions and to explain their impact to stakeholders.

These interpretability tools make the use of machine learning techniques much more relevant in practice.
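As a flavour of what such interpretability tools look like, here is a minimal sketch of one widely used technique, permutation feature importance, using scikit-learn. The data and model below are purely illustrative placeholders (synthetic regression data, a gradient boosting model), not material from the webinar itself.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for, e.g., insurance pricing features.
X, y = make_regression(n_samples=500, n_features=5, n_informative=2,
                       random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in the test score:
# features whose permutation hurts the score most matter most to the model.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Because it only needs model predictions, this kind of diagnostic applies to any fitted model, which is part of what makes ‘black box’ models auditable in practice.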


Samuel Mahy
Head of the Non-Life Centre of Excellence
Xavier Marechal
Founder and CEO