Explainable AI: Basics and Recent Developments

Monday, 27 June 2022, 15:00 – 19:00 CEST (Central European Summer Time - Gothenburg, Sweden)


Wojciech Samek

Fraunhofer Heinrich Hertz Institute, Germany


Objectives:

Being able to explain the predictions of machine learning models is important in critical applications such as medical diagnosis or autonomous systems. The rise of deep nonlinear ML models has led to massive gains in predictive performance. Yet, we do not want such high accuracy to come at the expense of explainability. As a result, the field of Explainable AI (XAI) has emerged and has produced a collection of methods capable of explaining complex and diverse ML models. This tutorial will give a structured overview of the approaches that have been proposed for XAI. In particular, it will present the motivations for such methods, their advantages and disadvantages, and their theoretical underpinnings. It will also show how these techniques can be extended and applied so that they deliver maximum usefulness in real-world scenarios.


Content and benefits:

The first part of the tutorial will present motivations for XAI; in particular, it will give examples where simple non-XAI validation techniques can strongly mislead the user in their assessment of model performance. The second part will present several XAI methods that successfully cope with the highly nonlinear ML models used in practice and discuss their theoretical underpinnings. The third part will present recent developments in XAI.
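
To give a concrete flavor of the attribution methods discussed in the second part, the snippet below sketches layer-wise relevance propagation (LRP) with the epsilon rule on a tiny two-layer ReLU network. The network, its weights and the input are made-up placeholders for illustration, not the tutorial's reference implementation.

import numpy as np

# Toy two-layer ReLU network with random weights (placeholder, not a trained model).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # hidden -> output

x = rng.uniform(size=4)                          # a single input example

# Forward pass.
a1 = np.maximum(0.0, x @ W1 + b1)
out = a1 @ W2 + b2
c = int(np.argmax(out))                          # explain the winning class

def lrp_epsilon(a, W, b, R, eps=1e-6):
    """Redistribute relevance R from a layer's outputs to its inputs a (epsilon rule)."""
    z = a @ W + b
    z = np.where(z >= 0, z + eps, z - eps)       # stabilize against division by zero
    s = R / z
    return a * (s @ W.T)

# Start with the selected class score as total relevance and propagate it backwards.
R_out = np.zeros_like(out)
R_out[c] = out[c]
R_hidden = lrp_epsilon(a1, W2, b2, R_out)
R_input = lrp_epsilon(x, W1, b1, R_hidden)

print(np.round(R_input, 4))   # per-feature relevance; sums approximately to out[c]

Each entry of R_input indicates how much the corresponding input feature contributed, positively or negatively, to the explained class score; in image applications such scores are typically rendered as a heatmap over the pixels.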

The topics covered include:

  • Motivations: Black-box models and the “Clever Hans” effect
  • Explainable AI: methods for explaining deep neural networks
  • Unifying views on explanation methods & theoretical underpinnings
  • Evaluating explanations (see the perturbation sketch after this list)
  • Applications of XAI
  • Explaining beyond deep networks, single-feature attributions, and individual predictions
  • XAI-based model improvement
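
As a taste of the evaluation topic above, the following sketch implements a simple perturbation test (often called pixel flipping): features are removed in order of decreasing relevance and the drop in the model score is recorded, with a faster decay indicating a more faithful explanation. The model, input and relevance scores are made-up stand-ins for a trained classifier and an attribution method.

import numpy as np

rng = np.random.default_rng(0)

def model_score(x):
    # Stand-in for a trained model's score for the explained class.
    w = np.linspace(1.0, 0.1, x.size)
    return float(w @ x)

x = rng.uniform(size=10)                  # input features
relevance = np.linspace(1.0, 0.1, 10)     # attribution from some XAI method (placeholder)

order = np.argsort(-relevance)            # most relevant feature first
scores = [model_score(x)]
x_perturbed = x.copy()
for i in order:
    x_perturbed[i] = 0.0                  # "remove" the feature (here: set it to zero)
    scores.append(model_score(x_perturbed))

print(np.round(scores, 3))                # a steep initial drop suggests a faithful explanation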


Target Audience:

This tutorial targets core as well as applied ML researchers. Core machine learning researchers may be interested in learning about the connections between the different explanation methods and the broad set of open questions, in particular how to extend XAI to new ML algorithms. Applied ML researchers may find it interesting to understand the strong assumptions behind standard validation procedures and why interpretability can be useful to further validate their models. They may also discover new tools to analyze their data and extract insight from them. We expect 300 participants. Participants will benefit from having a technical background (computer science or engineering) and basic ML training.


List of recommended papers:

W Samek, G Montavon, S Lapuschkin, C Anders, KR Müller. Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications, Proceedings of the IEEE, 109(3):247-278, 2021.
https://doi.org/10.1109/JPROC.2021.3060483

G Montavon, W Samek, KR Müller. Methods for Interpreting and Understanding Deep Neural Networks, Digital Signal Processing, 73:1-15, 2018.
https://doi.org/10.1016/j.dsp.2017.10.011

G Montavon, S Lapuschkin, A Binder, W Samek, KR Müller. Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition, Pattern Recognition, 65:211-222, 2017.
http://dx.doi.org/10.1016/j.patcog.2016.11.008

L Arras, A Osman, W Samek. CLEVR-XAI: A Benchmark Dataset for the Ground Truth Evaluation of Neural Network Explanations, Information Fusion, 2022.
https://doi.org/10.1016/j.inffus.2021.11.008

J Sun, S Lapuschkin, W Samek, A Binder. Explain and Improve: LRP-Inference Fine Tuning for Image Captioning Models, Information Fusion, 77:233-246, 2022.
https://doi.org/10.1016/j.inffus.2021.07.008

CJ Anders, L Weber, D Neumann, W Samek, KR Müller, S Lapuschkin. Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models, Information Fusion, 77:261-295, 2022.
https://doi.org/10.1016/j.inffus.2021.07.015


Interesting link regarding this tutorial:

http://www.heatmapping.org

Bio Sketch of Presenter:

Wojciech Samek is head of the Department of Artificial Intelligence and the Explainable AI Group at Fraunhofer Heinrich Hertz Institute (HHI), Berlin, Germany. He studied computer science at Humboldt University of Berlin, Heriot-Watt University and the University of Edinburgh, and received the Dr. rer. nat. degree with distinction from the Technical University of Berlin in 2014. During his studies he was awarded scholarships from the German Academic Scholarship Foundation and the DFG Research Training Group GRK 1589/1, and was a visiting researcher at NASA Ames Research Center, Mountain View, USA. Wojciech is associated faculty at BIFOLD - the Berlin Institute for the Foundations of Learning and Data, the ELLIS Unit Berlin and the DFG Graduate School BIOQIC, and a member of the scientific advisory board of IDEAS NCBR. Furthermore, he is an editorial board member of PLoS ONE, Pattern Recognition and IEEE TNNLS, and an elected member of the IEEE MLSP Technical Committee. He is the recipient of multiple best paper awards, including the 2020 Pattern Recognition Best Paper Award, and part of the expert group developing the ISO/IEC MPEG-17 NNR standard. He is the lead editor of the Springer book "Explainable AI: Interpreting, Explaining and Visualizing Deep Learning" and an organizer of various special sessions, workshops and tutorials on topics such as explainable AI, neural network compression, and federated learning. Wojciech has co-authored more than 150 peer-reviewed journal and conference papers, some of which are listed as "Highly Cited Papers" (i.e., top 1%) in the field of Engineering.