Image: Eirik Østmo / Torger Grytå

VI seminar 2021 #4 - Explainable AI workshop


Explainable AI workshop

This workshop aims to bring together academic researchers and industrial practitioners to share visions on explainable artificial intelligence (XAI) and its practical use in different AI applications, such as biomedical imaging and marine science.

Program

Dr. Alexander Binder (UiO)

Title: Explainability beyond eyeballing heatmaps: towards improving models.

Abstract: Many who read research in explainable machine learning will remember colorful heatmaps which are visually appealing, though without clear use cases. We will consider how explainability can be used in neural network pruning, image captioning and few-shot classification to improve models.
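
For readers who want a concrete picture of explanation-guided pruning, the minimal PyTorch sketch below ranks convolutional channels by a simple gradient-times-activation relevance score and zeroes out the least relevant ones. The scoring rule is only a stand-in for the methods discussed in the talk, and the `channel_relevance` / `prune_least_relevant` helpers are illustrative names, not code from the speaker.

```python
import torch
import torch.nn as nn

# Minimal sketch: rank conv channels by a simple |activation * gradient|
# relevance score and zero out the least relevant ones. This only illustrates
# the idea of explanation-guided pruning, not the exact method of the talk.

def channel_relevance(model, layer, inputs, targets):
    """Average |activation * gradient| per output channel of `layer` over a batch."""
    acts = {}

    def hook(_, __, output):
        output.retain_grad()
        acts["a"] = output

    handle = layer.register_forward_hook(hook)
    loss = nn.functional.cross_entropy(model(inputs), targets)
    loss.backward()
    handle.remove()

    a = acts["a"]                                  # shape (N, C, H, W)
    return (a * a.grad).abs().mean(dim=(0, 2, 3))  # one score per channel

def prune_least_relevant(layer, relevance, fraction=0.3):
    """Zero the weights of the `fraction` least relevant output channels."""
    k = int(fraction * relevance.numel())
    drop = relevance.argsort()[:k]
    with torch.no_grad():
        layer.weight[drop] = 0.0
        if layer.bias is not None:
            layer.bias[drop] = 0.0
```

Usage would look like `rel = channel_relevance(model, model.conv1, x, y)` followed by `prune_least_relevant(model.conv1, rel)`, with a fine-tuning pass afterwards to recover accuracy.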

Mara Graziani (University of Applied Sciences of Western Switzerland)

Title: Sharp-LIME: Sharpening Local Interpretable Model Agnostic Explanations for Digital Pathology

Abstract: Being accountable for the signed reports, pathologists may be wary of high-quality deep learning outcomes if the decision-making is not understandable. Applying off-the-shelf methods with default configurations such as Local Interpretable Model-Agnostic Explanations (LIME) is not sufficient to generate stable and understandable explanations. This work improves the application of LIME to histopathology images by leveraging nuclei annotations, creating a reliable way for pathologists to audit black-box tumor classifiers. The obtained visualizations reveal the sharp, neat and high attention of the deep classifier to the neoplastic nuclei in the dataset, an observation in line with clinical decision making. Compared to standard LIME, our explanations show improved understandability for domain-experts, report higher stability and pass the sanity checks of consistency to data or initialization changes and sensitivity to network parameters. This represents a promising step in giving pathologists tools to obtain additional information on image classification models. The code and trained models are available on GitHub.
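
The core trick in Sharp-LIME is to replace LIME's default superpixels with regions derived from nuclei segmentations. The sketch below shows one way to approximate this with the `lime` package's `segmentation_fn` hook; the `nuclei_mask`, `image` and `classifier_fn` inputs are assumed to be provided by the user, and the snippet is an illustration of the idea rather than the authors' released code.

```python
import numpy as np
from lime import lime_image
from skimage.measure import label

# Sketch: let LIME perturb nuclei regions instead of generic superpixels.
# Assumed inputs: `image` (H x W x 3 patch), `nuclei_mask` (binary nuclei
# segmentation aligned with the image) and `classifier_fn` (returns class
# probabilities for a batch of images).

def nuclei_segments(nuclei_mask):
    """Turn a binary nuclei mask into a LIME segmentation function."""
    segments = label(nuclei_mask)                   # one integer label per nucleus
    segments[segments == 0] = segments.max() + 1    # background as its own segment

    def segmentation_fn(_image):
        return segments

    return segmentation_fn

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    classifier_fn,
    top_labels=1,
    num_samples=1000,
    segmentation_fn=nuclei_segments(nuclei_mask),
)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=10
)
```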

Dr. Shujian Yu (UiT)

Title: Information Bottleneck for the Understanding of Deep Neural Networks in the Training and the Decision-Making Processes

Abstract: In this talk, we first present a novel matrix-based Rényi’s entropy functional estimator, which measures information-theoretic quantities in terms of the eigenspectrum of symmetric positive definite (SPD) matrices, without explicit density estimation or distributional assumptions. Based on the new estimator, we demonstrate how information theory can be brought to bear on deep neural networks (DNNs) in unorthodox and fruitful ways, thus providing a principled way to analyze their training behaviors and trade-offs, and to improve their practical performance. In particular, we show how to use information bottleneck approaches to understand the training and the decision-making processes of DNNs.
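
As a rough illustration of the matrix-based estimator mentioned above, the NumPy sketch below computes Rényi's α-order entropy from the eigenspectrum of a normalized Gram matrix, together with the joint entropy and mutual information used in information bottleneck analyses. The Gaussian kernel and its width are assumptions, and the code is a simplified sketch, not the speaker's implementation.

```python
import numpy as np

# Sketch of the matrix-based Renyi alpha-order entropy: the entropy is read
# off the eigenspectrum of a normalized Gram matrix, with no explicit density
# estimation. Kernel choice and width sigma are assumptions.

def gram_matrix(x, sigma=1.0):
    """Gaussian Gram matrix for samples x of shape (n, d)."""
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def _entropy_from_gram(k, alpha):
    a = k / np.trace(k)                              # eigenvalues now sum to 1
    eigvals = np.clip(np.linalg.eigvalsh(a), 0.0, None)
    return np.log2(np.sum(eigvals ** alpha)) / (1.0 - alpha)

def renyi_entropy(x, alpha=2.0, sigma=1.0):
    """S_alpha(A) = 1/(1-alpha) * log2(tr(A^alpha)) for the normalized Gram matrix A."""
    return _entropy_from_gram(gram_matrix(x, sigma), alpha)

def joint_entropy(x, y, alpha=2.0, sigma=1.0):
    """Joint entropy via the Hadamard product of the two Gram matrices."""
    return _entropy_from_gram(gram_matrix(x, sigma) * gram_matrix(y, sigma), alpha)

def mutual_information(x, y, alpha=2.0, sigma=1.0):
    """I(x; y) = S(x) + S(y) - S(x, y), the quantity tracked in IB-style analyses."""
    return (renyi_entropy(x, alpha, sigma)
            + renyi_entropy(y, alpha, sigma)
            - joint_entropy(x, y, alpha, sigma))
```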

Kristoffer Wickstrøm (UiT)

Title: RELAX: Representation Learning Explainability

Abstract: While interpretability is a fundamental challenge in deep learning, most research has been limited to explaining task-specific decisions such as predictions. However, with the recent advances in unsupervised representation learning, the ability to explain representations is becoming increasingly important. In this talk, we present RELAX, the first method for explaining representations, which both indicates the importance of each input feature and models the uncertainty in that importance. RELAX is based on measuring similarities in the representation space between an input and occluded versions of itself. Results show that seemingly similar representations can utilize very different input features, and that the certainty of an explanation can vary greatly from one representation to another.
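
To make the occlusion idea concrete, the sketch below scores each pixel by how much cosine similarity to the unmasked representation is retained when that pixel is visible, and derives an uncertainty from the spread of those similarities. The encoder interface, mask resolution and number of masks are assumptions; this is an illustrative sketch of the idea, not the RELAX reference implementation.

```python
import torch
import torch.nn.functional as F

# Sketch: mask out parts of the input, re-encode, and weight each pixel by the
# similarity between the masked and unmasked representations. Encoder, mask
# resolution (`cell`) and number of masks are assumptions.

def relax_explanation(encoder, image, num_masks=500, cell=8):
    """Per-pixel importance and uncertainty for encoder(image); image is (1, C, H, W)."""
    _, _, h, w = image.shape
    with torch.no_grad():
        ref = F.normalize(encoder(image).flatten(1), dim=1)

        importance = torch.zeros(h, w)
        second_moment = torch.zeros(h, w)
        mask_sum = torch.zeros(h, w)

        for _ in range(num_masks):
            # Random coarse binary mask, upsampled to image resolution.
            coarse = (torch.rand(1, 1, h // cell, w // cell) > 0.5).float()
            mask = F.interpolate(coarse, size=(h, w), mode="bilinear", align_corners=False)

            rep = F.normalize(encoder(image * mask).flatten(1), dim=1)
            sim = (ref * rep).sum().item()   # cosine similarity to the unmasked representation

            importance += sim * mask[0, 0]
            second_moment += (sim ** 2) * mask[0, 0]
            mask_sum += mask[0, 0]

        mask_sum = mask_sum.clamp(min=1e-8)
        importance = importance / mask_sum                   # masked-similarity average per pixel
        uncertainty = second_moment / mask_sum - importance ** 2  # weighted variance per pixel
    return importance, uncertainty
```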

This seminar is open to members of the consortium. If you want to participate as a guest, please sign up.

Sign up here