Image: Eirik Østmo / Torger Grytå

Concept-based explainability



Abstract:

Explainability is of key importance for ensuring trustworthiness and transparency in deep learning systems. Early explanation methods have shown that it is possible to provide some degree of transparency into deep learning systems, but they also have numerous limitations, such as disagreement between different explanation methods and explanations that are incomprehensible to the human evaluator. Recent research on prototype-based and concept-based explainability has shown promise in addressing some of these limitations. This workshop will give a short introduction to the benefits of prototype- and concept-based explainability, and host presentations on very recent and leading research on concept-based explainability.

Preliminary program:

- Beyond post-hoc explanations: on the benefits of prototype- and concept-based explainability (Kristoffer Wickstrøm, UiT The Arctic University of Norway)

- From attribution maps to human-understandable explanations through Concept Relevance Propagation (Reduan Achtibat and Maximilian Dreyer, Fraunhofer Heinrich Hertz Institute)

- Exploring Concept-Based Explainability in Breast Cancer Classification (Alba Ordoñez and Amund Vedal, Norwegian Computing Center)

Organizing committee:

Kristoffer Wickstrøm (UiT) and Alba Ordoñez (NR)

This workshop is open to members of the consortium. If you want to participate as a guest, please sign up.

Sign up here