Explainability and reliability

Visual Intelligence is developing deep learning methods that provide explainable and reliable predictions, opening the “black box” of deep learning.

Motivation

A key limitation of deep learning models is that there is no generally accepted way to open the “black box” of a deep network and provide explainable decisions that can be trusted. There is therefore a need for explainability: models should be able to summarize the reasons for their predictions, both to gain the trust of users and to provide insight into the causes of their decisions.

Solving research challenges through new deep learning methodology

Visual Intelligence researchers have proposed new methods designed to provide explainable and transparent predictions. These include methods for:

• content-based CT image retrieval, equipped with a novel representation learning explainability network.

• explainable marine image analysis, providing clearer insights into the decision-making of models designed for marine species detection and classification.

• tackling distribution shifts and adversarial attacks in federated learning settings involving image data.

• discovering features to spot counterfeit images.

Developing explainable and reliable models is a step towards deep learning systems that are transparent, trustworthy, and accountable. Our proposed methods are therefore critical for bridging the gap between technical performance and responsible, ethical real-world use.
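
As a concrete illustration of what a gradient-based explanation looks like in practice, the sketch below computes a simple saliency map for an image classifier: the gradient of the predicted class score with respect to the input pixels. This is a minimal, generic sketch of the basic idea behind gradient-based explainability, not any specific Visual Intelligence method; the pretrained ResNet-18 and the file name "example.jpg" are placeholder choices for illustration only.

    import torch
    from PIL import Image
    from torchvision import models, transforms

    # Stand-in classifier: any trained model would do here.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # "example.jpg" is a placeholder input image.
    x = preprocess(Image.open("example.jpg")).unsqueeze(0)
    x.requires_grad_(True)

    # Forward pass: take the score of the predicted class.
    logits = model(x)
    score = logits[0, logits.argmax(dim=1).item()]

    # Backward pass: gradient of that score w.r.t. the input pixels.
    score.backward()

    # Saliency map: largest absolute gradient across the colour channels.
    saliency = x.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)

Pixels with large saliency values are those the predicted class score is most sensitive to; rendering them as a heatmap over the input is one of the simplest ways to summarize the reasons for a prediction.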

Highlighted publications

Visual Data Diagnosis and Debiasing with Concept Graphs

September 26, 2024
By Rwiddhi Chakraborty, Yinong Wang, Jialu Gao, Runkai Zheng, Cheng Zhang, and Fernando De la Torre

Interrogating Sea Ice Predictability With Gradients

February 14, 2024
By H. L. Joakimsen, I. Martinsen, L. T. Luppino, A. McDonald, S. Hosking, and R. Jenssen

Other publications

FLEXtime: Filterbank Learning to Explain Time Series

By Thea Brüsch, Kristoffer Wickstrøm, Mikkel N. Schmidt, Robert Jenssen, and Tommy Sonne Alstrøm
Published in Explainable Artificial Intelligence. xAI 2025. Communications in Computer and Information Science, vol 2579. Springer, on October 14, 2025

From Colors to Classes: Emergence of Concepts in Vision Transformers

By Teresa Dorszewski, Lenka Tětková, Robert Jenssen, Lars Kai Hansen, and Kristoffer Knutsen Wickstrøm
Published in Communications in Computer and Information Science, vol 2576. Springer 2025, on October 12, 2025

Addressing Label Shift in Distributed Learning via Entropy Regularization

By Zhiyuan Wu, Changkyu Choi, Volkan Cevher, and Ali Ramezani-Kebrya
Published in International Conference on Learning Representations 2025, on April 29, 2025

REPEAT: Improving Uncertainty Estimation in Representation Learning Explainability

By Kristoffer Wickstrøm, Thea Brüsch, Michael Kampffmeyer, and Robert Jenssen
Published in Proceedings of the AAAI Conference on Artificial Intelligence, 39(8), 8341–8350, on April 11, 2025

From Flexibility to Manipulation: The Slippery Slope of XAI Evaluation

By Kristoffer Wickstrøm, Marina Höhne, and Anna Hedström
Published in European Conference on Computer Vision (ECCV) 2024 Workshop: Explainable Computer Vision: Where are We and Where are We Going?, on December 7, 2024