Opening the "black box" of deep learning to give explainable and reliable predictions.
Visual Intelligence is developing deep learning methods which provide explainable and reliable predictions, opening the “black box” of deep learning.
A limitation of deep learning models is that there is no generally accepted way to open the “black box” of a deep network and provide explainable decisions that can be relied on to be trustworthy. There is therefore a need for explainability: models should be able to summarize the reasons for their predictions, both to gain the trust of users and to yield insight into the causes of their decisions.
Visual Intelligence researchers have proposed new methods that are designed to provide explainable and transparent predictions. These results include methods for:
• content-based CT image retrieval, equipped with a novel explainable representation learning network.
• explainable marine image analysis, providing clearer insights into the decision-making of models designed for marine species detection and classification.
• tackling distribution shifts and adversarial attacks in image-based federated learning settings.
• discovering features to spot counterfeit images.
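To make the idea of explainable predictions concrete, here is a minimal sketch of one common model-agnostic explanation technique, occlusion sensitivity: mask out each region of an input image and measure how much the model's score drops. Regions whose removal hurts the score most are the ones the model relies on. The toy linear "model" and all names below are illustrative assumptions, not code from the publications listed on this page.

```python
import numpy as np

# Toy "model": the score is a weighted sum of pixel intensities.
# In practice this would be a trained deep network's class logit.
weights = np.zeros((8, 8))
weights[2:6, 2:6] = 1.0  # this model only "looks at" the central region

def model_score(image):
    """Scalar class score for an 8x8 image."""
    return float((image * weights).sum())

def occlusion_map(image, patch=2):
    """Heatmap of score drop when each patch is zeroed out.

    A larger drop means the occluded region was more important
    for the model's decision.
    """
    base = model_score(image)
    heat = np.zeros_like(image)
    for i in range(0, image.shape[0], patch):
        for j in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i:i + patch, j:j + patch] = base - model_score(occluded)
    return heat

image = np.ones((8, 8))
heat = occlusion_map(image)
# The heatmap is nonzero only over the central region the model uses,
# exposing which input evidence drove the prediction.
```

The same loop works unchanged with a real network as `model_score`; gradient-based methods such as layer-wise relevance propagation (used in several of the publications below) serve the same goal but compute relevance analytically instead of by repeated occlusion.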
Developing explainable and reliable models is a step towards achieving deep learning models that are transparent, trustworthy, and accountable. Our proposed methods are therefore critical for bridging the gap between technical performance and real-world usage in an ethical and responsible manner.
By authors: Sun, Jiamei; Lapuschkin, Sebastian; Samek, Wojciech; Binder, Alexander
Published in: Information Fusion, Volume 77, 2022, Pages 233–246, on January 1, 2022
By authors: Nils Olav Handegard, Line Eikvil, Robert Jenssen, Michael Kampffmeyer, Arnt-Børre Salberg, and Ketil Malde
Published in: Journal of Ocean Technology, 2021, on October 6, 2021
By authors: Alexander Binder, Michael Bockmayr, Miriam Hägele, Stephan Wienert, Daniel Heim, Katharina Hellweg, Masaru Ishii, Albrecht Stenzinger, Andreas Hocke, Carsten Denkert, Klaus-Robert Müller, and Frederick Klauschen
Published in: Nature Machine Intelligence, Volume 3, Pages 355–366 (2021), on March 8, 2021
By authors: Kristoffer Wickstrøm, Michael Kampffmeyer, Robert Jenssen
Published in: Medical Image Analysis, Volume 60, February 2020, 101619, on November 14, 2019