Explainability and reliability

When black-box algorithms such as deep neural networks make decisions that were previously entrusted to humans, it becomes increasingly important that these algorithms can explain themselves.

To a large degree, our user partners’ applications involve imaging the unseen: the inside of the human body, the sea, and the surface of the earth seen from space, independent of daylight and weather conditions. The impact of innovative technology on users depends on trust. A limitation of deep learning models is that there is no generally accepted way to open the “black box” of a deep network and provide explainable decisions that can be relied on to be trustworthy. There is therefore a need for explainability: the models should be able to summarize the reasons for their predictions, both to gain the trust of users and to yield insight into the causes of their decisions.
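For transparent models such as linear classifiers, summarizing the reasons for a prediction is straightforward, which illustrates what is being asked of deep networks. A minimal sketch (hypothetical weights and input, unrelated to the publications below, and not a method of the centre) of a per-feature additive explanation:

```python
# Illustrative sketch: for a linear model, each feature's contribution to the
# prediction can be read off directly -- the kind of faithful summary that
# "black-box" deep networks do not provide out of the box.
import numpy as np

w = np.array([2.0, -1.0, 0.5])   # hypothetical learned weights
b = 0.1                          # hypothetical bias
x = np.array([1.0, 3.0, -2.0])   # one input sample

logit = w @ x + b                # model prediction (pre-activation score)
contributions = w * x            # per-feature contribution to the logit

# The contributions, plus the bias, sum exactly to the prediction,
# so the explanation is complete and faithful by construction.
assert np.isclose(contributions.sum() + b, logit)
```

Prototype- and attribution-based explainability methods, such as those studied in the publications below, aim to recover comparably faithful summaries for deep models, where no such closed-form decomposition exists.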

Main objective

To open the "black box" of deep learning in order to develop explainable and reliable prediction models.

Highlighted publications

New Visual Intelligence paper accepted to NeurIPS
September 23, 2022
The ProtoVAE explainability paper by Srishti Gautam and co-authors has been accepted at NeurIPS 2022.
Multi-modal land cover mapping of remote sensing images using pyramid attention and gated fusion networks
August 1, 2022
We present MultiModNet, a novel pyramid attention and gated fusion method for multi-modal land cover mapping in remote sensing.
Using Machine Learning to Quantify Tumor Infiltrating Lymphocytes in Whole Slide Images
June 21, 2022
Developing artificial intelligence methods to help pathologists in the analysis of whole slide images for cancer detection and treatment.