Confidence and uncertainty

Background

Deep neural networks are powerful predictive models, but they are often incapable of recognizing when their predictions may be wrong or when the input lies outside the range in which the system is expected to perform safely. For critical or automated applications, knowledge about the confidence of predictions is essential.

Challenges

For safety-critical applications, e.g. in healthcare, a limitation of current deep learning systems is that they are in general not designed to recognize when their predictions may be wrong, or to establish with some certainty that the input lies within the range in which the system is expected to perform safely. The simple regularization technique "Dropout" provides a measure of variability, but not a statistically sound quantification of uncertainty propagating from input to output. Bayesian deep models are emerging, but have so far been challenging to develop for complex image data due to the high dimensionality of the input and the nonlinear nature of the data processing.
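As a concrete illustration of the dropout-based variability measure mentioned above, the sketch below shows a minimal Monte Carlo dropout procedure in PyTorch: dropout is kept active at prediction time, several stochastic forward passes are run, and the spread of the predicted class probabilities is reported. This is a hypothetical, illustrative example, not the project's own code; the model, layer sizes, and number of samples are assumptions.

import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    """Toy classifier with dropout, used only to illustrate MC dropout."""
    def __init__(self, in_features=32, n_classes=3, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 64),
            nn.ReLU(),
            nn.Dropout(p_drop),   # kept active at test time for MC sampling
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    """Run repeated stochastic forward passes with dropout enabled and
    return the mean class probabilities and their per-class std. deviation."""
    model.train()  # keeps dropout active; batch-norm layers would need separate handling
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )  # shape: (n_samples, batch, n_classes)
    return probs.mean(dim=0), probs.std(dim=0)

if __name__ == "__main__":
    model = SmallClassifier()
    x = torch.randn(4, 32)  # a batch of 4 dummy inputs
    mean_p, std_p = mc_dropout_predict(model, x)
    print("mean probabilities:\n", mean_p)
    print("spread across dropout samples:\n", std_p)

The spread across samples gives a rough indication of predictive variability, but, as noted above, it does not amount to a statistically sound, end-to-end quantification of uncertainty.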

Main objective

To develop deep learning models that can estimate the confidence and quantify the uncertainty of their predictions.

Highlighted publications

New Visual Intelligence paper accepted to NeurIPS
September 23, 2022
The ProtoVAE explainability paper by Srishti Gautam and co-authors has been accepted to NeurIPS 2022.
Multi-modal land cover mapping of remote sensing images using pyramid attention and gated fusion networks
August 1, 2022
We present a novel pyramid attention and gated fusion method (MultiModNet) for multi-modality land cover mapping in remote sensing.
Using Machine Learning to Quantify Tumor Infiltrating Lymphocytes in Whole Slide Images
June 21, 2022
Developing artificial intelligence methods to help pathologists analyze whole slide images for cancer detection and treatment.