Confidence and uncertainty

Background

Deep neural networks are powerful predictive models, but they are often incapable of recognizing when their predictions may be wrong, or when an input falls outside the range in which the system is expected to perform safely. For safety-critical or automated applications, knowledge about the confidence of predictions is essential.

Challenges

For safety-critical applications, e.g. in healthcare, a limitation of current deep learning systems is that they are generally not designed to recognize when their predictions may be wrong, or to establish with some certainty that an input lies inside the range in which the system is expected to perform safely. The simple regularization technique dropout can provide a measure of variability, but not a statistically sound quantification of how uncertainty propagates from input to output. Bayesian deep models are emerging, but they have so far been challenging to develop for image data because of the complexity of the inputs and the nonlinear nature of the processing.
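
To make the distinction concrete, the sketch below shows how Monte Carlo dropout is commonly used to obtain a variability estimate: dropout is kept active at inference time and predictions are averaged over repeated stochastic forward passes. This is a generic, minimal illustration; the model architecture, layer sizes, and number of samples are assumptions for demonstration, not part of the project.

```python
# Minimal sketch of Monte Carlo dropout for predictive variability
# (illustrative only; model and dimensions are placeholder assumptions).
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    """Toy classifier with a dropout layer that stays stochastic at test time."""
    def __init__(self, in_features=128, num_classes=10, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 256),
            nn.ReLU(),
            nn.Dropout(p=p_drop),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=30):
    """Run several stochastic forward passes and summarize the spread."""
    model.train()  # keep dropout active; in practice, enable only the dropout layers
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )  # shape: (n_samples, batch, classes)
    mean_probs = probs.mean(dim=0)  # averaged prediction
    std_probs = probs.std(dim=0)    # per-class spread across passes
    return mean_probs, std_probs

model = SmallClassifier()
x = torch.randn(4, 128)  # dummy batch of feature vectors
mean_probs, std_probs = mc_dropout_predict(model, x)
print(mean_probs.argmax(dim=-1), std_probs.max(dim=-1).values)
```

The spread across passes gives a rough sense of variability, but, as noted above, it is not a calibrated, statistically grounded uncertainty estimate of the kind Bayesian deep models aim to provide.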

Main objective

To develop deep learning models that can estimate confidence and quantify uncertainty of their predictions.

Highlighted publications

Visual Data Diagnosis and Debiasing with Concept Graphs
October 17, 2024
We propose ConBias, a bias diagnosis and debiasing pipeline for visual datasets.
Reinventing Self-Supervised Learning: The Magic of Memory in AI Training
October 17, 2024
MaSSL is a novel approach to self-supervised learning that enhances training stability and efficiency.
Modular Superpixel Tokenization in Vision Transformers
August 28, 2024
ViTs partition images into square patches to extract tokenized features. But is this necessarily an optimal way of partitioning images?
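
For context, the standard square-patch tokenization that this work questions can be written in a few lines. The sketch below is a generic illustration with an assumed 16-pixel patch size; it is not code from the publication.

```python
# Minimal sketch of standard square-patch tokenization in ViTs
# (patch size and image dimensions are arbitrary assumptions).
import torch

def patchify(images, patch_size=16):
    """Split a batch of images into flattened, non-overlapping square patches."""
    b, c, h, w = images.shape
    assert h % patch_size == 0 and w % patch_size == 0
    # (B, C, H, W) -> (B, C, H/P, W/P, P, P) -> (B, num_patches, C * P * P)
    patches = images.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
    patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * patch_size * patch_size)
    return patches

tokens = patchify(torch.randn(2, 3, 224, 224))
print(tokens.shape)  # torch.Size([2, 196, 768])
```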