Confidence and uncertainty

Background

Deep neural networks are powerful predictive models, but they are often incapable of recognizing when their predictions may be wrong or when the input lies outside the range in which the system can be expected to perform safely. For critical or automated applications, knowledge about the confidence of predictions is essential.

Challenges

For safety-critical applications, e.g. in health, a limitation of current deep learning systems is that they are in general not designed to recognize when their predictions may be wrong, or to establish with some certainty that the input lies inside the range in which the system can be expected to perform safely. The simple regularization technique "dropout" provides a measure of variability, but not a statistically sound quantification of how uncertainty propagates from input to output. Bayesian deep models are emerging, but have so far been challenging to develop for image data due to the complexity of the inputs and the nonlinear nature of the data processing.
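
To make the dropout-based notion of variability concrete, the sketch below shows Monte Carlo dropout: keeping dropout active at test time and averaging several stochastic forward passes. This is only a minimal illustration, not the project's method; the model, layer sizes, and function names are hypothetical, and, as noted above, the resulting spread is a measure of variability rather than a statistically sound uncertainty quantification.

```python
# Minimal sketch of Monte Carlo dropout for predictive variability.
# The classifier and all names below are illustrative assumptions.
import torch
import torch.nn as nn


class SmallClassifier(nn.Module):
    """Hypothetical classifier containing a dropout layer."""
    def __init__(self, in_features=32, n_classes=10, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 64),
            nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)


def mc_dropout_predict(model, x, n_samples=50):
    """Run several stochastic forward passes with dropout active and
    return the mean class probabilities and their per-class std. dev."""
    model.train()  # keeps dropout active during prediction
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)


if __name__ == "__main__":
    model = SmallClassifier()
    x = torch.randn(4, 32)          # toy batch of 4 inputs
    mean_p, std_p = mc_dropout_predict(model, x)
    print(mean_p.argmax(dim=-1))    # predicted classes
    print(std_p.max(dim=-1).values) # per-input spread (variability, not calibrated uncertainty)
```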

Main objective

To develop deep learning models that can estimate confidence and quantify the uncertainty of their predictions.

Highlighted publications

On the Effects of Self-supervision and Contrastive Alignment in Deep Multi-view Clustering
December 19, 2023
We propose DeepMVC, a unified framework that includes many recent methods as instances.
Merging clustering into deep supervised neural network
June 8, 2023
Introducing the SuperCM technique to significantly improve classification results across various types of image data.
Addressing Distribution Shifts in Federated Learning for Enhanced Generalization Performance
June 4, 2023
Training and test data from different clients pose a challenge.