Confidence and uncertainty

Background

Deep neural networks are powerful predictive models, but they are often incapable of recognizing when their predictions may be wrong or when the input lies outside the range in which the system can be expected to perform safely. For safety-critical or automated applications, knowledge about the confidence of predictions is essential.

Challenges

For safety-critical applications, e.g. in health, a limitation of current deep learning systems is that they are in general not designed to recognize when their predictions may be wrong, or to confirm with some certainty that the input lies inside the range in which the system can be expected to perform safely. The simple regularization technique dropout provides a measure of variability, but not a statistically sound quantification of how uncertainty propagates from input to output. Bayesian deep models are emerging, but they have so far been challenging to develop for image data because of the complexity of the inputs and the nonlinear nature of the processing.
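
The dropout-based variability estimate mentioned above is often realized as Monte Carlo dropout: dropout is kept active at prediction time and several stochastic forward passes are aggregated. The sketch below is a minimal illustration in PyTorch; the architecture, number of samples, and entropy-based uncertainty score are illustrative assumptions, not part of any specific project implementation.

```python
# Minimal sketch of Monte Carlo dropout for predictive uncertainty.
# All architectural choices here are illustrative assumptions.
import torch
import torch.nn as nn


class DropoutClassifier(nn.Module):
    def __init__(self, in_features: int, n_classes: int, p: float = 0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 128),
            nn.ReLU(),
            nn.Dropout(p),          # kept stochastic at test time for MC sampling
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)


@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Run several stochastic forward passes and summarize their spread."""
    model.train()  # keep dropout layers active during inference
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )                                   # shape: (n_samples, batch, n_classes)
    mean_probs = probs.mean(dim=0)      # predictive mean over MC samples
    # Predictive entropy as a simple, approximate uncertainty score.
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy


# Usage: higher entropy flags inputs the model is less confident about.
model = DropoutClassifier(in_features=32, n_classes=10)
x = torch.randn(4, 32)
mean_probs, entropy = mc_dropout_predict(model, x)
print(mean_probs.shape, entropy)
```

This kind of sampling gives a spread over predictions, but, as noted above, it is not a statistically rigorous quantification of how uncertainty in the input propagates to the output; that gap motivates the Bayesian approaches pursued here.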

Main objective

To develop deep learning models that can estimate confidence and quantify uncertainty of their predictions.

Highlighted publications

Principle of Relevant Information for Graph Sparsification
May 20, 2022
How can we remove the redundant or less-informative edges in a graph without changing its main structural properties?
Using Machine Learning to Quantify Tumor Infiltrating Lymphocytes in Whole Slide Images
March 9, 2022
Developing artificial intelligence methods to help pathologists analyze whole slide images for cancer treatment and detection.
Detection and classification of fish species from acoustic data
March 7, 2022
Using deep learning to assess fish stocks from acoustic images.