Confidence and uncertainty

Visual Intelligence aims to develop models that can estimate the confidence and quantify the uncertainty of their predictions on complex image data.

Motivation

Deep neural networks are powerful predictive models, but they are often incapable of recognizing when their predictions may be wrong, or whether an input lies outside the range in which the system can be expected to perform safely. For safety-critical or automated applications, knowledge about the confidence of predictions is essential.

Solving research challenges with new deep learning methodology

Visual Intelligence has developed novel methods that better estimate the confidence and quantify the uncertainty of model predictions. Examples include methods for:

• quantifying uncertainty in pre-trained networks for sandeel segmentation in echosounder data (a generic sketch of this kind of technique follows this list).

• quantifying the uncertainty when identifying geological layers.

• detecting oil spills, with a particular emphasis on achieving uncertainty quantification in deep learning models for remote sensing data analysis.
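As a concrete illustration, the sketch below shows Monte Carlo dropout, one widely used technique for quantifying predictive uncertainty in a pre-trained segmentation network. This is a minimal, generic sketch of the technique under assumed conditions, not the specific method from the publications above; the helper names (enable_mc_dropout, predict_with_uncertainty) and tensor shapes are hypothetical.

# Monte Carlo dropout: a minimal, generic sketch in PyTorch, not the
# specific method published by Visual Intelligence.
import torch
import torch.nn as nn

def enable_mc_dropout(model: nn.Module) -> None:
    # Put the network in eval mode (freezing batch norm statistics, etc.)
    # but keep dropout layers stochastic, so repeated forward passes differ.
    model.eval()
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()

@torch.no_grad()
def predict_with_uncertainty(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    # Average class probabilities over several stochastic forward passes
    # and use the predictive entropy as a per-pixel uncertainty map.
    enable_mc_dropout(model)
    probs = torch.stack(
        [torch.softmax(model(x), dim=1) for _ in range(n_samples)]
    )  # (n_samples, batch, classes, H, W) for a segmentation model
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * torch.log(mean_probs + 1e-12)).sum(dim=1)
    return mean_probs, entropy

For a segmentation model, the resulting entropy map highlights the pixels where the averaged prediction is least certain, which is exactly the information needed to flag inputs the system may not handle safely.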

By better estimating confidence and quantifying uncertainty, our proposed methods help make deep learning models more robust, reliable, and trustworthy. They also become more useful in real-world scenarios where uncertainty is inevitable.

Highlighted publications

Understanding Deep Learning via Generalization and Optimization Analysis for Accelerated SGD
November 15, 2024
We provide a theoretical understanding of the generalization error of momentum-based accelerated variants of stochastic gradient descent.
Visual Data Diagnosis and Debiasing with Concept Graphs
October 17, 2024
We propose ConBias, a bias diagnosis and debiasing pipeline for visual datasets.
Reinventing Self-Supervised Learning: The Magic of Memory in AI Training
October 17, 2024
MaSSL is a novel approach to self-supervised learning that enhances training stability and efficiency.