Eirik Østmo / Torger Grytå

VI workshop 2021 #3 - Confidence & Uncertainty


Workshop on Confidence & Uncertainty

Fred Godtliebsen (UiT) and Anne H Schistad Solberg (UiO)

Topic: Real-world decision-making systems, whether based on deep learning or other models, must not only be accurate but should also indicate when they are likely to be incorrect or uncertain. One relevant aspect is the probability associated with the predicted class label, which should reflect the likelihood that the prediction is correct; this is important for interpretability. Another relevant aspect of uncertainty is the Bayesian approach. In this workshop, we have invited speakers who will give insight into both classifier calibration and Bayesian deep learning, shedding light on one of the main research challenges in Visual Intelligence, namely Confidence and Uncertainty.



Professor Anne H Schistad Solberg (UiO) and  Professor Fred Godtliebsen (UiT)

Introduction to confidence and uncertainty in deep learning.

Probability Calibration for Predictive Machine Learning

Research Associate Hao Song (University of Bristol, UK)

With recent developments in data-driven approaches such as deep neural networks, researchers have started to investigate verification techniques beyond standard measures like accuracy and mean squared error.

The research area of probability calibration refers to a body of work that focuses on the uncertainty and confidence of model predictions. At the top level, we want models to be well calibrated on their predicted probabilities: the target variable should closely follow the distribution indicated by each distinct prediction.

In this talk, I will provide an overview of the research area, including typical definitions, evaluation measures, and approaches that can improve the level of calibration.
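To make the notion of calibration concrete, here is a minimal sketch of one common evaluation measure, the expected calibration error (ECE): predictions are binned by confidence, and the mean confidence in each bin is compared to the empirical accuracy. The arrays below are hypothetical toy data, not from the talk.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and compare the mean predicted
    confidence in each bin to the empirical accuracy in that bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()        # empirical accuracy in bin
            conf = confidences[mask].mean()   # mean predicted confidence
            ece += (mask.sum() / n) * abs(acc - conf)
    return ece

# Toy example: ten predictions at 0.9 confidence, of which 9 are correct,
# so confidence matches accuracy and the ECE is (numerically) zero.
conf = np.full(10, 0.9)
corr = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 0])
print(round(expected_calibration_error(conf, corr), 2))  # → 0.0
```

A miscalibrated model (say, 0.9 confidence but only 60% accuracy) would instead yield an ECE around 0.3 on the same computation.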

Bayesian methods within machine learning

Professor Geir Storvik (UiO)

Machine learning methods, such as deep neural networks, have proven very successful for prediction in many different applications. Standard use of such methods does, however, either ignore or underestimate the full uncertainty related to these predictions.

The Bayesian approach offers a formal way of making proper uncertainty quantification, and such methods have recently gained popularity within the machine learning community. In this talk we will describe how the Bayesian methodology can be applied to machine learning, and discuss both the advantages and the challenges of applying such methods in practice.
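As a small illustration of what "proper uncertainty quantification" means in the Bayesian setting, the sketch below fits a conjugate Bayesian linear regression (a hypothetical toy model, not taken from the talk): the posterior over the weights is available in closed form, so each prediction comes with both a mean and a variance that combines noise and parameter uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + Gaussian noise
X = rng.uniform(-1, 1, size=(50, 1))
y = 2.0 * X[:, 0] + rng.normal(0.0, 0.3, size=50)

# Conjugate Bayesian linear regression with known noise variance sigma2
# and Gaussian prior w ~ N(0, alpha^-1 I); the posterior is Gaussian.
alpha, sigma2 = 1.0, 0.3 ** 2
Phi = np.hstack([X, np.ones((50, 1))])              # weight + bias features
S_inv = alpha * np.eye(2) + Phi.T @ Phi / sigma2    # posterior precision
S = np.linalg.inv(S_inv)                            # posterior covariance
m = S @ Phi.T @ y / sigma2                          # posterior mean

# Predictive distribution at a new input: both a mean and a variance,
# where the variance adds noise variance and parameter uncertainty.
x_star = np.array([0.5, 1.0])
pred_mean = x_star @ m
pred_var = sigma2 + x_star @ S @ x_star
print(pred_mean, np.sqrt(pred_var))
```

A plain least-squares fit would return only `pred_mean`; the additional `pred_var` term, which always exceeds the noise variance alone, is exactly the uncertainty that standard point-estimate use of the model discards.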


Learning Network Architectures with Bayesian Neural Networks

Master student Jonathan Edward Berezowski (UiT)

For certain Machine Learning problems, Bayesian Neural Networks are an alternative approach to classic NN models, offering "built-in" uncertainty measures and convenient regularization. Performing inference on a BNN results in a joint posterior distribution of network parameters, which can provide insight into what makes for a well-specified network for a given problem.

With this insight in mind, BNNs can then be extended to hierarchical models with parameters controlling architecture specification such as the number of hidden layers and the number of nodes per layer. We will discuss how to define a BNN with these features and introduce the method of Reversible Jump Markov Chain Monte Carlo as one potential approach to inference.
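To give a feel for what "a joint posterior distribution of network parameters" looks like, here is an illustrative sketch that samples the weights of a tiny fixed-architecture neural network with random-walk Metropolis. This is a simplification for illustration only: the Reversible Jump MCMC method discussed in the talk additionally moves between architectures of different dimension, which plain Metropolis cannot do. All data and hyperparameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data
X = rng.uniform(-2, 2, size=(30, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.1, size=30)

H = 3                            # hidden units (architecture is FIXED here)
n_params = 1 * H + H + H + 1     # W1, b1, W2, b2

def predict(theta, X):
    """One-hidden-layer tanh network, parameters packed in a flat vector."""
    W1 = theta[:H].reshape(1, H)
    b1 = theta[H:2 * H]
    W2 = theta[2 * H:3 * H].reshape(H, 1)
    b2 = theta[3 * H]
    return (np.tanh(X @ W1 + b1) @ W2)[:, 0] + b2

def log_post(theta):
    """Gaussian N(0,1) prior on all parameters, Gaussian likelihood."""
    ll = -0.5 * np.sum((y - predict(theta, X)) ** 2) / 0.1 ** 2
    lp = -0.5 * np.sum(theta ** 2)
    return ll + lp

# Random-walk Metropolis over the network weights
theta = rng.normal(0.0, 0.1, n_params)
lp = log_post(theta)
samples = []
for i in range(5000):
    prop = theta + rng.normal(0.0, 0.05, n_params)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept step
        theta, lp = prop, lp_prop
    if i >= 2500:                              # discard burn-in
        samples.append(theta.copy())
samples = np.array(samples)

# Posterior predictive at a new point: the spread across posterior
# samples is the network's "built-in" uncertainty measure.
x_new = np.array([[0.0]])
preds = np.array([predict(s, x_new)[0] for s in samples])
print(preds.mean(), preds.std())
```

Extending this to architecture search, as in the talk, means treating quantities like `H` as random too, which changes the dimension of `theta` and is precisely what Reversible Jump MCMC is designed to handle.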

In compliance with GDPR consent requirements, presentations given in a Visual Intelligence context may be recorded with the consent of the speaker. All recordings are edited to remove the faces, names, and voices of other participants; questions and comments from the audience will hence be removed and will not appear in the recording. With the freely given consent of the speaker, recorded presentations may be posted on the Visual Intelligence YouTube channel.

This seminar is open to members of the consortium. If you want to participate as a guest, please sign up.

Sign up here