Machine learning methods, such as deep neural networks, have proven very successful for prediction in many different applications. Standard use of such methods does, however, not account for, or underestimates, the full uncertainty related to these predictions. The Bayesian approach offers a formal way of performing proper uncertainty quantification, and such methods have recently gained popularity within the machine learning community. In this talk, Professor Geir Olve Storvik from UiO will describe how Bayesian methodology can be applied to machine learning. We will discuss both advantages and challenges related to applying such methods in practice.
The research area of probability calibration covers work that focuses on the uncertainty and confidence of model predictions. At the top level, we want models to be well-calibrated on their predicted probabilities; that is, the target variable should closely follow the distribution indicated by every distinct prediction. In this talk, Research Associate Hao Song from the University of Bristol will provide an overview of the research area, including typical definitions, evaluation measures, and approaches that can improve the level of calibration.
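The abstract does not say which evaluation measures the talk covers; as an illustration, one widely used measure is the expected calibration error (ECE), which bins predictions by confidence and compares mean confidence with empirical accuracy in each bin. The sketch below (function name and binning scheme are our own choices) shows the binary case:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE for binary predictions: weighted average gap between
    mean confidence and empirical accuracy within confidence bins."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs > lo) & (probs <= hi)  # half-open bins (lo, hi]
        if not mask.any():
            continue
        confidence = probs[mask].mean()
        accuracy = (labels[mask] == 1).mean()
        ece += mask.mean() * abs(confidence - accuracy)
    return ece
```

A model that predicts 0.9 for examples of which only half are positive would, for instance, incur an ECE of 0.4, reflecting the gap between stated confidence and observed frequency.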
Bayesian Neural Networks (BNNs) are an alternative to classic NN models, offering "built-in" uncertainty measures and convenient regularization. Performing inference on a BNN yields a joint posterior distribution over the network parameters, which can provide insight into what makes for a well-specified network for a given problem. Jonathan Edward Berezowski, a Master's student at UiT, discusses how to define a BNN with these features and introduces Reversible Jump Markov Chain Monte Carlo (RJMCMC) as one potential approach to inference.
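RJMCMC itself also proposes jumps between networks of different dimension; the fixed-dimension building block it extends is an ordinary MCMC sampler over the weights. A minimal sketch of that building block, random-walk Metropolis on a tiny one-hidden-layer network fit to toy data (all sizes, priors, and step lengths here are illustrative assumptions, not the speaker's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) + noise
x = np.linspace(-2, 2, 40)
y = np.sin(x) + rng.normal(0, 0.1, x.size)

def predict(w, x):
    # One hidden layer with 3 tanh units; w packs all 10 parameters.
    w1, b1, w2, b2 = w[:3], w[3:6], w[6:9], w[9]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def log_post(w):
    # Gaussian likelihood (sigma = 0.1) plus standard-normal prior.
    resid = y - predict(w, x)
    return -0.5 * np.sum(resid**2) / 0.1**2 - 0.5 * np.sum(w**2)

w = rng.normal(0, 0.5, 10)
lp = log_post(w)
samples = []
for _ in range(2000):
    prop = w + rng.normal(0, 0.05, w.size)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        w, lp = prop, lp_prop
    samples.append(w.copy())
samples = np.array(samples)  # draws from the joint weight posterior
```

The collected draws approximate the joint posterior over weights; predictive uncertainty then comes from pushing many sampled weight vectors through `predict`.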
In this talk, we will develop a conceptual approach by combining the model-based method of sparse regularization by shearlets with the data-driven method of deep learning. Our solvers are guided by a microlocal analysis viewpoint to pay particular attention to the singularity structures of the data. Focusing then on the inverse problem of (limited-angle) computed tomography, we will show that our algorithms significantly outperform previous methodologies, including methods based entirely on deep learning. Finally, we will also briefly touch upon the issue of how to interpret such approaches.
Many deep learning studies are not designed to provide an unbiased estimate of the system's performance in its intended application. Over-optimistic reports of performance and opportunities may inflate expectations of what is currently possible, misguide resource allocation, and hamper the progress of the field. In this talk, we will look into how the performance of a deep learning system in its intended application can be estimated more reliably than is currently common practice, even when restricted to retrospective data.
With recent advances in deep learning and a drastic increase in the number of imaging satellites, new levels of automation are both possible and necessary. KSAT is investing significantly in modern MLOps practices to achieve this, and intends to use its membership in Visual Intelligence to address the research aspects of this transformation.
Deep learning can bring time savings and increased reproducibility to medical image analysis. However, acquiring training data is challenging due to the time-intensive nature of labeling and high inter-observer variability in annotations. Rather than labeling images, in this work we propose an alternative pipeline where images are generated from existing high-quality annotations using generative adversarial networks (GANs). Annotations are derived automatically from previously built anatomical models and transformed into realistic synthetic ultrasound images with CycleGAN.
In fisheries acoustics, echo sounding is used to detect fish and other marine objects in the ocean, and is a central tool for stock assessment and the setting of fishing quotas. Fish detection and species classification from echo sounder data are typically manual processes. In our work, we automate this process by training a convolutional neural network (CNN) for semantic segmentation using supervised learning. The talk will describe the data and the CNN approach used for segmentation, as well as issues related to the training data, such as the quality of annotations when used in a machine learning setting.
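Semantic segmentation assigns a class label to every pixel of the echogram, so evaluation is typically done per class rather than per image. A common metric is per-class intersection-over-union (IoU); the small sketch below (function name is ours, and the metric choice is an assumption, not necessarily the one used in this work) shows how it compares a predicted label map against annotations:

```python
import numpy as np

def per_class_iou(pred, target, n_classes):
    """Intersection-over-union per class for integer-labelled
    segmentation maps; NaN for classes absent from both maps."""
    pred = np.asarray(pred)
    target = np.asarray(target)
    ious = []
    for c in range(n_classes):
        p, t = pred == c, target == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        ious.append(inter / union if union else float("nan"))
    return ious
```

Because annotation quality is an issue in this setting, per-class IoU against possibly noisy labels should be read as agreement with the annotators rather than as ground-truth accuracy.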
Obtaining fully labeled datasets suitable for machine learning can be expensive, time-consuming, and impractical in many fields, limiting the applicability of the commonly used supervised approaches. Alba Ordoñes from NR presents their work with limited training data in the marine domain.
Benjamin Kellenberger from EPFL presents his work on limited training data in applications of unmanned aerial vehicles for wildlife monitoring in Earth observation. At the first Visual Intelligence workshop on limited training data, he presented his work "When a Few Clicks Make All the Difference: Improving Weakly-supervised Wildlife Detection in UAV Images".