Image: Torger Grytå / Petter Bjørklund / Private

VI Seminar #93: Underwater Uncertainty: From Human Labelling to Synthesizing Turbidity

The program will be available shortly. Please check back later.

Presented by Postdoctoral Research Fellow Malte Pedersen and Doctoral Research Fellow Vasiliki Ismiroglou, both of the Visual Analysis and Perception Lab at Aalborg University and the Pioneer Centre for AI, Denmark

Presentation 1

From Ambiguity to Insights: Annotating and Using Visual Data from Complex Marine Environments

Presenter: Malte Pedersen

Abstract: Visual data acquired in underwater environments is subject to significant variability and degradation caused by many different factors. Visibility often degrades to a point where object boundaries and identities become indistinguishable within a few meters. Meanwhile, data collection in marine environments is logistically challenging, often resulting in limited and suboptimal datasets compared to terrestrial counterparts.

Abstract (continued): Beyond acquisition, manual annotation is often a necessity for training machine learning models or validating their performance. However, unlike many terrestrial settings, true ground truth is rarely accessible underwater, which makes manual labeling inherently ambiguous and subjective. Additional challenges, such as turbidity, marine snow, occlusion, and general perceptual uncertainty, further complicate the process. This raises some fundamental questions: when can an object be considered sufficiently identifiable, how is uncertainty represented in the annotations, and how does this uncertainty affect downstream machine learning tasks? In this talk, we explore the nature of uncertainty in underwater visual data, with a particular focus on annotation practices.

Presentation 2

Beyond Aesthetics: Quantifying Information Loss in Turbid Scenes

Presenter: Vasiliki Ismiroglou

Abstract: Underwater visibility can deteriorate rapidly in turbid conditions, yet the impact of turbidity on computer vision models remains poorly understood. Acquiring confident labels for images captured in low-visibility environments is inherently challenging, and very little such data currently exists. As a result, synthetic datasets are often used to train models to cope with turbidity; however, traditional synthesis methods relying on the image formation model fail to capture the structural degradation of visual information caused by natural turbidity, limiting model robustness under real-world conditions.
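The image-formation-based synthesis mentioned above is commonly written as I(x) = J(x)·e^(−βd(x)) + B∞·(1 − e^(−βd(x))): the clean scene radiance J is attenuated exponentially with distance d, and the lost signal is replaced by backscattered veiling light B∞. A minimal sketch of that model, assuming a single wavelength-independent attenuation coefficient and uniform veiling light (the function name and parameter values are illustrative, not from the talk):

```python
import numpy as np

def synthesize_turbidity(clean, depth, beta=0.8, backscatter=0.7):
    """Simplified underwater image formation model:
    observed = direct transmission + backscattered veiling light.

    clean:       HxW array, scene radiance in [0, 1]
    depth:       HxW array, camera-to-scene distance in metres
    beta:        attenuation coefficient (higher = more turbid)
    backscatter: veiling light intensity B_inf in [0, 1]
    """
    transmission = np.exp(-beta * depth)          # e^{-beta * d(x)}
    return clean * transmission + backscatter * (1.0 - transmission)

# toy example: a bright surface at increasing distance fades toward the
# backscatter value as transmission drops
clean = np.full((4, 4), 1.0)
depth = np.tile(np.array([0.0, 1.0, 3.0, 10.0]), (4, 1))
turbid = synthesize_turbidity(clean, depth)
```

Because every pixel converges smoothly to the same veiling-light value, this kind of synthesis washes out contrast globally but cannot reproduce the spatially structured degradation (e.g. suspended particles and patchy scattering) that natural turbidity introduces, which is the gap the abstract points to.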

In this talk, we examine how turbidity alters the structural information in underwater images and the consequences for tasks such as instance segmentation. We introduce a new metric, PCD, derived from phase congruency, which quantifies information loss in a manner that strongly correlates with model performance, unlike commonly used metrics that often fail to reflect real-world challenges. This work highlights the importance of evaluating computer vision models under realistic conditions and provides a step toward more robust underwater perception systems.
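PCD itself is the speakers' contribution and is not specified in the abstract, but the phase congruency it builds on is standard: it marks locations where the Fourier components of a signal are maximally in phase, which is where perceptually meaningful features such as edges occur. A minimal 1D sketch of the classic Morrone–Owens formulation (local energy divided by total amplitude, with no noise compensation or scale weighting; this is background illustration, not the PCD metric from the talk):

```python
import numpy as np

def phase_congruency_1d(signal):
    """Crude 1D phase congruency: magnitude of the analytic signal
    (local energy) divided by the sum of Fourier component amplitudes.
    Values approach 1 where all components are in phase."""
    f = signal - signal.mean()
    n = len(f)
    F = np.fft.fft(f)
    # build the analytic signal: zero negative frequencies, double positives
    H = np.zeros(n, dtype=complex)
    H[0] = F[0]
    H[1:n // 2] = 2.0 * F[1:n // 2]
    H[n // 2] = F[n // 2]
    analytic = np.fft.ifft(H)
    energy = np.abs(analytic)            # |sum_k A_k e^{i phi_k(x)}|
    amp_sum = np.sum(np.abs(H)) / n      # sum_k A_k
    return energy / amp_sum

# at a step edge all Fourier components are in phase, so phase
# congruency peaks there and drops on the flat plateaus
x = np.zeros(256)
x[128:] = 1.0
pc = phase_congruency_1d(x)
```

The appeal of a phase-based measure in this context is that turbidity scrambles exactly this cross-frequency phase alignment, so a drop in phase congruency tracks the loss of structural (edge and contour) information rather than mere contrast or color shifts that aesthetic quality metrics emphasize.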

This seminar is open to members of the consortium. If you want to participate as a guest, please sign up.

Sign up here