Limited training data

Visual Intelligence aims to develop new deep learning models that solve problems involving complex images from limited training data.

Motivation

The performance of deep learning methods steadily improves with more training data. However, the availability of suitable training data is often limited. Additionally, labelling complex image data requires domain experts and is both costly and time-consuming.

A majority of our user partners stress this research challenge as an immediate need. To succeed in our innovation areas, it is essential to develop new methodologies that learn from limited and complex training data.

Solving research challenges through new deep learning methodology

Methods that exploit weak, noisy and incompletely labelled data, be it through semi-supervised or self-supervised approaches, make up a significant portion of our portfolio. Examples include the following:

• A self-supervised approach for content-based image retrieval of CT liver images.

• Explainable marine image analysis methods validated on multiple marine datasets, such as multi-frequency echosounder data and aerial imagery of sea mammals captured by drones.

• A self-supervised method for automatically detecting and classifying microfossils.

• Methods for automatic building change detection in aerial images based on self-supervised learning.

These methods represent time- and cost-effective approaches that make deep learning models less reliant on large amounts of labelled data. They improve the models’ efficiency and ability to generalize, making them more applicable in real-world settings.
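
To make the idea concrete, the sketch below illustrates one common self-supervised strategy, contrastive pretraining, in PyTorch. Two augmented views of each unlabelled image are pulled together in embedding space while other images are pushed apart, so the encoder learns useful features without any labels. This is a generic, minimal illustration of the principle, not code from the projects listed above; the encoder architecture, augmentations and hyperparameters are placeholder assumptions.

```python
# Minimal sketch of self-supervised contrastive pretraining (SimCLR-style).
# Illustrative only: the encoder, augmentations and settings are assumptions.

import torch
import torch.nn.functional as F
from torch import nn


class SmallEncoder(nn.Module):
    """Toy convolutional encoder; a real system would use e.g. a ResNet."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.projection = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.projection(self.features(x))


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent loss: the two views of an image are positives, all other images negatives."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)       # (2N, D), unit-norm embeddings
    sim = z @ z.t() / temperature                            # scaled cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))               # ignore self-similarity
    # For sample i, the positive is the other view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    encoder = SmallEncoder()
    optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    images = torch.rand(8, 3, 64, 64)                  # stand-in for an unlabelled batch
    view1 = images + 0.05 * torch.randn_like(images)   # stand-in augmentations; real
    view2 = images + 0.05 * torch.randn_like(images)   # pipelines use crops, flips, etc.
    loss = nt_xent_loss(encoder(view1), encoder(view2))
    loss.backward()
    optimizer.step()
    print(f"contrastive loss: {loss.item():.3f}")
```

After pretraining on unlabelled images in this way, the encoder can be fine-tuned on the small labelled set available for the downstream task, which is the typical route to reducing labelling requirements.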

Highlighted publications

Modular Superpixel Tokenization in Vision Transformers
By Marius Aasan, Odd Kolbjørnsen, Anne Schistad Solberg, Adín Ramirez Rivera
August 28, 2024

Reinventing Self-Supervised Learning: The Magic of Memory in AI Training
By Thalles Silva, Helio Pedrini, Adín Ramírez Rivera
July 29, 2024

Other publications

Mixing up contrastive learning: Self-supervised representation learning for time series
By Kristoffer Wickstrøm, Michael Kampffmeyer, Karl Øyvind Mikalsen, Robert Jenssen
Published in Pattern Recognition Letters, Volume 155, March 2022, Pages 54-61
February 12, 2022

Anomaly Detection-Inspired Few-Shot Medical Image Segmentation Through Self-Supervision With Supervoxels
By Stine Hansen, Srishti Gautam, Robert Jenssen, Michael Kampffmeyer
Published in Medical Image Analysis
February 11, 2022

Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN
By Zhenyu Xie, Zaiyu Huang, Fuwei Zhao, Haoye Dong, Michael Kampffmeyer, Xiaodan Liang
Published in Advances in Neural Information Processing Systems 34 (NeurIPS 2021)
December 23, 2021

Machine Learning + Marine Science: Critical Role of Partnerships in Norway
By Nils Olav Handegard, Line Eikvil, Robert Jenssen, Michael Kampffmeyer, Arnt-Børre Salberg, Ketil Malde
Published in Journal of Ocean Technology, 2021
October 6, 2021

Self-supervised Multi-task Representation Learning for Sequential Medical Images
By N. Dong, M. Kampffmeyer, I. Voiculescu
Published in Oliver, N., Pérez-Cruz, F., Kramer, S., Read, J., Lozano, J.A. (eds.), Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2021. Lecture Notes in Computer Science, vol. 12977. Springer, Cham
September 11, 2021