The Visual Intelligence management team, consisting of Line Eikvil (NR), Anne Solberg (UiO), Inger Solheim (UiT) and Robert Jenssen (UiT). Photo: Petter Bjørklund/SFI Visual Intelligence


Four Innovative Years of SFI Visual Intelligence!

2024 marks the research centre's fourth year of researching the next generation of deep learning methodology for extracting knowledge from complex image data. We look back at a selection of innovation highlights from the first half of Visual Intelligence's funding period.

By The Visual Intelligence Management Team

It has already been four years since SFI Visual Intelligence launched in December 2020. The centre’s activities revolve around developing and enabling the next generation of deep learning methodology for extracting knowledge from complex image data.

These years have been filled with fruitful collaborations, innovations, and transfer of knowledge and technology between our research and user partners. Our centre's interdisciplinarity and complementary areas of expertise are core to our four years of accomplishments, as they enable the high-quality research that drives deep learning forward.

We are highly grateful for the exceptional efforts from our researchers and partners, who continue to work diligently on making the fields of medicine and health, marine science, energy, and earth observation smarter and more efficient.

Throughout the last four years, Visual Intelligence has produced new methods which better learn from limited training data, better estimate the confidence and uncertainty of their predictions, exploit context and dependencies for more robust and efficient solutions, and provide explainable and reliable predictions. Our research has been published in top journals such as Pattern Recognition, Medical Image Analysis, and Remote Sensing, and presented at top scientific venues such as ICLR, CVPR, ICML and NeurIPS.

As we look forward to another four-year journey, we look back at a handful of innovation highlights from the first half of Visual Intelligence. Together with our user partners, we have used these methods to develop new innovations in medicine and health, marine science, energy, and earth observation.

Medicine and health

Medical images captured from inside the body using scanning and imaging techniques have traditionally been difficult and time-consuming for trained experts to analyze. Our researchers have made significant efforts in developing innovative solutions that can assist health professionals in the clinical workflow.

The team of PET imaging, data analysis and machine learning experts in front of the 7T integrated small animal PET/MRI scanner, which is used for data collection in the DLIF project. Photo: UiT.

For instance, our researchers have developed a method for automatically measuring dimensions of the left ventricle in 2D echocardiography together with GE Vingmed Ultrasound, methods for detecting and handling imperfect image quality in cancer screening in collaboration with the Cancer Registry of Norway, and a deep learning-based approach for estimating the arterial input function in dynamic PET scans together with the University Hospital of North Norway (UNN).

The latter method, dubbed the deep learning-derived input function (DLIF), involves using deep learning to estimate the input function directly from dynamic PET images. The method shows significant potential as it may overcome the need for invasive arterial blood sampling, which is the traditional way of estimating the input function in these images.
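
To illustrate the general idea, the sketch below shows how such an approach could be set up in PyTorch: a small 1D convolutional network regresses the arterial input function from an image-derived time-activity curve. The architecture, layer sizes and variable names are illustrative assumptions and do not reproduce the published DLIF model.

# Illustrative sketch only: a toy network that maps an image-derived
# time-activity curve (TAC) from a dynamic PET scan to a predicted
# arterial input function (AIF). Layer sizes and names are assumptions,
# not the published DLIF architecture.
import torch
import torch.nn as nn

class ToyInputFunctionNet(nn.Module):
    def __init__(self, n_frames: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * n_frames, 256), nn.ReLU(),
            nn.Linear(256, n_frames),   # one predicted AIF value per time frame
        )

    def forward(self, tac: torch.Tensor) -> torch.Tensor:
        # tac has shape (batch, 1, n_frames)
        return self.net(tac)

model = ToyInputFunctionNet()
tac = torch.rand(8, 1, 64)                   # dummy batch of time-activity curves
predicted_aif = model(tac)                   # shape (8, 64)
# During training, the target would be an arterially blood-sampled AIF.
loss = nn.functional.mse_loss(predicted_aif, torch.rand(8, 64))
loss.backward()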

Marine science

Marine science tasks involving, for example, the classification and counting of species in an ecosystem are often challenging and time-consuming. Newly developed solutions from Visual Intelligence illustrate deep learning’s immense potential to automate and streamline the steps required in such tasks.

A method for explainable marine image analysis has been validated on multiple marine image datasets across modalities, such as echosounder data like this pictured example. Illustration: Changkyu Choi

Our researchers have developed a semi-supervised method for detecting and classifying fish species from acoustic data, a model for detecting sea mammals from aerial imagery, and a model that estimates the uncertainty in acoustic data. A method for explainable marine image analysis has also been validated on multiple marine image datasets across modalities, such as echosounder data and imagery of sea mammals captured by drones. These methods were developed in close collaboration with the Institute of Marine Research.

The semi-supervised method for detecting and classifying fish species reduces the dependency on annotated training data while efficiently making use of the annotations that are available. Extensive experiments show that the method achieves performance comparable to supervised deep learning methods trained on fully annotated data, while leveraging only 10% of the annotated training data in addition to unannotated data.
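
As a rough illustration of how a small labelled set can be combined with unlabelled data, the sketch below uses plain pseudo-labelling, one common semi-supervised recipe. The actual Visual Intelligence method may rely on a different strategy; all names and thresholds here are illustrative assumptions.

# Illustrative sketch of pseudo-labelling, one common semi-supervised
# recipe: supervised loss on the small labelled set, plus a loss on
# confident predictions for unlabelled samples. Not the centre's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

def semi_supervised_step(model: nn.Module,
                         x_labelled: torch.Tensor, y_labelled: torch.Tensor,
                         x_unlabelled: torch.Tensor,
                         threshold: float = 0.95,
                         unlabelled_weight: float = 0.5) -> torch.Tensor:
    # Supervised loss on the ~10% of the data that carries annotations.
    loss = F.cross_entropy(model(x_labelled), y_labelled)

    # Pseudo-labels: keep only unlabelled samples the model is confident about.
    with torch.no_grad():
        probs = F.softmax(model(x_unlabelled), dim=1)
        confidence, pseudo_labels = probs.max(dim=1)
        keep = confidence > threshold

    if keep.any():
        loss = loss + unlabelled_weight * F.cross_entropy(
            model(x_unlabelled[keep]), pseudo_labels[keep])
    return loss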

Energy

Data from the Earth’s subsurface, such as borehole imagery and seismic data, are especially important in energy exploration. Automated analysis of such data can lead to large savings in time and resources, as well as more efficient and precise oil and gas exploration.

The team working on the seismic CBIR tool, consisting of NR researchers Alba Ordoñez, Anders Waldeland, and Theodor Johannes Line Forgaard. Photo: NR

We are very pleased to see new AI methodology within energy coming out of the Visual Intelligence project. Particular highlights include new models for detecting geological phenomena in seismic data with content-based image retrieval (CBIR), estimating the uncertainty when identifying geological layers, and automatically detecting microfossils from microscope images using self-supervised learning. These methods were developed in close collaboration with Equinor.

The seismic CBIR tool employs transformer-based models, pretrained using self-supervised learning, that extract informative embeddings from image data partitioned into crops. The solution holds significant potential as a future tool for helping geologists map out structures within seismic volumes, such as potential hydrocarbon reservoirs.

The interactive aspect of such a solution would also allow geologists to assess and give feedback on results in real time. The tool could potentially be used to map any object of interest in the seismic cube.
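
The retrieval step of such a workflow can be sketched as follows: crops from the seismic volume are embedded with a pretrained encoder and ranked by cosine similarity to a query crop. The encoder below is only a stand-in for the self-supervised transformer models mentioned above, and all sizes and names are illustrative assumptions.

# Illustrative sketch of the retrieval step in a CBIR workflow.
# The encoder is a stand-in for a transformer pretrained with
# self-supervised learning; sizes and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128))

crops = torch.rand(1000, 1, 64, 64)    # crops partitioned from a seismic volume
query = torch.rand(1, 1, 64, 64)       # crop containing a structure of interest

with torch.no_grad():
    database = F.normalize(encoder(crops), dim=1)    # one embedding per crop
    q = F.normalize(encoder(query), dim=1)
    similarity = (database @ q.T).squeeze(1)         # cosine similarity to the query
    top_matches = similarity.topk(10).indices        # candidates to show the geologist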

Earth observation

Satellite images and data captured by radar sensors from above contain enormous amounts of complex information. Throughout this first four-year period, Visual Intelligence researchers have developed methods for improving the monitoring and prediction of hazard risks, for object detection, and for surveying and mapping the ground and sea from the air by exploiting remote sensing images.

An algorithm based on self-supervised learning correctly identifies the locations of two newly-erected buildings from bi-temporal aerial images. Illustration: Are C. Jenssen (NR).

Such methods include new approaches for object detection in oblique aerial imagery, developed in collaboration with former user partner Field, new algorithms for vessel and object detection, and a method for oil spill detection and thickness characterization. The latter two have been developed in collaboration with Kongsberg Satellite Services.

The oil spill method segments oil spills in real-life scenarios while aiming for uncertainty quantification and computational efficiency, and has shown very promising results in initial tests.
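
One standard way to attach per-pixel uncertainty to a segmentation network is Monte Carlo dropout, sketched below. It is shown purely to illustrate the idea of uncertainty quantification and is not necessarily the technique used in the centre's oil spill method; all names and shapes are illustrative assumptions.

# Illustrative sketch of Monte Carlo dropout for per-pixel uncertainty
# in segmentation. Shown only to illustrate uncertainty quantification;
# not necessarily the technique used in the oil spill method.
import torch
import torch.nn as nn

seg_net = nn.Sequential(               # stand-in for a trained segmentation network
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.2),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)

def mc_dropout_segment(x: torch.Tensor, n_samples: int = 20):
    seg_net.train()                    # keep dropout active at inference time
    with torch.no_grad():
        samples = torch.stack([seg_net(x) for _ in range(n_samples)])
    # Mean over samples gives the oil-spill probability map;
    # variance gives a per-pixel uncertainty map.
    return samples.mean(dim=0), samples.var(dim=0)

sar_scene = torch.rand(1, 1, 128, 128)               # dummy radar image
probability_map, uncertainty_map = mc_dropout_segment(sar_scene)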

To another four innovative years!

These highlights constitute only a handful of innovative solutions that have come out of the closely woven collaborations between our research partners and user partners these last four years. As we move towards the second four-year period of Visual Intelligence, we look forward to many more achievements and accomplishments!

With kind regards,

The Visual Intelligence Management Team

Robert Jenssen | Line Eikvil | Anne Solberg | Inger Solheim
