The Visual Intelligence management team, consisting of Line Eikvil (NR), Anne Solberg (UiO), Inger Solheim (UiT) and Robert Jenssen (UiT). Photo: Petter Bjørklund/SFI Visual Intelligence

Four Innovative Years of SFI Visual Intelligence!

2024 marks the research centre's fourth year of researching the next generation of deep learning methodology for extracting knowledge from complex image data. We look back at various innovation highlights achieved in the first half of Visual Intelligence's run.

By The Visual Intelligence Management Team

It has already been four years since SFI Visual Intelligence launched in December 2020. The centre's activities revolve around developing and enabling the next generation of deep learning methodology for extracting knowledge from complex image data.

These years have been filled with fruitful collaborations, innovations, and transfer of knowledge and technology between our research and user partners. Our centre's interdisciplinarity and complementary areas of expertise are at the core of these four years of accomplishments, as they enable the high-quality research that drives deep learning forward.

We are highly grateful for the exceptional efforts from our researchers and partners, who continue to work diligently on making the fields of medicine and health, marine science, energy, and earth observation smarter and more efficient.

Throughout the last four years, Visual Intelligence has produced new methods which better learn from limited training data, better estimate the confidence and uncertainty of their predictions, exploit context and dependencies for more robust and efficient solutions, and provide explainable and reliable predictions. Our research has been published in top journals such as Pattern Recognition, Medical Image Analysis, and Remote Sensing, and presented at top scientific venues such as ICLR, CVPR, ICML and NeurIPS.

As we look forward to another four-year journey, we look back at a handful of innovation highlights from the first half of Visual Intelligence. Together with our user partners, we have used these methods to develop new innovations in medicine and health, marine science, energy, and earth observation.

Medicine and health

Medical images captured from inside the body using scanning and imaging techniques have traditionally been difficult and time-consuming for trained experts to analyze. Our researchers have made great efforts in developing innovative solutions that can assist health professionals in the clinical workflow.

The team of PET imaging, data analysis and machine learning experts in front of the 7T integrated small animal PET/MRI scanner, which is used for data collection in the DLIF project. Photo: UiT.

For instance, our researchers have developed new methods for automatically measuring dimensions of the left ventricle in 2D echocardiography together with GE Vingmed Ultrasound, methods for detecting and handling imperfect image quality in cancer screening in collaboration with the Cancer Registry of Norway, and a deep learning-based approach for estimating the arterial input function in dynamic PET scans together with the University Hospital of North Norway (UNN).

The latter method, dubbed the deep learning-derived input function (DLIF), estimates the input function directly from dynamic PET images. It shows significant potential, as it may remove the need for invasive arterial blood sampling, which is the traditional way of obtaining the input function for these images.
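
To make the idea concrete, the sketch below shows one simple way such an estimator could be set up: a small 1D convolutional network that regresses the arterial input function from image-derived time-activity curves. The architecture, tensor shapes and frame count are illustrative assumptions, not the published DLIF model.

```python
# Minimal sketch (assumed architecture, not the published DLIF model):
# regress an arterial input function from image-derived time-activity curves.
import torch
import torch.nn as nn

class InputFunctionRegressor(nn.Module):
    def __init__(self, n_frames: int = 40):
        super().__init__()
        # Encode the dynamic PET signal (one activity value per time frame).
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Predict the input function sampled at the same time frames.
        self.head = nn.Linear(32 * n_frames, n_frames)

    def forward(self, tac: torch.Tensor) -> torch.Tensor:
        # tac: (batch, 1, n_frames) image-derived time-activity curves
        return self.head(self.encoder(tac))

model = InputFunctionRegressor()
curves = torch.rand(8, 1, 40)      # dummy batch of time-activity curves
predicted_aif = model(curves)      # (8, 40) estimated input functions
```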

Marine science

Marine science tasks that involve, for example, classifying and counting species in an ecosystem are often challenging and time-consuming. Newly developed solutions from Visual Intelligence illustrate deep learning's immense potential to automate and streamline the steps required in such tasks.

A method for explainable marine image analysis has been validated on multiple marine image datasets across modalities, such as the echosounder data shown in this example. Illustration: Changkyu Choi

Our researchers have developed a semi-supervised method for detecting and classifying fish species from acoustic data, a model for detecting sea mammals from aerial imagery, and a model which estimates the uncertainty in acoustic data. A method for explainable marine image analysis has also been validated on multiple marine image datasets across modalities, such as echosounder data and imagery of sea mammals captured by drones. These methods were developed in close collaboration with the Institute of Marine Research.

The semi-supervised method for detecting and classifying fish species reduces the dependency on annotated training data while making efficient use of the annotations that are available. Extensive experiments show that the method achieves performance comparable to supervised deep learning methods trained on fully annotated data, while leveraging only 10% of the annotated training data in addition to unannotated data.
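
As a rough illustration of the semi-supervised idea, the sketch below uses generic pseudo-labelling: a supervised loss on the small annotated subset is combined with a loss on confidently pseudo-labelled unannotated samples. The function name, confidence threshold and training loop are assumptions for illustration; the published method is more sophisticated than this.

```python
# Generic pseudo-labelling sketch (illustrative assumption, not the
# centre's published method) for combining a small labelled set with
# a larger unlabelled set.
import torch
import torch.nn.functional as F

def semi_supervised_step(model, optimizer, labelled, unlabelled, threshold=0.9):
    x_l, y_l = labelled          # e.g. ~10% annotated echosounder crops
    x_u = unlabelled             # remaining unannotated crops

    # Supervised loss on the annotated subset.
    loss = F.cross_entropy(model(x_l), y_l)

    # Pseudo-labels: keep only confident predictions on unlabelled data.
    with torch.no_grad():
        probs = F.softmax(model(x_u), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf > threshold
    if mask.any():
        loss = loss + F.cross_entropy(model(x_u[mask]), pseudo[mask])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```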

Energy

Data from the Earth’s subsurface, such as borehole imagery and seismic data, are especially important in energy exploration. Automated analysis of such data can lead to large savings in time and resources, as well as more efficient and precise oil and gas exploration.

The team working on the seismic CBIR tool, consisting of NR researchers Alba Ordoñez, Anders Waldeland and Theodor Johannes Line Forgaard. Photo: NR

We are very pleased to see new AI methodology within energy coming out of the Visual Intelligence project. Particular highlights include new models for detecting geological phenomena in seismic data with content-based image retrieval (CBIR), estimating the uncertainty when identifying geological layers, and automatically detecting microfossils from microscope images using self-supervised learning. These methods were developed in close collaboration with Equinor.

The seismic CBIR tool employs transformer-based models, pretrained using self-supervised learning, that extract informative embeddings from image data partitioned into crops. The solution holds significant potential as a future tool for helping geologists map out structures within seismic volumes, such as potential hydrocarbon reservoirs.
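
The retrieval step of such a tool can be sketched as follows: crops are embedded with a pretrained encoder and ranked by cosine similarity to a query crop. The encoder is left abstract here, and the function names and shapes are illustrative assumptions rather than a description of the actual tool.

```python
# Sketch of the retrieval step in a CBIR workflow (assumed interface):
# embed seismic crops with a self-supervised encoder and rank them by
# cosine similarity to a query crop.
import torch
import torch.nn.functional as F

def embed_crops(encoder, crops):
    # crops: (N, C, H, W) tensor of seismic image patches
    with torch.no_grad():
        emb = encoder(crops)                  # (N, D) embeddings
    return F.normalize(emb, dim=1)            # unit-length for cosine similarity

def retrieve(query_emb, database_emb, k=5):
    # Cosine similarity reduces to a dot product on normalized embeddings.
    scores = database_emb @ query_emb.squeeze(0)
    return torch.topk(scores, k).indices      # indices of most similar crops
```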

The interactive aspect of such a solution would also allow geologists to assess and give feedback on results in real-time. This tool would potentially allow for mapping any object of interest in the seismic cube.

Earth observation

Satellite images and data captured by radar sensors from above contain enormous amounts of complex information. Throughout this first four-year period, Visual Intelligence researchers have developed methods for improving the monitoring and prediction of hazard risks, for object detection, and for surveying and mapping the ground and sea from the air by exploiting remote sensing images.

An algorithm based on self-supervised learning correctly identifies the locations of two newly erected buildings from bi-temporal aerial images. Illustration: Are C. Jenssen (NR).

Such methods include new approaches for object detection in oblique aerial imagery in collaboration with former user partner Field, new algorithms for vessel and object detection, and methods for oil spill detection and thickness characterization. The latter two were developed in collaboration with Kongsberg Satellite Services.

The oil spill method segments spills in real-life scenarios while aiming for uncertainty quantification and computational efficiency, and it has shown very promising results in initial tests.
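
One common way to obtain per-pixel uncertainty from a segmentation model, sketched below, is Monte Carlo dropout: the same scene is passed through the network several times with dropout active, and the spread of the predictions is summarised as predictive entropy. This is a generic illustration under that assumption, not necessarily the technique used in the centre's oil spill model.

```python
# Monte Carlo dropout sketch for per-pixel segmentation uncertainty
# (generic illustration; the centre's actual approach may differ).
import torch

def mc_dropout_segmentation(model, image, n_samples=20):
    """Return mean class probabilities and predictive entropy per pixel."""
    model.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(image), dim=1) for _ in range(n_samples)
        ])                                    # (n_samples, B, C, H, W)
    mean_probs = probs.mean(dim=0)            # averaged segmentation map
    entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum(dim=1)
    return mean_probs, entropy                # high entropy = uncertain pixels
```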

To another four innovative years!

These highlights constitute only a handful of innovative solutions that have come out of the closely woven collaborations between our research partners and user partners these last four years. As we move towards the second four-year period of Visual Intelligence, we look forward to many more achievements and accomplishments!

With kind regards,

The Visual Intelligence Management Team

Robert Jenssen | Line Eikvil | Anne Solberg | Inger Solheim

Latest news

Visual Intelligence represented at EAGE Annual 2025

June 15, 2025

Alba Ordoñez and Anders U. Waldeland presented ongoing work on seismic foundation models and an interactive seismic interpretation engine at EAGE Annual 2025 in Toulouse, France.

Visual Intelligence PhD Fellow Eirik Østmo featured on Abels tårn

June 13, 2025

Østmo was invited to Abels tårn, one of the largest popular science radio shows in Norway, to answer listener-submitted questions related to artificial intelligence (AI). The live show took place at Blårock Cafe in Tromsø, Norway on June 12th.

New Industrial PhD project with Kongsberg Satellite Services

June 12, 2025

VI industry partner Kongsberg Satellite Services (KSAT) received an Industrial PhD grant from the Research Council of Norway. The project will be closely connected to Visual Intelligence's "Earth observation" innovation area.

Visual Intelligence represented at plankton-themed workshop by The Institute of Marine Research

June 11, 2025

Visual Intelligence researchers Amund Vedal and Arnt Børre Salberg recently presented ongoing centre research at a plankton-themed workshop organized by the Institute of Marine Research (IMR) in Norway.

My Research Stay at Visual Intelligence: Teresa Dorszewski

June 5, 2025

Teresa Dorszewski is a PhD Candidate at the Section for Cognitive Systems at the Technical University of Denmark. She visited Visual Intelligence in Tromsø from January to April 2025.

Visual Intelligence represented at the NORA Annual Conference 2025

June 3, 2025

Centre Director Robert Jenssen was invited to give a keynote and participate in a panel discussion on AI as critical national infrastructure at the NORA Annual Conference 2025 in Halden, Norway.

NRK.no: Refuses to say whether unlabelled puzzles are AI-generated: "They should be honest"

June 2, 2025

Both researchers and a government minister believe artificial intelligence should be clearly labelled. But the publisher behind the puzzle that experts believe is AI-generated says it has nothing to do with how illustrators create their products (Norwegian news article by NRK).

ScienceNorway: This is how AI can contribute to faster treatment of lung cancer

May 30, 2025

Researchers have developed an artificial intelligence model to map specific immune cells in lung cancer tumors. This can lead to less costly examinations and more personalised cancer treatment (English news story on sciencenorway.no).

Now Hiring: 4 PhD Fellows in Deep Learning

May 28, 2025

The Department of Physics and Technology at UiT The Arctic University of Norway is pleased to announce 4 exciting PhD Fellowships within machine learning at SFI Visual Intelligence. Application deadline: June 17th.

VG: How AI can revolutionise lung cancer treatment

May 19, 2025

Norwegian researchers have developed artificial intelligence that can rapidly analyse lung cancer. Experts explain how this can contribute to more efficient and personalised treatment (Norwegian news article on vg.no).

Visual Intelligence evaluated by international experts: "The centre operates at an excellent level"

April 29, 2025

After four years of operation, an international panel of AI experts was appointed to assess Visual Intelligence's progress and results. The evaluation gave excellent remarks on the centre's scientific quality and innovation output.

Visual Intelligence at Norsk Radiografforbund's mammography symposium

April 24, 2025

Senior Researcher Fredrik Dahl recently gave a talk about Norsk Regnesentral's work on developing AI algorithms for automatic analysis of image quality and cancer detection at Norsk Radiografforbund's mammography symposium in Oslo.