Medicine and health

This innovation area focuses on developing more efficient deep learning methods for diagnostic and decision support for diseases such as cardiovascular disease and cancer.

Motivation

Medical images captured from inside the body using various scanning and imaging techniques have traditionally been challenging and time-consuming for trained experts to analyze. Well-performing deep learning models in the medical domain have the potential to assist healthcare professionals by increasing diagnostic certainty and streamlining the analysis of medical images.

Our innovations

Visual Intelligence researchers have developed several innovations that aim to assist healthcare professionals in the clinical workflow. For instance, our research efforts have resulted in novel deep learning methods for:

  • automatically measuring the left ventricle in 2D echocardiography, in collaboration with GE Vingmed Ultrasound.
  • detecting cancer in mammography images, together with the Cancer Registry of Norway.
  • estimating the arterial input function in dynamic PET scans, in collaboration with the University Hospital of Northern Norway (UNN).
  • improving cancer diagnostic accuracy via digitalized pathology, together with the Cancer Registry of Norway.
  • augmenting CT images by clipping intensity values tailored to the characteristics of specific organs, such as the liver, in collaboration with UNN.
  • content-based image retrieval of CT liver images using self-supervised learning, together with UNN.

Addressing research challenges

Major obstacles to developing deep learning methods in medicine and health include the limited availability of training data, the difficulty of estimating confidence and uncertainty in model predictions, and a lack of explainability and reliability. The innovations mentioned above address these research challenges in different ways, enabling progress within this innovation area.

For instance, research on learning from limited data is at the core of the clinically inspired data augmentation technique for CT images mentioned above. The method also leverages context and dependencies by exploiting knowledge about the signal-generating process.
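To make the idea of organ-tailored intensity clipping concrete, the sketch below clips CT intensities (in Hounsfield units) to a window around an organ of interest before rescaling. The window values and the function name `clip_augment` are illustrative assumptions for this sketch; the clinically informed windows used in the actual method may differ.

```python
import numpy as np

# Illustrative Hounsfield unit (HU) windows; real clinical choices are
# organ- and study-specific and may differ from these assumed values.
HU_WINDOWS = {
    "liver": (-45, 105),    # hypothetical soft-tissue window around the liver
    "lung": (-1000, -400),  # hypothetical lung window
}

def clip_augment(ct_slice: np.ndarray, organ: str) -> np.ndarray:
    """Clip CT intensities to an organ-tailored HU window, rescale to [0, 1]."""
    lo, hi = HU_WINDOWS[organ]
    clipped = np.clip(ct_slice, lo, hi)
    return (clipped - lo) / (hi - lo)
```

Clipping to an organ-specific window discards intensity variation that is irrelevant to the target organ, which can act as a form of domain-informed augmentation when training on limited data.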

Research on explainable and reliable AI is central to our method for detecting cancer in mammography images, as well as to our novel content-based CT image retrieval method.
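Content-based image retrieval over learned embeddings can be sketched as a nearest-neighbor search: each image is encoded into a feature vector (e.g. by a self-supervised model), and queries are matched by cosine similarity. The function below is a minimal, generic sketch of that retrieval step, not the specific framework developed with UNN; names and shapes are assumptions.

```python
import numpy as np

def retrieve(query_emb: np.ndarray, gallery_embs: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k gallery embeddings most similar to the query.

    query_emb: (d,) feature vector of the query image.
    gallery_embs: (n, d) matrix of feature vectors for the image database.
    Similarity is cosine similarity on L2-normalized embeddings.
    """
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity per gallery image
    return np.argsort(-sims)[:k]      # indices sorted by decreasing similarity
```

In practice the gallery embeddings would be precomputed offline, and an approximate nearest-neighbor index would replace the brute-force sort for large databases.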

Synergies within the innovation area and across other areas

When developing deep learning solutions for the concrete medical and health challenges our user partners face, transferring knowledge and methodologies across innovation areas is essential. Our methods within medicine and health synergize both with other work in this area and with our three other innovation areas.

For instance, our semi-automatic landmark prediction method for cardiac ultrasound depends on context provided by the scan line in the echocardiogram. It is inspired by other solutions that leverage context in the form of anatomical knowledge, e.g. for cancer detection in mammography.

Self-supervised deep learning, which several of our medical innovations are based on, has proven useful not only within medicine and health, but also in “Marine science”, “Energy”, and “Earth observation”. For example, the framework for CT image retrieval shares similarities with a content-based image retrieval system for seismic data.

Highlighted publications

Using Machine Learning to Quantify Tumor Infiltrating Lymphocytes in Whole Slide Images

February 14, 2022
By Nikita Shvetsov, Morten Grønnesby, Edvard Pedersen, Kajsa Møllersen, Lill-Tove Rasmussen Busund, Ruth Schwienbacher, Lars Ailo Bongo, Thomas K. Kilvaer

The Risk of Imbalanced Datasets in Chest X-ray Image-based Diagnostics

February 1, 2022
By Srishti Gautam, Marina M.-C. Höhne, Stine Hansen, Robert Jenssen and Michael Kampffmeyer

Other publications

A lightweight and extensible cell segmentation and classification model for H&E-stained cancer whole slide images

By Nikita Shvetsov, Thomas Karsten Kilvær, Masoud Tafavvoghi, Anders Sildnes, Kajsa Møllersen, Lill-Tove Rasmussen Busund, Lars Ailo Bongo
Published in Computers in Biology and Medicine, Volume 199, 2025, on December 1, 2025

FLEXtime: Filterbank Learning to Explain Time Series

By Thea Brüsch, Kristoffer Wickstrøm, Mikkel N. Schmidt, Robert Jenssen, Tommy Sonne Alstrøm
Published in Explainable Artificial Intelligence. xAI 2025. Communications in Computer and Information Science, vol 2579. Springer, on October 14, 2025

Low-Rank Adaptations for increased Generalization in Foundation Model features

By Vilde Schulerud Bøe, Andreas Kleppe, Sebastian Foersch, Daniel-Christoph Wagner, Lill-Tove Rasmussen Busund, Adín Ramírez Rivera
Published in MICCAI Workshop on Computational Pathology with Multimodal Data (COMPAYL), Daejeon, South Korea, 2025, on September 27, 2025

VMRA-MaR: An Asymmetry-Aware Temporal Framework for Longitudinal Breast Cancer Risk Prediction

By Zijun Sun, Solveig Thrun, Michael Kampffmeyer
Published in Medical Image Computing and Computer Assisted Intervention – MICCAI 2025. Lecture Notes in Computer Science, vol 15974. Springer, on September 18, 2025

WiseLVAM: A Novel Framework For Left Ventricle Automatic Measurements

By Durgesh Kumar Singh, Qing Cao, Sarina Thomas, Ahcène Boubekki, Robert Jenssen, Michael Kampffmeyer
Published in Simplifying Medical Ultrasound, ASMUS 2025 Workshop, MICCAI 2025, on September 17, 2025