Image: MICCAI

Three Visual Intelligence-authored papers accepted for leading AI conference on medical imaging

Visual Intelligence will be well represented at MICCAI 2025—one of the leading AI conferences on medical imaging and computer assisted intervention—with three recently accepted research papers.

By Petter Bjørklund, Communications Advisor at SFI Visual Intelligence

The annual International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) attracts the world's leading biomedical scientists, engineers, and clinicians from a wide range of disciplines associated with medical imaging and computer assisted intervention. MICCAI has a historical acceptance rate of 30 per cent.

Centre Director Robert Jenssen is delighted by Visual Intelligence's representation at this year's MICCAI conference, which will be held in Daejeon, South Korea, from September 23rd to 27th.

"This research demonstrates Visual Intelligence's commitment to advancing medical imaging through the development of novel deep learning methods that deliver explainable and reliable results. I congratulate our researchers whose papers were accepted to the highly competitive MICCAI conference," Jenssen says.

Explicit alignment strategies in longitudinal mammography for breast cancer risk prediction

PhD Fellow Solveig Thrun is one of the VI researchers who got their research paper accepted for MICCAI 2025. Her paper—titled "Reconsidering Explicit Longitudinal Mammography Alignment for Enhanced Breast Cancer Risk Prediction"—investigates explicit alignment strategies in longitudinal mammography for breast cancer risk prediction. It focuses on whether alignment should occur in the input image space or the representation space.

PhD Fellow Solveig Thrun. Photo: Private

As recent deep learning approaches increasingly leverage the temporal aspect of screening to track breast tissue changes over time, spatial alignment across time points has become essential. However, the optimal design for explicit alignment in mammography remains underexplored.

"Our findings provide new insights into alignment choices and their impact on predictive performance, offering practical guidance for future work in this domain," says Thrun.

The results show that jointly optimizing explicit alignment in the representation space alongside risk prediction, as done in current state-of-the-art methods, leads to a trade-off between alignment quality and predictive performance.

"This demonstrates that image-level alignment outperforms representation-level approaches, producing more accurate deformation fields and improving risk prediction accuracy," Thrun explains.

Tied prototype model for few-shot medical image segmentation

The second Visual Intelligence-authored paper—titled "Tied Prototype Model for Few-Shot Medical Image Segmentation"—addresses the challenge of segmenting important anatomical structures in medical images with limited labels. The paper is authored by VI researchers Hyeongji Kim and Michael Kampffmeyer, together with Stine Hansen.

Postdoctoral Fellow Hyeongji Kim. Photo: Petter Bjørklund / SFI Visual Intelligence

Their work introduces the Tied Prototype Model (TPM), a novel prototype-based approach that addresses key limitations of ADNet, a method previously proposed by Visual Intelligence researchers.

Unlike ADNet, which relies on a single prototype, focuses on binary classification, and uses fixed thresholds, TPM supports multiple prototypes and enables multiclass training and segmentation. These methodological advances offer a promising foundation for future work in medical image segmentation.

"Our theoretical analysis establishes the equivalence between ADNet and a special case of TPM, showing that our TPM is a generalization of ADNet. Our experiments show that each of the three main contributions—multi-prototype extension, multiclass training, and the proposed ideal thresholding strategy—consistently enhanced segmentation accuracy. This highlights TPM's effectiveness and adaptability in few-shot medical image segmentation," Kim says.

Novel temporal framework for longitudinal breast cancer risk prediction

The third paper—titled "VMRA-MaR: An Asymmetry-Aware Temporal Framework for Longitudinal Breast Cancer Risk Prediction"—introduces a novel framework for breast cancer risk prediction. The paper is authored by Zijun Sun and VI researchers Solveig Thrun and Michael Kampffmeyer.

Zijun Sun is a Master's Student at the University of Bologna. Photo: Private

The framework integrates a Vision Mamba RNN (VMRNN) to model dynamic breast tissue changes over time and an asymmetry module to detect bilateral differences, aiming to improve early cancer detection and personalize screening strategies.

The results show that the framework significantly improves breast cancer risk prediction, particularly for challenging high-density breast cases and at extended follow-up intervals.

"The model's ability to effectively capture intricate tissue dynamics through its recurrent temporal and asymmetry-aware approach contributes to its superior performance compared to previous state-of-the-art methods," Sun explains.
