
Three Visual Intelligence-authored papers accepted for leading AI conference on medical imaging

Visual Intelligence will be well represented at MICCAI 2025—one of the leading AI conferences on medical imaging and computer assisted intervention—with three recently accepted research papers.

By Petter Bjørklund, Communications Advisor at SFI Visual Intelligence

The annual International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) attracts the world's leading biomedical scientists, engineers, and clinicians from a wide range of disciplines associated with medical imaging and computer assisted intervention. MICCAI has a historical acceptance rate of 30 per cent.

Centre Director Robert Jenssen is delighted by Visual Intelligence's representation at this year's MICCAI conference, which is organized in Daejeon, South Korea from September 23rd to 27th.

"This research demonstrates Visual Intelligence's commitment to advancing medical imaging through the development of novel deep learning methods that deliver explainable and reliable results. I congratulate our researchers whose papers were accepted to the highly competitive MICCAI conference," Jenssen says.

Explicit alignment strategies in longitudinal mammography for breast cancer risk prediction

PhD Fellow Solveig Thrun is one of the VI researchers who got their research paper accepted for MICCAI 2025. Her paper—titled "Reconsidering Explicit Longitudinal Mammography Alignment for Enhanced Breast Cancer Risk Prediction"—investigates explicit alignment strategies in longitudinal mammography for breast cancer risk prediction. It focuses on whether alignment should occur in the input image space or the representation space.

PhD Fellow Solveig Thrun. Photo: Private

As recent deep learning approaches increasingly leverage the temporal aspect of screening to track breast tissue changes over time, spatial alignment across time points has become essential. However, the optimal design for explicit alignment in mammography remains underexplored.

"Our findings provide new insights into alignment choices and their impact on predictive performance, offering practical guidance for future work in this domain," says Thrun.

The results show that jointly optimizing explicit alignment in the representation space alongside risk prediction, as done in current state-of-the-art methods, leads to a trade-off between alignment quality and predictive performance.

"This demonstrates that image-level alignment outperforms representation-level approaches, producing more accurate deformation fields and improving risk prediction accuracy," Thrun explains.
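The distinction between the two alignment choices can be illustrated with a minimal numpy sketch. Everything here is a toy assumption, not the paper's pipeline: the "encoder" is plain average pooling, the "deformation field" is a circular one-pixel translation, and the two screening rounds are random arrays. The point is only the ordering of operations: image-level alignment warps pixels before encoding, while representation-level alignment warps the coarser feature map afterwards.

```python
import numpy as np

def extract_features(img, pool=2):
    # Toy "encoder": 2x2 average pooling stands in for a CNN feature map.
    h, w = img.shape
    return img.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))

def warp(arr, shift):
    # Toy deformation field: a circular translation via np.roll.
    return np.roll(arr, shift, axis=(0, 1))

rng = np.random.default_rng(0)
prior = rng.random((8, 8))           # prior screening round
current = warp(prior, (1, 1))        # same tissue, shifted between visits

# Image-level alignment: undo the shift in pixel space, then encode.
feat_image_level = extract_features(warp(current, (-1, -1)))

# Representation-level alignment: encode first, then warp the feature map.
# One feature cell spans `pool` pixels, so the warp is inherently coarser.
feat_repr_level = warp(extract_features(current), (-1, -1))

ref = extract_features(prior)
err_image = np.abs(feat_image_level - ref).mean()
err_repr = np.abs(feat_repr_level - ref).mean()
```

In this contrived setting the image-level route recovers the prior round's features exactly, while the feature-level warp cannot resolve sub-cell shifts, which loosely mirrors the trade-off the paper reports.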

Tied prototype model for few-shot medical image segmentation

The second Visual Intelligence-authored paper—titled "Tied Prototype Model for Few-Shot Medical Image Segmentation"—addresses the challenge of segmenting important anatomical structures in medical images with limited labels. The paper is authored by VI researchers Hyeongji Kim and Michael Kampffmeyer, together with Stine Hansen.

Postdoctoral Fellow Hyeongji Kim. Photo: Petter Bjørklund / SFI Visual Intelligence

Their work introduces the Tied Prototype Model (TPM), a novel prototype-based approach that addresses key limitations of ADNet, a method previously proposed at Visual Intelligence.

Unlike ADNet, which relies on a single prototype, focuses on binary classification, and uses fixed thresholds, TPM supports multiple prototypes and enables multiclass training and segmentation. These methodological advances offer a promising foundation for future work in medical image segmentation.

"Our theoretical analysis establishes the equivalence between ADNet and a special case of TPM, showing that our TPM is a generalization of ADNet. Our experiments show that each of the three main contributions—multi-prototype extension, multiclass training, and the proposed ideal thresholding strategy—consistently enhanced segmentation accuracy. This highlights TPM's effectiveness and adaptability in few-shot medical image segmentation," Kim says.
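The general idea behind prototype-based few-shot segmentation can be sketched in a few lines of numpy. This is a generic nearest-prototype classifier, not TPM itself: the synthetic "pixel features", class centers, and single-prototype-per-class setup are all assumptions for illustration, and the paper's tied formulation and ideal thresholding are not modeled here.

```python
import numpy as np

def class_prototypes(features, labels, n_classes):
    # One mean-feature prototype per class from labeled support pixels.
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(n_classes)])

def segment(features, prototypes):
    # Cosine similarity of every pixel feature to every prototype,
    # then argmax over classes -- multiclass in a single pass.
    f = features / np.linalg.norm(features, axis=-1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=-1, keepdims=True)
    return (f @ p.T).argmax(axis=-1)

rng = np.random.default_rng(1)
n_classes, dim = 3, 16
centers = rng.normal(size=(n_classes, dim)) * 5   # synthetic class centers

# "Support" pixels: a few labeled features per class.
support_labels = np.repeat(np.arange(n_classes), 20)
support_feats = centers[support_labels] + rng.normal(size=(60, dim))
protos = class_prototypes(support_feats, support_labels, n_classes)

# "Query" pixels: unseen features from the same classes.
query_labels = np.repeat(np.arange(n_classes), 10)
query_feats = centers[query_labels] + rng.normal(size=(30, dim))
pred = segment(query_feats, protos)
accuracy = (pred == query_labels).mean()
```

Extending this to multiple prototypes per class would mean keeping several centroids per label and taking the maximum similarity among them, which is the direction of the multi-prototype contribution described above.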

Novel temporal framework for longitudinal breast cancer risk prediction

The third paper—titled "VMRA-MaR: An Asymmetry-Aware Temporal Framework for Longitudinal Breast Cancer Risk Prediction"—introduces a novel framework for breast cancer risk prediction. The paper is authored by Zijun Sun and VI researchers Solveig Thrun and Michael Kampffmeyer.

Zijun Sun is a Master's Student at the University of Bologna. Photo: Private

It integrates a Vision Mamba RNN (VMRNN) to model dynamic breast tissue changes over time and an asymmetry module to detect bilateral differences, aiming to improve early cancer recognition and personalize screening strategies.

The results show that the framework significantly improves breast cancer risk prediction, particularly for challenging high-density breast cases and at extended follow-up intervals.

"The model's ability to effectively capture intricate tissue dynamics through its recurrent temporal and asymmetry-aware approach contributes to its superior performance compared to previous state-of-the-art methods," Sun explains.
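A toy numpy sketch can convey what a bilateral asymmetry signal measures. This is a deliberately simplified stand-in, not the paper's asymmetry module: the views are random arrays, "mirroring" is a horizontal flip, and the score is a mean absolute pixel difference rather than a learned representation.

```python
import numpy as np

def asymmetry_score(left, right):
    # Mirror the right view horizontally so both breasts share an
    # orientation, then take the mean absolute bilateral difference.
    return np.abs(left - np.fliplr(right)).mean()

rng = np.random.default_rng(2)
left = rng.random((64, 64))
symmetric_right = np.fliplr(left)          # perfectly mirrored counterpart

lesion_right = symmetric_right.copy()
lesion_right[20:30, 20:30] += 1.0          # unilateral density change

baseline = asymmetry_score(left, symmetric_right)  # no asymmetry
score = asymmetry_score(left, lesion_right)        # elevated by the lesion
```

A rising score across screening rounds is the kind of bilateral cue the framework's asymmetry-aware component is designed to pick up alongside the temporal features.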
