Three Visual Intelligence-authored papers accepted for leading AI conference on medical imaging

Visual Intelligence will be well represented at MICCAI 2025—one of the leading AI conferences on medical imaging and computer assisted intervention—with three recently accepted research papers.

By Petter Bjørklund, Communications Advisor at SFI Visual Intelligence

The annual International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) attracts the world's leading biomedical scientists, engineers, and clinicians from a wide range of disciplines associated with medical imaging and computer assisted intervention. MICCAI has a historical acceptance rate of 30 per cent.

Centre Director Robert Jenssen is delighted by Visual Intelligence's representation at this year's MICCAI conference, which will be held in Daejeon, South Korea, from September 23rd to 27th.

"This research demonstrates Visual Intelligence's commitment to advancing medical imaging through the development of novel deep learning methods that deliver explainable and reliable results. I congratulate our researchers whose papers were accepted to the highly competitive MICCAI conference," Jenssen says.

Explicit alignment strategies in longitudinal mammography for breast cancer risk prediction

PhD Fellow Solveig Thrun is one of the VI researchers whose papers were accepted for MICCAI 2025. Her paper—titled "Reconsidering Explicit Longitudinal Mammography Alignment for Enhanced Breast Cancer Risk Prediction"—investigates explicit alignment strategies in longitudinal mammography for breast cancer risk prediction. It focuses on whether alignment should occur in the input image space or the representation space.

PhD Fellow Solveig Thrun. Photo: Private

As recent deep learning approaches increasingly leverage the temporal aspect of screening to track breast tissue changes over time, spatial alignment across time points has become essential. However, the optimal design for explicit alignment in mammography remains underexplored.

"Our findings provide new insights into alignment choices and their impact on predictive performance, offering practical guidance for future work in this domain," says Thrun.

The results show that jointly optimizing explicit alignment in the representation space alongside risk prediction, as done in current state-of-the-art methods, leads to a trade-off between alignment quality and predictive performance.

"This demonstrates that image-level alignment outperforms representation-level approaches, producing more accurate deformation fields and improving risk prediction accuracy," Thrun explains.

Tied prototype model for few-shot medical image segmentation

The second Visual Intelligence-authored paper—titled "Tied Prototype Model for Few-Shot Medical Image Segmentation"—addresses the challenge of segmenting important anatomical structures in medical images with limited labels. The paper is authored by VI researchers Hyeongji Kim and Michael Kampffmeyer, together with Stine Hansen.

Postdoctoral Fellow Hyeongji Kim. Photo: Petter Bjørklund / SFI Visual Intelligence

Their work introduces the Tied Prototype Model (TPM)—a novel prototype-based approach which addresses key limitations of ADNet, a previously proposed method from Visual Intelligence.

Unlike ADNet, which relies on a single prototype, focuses on binary classification, and uses fixed thresholds, TPM supports multiple prototypes and enables multiclass training and segmentation. These methodological advances offer a promising foundation for future work in medical image segmentation.

"Our theoretical analysis establishes the equivalence between ADNet and a special case of TPM, showing that our TPM is a generalization of ADNet. Our experiments show that each of the three main contributions—multi-prototype extension, multiclass training, and the proposed ideal thresholding strategy—consistently enhanced segmentation accuracy. This highlights TPM's effectiveness and adaptability in few-shot medical image segmentation," Kim says.

Novel temporal framework for longitudinal breast cancer risk prediction

The third paper—titled "VMRA-MaR: An Asymmetry-Aware Temporal Framework for Longitudinal Breast Cancer Risk Prediction"—introduces a novel framework for breast cancer risk prediction. The paper is authored by Zijun Sun and VI researchers Solveig Thrun and Michael Kampffmeyer.

Zijun Sun is a Master's Student at the University of Bologna. Photo: Private

The framework integrates a Vision Mamba RNN (VMRNN) to model dynamic breast tissue changes over time and an asymmetry module to detect bilateral differences, aiming to improve early cancer recognition and personalize screening strategies.

The results show that the framework significantly improves breast cancer risk prediction, particularly for challenging high-density breast cases and at extended follow-up intervals.

"The model's ability to effectively capture intricate tissue dynamics through its recurrent temporal and asymmetry-aware approach contributes to its superior performance compared to previous state-of-the-art methods," Sun explains.

Latest news

NLDL 2026 was a great success!

January 12, 2026

Attracting 280 international AI researchers, the Northern Lights Deep Learning Conference (NLDL) 2026 was successfully organized from January 5th to 9th 2026 at UiT The Arctic University of Norway in Tromsø, Norway.

Fruitful mentoring session with Professor Mihaela van der Schaar

January 11, 2026

Visual Intelligence hosted a special NLDL 2026 mentoring session for young researchers with Professor Mihaela van der Schaar, a globally renowned AI researcher and expert.

uit.no: Top researchers in artificial intelligence gather in Tromsø

January 5, 2026

270 AI experts from near and far are meeting at UiT to share the latest research news. One of the aims is to strengthen international ties and collaboration across the global AI community (Norwegian news article on uit.no)

unn.no: Research millions for a patient-oriented AI initiative

December 23, 2025

With fresh millions in the bank, Senter for pasientnær kunstig intelligens (SPKI) at UNN will contribute to a large national AI project over the next four years (News article on unn.no)

Happy Holidays from SFI Visual Intelligence!

December 18, 2025

The Visual Intelligence Management Team, Robert Jenssen, Line Eikvil, Anne Solberg, and Inger Solheim, wishes everyone a happy holiday season and new year!

Anders Waldeland receives the Digital Trailblazer Award 2025

December 4, 2025

Congratulations to Senior Research Scientist Anders Waldeland, who was awarded the Digital Trailblazer Award 2025 at the Dig X Subsurface conference in Oslo, Norway.

sciencenorway.no: AI can help detect heart diseases more quickly

December 3, 2025

Researchers have developed an artificial intelligence that can automatically measure the heart's structure – both quickly and accurately (Popular science article on sciencenorway.no)

State Secretary Marianne Wilhelmsen visits SFI Visual Intelligence and UiT

November 26, 2025

State Secretary Marianne Wilhelmsen visited UiT The Arctic University of Norway to learn more about SFI Visual Intelligence and UiT's AI initiatives in education and research.

TV2.no: Says Elon Musk is smarter than Leonardo da Vinci

November 25, 2025

The AI chatbot Grok has told users that the world's richest man is both smarter and fitter than anyone else in the world, including basketball star LeBron James and Leonardo da Vinci (Norwegian news article on tv2.no)

Successful science communication workshop at Skibotn

November 21, 2025

The Visual Intelligence Graduate School gathered our early career researchers for a three-day science communication workshop at Skibotn field station outside of Tromsø, Norway.

uit.no: UiT and Aker Nscale join forces on a major artificial intelligence initiative

November 19, 2025

On Wednesday, Aker Nscale and UiT The Arctic University of Norway entered into a ten-year collaboration agreement to develop and strengthen the artificial intelligence expertise communities in Narvik and Northern Norway. Aker Nscale guarantees 100 million kroner over the agreement period (news story on uit.no)

Two fruitful days at The Alan Turing Institute's headquarters

November 17, 2025

Centre Director Robert Jenssen and PhD Candidate Lars Uebbing had two fruitful days together with researchers at The Alan Turing Institute's headquarters in London.

Anders Waldeland nominated for the Digital Trailblazer 2025 Award

November 12, 2025

Senior Research Scientist Anders Waldeland has been nominated for the Digital Trailblazer 2025 Award. The winner will be announced at the Dig X Subsurface conference in Oslo, Norway in December.