Image: ICML

Four Visual Intelligence-authored papers accepted for prestigious machine learning conference

Visual Intelligence will be well represented at ICML 2025, one of the leading international academic conferences in machine learning, with four newly accepted research papers.

By Petter Bjørklund, Communications Advisor at SFI Visual Intelligence

The International Conference on Machine Learning (ICML) attracts 7,000 researchers from around the globe to share high-impact research in machine learning and artificial intelligence (AI). This year's conference has an acceptance rate of around 26.9 per cent.

Centre Director Robert Jenssen is thrilled by how well represented Visual Intelligence will be at this year's ICML, which will be held from July 13th to 19th.

“It is very important for us as a research centre in deep learning to contribute to the scientific progress of the field, laying the foundation for innovation and real-world impact. I am very proud of our researchers’ cutting-edge work to better extract information in neural networks, to better compress information, to enable interpretability and to leverage multimodality,” Jenssen says.

Learning representations without labels using contrastive learning

Professor and Principal Investigator (PI) Adín Ramírez Rivera is one of six Visual Intelligence researchers with papers accepted for ICML 2025. His paper, titled "Self-Organizing Visual Prototypes for Non-Parametric Representation Learning", is about learning representations without labels using contrastive learning.

Adín Ramírez Rivera. Photo: UiO.

In contrast to existing approaches, Ramírez Rivera and co-authors take advantage of the relationships between data seen during training, improving the comparisons in the contrastive setup by using relative information rather than the absolute information that current methods rely on.

"Our results show that using this relative information helps to learn better representations as evidenced by several tasks that we performed on image data. We outperform several existing methods and show that this proposal not only scales but also outperforms current learning setups," Ramírez Rivera says.

Novel general layer-wise quantization framework

The second VI paper, titled "Layer-wise Quantization for Quantized Optimistic Dual Averaging", is authored by Associate Professor and PI Ali Ramezani-Kebrya. He proposes a general layer-wise quantization framework that accounts for the statistical heterogeneity across layers, together with an efficient solver for distributed variational inequalities.

Ali Ramezani-Kebrya. Photo: Private.

Ramezani-Kebrya and co-authors establish tight variance and code-length bounds for layer-wise quantization, which generalize the bounds for global quantization frameworks.

"We empirically achieve up to a 150% speed-up over the baselines in end-to-end training time for training Wasserstein GAN on 12+ GPUs", Ramezani-Kebrya explains.

Framework for visually self-explainable document question answering

The third paper, titled "DocVXQA: Context-Aware Visual Explanations for Document Question Answering", proposes DocVXQA: a framework for visually self-explainable document question answering that produces accurate answers while generating visual heatmaps for interpretability.

The paper is authored by Postdoctoral Researcher and PI Changkyu Choi, together with collaborators from Spain, France, and Norway.

Changkyu Choi. Photo: UiT.

"By encoding explainability principles as learning criteria, DocVXQA balances performance and trust through context-aware explanations," Choi says.

Novel multimodal variational autoencoder

Rogelio Andrade Mancisidor, a former PhD candidate in the UiT Machine Learning Group, is the main author of the fourth accepted VI paper, titled "Aggregation of Dependent Expert Distributions in Multimodal Variational Autoencoders". He is now an Associate Professor at BI and a Visual Intelligence collaborator.

His paper introduces the Consensus of Dependent Experts (CoDE) method, which models the dependence between single-modality distributions through their estimation errors and generalizes the Product of Experts (PoE) method. The paper is co-authored by Associate Professors Michael Kampffmeyer and Shujian Yu, as well as Centre Director Robert Jenssen.

Rogelio Andrade Mancisidor. Photo: BI.

Multimodal Variational Autoencoders (VAEs) use the Product of Experts (PoE) or Mixture of Experts (MoE) methods to estimate consensus distributions by aggregating single-modality distributions and assuming independence for simplicity, which—according to Mancisidor—is an overly optimistic assumption. The CoDE method was proposed as a way of overcoming this limitation.
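
For intuition about what the independence assumption buys and costs: for Gaussian experts, the standard PoE consensus has a closed form, a precision-weighted average of the experts' means. One classical way to account for correlated estimation errors is generalized least squares fusion, sketched below for contrast; this is illustrative only and not claimed to be CoDE's exact aggregation rule:

```python
import numpy as np

def poe_gaussian(mus, sigmas):
    """Product of independent Gaussian experts N(mu_i, sigma_i^2): the
    standard PoE closed form is a precision-weighted average."""
    prec = 1.0 / np.asarray(sigmas) ** 2
    var = 1.0 / prec.sum()
    return var * (prec * np.asarray(mus)).sum(), np.sqrt(var)

def dependent_experts_gaussian(mus, error_cov):
    """Classical fusion of experts with *correlated* estimation errors
    (generalized least squares); reduces to PoE when error_cov is diagonal.
    Illustrative only, not claimed to be CoDE's exact aggregation rule."""
    mus = np.asarray(mus)
    ones = np.ones_like(mus)
    w = np.linalg.solve(error_cov, ones)    # C^{-1} 1
    var = 1.0 / (ones @ w)
    return var * (w @ mus), np.sqrt(var)

# Two modality-specific experts for a one-dimensional latent.
mus, sigmas = [0.0, 2.0], [1.0, 1.0]
print(poe_gaussian(mus, sigmas))                  # assumes independent errors
cov = np.array([[1.0, 0.8], [0.8, 1.0]])          # strongly correlated errors
print(dependent_experts_gaussian(mus, cov))       # consensus is less confident
```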

"We use CoDE to develop CoDE-VAE—a novel multimodal VAE that learns the contribution of each consensus distribution to the optimization. We argue that consensus distributions conditioned on more modalities or with relatively more information should contribute extra to the optimization," Mancisidor explains.

He says CoDE-VAE shows better performance in balancing the trade-off between generative coherence and generative quality, as well as producing more precise log-likelihood estimates.

"In addition, our experiments support the hypothesis that data modalities are correlated, as they are simply data modalities on the same underlying object," Mancisidor adds.
