
Four Visual Intelligence-authored papers accepted for prestigious machine learning conference

Visual Intelligence will be well represented at ICML 2025—one of the leading international academic conferences in machine learning—with four newly accepted research papers.


By Petter Bjørklund, Communications Advisor at SFI Visual Intelligence

The International Conference on Machine Learning (ICML) attracts 7,000 researchers from around the globe to share high-impact research in machine learning and artificial intelligence (AI). This year's conference has an acceptance rate of around 26.9 per cent.

Centre Director Robert Jenssen is thrilled by how well represented Visual Intelligence will be at this year's ICML, which will be held from July 13th to 19th.

“It is very important for us as a research centre in deep learning to contribute to the scientific progress of the field, laying the foundation for innovation and real-world impact. I am very proud of our researchers’ cutting-edge work to better extract information in neural networks, to better compress information, to enable interpretability and to leverage multimodality,” Jenssen says.

Learning representations without labels using contrastive learning

Professor and Principal Investigator (PI) Adín Ramírez Rivera is one of six Visual Intelligence researchers who got their paper accepted for ICML 2025. His paper—titled "Self-Organizing Visual Prototypes for Non-Parametric Representation Learning"—is about learning representations without labels using contrastive learning.

Adín Ramírez Rivera. Photo: UiO.

In contrast to existing approaches, Ramírez Rivera and co-authors take advantage of the relationships between data points seen during training, improving the comparisons in the contrastive setup by using relative information instead of the absolute information that current methods rely on.

"Our results show that using this relative information helps to learn better representations as evidenced by several tasks that we performed on image data. We outperform several existing methods and show that this proposal not only scales but also outperforms current learning setups," Ramírez Rivera says.

Novel general layer-wise quantization framework

The second VI paper—titled "Layer-wise Quantization for Quantized Optimistic Dual Averaging"—is authored by Associate Professor and PI Ali Ramezani-Kebrya. He proposes a general layer-wise quantization framework that takes into account the statistical heterogeneity across layers and an efficient solver for distributed variational inequalities.

Ali Ramezani-Kebrya. Photo: Private

Ramezani-Kebrya and co-authors establish tight variance and code-length bounds for layer-wise quantization, which generalize the bounds for global quantization frameworks.

"We empirically achieve up to a 150% speed-up over the baselines in end-to-end training time for training Wasserstein GAN on 12+ GPUs", Ramezani-Kebrya explains.

Framework for visually self-explainable document question answering

The third paper, titled "DocVXQA: Context-Aware Visual Explanations for Document Question Answering", proposes DocVXQA: a framework for visually self-explainable document question answering that produces accurate answers while generating visual heatmaps for interpretability.

The paper is authored by Postdoctoral Researcher and PI Changkyu Choi and other collaborators from Spain, France and Norway.

Changkyu Choi. Photo: UiT

"By encoding explainability principles as learning criteria, DocVXQA balances performance and trust through context-aware explanations," Choi says.

Novel multimodal variational autoencoder

Rogelio Andrade Mancisidor, a former PhD Candidate at the UiT Machine Learning Group, is the main author of the fourth accepted VI paper—titled "Aggregation of Dependent Expert Distributions in Multimodal Variational Autoencoders". He is now an Associate Professor at BI and a Visual Intelligence collaborator.

His paper introduces the Consensus of Dependent Experts (CoDE) method, which models the dependence between single-modality distributions through their error of estimation and generalizes Product of Experts (PoE). The paper is co-authored by associate professors Michael Kampffmeyer and Shujian Yu, as well as Centre Director Robert Jenssen.

Rogelio Andrade Mancisidor. Photo: BI.

Multimodal Variational Autoencoders (VAEs) use the Product of Experts (PoE) or Mixture of Experts (MoE) methods to estimate consensus distributions by aggregating single-modality distributions and assuming independence for simplicity, which—according to Mancisidor—is an overly optimistic assumption. The CoDE method was proposed as a way of overcoming this limitation.
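For Gaussian experts, the standard PoE consensus is a precision-weighted average, and the independence assumption enters exactly there: each expert's uncertainty is treated as if its estimation errors were uncorrelated with the others'. Below is a minimal sketch of the classical PoE step that CoDE generalizes, not of CoDE itself.

```python
import numpy as np

def poe_gaussian(means, variances):
    """Product of independent 1-D Gaussian experts.
    Precisions (inverse variances) add; the mean is precision-weighted."""
    precisions = 1.0 / np.asarray(variances, dtype=float)
    var = 1.0 / precisions.sum()
    mean = var * (precisions * np.asarray(means, dtype=float)).sum()
    return mean, var

# Two "modalities" estimating the same latent: the confident expert dominates
mean, var = poe_gaussian(means=[0.0, 2.0], variances=[1.0, 0.25])
print(round(mean, 2), round(var, 2))  # → 1.6 0.2
```

Because precisions simply add, PoE always becomes more confident as modalities are combined; if the experts' errors are in fact correlated, that confidence is overstated, which is the limitation CoDE addresses by modelling the dependence between the single-modality distributions.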

"We use CoDE to develop CoDE-VAE—a novel multimodal VAE that learns the contribution of each consensus distribution to the optimization. We argue that consensus distributions conditioned on more modalities or with relatively more information should contribute extra to the optimization," Mancisidor explains.

He says CoDE-VAE shows better performance in terms of balancing the trade-off between generative coherence and generative quality—as well as achieving more precise log-likelihood estimates.

"In addition, our experiments support the hypothesis that data modalities are correlated, as they are simply data modalities on the same underlying object," Mancisidor adds.
