Four Visual Intelligence-authored papers accepted for prestigious machine learning conference

Visual Intelligence will be well represented at ICML 2025—one of the leading international academic conferences in machine learning—with four newly accepted research papers.

By Petter Bjørklund, Communications Advisor at SFI Visual Intelligence

The International Conference on Machine Learning (ICML) attracts 7,000 researchers from around the globe to share high-impact research in machine learning and artificial intelligence (AI). This year's conference has an acceptance rate of around 26.9 per cent.

Centre Director Robert Jenssen is thrilled by how well represented Visual Intelligence will be at this year's ICML, which will be held from July 13th to 19th.

“It is very important for us as a research centre in deep learning to contribute to the scientific progress of the field, laying the foundation for innovation and real-world impact. I am very proud of our researchers’ cutting-edge work to better extract information in neural networks, to better compress information, to enable interpretability and to leverage multimodality,” Jenssen says.

Learning representations without labels using contrastive learning

Professor and Principal Investigator (PI) Adín Ramírez Rivera is one of six Visual Intelligence researchers with a paper accepted for ICML 2025. His paper—titled "Self-Organizing Visual Prototypes for Non-Parametric Representation Learning"—is about learning representations without labels using contrastive learning.

Adín Ramírez Rivera. Photo: UiO.

In contrast to existing approaches, Ramírez Rivera and co-authors take advantage of the relationships between data points seen during training and improve the comparisons in the contrastive setup by using relative rather than absolute information, unlike current methods.

"Our results show that using this relative information helps to learn better representations as evidenced by several tasks that we performed on image data. We outperform several existing methods and show that this proposal not only scales but also outperforms current learning setups," Ramírez Rivera says.

Novel general layer-wise quantization framework

The second VI paper—titled "Layer-wise Quantization for Quantized Optimistic Dual Averaging"—is authored by Associate Professor and PI Ali Ramezani-Kebrya. He proposes a general layer-wise quantization framework that takes into account the statistical heterogeneity across layers and an efficient solver for distributed variational inequalities.

Ali Ramezani-Kebrya. Photo: Private

Ramezani-Kebrya and co-authors establish tight variance and code-length bounds for layer-wise quantization, which generalize the bounds for global quantization frameworks.

"We empirically achieve up to a 150% speed-up over the baselines in end-to-end training time for training Wasserstein GAN on 12+ GPUs", Ramezani-Kebrya explains.

Framework for visually self-explainable document question answering

The third paper, titled "DocVXQA: Context-Aware Visual Explanations for Document Question Answering", proposes DocVXQA: a framework for visually self-explainable document question answering that produces accurate answers while generating visual heatmaps for interpretability.

The paper is authored by Postdoctoral Researcher and PI Changkyu Choi and other collaborators from Spain, France and Norway.

Changkyu Choi. Photo: UiT

"By encoding explainability principles as learning criteria, DocVXQA balances performance and trust through context-aware explanations," Choi says.

Novel multimodal variational autoencoder

Rogelio Andrade Mancisidor, a former PhD Candidate at the UiT Machine Learning Group, is the main author of the fourth accepted VI paper—titled "Aggregation of Dependent Expert Distributions in Multimodal Variational Autoencoders". He is now an Associate Professor at BI and a Visual Intelligence collaborator.

His paper introduces the Consensus of Dependent Experts (CoDE) method, which models the dependence between single-modality distributions through their error of estimation and generalizes Product of Experts (PoE). The paper is co-authored by associate professors Michael Kampffmeyer and Shujian Yu, as well as Centre Director Robert Jenssen.

Rogelio Andrade Mancisidor. Photo: BI.

Multimodal Variational Autoencoders (VAEs) use the Product of Experts (PoE) or Mixture of Experts (MoE) methods to estimate consensus distributions by aggregating single-modality distributions and assuming independence for simplicity, which—according to Mancisidor—is an overly optimistic assumption. The CoDE method was proposed as a way of overcoming this limitation.
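For Gaussian experts, the PoE consensus that CoDE generalizes has a simple closed form under the independence assumption: precisions add, and means combine precision-weighted. A minimal sketch of plain PoE (not the CoDE method; the function name is illustrative):

```python
import numpy as np

def poe_gaussian(mus, variances):
    """Product of Experts for independent Gaussian experts N(mu_i, var_i).

    The consensus is itself Gaussian, with precision equal to the sum of
    the experts' precisions and a precision-weighted mean.
    """
    mus = np.asarray(mus, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    var = 1.0 / precisions.sum(axis=0)           # consensus variance
    mu = var * (precisions * mus).sum(axis=0)    # precision-weighted mean
    return mu, var

# Two modalities estimating the same latent value: the consensus is
# pulled toward the more confident (lower-variance) expert.
mu, var = poe_gaussian(mus=[0.0, 2.0], variances=[1.0, 0.25])
```

The independence assumption enters through the simple summing of precisions; modeling the dependence between the single-modality distributions, as CoDE does, changes how the experts are combined.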

"We use CoDE to develop CoDE-VAE—a novel multimodal VAE that learns the contribution of each consensus distribution to the optimization. We argue that consensus distributions conditioned on more modalities or with relatively more information should contribute extra to the optimization," Mancisidor explains.

He says CoDE-VAE achieves a better balance in the trade-off between generative coherence and generative quality, as well as more precise log-likelihood estimates.

"In addition, our experiments support the hypothesis that data modalities are correlated, as they are simply data modalities on the same underlying object," Mancisidor adds.
