
Four Visual Intelligence-authored papers accepted for prestigious machine learning conference

Visual Intelligence will be well represented at ICML 2025, one of the leading international academic conferences in machine learning, with four newly accepted research papers.


By Petter Bjørklund, Communications Advisor at SFI Visual Intelligence

The International Conference on Machine Learning (ICML) attracts 7,000 researchers from around the globe to share high-impact research in machine learning and artificial intelligence (AI). This year's conference has an acceptance rate of around 26.9 per cent.

Centre Director Robert Jenssen is thrilled by how well represented Visual Intelligence will be at this year's ICML, which will be held from July 13th to 19th.

“It is very important for us as a research centre in deep learning to contribute to the scientific progress of the field, laying the foundation for innovation and real-world impact. I am very proud of our researchers’ cutting-edge work to better extract information in neural networks, to better compress information, to enable interpretability and to leverage multimodality,” Jenssen says.

Learning representations without labels using contrastive learning

Professor and Principal Investigator (PI) Adín Ramírez Rivera is one of six Visual Intelligence researchers with papers accepted for ICML 2025. His paper, titled "Self-Organizing Visual Prototypes for Non-Parametric Representation Learning", is about learning representations without labels using contrastive learning.

Adín Ramírez Rivera. Photo: UiO.

In contrast to existing approaches, Ramírez Rivera and co-authors take advantage of the relationships between data seen during training, improving the comparisons in the contrastive setup by using relative information rather than the absolute information that current methods rely on.

"Our results show that using this relative information helps to learn better representations as evidenced by several tasks that we performed on image data. We outperform several existing methods and show that this proposal not only scales but also outperforms current learning setups," Ramírez Rivera says.

Novel general layer-wise quantization framework

The second VI paper—titled "Layer-wise Quantization for Quantized Optimistic Dual Averaging"—is authored by Associate Professor and PI Ali Ramezani-Kebrya. He proposes a general layer-wise quantization framework that takes into account the statistical heterogeneity across layers and an efficient solver for distributed variational inequalities.

Ali Ramezani-Kebrya. Photo: Private

Ramezani-Kebrya and co-authors establish tight variance and code-length bounds for layer-wise quantization, which generalize the bounds for global quantization frameworks.

"We empirically achieve up to a 150% speed-up over the baselines in end-to-end training time for training Wasserstein GAN on 12+ GPUs," Ramezani-Kebrya explains.

Framework for visually self-explainable document question answering

The third paper, titled "DocVXQA: Context-Aware Visual Explanations for Document Question Answering", proposes a framework for visually self-explainable document question answering that produces accurate answers while generating visual heatmaps for interpretability.

The paper is authored by Postdoctoral Researcher and PI Changkyu Choi together with collaborators from Spain, France and Norway.

Changkyu Choi. Photo: UiT

"By encoding explainability principles as learning criteria, DocVXQA balances performance and trust through context-aware explanations," Choi says.

Novel multimodal variational autoencoder

Rogelio Andrade Mancisidor, a former PhD candidate in the UiT Machine Learning Group, is the main author of the fourth accepted VI paper, titled "Aggregation of Dependent Expert Distributions in Multimodal Variational Autoencoders". He is now an Associate Professor at BI and a Visual Intelligence collaborator.

His paper introduces the Consensus of Dependent Experts (CoDE) method, which models the dependence between single-modality distributions through their estimation errors and generalizes the Product of Experts (PoE) approach. The paper is co-authored by associate professors Michael Kampffmeyer and Shujian Yu, as well as Centre Director Robert Jenssen.

Rogelio Andrade Mancisidor. Photo: BI.

Multimodal Variational Autoencoders (VAEs) typically use the Product of Experts (PoE) or Mixture of Experts (MoE) method to estimate consensus distributions, aggregating single-modality distributions under an independence assumption made for simplicity, which, according to Mancisidor, is overly optimistic. The CoDE method was proposed as a way of overcoming this limitation.
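For Gaussian experts, the PoE consensus that CoDE generalizes has a simple closed form: precisions (inverse variances) add, and the mean is a precision-weighted average, a derivation that assumes the experts are independent. The sketch below shows that standard formula only; CoDE's dependence-aware aggregation is in the paper itself.

```python
import numpy as np

def poe_gaussian(means, variances):
    """Product-of-Experts consensus of 1-D Gaussian experts.
    Precisions (1/variance) add, so more confident experts dominate;
    the closed form assumes the experts are independent."""
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    var = 1.0 / precisions.sum()
    mean = var * (precisions * means).sum()
    return mean, var

# Two modality-specific "experts" about the same latent variable
mean, var = poe_gaussian(means=[0.0, 2.0], variances=[1.0, 1.0])
print(mean, var)  # equally confident experts: mean halfway, variance halved
```

Note that the consensus variance always shrinks as experts are added; if the experts' errors are in fact correlated, as Mancisidor argues, this overstates the confidence of the aggregate.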

"We use CoDE to develop CoDE-VAE, a novel multimodal VAE that learns the contribution of each consensus distribution to the optimization. We argue that consensus distributions conditioned on more modalities, or with relatively more information, should contribute more to the optimization," Mancisidor explains.

He says CoDE-VAE performs better at balancing the trade-off between generative coherence and generative quality, and produces more precise log-likelihood estimates.

"In addition, our experiments support the hypothesis that data modalities are correlated, as they are simply data modalities on the same underlying object," Mancisidor adds.
