Scientific publications

At Visual Intelligence we work across our innovation areas to extract knowledge from large volumes of visual data more efficiently through automatic and intelligent data analysis. Addressing the core research challenges in deep learning (working with limited training data, utilizing context and dependencies, and providing explainability, confidence and uncertainty) is important in all of the innovation areas.

Publications as of January 2021:
June 21, 2021
Ahcène Boubekki, Michael Kampffmeyer, Ulf Brefeld, Robert Jenssen

Joint optimization of an autoencoder for clustering and embedding

The objective function of a class of Gaussian mixture models can be rephrased as the loss function of a one-hidden-layer autoencoder: the clustering module. Integrating the latter into a deep autoencoder yields a model that jointly learns a clustering and an embedding, with state-of-the-art clustering performance.
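A minimal numpy sketch of the idea (function names and the temperature parameter are our illustration, not the paper's code): the hidden layer of a one-hidden-layer autoencoder computes soft cluster assignments, and the output layer reconstructs each point as a convex combination of the centroids.

```python
import numpy as np

def clustering_module(x, centroids, beta=5.0):
    """One-hidden-layer autoencoder acting as a clustering module.

    Encoder: soft assignments via a softmax over negative squared
    distances to the centroids.
    Decoder: reconstruction as a convex combination of the centroids.
    """
    d2 = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)  # (n, k)
    logits = -beta * d2
    a = np.exp(logits - logits.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)   # soft assignments, rows sum to 1
    recon = a @ centroids               # (n, d) reconstruction
    return a, recon
```

Minimizing the reconstruction error over the centroids then resembles fitting the corresponding Gaussian mixture; in the paper this module is attached to the embedding of a deep autoencoder so that clustering and representation learning are optimized jointly.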

June 16, 2021
Qinghui Liu (Brian), Michael Kampffmeyer, Robert Jenssen, Arnt-Børre Salberg

Self-constructing graph neural networks to model long-range pixel dependencies for semantic segmentation of remote sensing images

Capturing global contextual representations in remote sensing images by exploiting long-range pixel–pixel dependencies has been shown to improve segmentation performance. We propose the Self-Constructing Graph (SCG) module that learns a long-range dependency graph directly from the image data.
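The core idea can be sketched in a few lines of numpy (illustrative only; the published SCG module is variational and trained end-to-end inside a graph neural network): node embeddings derived from the CNN feature map induce a dense adjacency matrix via pairwise inner products, so every node can attend to every other node regardless of spatial distance.

```python
import numpy as np

def self_constructing_graph(feats):
    """Build a dense adjacency matrix directly from node features.

    feats: (n, d) array of node embeddings (e.g. pooled CNN features).
    Returns a symmetric, non-negative (n, n) adjacency encoding
    long-range pairwise dependencies.
    """
    z = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    adj = np.maximum(z @ z.T, 0.0)  # ReLU of pairwise inner products
    return adj
```

A graph convolution over this learned adjacency then propagates information between distant pixels, which is the mechanism the abstract refers to as modeling long-range pixel dependencies.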

March 13, 2021
Daniel J. Trosten, Sigurd Løkse, Robert Jenssen, Michael Kampffmeyer

Reconsidering Representation Alignment for Multi-view Clustering

To appear in CVPR 2021.

We identify several drawbacks of current state-of-the-art methods for deep multi-view clustering and present a simple baseline model that avoids them. Expanding the baseline with a contrastive learning component yields a model that outperforms the current state of the art on several benchmark datasets.

February 19, 2021
Shujian Yu, Francesco Alesiani, Xi Yu, Robert Jenssen, Jose Principe

Measuring Dependence with Matrix‐Based Entropy Functional

Published at AAAI-21.

An interpretable and differentiable dependence (or independence) measure that can be used to 1) train deep networks under covariate shift and non-Gaussian noise; 2) implement a deep deterministic information bottleneck; and 3) understand the learning dynamics of CNNs. Code available.
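A minimal numpy sketch of the matrix-based entropy idea underlying such a measure (α = 2, RBF kernel; function names and defaults are our illustration, not the authors' released code): entropies are computed from the eigenvalues of normalized kernel Gram matrices, and a joint entropy is obtained from their Hadamard product, giving a mutual-information-like dependence score without density estimation.

```python
import numpy as np

def gram_rbf(x, sigma=1.0):
    """RBF kernel Gram matrix for a 1-D sample array x."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def renyi_entropy(K, alpha=2.0):
    """Matrix-based Renyi alpha-entropy of a Gram matrix K."""
    A = K / np.trace(K)                 # normalize to unit trace
    lam = np.linalg.eigvalsh(A)
    lam = lam[lam > 1e-12]
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)

def dependence(x, y, alpha=2.0):
    """Mutual-information-like dependence via the Hadamard product."""
    Kx, Ky = gram_rbf(x), gram_rbf(y)
    return (renyi_entropy(Kx, alpha) + renyi_entropy(Ky, alpha)
            - renyi_entropy(Kx * Ky, alpha))
```

Because everything is built from eigenvalues of Gram matrices, the measure is differentiable with respect to the underlying representations, which is what makes it usable as a training objective for deep networks.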
