Context and dependencies

Background

The strength of machine learning methods lies in their ability to learn from data rather than relying on predefined models. For complex data, however, the two worlds must be combined: physical or geometrical models, dependencies, and prior knowledge need to be integrated, and multiple complex image modalities exploited simultaneously.

Challenges

Current deep learning systems for image analysis operate on individual pixel information, capturing dependencies only through the local convolution neighborhood.

This limits the ability to incorporate context and prior knowledge, e.g. about topology or boundaries. The ability to conform to physical models and to the principles governing image data generation and its properties, including temporal dependencies and processes, is likewise limited. To make deep learning based computer vision systems ubiquitous and applicable also to complex, sparsely labelled image data, we need visual intelligence that can easily be adapted to new, non-standard data sources with few labelled training samples (see the sketch below for an illustration of the locality constraint).
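The locality constraint can be made concrete with a small receptive-field calculation: the region of the input that can influence a single output pixel grows only slowly with network depth. The sketch below is a minimal illustration; the layer configuration is hypothetical, not a model from the centre.

```python
def receptive_field(layers):
    """Receptive field of stacked conv layers.

    `layers` is a list of (kernel_size, stride) pairs, one per layer.
    """
    rf, jump = 1, 1  # receptive field size and cumulative stride ("jump")
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
    return rf

# Five 3x3 convolutions with stride 1: each output pixel sees only an
# 11x11 input window; pixels outside it cannot influence the prediction,
# no matter how informative they are about topology or boundaries.
print(receptive_field([(3, 1)] * 5))  # -> 11
```

Without downsampling or attention-style mechanisms, long-range dependencies and global priors thus remain out of reach of purely convolutional pipelines.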

Main objective

To develop new methodology for exploiting context, dependencies, and prior knowledge in deep learning.

Highlighted publications

New Visual Intelligence paper accepted to NeurIPS
September 23, 2022
The ProtoVAE explainability paper by Srishti Gautam and co-authors has been accepted to NeurIPS 2022.
Multi-modal land cover mapping of remote sensing images using pyramid attention and gated fusion networks
August 1, 2022
We present MultiModNet, a novel pyramid attention and gated fusion network for multi-modality land cover mapping in remote sensing.
Using Machine Learning to Quantify Tumor Infiltrating Lymphocytes in Whole Slide Images
June 21, 2022
Developing artificial intelligence methods to help pathologists analyse whole slide images for cancer detection and treatment.