Explainability and reliability in deep learning

Opening the "black box" of deep learning to provide explainable and reliable predictions.

As black-box algorithms such as deep neural networks take over decisions previously entrusted to humans, it becomes increasingly important for these algorithms to explain themselves.

To a large degree, our user partners' applications involve imaging the unseen: the inside of the human body, the sea, and the surface of the Earth seen from space, independent of daylight and weather conditions. The impact of innovative technology depends on users' trust. A key limitation of deep learning models is that there is no generally accepted method for opening the "black box" of a deep network to provide explainable decisions that can be relied on as trustworthy.

There is therefore a need for explainability: models should be able to summarize the reasons for their predictions, both to gain the trust of users and to yield insight into the causes of their decisions.
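As a concrete illustration of what such a summary can look like, below is a minimal sketch of one common explainability technique, gradient-based saliency, in PyTorch. The tiny CNN and random input are placeholder assumptions standing in for a trained model and a real image.

```python
import torch
import torch.nn as nn

# Placeholder model: a tiny CNN standing in for a trained network.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

# Placeholder input: a random single-channel "image".
# requires_grad=True lets us backpropagate the prediction to the pixels.
x = torch.randn(1, 1, 28, 28, requires_grad=True)

scores = model(x)
top_class = scores.argmax(dim=1).item()

# Backpropagate the winning class score to the input.
scores[0, top_class].backward()

# The per-pixel gradient magnitude is a crude saliency map:
# large values mark the pixels that most influenced the prediction.
saliency = x.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([28, 28])
```

Gradient saliency is only one of many candidate techniques, and part of the research challenge is precisely that no single method is generally accepted as trustworthy.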

Related news

Official opening of Visual Intelligence research centre!
January 19, 2021

On January 14, 2021, the official opening of SFI Visual Intelligence will be held at UiT – The Arctic University of Norway. Anne Husebekk, the rector of UiT, will give a speech at the opening ceremony.

Northern Lights Deep Learning Workshop 2021
January 19, 2021

NLDL 2021 will be a digital conference hosted by the UiT Machine Learning Group and Visual Intelligence on January 18–20. The program includes a Mini Deep Learning School on the 18th, followed by a full schedule for the rest of the week.

A new Centre for Research-based Innovation
January 19, 2021

Visual Intelligence will be one of the new SFIs funded by the Research Council of Norway. The centre will run for a period of eight years as a collaboration between businesses and research institutions in Norway.

Visual Intelligence is officially opened!
January 19, 2021

The official opening of SFI Visual Intelligence was successfully held as a digital event today. Together with our partners, we are now ready to commence our research and innovation to tackle some of the major challenges in deep learning and AI.

Related projects

Detection and classification of fish species from acoustic data
December 15, 2020
We collaborate with the Institute of Marine Research (IMR) to develop models and applications that detect and classify fish from echosounder data.
Deep learning and AI in the medical domain
January 19, 2021
Overcoming the challenges of limited training data in the medical domain and laying the groundwork for explainability and reliability.
Opening the black box of AI
January 19, 2021
Deep learning and AI models must become interpretable, explainable, and reliable before they can be deployed in complex domains.