Interpretation of mammography screenings.


Deep learning and AI in the medical domain

In the field of medical imaging, Visual Intelligence aims to develop more efficient tools for diagnosis and decision support for diseases using deep learning technologies.

One of the major obstacles is the limited availability of training data in this area. Visual Intelligence has partnered with GE Vingmed Ultrasound, The Norwegian Cancer Registry and The University Hospital of Northern Norway (UNN), which together provide important medical image data. By developing new deep learning methodology, Visual Intelligence will enable this important data source to be used to its full potential.

Breast cancer seen on a mammography x-ray.

One example of this is our project to detect and handle imperfect image quality in images obtained from breast cancer screening. About 250 000 women participate in the Norwegian breast cancer screening program every year, so a large number of images are taken across the country, at different hospitals and by a large number of radiographers. When the mammograms are acquired, small movements of the breast may reduce image quality in ways that can be hard for radiographers to detect, but that can affect the subsequent interpretation of the mammograms. It is therefore desirable to detect such problems already at acquisition time to ensure high-quality images, and, when imperfections are unavoidable, to be able to reconstruct a corrected image. A key challenge is that the current manual interpretation process does not provide annotations of image quality, which means that training data are scarce.
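As a simplified illustration of the acquisition-time idea, and not the deep learning approach developed in the project, the sketch below flags potentially blurred mammograms using a classical sharpness heuristic, the variance of the Laplacian. The file name and threshold are placeholders.

```python
# Minimal sketch (not the project's method): flag potentially blurred images
# at acquisition time using the variance of the Laplacian as a sharpness score.
import cv2
import numpy as np

def sharpness_score(image: np.ndarray) -> float:
    """Return the variance of the Laplacian; low values suggest blur/motion."""
    gray = image if image.ndim == 2 else cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def flag_low_quality(image: np.ndarray, threshold: float = 100.0) -> bool:
    """Flag an image for re-acquisition if its sharpness falls below threshold."""
    return sharpness_score(image) < threshold

if __name__ == "__main__":
    img = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
    if img is not None and flag_low_quality(img):
        print("Possible motion blur detected - consider re-acquiring the image.")
```

A threshold-based heuristic like this cannot replace a learned quality model, but it illustrates why catching degraded images before interpretation is attractive: the check runs in milliseconds and can prompt an immediate re-take.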

Visual Intelligence aims to unlock the potential of complex image data across different domains. In medicine and health, another very important aspect when moving into diagnosis is the need for estimates of confidence and uncertainty in the predictions, as well as explanations of the predictions that a model provides.
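One common way to attach such uncertainty estimates to a deep model, shown here only as a sketch and not as the models used by the centre, is Monte Carlo dropout: keeping dropout active at prediction time and aggregating several stochastic forward passes. The small classifier and feature dimensions below are purely illustrative.

```python
# Minimal sketch of Monte Carlo dropout for predictive uncertainty.
# The classifier, feature size and sample count are illustrative only.
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    def __init__(self, in_features: int = 128, num_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 64),
            nn.ReLU(),
            nn.Dropout(p=0.5),          # kept stochastic at test time for MC dropout
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 30):
    """Run several stochastic forward passes; return mean class probabilities
    and their standard deviation as a simple uncertainty estimate."""
    model.train()  # keep dropout layers active
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )
    return probs.mean(dim=0), probs.std(dim=0)

if __name__ == "__main__":
    model = SmallClassifier()
    features = torch.randn(4, 128)           # dummy batch of feature vectors
    mean_p, std_p = mc_dropout_predict(model, features)
    print(mean_p, std_p)
```

The standard deviation across passes gives a per-prediction spread that a decision support system could surface to the physician alongside the predicted class.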

Related to explainability in artificial intelligence, researchers from the UiT Machine Learning Group, the host group of Visual Intelligence, have already delivered extensive research on this topic, with applications to images from colonoscopy screenings.

One of the main tasks during a screening is to locate small abnormal growths called polyps, which are known to be possible precursors to colorectal cancer (CRC).

Such screenings are manual procedures performed by physicians and are therefore affected by human factors such as fatigue and experience. A trustworthy decision support system (DSS) could aid physicians during or after the procedure, but it should provide a measure of uncertainty to accompany its predictions, so that physicians can make well-informed decisions, and it should communicate to the user which factors influence a prediction. Without such information, the user cannot determine whether the model is detecting features that are associated with the disease in question or whether it is exploiting artifacts in the data.

Their solution met these requirements and showed that deep models use the shape and edge information of polyps to make their predictions. Moreover, inaccurate predictions showed a higher degree of uncertainty than accurate ones.
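As a rough illustration of how one can inspect which image regions drive a model's prediction, the sketch below computes a simple gradient-based saliency map. This is not the exact method from the cited work; the model and input are stand-ins for a polyp classifier and a colonoscopy frame.

```python
# Minimal sketch of a gradient-based saliency map: how strongly each pixel
# influences the score of the top predicted class. Model and input are placeholders.
import torch
import torchvision.models as models

def saliency_map(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Return |d(score)/d(pixel)| for the top predicted class."""
    model.eval()
    image = image.clone().requires_grad_(True)
    scores = model(image)                       # shape: (1, num_classes)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()
    return image.grad.abs().max(dim=1).values   # collapse colour channels

if __name__ == "__main__":
    model = models.resnet18(weights=None)       # untrained stand-in model
    dummy = torch.randn(1, 3, 224, 224)         # stand-in for a colonoscopy frame
    sal = saliency_map(model, dummy)
    print(sal.shape)                            # torch.Size([1, 224, 224])
```

Overlaying such a map on the input image makes it possible to check whether the highlighted regions coincide with the polyp's shape and edges or with unrelated artifacts.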

Read the full article about explainability in CRC decision support systems.


Further reading

Detection of sea mammals from aerial imagery
December 18, 2020
Better solutions are needed to estimate the populations of sea mammals, such as breeding seals, from aerial images of the sea ice.
Opening the black box of AI
January 19, 2021
Deep learning and AI models must become interpretable, explainable and reliable before they can be utilized in complex domains.
Detection and classification of fish species from acoustic data
March 1, 2021
We collaborate with the Institute of Marine Research (IMR) to develop models and applications to detect and classify fish from echosounders.