Blog

Deep learning and AI in the medical domain

March 7, 2022

One of the major obstacles for deep learning in the medical domain is the availability of training data. Visual Intelligence has partnered with GE Vingmed Ultrasound, the Norwegian Cancer Registry and the University Hospital of Northern Norway (UNN), which together will provide important medical image data. By developing new deep learning methodology, Visual Intelligence will enable this important data source to be used to its full potential.

Breast cancer seen on a mammography X-ray.

One example of this is our project to detect and handle imperfect image quality in images obtained from breast cancer screening. About 250 000 women participate in the Norwegian breast cancer screening program every year, so a large number of images are acquired across the country, at different hospitals and by a large number of radiographers. When the mammograms are acquired, small movements of the breast may reduce image quality in ways that are hard for radiographers to detect but that can affect the subsequent interpretation of the mammograms. It is therefore desirable to detect such degradation already at acquisition time to ensure high-quality images, and, if imperfections are unavoidable, to be able to reconstruct a high-quality image. A key challenge is that the current manual interpretation process does not provide annotations of image quality, which means that training data are scarce.
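When quality labels are scarce, one common workaround in settings like this (a generic sketch, not necessarily the project's actual method) is to generate training data synthetically: take images judged to be of good quality and degrade them with simulated motion blur, turning quality assessment into an ordinary supervised classification problem.

```python
import numpy as np

def simulate_motion_blur(image, shift_px=3, n_steps=5):
    """Crudely approximate small breast motion by averaging copies of a
    clean image shifted along one axis (a linear motion blur)."""
    blurred = np.zeros_like(image, dtype=np.float64)
    for i in range(n_steps):
        offset = round(i * shift_px / (n_steps - 1))
        blurred += np.roll(image, offset, axis=1)
    return blurred / n_steps

def make_quality_dataset(clean_images):
    """Label clean images 1 ("good") and synthetically blurred copies 0
    ("degraded") to train an image-quality classifier."""
    X, y = [], []
    for img in clean_images:
        X.append(np.asarray(img, dtype=np.float64))
        y.append(1)
        X.append(simulate_motion_blur(img))
        y.append(0)
    return np.stack(X), np.array(y)

# Toy usage with random stand-ins for mammograms
rng = np.random.default_rng(0)
clean = [rng.random((64, 64)) for _ in range(4)]
X, y = make_quality_dataset(clean)
print(X.shape, y)  # (8, 64, 64) [1 0 1 0 1 0 1 0]
```

A classifier trained on such pairs could then flag suspect acquisitions at scan time; the open question with this kind of approach is how well simulated blur matches the degradation seen in practice.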

Visual Intelligence aims to unlock the potential of complex image data across different domains. In the domain of medicine and health, another very important aspect when moving into diagnosis is the need for estimates of confidence and uncertainty in a model's predictions, as well as explanations of the predictions the model provides.
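One widely used technique for obtaining such uncertainty estimates, and the one employed in the work cited below, is Monte Carlo dropout: dropout is kept active at test time, the model is run several times on the same input, and the spread of the predictions serves as an uncertainty measure. A minimal sketch (PyTorch, with a toy classifier for brevity; for segmentation the same idea applies per pixel):

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Toy classifier with a dropout layer, so MC-dropout sampling works."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128), nn.ReLU(),
            nn.Dropout(p=0.5),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=30):
    """Run the model n_samples times with dropout active; the mean softmax
    is the prediction, its standard deviation an uncertainty estimate."""
    model.train()  # keeps dropout stochastic at inference time
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)

model = SmallNet()
x = torch.randn(4, 1, 28, 28)              # dummy batch of images
mean_prob, uncertainty = mc_dropout_predict(model, x)
print(mean_prob.shape, uncertainty.shape)  # torch.Size([4, 2]) twice
```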

Related to explainability in artificial intelligence, researchers from the UiT Machine Learning Group, the host group of Visual Intelligence, have already delivered extensive research on this topic, with applications to images from colonoscopy screenings.

One of the main tasks during a screening is to locate small abnormal growths called polyps, which are known to be possible precursors to colorectal cancer (CRC).

Such screenings are manual procedures performed by physicians and are therefore affected by human factors such as fatigue and experience. A trustworthy decision support system (DSS) could aid physicians during or after the procedure, but it should provide a measure of uncertainty to accompany each prediction, so that physicians can make well-informed decisions, and it should communicate to the user which factors influence a prediction. Without such information, the user cannot determine whether the model is detecting features that are associated with the disease in question or whether it is exploiting artifacts in the data.

Their solution met these requirements and showed that deep models use the shape and edge information of polyps to make their predictions. Moreover, inaccurate predictions showed a higher degree of uncertainty than precise ones.
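The explanation side of such systems rests on gradient-based saliency: the gradient of the prediction with respect to the input pixels highlights which regions, such as polyp shapes and edges, drive the output. Below is a minimal sketch of plain input-gradient saliency; the guided backpropagation used in the paper below refines this by zeroing negative gradients at every ReLU during the backward pass.

```python
import torch
import torch.nn as nn

def saliency_map(model, x, target_class):
    """Gradient of the target-class score w.r.t. the input pixels: large
    values mark pixels that most influence the prediction."""
    x = x.clone().requires_grad_(True)
    model(x)[:, target_class].sum().backward()
    return x.grad.abs()

# Toy model and batch, just to show the call pattern
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))
x = torch.randn(4, 1, 28, 28)
print(saliency_map(model, x, 1).shape)  # torch.Size([4, 1, 28, 28])
```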

Read the full article about explainability in CRC decision support systems.

Publication

Uncertainty and interpretability in convolutional neural networks for semantic segmentation of colorectal polyps

November 14, 2019

Kristoffer Wickstrøm, Michael Kampffmeyer, Robert Jenssen

Paper abstract

Colorectal polyps are known to be potential precursors to colorectal cancer, which is one of the leading causes of cancer-related deaths on a global scale. Early detection and prevention of colorectal cancer is primarily enabled through manual screenings, where the intestines of a patient are visually examined. Such a procedure can be challenging and exhausting for the person performing the screening. This has resulted in numerous studies on designing automatic systems aimed at supporting physicians during the examination. Recently, such automatic systems have seen a significant improvement as a result of an increasing amount of publicly available colorectal imagery and advances in deep learning research for image recognition. Specifically, decision support systems based on Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance on both detection and segmentation of colorectal polyps. However, CNN-based models need to be not only precise in order to be helpful in a medical context; interpretability and uncertainty in predictions must also be well understood. In this paper, we develop and evaluate recent advances in uncertainty estimation and model interpretability in the context of semantic segmentation of polyps from colonoscopy images. Furthermore, we propose a novel method for estimating the uncertainty associated with important features in the input and demonstrate how interpretability and uncertainty can be modeled in decision support systems (DSSs) for semantic segmentation of colorectal polyps. Results indicate that deep models are utilizing the shape and edge information of polyps to make their prediction. Moreover, inaccurate predictions show a higher degree of uncertainty compared to precise predictions.
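As a rough illustration of the abstract's proposal, estimating the uncertainty associated with important input features (the exact formulation is in the paper; this sketch merely combines the two ingredients shown earlier), one can compute a gradient-based saliency map under each Monte Carlo dropout sample and summarize each pixel's importance by its mean and spread:

```python
import torch
import torch.nn as nn

def saliency_with_uncertainty(model, x, target_class, n_samples=20):
    """For each MC-dropout forward pass, compute an input-gradient saliency
    map; report the per-pixel mean (importance) and std (its uncertainty)."""
    model.train()  # keep dropout active so each pass differs
    maps = []
    for _ in range(n_samples):
        xs = x.clone().requires_grad_(True)
        model(xs)[:, target_class].sum().backward()
        maps.append(xs.grad.abs())
    maps = torch.stack(maps)
    return maps.mean(dim=0), maps.std(dim=0)

# Toy model with dropout and a dummy batch
model = nn.Sequential(nn.Flatten(),
                      nn.Linear(16 * 16, 32), nn.ReLU(),
                      nn.Dropout(p=0.5),
                      nn.Linear(32, 2))
x = torch.randn(2, 1, 16, 16)
mean_map, std_map = saliency_with_uncertainty(model, x, target_class=1)
print(mean_map.shape, std_map.shape)  # torch.Size([2, 1, 16, 16]) twice
```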