Many deep learning studies are not designed to provide an unbiased estimate of the system's performance in its intended application. Reports of overoptimistic estimates and opportunities may inflate expectations of what is currently possible, misguide resource allocation, and hamper the progress of the field. In this talk, we will look into how the performance of a deep learning system in an intended application can be estimated more reliably than is currently common practice, even when restricted to retrospective data.
With recent advances in deep learning and a drastic increase in the number of imaging satellites, new levels of automation are both possible and necessary. KSAT is investing significantly in modern MLOps practices to achieve this and intends to use its membership in Visual Intelligence to address the research aspects of this transformation.
Deep learning can bring time savings and increased reproducibility to medical image analysis. However, acquiring training data is challenging due to the time-intensive nature of labeling and high inter-observer variability in annotations. Rather than labeling images, in this work we propose an alternative pipeline in which images are generated from existing high-quality annotations using generative adversarial networks (GANs). Annotations are derived automatically from previously built anatomical models and transformed into realistic synthetic ultrasound images with CycleGAN.
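To make the translation step concrete, the sketch below shows a CycleGAN-style generator update for unpaired annotation-to-ultrasound translation. It is a minimal illustration assuming a PyTorch setup; the tiny networks, image sizes, and loss weights are placeholders rather than the models used in this work.

```python
# Minimal sketch of CycleGAN-style unpaired translation: annotation maps -> ultrasound-like images.
# All network definitions and tensor shapes are illustrative placeholders, not the authors' models.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True))

class TinyGenerator(nn.Module):
    """Toy image-to-image generator (stand-in for a ResNet-based CycleGAN generator)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Toy PatchGAN-style discriminator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                                 nn.Conv2d(32, 1, 4, stride=2, padding=1))
    def forward(self, x):
        return self.net(x)

G_a2u = TinyGenerator()    # annotation -> ultrasound
G_u2a = TinyGenerator()    # ultrasound -> annotation
D_u = TinyDiscriminator()  # judges real vs. synthetic ultrasound
adv_loss, cyc_loss = nn.MSELoss(), nn.L1Loss()
opt_G = torch.optim.Adam(list(G_a2u.parameters()) + list(G_u2a.parameters()), lr=2e-4)

# One generator update on a dummy unpaired batch (1-channel 64x64 images).
annotation = torch.rand(4, 1, 64, 64)   # label maps rendered from anatomical models
ultrasound = torch.rand(4, 1, 64, 64)   # real, unlabeled ultrasound frames

fake_us = G_a2u(annotation)
pred = D_u(fake_us)
loss_adv = adv_loss(pred, torch.ones_like(pred))   # fool the discriminator
loss_cyc = cyc_loss(G_u2a(fake_us), annotation)    # cycle consistency: recover the annotation
loss = loss_adv + 10.0 * loss_cyc
opt_G.zero_grad(); loss.backward(); opt_G.step()
```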
In fisheries acoustics, echo sounding is used to detect fish and other marine objects in the ocean, and is a central tool for stock assessment and for establishing fishing quotas. Fish detection and species classification from echo sounder data are typically done manually. In our work, we automate this process by training a convolutional neural network for semantic segmentation using supervised learning. The talk will describe the data and the CNN approach used for segmentation, as well as issues related to the training data, such as the quality of annotations when used in a machine learning setting.
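As a rough illustration of the supervised segmentation setup, the sketch below trains a small fully convolutional network on echogram patches with pixel-wise labels. The architecture, channel counts, and class definitions are assumptions for illustration, not the network described in the talk.

```python
# Minimal sketch of supervised semantic segmentation on echogram patches.
# Architecture, frequency channels, and class list are assumptions, not the talk's setup.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy fully convolutional network: echo sounder frequency channels -> per-pixel class logits."""
    def __init__(self, in_ch=4, n_classes=3):  # e.g. background / seabed / fish school (assumed classes)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, n_classes, 1),
        )
    def forward(self, x):
        return self.net(x)

model = TinySegNet()
criterion = nn.CrossEntropyLoss()  # pixel-wise classification loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch: echogram patches (batch, frequency channels, range, time) with manual annotation masks.
patches = torch.rand(8, 4, 128, 128)
masks = torch.randint(0, 3, (8, 128, 128))

logits = model(patches)            # (8, 3, 128, 128)
loss = criterion(logits, masks)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```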
Deep learning is the cornerstone of artificial intelligence applications across a wide range of tasks and domains. An important component that is missing from deep learning is explainability, i.e. the ability to explain what influenced a prediction made by a deep learning-based system. Explainable deep learning is an active area of research, with new algorithms being proposed at a rapid pace. This presentation highlights existing methods for explainable deep learning, as well as how to model uncertainty in explainability.
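One way such uncertainty can be expressed, shown here purely as an illustrative sketch, is to compute a gradient saliency map repeatedly with dropout kept active (MC dropout) and summarize the spread of the resulting explanations. The model, input, and number of samples below are placeholders, not a method endorsed by the presentation.

```python
# Minimal sketch: gradient saliency explanations with MC-dropout to quantify their uncertainty.
# Model and data are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Dropout(0.5), nn.Linear(128, 10)
)
x = torch.rand(1, 1, 28, 28, requires_grad=True)  # dummy input image

def saliency(model, x, target_class):
    """Gradient of the target-class score w.r.t. the input pixels."""
    model.zero_grad()
    if x.grad is not None:
        x.grad = None
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.detach().abs().squeeze()

model.train()  # keep dropout active so repeated passes give different explanations (MC dropout)
maps = torch.stack([saliency(model, x, target_class=3) for _ in range(20)])
mean_saliency = maps.mean(dim=0)   # average explanation
std_saliency = maps.std(dim=0)     # pixel-wise uncertainty of the explanation
```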