A pervasive problem in the scientific community is the abuse of test sets: modifying a method after seeing its test results, then retesting and reporting the improved score. This creates a serious issue: when state-of-the-art results are achieved by re-using the test set, they become impossible to beat by methods that evaluate on it properly, i.e., only once.
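A minimal sketch of the evaluation discipline the abstract argues for: the test split is carved out once, all model selection happens on the validation split, and the test set is scored exactly once for the final report. The function name and split fractions here are illustrative assumptions, not from the abstract.

```python
import random

def split_dataset(items, seed=0, val_frac=0.15, test_frac=0.15):
    """Shuffle once and carve out fixed validation/test splits.

    The test split should be evaluated exactly once, after all model
    selection on the validation split is finished.
    """
    rng = random.Random(seed)  # fixed seed: the split is made once and frozen
    items = items[:]
    rng.shuffle(items)
    n = len(items)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]
    return train, val, test

train, val, test = split_dataset(list(range(100)))
# train/val drive all iteration; `test` is scored once for the final report.
```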
A pipeline for automated diagnostic grading is proposed, called TRI-grade. First, a tissue segmentation method is utilized to find the diagnostically relevant urothelium tissue. Then, a parameterized tile extraction method is used to extract tiles from the urothelium regions at three magnification levels (25x, 100x, and 400x). The extracted tiles form the training, validation, and test data used to train and test the diagnostic model.
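The parameterized tile-extraction step can be sketched as follows. This is a minimal illustration, not the TRI-grade implementation: the function names, tile size, stride, and the convention that coordinates are defined at the highest magnification and scaled down are all assumptions.

```python
def tile_grid(region_w, region_h, tile=128, stride=128):
    """Yield top-left tile coordinates covering a region (in pixels)."""
    for y in range(0, region_h - tile + 1, stride):
        for x in range(0, region_w - tile + 1, stride):
            yield x, y

def tiles_at_magnifications(region_w, region_h, mags=(25, 100, 400), base=400):
    """Map one grid of tiles to coordinates at each magnification level.

    Coordinates defined at the `base` magnification are scaled down for
    lower magnifications, so each tile triplet is spatially aligned on
    the same piece of tissue.
    """
    out = {}
    for m in mags:
        scale = m / base
        out[m] = [(int(x * scale), int(y * scale))
                  for x, y in tile_grid(region_w, region_h)]
    return out
```

Aligning the grids this way gives each training example a matched 25x/100x/400x view of the same urothelium location.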
In quantitative reservoir characterization workflows, it is common to incorporate the uncertainty of predictions; such subsurface models should therefore provide calibrated probabilities and the associated uncertainties in their predictions. Whilst machine learning is being utilised or tested in different geoscience application domains, the uncertainty associated with its predictions is often ignored. We introduce and compare different approaches to obtaining probabilistic ML models and present case studies for well-data and seismic-based applications. Overall, we observe that the resulting uncertainties make it possible to consider different scenarios in subsurface modeling, further improve model performance, and enhance the interpretability of the models.
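One standard diagnostic for the calibration the abstract calls for is the Expected Calibration Error (ECE), which compares predicted confidence with observed accuracy in bins. The sketch below is a generic pure-Python illustration, not tied to the speakers' models.

```python
def expected_calibration_error(probs, labels, n_bins=10):
    """Expected Calibration Error for binary predictions.

    probs:  predicted probability of the positive class
    labels: 0/1 ground truth
    Bins predictions by confidence, then averages the per-bin gap
    between mean confidence and empirical accuracy, weighted by bin size.
    """
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    n = len(probs)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        conf = sum(p for p, _ in b) / len(b)   # mean predicted confidence
        acc = sum(y for _, y in b) / len(b)    # empirical positive rate
        ece += (len(b) / n) * abs(acc - conf)
    return ece
```

A perfectly calibrated model has ECE near zero; a large ECE signals that the predicted probabilities cannot be read as scenario likelihoods in subsurface modeling.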
In this talk, we will discuss two different applications that require Unsupervised Domain Adaptation. Firstly, in congenital heart defects, MRI data is scarce compared to other diseases. We developed an approach that completely eliminates the need for labels in the target domain during training to solve a voxel-based task. Secondly, we show how adversarial domain transfer can be used efficiently to improve the realism of minimally invasive surgical training simulators, which has also been posed as a challenge (AdaptOR) at the MICCAI 2021 conference.
Normalizing flows and diffusion models are two classes of deep probabilistic generative models that excel at modeling high-dimensional distributions. In recent years, substantial progress has been made on these models, with the majority of work focusing on images.
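The change-of-variables principle behind normalizing flows can be shown in one dimension. Below is a toy sketch with a single affine bijection and a standard normal base distribution; it is an illustration of the mechanism, not any of the models discussed in the talk.

```python
import math

class AffineFlow:
    """Minimal 1-D normalizing flow: x = a*z + b with z ~ N(0, 1).

    By the change of variables, log p(x) = log N(z; 0, 1) - log|a|,
    where z = (x - b) / a is the inverse map and log|a| is the
    log absolute Jacobian determinant of the forward map.
    """
    def __init__(self, a, b):
        assert a != 0, "the map must be invertible"
        self.a, self.b = a, b

    def log_prob(self, x):
        z = (x - self.b) / self.a
        log_base = -0.5 * (z * z + math.log(2 * math.pi))  # standard normal
        return log_base - math.log(abs(self.a))

    def sample(self, z):
        # push a base sample z ~ N(0, 1) through the forward map
        return self.a * z + self.b
```

Deep flows stack many such invertible maps with learned parameters; the exact log-likelihood stays tractable because the Jacobian terms simply add up.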
The talk focuses on how to adapt the machine learning pipeline to complex real-world situations that involve data with imperfect and weak labels. First, we describe a general approach to such problems before diving into specific applications. Then, we present an overview of techniques for deriving explanations for predictions, which we leverage to introduce strategies for measuring and understanding label quality.
The innovation power of deep learning and computer vision now reaches marine science. This seminar introduces recent achievements of deep learning in marine science, especially in analyzing echo sounder data, a form of sonar data. Changkyu Choi is a PhD student in the UiT Machine Learning Group and SFI Visual Intelligence, working on novel deep learning methods that bridge computer vision and marine science. His work is carried out in close collaboration with the stakeholders of SFI Visual Intelligence, e.g., the Institute of Marine Research (Havforskningsinstituttet) and the Norwegian Computing Center (Norsk Regnesentral).
Interpreting and understanding seismic data is a key process for accurate subsurface analysis in oil and gas exploration. We have recently started putting deep learning neural networks to use in assisting the interpreter to gain efficiency and quality. Digitalization and improved analysis have become an important step for Equinor to achieve success when identifying new prospects, making new discoveries and extending the lifetime of existing fields. The talk will cover seismic data and seismic interpretation (the “why”, “what” and “how”) and our effort to train neural networks to mimic the interpreter's reasoning and understanding of complex data.
Urban maps in Norway are currently updated using manual photo interpretation on stereo aerial imagery. However, there is often a substantial delay after completion of construction work until new buildings, roads, etc. appear in updated versions of the urban maps. Automated pixel-based urban land cover classification from multispectral aerial images of very high resolution has proven difficult, since the same spectral values may occur within several land cover types. Airborne hyperspectral data may provide better discriminative power. However, there is still the problem that the same types of material may exist within different land cover types, such as buildings, roads, parks, gardens, etc. In this seminar Øyvind Trier dives into how these challenges can be addressed using deep learning.
In this talk, we will develop a conceptual approach by combining the model-based method of sparse regularization by shearlets with the data-driven method of deep learning. Our solvers are guided by a microlocal analysis viewpoint to pay particular attention to the singularity structures of the data. Focussing then on the inverse problem of (limited-angle) computed tomography, we will show that our algorithms significantly outperform previous methodologies, including methods entirely based on deep learning. Finally, we will also briefly touch upon the issue of how to interpret such approaches.