Presenter: Elisabeth Wetzer, Bioinformatician at Karolinska Institute (Dept. of Oncology-Pathology) and incoming Associate Professor in the UiT Machine Learning Group.
Abstract: Combining the information of different imaging modalities offers complementary insight into the properties of the imaged specimen. Often these modalities must be captured by different machines, so the resulting images need to be matched and registered in order to map the corresponding signals to each other. This can be a very challenging task due to the varying appearance of the specimen across sensors. We can exploit representation learning techniques to find image-like representations of both modalities that are similar in intensity and features, which allows us to transform the very challenging multimodal registration task into a generally easier, monomodal one. These representations are required to have certain properties important to the downstream task of registration, e.g. being rotationally equivariant, which can be achieved by different approaches. In this talk, I will discuss two approaches to generating these representations, which form an alternative to GAN- or diffusion-model-based image-to-image translation as they require far less training data - an important aspect in biomedical applications.
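To make the idea concrete, here is a minimal sketch of the pipeline the abstract describes: map two modalities into a common image-like representation, then solve the now-monomodal registration. The "learned" representation is stood in for by a simple circular gradient magnitude (which is insensitive to intensity inversion), and the monomodal step uses phase correlation; both are illustrative assumptions, not the speaker's actual method.

```python
import numpy as np

def to_shared_representation(img):
    # Stand-in for a learned representation: circular gradient magnitude.
    # It looks the same for an image and its intensity-inverted counterpart,
    # mimicking a representation shared across modalities.
    img = img.astype(float)
    gx = img - np.roll(img, 1, axis=1)
    gy = img - np.roll(img, 1, axis=0)
    return np.hypot(gx, gy)

def phase_correlation_shift(a, b):
    # Monomodal registration step: estimate the circular translation
    # (dy, dx) such that a ~= np.roll(b, (dy, dx), axis=(0, 1)).
    R = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    R /= np.abs(R) + 1e-9          # whiten to a pure phase signal
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    if dy > h // 2:                # wrap large shifts to negative offsets
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

rng = np.random.default_rng(0)
modality_a = rng.random((64, 64))
# Simulate a second sensor: inverted intensities plus a spatial offset.
modality_b = 1.0 - np.roll(modality_a, (5, -3), axis=(0, 1))

rep_a = to_shared_representation(modality_a)
rep_b = to_shared_representation(modality_b)
shift = phase_correlation_shift(rep_a, rep_b)
print(shift)  # recovers the offset: (-5, 3)
```

Registering the raw intensities directly would fail here because the two "sensors" disagree everywhere; after mapping into the shared representation, a standard monomodal estimator recovers the offset.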
In compliance with GDPR consent requirements, presentations given in a Visual Intelligence context may be recorded with the consent of the speaker. All recordings are edited to remove all faces, names and voices of other participants. Questions and comments from the audience will hence be removed and will not appear in the recording. With the freely given consent of the speaker, the recorded presentation may be posted on the Visual Intelligence YouTube channel.
Visual Intelligence Seminar Series: Thursdays, bi-weekly, in odd-numbered weeks
This seminar is open to members of the consortium. If you want to participate as a guest, please sign up.