The team of PET imaging, data analysis and machine learning experts in front of the 7T integrated small animal PET/MRI scanner, which will be used for data collection in the innovation project. Top left: Samuel Kuttner. Bottom left: Rodrigo Berzaghi. Top right: Kristoffer Knutsen Wickstrøm. Bottom right: Luigi Tommaso Luppino. Photo: UiT

New innovation project funded on deep learning-based AI for quantitative PET imaging

Two research groups are joining forces in a newly funded innovation project to develop deep learning-based AI for quantitative PET imaging.


The problem at hand

Positron emission tomography (PET) imaging plays a vital role in the detection, staging, and treatment response assessment of cancer.

Dynamic PET, in particular, visualizes the time-dependent uptake of an injected radiotracer. By applying a kinetic model, the underlying biological process can be quantified. Kinetic modelling requires knowledge of the time-dependent tracer concentration in arterial blood, the so-called arterial input function (AIF).
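To make this concrete, a standard kinetic model is the one-tissue compartment model, in which the tissue concentration is the AIF convolved with a decaying exponential and scaled by an uptake rate constant. The sketch below is purely illustrative: the rate constants and the gamma-variate AIF shape are assumptions, not values from the project.

```python
import numpy as np

# One-tissue compartment model: C_T(t) = K1 * (C_a ⊗ exp(-k2·t))
# K1, k2 and the gamma-variate AIF below are illustrative values only.
t = np.linspace(0, 60, 601)   # time (minutes), 0.1-min steps
dt = t[1] - t[0]

# Assumed arterial input function: gamma-variate bolus shape
aif = (t ** 2) * np.exp(-t / 1.5)
aif /= aif.max()              # normalise peak to 1

K1, k2 = 0.1, 0.05            # rate constants (1/min), assumed
tissue = K1 * np.convolve(aif, np.exp(-k2 * t))[: t.size] * dt

# The tissue curve peaks later and lower than the input function
print(t[np.argmax(aif)], t[np.argmax(tissue)])
```

Fitting K1 and k2 to a measured tissue curve is only possible when the AIF is known, which is why replacing arterial blood sampling with a predicted AIF matters.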

The gold-standard AIF is obtained by arterial blood sampling, which is invasive, laborious, time-consuming, and potentially painful, with a risk of complications. Existing approximations that aim to replace blood sampling suffer from drawbacks that limit their usability.

Groups joining forces

Over the past years, researchers in the Nuclear Medicine and Radiation Biology research group at UiT The Arctic University of Norway have built significant expertise and state-of-the-art infrastructure for cutting-edge PET research in humans and small animals at the PET Imaging Center at the University Hospital of North Norway (UNN).

These possibilities were realized through The Norwegian Nuclear Medicine Consortium 180°N, funded by the Tromsø Research Foundation.

Head of the PET Imaging Center at UNN, Rune Sundset explains:

- The PET Imaging Center in Tromsø is the northernmost PET imaging center in the world. It is physically connected to both the hospital and the university. This allows us to perform research from basic organic chemistry all the way to research in human beings.

- In the basement we have a cyclotron and equipment for producing radiopharmaceuticals. On the ground floor we have clinical scanners for patient examinations, and on the first floor we have equipment for preclinical studies.

Rune Sundset. Photo: NTNU

Likewise, the UiT Machine Learning Group has over many years built a unique team of highly competent researchers within artificial intelligence (AI) and deep learning for image analysis.

The group was recently awarded national status as a Centre for Research-based Innovation (SFI) by the Research Council of Norway, under the name SFI Visual Intelligence.

Robert Jenssen, head of the SFI Visual Intelligence. Photo: UiT
Michael Kampffmeyer, head of the UiT Machine Learning Group and work package leader in SFI Visual Intelligence. Photo: UiT

- As a machine learning group, our main competence is developing next-generation AI methodology to solve real-world problems of great value to society. Our collaboration with the PET Imaging Center gives unprecedented opportunities and is of strategic importance for us, explains Michael Kampffmeyer.

Jenssen follows up:

- As a Norwegian centre for research-based innovation (senter for forskningsdrevet innovasjon, SFI), Visual Intelligence shall advance deep learning research for value creation by developing innovative solutions that impact people. I am very pleased with the innovation path we are pursuing for quantitative PET imaging, the competence we are building, and the results we are obtaining.

In this favourable environment, researchers from 180°N and SFI Visual Intelligence joined efforts and invented a novel AI-based method that predicts the input function from dynamic PET images without suffering the limitations of existing methods.

This deep learning-based input function (DLIF) has shown promising results for a few tracers in a few animal models and in a human cohort, based on data obtained from collaborators.
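As an illustration of the general idea only: predicting the input function can be framed as learning a mapping from image-derived time-activity curves (TACs) to sampled AIF values. The project's actual method is a deep neural network; the linear least-squares model and the synthetic data below are stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for DLIF: learn a mapping from image-derived
# time-activity curves (TACs) to arterial input function samples.
# A linear least-squares fit replaces the deep network here,
# and all data are synthetic.
n_scans, n_frames = 40, 30
tacs = rng.random((n_scans, n_frames))       # image-derived features
true_map = rng.random((n_frames, n_frames))  # unknown "physiology"
aifs = tacs @ true_map                       # simulated target AIFs

# Fit: find W minimising ||tacs @ W - aifs||^2
W, *_ = np.linalg.lstsq(tacs, aifs, rcond=None)

pred = tacs @ W
err = np.abs(pred - aifs).max()
print(err)  # near zero on this noiseless toy data
```

The toy fit recovers the mapping exactly because the data are noiseless and linear; real dynamic PET data are neither, which is why a deep network and a large training database are needed.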

These achievements were acknowledged at the 180°N Annual Conference held in Tromsø in March 2022, where the abstract presented by Nils Thomas Doherty Midtbø, an MSc student involved in the project, won the 10,000 NOK Best Abstract Award:

- I am honored to be part of a project such as this, in which my thesis has strong ties to cutting-edge research that, moreover, may soon have a clear impact on real-world applications.

Nils Thomas Doherty Midtbø won the 10,000 NOK Best Abstract Award at the 180°N Annual Conference held in Tromsø, March 2022. Photo: Luigi T. Luppino

So far, the method has been developed using the limited data available (several tens of scans without blood samples). Larger amounts of data (several hundreds of scans with blood samples) are still necessary to improve the method further and to validate it across settings and scenarios; however, no existing database meets these requirements.

For this reason, UiT recently granted funding for an innovation project whose goal is to collect such data and, consequently, to develop and eventually commercialize this novel AI product, which will significantly advance tumor imaging with PET.

The innovation project

The innovation project aims to collect a large database of dynamic small-animal PET images with arterial blood sampling. This will make it possible to build commercial AI-based DLIF models that significantly simplify quantitative tumor imaging with dynamic PET by avoiding blood sampling.

With future translational research, the DLIF models could also be adopted for human PET imaging.

The innovation project is a unique opportunity. It requires top expertise in small-animal PET imaging for data collection, as well as in data analysis and machine learning, all available through the 180°N consortium.


Rodrigo Berzaghi, the researcher hired for the project, explains:

- This opportunity means a lot to me, both personally and professionally. As a biologist, I have worked a lot with small animals, but carrying out arterial cannulation in them is a major challenge. Learning this technique will be a great achievement which, together with PET imaging, will give our group great visibility, both nationally and internationally.

Rodrigo Berzaghi working in the small animal lab area in the PET Imaging Center. Photo: UiT

Collecting such an unprecedented database for AIF prediction represents another great added value, as it may attract collaborators keen to perform state-of-the-art research that cannot be conducted elsewhere. Samuel Kuttner, medical physicist and researcher at the PET Imaging Center and the UiT Machine Learning Group/SFI Visual Intelligence, summarises:

- The task of performing simultaneous PET scanning and arterial blood sampling is very challenging, and very few centers in the world are able to perform this. We are actively collaborating with international experts in the field, for instance at the Sherbrooke Molecular Imaging Center, Université de Sherbrooke, Canada, with whom we will exchange methodology and techniques for small-animal arterial blood sampling with simultaneous dynamic PET imaging.

Preparations for a small-animal PET scan. Photo: UiT

Finally, an innovative product based on state-of-the-art AI methodology, developed at SFI Visual Intelligence using the above-mentioned data, is on its way to being commercialized. Luigi Tommaso Luppino and Kristoffer Knutsen Wickstrøm, postdoctoral research fellow and PhD student with the UiT Machine Learning Group/SFI Visual Intelligence, identify several challenges from their point of view:

- The model we aim to develop must not only perform adequately well; it must also provide quantitative measures of its uncertainty and clear interpretations of its behaviour, for example by visualising which parts of the input lead the model to a certain output. These are the keys to gaining the trust of doctors and medical practitioners, the end users of our proposed method. Thus, the development of new methodologies for explainable AI (XAI), a prominent topic in today's AI research, is fundamental.
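One simple way to visualise which parts of an input drive a model's output is occlusion sensitivity: mask a region of the input and measure how much the output drops. The toy model and window size below are assumptions for illustration, not the project's actual XAI method.

```python
import numpy as np

# Toy "model": a black box that scores a 30-frame time-activity
# curve. Here it simply weights the first 10 frames -- an assumed
# stand-in for a trained network.
weights = np.concatenate([np.ones(10), np.zeros(20)])

def model(curve):
    return float(curve @ weights)

def occlusion_sensitivity(curve, window=5):
    """Score drop when each window of the input is zeroed out."""
    base = model(curve)
    drops = []
    for start in range(0, curve.size, window):
        masked = curve.copy()
        masked[start:start + window] = 0.0
        drops.append(base - model(masked))
    return np.array(drops)

curve = np.ones(30)
sens = occlusion_sensitivity(curve)
# Only the early windows change the score for this toy model
print(sens)
```

The attribution correctly highlights the early frames, because masking them is the only change the toy model reacts to; for a real network, the same masking loop runs against the trained predictor.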

Kristoffer Knutsen Wickstrøm, PhD candidate, and Luigi Tommaso Luppino, postdoctoral researcher in the UiT Machine Learning Group/SFI Visual Intelligence. Photo: UiT
Rodrigo Berzaghi and Samuel Kuttner. Photo: UiT.
