
Successful PhD defense by Srishti Gautam

We congratulate Srishti Gautam on successfully defending her PhD thesis and obtaining the PhD degree in Science at UiT The Arctic University of Norway on March 15th, 2024.

By: Petter Bjørklund, Communications advisor, Visual Intelligence.

Gautam is a researcher in the UiT Machine Learning Group and at the Visual Intelligence research centre. Her thesis, "Towards Interpretable, Trustworthy and Reliable AI", focuses on enhancing the interpretability of deep learning through the development of self-explainable models.

The title of Gautam's trial lecture was "Vision-language models: applications, limitations and future directions".

Supervisors:

  • Associate Professor Michael Kampffmeyer, UiT Machine Learning Group, Department of Physics and Technology, UiT (main supervisor)
  • Postdoctoral Researcher Ahcene Boubekki, Physikalisch-Technische Bundesanstalt (co-supervisor)
  • Professor Robert Jenssen, UiT Machine Learning Group, Department of Physics and Technology, UiT (co-supervisor)

Evaluation committee:

  • Professor Georgios Leontidis, Chair of Machine Learning, Interdisciplinary Director of Data and AI, Turing Academic Liaison, Vice-Principals' Office, University of Aberdeen, UK (1st opponent)
  • Professor Lilja Øvrelid, Language Technology Group, Section for Machine Learning, Department of Informatics, University of Oslo, Norway (2nd opponent)
  • Associate Professor Elisabeth Wetzer, Machine Learning Group, Department of Physics and Technology, UiT (internal member and leader of the committee)

Gautam defending her PhD thesis. Photo: Petter Bjørklund.
Gautam with dean, supervisors and thesis evaluation committee. Photo: Petter Bjørklund.

Interview with Srishti Gautam

Could you provide a short summary of your thesis?

In the rapidly evolving field of artificial intelligence (AI), the development of deep learning models has marked a significant milestone, enabling breakthroughs across various applications. However, these advancements have also surfaced critical challenges, notably the models' vulnerability to inheriting biases from their training data. This issue is further compounded by these large models' inherent lack of transparency in their decision-making. Such issues not only undermine the trust in these technologies but also pose a barrier to their widespread adoption.

Recognizing the importance of these concerns, my thesis focuses on enhancing the interpretability of deep learning through the development of self-explainable models. These models aim to shift the paradigm towards more transparent AI systems by integrating explanations directly into their architecture, thereby offering direct insights into their decision-making processes. Further, we address the inadvertent learning of biases in deep learning by putting these self-explainable models into action.

Why have you focused on this particular topic? What is the importance of researching this topic?

I have chosen to focus on this topic because of the critical role AI plays in our lives today and its potential for even greater impact in the future. As AI technologies become increasingly integrated into various sectors—ranging from healthcare and education to finance and security—the need for these systems to be transparent, fair, and trustworthy becomes paramount.

Bias in AI can lead to unfair outcomes, such as discrimination against certain groups, while opacity in AI decision-making processes can prevent users from understanding and trusting the results. By developing self-explainable models that are inherently transparent and designed to detect biases, this research aims to foster a generation of AI systems that are not only high-performing but also equitable and comprehensible. This is crucial for ensuring that AI technologies benefit society as a whole, facilitating their widespread adoption in a responsible and ethical manner.

What methods have you used in your thesis?

Srishti Gautam. Photo: Petter Bjørklund.

In my thesis, I employed a multi-faceted approach to develop and enhance AI models, including:

1. Enhancement and Development of Self-Explainable Models: I designed novel self-explainable models that integrate explanations directly into their architecture. This approach allows the models to provide insights into their decision-making processes inherently, making them more transparent and understandable. Further, I introduced a novel algorithm that improves the explanation quality of existing state-of-the-art self-explainable models, enhancing the clarity and relevance of the explanations they provide and making it easier for users to understand the rationale behind AI decisions. A minimal code sketch of the self-explainable idea is shown after this list.

2. Counteracting Data Artifacts: An important aspect of my research involved identifying and mitigating the learning of artifacts, i.e. spurious correlations that models might pick up from the training data. By focusing on this, the methodology helps reduce the inadvertent perpetuation of biases, ensuring that the models make decisions based on relevant features rather than biased or irrelevant correlations. A sketch of one simple mitigation strategy also follows the list.

3. Fairness in Large Language Models: Given the increasing use of large language models in various applications, my thesis also extends to exploring fairness within these models. This involves analyzing and demonstrating how such models can reinforce social biases, specifically against gender and race, and exploring strategies to mitigate these biases, thereby promoting fairness. A toy probe for surfacing such bias is sketched below.
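
To make the idea of explanations built into the architecture concrete, the sketch below shows a prototype-based classifier, one common design for self-explainable models: the network classifies an input by its similarity to a small set of learned prototypes, so the similarities themselves act as the explanation. This is an illustrative sketch with assumed names and hyperparameters, not the models developed in the thesis.

```python
import torch
import torch.nn as nn

class PrototypeClassifier(nn.Module):
    """Toy self-explainable model: predictions are a linear function of
    similarities to learned prototypes, so every decision comes with a
    built-in "which prototypes fired" explanation."""

    def __init__(self, encoder: nn.Module, n_prototypes: int,
                 latent_dim: int, n_classes: int):
        super().__init__()
        self.encoder = encoder  # any backbone mapping inputs to latent vectors
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, latent_dim))
        self.head = nn.Linear(n_prototypes, n_classes)

    def forward(self, x):
        z = self.encoder(x)                      # (batch, latent_dim)
        dists = torch.cdist(z, self.prototypes)  # distance to each prototype
        sims = torch.exp(-dists)                 # similarity in (0, 1]
        logits = self.head(sims)
        return logits, sims                      # sims double as the explanation

# Example with a trivial linear encoder for 64-dimensional inputs
model = PrototypeClassifier(nn.Linear(64, 16), n_prototypes=10,
                            latent_dim=16, n_classes=3)
logits, sims = model(torch.randn(4, 64))
print(logits.shape, sims.shape)  # torch.Size([4, 3]) torch.Size([4, 10])
```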
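
For the artifact mitigation in item 2, one simple strategy, assuming the artifact's location is annotated with a binary mask, is to penalize the fraction of the model's explanation mass that falls on the artifact region. The function below is a hedged illustration of that idea, not the specific algorithm from the thesis.

```python
import torch

def artifact_penalty(explanation: torch.Tensor,
                     artifact_mask: torch.Tensor) -> torch.Tensor:
    """explanation: (batch, H, W) non-negative relevance map.
    artifact_mask: (batch, H, W) binary mask marking a known artifact.
    Returns the mean fraction of relevance placed on the artifact."""
    on_artifact = (explanation * artifact_mask).sum(dim=(1, 2))
    total = explanation.sum(dim=(1, 2)).clamp_min(1e-8)
    return (on_artifact / total).mean()

# Training would then minimize: task_loss + lam * artifact_penalty(expl, mask)
expl = torch.rand(4, 32, 32)          # stand-in explanation maps
mask = torch.zeros(4, 32, 32)
mask[:, :4, :4] = 1.0                 # artifact in the top-left corner
print(artifact_penalty(expl, mask))   # ~0.016 for uniform relevance
```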
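
For the fairness analysis in item 3, a standard way to surface social bias in a language model (in the spirit of minimal-pair benchmarks such as CrowS-Pairs) is to compare the likelihood the model assigns to two sentences that differ only in a demographic term. The sketch below uses GPT-2 via Hugging Face Transformers purely as an illustration; it is not the evaluation protocol of the thesis.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_logprob(text: str) -> float:
    """Total log-probability the model assigns to a sentence."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # loss = mean NLL over predicted tokens
    return -out.loss.item() * (ids.size(1) - 1)

# Minimal pair differing only in the gendered pronoun: a systematic gap
# across many such pairs is evidence of learned bias.
male = sentence_logprob("The doctor said he would review the results.")
female = sentence_logprob("The doctor said she would review the results.")
print(male - female)  # > 0 means the model prefers the male version
```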

What significance may your results have on society/the general public?

The results of my research have the potential to significantly impact various sectors of society and the general public. By enhancing AI transparency and fairness, consumers and end-users gain access to more reliable and understandable AI-driven services. Marginalized and underrepresented groups stand to benefit from efforts to reduce biases in AI, which aim to ensure equitable applications and prevent the perpetuation of social inequalities. For example, in healthcare, transparent and unbiased AI can lead to more accurate diagnoses and treatments, supporting fair medical decisions for patients. Businesses and organizations using self-explainable AI models can build customer trust and ensure regulatory compliance, fostering ethical practices. Additionally, the development of transparent and fair AI models can provide valuable insights for policymakers and regulators, informing better guidelines and regulations for AI use.

Summary of the thesis

The field of artificial intelligence has recently witnessed remarkable growth, leading to the development of complex deep learning models that perform exceptionally across various domains. However, these developments bring forth critical issues. Deep learning models are vulnerable to inheriting and potentially exacerbating biases present in their training data. Moreover, the complexity of these models leads to a lack of transparency, which can allow biases to go undetected. This can ultimately hinder the adoption of these models due to a lack of trust. It is therefore crucial to foster the creation of artificial intelligence systems that are inherently transparent, trustworthy, and fair. This thesis contributes to this line of research by exploring the interpretability of deep learning through self-explainable models. These models represent a shift towards more transparent systems, offering explanations that are integral to the model's architecture, yielding insights into their decision-making processes. Consequently, this inherent transparency enhances our understanding, thereby providing a mechanism to address the inadvertent learning of biases.
