Associate professor Elisabeth Wetzer says AI tends to treat men and women differently. Photo: Petter Bjørklund/SFI Visual Intelligence.

Researcher: This makes AI perform worse for women

Artificial intelligence may treat men and women differently. How does this happen? AI researcher Elisabeth Wetzer explains the biases surrounding AI technology.

By Petter Bjørklund, Communication Advisor at SFI Visual Intelligence

From health-promoting technologies to personal assistants, there is little doubt that artificial intelligence (AI) can help us in many ways. But does it help everyone equally?

“AI has a tendency to treat men and women differently,” says Elisabeth Wetzer, associate professor at the UiT Machine Learning Group and SFI Visual Intelligence.

She is an AI expert and describes this as a significant challenge with today’s AI technology. Systems that favor job applications by men, grant lower credit limits to women, and recognize fewer female faces are only a handful of examples she mentions.

What makes an AI algorithm “biased”? In other words, what makes it act in ways that favor or discriminate against certain groups, such as men or women?

AI can reinforce biases in data

AI is trained on enormous amounts of data. Chatbots like ChatGPT, DeepSeek, and Elon Musk’s Grok are based on millions of images, videos, and texts from the internet. These “big data” are essential for an AI system to perform a given task.

But data are historical. This means that they can reflect the prejudices and outdated stereotypes of their time, for example pertaining to gender.

“If you look through a set of data from the last decade, you will quickly find groups who have been discriminated against based on their gender, sexuality, or skin color. Since AI is made to find patterns and correlations in data, there is a risk that the systems may pick up and reinforce biases from the dataset,” Wetzer says.

The consequences of this can be significant, especially for marginalized and underrepresented groups.

“Let us say you have a credit score system designed to determine the size of the loan someone should be granted. If the system is based on salary statistics from the past sixty years, it will pick up that there is a significant wage gap between women and men,” she explains.

“The system will then assume that women are less economically responsible than men, and that women are less suited to be granted a loan. This means that it has learned a skewed and incorrect connection between gender and income.”
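A small synthetic sketch can make this mechanism concrete. The example below is not any real credit system but an invented illustration: we generate “historical” loan data with a built-in wage gap and train a simple classifier on it. Even though past approvals depended only on salary, the model learns to use gender as a stand-in for income. All features and numbers are assumptions made up for illustration.

```python
# Hypothetical illustration only: synthetic loan data with a built-in wage gap.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
woman = rng.integers(0, 2, n)              # 1 = woman, 0 = man (synthetic)

# Wage gap baked into the data (salaries in thousands):
# women earn less on average, mirroring historical statistics.
salary = rng.normal(50 - 8 * woman, 10)

# Historical approvals depended on true salary only, never on gender.
approved = (salary + rng.normal(0, 5, n) > 45).astype(int)

# The lender observes only a noisy income estimate, so the model
# has an incentive to use gender to fill in the missing signal.
reported_income = salary + rng.normal(0, 15, n)
X = np.column_stack([woman, reported_income])

model = LogisticRegression(max_iter=1000).fit(X, approved)
print(dict(zip(["woman", "reported_income"], model.coef_[0])))
# The "woman" coefficient comes out negative: the model has learned
# a skewed link between gender and creditworthiness from the data.
```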

Should AI be gender-blind?

If there is a risk of AI misusing gender information to treat people differently, does this mean that AI systems should be developed to be gender-blind?

It depends on what the AI system is designed for, Wetzer responds. In some cases, information about gender can be an important factor for the system to take into account.

“For example, some diseases occur more frequently in women than men. For those cases, you do not want an AI to consciously ignore the person’s gender when detecting such diseases,” Wetzer says.

“If the information is relevant to the AI’s task, it is important that gender is not ignored. But an algorithm should never use this information to determine how suited someone is to be granted a loan,” she adds.

It is not always easy to develop systems that leave gender out of such decisions. AI is adept at detecting gender-related signals that the developers may not even have realized existed in the data. For example, a recruitment tool from Amazon learned to downgrade job applications that mentioned universities associated with women.
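The Amazon case illustrates the proxy problem: removing the gender column is not enough if other features correlate with it. The sketch below is a hypothetical, synthetic illustration of that effect, with an invented “women’s college” flag standing in for any gender-correlated feature.

```python
# Hypothetical illustration only: the gender column is removed, but a
# correlated proxy feature leaks the same information back in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
woman = rng.integers(0, 2, n)

# Invented proxy: strongly correlated with gender, irrelevant to the job.
womens_college = (rng.random(n) < np.where(woman == 1, 0.6, 0.02)).astype(int)

# Biased historical labels: at equal skill, women were hired less often.
skill = rng.normal(0, 1, n)
hired = (skill - 0.8 * woman + rng.normal(0, 0.5, n) > 0).astype(int)

# "Gender-blind" feature set: the model never sees gender itself...
X = np.column_stack([skill, womens_college])
model = LogisticRegression().fit(X, hired)

# ...yet the proxy picks up a negative weight, reproducing the bias.
print(dict(zip(["skill", "womens_college"], model.coef_[0])))
```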

Lack of representation in data

In today’s global society, it is important that all people are represented equally. This also applies to the data on which AI systems are based. Who is, and who is not, represented in the data has a significant impact on which groups the technology performs better or worse for.

“If a specific group of people is underrepresented in the data compared to other groups, the system will perform worse on that particular group,” Wetzer explains.

If AI is trained only on images of male professors, it will assume that the profession is reserved for men. AI developers need to be conscious of the representation in their datasets, especially when developing systems that make decisions affecting people’s health and well-being.

"For example, an AI cancer diagnostics system may have been created in a developed country that can afford to do so. People will assume that it will perform well for everyone, but some marginalized groups may not ever have been part of the training data. The system will most likely not work well on those people."

“Woke” AI

But the pursuit of equal representation in AI models and data can also go too far. Last year, Gemini, Google’s AI-based image generator, was accused of being “woke” when it generated images of German soldiers from 1943 with African and Asian appearances.

“If you ask an AI what Germany was like at a particular point in time, it would be wrong for it to assume that the population was more diverse than it actually was. You can clearly see that it has actively tried to produce a more diverse set of images and ended up with something that does not make much sense,” Wetzer says.

Skewed gender balance

Biases in AI do not stem from training data alone. Only 30 percent of today’s global AI workforce are women, meaning that the systems are often developed by men. This can significantly impact how such systems are built.

“There are a lot of things to consider when developing AI, for example which training data, neural network, and parameters to use. These decisions are made by someone, and today’s workforce is not particularly diverse,” she says.

If AI is developed by only a single group, there is a risk that the system will be based on how that particular group understands, experiences, and interprets the world. This is rarely a conscious choice and usually happens without the developers themselves being aware of it.

“Several studies show that technology is shaped by those who create it. This means that there is a chance that a single group may forget to include others’ perspectives and experiences around gender discrimination and racism.”

A need for female role models

The AI workforce and academia need to reflect the diversity of society, she says. An increased focus on diversity among AI developers and researchers is crucial for developing AI technology that works well for everyone.

“It is absolutely essential to include different perspectives and experiences in the development of these solutions,” Wetzer emphasizes.

She believes that the field will become more diverse. However, several measures are still needed to motivate girls and women to study, research, and develop AI.

“We must continue to spark interest in STEM subjects from an early age and put STEM careers on the map for girls. We also need strong role models who can inspire them to study and work with AI. This means we need to shed more light on female researchers and their contributions to the field.”

AI regulation is necessary

Last year, the AI Act, the world’s first AI law, was passed in the EU. It imposes strict requirements for the responsible development and use of AI in Europe and Norway. The entire legislation is set to be implemented in Norway by 2026.

A lack of such guidelines can reinforce social and economic inequalities between different groups of people, for example along gender lines. Wetzer is positive about the AI Act and sees it as an important step toward developing safer and fairer AI.

“I believe it will provide thorough guidelines on how AI systems should be developed and tested before being deployed, similar to how medications are tested. There are clear guidelines and multiple stages that must be followed before a drug can be used and sold, and the same should apply to AI,” Wetzer says.

“The regulation will encourage developers and researchers to consider how AI systems should be designed according to fundamental ethical principles. It is important that AI systems serve more than just corporate interests,” she concludes.
