Associate professor Elisabeth Wetzer says AI tends to treat men and women differently. Photo: Petter Bjørklund/SFI Visual Intelligence.

Researcher: This makes AI perform worse for women

Artificial intelligence may treat men and women differently. How does this happen? AI researcher Elisabeth Wetzer explains the biases surrounding AI technology.

By Petter Bjørklund, Communication Advisor at SFI Visual Intelligence

From health-promoting technologies to personal assistants: there is little doubt that artificial intelligence (AI) can help us in many different ways. But does it help everyone equally?

“AI has a tendency to treat men and women differently,” says Elisabeth Wetzer, associate professor at the UiT Machine Learning Group and SFI Visual Intelligence.

She is an AI expert and describes this as a significant challenge with today’s AI technology. Systems that favor job applications from men, grant women lower credit limits, and recognize female faces less reliably are just a few of the examples she mentions.

What makes an AI algorithm “biased”, that is, prone to acting in ways that favor or discriminate against groups such as men or women?

AI can reinforce biases in data

AI is trained on enormous amounts of data. Chatbots like ChatGPT, DeepSeek, and Elon Musk’s Grok are built on millions of images, videos, and texts from the internet. Such “big data” are essential for an AI system to perform a given task.

But data are historical. This means they can reflect prejudices and outdated stereotypes from the past, for example pertaining to gender.

“If you look through a set of data from the last decade, you will quickly find groups who have been discriminated against based on their gender, sexuality, or skin color. Since AI is made to find patterns and correlations in data, there is a risk that the systems may pick up and reinforce biases from the dataset,” Wetzer says.

The consequences of this can be significant, especially for marginalized and underrepresented groups.

“Let us say you have a credit score system designed to determine the size of the loan someone should be granted. If the system is based on salary statistics from the past sixty years, it will pick up that there is a significant wage gap between women and men,” she explains.

“The system will then assume that women are less economically responsible than men, and that women are less suited to be granted a loan. This means that it has learned a skewed and incorrect connection between gender and income.”
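To make the mechanism concrete: the sketch below is not from the article, but a minimal illustration with invented numbers. A toy credit model is trained on synthetic salary data containing a wage gap, and it ends up approving women far less often, purely because of the historical correlation between gender and income.

```python
# Toy credit model trained on biased historical data.
# All numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

gender = rng.integers(0, 2, size=n)  # 0 = man, 1 = woman
# Historical incomes embed a wage gap of 10,000 against women.
income = rng.normal(loc=55_000 - 10_000 * gender, scale=8_000)

# Past loan approvals were based purely on income...
approved = (income > 50_000).astype(int)

# ...but a model trained with gender as a feature absorbs the
# gender-income correlation as if it were a real signal.
model = LogisticRegression().fit(gender.reshape(-1, 1), approved)

for g, label in [(0, "men"), (1, "women")]:
    prob = model.predict_proba([[g]])[0, 1]
    print(f"Predicted approval probability for {label}: {prob:.2f}")
```

On this invented data the printed probabilities land around 0.73 for men and 0.27 for women, even though gender itself says nothing about creditworthiness.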

Should AI be gender-blind?

If there is a risk of AI misusing gender information to treat people differently, does this mean that AI systems should be developed to be gender-blind?

It depends on what the AI system is designed for, Wetzer responds. In some cases, information about gender can be an important factor for the system to take into account.

“For example, some diseases occur more frequently in women than men. For those cases, you do not want an AI to consciously ignore the person’s gender when detecting such diseases,” Wetzer says.

“If the information is relevant to the AI’s task, it is important that gender is not ignored. But an algorithm should never use this information to determine how suited someone is to be granted a loan,” she adds.
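As a rough illustration of that distinction, the sketch below (invented data and effect sizes, not from the article) compares a gender-blind model with a gender-aware one on a condition that is more common in women: the model that is allowed to see gender predicts the condition more accurately.

```python
# Sketch: a condition that is more common in women. A model allowed to
# use gender predicts it better than a gender-blind one.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

gender = rng.integers(0, 2, size=n)      # 0 = man, 1 = woman
marker = rng.normal(size=n)              # an invented biomarker
# True risk depends on the biomarker AND is higher for women.
logit = 1.5 * marker + 2.0 * gender - 1.0
disease = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_blind = marker.reshape(-1, 1)
X_aware = np.column_stack([marker, gender])

blind = LogisticRegression().fit(X_blind, disease)
aware = LogisticRegression().fit(X_aware, disease)

# Evaluated on the same data for brevity; a real study would hold out a test set.
print(f"Gender-blind accuracy: {blind.score(X_blind, disease):.3f}")
print(f"Gender-aware accuracy: {aware.score(X_aware, disease):.3f}")
```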

However, it is not always easy to develop systems that leave gender out when they should. AI models are adept at detecting gender-related proxy factors which the developers may not have realized existed in the data. For example, a recruiting tool from Amazon learned to downgrade job applications that mentioned universities associated with women.
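That proxy effect can be sketched in a few lines. Everything here is invented, including the `womens_university` indicator: the point is only that removing the gender column does not help when a correlated feature remains.

```python
# Sketch: dropping the gender column does not remove gender information
# when a correlated proxy feature remains. All data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 10_000

gender = rng.integers(0, 2, size=n)  # 0 = man, 1 = woman
# Invented proxy: attended a university with a mostly female student body.
womens_university = (rng.random(n) < 0.05 + 0.60 * gender).astype(int)
skill = rng.normal(size=n)

# Biased historical hiring: women were penalized regardless of skill.
hired = (skill - 0.8 * gender + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Train WITHOUT the gender column; the proxy stays in the features.
X = np.column_stack([skill, womens_university])
model = LogisticRegression().fit(X, hired)

print("Coefficient on the university proxy:", round(model.coef_[0][1], 2))
# Clearly negative: the model penalizes the proxy, and thereby women,
# without ever seeing gender directly.
```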

Lacking representation in data

In today’s global society, it is important that all people are represented equally. The same applies to the data that AI systems are built on: who is and is not represented in the data has a significant impact on whom the technology performs better or worse for.

“If a specific group of people is not represented in the data to the same extent as other groups, the system will perform worse on that particular group,” Wetzer explains.

If an AI is trained only on images of male professors, it will assume that the profession is reserved for men. AI developers therefore need to be conscious of the representation in their datasets, especially when developing systems that make decisions affecting people’s health and well-being.

"For example, an AI cancer diagnostics system may have been created in a developed country that can afford to do so. People will assume that it will perform well for everyone, but some marginalized groups may not ever have been part of the training data. The system will most likely not work well on those people."

“Woke” AI

But the pursuit of equal representation in AI models and data can also go too far. Last year, Gemini, Google’s AI-based image generator, was accused of being “woke” when it generated images of German soldiers from 1943 with African and Asian appearances.

“If you ask an AI about how something was in Germany at a particular point in time, it would be wrong of it to assume that the population was more diverse than it actually was. You can clearly see that it has actively tried to make a more diverse set and produced something which does not make much sense,” Wetzer says.

Skewed gender balance

Biases in AI are not limited to training data. Only 30 percent of today’s global AI workforce are women, meaning that the systems are often developed by men. This can significantly shape how such systems are built.

“There are a lot of things to consider when developing AI, for example which training data, neural network, and parameters to use. These decisions are made by someone, and today’s workforce is not particularly diverse,” she says.

If AI is developed by only a single group, there is a risk that the system will be based on how that particular group understands, experiences, and interprets the world. This is rarely a conscious choice and usually happens without the developers themselves being aware of it.

“Several studies show that technology is shaped by those who create it. This means that there is a chance that a single group may forget to include others’ perspectives and experiences around gender discrimination and racism.”

A need for female role models

The AI workforce and academia need to reflect the diversity of society, she says. An increased focus on diversity among AI developers and researchers is crucial for developing AI technology that works well for everyone.

“It is absolutely essential to include different perspectives and experiences in the development of these solutions,” Wetzer emphasizes.

She believes that the field will become more diverse. However, several measures are still needed to motivate girls and women to study, research, and develop AI.

“We must continue to spark interest in STEM subjects from an early age and put STEM careers on the map for girls. We also need strong role models who can inspire them to study and work with AI. This means we need to shed more light on female researchers and their contributions to the field.”

AI regulation is necessary

Last year, the AI Act, the world’s first comprehensive AI law, was passed in the EU. It imposes strict requirements for the responsible development and use of AI in Europe and Norway. The legislation is set to be fully implemented in Norway by 2026.

The lack of such guidelines can reinforce social and economic inequalities between different groups of people, for example along gender lines. Wetzer is positive about the AI Act and sees it as an important step toward safer and fairer AI.

“I believe it will provide thorough guidelines on how AI systems should be developed and tested before being deployed, similarly to how medications are tested. There are clear guidelines and multiple stages that must be followed before a drug can be used and sold, and the same should apply to AI,” Wetzer says.

“The regulation will encourage developers and researchers to consider how AI systems should be designed according to fundamental ethical principles. It is important that AI systems serve more than just corporate interests,” she concludes.
