Associate professor Elisabeth Wetzer says AI tends to treat men and women differently.
Photo: Petter Bjørklund/SFI Visual Intelligence.

Researcher: This makes AI perform worse for women

Artificial intelligence may treat men and women differently. How does this happen? AI researcher Elisabeth Wetzer explains the biases surrounding AI technology.

By Petter Bjørklund, Communication Advisor at SFI Visual Intelligence

From health-promoting technologies to personal assistants, there is little doubt that artificial intelligence (AI) can help us in many different ways. But does it help everyone equally?

“AI has a tendency to treat men and women differently,” says Elisabeth Wetzer, associate professor at the UiT Machine Learning Group and SFI Visual Intelligence.

She is an AI expert and describes this as a significant challenge with today’s AI technology. Systems that favor job applications by men, grant lower credit limits to women, and recognize fewer female faces are only a handful of examples she mentions.

What makes an AI algorithm “biased”, that is, prone to acting in ways that favor or discriminate against groups such as men or women?

AI can reinforce biases in data

AI is trained on enormous amounts of data. Chatbots like ChatGPT, DeepSeek, and Elon Musk’s Grok are trained on millions of images, videos, and texts from the internet. These “big data” are essential for an AI system to perform a given task.

But data are historical. This means that they can reflect prejudices and outdated stereotypes throughout history, for example pertaining to gender.

“If you look through a set of data from the last decade, you will quickly find groups who have been discriminated against based on their gender, sexuality, or skin color. Since AI is made to find patterns and correlations in data, there is a risk that the systems may pick up and reinforce biases from the dataset,” Wetzer says.

The consequences of this can be significant, especially for marginalized and underrepresented groups.

“Let us say you have a credit score system designed to determine the size of the loan someone should be granted. If the system is based on salary statistics from the past sixty years, it will pick up that there is a significant wage gap between women and men,” she explains.

“The system will then assume that women are less economically responsible than men, and that women are less suited to be granted a loan. This means that it has learned a skewed and incorrect connection between gender and income.”
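The mechanism Wetzer describes can be illustrated with a minimal sketch. The numbers and the credit rule below are purely hypothetical, not taken from any real system: a naive model that learns group-average salaries from historically skewed records will hand out systematically lower credit limits to women, even for applicants who are otherwise identical.

```python
import statistics

# Hypothetical historical salary records (illustrative numbers only):
# decades of data in which men were, on average, paid more.
historical_salaries = {
    "male":   [52000, 61000, 58000, 64000, 55000],
    "female": [41000, 47000, 44000, 49000, 43000],
}

# A naive "model" that learns group averages as its credit rule.
group_avg = {g: statistics.mean(s) for g, s in historical_salaries.items()}

def naive_credit_limit(gender: str) -> float:
    """Grant a limit proportional to the learned group-average salary."""
    return 0.5 * group_avg[gender]

# Two applicants with identical qualifications receive different limits,
# purely because the model has encoded the historical wage gap.
print(naive_credit_limit("male"))    # 29000.0
print(naive_credit_limit("female"))  # 22400.0
```

Nothing in the code singles women out explicitly; the disparity emerges solely from the historical data the rule was fitted to, which is exactly the skewed gender-income connection Wetzer warns about.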

Should AI be gender-blind?

If there is a risk of AI misusing gender information to treat people differently, does this mean that AI systems should be developed to be gender-blind?

It depends on what the AI system is designed for, Wetzer responds. In some cases, information about gender can be an important factor for the system to take into account.

“For example, some diseases occur more frequently in women than men. For those cases, you do not want an AI to consciously ignore the person’s gender when detecting such diseases,” Wetzer says.

“If the information is relevant to the AI’s task, it is important that gender is not ignored. But an algorithm should never use this information to determine how suited someone is to be granted a loan,” she adds.

Even so, it is not always easy to develop systems that leave gender out of consideration. AI models are adept at detecting gender-related proxy factors that the developers may not have realized existed in the data. For example, an experimental recruiting tool from Amazon learned to downgrade job applications that mentioned universities associated with women.
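The proxy problem can be shown with a small, entirely hypothetical sketch: even after the explicit gender field is removed from the data, a correlated feature (here a made-up flag for having attended a women's college) still carries the gender signal, so a rule fitted to outcomes splits applicants along gender lines anyway.

```python
# Hypothetical applicant records: a "gender-blind" model never sees the
# gender field, but "womens_college" remains as a proxy correlated with it.
applicants = [
    {"gender": "female", "womens_college": True,  "hired": False},
    {"gender": "female", "womens_college": True,  "hired": False},
    {"gender": "female", "womens_college": False, "hired": True},
    {"gender": "male",   "womens_college": False, "hired": True},
    {"gender": "male",   "womens_college": False, "hired": True},
    {"gender": "male",   "womens_college": False, "hired": False},
]

def hire_rate(records):
    """Fraction of records with a positive hiring outcome."""
    return sum(r["hired"] for r in records) / len(records)

# Split on the proxy feature alone, as a model fitted to outcomes would.
proxy_yes = [r for r in applicants if r["womens_college"]]
proxy_no  = [r for r in applicants if not r["womens_college"]]

print(hire_rate(proxy_yes))  # 0.0  -> the rule learns to downgrade this group
print(hire_rate(proxy_no))   # 0.75
```

Removing the gender column did not remove the pattern: any feature correlated with gender lets the model reconstruct it, which is why simply deleting the sensitive attribute is not enough.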

Lacking representation in data

In today’s global society, it is important that all people are represented equally. This also applies to the data that AI systems are based on. Who is, and is not, represented in the data strongly affects whom the technology performs well or poorly for.

“If a specific group of people is not equally represented in the data as other groups, the system will perform worse on that particular group,” Wetzer explains.

If AI is trained only on images of male professors, it may learn that the profession is reserved for men. AI developers need to be conscious of the representation in the dataset, especially when developing systems that make decisions affecting people’s health and well-being.
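The effect of underrepresentation can be sketched with a toy diagnostic example (all features and numbers are invented for illustration): a naive threshold rule learned from training data dominated by one group works for that group but systematically misclassifies healthy patients from the underrepresented group.

```python
import statistics

# Hypothetical 1-D "scan feature": disease raises it by ~4 units, but the
# healthy baseline differs between group A (~10) and group B (~20).
train = [
    # (feature, is_diseased) -- training set dominated by group A
    (10.1, False), (9.8, False), (10.3, False), (9.9, False),
    (14.2, True),  (13.8, True), (14.0, True),  (14.3, True),
    (20.2, False),                      # group B: a single healthy sample
]

healthy_mean = statistics.mean(f for f, d in train if not d)
disease_mean = statistics.mean(f for f, d in train if d)
threshold = (healthy_mean + disease_mean) / 2  # naive learned decision rule

def predict(feature: float) -> bool:
    """Flag a patient as diseased if the feature exceeds the threshold."""
    return feature > threshold

# Held-out test patients:
group_a_healthy = [10.0, 9.7, 10.2]   # classified correctly
group_b_healthy = [19.9, 20.1, 20.4]  # all wrongly flagged as diseased

print([predict(f) for f in group_a_healthy])  # [False, False, False]
print([predict(f) for f in group_b_healthy])  # [True, True, True]
```

With only one group B sample in training, the learned threshold reflects group A's baseline, so every healthy group B patient is misdiagnosed, mirroring Wetzer's point that the system performs worse on whoever is missing from the data.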

"For example, an AI cancer diagnostics system may have been created in a developed country that can afford to do so. People will assume that it will perform well for everyone, but some marginalized groups may not ever have been part of the training data. The system will most likely not work well on those people."

“Woke” AI

But the pursuit of equal representation in AI models and data can also go too far. Last year, Gemini, Google’s AI-based image generator, was accused of being “woke” when it generated images of German soldiers from 1943 with African and Asian appearances.

“If you ask an AI about how something was in Germany at a particular point in time, it would be wrong of it to assume that the population was more diverse than it actually was. You can clearly see that it has actively tried to make a more diverse set and produced something which does not make much sense,” Wetzer says.

Skewed gender balance

Bias in AI does not stem from training data alone. Only about 30 percent of today’s global AI workforce are women, meaning that the systems are often developed by men. This can significantly shape how such systems are built.

“There are a lot of things to consider when developing AI, for example which training data, neural network, and parameters to use. These decisions are made by someone, and today’s workforce is not particularly diverse,” she says.

If AI is developed by only a single group, there is a risk that the system will be based on how that particular group understands, experiences, and interprets the world. This is rarely a conscious choice and usually happens without the developers themselves being aware of it.

“Several studies show that technology is shaped by those who create it. This means that there is a chance that a single group may forget to include others’ perspectives and experiences around gender discrimination and racism.”

A need for female role models

The AI workforce and academia need to reflect the diversity of society, she says. An increased focus on diversity among AI developers and researchers is crucial for developing AI technology that works well for everyone.

“It is absolutely essential to include different perspectives and experiences in the development of these solutions,” Wetzer emphasizes.

She believes that the field will become more diverse. However, several measures are still needed to motivate girls and women to study, research, and develop AI.

“We must continue to spark interest in STEM sciences from an early age and put STEM careers on the map for girls. We also need strong role models who can inspire them to study and work with AI. This means we need to shed more light on female researchers and their contributions to the field.”

AI regulation is necessary

Last year, the AI Act, the world’s first AI law, was passed in the EU. It imposes strict requirements for the responsible development and use of AI in Europe and Norway. The entire legislation is set to be implemented in Norway by 2026.

The lack of such guidelines can contribute to reinforcing social and economic inequalities among different groups of people, including those based on gender. Wetzer is positive about the AI Act and sees it as an important step toward developing safer and fairer AI.

“I believe it will provide thorough guidelines on how AI systems should be developed and tested before being implemented, similarly to how medications are tested. There are clear guidelines and multiple stages that must be followed before the drugs can be used and sold, and the same should apply to AI,” Wetzer says.

“The regulation will encourage developers and researchers to consider how AI systems should be designed according to fundamental ethical principles. It is important that AI systems serve more than just corporate interests,” she concludes.
