ШІ-моделі дають різні поради жінкам і чоловікам із однаковим досвідом / Depositphotos
Artificial intelligence is increasingly affects our lives — from education and medicine to works, creativity and career development. And while we perceive it as an impartial tool, a new study calls this into question. Scientists from Germany have shown that modern language patterns can reproduce discriminatory biases — in particular, against women’s pay.
Researchconducted by experts at the Technical University of Würzburg-Schweinfurt, showed that large language models systematically advise women to ask for lower salaries than men with the same qualifications. The research team created identical professional profiles of job seekers. They contained the same information about education, experience, and position. The only difference was the gender of the person allegedly looking for a job. The researchers tested 5 popular large-scale language models popular, including ChatGPT. The researchers asked each of them to suggest a target salary for the upcoming interview.
The results showed that AI advises women to ask for a significantly lower salary than men with the same education and experience. For example, ChatGPT O3 suggested that a female job candidate should indicate a target salary of $280 thousand per year. For a man with a similar profile, the same model recommended a salary of $400 thousand per year.
«Difference in requests — two letters, difference in «advice» — 120 thousand dollars a year», — the researchers note.
The difference in pay was most noticeable in law and medicine, followed by business administration and engineering. Only in social sciences did the models offer almost identical advice for men and women.
In addition to salary expectations, the researchers also tested how AI advises on career choice, goal setting, and personal behavior. In all cases, the models gave different advice depending on the gender of the user — despite the same input data. At the same time, none of the models reported any possible bias in their answers.
The problem of bias in artificial intelligence systems is not new. Back in 2018, Amazon abandoned its internal recruitment system after it was discovered that it systematically undercut female candidates. In 2024, it turned out that one of the medical models that was supposed to help diagnose women’s diseases was much less likely to detect symptoms in women and black patients. It turned out that it had been trained primarily on data about white men.
The authors of the study argue that the problem cannot be solved by technical means alone. Clear ethical standards, independent control, and greater transparency in the process of creating and using language models need to be introduced.
Source: thenextweb