
Online translation tools have helped us learn new languages, communicate across language boundaries, and view foreign websites in our native language. But the artificial intelligence (AI) behind them is far from perfect, often reproducing rather than rejecting the prejudices that exist within a language or society.
These tools are particularly vulnerable to gender stereotyping because some languages (like English) largely lack grammatical gender, while others (like German) mark it on nouns. When translating from English into German, translation tools have to decide which gender to assign to English words such as “cleaner”. Overwhelmingly, the tools conform to the stereotype, opting for the feminine word in German.
Prejudice is human: it is part of who we are. But when left unchallenged, prejudices can emerge in the form of concrete negative attitudes towards others. Now our team has found a way to retrain the AI behind translation tools, using targeted training to help them avoid gender stereotypes. Our method could be used in other areas of AI to help technology reject, rather than reproduce, prejudices within society.
Biased algorithms
Much to the dismay of their creators, AI algorithms often develop racist or sexist traits. Google Translate has been accused of gender stereotyping, such as translations that assume all doctors are men and all nurses are women. Meanwhile, the GPT-3 AI language generator – which wrote an entire article for The Guardian in 2020 – has recently shown that it is also alarmingly effective at producing harmful content and disinformation.
These AI failures are not necessarily the fault of their creators. Academics and activists recently drew attention to gender bias in the Oxford English Dictionary, where sexist synonyms for “woman” – such as “slut” or “maid” – show how even a catalog of words constantly revised and edited by academics may contain biases that reinforce stereotypes and perpetuate sexism in everyday life.
AI learns prejudice because it isn’t built in a vacuum: it learns to think and act by reading, analyzing, and categorizing existing data – like that contained in the Oxford English Dictionary. In the case of translation AI, we expose its algorithm to billions of words of text data and ask it to detect and learn patterns. We call this process machine learning, and along the way patterns of bias are learned alongside those of grammar and syntax.
Ideally, the textual data that we show AI will not contain bias. But there is a continuing trend in the field to build larger systems trained on ever-increasing datasets. We are talking about hundreds of billions of words. These are obtained on the internet using indiscriminate text scraping tools such as Common Crawl and WebText2, which maraud the web, gobbling up every word they come across.
The sheer size of the resulting data makes it difficult for any human to know what is in it. But we do know that some of it comes from platforms like Reddit, which made headlines for presenting offensive, false, or conspiratorial information in user posts.
New translations
In our research, we wanted to find a way to counter bias in textual datasets pulled from the Internet. Our experiments used a randomly selected portion of an existing English-German corpus (a selection of text) that originally contained 17.2 million sentence pairs – half in English, half in German.
As we have pointed out, German has gendered forms for nouns (a doctor can be “der Arzt” for a man or “die Ärztin” for a woman), whereas English generally does not (with a few exceptions, themselves controversial, like “actor” and “actress”).
Our analysis of this data revealed clear gender imbalances. For example, we found that the masculine form of engineer in German (“der Ingenieur”) was 75 times more common than its feminine counterpart (“die Ingenieurin”). A translation tool trained on this data will inevitably reproduce this bias, translating “engineer” into the masculine “der Ingenieur”. So what can be done to prevent or mitigate this?
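To make this kind of imbalance concrete, here is a minimal sketch of how one might count masculine versus feminine occupation forms on the German side of a parallel corpus. It is not the authors’ actual analysis; the file name and the word list are illustrative assumptions.

```python
# Sketch: count masculine vs. feminine German occupation terms in a corpus.
# "corpus.de" and the word pairs below are illustrative placeholders.
import re
from collections import Counter

PAIRS = [
    ("Ingenieur", "Ingenieurin"),  # engineer (masculine / feminine)
    ("Arzt", "Ärztin"),            # doctor (masculine / feminine)
]

counts = Counter()
with open("corpus.de", encoding="utf-8") as f:  # one German sentence per line
    for line in f:
        for masc, fem in PAIRS:
            # \b word boundaries stop "Ingenieur" matching inside "Ingenieurin".
            counts[masc] += len(re.findall(rf"\b{masc}\b", line))
            counts[fem] += len(re.findall(rf"\b{fem}\b", line))

for masc, fem in PAIRS:
    ratio = counts[masc] / max(counts[fem], 1)
    print(f"{masc}: {counts[masc]}  {fem}: {counts[fem]}  ratio ≈ {ratio:.1f}")
```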
Overcoming prejudice
A seemingly simple answer is to “balance” the corpus before asking computers to learn from it. Perhaps, for example, adding more female engineers to the corpus would prevent a translation system from assuming that all engineers are men.
Unfortunately, this approach presents difficulties. Translation tools are trained for days on billions of words. Retraining them on text with the gender of words swapped is possible, but it is inefficient, expensive and complicated. Adjusting gender in languages like German is particularly hard because, for a sentence to remain grammatical, several words may need to change to reflect the change of gender, as the toy example below illustrates.
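As a hypothetical illustration (a naive dictionary-based swap, not a method proposed in the research), flipping the gender of a single German noun forces the article and any pronouns referring back to it to change as well:

```python
# Toy illustration: swapping the gender of one German noun means changing
# several agreeing words. The swap table is a simplistic, hypothetical example.
SWAPS = {
    "Der": "Die", "der": "die",   # definite article must agree
    "Ingenieur": "Ingenieurin",   # the noun itself
    "er": "sie", "Er": "Sie",     # pronouns referring back to the noun
}

sentence = "Der Ingenieur sagte, dass er müde ist."
swapped = " ".join(SWAPS.get(token, token) for token in sentence.split())
print(swapped)  # "Die Ingenieurin sagte, dass sie müde ist."

# Even this tiny example changes three words, and naive token swapping breaks
# quickly on other case forms ("dem Ingenieur", "den Ingenieur"), adjective
# endings and pronouns that refer to something else – which is why rebalancing
# millions of sentences this way is so costly.
```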
Instead of this painstaking gender rebalancing, we decided to retrain existing translation systems with focused lessons. When we spotted a bias in the existing tools, we decided to retrain them on new, smaller datasets – much like an afternoon of gender sensitivity training at work.
This approach takes a fraction of the time and resources needed to train models from scratch. We were able to use only a few hundred selected translation examples – instead of millions – to adjust the behavior of the translation AI in a targeted way. When testing gendered professions in translation – as we did with “engineers” – the accuracy improvements after this adjustment were about nine times greater than with the “balanced” retraining approach.
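In practice, this kind of targeted retraining amounts to briefly fine-tuning a pre-trained translation model on a small, hand-curated set of sentence pairs. The sketch below shows the general pattern, assuming the Hugging Face transformers and datasets libraries; the model name, hyperparameters and example sentences are placeholders rather than the authors’ actual setup.

```python
# Sketch: fine-tune a pre-trained English-German model on a small, curated
# gender-balanced dataset. Model name, data and settings are placeholders.
from datasets import Dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "Helsinki-NLP/opus-mt-en-de"  # any pre-trained en-de model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# A few hundred curated pairs would go here; two illustrative examples only.
pairs = [
    {"en": "She is an engineer.", "de": "Sie ist Ingenieurin."},
    {"en": "He is a cleaner.", "de": "Er ist Reinigungskraft."},
]

def tokenize(batch):
    # Encode source and target sentences for sequence-to-sequence training.
    return tokenizer(batch["en"], text_target=batch["de"],
                     truncation=True, max_length=64)

dataset = Dataset.from_list(pairs).map(
    tokenize, batched=True, remove_columns=["en", "de"])

args = Seq2SeqTrainingArguments(
    output_dir="debias-finetune",
    num_train_epochs=3,              # a short, targeted pass, not full training
    per_device_train_batch_size=8,
    learning_rate=2e-5,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```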
In our research, we wanted to show that tackling the biases hidden in huge datasets doesn’t necessarily mean painstakingly adjusting millions of training examples, a task that risks being dismissed as impossible. Instead, data bias can be targeted and unlearned – a lesson other AI researchers can apply to their own work.
This article by Stefanie Ullmann, Postdoctoral Research Associate, University of Cambridge, and Danielle Saunders, Research Student, Department of Engineering, University of Cambridge, is republished from The Conversation under a Creative Commons license. Read the original article.
Published March 31, 2021 – 17:00 UTC