AI tool quantifies power imbalance between female and male characters in Hollywood movies
At first glance, the movie “Frozen” might seem to have two strong female protagonists — Elsa, the elder princess with unruly powers over snow and ice, and her sister, Anna, who spends much of the film on a quest to save their kingdom.
But the two princesses actually exert very different levels of power and control over their own destinies, according to new research from University of Washington computer scientists.
The team used machine-learning-based tools to analyze the language in nearly 800 movie scripts, quantifying how much power and agency those scripts give to individual characters. In their study, recently presented in Denmark at the 2017 Conference on Empirical Methods in Natural Language Processing, the researchers found subtle but widespread gender bias in the way male and female characters are portrayed.
“‘Frozen’ is an interesting example because Elsa really does make her own decisions and is able to drive her own destiny forward, while Anna consistently fails in trying to rescue her sister and often needs the help of a man,” said lead author and Paul G. Allen School of Computer Science & Engineering doctoral student Maarten Sap, whose team also applied the tool to Wikipedia plot summaries of several classic Disney princess movies.
“Anna is actually portrayed with the same low levels of power and agency as Cinderella, which is a movie that came out more than 60 years ago. That’s a pretty sad finding,” Sap said.
The team also created a searchable online database showing the subtle gender biases in hundreds of Hollywood movie scripts, which range from late-1980s cult classics like “Heathers” to romantic comedies like “500 Days of Summer” to war films like “Apocalypse Now.”
In their analysis, the researchers found that women were consistently portrayed in ways that reinforce gender stereotypes, such as in more submissive positions and with less agency than men. For example, male characters more often spoke in imperative sentences (“Bring me my horse”), while female characters tended to hedge their statements (“Maybe I am wrong”). But the bias is not just in the words the characters speak; it also shows up in the way they are portrayed through the narrative.
To study these nuanced biases in narratives, the UW researchers expanded on prior work, presented in 2016, on “connotation frames,” which capture how a verb’s connotative meaning can empower or weaken the characters it connects. The new study evaluated the power and agency implicit in 2,000 commonly used verbs, with the connotative meanings obtained through Amazon Mechanical Turk crowdsourcing experiments.
The power dimension denotes whether a character has authority over another character, while the agency dimension denotes whether a character has control over his or her own life or storyline. For each verb, crowd workers rated the implied power differential and level of agency on a scale of 1 to 3.
“For example, if a female character ‘implores’ her husband, that implies the husband has a stance where he can say no. If she ‘instructs’ her husband, that implies she has more power,” said co-author Ari Holtzman, an Allen School doctoral student. “What we found was that men systematically have more power and agency in the film script universe.”
Verbs that imply low power or agency include words like ask, experience, happen, wait, relax, need or apologize. Verbs that confer high power or agency include words like finish, prepare, betray, construct, destroy, assign or compose.
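To make the idea concrete, a connotation-frame lexicon of this kind can be thought of as a mapping from verb lemmas to power and agency labels. The sketch below uses a simplified, hypothetical format with illustrative entries; it is not the team’s published lexicon, whose labels and values may differ.

```python
# A simplified, hypothetical connotation-frame lexicon (illustrative only).
# Each verb lemma maps to:
#   "power":  who the verb implies holds power ("agent", "theme" or "equal")
#   "agency": whether the verb grants its subject agency ("pos", "neg" or "neutral")
CONNOTATION_FRAMES = {
    "implore":   {"power": "theme", "agency": "neg"},   # the one implored holds the power
    "instruct":  {"power": "agent", "agency": "pos"},   # the instructor holds the power
    "apologize": {"power": "theme", "agency": "neg"},
    "wait":      {"power": "equal", "agency": "neg"},
    "construct": {"power": "agent", "agency": "pos"},
    "destroy":   {"power": "agent", "agency": "pos"},
}

def frame_for(verb_lemma):
    """Look up the power/agency frame for a verb, if the lexicon covers it."""
    return CONNOTATION_FRAMES.get(verb_lemma)
```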
From the movie scripts, the researchers automatically identified the genders of 21,000 characters based on names and descriptions. With natural language processing tools that employ machine learning, they determined which characters appeared as each verb’s subject and object, and then computed how much agency and power the scripts ascribed to those characters using the crowdsourced connotation frames. The researchers also adjusted for the fact that male actors had more screen time and spoke more, accounting for 71.8 percent of the words spoken across all the movies.
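As a rough illustration of that step, the sketch below parses text with spaCy (an assumption; the article does not name a specific library), finds each verb’s subject and object from the dependency parse, and tallies power and agency labels per character name using the hypothetical lexicon sketched above. Function names such as score_characters are invented for the example.

```python
from collections import Counter, defaultdict
import spacy  # assumes: pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

def score_characters(script_text, lexicon):
    """Tally illustrative power/agency counts per character name.

    For each verb found in the lexicon, credit its grammatical subject
    and object according to the verb's connotation frame.
    """
    power = defaultdict(Counter)   # name -> counts of "high"/"low" power mentions
    agency = defaultdict(Counter)  # name -> counts of "pos"/"neg" agency mentions

    for sent in nlp(script_text).sents:
        for token in sent:
            if token.pos_ != "VERB":
                continue
            frame = lexicon.get(token.lemma_)
            if frame is None:
                continue
            subjects = [c.text for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c.text for c in token.children if c.dep_ in ("dobj",)]
            for name in subjects:
                agency[name][frame["agency"]] += 1
                power[name]["high" if frame["power"] == "agent" else "low"] += 1
            for name in objects:
                power[name]["high" if frame["power"] == "theme" else "low"] += 1
    return power, agency

# Example: "ELSA instructs the guards. ANNA implores her sister." would credit
# Elsa with high power and positive agency, and Anna with low power and agency.
```

A real pipeline, as the article describes, would also map mentions to characters, attach genders, and normalize for how much screen time and dialogue each character gets; the sketch omits those details.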
The team calculated separate power and agency scores for male and female characters in each movie. They also created scores based on words that the characters spoke in dialogue and on words that were used in narration or stage direction to describe those characters — exposing subtle differences and biases.
In 2010’s “Black Swan,” which centers on a female lead, a perfectionist ballerina who slowly loses her grip on reality, the dialogue gives female characters more agency. But the language used to describe the characters in stage direction and narration gives male characters more power and agency in that film.
In the 2007 movie “Juno,” about an offbeat young woman who unexpectedly gets pregnant, male characters also consistently score higher in power and agency in scene descriptions and narration, though the two genders come closer in their dialogue.
The UW team’s tool yields a much more nuanced analysis of gender bias in fictional works than the Bechdel Test, which only evaluates whether at least two female characters have a conversation about something other than a man.
The tendency for male characters to score higher on both the power and agency dimensions held true across all genres: comedy, drama, horror, sci-fi and thrillers. Interestingly, the team found the same gender bias even in movies with female casting directors or screenwriters.
“We controlled for this. Even when women play a significant role in shaping a film, implicit gender biases are still there in the script,” said co-author and Allen School doctoral student Hannah Rashkin.
Next steps for the team include broadening the tool to not only identify gender bias in texts but also to correct for it by offering rephrasing suggestions or ways to make language more equal across characters of different genders. The methodology isn’t limited to movies, but could be applied to books, plays or any other texts.
“We developed this tool to help people understand how they may be perpetuating these subtle but prevalent biases that are deeply integrated in our language,” said senior author Yejin Choi, an associate professor in the Allen School. “We believe it will help to have this diagnostic tool that can tell writers how much power they are implicitly giving to women versus men.”
###
The research was funded by the National Science Foundation, Google and Facebook. The other co-author is former UW Allen School undergraduate Marcella Cindy Prasetio.
See how nearly 800 different movie scripts rank on gender bias here: https://homes.cs.washington.edu/~msap/movie-bias/
For more information, contact the research team at debiasing-ai@cs.washington.edu.