Gender Bias Inside the Digital Revolution: Digital Human Rights
from Women Around the World and Women and Foreign Policy Program

Young girl checks out an iPhone in November 2018. LILLIAN SUWANRUMPHA/AFP/Getty Images

This post was coauthored by Abigail Van Buren, interdepartmental program assistant at the Council on Foreign Relations.

During a recent CFR roundtable, Professor Safiya Noble spoke about digital human rights – an issue on which she is advising the United Nations. In her celebrated book, Algorithms of Oppression, Dr. Noble explores the biases against women and people of color that are embedded in search engine results and algorithms as we move into an increasingly digital age.

Dr. Noble explained that her book sprang from a project she began in graduate school, when she noticed the rising popularity of search engines as a source of public knowledge. She was stunned to find that when she typed “black girls” into the search bar, the first page of results returned pornographic images as the primary representation of African American girls. For Dr. Noble, this opened a series of questions about how search engine content is curated. Her research showed that while multiple criteria factor into search engine results, these results largely mirror public opinion: the representation of black girls often centered on sexist and racist images because of the negative stereotypes many hold about these girls. Dr. Noble found similar results for Latina and Asian American girls.

Research also demonstrates that public opinion and what we see online are mutually reinforcing. In a recent report, David Kaye, UN special rapporteur on the promotion and protection of the right to freedom of opinion and expression, warns that because artificial intelligence (AI) can personalize what users see online, AI “may reinforce biases and incentivize the promotion and recommendation of inflammatory content or disinformation in order to sustain users’ online engagement.” In other words, search engine content can be shaped by public biases and may entrench those biases by rebounding them as search results that 73 percent of users believe to be accurate and trustworthy. As bias persists, many have come to see content moderation as a solution: tech companies monitor user-generated material against pre-set guidelines to determine whether content should be taken down or left up. For example, some companies remove content that spreads disinformation, obscenity, or violent, hateful speech.

The challenges of content filtering and the curation of ideas have touched other technologies, such as radio. During the Rwandan genocide, for example, “hate radio” was used to spread anti-Tutsi propaganda and, later, to direct killers to targeted individuals. However, as Dr. Noble pointed out, radio and the Internet differ in terms of content regulation. Radio broadcasters often require a license from the government, which, at least in some countries, comes with guidelines on what can and cannot be said on the airwaves. Radio stations also have more mechanisms to curate who speaks, about what, and at what time. By contrast, the Internet lacks this check and enables anyone to publish virtually anything at any time, as Noble has discussed in Time magazine. It is chilling to think how much more amplified the messages of hate in Rwanda would have been had these platforms been available then. In fact, this may already be happening: UN experts investigating the genocide in Myanmar have said that Facebook played a role in spreading hate speech.

International human rights law prohibits “hate speech,” as do several countries in Europe and around the world, while the United States takes a more protective view toward robust free speech. The risk of speech regulation, of course, is that it can be wielded by governments against unpopular speech, minorities, and political opponents (as Thailand is currently doing under its Computer Crimes Act, or “fake news” law). But even setting the free speech debate aside, UN special rapporteur David Kaye questions whether individuals are truly able to exercise freedom of opinion, which includes the right to form an opinion, given that online content curation raises novel questions about the types of coercion or inducement that may be considered an interference with that right. Most online content is curated not to remove “offensive” material, but rather to capture the attention of web surfers and promote advertisements, the predominant business model for social media giants like Facebook.

Even without government regulation of content, tech companies themselves have taken steps to address bias on their platforms. After a legal challenge against Airbnb, brought by an African American man denied a booking by a landlord based on race, and mounting scrutiny from consumers as #AirbnbWhileBlack took off on social media, the rental platform adopted a policy requiring that users “comply with local laws and regulations,” including federal anti-discrimination laws (though many jurisdictions carve out exemptions for smaller dwellings with fewer rooms). While the application of non-discrimination laws to the actions of independent contractors in the online gig economy remains murky, other companies, such as Lyft and Uber, have established their own nondiscrimination policies to oversee peer-to-peer interactions on their platforms, sometimes going beyond what state law requires. Indeed, Noble has written about the profit models driving online platforms, which can test the limits of legality in housing discrimination and circumvent civil rights and non-discrimination protections.

At the roundtable, Dr. Noble and her UCLA colleague, Dr. Sarah Roberts, noted that the global labor force supporting the tech sector includes those working on content moderation, and that while this digital labor force is “values-based,” the criteria it uses to determine what content stays up and what comes down are not always transparent. One step in the right direction for shaping both the values and the composition of the digital labor force is diversifying the tech sector through programs such as Black Girls Code and Girls Who Code. As discussed in a CFR report on Women and Tech that one of us co-authored, increasing access to STEM education and jobs for women and girls is critical, not only to expand opportunities for half the world, but also to grow economies, particularly in emerging markets abroad and in disadvantaged communities at home. As Noble argues in her book, placing the onus for current discriminatory practices on marginalized communities is not without its own problems; addressing this ecosystem of challenges requires investments in disenfranchised communities, public policy, and significantly more research on the role of digital media platforms in bolstering or suppressing democracy and social equality.

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.