MODERATE CAUTIOUSLY

Minimize your control over user expression.

If your product provides a forum for content or communication, consider carefully the far-reaching impact of any content moderation policies or decisions, and whether you want to be in the business of policing those forums at all. Following the Santa Clara Principles, any policies addressing illegal or other harmful behavior should be as clear and narrow as possible, be transparent to users, and be applied in an equitable manner. Any policies and mechanisms must be regularly evaluated for their impact on users' civil liberties and civil rights, particularly users whose voices have historically been marginalized.

PROHIBIT ONLY ILLEGAL CONTENT OR SPECIFIC HARMFUL BEHAVIOR

A sense of community can flourish when users are able to communicate freely. Narrowly tailoring your terms of service or community guidelines to prohibit only illegal content or specific harmful behavior that undermines the purpose of your service will help you limit the time you spend monitoring speech and the risk of being inconsistent or biased in the application of your policies.

Case Study

Facebook “race blind” content moderation drives away Black users

Facebook came under public fire, many Black users left the platform, and the company was hauled before Congress in 2021 for its content moderation practices, which the company's own researchers found allowed hate speech to proliferate while disproportionately censoring Black users and people from marginalized groups. These findings were consistent with criticisms civil rights groups had directed at Facebook for years, including the ACLU of Northern California, which in 2020 urged Facebook to address content moderation on its platform following the erroneous takedowns of content shared by Black activists and LGBTQ artists. Facebook has promised to examine and improve how its platform treats people differently based on race and has created a Civil Rights Team to audit the company's progress.

Case Study

PayPal Flops as Moral Police

In February 2012, PayPal drew criticism from the press and civil liberties groups when it threatened to kick book publishers off its platform unless they removed "offending literature" from their catalogs. Although the company claimed that its policy was merely a shield against legal action, its enforcement was seen as sweeping in a broad range of content, some of which was clearly legal. Faced with a barrage of criticism, PayPal revised its policy to focus more narrowly on illegal and liability-inducing material.

Case Study

Civil Rights Groups Criticize Facebook for Censoring Police Shootings

A coalition of 40 tech and racial justice civil rights groups slammed Facebook CEO Mark Zuckerberg in an open letter for "silencing individuals at the request of the police" after two separate incidents in which the social media company suspended African-American users' livestreams of police shootings. The controversy arose amid reports that Zuckerberg had reprimanded Facebook employees for defacing "Black Lives Matter" messages written on the walls of company headquarters. The groups called on Facebook to recognize its importance as a tool for "documenting police misconduct" and to institute transparent policies regarding content censorship and collaboration with law enforcement.

CLEARLY SPELL OUT YOUR COMMUNITY’S POLICIES

Vague prohibitions of "offensive" or "inappropriate" speech leave users uncertain as to what they can and cannot say, which can both chill acceptable speech and drive away users who have lost trust in moderation enforcement that appears discriminatory or arbitrary. On the other hand, clear, understandable policies encourage users to contribute rich content and respectfully engage with others. Policies should be easily accessible to users, who should also be notified when policies are updated or revised.

Case Study

Instagram Receives Worldwide Criticism After Banning Period Photo

Instagram found itself at the center of an international controversy after it removed a photo posted by the Sikh artist and poet Rupi Kaur. Even though the photo was of a fully dressed woman alone in bed with a period stain on her clothing, Instagram deleted the photo twice for allegedly violating its prohibitions against sexual acts, violence, and nudity. After facing widespread criticism for its decision, Instagram apologized for "wrongly remov[ing]" the content as a violation of its Community Guidelines and reposted the photo.

MAKE THE CONSEQUENCE FIT THE VIOLATION, FOR EVERYONE

If users are subject to extreme penalties for minor violations of your policies, they may lose interest in engaging with your service. Instead, ensure that consequences are proportionate to the rule violation and applied equally to all users. Doing so helps you earn the respect and loyalty of all of your users.

PROTECT AND PROMOTE DIGITAL RIGHTS

As users increasingly rely on digital devices and the internet to connect, communicate, and fight for social change, it is critical that you safeguard the civil rights of your users and take measures to prevent the entrenchment of social inequities online.
