Chapter 5: Free Expression, Anonymity, and Content Moderation

In this week’s readings and media, you will encounter a study of the Online Disinhibition Effect, which explains how a person’s social behavior can change when posting online anonymously.

When you consider the negative impact of trolling, you may wonder why there isn’t a global initiative to eliminate anonymous communication altogether. Should social media companies be held accountable?

On the other side of the issue, you will examine the social and political benefits of being anonymous. There is a strong case that, without anonymity, there would be a chilling effect on free expression, which is widely regarded as a cornerstone of democracy.

Finally, what content needs to be moderated, and when should an organization limit, block, or deplatform a user?

Key points in this chapter

The readings and media in this chapter present the following themes:

  1. People behave differently when they know they are anonymous – often negatively or in antisocial ways.
  2. The power of social media to inflict suffering and to cast hate upon individuals and groups creates tension with the principles of free speech.
  3. Anonymity protects political dissenters and oppressed groups, allowing them to organize and communicate safely.
  4. Content moderation and deplatforming are pressing but extremely complex needs for social media companies, with political controversy swirling around the decisions they make.

Article: Council on Foreign Relations – “Hate Speech on Social Media: Global Comparisons” by Zachary Laub, June 7, 2019 (8 pages)

This article provides an analysis of the relationship between hate speech on social media and hate crimes on a global scale. Worth noting are the following statements:

  • “The same technology that allows social media to galvanize democracy activists can be used by hate groups seeking to organize and recruit.”
  • “Users’ experiences online are mediated by algorithms designed to maximize their engagement, which often inadvertently promote extreme content…. ‘YouTube may be one of the most powerful radicalizing instruments of the 21st century,’ writes sociologist Zeynep Tufekci.”
  • “The 1996 law exempts tech platforms from liability for actionable speech by their users. Magazines and television networks, for example, can be sued for publishing defamatory information they know to be false; social media platforms cannot be found similarly liable for content they host.”

Laub, Z. (2019). Hate speech on social media: Global comparisons. Council on Foreign Relations.

Article: The Online Disinhibition Effect

John Suler’s research article “The Online Disinhibition Effect” describes six psychological factors that contribute to trolling behavior: dissociative anonymity, invisibility, asynchronicity, solipsistic introjection, dissociative imagination, and minimization of authority. Download PDF: “The Online Disinhibition Effect.” (6 pages)

Key quotes:

“When people have the opportunity to separate their actions on-line from their in-person lifestyle and identity, they feel less vulnerable about self-disclosing and acting out. Whatever they say or do can’t be directly linked to the rest of their lives. In a process of dissociation, they don’t have to own their behavior by acknowledging it within the full context of an integrated online/offline identity.”

“Consciously or unconsciously, people may feel that the imaginary characters they ‘created’ exist in a different space, that one’s online persona along with the online others live in a make-believe dimension, separate and apart from the demands and responsibilities of the real world. They split or dissociate online fiction from offline fact.”

Suler, J. (2004). The online disinhibition effect. CyberPsychology & Behavior, 7(3), 321–326. doi:10.1089/1094931041291295.

Business policy: The value of anonymity

Whisper allows users to post their intimate feelings with total anonymity. Here are their community guidelines, with references to their philosophy of anonymity. In this Huffington Post article, we see how the anonymity factor has served as a channel for expression: “LGBT Youths With Unsupportive Parents Sound Off Anonymously On Whisper App” by Curtis M. Wong, Senior Editor, HuffPost Queer Voices. Retrieved December 18, 2016.

Poll: “When it comes to people’s identity on social media, … what do you think should happen?” YouGov informal poll of 3,400+ adults, July 2021

What do you suppose the results of the poll question above are? After you read the results, go to the Twitter post from YouGov and read the comments. Bear in mind that Twitter users represent no one other than Twitter users, so treat the results as a trend and not necessarily as scientific.

Content Moderation and Deplatforming

“The wave of violence has shown technology companies that communication and coordination flow in tandem. Now that technology corporations are implicated in acts of massive violence by providing and protecting forums for hate speech, CEOs are called to stand on their ethical principles, not just their terms of service.” – Joan Donovan, author of “Navigating the Tech Stack: When, Where and How Should We Moderate Content?”

At first glance, whether a private company has the right to remove (perceived) offensive or misleading content is simply a matter of conducting business. The most common meme used to describe this policy is the “No Shirt. No Shoes. No Service.” sign restaurants use to refuse service to those who do not comply with their rules.

However, the issue becomes more complicated when you consider that a small handful of tech companies now control the vast majority of content that people engage with; there is no equivalent “public sphere.”

Content moderation is typically used to remove violent, pornographic, threatening, and other objectionable content from social media and the Internet. However, it can also be used to limit political expression or oppress ethnic, cultural, religious, and sexual minorities.

Companies that facilitate online communication make discretionary decisions to remove from their systems users or entities that they deem to pose a risk to their customers, to the public, or to their own profitability. There are several ways that companies can deplatform a person or entity.

Deplatforming removes or constrains a person’s communication on a platform’s private system, usually in response to a violation of its terms of use, though such decisions are often contested.

Companies that use social media accounts to engage with their target audiences need to make careful decisions about when to delete posts from their accounts, why, and what to communicate about such decisions. Equally, organizations need to decide carefully how, when, and why to block individuals from contributing to their online spaces.

Attributions

This chapter was adapted from Trends in Digital & Social Media by Steve Covello, which is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.

License

Social Media & Reputation Management Copyright © 2023 by Sam Schechter is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
