By Genevieve Cheng

Is Freedom of Speech Threatened by the Removal of Misinformation?

Updated: Feb 13, 2023

It has become difficult to see how rights to freedom of speech apply to social media platforms – they aren’t government-run, so if your post gets removed or your account gets banned, is this technically “censorship”? Further, as Megan Jordan asks, if a platform bans an elected member of government or even the head of state, is this prohibiting government communication?


Given how pervasive social media platforms are and how deeply they have infiltrated our everyday lives, platform moderation and regulation have never been more important, nor more contested.


How Does Freedom of Speech Work Online?


If your country’s government protects your right to freedom of speech, it is likely that these laws or constitutional rights have not been adjusted or amended to account for social media platforms. For example, in the USA, freedom of speech is defined as “the right to speak, write, and share ideas and opinions without facing punishment from the government. The First Amendment protects this right by prohibiting Congress from making laws that would curtail freedom of speech.”


This means that the government cannot censor its citizens, but it doesn’t necessarily take into account the actions of independent companies, like Meta (formerly Facebook). This is why arguments around platform moderation and governance are complicated: is it truly (by definition) “censorship” if the government isn’t the one doing it?


Difficulties with Moderation of Content – Who Makes the Decisions?


Social media is used for much more than just communication and sharing your “status” on your “Facebook wall.” On many occasions, these platforms have been used for democratic practices, such as during the Arab Spring (2010–2012) or the organization of protests in 2020 after the murder of George Floyd. If social media platforms are occupied by politicians and voting citizens, and are where political discussion and debate take place, the governance of these platforms becomes more than just important – it becomes vital.


Beyond their role in politics and elections, social media platforms are the main communication technologies for entire generations, doing everything from influencing cultural trends and providing economic opportunity, to platforming social change and protests, to forming public opinion and even enabling mass public shaming.


For all of these reasons, what Ricknell (2020) outlines as “internet governance” is essentially a question of who is in charge of influencing perceptions and deciding reality. DeNardis (2014) defines internet governance as the “technical infrastructure necessary to keep the Internet operational and the enactment of substantive policies around these technologies” (as cited in Ricknell, 2020).


This raises the questions: what “substantive policies”? Who decides what the policy is, and where and when to enforce it?


“Moderation of content is not an easy task, involving a number of complex decisions and interpretations regarding, for example, original intent of the content, how it fits within the boundaries of current cultural taste, and public discourse in a situation where multiple competing value systems exist at the same time” (Gillespie, 2018; as cited in Ricknell, 2020).

The complexities here extend past who performs this moderation to what is being moderated: what are these so-called “community guidelines,” and who are they protecting?


If we bring in the subject of misinformation and fake news, we must then ask: who decides the truth? How do you distinguish deliberate misinformation from a joke or satire?


Founder and CEO of Facebook (now Meta) Mark Zuckerberg said in a 2018 interview that his platforms would not ban Alex Jones’s Infowars or remove false information because “everyone gets things wrong.”


Here is a short excerpt on his statements from a Washington Post article on the matter:


“Zuckerberg, who is Jewish, also said people who deny the Holocaust happened should be allowed to stay on the social network, too. “I find that deeply offensive. But at the end of the day, I don’t believe that our platform should take that down because I think there are things that different people get wrong,” he said. Zuckerberg added it’s difficult to understand a person’s intent. He said Facebook shouldn’t ban people from the network even if they spread false information on multiple occasions.”

Zuckerberg has since changed his stance, but the opinion was extreme and controversial considering his platforms were among the most widely used in the world, across demographics.


The trouble with the age of misinformation, or our current “post-truth” era, is that no matter who makes these decisions or what decisions are made, someone is going to feel victimized and alienated.


But does this mean we should just let misinformation run rampant on platforms? That can’t be the right answer. Conspiracy “theories” questioning whether the Holocaust or the horrific 2012 shooting at Sandy Hook Elementary really happened are extremely harmful to many people and entire communities. They perpetuate hate towards already vulnerable people.


As After Truth (the 2020 documentary film we reviewed) argued, if enough misinformation poisons our search engines and online platforms, this false information eventually becomes the presumed “truth” or knowledge that people in the future will subscribe to.


Even though it is a difficult and even controversial thing to do, letting misinformation run free on platforms for the sake of “free speech” is no way to keep those platforms enjoyable or their users safe. Those who argue that “free speech” must be protected over the truth need to understand the consequences of misinformation and how deep its harms run.



Research


Ricknell, E. (2020). Freedom of Expression and Alternatives for Internet Governance: Prospects and Pitfalls. Media and Communication (Lisboa), 8(4S1), 110–120. https://doi.org/10.17645/mac.v8i4.3299
