Genevieve Cheng

Why Fact-Checking on Social Media Sites Isn't Stopping Misinformation

Updated: Mar 22, 2023

According to Statista, as of January 2023, 59.4% of the world's population uses at least one social media site, while 64.4% uses the internet more generally.


This means that information on social media and across the internet is being seen and interacted with by over 5 billion people.


Social media companies are under increasing pressure from their users, governments, and other stakeholders to step up their fact-checking efforts and keep the level of misinformation and disinformation on their platforms to a minimum.


The question remains: how? Every month, over 2.9 billion people log in to Facebook, 2.2 billion to YouTube, 1.4 billion to Instagram, and 1.0 billion to TikTok. How do you fact-check all of the information streaming in and around these platforms?


Short answer? They don’t really.


Issues with Expanding Fact-Checking Efforts


Back in 2019, Wired published a full run-down of the issues Instagram was facing with its fact-checking efforts: too much information, and not enough time or labour to keep up with it.


Although Facebook (now Meta) announced that it would be expanding its fact-checking efforts on Instagram, Sara Harrison of Wired questioned how it would manage to do so, given that the existing system was already overwhelmed.


In her article, she explains why fact-checking is so labour- and time-intensive. To fact-check a post, you need to understand its context and do enough research to figure out not only whether it's right or wrong, but also, if it's wrong, what the truth actually is.


Harrison quotes Ben Nimmo, a senior fellow at the Atlantic Council's Digital Forensic Research Lab who studies disinformation on social media. He spoke about how complicated fact-checking and the control of information are on platforms like Facebook and Instagram, stating that these efforts cannot be restricted to dismantling individual posts or accounts; it is also necessary to find the root causes of these “calculated campaigns”.

Limitations of Twitter’s Community Notes


More recently, Twitter has been in the hot seat for social media misinformation, especially since Elon Musk became its CEO.


In 2021, then-CEO Jack Dorsey launched Birdwatch, Twitter’s version of fact-checking. Then in 2022, with Musk at the helm, the Community Notes function was released, effectively replacing Birdwatch.


This Bloomberg article explains how Community Notes works:


“[A Community Note] asks fact-checking volunteers to add more context — a “note” — to misleading or incorrect tweets. Other users can then vote on whether a note is helpful or not, and a machine-learning algorithm determines what notes should be shown more broadly on the site.”


So, instead of requiring manual labour like the Instagram system, Community Notes relies on its own users to identify and correct issues, and then uses machine learning to figure out when a Note is ready for the public to see.


Community Notes is likely well intentioned; however, it has many issues. Eric Fan, Rachael Dottle, and Kurt Wagner of Bloomberg summarize its major “blind spot”: Community Notes cannot correct tweets on divisive or complex topics.


Notes relies on users of varying backgrounds, perspectives, and political affiliations agreeing that a tweet’s content is wrong. For content on politically divisive topics, then, there is often not enough cross-partisan agreement to convince the algorithm to publish the Note.


Bloomberg cites several examples, including tweets on topics like abortion and posts from politicians on different sides of the political aisle. They all have the same issue: if Community Notes ‘corrections’ are only coming from the far left or the far right, the machine-learning algorithm will not share the Note with the public, because it isn’t seen as ‘correct’ or ‘fair’.
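To make that dynamic concrete, here is a toy sketch of the underlying “bridging” idea: a Note only surfaces when raters from more than one viewpoint cluster independently find it helpful. This is not the actual Community Notes algorithm (which relies on a more sophisticated machine-learning model), and the cluster labels, threshold, and function name are illustrative assumptions only.

```python
# Toy sketch of the "bridging" idea behind Community Notes. NOT the real
# algorithm; the cluster labels and threshold below are illustrative only.
from collections import defaultdict

def note_is_shown(ratings, threshold=0.7):
    """ratings: list of (viewpoint_cluster, found_helpful) pairs from raters."""
    votes = defaultdict(list)
    for cluster, helpful in ratings:
        votes[cluster].append(helpful)

    # A Note needs raters from at least two different viewpoint clusters...
    if len(votes) < 2:
        return False
    # ...and every cluster must independently rate it as mostly helpful.
    return all(sum(v) / len(v) >= threshold for v in votes.values())

# A correction endorsed by only one side never clears the bar:
one_sided = [("left", True)] * 10 + [("right", False)] * 10
print(note_is_shown(one_sided))        # False

# Cross-partisan agreement does:
cross_partisan = ([("left", True)] * 9 + [("left", False)] +
                  [("right", True)] * 8 + [("right", False)] * 2)
print(note_is_shown(cross_partisan))   # True
```

On a genuinely divisive topic, the first case is the norm: each side rates along partisan lines, the agreement threshold is never met, and the Note stays hidden.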


Therefore, only tweets on topics that fall closer to the centre of the political spectrum will be fact-checked by the Community Notes technology. That is better than nothing, but it doesn’t solve the issue we’ve discussed time and again: misinformation festering and spreading within echo chambers and deepening political polarization.


Political Partisanship and Selective Sharing


It’s human nature to want to surround ourselves with like-minded individuals. Constantly arguing or disagreeing with others gets tiring. That’s exactly why social media algorithms are trained to show you what you will like and enjoy, even though this creates echo chambers and (possibly) drives political polarization.


In a 2017 study, Shin and Thorson examined the state of fact-checking on social media, discussing something called ‘selective sharing.’ They define selective sharing as the practice of social media users strategically and knowingly sharing information that aligns with a particular viewpoint. Shin and Thorson observed this happening with fact-checking itself:


“We found that the fact-checking messages served as a tool for partisans to celebrate their own group and denigrate the opposing group. Fact checks that were advantageous to a candidate from the ingroup party were shared significantly more by the ingroup members than the outgroup members, a process of partisan selective sharing.”


Increased political polarization is a topic we’ve discussed previously, but this study’s demonstration of how fact-checking itself can be used as a political tool is intriguing. Shin and Thorson describe a “reinforcing spiral” created by the interactions between selective sharing, political polarization, and conservative perceptions of media bias.


Pre-existing Skepticism of Fact-Checking


Just as trust in the news media has declined, there is skepticism toward fact-checking. How do we know who is telling the truth? And ever since Kellyanne Conway came up with her “alternative facts” line, it’s only gotten worse.


Shin and Thorson also discuss this pre-existing hostility in their 2017 article. They specifically highlight the common disagreements between Democrats and Republicans in the States over the credibility and believability of fact-checking. If a fact-checker spends too much time focused on one party’s actions, viewers may see the fact-checking as biased and unfair, or even feel victimized or alienated.


This is what Twitter’s Community Notes function was trying to turn to its advantage: using voices from opposing partisan sides to generate fair fact-checking. As outlined above, though, this doesn’t work for most issues.


So, Where Do We Go From Here?


Most of this boils down to rebuilding media trust and encouraging individuals to think critically about the systems and people they interact with online. Fact-checking will always be contested, but it’s important that we don’t let it continue to spiral out of control to a point where there is no longer any value behind the words “true” or “false.”


Between the overload of information, increasing political polarization, and the lack of technologies available to help, spreading the message of critical thinking and media literacy is incredibly difficult. We certainly have our work cut out for us!


 

Research




Shin, J., & Thorson, K. (2017). Partisan Selective Sharing: The Biased Diffusion of Fact‐Checking Messages on Social Media. Journal of Communication, 67(2), 233–255. https://doi.org/10.1111/jcom.12284
