The Problem With The Algorithm of Misinformation — Free Speech vs Censorship, Content Moderation & Platform Responsibility

Laurie Lo
18 min read · Apr 7, 2022

This is a written version of a video essay produced for Laurie’s Wandering Mind on YouTube.

“I’m more worried about the algorithm of misinformation than the purveyor of misinformation. — Absolutely. — Misinformation will always be out there, but if the algorithm drives people further and further down the rabbit hole, the f*** algorithm is the amplifier and the catalyst of extremism.” — Jon Stewart

Misinformation, “fake news”, propaganda, echo chambers… We are hearing more and more about the prevalence of “alternative facts” and polarized narratives in the media. All of this has become incredibly ubiquitous now that most people get their news from social media, a place ruled by algorithms that do an amazing job of keeping us within our own echo chambers and confirming our biases.

Navigating the new information ecosystem we have access to thanks to the internet is very challenging. The increasingly participatory nature of news sharing on social media platforms, along with the algorithmic designs of these platforms, which personalize information based on the likelihood of sustaining our attention and engagement, has given way to a very big problem: polarization and radicalization. Platforms have been heavily criticized recently for allowing inflammatory and divisive content to dominate narratives, often content that is strongly sensationalized, misleading, or even downright fabricated. Misinformation is now more rampant than ever on social media and leads to real-life consequences. Examples include Pizzagate, Russian interference in the 2016 election, vaccine misinformation, fake Ukraine footage, Russia’s internal propaganda, as well as violence in Sri Lanka, India, and Myanmar in the last few years.

A large call to action is being echoed by many concerned citizens, asking platforms to take responsibility and remove offending content, and sometimes even the users generating this content. Notably, we have seen Alex Jones being completely deplatformed after too many incidents, former president Donald Trump being banned from Twitter after the Capitol insurrection last year, and more recently talks about Spotify being asked to remove Joe Rogan from its platform after too many transgressions, like spreading misinformation about COVID, as well as making dehumanizing jokes and comments about trans people, female comedians, and Black people.

This then opens the door to a very important debate between the right to free speech and censorship. Should platforms be allowed to moderate content as they see fit? Are they violating the First Amendment? Who’s in charge and who gets to decide?

In today’s essay, we will explore what is considered misinformation, the misinformation policies of content-sharing platforms, how platforms are responsible for creating filter bubbles that keep us in echo chambers validating our preconceived biases, as well as some of the efforts and solutions available at different levels to deal with this issue plaguing our time.

Misinformation & Disinformation
So first, let’s take a look at what misinformation and disinformation actually entail. Dr. Joan Donovan, an expert in misinformation and online extremism at Harvard, recently spoke about it on The Problem with Jon Stewart podcast.

“When we talk about misinformation in this field, we really are talking about information that is shared where people don’t know its veracity or its accuracy.” Disinformation, on the other hand, is different: “That’s a campaign or an operation… […] purposeful, there is intent there. […] Disinformation, to put it simply, is either sharing inaccurate information for mostly political ends and sometimes financial ends […] with purpose and veracity and planning…”

On an episode of the Your Undivided Attention podcast titled “Beyond Fake News: Confronting Lethal Misinformation on Social Media Platforms”, Dr. Claire Wardle, co-founder of First Draft, a nonprofit organization focused on research and practice to address mis- and disinformation, breaks down the 7 types of misinformation and disinformation as follows: satire or parody, which has no intention to cause harm but has the potential to fool; false connection, when headlines aren’t supported by the content; misleading content, which uses information to frame a particular issue or person in a misleading way; false context, when real content is shared under a false context, as we have seen multiple times recently with TikTok users resharing old footage and passing it off as current footage from the war in Ukraine; imposter content, where genuine sources are impersonated; manipulated content, where real information or images are manipulated in an effort to deceive; and finally, fabricated content, which is completely false and constructed to deceive and cause harm.

Another very scary form of disinformation that could emerge very soon in a geopolitical context is the deepfake.

From Foreign Correspondent by ABC News In-depth:
“A deepfake is essentially a piece of synthetic or fake media that’s either been entirely generated by artificial intelligence, or manipulated by artificial intelligence. […] which, by the way, includes fake videos of real people saying and doing things they never did.” — Nina Schick, investigative journalist

“Deepfakes play into the hands of anybody, any state sponsor, any institution, that wants to create confusion or deceive.” — Mounir Ibrahim, former US Diplomat in Syria

“Deepfakes are a fundamental threat to democracy and to any civilization that relies on the truth. Deepfakes could very well undermine our sense of reality.” — Matthew Ferraro, former CIA officer and disinformation specialist.

These new technologies could very soon be co-opted by foreign governments and other entities with political motives to actively mislead the public and manipulate governments into taking actions they wouldn’t otherwise take.

Misinformation Policy on Content Sharing Platforms
Now that we have a better idea of what actually constitutes misinformation, it’s time to turn our eyes to what platforms can do about it.

The Communications Decency Act (CDA) and Section 230
First, let’s have a look at the Communications Decency Act (or CDA) and more specifically at Section 230, which is the part that currently allows platforms to moderate content without infringing on free speech. Section 230 specifically states that “No provider… shall be treated as the publisher or speaker of any information provided…”. Ultimately, it means that platforms are allowed to police the content people post on them however they want and are not liable for what people say there. They also benefit from something called the Good Samaritan clause, which states that any tech company can take down anything on their sites as long as it’s “…taken down in good faith to restrict access to… material that the provider or user considers to be obscene, lewd, lascivious, filthy…”. It’s important to remember that the CDA was introduced in 1996, so a lot of people are calling for a reform that would reflect a more current understanding of how platforms operate and their actual ability to regulate user-generated content.

The Whole Spotify & Joe Rogan Saga
Let’s use a current example by analyzing the recent Joe Rogan debacle with his exclusive Spotify podcast. The whole situation has been covered from every angle by everyone and their mothers, but here’s the gist of what happened.

The latest scandal Joe Rogan has found himself in concerns misinformation about the pandemic and vaccines, and while multiple people have called out Spotify for not removing the misleading content, artists like Neil Young have tried to force the platform’s hand by threatening to leave. Other creators with ongoing commitments to the platform also found themselves wrestling with the idea of sharing a platform with someone like Joe Rogan, whose values conflict with their own, notably Brené Brown, who paused her two exclusive shows with Spotify while asking for more transparency regarding its misinformation policy.

On January 30th, 2022, Spotify publicly released its platform rules, but this did little to ease people’s concerns, as the rules were very vague and not particularly proactive.

Content Moderation Efforts
Platforms have long been moderating content and removing content that does not adhere to community guidelines. But with the sheer amount of content being produced every second of every day, it’s impossible for platforms to proactively examine each new piece of content for misinformation and other community guideline violations.

CNBC has a video covering extensively how platforms handle content moderation and its impact on content moderators, who are largely based overseas and can suffer a great deal mentally from the offending content they are exposed to on a daily basis. I will link it down below.

Given the need to account for cultural nuance and intent, it’s impossible to rely solely on algorithms to moderate these platforms, but platforms are seeking ways to reduce the toll on their moderators.

Misinformation in Immigrant Communities and Non-English-Speaking Countries
One of the biggest problems platforms face in terms of content moderation involves content produced in other languages. According to The Wall Street Journal’s Facebook Files report, “more than 90% of monthly users are now outside the U.S. and Canada.” And in terms of content monitoring: “In 2020, Facebook employees and contractors spent more than 3.2 million hours searching out and labeling or, in some cases, taking down information the company concluded was false or misleading, the documents show. Only 13% of those hours were spent working on content from outside the U.S.”

In an episode of Last Week Tonight with John Oliver covering misinformation, particularly as it pertains to immigrant diaspora communities, they highlighted the lack of content moderation of non-English content. The same content could be labeled as containing misinformation in English, yet the same image would not carry the same warning in Spanish. They also highlighted the prevalence of misinformation spreading through private messaging apps, which are a preferred tool of communication in many immigrant communities, along with the lack of fact-checking tools available for non-English content. I will link that video down below as well.

Platforms Perpetuating Echo Chambers & Filter Bubbles
Let’s turn our attention to platforms’ responsibility in perpetuating echo chambers and the involvement of algorithms in propagating misinformation.

The Oxford Dictionary defines an echo chamber as “an environment in which somebody encounters only opinions and beliefs similar to their own, and does not have to consider alternatives.” The term has become quite pejorative lately, so an alternative expression was coined by Eli Pariser in 2011: the “filter bubble.”

A filter bubble is defined as “a personal ecosystem of information that’s been catered by these algorithms to who they think you are.” Ultimately, echo chambers focus on the nature of the information, whereas filter bubbles are more concerned with what creates the boundaries: the algorithms. The formation of echo chambers and filter bubbles has eroded our capacity to talk to each other and led us down a path where we no longer share the same reality, but rather have our own polarized versions of the truth.

Algorithmic Manipulation
Thanks to documentaries like The Social Dilemma, we understand a bit better how social media platforms’ algorithms function. Most platforms’ business models are based on capturing attention and optimizing engagement in order to maximize profits. To deliberately manipulate our attention, algorithms are designed to tap into our emotions, our deepest fears and vulnerabilities, and to personalize recommendations. And you know what pushes our buttons like nothing else? Sensational headlines, controversial titles, and outrageous claims. A study of Twitter found that each additional word of emotional outrage added to a tweet increased the retweet rate by 17%.

This tool can obviously be co-opted and weaponized to drive extreme content and radicalization. Using microtargeting tools and users’ collected data points, platforms can predict with surprising accuracy how we are likely to respond, and because their main objective is to keep us on the platform, they are likely to push us towards news that will confirm our biases and validate the realities we have constructed for ourselves. Or, technically, the realities they helped create.
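
To make the engagement-optimization logic concrete, here is a minimal, purely illustrative sketch of how a feed could be ranked when the only objective is engagement. Every name, weight, and signal in it is hypothetical (this is not any platform’s actual code); the point is simply that nothing in such a scoring function ever asks whether a piece of content is true.

```python
# Purely illustrative sketch: a hypothetical engagement-based feed ranking.
# All field names and weights are invented for demonstration purposes only.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_click_prob: float   # model's guess that this user clicks (0-1)
    predicted_share_prob: float   # model's guess that this user reshares (0-1)
    outrage_word_count: int       # count of emotionally charged words

def engagement_score(post: Post) -> float:
    """Score a post by expected engagement; truthfulness is never considered."""
    base = 0.6 * post.predicted_click_prob + 0.4 * post.predicted_share_prob
    # Emotionally charged language gets rewarded because it drives engagement
    # (the essay cites roughly 17% more retweets per added word of outrage).
    return base * (1.0 + 0.17 * post.outrage_word_count)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first; accuracy plays no role in the order.
    return sorted(posts, key=engagement_score, reverse=True)
```

In a toy setup like this, a fabricated but outrage-laden post will routinely outrank a sober, accurate one, which is exactly the amplification dynamic the essay is describing.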

Cambridge Analytica and Other Scandals
Netflix’s documentary “The Great Hack” highlighted a striking instance of this perversion of social media technologies: Cambridge Analytica used it to polarize certain groups of people in an effort to influence the outcome of the 2016 U.S. election and the Brexit campaign in the U.K., among other international political events. The firm essentially used data points collected by Facebook to convince what it called the “persuadables” to vote whichever way it chose to implicitly advertise to them.

Furthermore, as we have seen through The Wall Street Journal’s Facebook Files reports and the congressional hearings involving Facebook and its CEO last year, among many other incidents, Facebook was heavily criticized for not shutting down “Stop the Steal” groups and other more extreme content that incentivized rioters to gather and attack the U.S. Capitol on January 6th of last year. Rioters shared plans on social media platforms like Facebook, Twitter, and YouTube and also extensively documented the events of that day. Algorithms may even have swelled their ranks by automatically recommending their pages and groups.

State Media Narratives
Another great example of the weaponization of social media is currently happening to Nobel Peace Prize recipient, journalist, and Rappler co-founder and CEO Maria Ressa, who has been persecuted by the government of the Philippines for being a prominent critic of its president, Rodrigo Duterte. Because a lot of the media in the Philippines manipulates the narrative surrounding Duterte’s presidency, Maria and her team have been working to provide the Filipino public with trusted information and to dismantle the government’s propaganda campaigns. She believes there can be no integrity of elections without integrity of information, as she told Kara Swisher in an episode of the podcast Sway. The Frontline documentary on PBS about Maria and Rappler, available to watch on YouTube, is a great way to learn more about this story.

As we are seeing again with the propaganda and disinformation campaigns happening all over Russia as we speak, governments can co-opt social media platforms and completely overtake the dominant narrative. This can leave an entire population unable to access reliable, trustworthy information and can often mislead them into submission.

Solving The Misinformation Crisis & Desensationalizing the News
There are multiple ways for platforms to move towards a more sustainable information ecosystem and reinstate trust in news. There are also a few things that we can do as individuals and as a society to help solve this misinformation crisis. Government intervention should also be on the table.

Platform Accountability
First, let’s look at what platforms can do. So far, relying in good faith on platforms to regulate themselves hasn’t worked, and trusting that they could act as reliable gatekeepers has been costly, but it’s not too late for them to start taking more responsibility for their involvement in the exploitation of misleading content for profit.

A good way to start is to adopt the Santa Clara Principles, transparency guidelines designed to help tech companies be more accountable in their content moderation. This commitment to issuing transparency reports allows tech companies to effectively communicate what content they are taking down, when, and why, and gives users access to an appeal process so that meaningful due process can occur.

Another way platforms can start to police misinformation more effectively is through the use of labels and warnings. We have already seen most major platforms include a COVID information label on any content that mentions the virus. In fact, you are most likely going to see one below this video. YouTube, Instagram, Facebook, Twitter, and just recently Spotify have started including them. Facebook has been introducing labels that actually state the information is or could be false, but most other platforms are simply putting up a disclaimer pointing to a government website for COVID information without actually fact-checking whether the information contained in the piece of content they are labeling has any validity. We also need more issues to be labeled, like elections, conflicts, and wars.

There has also been research on the efficacy of contextual vs. interstitial warnings, which I have read through for the purpose of this video. It found that users typically ignore contextual warnings, which are labels that do not interrupt the user’s experience or compel action. Alternatively, interstitial warnings, which interrupt the user and require input before proceeding to the content the user is trying to access, similar to the warnings your browser often gives you when you are at risk of accessing a website containing malware or phishing, are far more effective. Interstitial warnings were most effective when they conveyed specific information as well as a risk of harm.

Lastly, we could also hope for platforms to alter their algorithms to propagate misinformation less, but as one of the Facebook Files included in The Wall Street Journal’s reports shows, Facebook’s own engineers have doubts that AI can do a good enough job at the moment.

Government Involvement
As I mentioned earlier, the U.S. government could reform Section 230 of the Communications Decency Act to render platforms more accountable, but unfortunately this does not seem to be a bipartisan issue, with one side of the aisle demanding less platform moderation, claiming it violates the First Amendment, and the other side pushing for stricter oversight of the power of platforms, so I doubt this is going to get anywhere any time soon. Yay for constant congressional gridlock!

Congresswoman Anna G. Eshoo (CA-18) and Congressman Tom Malinowski (NJ-7) did introduce the Protecting Americans from Dangerous Algorithms Act in 2020. It is legislation to hold large social media platforms accountable for their algorithmic amplification of harmful, radicalizing content that leads to offline violence. Rep. Tom Malinowski said: “What we’re trying to incentivize is a change in the design of the social networks […] A change in how they use algorithms to amplify content so that we have less spread of extremism, conspiracy theories, inflammatory content that is designed solely to maximize engagement on the platforms.”

Ultimately, the problem with misinformation is a global problem, so even though the big tech companies and social media companies are based in the United States, there need to be international efforts to effectively change course in our information-sharing models at a global scale.

At the Individual Level
In Claire Wardle’s TED talk, “How you can help transform the internet into a place of trust”, she says that we, the people who use these technologies every day, are the ones in the best position to effect change. She suggests modeling platform transparency and feedback on the way Wikipedia has been operating; developing some kind of centralized, anonymized database for research, where people could donate their social data to science and gain access to what different user experiences with personalized feeds look like; and finally, building a coordinated response in which academia, civil society, activist groups, and newsrooms all work on solving different pieces of the problem, to create less disjointed efforts and more cohesive solutions.

Individually, we should also be more accountable for evaluating our own biases and sharing news more responsibly. Organizations like Ground News have created incredible platforms that help offer different perspectives and identify preexisting biases and interested parties in a given news story. Ground News distills the news and gives a bias distribution of how center-, left-, or right-leaning an article is, a factuality rating, as well as the vested parties in the publication of the article, or the owners of the site or paper. It also links to a lot of other articles covering the same story or topic to give you a better-rounded perspective of the subject. Ground News also has a really cool bias-checking tool where you can evaluate your own biases. I encourage you to try it out. They also have a browser extension and a Blindspotter tool that lets you find out the ratings of specific accounts you follow. And no, this is not sponsored by Ground News, but it would be super cool if it was!!

There are also a lot of organizations working on bridging the information gap between polarized groups: Braver Angels, a nonprofit organization dedicated to political depolarization that runs workshops, debates, and other events where “red” and “blue” participants attempt to better understand one another’s positions and discover their shared values; Courageous Conversation, an organization aiming to help individuals and organizations address persistent racial disparities intentionally, explicitly, and comprehensively through seminars, consultations, and coaching; and First Draft, co-founded by Claire Wardle, whom we talked about earlier, which is working to tackle misinformation by collaborating with journalists and partner organizations and empowering people with knowledge, understanding, and tools. There are also lots of English-language fact-checking resources like PolitiFact, FactCheck.org, and Snopes.

Take advantage of these resources and if you have any other trusted resources, please leave them in the comments below and I will add them to the list of additional resources I have compiled in the description box.

Final Thoughts
Lastly, I would like to share a random idea my husband and I have been talking about, which would hypothetically be an amazing way to combat misinformation. We came up with the idea of a one-stop shop dedicated to summarizing scientific research papers and making them more digestible and accessible to everyday people. We recognize that research and data are often hard and expensive to access without academic credentials, and even harder to understand and digest for regular people, so we hope someday there will be a place on the internet that summarizes all of that and makes it accessible for everyone. There are already a couple of organizations doing this for specific subjects, like NutritionFacts.org, which covers scientific literature on nutrition, and EvidenceBasedBirth.com, which covers research and data on pregnancy and birth-related topics. Combating misinformation through sincere accessibility would be amazing. If we had an organization, a nonprofit platform, with an easily searchable database where you could find summarized conclusions, methods and their reliability, and clear disclosure of biases, vested interests, and funding sources, it could totally revolutionize how we understand information. You can see I’m very enthused with this idea, but I will personally never see it through. Hopefully someone else will though, that would be so cool! If you decide to roll with our idea, remember to give us credit and maybe a couple of shares in the company (joke)!

Ultimately, we all need to recognize that there is a huge issue with misinformation, and one of the biggest ways this misinformation crisis is amplified is through platforms’ algorithms, which don’t discriminate between real and false information when making personalized recommendations for users. Censoring the purveyors of misinformation can only do so much and can also create a lot more scandal and conflict. As Brené Brown said: “Both censorship and misinformation are threats to public health and democracy. Rather than falling prey to believing that we have to tolerate one to protect against the other, our collective well-being is best served when we approach debates and discourse with curiosity, critical thinking, and a healthy skepticism of false dichotomies.”

I will leave you with that. Thank you for watching and please share your thoughts in the comments below!

Additional Resources

Sources
