Big Question: Should Facebook Reserve the Right to Censor?

Facebook is a powerful company with products used by billions of people worldwide. At the same time, it is a for-profit, publicly traded organization. That raises the question: should Facebook have the right to censor potentially harmful content, whether on moral grounds or in response to stakeholder pressure?

Let me preface this by saying that I am asking the question in the hope of starting a discussion. Censorship by Facebook (or, more broadly, by any major media company) is obviously a topic that can generate a significant amount of heated debate. Censorship is often a question of one’s own political and social views – and a topic that is rarely discussed in modern democracies and republics – so every person might have a different opinion of what counts as censorable content, if anything. That being said, the angle I am curious about is the removal of content that has the potential to harm or to spread blatant lies. (Not abusive content, or anything that incites violence, which should be removed regardless of the political or personal motivation behind it.) For example, anti-vaccination or Flat-Earth nonsense. (And yes, it’s nonsense, regardless of what idiotic link someone has decided counts as truth. A YouTube account and a video camera do not make one an investigative reporter, nor do they confer any scientific knowledge whatsoever.) Facebook plays an important role in our daily lives, so should it be responsible for going beyond the scope of abuse and into the realm of falsehoods?

Where is this coming from?

Recently, I’ve read a number of articles and editorials about the gargantuan power that Facebook holds in terms of swaying public opinion. Just look at the recent revelations about Russian meddling and the spread of fake stories and content during the 2016 U.S. Presidential race. Facebook has taken major steps to block this kind of interference from happening again, and the shuttering of abusive accounts has become much more prevalent in the last year, but a Washington Post editorial I came across not too long ago posed a new question: does Facebook have a moral obligation to stop unverified and potentially harmful content from going viral?

The question was posed in reference to the huge rise in (widely disproven and massively moronic) Flat-Earth conspiracies, which have moved closer to the mainstream communication channels of today despite the fact that no one should be wasting time on the topic. A large share of the blame in that editorial was placed on Facebook’s shoulders. The multi-faceted algorithms designed to (objectively) surface content being discussed within your community, your region, and the world at large have helped propel false stories into the news feeds of the average user. For those who are easily swayed, or who anchor to the first thing they see, this creates an environment where debunked or even ridiculous (and potentially harmful) theories can thrive and spread.

Should Facebook go a step further?

This is where we get into the tricky grey area. Recent history has led us to believe that censorship is wrong and that the Internet is an open space where anyone can enjoy or debate the points of view of all those who wish to share, without interference from a higher power (Facebook, in this case). That is a pretty common misconception. The Internet is not as open and free as it may seem, and while we can all access and use platforms like Facebook, WordPress and Google for free, these are all companies with agendas that can do whatever they like when it comes to how their platforms are used.

Following the protests in Charlottesville in 2017, the Nazi hate-site The Daily Stormer lost access to its web hosting platform (years too late, quite frankly) and had to search around the globe for a host willing to put profit ahead of principle. When we agree to Terms & Conditions (which we never read), we are agreeing to follow the code of conduct set forth by the platform we are using. After seeing what happened in 2016, when those terms and conditions were somewhat lax in terms of political rhetoric, Facebook changed its policies to prevent that kind of damage from being done to the democratic process of the United States again. They could easily make the same adjustments for the kind of unverified content that has the potential to cause harm or spread falsehoods, like claims linking autism and vaccinations. That content has the potential to seriously harm the public, and it shouldn’t find its way into the news feeds of those already inclined to believe it.

I suppose the bigger question is this: if Facebook starts with one subjective decision about what is considered ‘harmful’, where does it stop? We’re in uncharted waters that, frankly, move too quickly to make sweeping changes overnight.

What are your thoughts?

