The Facebook Papers may be the biggest crisis the company has faced to date.
Facebook’s ongoing scandal deepened this week with the release of a collection of reports dubbed The Facebook Papers, based on leaked internal documents that accuse the social media giant of contributing to several global issues.
The documents were provided to the United States Securities and Exchange Commission by Facebook whistleblower Frances Haugen.
While there’s a tonne of material in the documents, here are the top revelations from The Facebook Papers.
Human trafficking
An internal report found that “FB profiles, IG Profiles, Pages, Messenger and WhatsApp” are all used for the “recruitment, facilitation and exploitation” of human trafficking.
Facebook found that at least $152,000 was spent on related advertisements on its platforms, with some targeting men in Dubai. This included a post that advertised the sale of a 38-year-old Indian woman for $350.
When the BBC shared the results of an investigation showing that human trafficking was taking place on Facebook, the company removed the offending posts. However, it did little else to limit such activity on its platform.
After the revelations first emerged, Apple threatened to remove the Facebook and Instagram apps from its App Store unless the company implemented policies to restrict this exploitation.
Internal documents show that Facebook believed an App Store removal would have “potentially severe consequences to the business”. With its revenue at risk, the company then took swift action to tackle human trafficking.
Spreading misinformation
Even studies conducted by Facebook itself concluded that the social network manipulates users, leading them to sources of misinformation and conspiracy theories.
A test account created in India followed only profiles recommended by Facebook itself. Within three weeks, fake news and photos of beheadings filled the account’s news feed.
“I’ve seen more images of dead people in the past 3 weeks than I’ve seen in my entire life total,” said one Facebook employee on an internal system.
In the United States, Facebook’s researchers created a fake account for a 41-year-old conservative named Carol Smith. The account followed verified conservative pages such as Fox News. Within two days, Facebook recommended that she follow a QAnon page, part of a conspiracy theory movement spreading misinformation in the country.
Both cases highlight the dangerous effect Facebook has on its users, leading them down a trail of pages that ends in misinformation. Even if the end result is unintentional, it is Facebook’s responsibility to monitor its platform and tune its algorithm to behave more responsibly.
‘Immune’ VIP users
In 2014, Facebook’s content moderation system took down Rihanna’s Instagram account after she posted a semi-nude photo of herself on the cover of a magazine. This made headlines, and Instagram restored Rihanna’s account.
Facebook wanted to avoid such incidents in the future, so it created a whitelist of users whom employees could not easily sanction for posting violating content. The idea was to add a few extra steps before an employee, or an automated system, could take action against a whitelisted account.
Most Facebook employees had the power to add anyone to the list, which made it harder for the company to regulate who was on it. By 2020, 5.8 million users could post whatever they wanted on Facebook’s platforms without facing consequences.
Highly influential users were given an additional tier of super immunity, leaving them all but untouchable by content moderation systems.
In 2019, footballer Neymar shared a live stream revealing the name and nude pictures of a woman who had accused him of rape. This violated Facebook’s policies, falling under its “non-consensual nudity” category, which is banned on the platform. A Facebook employee tried to remove the infringing posts but could not: they lacked the authority to delete content posted by a whitelisted user.
Even after Facebook eventually took the post down, it decided not to remove Neymar’s account because of his fame: taking down his account would have harmed Instagram’s overall business.
Divisiveness, violence and lack of moderation
A lot more has come out about Facebook in the wake of The Facebook Papers.
A 2018 update to the news feed algorithm favoured content that gets interactions over content that doesn’t. Soon after, Facebook realised that the change was promoting hateful and divisive posts.
An internal investigation titled “Does Facebook reward outrage” found that posts attracting hateful comments drew more clicks. Because the algorithm promotes posts with more clicks and interactions, it ends up amplifying the most hateful and divisive content, as the sketch below illustrates. Facebook nevertheless did not update its algorithm to de-prioritise hateful content.
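To make that feedback loop concrete, here’s a minimal sketch in Python. Everything in it is hypothetical: the field names, weights and `outrage_score` signal are illustrative assumptions, not anything taken from Facebook’s systems or the leaked documents. A purely engagement-weighted score surfaces outrage bait first, while the same score tempered by a toxicity signal (the kind of tuning the article argues Facebook declined to make) does not.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int           # hypothetical engagement counters
    comments: int
    shares: int
    outrage_score: float  # 0..1, e.g. from a toxicity classifier (assumed)

def engagement_rank(post: Post) -> float:
    # Naive engagement-weighted score: whatever draws clicks and
    # interactions rises, outrage bait included.
    return post.clicks + 2 * post.comments + 3 * post.shares

def tempered_rank(post: Post, penalty: float = 0.8) -> float:
    # Same signal, but down-weighted for posts a classifier flags
    # as likely hateful or divisive.
    return engagement_rank(post) * (1 - penalty * post.outrage_score)

feed = [
    Post("Calm explainer", clicks=120, comments=30, shares=10, outrage_score=0.05),
    Post("Outrage bait", clicks=200, comments=90, shares=40, outrage_score=0.9),
]

print(max(feed, key=engagement_rank).text)  # Outrage bait
print(max(feed, key=tempered_rank).text)    # Calm explainer
```

The point of the `penalty` knob is that nothing about the engagement signal itself has to change; the platform simply chooses how much weight outrage carries in the final ranking.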
Read also: ‘Toxic’ Instagram buried study that highlighted harmful impact on teens
Facebook’s internal analysis also found that the social network incites violence in regions such as Myanmar and Ethiopia. A former Facebook data scientist said she felt she had “blood on her hands” after working at the social media company.
The documents also revealed that Facebook did not employ sufficient moderators across all global regions. CNN referred to these gaps as “language blind spots”: Facebook’s moderators could not always understand the language and context of posts shared on the platform.
Facebook’s response
In a recent conference call with shareholders, Facebook CEO Mark Zuckerberg addressed the leaks, saying he believes large companies should face scrutiny. However, he claimed that “what we’re seeing is a coordinated effort to selectively use leaked documents to paint a false picture of our company”.
He then argued that most of these issues are not caused by Facebook or social media but are problems that already existed within society.
He gave the example of how America has faced polarisation since before he was born, while other countries with heavy social media usage do not face the same polarisation America does. From this, he concludes that social media doesn’t create these issues and cannot be the sole tool used to solve them.
There is merit to his argument. Twitter’s CEO, Jack Dorsey, made similar remarks in the past. If social media is just amplifying our voices, then is the problem with it or with us? Is it Facebook’s fault that we are more likely to interact with a negative headline than a positive one?
It’s not that simple, though. Yes, polarisation and hatred existed in society long before social media ever did. However, platforms such as Facebook nourish and reward this hatred.
Facebook could tune its algorithm to minimise exposure to hatred, but it refuses to do so to protect its revenue. It could make its apps less addictive and stop aggressively recommending pages based purely on engagement, a practice that leads people to sources of misinformation, but the company prioritises engagement and revenue instead.
Facebook could also invest more in content moderation and act proactively when it spots dangers such as human trafficking or teenagers self-harming, but… you get the picture.
While many companies prioritise increasing profits, Facebook’s rush towards money has led to depression, murder and suicide among its users. It’s no wonder a former employee believes she has blood on her hands from her time at Facebook.
Read also: ‘Moral obligation’: Google, Amazon employees mobilise to protest Israeli contract
There’s no denying that Facebook’s suite of products has kept us connected to family and friends more than any other company’s. If we got no value out of Facebook, it would simply cease to exist. However, while we use its products to stay connected, the platform irresponsibly exposes us and others to risks we never signed up for.
We don’t expect Facebook to solve all of society’s issues, but solving some of its own without requiring a financial incentive would be a good start.