The racism sparked a debate on how governments should regulate social media companies.
Last week, the world watched as England and Italy came together for the Euro 2020 final. The game stretched into penalties, with Italy eventually beating England to win the cup. While many English fans were left disappointed, the behaviour of some of their peers was far more disappointing than the match’s outcome.
Online footage showed fans trashing streets, storming the stadium, and breaking out into fights. The scenes led many in Qatar to worry about welcoming English fans to the country for the World Cup next year.
Worst of all, black English football players faced immense racism after the game, sparking criticism from social media users worldwide and drawing condemnation from senior officials and governing bodies, including UEFA.
“Those who directed racist abuse at some of the players, I say shame on you and I hope you crawl back under the rock from which you emerged,” British Prime Minister Boris Johnson said.

As the backlash continued, Johnson announced a plan to ban racist fans from attending games. A law currently allows courts to deny stadium entry to fans who make racist remarks at matches, but the ban does not extend to online racism.
Is social media responsible?
The outpouring of racism against England’s black players online triggered a global conversation on how to tackle the issue in today’s world, with many questioning whether social media giants like Facebook and Twitter are taking adequate steps to extinguish hate on their apps.
Both Facebook and Twitter claim they promptly remove racist comments online, with Twitter saying it has deleted over 1,000 racist tweets targeting the English players in the latest round of abuse.
Many were quick to point out that social media companies have the tools to combat racism, but that they simply weren’t doing enough.
On Twitter, users asked why Instagram can’t flag racist comments the same way it flags any posts that mention Covid-19.
In recent months, Instagram has gone above and beyond to detect posts and stories mentioning coronavirus, but has not extended this technology to detect racist posts too.
The popular photo-sharing app uses Optical Character Recognition (OCR), a tool that allows it to scan images for text. This essentially lets it detect words related to coronavirus even if they weren’t typed by the user, but rather, are present in the picture.
This technology can be applied to detect racist posts and immediately mark them with a warning, though Instagram has not implemented such protective measures.
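As a rough illustration, the matching step that would follow OCR text extraction might look like the sketch below. The term list and function names are illustrative assumptions, not Instagram’s actual implementation; in a real pipeline, the input text would come from an OCR engine such as Tesseract run over the uploaded image.

```python
# Illustrative sketch: flagging OCR-extracted text against a term list.
# The terms and matching logic are assumptions, not Instagram's code.

FLAGGED_TERMS = {"covid-19", "coronavirus"}  # could equally hold abusive terms

def flag_extracted_text(text: str, terms=FLAGGED_TERMS) -> bool:
    """Return True if any flagged term appears in the OCR output."""
    lowered = text.lower()
    return any(term in lowered for term in terms)

# The same check works whether `text` was typed by the user or
# recovered from the image itself, which is the point of using OCR.
print(flag_extracted_text("Stay safe during Covid-19"))  # True
```

The design choice worth noting is that the detection logic is independent of where the text came from, so extending it from pandemic-related posts to abusive ones is a matter of policy, not technology.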
Instagram’s failure to remove racist comments does not end there. Many users pointed out that they had reported abusive messages to the platform, only for its automated moderation to find that the content did not go “against its community guidelines”.
Instead of seeing reported content deleted from the app, users are regularly left frustrated with this automated response:
“Our technology has found that this comment probably doesn’t go against our Community Guidelines. Our technology isn’t perfect, and we’re constantly working to make it better.”
While Instagram admits that its technology isn’t perfect, it does not have human moderators review whether user reports are valid. As such, racism remains alive on the platform.
Twitter, on the other hand, has taken measures in the past few months to limit potentially abusive content from appearing on its platform. Instagram can draw inspiration from two of those changes.
- Prompt before harmful content

Twitter now shows users a prompt before they post content flagged as offensive, giving them a chance to reconsider before tweeting. Instagram could implement a similar feature when it detects a potentially abusive comment.
Research from Twitter found that when presented with this prompt, 34% of users edited or deleted their tweet. It also found that 11% of users presented with this prompt tweeted less offensive content in the future.
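The flow described above can be sketched as follows. Twitter’s actual classifier is not public, so the keyword heuristic, term list, and threshold here are all stand-in assumptions.

```python
# Sketch of a pre-post prompt: score the draft, and if it crosses a
# threshold, ask the user to reconsider. The scorer is a toy keyword
# heuristic, not Twitter's actual model.

OFFENSIVE_TERMS = {"idiot", "trash", "loser"}  # illustrative only

def toxicity_score(text: str) -> float:
    """Fraction of words that match the offensive-term list."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in OFFENSIVE_TERMS for w in words) / len(words)

def should_prompt(text: str, threshold: float = 0.1) -> bool:
    """Decide whether to show a 'want to review this?' prompt."""
    return toxicity_score(text) >= threshold
```

Note that the prompt is advisory rather than a block: the user can still post, which matches the statistics above showing that roughly a third of prompted users chose to edit or delete on their own.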
- Hide flagged comments
On Twitter, some replies to a tweet are hidden behind a “Show more replies” button. The wording does not imply the content is abusive, in case Twitter’s systems got it wrong, but it still keeps that content out of the main conversation.
Instagram could take inspiration from Twitter here, allowing the platform to hide potentially abusive content without necessarily marking it as such.
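In code, the approach amounts to partitioning replies rather than deleting them. `is_flagged` below is an assumed classifier callback; the rest is a minimal sketch of the idea, not either platform’s implementation.

```python
# Sketch: split replies into a visible list and a hidden list that sits
# behind a "Show more replies" control. Nothing is deleted or labelled
# abusive, so a misfire by the classifier is low-cost.

def split_replies(replies, is_flagged):
    """Partition replies by the (assumed) classifier's verdict."""
    visible = [r for r in replies if not is_flagged(r)]
    hidden = [r for r in replies if is_flagged(r)]
    return visible, hidden

replies = ["well played", "absolute rubbish team"]
visible, hidden = split_replies(replies, lambda r: "rubbish" in r)
```

Because hidden replies remain one tap away, a false positive costs the user a click rather than a wrongly deleted post, which is what makes this gentler than outright removal.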
One more option Instagram should consider is hiring more human moderators. While the app often blames algorithms for its moderation failures, social media users are now demanding that the platform take more responsibility for deleting abusive content online.
Should social media require an ID for verification?
Meanwhile, a more controversial option has been suggested to tackle abuse.
A petition launched in the UK months ago called for verified IDs to open social media accounts.
Since the latest round of abuse in the aftermath of the Euros, the petition has gained traction, garnering over 600,000 signatures. While such measures would likely reduce online abuse, they would also drastically reduce users’ privacy online.
Responding to the petition in May, the UK’s government said the benefits do not outweigh the harm.
“The government recognises concerns linked to anonymity online, which can sometimes be exploited by bad actors seeking to engage in harmful activity. However, restricting all users’ right to anonymity, by introducing compulsory user verification for social media, could disproportionately impact users who rely on anonymity to protect their identity.”
It’s worth noting that this response was made two months ago, and recent public pressure may compel the government to change its position.
Do you think governments should have more control over social media companies? Is it reasonable to require users to verify themselves with an ID, or is that too extreme? Let us know in the comments.