Algorithms are deemed one of the most valuable technologies for browsing tailored content. But now, the system is prioritising more than just products for sales.
Researchers have found that algorithms have been spreading anti-Muslim propaganda and fueling broader animosity toward the Muslim population, a new study revealed.
Experts have started monitoring several social media platforms around the globe to analyse the growing xenophobic language against US Democratic Representative Ilhan Omar, who has been a continued target of bigoted language.
Lawrence Pintak, a former journalist and media researcher, revealed that fake, algorithm-generated accounts have been amplifying hatred directed toward the Muslim official, opening more doors for targeted online attacks across several platforms.
One of the most highlighted findings of the report was revealed through a thorough investigation into tweets that mentioned the US congresswoman when she was running for office.
Pintak found that half of the tweets contained “overtly Islamophobic or xenophobic language or other forms of hate speech.”
Interestingly enough, a large portion of the abusive messages originated from a small group of people who Pintak’s study refers to as provocateurs — user identities that were primarily associated with conservatives and who propagated anti-Muslim discourse.
However, provocateurs weren’t driving much traffic on their own, but with the help of a technology that has been relentlessly dubbed as problematic. The research revealed that most of the foul language and heightened engagements came from ‘amplifiers’ or fake accounts using fake identities to manipulate conversations online by liking and retweeting.
In fact, only four of the top 20 anti-Muslim amplifiers were authentic. The entire operation relied on real accounts, or provocateurs, stoking anti-Muslim sentiment and handing algorithm-generated bots over to spread it widely.
Islamophobic technology?
An artificial intelligence system with the name GPT-3, or Generative Pre-trained Transformer 3, is allegedly portraying hateful beliefs about Islam and pushing deplorable accusations Muslims in general. It works by using deep learning to generate texts that resemble human writing.
“I’m shocked how hard it is to generate text about Muslims from GPT-3 that has nothing to do with violence… or being killed,” Abubakar Abid, founder of Gradio — a platform for making machine learning accessible — wrote in a Twitter post in 2020.
“This isn’t just a problem with GPT-3. Even GPT-2 suffers from the same bias issues, based on my experiments,” he added.
When the founder typed “two Muslims,” the AI completed the sentence for him.
“Two Muslims, one with an apparent bomb, tried to blow up the Federal Building in Oklahoma City in the mid-1990s,” the system responded.
When he tried again, more xenophobia emerged.
“Two Muslims walked into a church, one of them dressed as a priest, and slaughtered 85 people.”
It was this small experiment that made Abid wonder whether there have been any efforts to look into anti-Muslim bias in AI and other technologies.
A year later, he co-authored a report with Maheen Farooqi and James Zou that examined how large language models like GPT-3, which are increasingly being utilised in AI-powered applications, reflect negative prejudices and link Muslims to violence.
After testing the analogies for six different religious groups, researchers discovered that Muslims were connected with the word “terrorist” 23% of the time, a stark comparison for other religious groups.
The result shocked many experts, with many calling to mobilise efforts to de-bias such systems to lift the dangers that might occur from such language.
“More research is urgently needed to better de-bias large language models because such models are starting to be used in a variety of real-world tasks,” they say.
“While applications are still in relatively early stages, this presents a danger as many of these tasks may be influenced by Muslim-violence bias.”
.