A recent study has revealed an alarming issue with YouTube’s algorithm, which recommends gun-related videos to children as young as 9 years old, AP News reports.
To examine the connection between YouTube videos and gun violence, researchers from a nonprofit organisation specialising in social media studies conducted a comprehensive study.
They established YouTube accounts that emulated the behavior of typical boys in the United States with interests in video games and related content.
The study aimed to examine how the platform’s algorithm responded to and influenced the viewing preferences of young users.
Concerning test results
Two simulated nine-year-old accounts were created, both expressing a preference for video games, particularly first-person shooter games.
The accounts were virtually identical, except for their approach to YouTube’s recommended videos. While one account clicked on the suggested videos, the other account disregarded the platform’s recommendations.
The account that actively engaged with YouTube’s suggestions was soon inundated with disturbing content.
The recommended videos included graphic depictions of school shootings, tutorials on tactical gun training and step-by-step instructions on converting firearms into fully automatic weapons.
Shockingly, one video showed a young girl of elementary school age handling a handgun, while another showcased an individual firing a .50 calibre gun at a dummy head filled with lifelike blood and brains.
It is important to note that many of these videos violated YouTube’s own policies regarding violent or gory content.
“Video games are one of the most popular activities for kids. You can play a game like Call of Duty without ending up at a gun shop — but YouTube is taking them there,” said Katie Paul, director of the Tech Transparency Project, the research group that published its findings.
“It’s not the video games, it’s not the kids. It’s the algorithms.”
Criticisms not new
YouTube, along with TikTok, is a highly popular online platform among children and teenagers. However, both platforms have faced criticism in the past for hosting and sometimes promoting videos that endorse gun violence, eating disorders and self-harm.
Social media critics have also raised concerns about the connection between social media usage, radicalisation and real-world violence.
Tragically, many individuals responsible for recent mass shootings around the world have utilised social media and video streaming platforms to glorify violence or even livestream their attacks.
For instance, the perpetrator of the 2018 school shooting in Parkland, Florida, which claimed the lives of 17 people, had posted disturbing messages on YouTube stating his intention to harm others and become a professional school shooter.
Similarly, the neo-Nazi gunman involved in the recent Dallas-area shopping centre shooting had a YouTube account featuring videos on assembling rifles, content about the serial killer Jeffrey Dahmer and a clip depicting a school shooting scene from a television show.
Social media giants contest claims
A spokeswoman for YouTube defended the platform’s protections for children, noting that it requires users under 17 to get a parent’s permission before using the site and that accounts for users younger than 13 are linked to a parental account.
“We offer a number of options for younger viewers which are designed to create a safer experience for tweens and teens,” the company wrote in a statement.
While YouTube has removed some of the videos flagged by researchers at the Tech Transparency Project, other problematic content remains accessible on the platform.
Many major technology companies rely on automated systems to detect and remove content that violates their policies. However, the project’s report highlights the need for increased investments in content moderation.
“Big Tech platforms like TikTok have chosen their profits, their stockholders and their companies over children’s health, safety and even lives over and over again,” said Shelby Knox of the parenting advocacy group ParentsTogether, in response to a report published earlier this year that showed TikTok was recommending harmful content to teens.
TikTok has defended its platform, pointing to policies that prohibit users under the age of 13. Its guidelines explicitly forbid videos that encourage harmful behavior, and when users search for content related to topics such as eating disorders, the platform instead surfaces prompts offering mental health resources.