An investigation by the Wall Street Journal and Stanford University has found that Instagram’s recommendation system facilitates the growth and connection of pedophile networks online.
The researchers found that accounts in these networks thrive by using hashtags popular in the pedophile community. When the researchers created test accounts or interacted with this content, Instagram immediately and overwhelmingly recommended similar accounts and networks.
Alex Stamos, head of Stanford’s Internet Observatory and former chief security officer for Meta, told the WSJ that the company could and should be doing more to tackle this issue.
“That a team of three academics with limited access could find such a huge network should set off alarms at Meta,” said Stamos. “I hope the company reinvests in human investigators.”
Apart from graphic and explicit content, the networks allow users to buy “menus” of disturbing content or to commission specific material, including acts of violence and bestiality. Children are also sometimes offered for in-person “meet-ups”.
Some accounts bore indications of sex trafficking.
In response to the report, Meta said it was setting up an internal task force to address the issues raised by the investigation.
“Child exploitation is a horrific crime,” the company said. “We’re continuously investigating ways to actively defend against this behaviour.”
A history of negligence
While Meta has taken down over 490,000 accounts in one month alone, the networks continue to grow, persisting through the use of hashtags and backup accounts.
Because Instagram typically responds by deleting individual accounts rather than blocking the devices or IP addresses behind them, account holders simply advertise backup profiles and other means of contact in their bios and quickly rebuild large followings.
The investigation also found the site’s moderation to be lacklustre: in multiple cases, Instagram was slow to act on requests to delete offending posts and stories.
In one instance, an account that openly advertised child sexual abuse content was reported, yet it remained active after being cleared by Instagram’s review team.
In other instances, Instagram declined to take down reported posts, replying: “Because of the high volume of reports we receive, our team hasn’t been able to review this post.”
While the report acknowledges the presence of such activity on other platforms like Twitter and TikTok, the researchers found it at roughly a third of the volume seen on Instagram.
This is not the first time Meta’s algorithms have been at the centre of a scandal: in 2019, the platform was found to be hosting online slave markets.
Hiding in plain sight
Instagram allows users to search for terms associated with sexualised content of minors.
Searching for terms associated with pedophilia merely triggered the warning “These results may contain images of child sexual abuse”, which noted that producing and consuming such material causes “extreme harm” to children but did not block the search.
The test accounts’ “suggested for you” pages also recommended off-platform sites where illegal services could be purchased. According to data gathered with Maltego, a network-mapping tool, 112 of those seller accounts collectively had 22,000 unique followers.
Beyond commercial sellers, there is an entire ecosystem of accounts sharing pro-pedophilia discussions and memes.
“Instagram’s problem comes down to content-discovery features, the ways topics are recommended and how much the platform relies on search and links between accounts,” David Thiel, chief technologist at the Stanford Internet Observatory, said.
“You have to put guardrails in place for something that growth-intensive to still be nominally safe, and Instagram hasn’t,” Thiel, who previously worked at Meta on security and safety issues, added.
While a Meta spokesman said the company is building systems to prevent such accounts from being recommended and to speed up enforcement, Thiel called Instagram’s role in promoting pedophilic content and accounts unacceptable.