Instagram’s recommendation algorithms link and promote accounts that facilitate and sell child sexual abuse content, according to an investigation published Wednesday.
The Meta-owned photo-sharing service stands out from other social media platforms and “appears to have a particularly serious problem” with accounts displaying self-generated child sexual abuse material, or SG-CSAM, Stanford researchers wrote in an accompanying study. Such accounts purport to be operated by minors.
“Due to the widespread use of hashtags, the relatively long life of seller accounts, and a particularly effective recommendation algorithm, Instagram serves as a key mechanism for discovering this specific community of buyers and sellers,” according to the study, which was cited in The Wall Street Journal’s investigation and conducted by researchers at Stanford University’s Internet Observatory Cyber Policy Center and the University of Massachusetts Amherst.
Although accounts can be found by any user searching for explicit hashtags, the researchers found that Instagram’s recommendation algorithms also promote them “to users browsing an account on the web, allowing account discovery without keyword searches.”
A Meta spokesperson said in a statement that the company has taken a number of steps to address the issues and has “established an internal task force” to investigate and respond to the allegations.
“The exploitation of children is a horrific crime,” the spokesperson said. “We are working aggressively to combat it on and off our platforms and support law enforcement in their efforts to arrest and prosecute the criminals behind it.”
Alex Stamos, Facebook’s former chief security officer and one of the paper’s authors, said in a tweet on Wednesday that researchers focused on Instagram because its “position as the most popular platform for teenagers globally makes it a critical part of this ecosystem.” However, he added that “Twitter continues to have serious problems with the exploitation of children.”
Stamos, who is now director of the Stanford Internet Observatory, said the problem continued after Elon Musk acquired Twitter late last year.
“What we discovered is that Twitter’s primary scan for known CSAM broke after Mr. Musk’s takeover and was not fixed until we notified them,” Stamos wrote.
“Then they cut off our access to the API,” he added, referring to the software that allows researchers to access Twitter data to conduct their studies.
Earlier this year, NBC News reported that multiple Twitter accounts offering or selling CSAM remained live for months, even after Musk promised to address child exploitation issues on the social messaging service.
Twitter did not provide comment for this story.