Entertainment

The YouTube Pedophile Commenting Scandal, Explained

The predator problem has been public knowledge since at least 2017. But the online video giant has done nothing meaningful to fix it.

Over the weekend, YouTube activist Matt Watson hit the front page of Reddit with a video about a disturbing phenomenon he discovered on Google’s video hosting platform, the world’s second-largest search engine. Watson demonstrated how he could click through normal, appropriate YouTube content toward suggestive videos of young girls. Though many of these videos, which feature preteens wearing bikinis and vamping in their bedrooms, were not explicitly sexual, the bulk had been commented on by users with seemingly pedophilic intent. Some comments featured timestamps for specific moments when the young girls were in compromising positions. Others featured links to private, unlisted videos. These clips were monetized via ads for companies like Grammarly and Google itself.

Shortly after Watson’s video, which showed him expressing horror at the phenomenon, climbed the Reddit rankings, media organizations and advertisers responded. The Verge reported that Epic Games, the company behind Fortnite, pulled its pre-roll ads from YouTube after Watson’s post went viral, and that Peloton, the indoor exercise bike company, said it was working to figure out why its ads were being run against inappropriate content. YouTube also responded by updating its “channel strike system,” which had not been overhauled in a decade; the system warns users who behave inappropriately and penalizes them on a three-strike basis.

This was something, but as Forbes, Mashable, Business Insider, and The New York Times all pointed out, it wasn’t a solution to what is apparently a problem of algorithms, machine learning, and human oversight.

Interestingly, comments had been disabled on many of the videos Watson viewed, which suggests that YouTube’s content managers had noticed the pattern and were trying to make it harder for pedophiles to operate on the site. But all those efforts confirmed was an awareness of a problem that remains unsolved.

In fact, it’s clear the problem dates back to at least 2017. In the same month that the “Elsagate” controversy exploded, people began to ring the alarm about weird, implicitly sexual comments on videos of and for little kids. Major advertisers like AT&T stopped buying ads on the site when the BBC reported that inappropriate comments on monetized videos of kids often included timestamps for split-second moments of suggestive positioning as well as links to unlisted videos, and advertisers did not resume spending until the issue was “fixed.” On February 21, AT&T, which had only resumed advertising on the platform in January, said it would stop spending on YouTube ads again. Hasbro, Nestle, and Disney have all followed suit.

Elsagate was not a pedophile ring, but the problems are linked. Content farms, many of them companies headquartered in places like Ho Chi Minh City, Vietnam, took advantage of keyword targeting, ads placed automatically based on channel engagement, and shoddy AI filtering to churn out cartoons built around unlicensed characters from popular kids’ franchises. The videos were neither educational nor child friendly, but they proliferated on YouTube Kids as monetized content.

YouTube changed its monetization guidelines after the dual 2017 scandals. Today, video creators must have at least 1,000 subscribers and 4,000 hours of watch time before their channels can run ads. The company now also claims that every monetized video is reviewed by actual human beings, not AI, in order to keep ads from being served against inappropriate content (think: a Google ad rolling before a video of a tween girl eating a lollipop that has racked up over 1 million views).

That’s why so many videos of kids have their comments disabled. YouTube’s 10,000 human reviewers, tasked with reviewing the 400 hours of footage uploaded to the site every minute, are also in charge of monitoring engagement with those videos and disabling inappropriate comments, including links to unlisted videos and timestamps pointing to kids in suggestive positions.

I made a new YouTube account after finding that my recommendations were so entrenched that surfacing inappropriate content was difficult, and searched for the same thing Watson did: “bikini haul.” Within 12 clicks I was in a weird rabbit hole of kid wrestling videos and girls’ night-time routines. I stumbled across one 30-plus-minute video and found this:

At 21:25, the girl in the video moved away from the camera and, for just a flash of a second, her crotch was in view.

I also came across a slew of modeling videos of young girls who were clearly under 18. Commenters noted how beautiful the models were and how great they looked in their underwear, despite the fact that these are children. I won’t share the videos because they are inappropriate, but the fact of the matter is that not only should these comments be closely monitored, the videos themselves maybe shouldn’t be on the site at all. No action has been taken against those comments or against the videos of minors modeling underwear.

Another girl posted a video of her night-time routine, post-shower. The girl, whose YouTube page is called “Makenzie’s Fortnite experience” and who appears to be no more than 13 years old, accidentally moves her towel so that, at the 2:47 mark, the side of her bare torso is exposed.

Obviously, this video should never have been uploaded. The girl in it is clearly pre-pubescent and is wearing a towel while explaining her night-time routine. The video has some 51,000 views; the rest of her videos have between eight and 150. So why is this happening? Does she know why 51,000 people have watched this video and none of her Fortnite videos or fun craft videos? Does she understand that many of those viewers aren’t there because they care about her night-time routine but because they want to see a little girl in a towel? Maybe. Maybe not. She’s a kid. Someone should be monitoring her internet usage more closely, but that’s hard as well. And that’s where YouTube should come in, but hasn’t. This is inappropriate content even if it is not overtly sexual, because of the way it is likely to be consumed: it isn’t titillating unless you’re the sort of person who specifically finds it titillating.

Though these videos were not monetized, they were also clearly not being monitored by human beings or AI, at least not effectively. Some of the videos shouldn’t be on YouTube at all but have millions of views. If comment and content moderation are as ineffective as they appear, kids will continue to be the subject of sexual interest on the platform, and the platform will continue to be a tool for predators.

As YouTube becomes a place where more and more children consume video, as well as a place where they showcase their creations, those in charge of the platform need to decide whether moderation ought to be pursued in the interest of plausible deniability or in the interest of protecting children. If the company comes to the latter conclusion, and let’s hope it does, it will have to reconsider both its staffing and its software. If it doesn’t, it should reconsider what role it wants to play on the internet.