What Can YouTube Do to Keep Our Children Safe?

YouTube has always done everything in its power to keep people on the platform. Can furious parents change that?

In February, YouTuber Matt Watson posted a video exposing how easily he could fall into an algorithmically generated rabbit hole and find commenting communities of pedophiles exploiting otherwise normal videos of children. Sometimes these commenters posted links to unlisted videos or shared WhatsApp contact information for what was, presumably, pedophile-friendly group messaging; more often, they posted time stamps of moments in the videos when children were in compromising positions.

Watson’s findings yet again inflamed the conversation around YouTube’s moderation and accountability. Many of the videos he found were programmatically monetized with ads, meaning that YouTube was making money off of content that, while not disturbing in itself, was being exploited for clicks and views by people with ulterior motives. The videos themselves were, for the most part, not the problem. Many showed children doing normal kid stuff: wrestling with their siblings, showing off their toy collections. It was the comments, and the ways the videos were linked to one another, that were suspicious and, ultimately, disturbing.

The scandal came just a year after YouTube first acknowledged these commenting rings, and on the heels of the YouTube Kids #ElsaGate scandal, in which people found disturbing videos on a platform designed with kids and their safety in mind. It embroiled YouTube in a debate in which many asked: Has YouTube changed anything at all? If so, why is the problem proliferating? Can it be fixed?

YouTube responded by disabling comments on millions of videos that were being targeted by predators and by pointing to its 10,000-person team of human comment and content moderators, along with a machine learning system whose job is to sift through videos and flag anything offensive. But is that enough? Is there any way for YouTube to make amends with worried parents? Or is the company’s algorithm too far gone?

To figure out what, if anything, can be done to fix the platform, Fatherly talked to Jonas Keiser, an Affiliate at the Berkman Klein Center for Internet & Society, Associate Researcher at the Alexander von Humboldt Institute for Internet & Society, and DFG Research Fellow. Keiser is an expert in YouTube’s algorithms and the proliferation of harmful communities on the platform. We spoke to him about how YouTube’s algorithm works, how these problems arise, and what the company can do better.

Your research focuses on far-right troll campaigns on YouTube and how such communities spread. Are there similarities between those communities and the newly discovered pedophilic communities sharing links in comment sections?


I wouldn’t make that comparison. With the far right, we see much more visible and obvious attempts to form a community and to connect with other channels. They follow their own trends of topics they think are important, and this doesn’t happen on one channel but across several. It’s done very deliberately through their own actions. Far-right talking heads go on each other’s shows and lend each other prominence and legitimacy.

At the same time, through their activity and user activity, they also affect YouTube’s algorithm in such a way that, on YouTube, political videos and political channels, no matter what ideology you look at, will often lead to far-right channels.

With the pedophilic communities, it seems to be much more that the channels and videos were not necessarily the community; rather, it was the comments, which, from what I’ve read, created this weird, invisible, very disturbing phenomenon.

For those who are unaware, how does YouTube’s algorithm function?

YouTube’s algorithm revolves around the idea of keeping users engaged. In that context, that obviously means viewing videos, commenting, liking, or sharing. Staying on the platform is the most important goal. To keep you on the platform, you get recommendations on your home page, on video pages, and on the channels themselves. These are all built around the idea that users should stay on the platform. The question is, of course: how does YouTube create an algorithm that does that?

Many different data points go into that algorithm. For example: What is currently trending on the platform? What have other people with the same interests watched? Which videos are getting clicked on the most? Where do similar people overlap in the comments? Those sorts of things. The main idea is to have the algorithm learn what attracts users and keeps them on the platform.
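To make that description concrete, here is a rough Python sketch of what engagement-weighted ranking could look like. The signal names and weights below are illustrative assumptions, not details of YouTube’s actual system; the point is simply that every input rewards keeping people on the platform.

```python
# A minimal, hypothetical sketch of engagement-weighted recommendation ranking.
# Signal names and weights are illustrative, not YouTube's real system.
from dataclasses import dataclass


@dataclass
class VideoSignals:
    watch_time_minutes: float  # how long similar users watched
    clicks: int                # how often the thumbnail was clicked
    likes: int
    comments: int
    shares: int
    trending_boost: float      # 0.0-1.0, is the topic trending right now?


def engagement_score(v: VideoSignals) -> float:
    """Combine engagement signals into one number; higher = more likely to be recommended."""
    return (
        2.0 * v.watch_time_minutes
        + 0.5 * v.clicks
        + 1.0 * v.likes
        + 1.5 * v.comments      # an active comment section signals an engaged audience
        + 2.5 * v.shares
        + 10.0 * v.trending_boost
    )


def recommend(candidates: dict[str, VideoSignals], top_n: int = 5) -> list[str]:
    """Return the titles of the highest-scoring candidate videos."""
    return sorted(candidates, key=lambda title: engagement_score(candidates[title]), reverse=True)[:top_n]
```

Note that nothing in a scorer like this asks what the content is or who is watching it; it only asks what keeps people clicking, which is exactly the dynamic Keiser describes.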

In terms of content and comment moderation on YouTube, how is that moderation set up? Do you think it’s a sufficient response to the threats on the Internet?

Only to some extent. The problem with moderation the way most big social media platforms do it is that it’s built around people reporting content. If content isn’t clearly against rules that can be identified through machine learning, like curse words or other things that can be filtered out algorithmically, it is much harder to catch, and a human needs to look at it. But if no one reports the content, YouTube doesn’t know it exists, simply because there’s so much content on the platform. That highlights the importance of users. Whether that’s a good or bad thing is debated: users are put in the position of having to do the labor of flagging to YouTube what is offensive to them.
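A rough sketch of the report-driven pipeline Keiser describes might look like the following. The word list, the queue, and the thresholds are assumptions made for illustration: a simple filter removes what it can recognize, reported content goes to a human, and everything else is simply published.

```python
# Illustrative sketch of report-driven comment moderation.
# The banned-word list and queue are hypothetical, not a real platform's rules.
from collections import deque

BANNED_WORDS = {"curseword1", "curseword2"}  # placeholder terms a filter could catch automatically

human_review_queue: deque[str] = deque()


def moderate_comment(comment: str, user_reports: int) -> str:
    """Auto-remove only what a simple filter can catch; everything else depends on user reports."""
    words = set(comment.lower().split())
    if words & BANNED_WORDS:
        return "removed automatically"      # machine-identifiable rule violation
    if user_reports > 0:
        human_review_queue.append(comment)  # a human moderator will look at it later
        return "queued for human review"
    return "published"                      # unreported content is never seen by moderators
```

The last line is the structural weakness he points to: content that no one reports, however harmful, never enters the review queue at all.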

On the video level, other forms [of moderation have been] implemented now. There’s downranking of content, which can affect top news: if you search for news items, you’ll get what YouTube considers to be more reliable sources first. They’ve also experimented with info boxes around certain conspiracy videos. These are all, in one way or another, forms of content moderation.
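That kind of downranking can be pictured as a second pass over search results for news queries, in which a source-reliability signal outweighs raw engagement. The channel scores and the blending weights below are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical per-channel "reliability" scores a platform might maintain (0.0-1.0).
SOURCE_AUTHORITY = {
    "established_news_channel": 0.9,
    "unknown_uploader": 0.3,
    "known_conspiracy_channel": 0.05,
}


def rerank_news_results(results: list[tuple[str, str, float]]) -> list[tuple[str, str, float]]:
    """results: (video_title, channel, engagement) with engagement normalized to 0.0-1.0.
    For news queries, weight source reliability above raw engagement so reliable outlets surface first."""
    def blended(item: tuple[str, str, float]) -> float:
        _, channel, engagement = item
        authority = SOURCE_AUTHORITY.get(channel, 0.3)  # unknown channels get a middling default
        return 0.3 * engagement + 0.7 * authority       # reliability dominates for news queries
    return sorted(results, key=blended, reverse=True)
```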

The way that YouTube relies on good Samaritans to report content that might be problematic or offensive feels similar to the way Reddit is largely policed by community moderators, although Reddit doesn’t seem to have this problem at the same scale.

The comparison makes sense, but the two are very different. Each subreddit is a community that basically decides what content is allowed. In the science subreddit, for example, you can see moderators making sure the rules are being followed, while other subreddits are very laissez-faire. The number of posts on a subreddit can’t be compared to the number of videos being uploaded to YouTube in any given hour. Some of these forums have an interest in self-policing. The labor is basically outsourced to the users, and the assumption is that as long as there are people motivated enough to do that labor, it’s fine. That’s not the case on YouTube; those forums don’t exist in the same way.

Turning to these recent commenting rings and the YouTube Kids scandals such as #ElsaGate, in which trolls stitched disturbing footage into children’s content: Is there a way to bring attention to these problems that doesn’t simply rely on the goodwill of the average YouTube user?

YouTube has slowly, very slowly, moved in the right direction. They’re not there yet, but they are coming to understand that people search YouTube for news, so curating which channels come up makes sense, in my opinion.

In my own work, it’s flabbergasting to me that YouTube basically treats political content the same as it does pop music. In recent weeks, they’ve slowly understood that there is a difference.

For example, with the anti-vaxxing movement, they said they wouldn’t necessarily allow certain videos anymore. I think that’s important. In a political context, the idea of “just keeping people watching” is not what you want. That’s what you want for music, for gaming, or for other forms of entertainment, but not when it comes to information.

For video and channel recommendations, I think continuing that logic, which aims at keeping people on the platform no matter their motivation, is problematic.

So, what solutions, if any, do you suggest for dealing with YouTube’s recent commenting scandals?

From what I’ve read, the community mostly existed through the comments, which then had this big effect on YouTube’s recommendations, so those videos were linked, obviously in a rather sick way. I think that’s the problematic part. How do you, as a platform, become aware that something is going on that shouldn’t be going on? Obviously it shouldn’t be down to certain active and aware users to figure these things out, but thankfully, such users exist.

The question is: how do algorithms work in that context and allow for that? My perception is that there’s a difference between the users who commented and the people who uploaded the videos. That might be easier to disconnect. But I think the best option would be to invest heavily in humans who moderate suspicious content, and to be very deliberate about figuring out what counts as suspicious with regard to the algorithms.

Is there a version of YouTube that exists, in your mind, that doesn’t have to run on these algorithms that could link potentially dangerous communities to one another? Or is it intrinsic to the platform itself?

I don’t think it’s intrinsic to the platform. I think it’s intrinsic to the business model. You could very well imagine a video community that is similar to Reddit, for example, organized around certain content or certain topics, where you can post videos on those topics.

That’s the problem. If YouTube abandons the algorithm that keeps people on the platform, then that is a profit problem. YouTube wants to keep making money.

I think that’s fair. A lot of the discussions we’re currently having about how platforms work, why some content is treated more favorably than other content, and why misinformation or conspiracy theories are able to thrive have to do with the business model and the idea that everything that gets clicks is rewarded. There are ideas for platforms that are not built around that. But those wouldn’t make as much money.