Perhaps you have an image in your mind of people who get brainwashed by YouTube.
You might picture your cousin who loves to watch videos of cuddly animals. Then out of the blue, YouTube’s algorithm plops a terrorist recruitment video at the top of the app and continues to suggest ever more extreme videos until he’s persuaded to take up arms.
A new analysis adds nuance to our understanding of YouTube’s role in spreading beliefs that are far outside the mainstream.
A group of academics found that YouTube rarely suggests videos that might feature conspiracy theories, extreme bigotry or quack science to people who have shown little interest in such material. And those people are unlikely to follow such computerized recommendations when they are offered. The kittens-to-terrorist pipeline is extremely uncommon.
That doesn’t mean YouTube is not a force in radicalization. The paper also found that research volunteers who already held bigoted views or followed YouTube channels that frequently feature fringe beliefs were far more likely to seek out or be recommended more videos along the same lines.
The findings suggest that policymakers, internet executives and the public should focus less on the potential risk of an unwitting person being led into extremist ideology on YouTube, and more on the ways that YouTube may help validate and harden the views of people already inclined to such beliefs.
“We’ve understated the way that social media facilitates demand meeting supply of extreme viewpoints,” said Brendan Nyhan, one of the paper’s co-authors and a Dartmouth College professor who studies misperceptions about politics and health care. “Even a few people with extreme views can create grave harm in the world.”
People watch more than one billion hours of YouTube videos daily. There are perennial concerns that the Google-owned site may amplify extremist voices, silence legitimate expression or both, similar to the worries that surround Facebook.
This is just one piece of research, and I mention below some limits of the analysis. But what’s intriguing is that the research challenges the binary notion that either YouTube’s algorithm risks turning any of us into monsters or that kooky things on the internet do little harm. Neither may be true.
Digging into the details, about 0.6 percent of research participants were responsible for about 80 percent of the total watch time for YouTube channels that were classified as “extremist,” such as those of the far-right figures David Duke and Mike Cernovich. (YouTube banned Duke’s channel in 2020.)
Most of those people found the videos not by accident but by following web links, clicking on videos from YouTube channels that they subscribed to, or following YouTube’s recommendations. About one in four videos that YouTube recommended to people watching an extreme YouTube channel was another video like it.
Only 108 times during the research — about 0.02 percent of all video visits the researchers observed — did someone watching a relatively conventional YouTube channel follow a computerized suggestion to an outside-the-mainstream channel when they were not already subscribed.
The analysis suggests that most of the audience for YouTube videos promoting fringe beliefs are people who want to watch them, and then YouTube feeds them more of the same. The researchers found that viewership was far more likely among the volunteers who displayed high levels of gender or racial resentment, as measured by their responses to surveys.
“Our results make clear that YouTube continues to provide a platform for alternative and extreme content to be distributed to vulnerable audiences,” the researchers wrote.
Like all research, this analysis has caveats. The study was conducted in 2020, after YouTube made significant changes to curtail recommending videos that misinform people in a harmful way. That makes it difficult to know whether the patterns that researchers found in YouTube recommendations would have been different in prior years.
Independent experts also haven’t yet rigorously reviewed the data and analysis. And the research didn’t examine in detail the relationship between viewership of extreme videos and watching YouTubers such as Laura Loomer and Candace Owens, some of whom the researchers named and described as having “alternative” channels.
More studies are needed, but these findings suggest two things. First, YouTube may deserve credit for the changes it made to reduce the ways that the site pushed people to views outside the mainstream that they weren’t intentionally seeking out.
Second, there needs to be more conversation about how much further YouTube should go to reduce the exposure of potentially extreme or dangerous ideas to people who are inclined to believe them. Even a small minority of YouTube’s audience that might regularly watch extreme videos amounts to many millions of people.
Should YouTube make it more difficult, for example, for people to link to fringe videos — something it has considered? Should the site make it harder for people who subscribe to extremist channels to automatically see those videos or be recommended similar ones? Or is the status quo fine?
This research reminds us to continually wrestle with the complicated ways that social media can both be a mirror of the nastiness in our world and reinforce it, and to resist easy explanations. There are none.
Tip of the Week
The normal human guide to digital privacy
Brian X. Chen, the consumer tech columnist for The New York Times, is here to break down what you need to know about online tracking.
Last week, listeners to the KQED Forum radio program asked me questions about internet privacy. Our conversation illuminated just how concerned many people were about having their digital activity monitored and how confused they were about what they could do.
Here’s a rundown that I hope will help On Tech readers.
There are two broad types of digital tracking. “Third-party” tracking is what we often find creepy. If you visit a shoe website and it logs what you looked at, you might then keep seeing ads for those shoes everywhere else online. Repeated across many websites and apps, marketers compile a record of your activity to target ads at you.
If you’re concerned about this, you can try a web browser such as Firefox or Brave that automatically blocks this type of tracking. Google says that its Chrome web browser will do the same in 2023. Last year, Apple gave iPhone owners the option to say no to this type of online surveillance in apps, and Android phone owners will have a similar option at some point.
The other type is “first-party” tracking, which a website or app does itself. If you search for directions to a Chinese restaurant in a mapping app, the app might assume that you like Chinese food and allow other Chinese restaurants to advertise to you. Many people consider this less creepy and potentially useful.
You don’t have much choice if you want to avoid first-party tracking other than not using a website or app. You could also use the app or website without logging in to minimize the information that is collected, although that may limit what you’re able to do there.
Before we go …
Barack Obama crusades against disinformation: The former president is starting to spread a message about the risks of online falsehoods. He’s wading into a “fierce but inconclusive debate over how best to restore trust online,” my colleagues Steven Lee Myers and Cecilia Kang reported.
Elon Musk’s funding is apparently secured: The chief executive of Tesla and SpaceX detailed the loans and other financing commitments for his roughly $46.5 billion offer to buy Twitter. Twitter’s board must decide whether to accept, and Musk has suggested that he wanted to instead let Twitter shareholders decide for themselves.
Three ways to cut your tech spending: Brian Chen has tips on how to identify which online subscriptions you might want to trim, save money on your cellphone bill and decide when you might (and might not) need a new phone.
Hugs to this
Welcome to a penguin chick’s first swim.
We want to hear from you. Tell us what you think of this newsletter and what else you’d like us to explore. You can reach us at email@example.com.