(Mis)Information Overload: An Examination of Harmful Content & How It’s Delivered
Conspiracy theories. Extremist views. Hate speech. These are all things someone could stumble upon in a dark corner of the web. Who’s viewing content like this? How often, and for how long? Based on behavioral research conducted at Dartmouth, it’s a relatively small proportion of people who consume—and seek out—most of the problematic content on YouTube.
Brendan Nyhan’s research—which explores questions related to the creation, spread, and sentiment of misinformation—helps us better understand the social climate in a volatile, changing world.
The Information Ecosystem
Nyhan, a political scientist and professor of government at Dartmouth, says the problem of potentially harmful content on YouTube is real. The challenge, he says, “is understanding the nature of the problem so we can think about how to best address it.”
In 2019, YouTube announced changes to its algorithm intended to reduce or prevent exposure to potentially harmful video content; the platform later said that, since the change, views of such content from nonsubscribers had declined.
These reports were not independently verified, a fact that served as a catalyst for Nyhan and his collaborators. Their paper, “Subscriptions and External Links Help Drive Resentful Users to Alternative and Extremist YouTube Channels,” appeared in Science Advances in August 2023.
YouTube Views. Extremist Views.
Nyhan and his collaborators analyzed the web browsing data of more than 1,100 participants, including users from a general population sample, a group that expressed high levels of racism and hostile sexism, and a sample of self-reported YouTube users.
The findings show that consumption of potentially harmful content is concentrated among a subset of users: those who already express high levels of gender or racial resentment. Despite YouTube’s algorithm updates, videos from alternative or extremist channels were still often recommended to users who had watched similar content. It was this “related video” feature that caused alarm. Was a cascade of suggested content sending people down a rabbit hole of increasingly extreme, radicalizing videos? This study shows that, no, the average American is not spiraling into an abyss of misinformation on YouTube.
Researchers also discovered that the majority of users who watched this content subscribed to at least one extremist or alternative channel, further demonstrating that the algorithm isn’t the only path to potentially harmful content: some users actively seek it out.
Overall, Nyhan’s findings indicate that only a fraction of users watch harmful content; still, a fraction of YouTube’s millions of users is significant. The research shows there is more work to be done across the broader online and social media landscape, and more to learn about how technology (and our relationship with it) shapes our views, perceptions, and actions, and how platforms like YouTube can respond.
Informing Policy. Empowering Change.
Social science research at Dartmouth continues to play a vital role in national conversations and to inform policymakers. Nyhan’s study, an earlier version of which was published by the Anti-Defamation League, garnered attention from media outlets, professional journals, and watchdog groups alike, from The New York Times to the National Academy of Medicine to the Nieman Lab.
***
The research happening at Dartmouth today builds on a rich history of life-altering, game-changing innovation, which includes the first-ever clinical X-ray in 1896 and the birth of the BASIC computer programming language, and it fuels the future of discovery across our institution.