YouTube announces it will no longer recommend conspiracy videos



By Kalhan Rosenblatt

YouTube has announced that it will no longer recommend videos that “come close to” violating its community guidelines, such as conspiracy or medically inaccurate videos.

On Saturday, a former engineer for Google, YouTube’s parent company, hailed the move as a “historic victory.”

The original blog post from YouTube, published on Jan. 25, said that videos the site recommends, usually after a user has viewed one video, would no longer lead just to similar videos and instead would “pull in recommendations from a wider set of topics.”

For example, if a user watches a video showing a recipe for snickerdoodles, they may be bombarded with suggestions for other cookie recipe videos. Until the change, the same dynamic applied to conspiracy videos.

YouTube said in the post that the action is meant to “reduce the spread of content that comes close to — but doesn’t quite cross the line of — violating” its community policies. The examples the company cited include “promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11.”

The change will not affect the videos’ availability. And if users are subscribed to a channel that, for instance, produces conspiracy content, or if they search for it, they will still see related recommendations, the company wrote.

Guillaume Chaslot, a former Google engineer, said that he helped to build the artificial intelligence used to curate recommended videos. In a thread of tweets posted on Saturday, he praised the change.

“It’s only the beginning of a more humane technology. Technology that empowers all of us, instead of deceiving the most vulnerable,” Chaslot wrote.

Chaslot described how, prior to the change, a user watching conspiracy theory videos was led down a rabbit hole of similar content, which was the intention of the AI he said he helped build.

According to Chaslot, the goal of YouTube’s AI was to keep users on the site as long as possible in order to serve more advertisements. When a user was enticed by multiple conspiracy videos, the AI not only became biased toward the content those hyper-engaged users were watching but also tracked their viewing patterns in an attempt to reproduce them with other users, Chaslot explained.
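The feedback loop Chaslot describes can be illustrated with a toy sketch: a recommender that scores topics by accumulated watch time and weights suggestions toward whatever a user just viewed, so a handful of hyper-engaged viewers skew what everyone else is shown. The class name, topics, and scoring rule below are invented for illustration and are not YouTube's actual system.

from collections import Counter
import random

class EngagementRecommender:
    """Toy engagement-maximizing recommender (illustrative only)."""

    def __init__(self, topics):
        # Global score for each topic, shared across all users.
        self.topic_scores = Counter({t: 1.0 for t in topics})

    def record_watch(self, topic, watch_time):
        # Longer watch time raises the topic's score, so content that
        # hooks hyper-engaged users gets recommended more often to everyone.
        self.topic_scores[topic] += watch_time

    def recommend(self, last_topic, n=3):
        # Bias suggestions toward the topic just watched, weighted by
        # how much total watch time each topic has accumulated.
        weights = {t: s * (3.0 if t == last_topic else 1.0)
                   for t, s in self.topic_scores.items()}
        topics, w = zip(*weights.items())
        return random.choices(topics, weights=w, k=n)

if __name__ == "__main__":
    rec = EngagementRecommender(["baking", "gardening", "conspiracy"])
    # One hyper-engaged user binge-watches conspiracy videos...
    for _ in range(10):
        rec.record_watch("conspiracy", watch_time=30)
    # ...and the skew now shows up in recommendations for other users too.
    print(rec.recommend(last_topic="baking"))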

He pointed to a different artificial intelligence that was also shaped by the bias of its users: Microsoft’s chatbot “Tay.”
