

February 10, 2019, 7:36 PM GMT

By Kalhan Rosenblatt

YouTube has announced that it will no longer recommend videos that come close to violating its community guidelines, such as conspiracy or medically inaccurate videos.

On Saturday, a former engineer at Google, YouTube's parent company, welcomed the move as a "historic victory."

The original YouTube blog post, published on January 25, said that the videos the site recommends, usually after a user has watched one video, would no longer lead just to similar videos and would instead "draw recommendations from a wider set of topics."

For example, if a person watches a video showing a snickerdoodle recipe, they might be served suggestions for other cookie recipe videos. Until the change, the same scenario applied to conspiracy videos.

YouTube said in the post that the action is designed to "reduce the spread of content that comes close to, but doesn't quite cross the line of, violating" community guidelines. The examples the company cited include "promoting a fake miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11."

The change does not affect the availability of the videos. And if users are subscribed to a channel that produces conspiracy content, for example, or if they search for it, they will still see related recommendations, the company wrote.

Guillaume Chaslot, a former Google engineer, said he helped build the artificial intelligence used to curate recommended videos. In a thread of tweets posted on Saturday, he praised the change.

"It is just the beginning of a more humane technology: technology that empowers all of us, instead of deceiving the most vulnerable," Chaslot wrote.

Chaslot described how, before the change, a user who watched a conspiracy theory video was led down a rabbit hole of similar content, which he said was the intention of the AI he helped build.

According to Chaslot, the goal of YouTube's AI was to keep users on the site as long as possible in order to serve more ads. When a user was drawn in by multiple conspiracy videos, the AI was not only influenced by the content those hyper-engaged users watched; it also tracked the content those users engaged with, in an effort to reproduce that pattern with other users, Chaslot explained.

He pointed to another artificial intelligence that was also shaped by the biases of its users: Microsoft's chatbot "Tay."

Tay was a Twitter chatbot developed by Microsoft that was meant to converse with users like a person and learn from others.

Within 24 hours of its release, Tay went from innocent chatbot to full-fledged misogynist and racist, according to Grouvy Today. The AI that Tay learned from was influenced by the engagement of Twitter users who spammed the bot with those ideologies, according to CNBC.

Chaslot said YouTube's fix for its AI recommendations should also direct people to videos with truthful information and revise the current system it uses to recommend videos.

"The AI change will have a huge impact, because the affected channels have billions of views, mostly coming from recommendations," Chaslot said, adding that the platform's decision to make this change will affect thousands of users.

YouTube did not immediately respond to a request for comment on Chaslot's thread.