Explained: How YouTube’s Recommendation System Works

How does YouTube decide which video to recommend to you, and which videos end up on your YouTube homepage? YouTube’s vice president of engineering, Cristos Goodrow, shed some light on the matter in a detailed blog post.

The post addresses concerns about whether sensationalist and misleading content, or what the company calls “borderline content”, gets more views on the platform. It also explains how YouTube tries to ensure that it doesn’t end up recommending such content.

In the blog post, Goodrow says recommendations “drive a significant amount of the overall YouTube audience”, more than “even channel subscriptions or search.” He also notes that YouTube wants to limit views of borderline content coming from recommendations to less than 0.5% of overall views on the platform.

So let’s take a look at how YouTube’s recommendation systems work.

What is the “recommendation system” on YouTube?

YouTube’s recommendation system works in two main places. One is the YouTube homepage, which usually shows a mix of videos from channels you have subscribed to and recommended videos the platform thinks you are likely to watch.

Recommendations also power the ‘Up Next’ panel: when you finish watching a video, YouTube lines up another video it thinks you are likely to watch.

The post explains that YouTube’s recommendation systems “don’t work from a ‘cookbook’ of what to do,” but are constantly evolving and based on certain signals.

So what are these signals used by YouTube’s recommendation system?

These signals include clicks, watch time, survey responses, and actions around a video such as sharing it or hitting the like or dislike button.

Clicks: Clicking on a video is considered a strong indicator that you intend to watch it. But the blog post notes that over the years YouTube has realized that a click alone doesn’t mean the video is something the viewer actually values. After all, deceptive, clickbait thumbnails are often used to entice viewers, who then discover the video is not what they wanted.

Watch time: YouTube looks at which videos you have watched and for how long, in order to give its systems “personalized signals”. For example, if you are a fan of comedy content and spend hours watching it, comedy is likely to dominate your recommendations, since it is a safe bet you will watch more such videos. This matters given that the average American adult user spends around 41.9 minutes on the platform per day, according to an eMarketer report (https://www.emarketer.com/content/us-youtube-advertising-2020).

But “not all viewing time is created equal”, which is why YouTube also takes other cues into account when deciding on recommendations.
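
YouTube doesn’t publish the model behind this, but the basic intuition of blending click and watch-time signals can be sketched roughly as follows. Everything in this snippet, the weights, the field names and the numbers, is invented for illustration; the real system is a large learned model, not a hand-tuned formula.

```python
# A rough, illustrative sketch of blending recommendation signals.
# The weights and field names are invented for this example; YouTube's
# real system is a learned model, not a hand-tuned formula like this.

def candidate_score(video_stats: dict) -> float:
    """Combine click and watch-time signals into one ranking score."""
    click_rate = video_stats["clicks"] / max(video_stats["impressions"], 1)
    # Fraction of the video a typical viewer actually watched:
    completion = video_stats["avg_watch_seconds"] / max(video_stats["duration_seconds"], 1)

    # A clickbait video gets many clicks but is abandoned quickly,
    # so weighting completion heavily keeps it from ranking well.
    return 0.2 * click_rate + 0.8 * completion


honest_video = {"clicks": 120, "impressions": 1000,
                "avg_watch_seconds": 480, "duration_seconds": 600}
clickbait_video = {"clicks": 400, "impressions": 1000,
                   "avg_watch_seconds": 30, "duration_seconds": 600}

print(candidate_score(honest_video))     # ~0.66
print(candidate_score(clickbait_video))  # ~0.12
```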

Survey responses: YouTube says this is done to measure “valued watch time,” that is, time spent watching a video that the viewer considers valuable. Surveys ask users to rate videos out of five stars, usually with follow-up questions if a user rates a video very low or very high. Only videos rated four or five stars are counted as valued watch time. YouTube uses responses from these surveys to train “a machine learning model to predict potential survey responses for everyone.”
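
YouTube hasn’t disclosed how that survey-prediction model is built, but the general pattern, training a classifier on a small set of labelled survey responses and then predicting the answer for everyone else, can be sketched like this. The features, labels and library choice here are purely illustrative assumptions.

```python
# Illustrative sketch only: a tiny model that predicts whether a viewer
# would rate a video 4 or 5 stars ("valued watch time"), trained on a
# handful of made-up survey responses. YouTube's real models and features
# are far larger and are not public.
from sklearn.linear_model import LogisticRegression
import numpy as np

# Features per (user, video) pair: [fraction watched, liked?, shared?]
X = np.array([
    [0.95, 1, 1],   # watched nearly all of it, liked and shared
    [0.80, 1, 0],
    [0.10, 0, 0],   # clicked away almost immediately
    [0.30, 0, 0],
    [0.90, 0, 1],
    [0.05, 0, 0],
])
# Label: 1 if the survey answer was 4 or 5 stars, else 0
y = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Predict "valued watch time" probability for a view with no survey answer
new_view = np.array([[0.85, 1, 0]])
print(model.predict_proba(new_view)[0, 1])  # probability of a 4-5 star rating
```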

Shares, likes, dislikes: Whether you share, like or dislike a video is also taken into account. The assumption is that if someone enjoyed a video, they will hit the like button or might even share it. This information is also used to “try to predict the likelihood that you will share or like other videos”. A dislike, obviously, is a strong indicator that the video did not appeal to the viewer.

But the blog post also explains that the importance assigned to each signal depends on the user. “If you’re the type of person to share any video you watch, including ones you give one or two stars, our system will know not to consider your shares when recommending content,” the post explains.
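
The post doesn’t say how that per-user adjustment is implemented, but a toy version of the idea, that an action a user performs indiscriminately carries less weight than one they perform rarely, might look like this. The function name and the example numbers are made up for the sketch.

```python
# A rough sketch of per-user signal weighting (names and numbers are
# invented). If a user shares almost everything they watch, their shares
# carry little information, so that signal is down-weighted for them.

def share_signal_weight(videos_watched: int, videos_shared: int) -> float:
    """Return how much weight a user's 'share' actions should get."""
    if videos_watched == 0:
        return 1.0
    share_rate = videos_shared / videos_watched
    # The rarer the action, the more it tells us about genuine preference.
    return 1.0 - share_rate


print(share_signal_weight(videos_watched=200, videos_shared=190))  # ~0.05: shares mean little
print(share_signal_weight(videos_watched=200, videos_shared=5))    # ~0.975: shares are a strong signal
```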

YouTube says the recommendation system doesn’t have a “fixed formula” but “grows dynamically” and even tracks changes in viewing habits.

What about disinformation? How does YouTube make sure it’s not recommended?

YouTube, like other social media platforms such as Facebook and Twitter, has come under criticism for not doing enough to limit the spread of disinformation. US President Joe Biden in particular has been very critical of Facebook and YouTube for allowing the spread of false information about COVID-19 vaccines.

It is in this context that YouTube has opened up about how its recommendation system operates. The company says it doesn’t want to recommend low-quality or “borderline” content, which is problematic but doesn’t outright violate its rules. Examples include videos claiming the Earth is flat or those claiming to offer a cure for cancer through “natural remedies.”

YouTube says it has limited the recommendation of low-quality content since 2011, when it built “classifiers to identify racy or violent videos and prevent them from being recommended.” Then, in 2015, it began demoting sensational tabloid content that was appearing on home pages.

This rating system, in which a video in the News & Information category is tagged as authoritative or borderline, relies on human reviewers. The blog post explains that “these evaluators come from all over the world and are trained through a set of detailed, publicly available rating guidelines.” YouTube also relies on “certified experts, such as doctors when the content involves health information.”

The evaluators try to answer a few questions about a video: whether the content demonstrates expertise, how reputable the channel or the speaker is, and so on, according to the blog post.

Videos are also reviewed to see whether the content is misleading, inaccurate, deceptive, hateful, or likely to cause harm. Based on all of these factors, a video gets a score: the higher the score, the more YouTube’s recommendation system will promote it. A lower score means the video is rated as borderline and is demoted in recommendations.
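
The blog post doesn’t spell out how these scores interact with personalization, but one simple way to picture it is a ranking step in which a low quality score demotes a video even if it matches the viewer’s tastes. The blend, the threshold and the scores below are all invented for illustration.

```python
# Illustrative only: how a reviewer-derived quality score might gate and
# re-rank personalized recommendations. The threshold, scores and blend
# are invented for this sketch.

BORDERLINE_THRESHOLD = 0.4  # hypothetical cut-off for demotion

def rank_candidates(candidates: list[dict]) -> list[dict]:
    """Sort candidate videos, demoting those scored as borderline."""
    def final_score(video: dict) -> float:
        score = 0.7 * video["personal_score"] + 0.3 * video["quality_score"]
        if video["quality_score"] < BORDERLINE_THRESHOLD:
            score *= 0.1  # heavy demotion rather than outright removal
        return score
    return sorted(candidates, key=final_score, reverse=True)


candidates = [
    {"title": "Flat-earth 'proof'", "personal_score": 0.9, "quality_score": 0.1},
    {"title": "News explainer",     "personal_score": 0.6, "quality_score": 0.9},
]
for video in rank_candidates(candidates):
    print(video["title"])  # the well-rated explainer outranks the borderline video
```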

The company says these human ratings are then used to train its system “to model their decisions, and we are now scaling their ratings for all videos on YouTube.”
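
Again, the details of how human ratings are scaled to every video aren’t public, but the pattern described, fitting a model on reviewer-scored examples and using it to estimate scores for unreviewed videos, can be sketched like this. The features and scores are made up, and the flagged-claim feature is an assumed input.

```python
# Sketch of the "scale human ratings to all videos" idea: fit a simple
# regressor on a few human-scored examples, then predict scores for
# unreviewed videos. Features and scores are invented; the real system is
# a much larger learned model.
from sklearn.linear_model import Ridge
import numpy as np

# Features per video: [from an established outlet?, cites references?,
#                      rate of flagged claims from an (assumed) claim checker]
reviewed_X = np.array([
    [1, 1, 0.0],
    [1, 0, 0.1],
    [0, 0, 0.8],
    [0, 1, 0.3],
])
reviewer_scores = np.array([0.95, 0.80, 0.10, 0.55])  # human quality scores

model = Ridge().fit(reviewed_X, reviewer_scores)

# Estimate a quality score for a video no human has reviewed
unreviewed = np.array([[0, 0, 0.6]])
print(model.predict(unreviewed)[0])
```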

So, is this borderline content getting more engagement?

YouTube says that “through polls and comments” it has found that “most viewers don’t want borderline content recommended, and many find it disturbing and off-putting.” It further claims that when it started demoting salacious or tabloid-style content, watch time actually increased by about 0.5% over 2.5 months.

The company says it has not seen evidence that such content is more engaging than other content. The post gives the example of flat-earth videos, noting that while plenty of such videos are uploaded to the platform, they get significantly fewer views.

It also revealed that when it started demoting borderline content in 2019, it saw “a 70% drop in viewing time on non-subscribed recommended borderline content in the United States.” It further states that today, consumption of borderline content that comes from its recommendations is well below 1%. That also means that despite YouTube’s best efforts, some borderline content does end up being recommended, albeit a very small percentage.

The post also states that advertiser guidelines mean many of these borderline content channels are unable to monetize their videos, noting that advertisers do not wish to be associated with such content.

So why isn’t YouTube removing deceptive content?

YouTube admits that this kind of content is not good for it, and that it hurts its image with the press, the public and policymakers. But much like Facebook, YouTube doesn’t actually remove such content because, it says, “disinformation tends to change and evolve quickly” and, “unlike areas like terrorism or child safety”, a clear consensus is often missing.

It also adds that “misinformation can vary based on personal perspective and background.” This is a defense that has frustrated critics, who argue that YouTube is not doing enough to remove problematic content. The post admits that the company knows it sometimes leaves up “controversial or even offensive content,” but adds that it wants to focus on building a recommendation system that doesn’t promote such content.

YouTube admits the problem is far from solved, but says it “will continue to refine and invest in its system to keep improving it.”

