
How is YouTube using Artificial Intelligence?

  • Mallika Rangaiah
  • Jan 02, 2021

YouTube is home to the stories, memories, activities and data of a global audience, serving as both a source of entertainment and a source of information. The platform is at once an escape from reality and a plunge into it. With over a billion users logging in each month and more than a billion hours of video streamed every day, an enormous amount of activity takes place on the platform daily.


With this massive volume of users, uploaded content and engagement, Artificial Intelligence has become a powerful tool for the platform, streamlining its processes and supporting its efforts to improve the service. The onset of COVID-19, in particular, greatly increased YouTube's reliance on Artificial Intelligence, with the platform's staff confined to working from home for safety reasons.


Since we are discussing YouTube, you can also sneak a peek at our blog on Sentiment Analysis of YouTube Comments.



Application of Artificial Intelligence by YouTube


Below are some of the ways in which YouTube applies artificial intelligence today:


Dealing with fake news 


In recent years, YouTube and other social media platforms like Facebook and Twitter have been attempting to tackle fake news and misinformation. Unlike many social media sites that merely flag fake content, YouTube has been adopting artificial intelligence to remove such content altogether.


During the initial days of the COVID-19 pandemic, YouTube turned to AI to remove around 11 million videos from its platform. As per the platform's latest Community Guidelines Enforcement Report, this is the highest number of videos it has ever removed within a single quarter, i.e. the second quarter of 2020. Of the 11.4 million videos removed during that quarter, around 10.8 million had been flagged by automated AI moderation. Automated systems have thus proved to be an effective weapon for removing content classified as harmful under YouTube's policies.



Testing AI-generated video chapters


YouTube has recently begun testing machine learning to automatically add chapters to video content.

The platform announced the trial on its YouTube test features and experiments page on the Google Support website:


“We want to make it easier for people to navigate videos with video chapters, so we are experimenting with automatically adding video chapters (so creators don’t have to manually add timestamps). We’ll use machine learning to recognize text in order to auto generate video chapters. We’re testing this out with a small group of videos.”


The system builds on video chapters, a feature the platform rolled out to creators in 2020.


The feature enables creators to divide videos into sections, each with its own preview. Viewers can then skip directly to the section they wish to watch.


The platform states that enabling chapters leads viewers to watch more of a video and increases the chances that they return to it. Presently, creators are required to manually add timestamps to their video descriptions, so automatic chapter generation would save them considerable time and effort.
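To illustrate the manual process the feature replaces, here is a minimal sketch of how the common "0:00 Title" timestamp lines in a video description could be parsed into chapters. The function name and format assumptions are ours, not YouTube's:

```python
import re

def parse_chapters(description: str):
    """Extract (seconds, title) pairs from 'H:MM:SS Title' or 'M:SS Title' lines."""
    chapters = []
    for line in description.splitlines():
        match = re.match(r"^(?:(\d+):)?(\d{1,2}):(\d{2})\s+(.+)$", line.strip())
        if match:
            hours, minutes, seconds, title = match.groups()
            total = int(hours or 0) * 3600 + int(minutes) * 60 + int(seconds)
            chapters.append((total, title))
    return chapters

description = """My new video!
0:00 Intro
1:30 Main topic
1:02:15 Wrap-up"""
print(parse_chapters(description))
# [(0, 'Intro'), (90, 'Main topic'), (3735, 'Wrap-up')]
```

Lines that don't start with a timestamp are simply ignored, mirroring how YouTube only treats timestamped description lines as chapter markers.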


You can also take a look at our blog on Extracting YouTube Comments.



Automatically removing unsuitable content


Pressure and disapproval from governments, agencies and brands have been primary drivers of YouTube's perseverance in tackling unsuitable and distasteful content, particularly the backlash the platform faces when advertisements appear alongside repugnant content. One instance of this is when advertisements began showing alongside videos that propagated terrorism and racism, leading Havas UK and other brands to pull their advertising dollars. In response, YouTube made use of advanced machine learning and collaborated with third-party organizations to provide greater transparency for its advertising partners.


Although the platform's algorithms may not be foolproof or completely accurate, they examine content far faster than human moderators can. Yet there have also been cases where newsworthy content has been removed after being tagged as "violent extremism". This is largely why Google has also employed full-time human specialists to work alongside AI in addressing violative content.


Artificial Intelligence has greatly improved YouTube's capacity to detect unsuitable content.


YouTube is also equipped with a "trashy video classifier" which scans the platform's homepage as well as "watch next" panels. The classifier examines feedback from viewers, who can report a deceptive title, unsuitable material or other objectionable content.



New effects on videos 


Switching video backgrounds has long been achievable, but it used to be a complex and sluggish process. Google's AI researchers have trained a neural network to swap out backgrounds in videos without requiring any special equipment. The algorithm was trained on carefully labeled imagery, enabling it to learn patterns and resulting in a system fast enough to keep pace with video.
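The final compositing step of such a system can be sketched simply: given a per-pixel foreground mask from a segmentation network (the network itself is not shown here, and this is our illustration rather than Google's implementation), the new background is blended in wherever the mask says "not a person":

```python
import numpy as np

def swap_background(frame, new_background, mask):
    """Composite the foreground of `frame` onto `new_background`.

    `mask` holds per-pixel foreground probabilities in [0, 1], e.g. the
    output of a person-segmentation network (assumed, not implemented).
    """
    alpha = mask[..., np.newaxis]  # broadcast mask over the RGB channels
    return (alpha * frame + (1 - alpha) * new_background).astype(np.uint8)

# Toy example: a 2x2 frame whose left column is "person" (mask = 1).
frame = np.full((2, 2, 3), 200, dtype=np.uint8)
background = np.zeros((2, 2, 3), dtype=np.uint8)
mask = np.array([[1.0, 0.0], [1.0, 0.0]])
result = swap_background(frame, background, mask)
```

Because the mask is a soft probability rather than a hard cutout, edges such as hair blend smoothly into the new background; the speed challenge Google solved was producing such masks in real time for every frame.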


You can learn more about what an algorithm is through this blog.



“Up Next” feature 


YouTube's "Up Next" feature is one part of the platform that is brimming with artificial intelligence. With YouTube's dataset changing continuously as users upload new videos every minute, the platform's AI had to be quite different from the recommendation engines of platforms like Netflix or Spotify, since it must manage real-time suggestions while fresh data is constantly being added by users.


In response to this challenge, the solution the platform came up with is essentially a two-part system. The first part is candidate generation, in which the algorithm examines the user's history on the platform. The second part is the ranking system, which assigns a score to every candidate video.
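The two-stage shape of such a recommender can be sketched as follows. This is a deliberately simplified, hypothetical illustration: candidate generation narrows the full catalogue using the user's watch history, and ranking scores the survivors. The topic-overlap scoring here is a stand-in for YouTube's actual learned models:

```python
def generate_candidates(catalogue, watch_history, limit=100):
    """Stage 1: shortlist videos sharing a topic with the user's history."""
    watched_topics = {video["topic"] for video in watch_history}
    candidates = [v for v in catalogue if v["topic"] in watched_topics]
    return candidates[:limit]

def rank(candidates, watch_history):
    """Stage 2: score each candidate and sort best-first.

    The score here is just how often the user watched that topic;
    the real system uses a learned model over many signals.
    """
    topic_counts = {}
    for video in watch_history:
        topic_counts[video["topic"]] = topic_counts.get(video["topic"], 0) + 1
    return sorted(candidates,
                  key=lambda v: topic_counts.get(v["topic"], 0),
                  reverse=True)

catalogue = [{"id": 1, "topic": "music"}, {"id": 2, "topic": "cooking"},
             {"id": 3, "topic": "music"}]
history = [{"id": 9, "topic": "music"}]
recommendations = rank(generate_candidates(catalogue, history), history)
```

The split matters for scale: the cheap first stage cuts billions of videos down to hundreds, so the expensive scoring model only ever runs on a shortlist.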


The areas focused on by YouTube’s recommendation system


A few other significant places where the algorithm makes a considerable impact are the user's YouTube homepage, trending videos, notifications and subscriptions.


The primary aim here is not to determine "good" videos but rather to match users with the videos they wish to watch, so that they spend the maximum amount of time on the platform.


As emphasized by Guillaume Chaslot, a former Google employee and founder of AlgoTransparency, an initiative urging greater transparency, the main metric YouTube's algorithm uses to judge a recommendation is watch time. This, he claimed, may be advantageous for the platform and its advertisers but is less beneficial for its users, since it can popularise videos with objectionable content: the more they are streamed, the more they get recommended.


The YouTube recommendation system currently works roughly as follows. In layman's terms, to fill the sidebar with recommended videos, the platform first assembles a shortlist of over a hundred videos by finding those that match the topic and other characteristics of the video the user is currently streaming. It then ranks this list according to the user's preferences, which it learns by feeding the user's clicks, likes and other interactions into a machine learning algorithm.


Within this setup, researchers have targeted a specific issue they call "implicit bias". This bias refers to how recommendations can themselves influence user behaviour, raising the question of whether a video was clicked because the user preferred it or only because it was heavily recommended. Over time, this can have the adverse effect of steering users away from the content they actually prefer to watch.


To counter this bias, the researchers suggested a slight change to the algorithm: every time a user clicks on a video, the video's rank in the recommendation sidebar is also taken into account. Videos near the top of the sidebar are given less weight when fed into the algorithm, while videos ranked much lower, which generally require the user to scroll to reach them, are given more.
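The intuition can be sketched with a simple position-based weighting of click signals: a click on a video the user had to scroll for is stronger evidence of genuine preference than a click on the top suggestion. The logarithmic formula below is purely illustrative, not YouTube's actual debiasing function:

```python
import math

def click_weight(sidebar_rank: int) -> float:
    """Weight a click as training signal; rank 1 is the top of the sidebar.

    Illustrative scheme: clicks further down the list count for more,
    since the user had to scroll past easier options to reach them.
    """
    return math.log2(sidebar_rank + 1)

# (video_id, sidebar rank at the moment of the click)
clicks = [("video_a", 1), ("video_b", 10)]
weighted = {video_id: click_weight(rank) for video_id, rank in clicks}
```

Here the click on `video_b` at rank 10 contributes more training weight than the click on `video_a` at rank 1, which is exactly the correction the researchers proposed: discount clicks the recommender itself made easy.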


You can sneak a peek at our blog on How to Extract YouTube Data?



Training on depth prediction


Equipped with a plethora of data, YouTube videos make a fruitful training ground for artificial intelligence algorithms. Google AI researchers have used over 2,000 "mannequin challenge" videos uploaded to the platform to develop an AI model capable of gauging depth of field in videos. The "mannequin challenge" involved a group of people standing frozen in place, as if time had stopped, while the camera moved through the scene. The resulting depth prediction skills can also help drive enhanced AR (augmented reality) experiences.



Enforcing age restrictions


YouTube has recently announced plans to adopt advanced AI to ensure that youngsters cannot stream videos created for mature audiences.


Until now, the platform had asked creators to flag their own videos with age restrictions, resorting to the algorithm only in extreme cases. Now the platform plans to apply a similar machine learning approach to determine what is suitable for particular age groups.


The platform already offers a kids' application for its under-13 age group, while flagged content, including extremist content, is placed behind age gates. Back in 2017, the platform introduced machine learning technology to remove such content. It is now planning to adopt similar technology to identify videos considered suitable only for mature audiences.





The above points are some of the ways in which YouTube has been employing Artificial Intelligence for streamlining its various tasks and processes. Artificial intelligence has played a pivotal role in developing the platform and in influencing its present features.
