YouTube welcomes “progress” in its automatic removal of jihadist videos
YouTube | The video platform, owned by Google, says its new moderation system has doubled the number of videos deleted in a month.
After the words of June come the actions: on Tuesday, August 1, YouTube released its latest results in the fight against content that glorifies terrorism. As announced in June by Google, its parent company, YouTube now uses machine-learning technology to automatically detect extremist content.
“We are already seeing some progress,” says the company, which states that more than 75% of the videos removed last month for “violent extremism” were taken down before being flagged by humans. Until recently, the large platforms generally waited for users to report problematic content before it was reviewed by a moderation team.
Figures still unclear
During July, the use of this technology, based on machine learning – a field of artificial intelligence – doubled the number of videos removed for violent extremism, according to YouTube. The accuracy of this new system “has improved considerably,” says YouTube, without giving any figures, particularly regarding false positives: “While these tools aren’t perfect, in many cases our systems have proven more accurate than humans at flagging the videos that needed to be removed.”
The company is also hiring more employees for its moderation team, while remaining, once again, very vague. How these teams work, as well as their composition, remains an extremely sensitive subject for the web giants, who refuse to communicate on the matter. Only Facebook has agreed to provide a figure, announcing in May that it was preparing to add 3,000 moderators to a moderation team already 4,500 strong.
YouTube also announced that it has begun implementing the “Redirect Method,” a project already in place at Google. This initiative aims to show Internet users who search for “sensitive” terms videos that deconstruct and debunk jihadist propaganda.
Measures still to come
However, YouTube has yet to put in place its new sanctions against videos “that are not illegal but have been flagged by users as potentially violating our policies on hate speech and violent extremism.” In other words, videos in a gray area: they are not prohibited by YouTube’s rules, but they “contain controversial religious or supremacist content.”
YouTube says it wants to limit their visibility by displaying a warning message before playback and by preventing them from being “recommended” by the platform’s algorithm. They will also be excluded from monetization through YouTube’s advertising program and stripped of certain features, such as comments and likes.
A mechanism that, according to Google, “respects freedom of expression and access to information without promoting extremely controversial viewpoints.” It should be applied “in the coming weeks,” first on desktop and then on mobile, but, YouTube warns, it “will take time to be fully deployed.”
Under pressure from their users as well as from governments, the large web platforms have announced numerous measures in recent months to fight jihadist propaganda, and violent content in general. Progress reports on these measures, like this communiqué published by YouTube, are part of that process and allow the platforms to highlight their actions after having been criticized as passive in the face of the proliferation of such content.