YouTube, which a few months ago lost advertisers because of the terrorism-related content on its platform, says AI is proving to be a big help.
“Over 75% of the videos we’ve removed for violent extremism over the past month were taken down before receiving a single human flag,” the company said in a blog post Tuesday.
YouTube also says machine learning has helped it remove more than twice the number of videos with extremist content at a faster rate.
Still, the company said it has more work to do and is hiring more people to help review and remove such content. It also touted new partnerships with the Anti-Defamation League and other NGOs and institutions as expert consultants.
The company is providing a progress report a month after it outlined four steps it’s taking to deal with the problem. The longtime issue became painfully apparent earlier this year because it hit YouTube’s bottom line.
In March, big companies such as AT&T, Verizon and Johnson & Johnson followed UK brands and started pulling their ads from YouTube and other Google properties after discovering that they were appearing alongside extremist and hate-filled videos. By at least one estimate, the exodus was expected to cost the company hundreds of millions of dollars in ad revenue, although the company declined to confirm that figure.
In its blog post, YouTube also mentioned that it is now trying to counter hateful content with videos that “directly confront and debunk violent extremist messages.” It’s doing that by serving up videos that contradict certain search terms.
Amid all of this, YouTube has been sued for doing too good a job in fighting extremist content: A YouTube channel dedicated to zombie videos says YouTube’s efforts have robbed it of revenue.