YouTube is not always successful in its effort to moderate its massive community, but it’s always trying to improve.
In a blog post today, the company outlined new measures it’s taking this year to make its platform a safer place for minors, who are at a higher risk of exploitation.
First, minors are no longer allowed to live stream unless they are “clearly accompanied by an adult.” Because YouTube can’t monitor every user as they stream, it relies on machine learning technology (think AI) to parse videos and flag content that may violate its policies. Channels that violate the new live streaming policy may lose their ability to stream.
Second, YouTube has been disabling comments on videos featuring minors “to limit the risk of exploitation,” albeit to the chagrin of content creators who rely on their comments sections to engage their communities. YouTube acknowledged this dissent in its blog post, but said it “strongly believes this is an important step to keeping young people safe…”
New technology is helping keep comments sections safe, too – YouTube says its new comment classifier has helped it remove twice as many policy-violating comments as before.
Earlier this year, YouTube outlined its plans to make recommended content more relevant to viewers. As an extension of that effort, the company is adjusting its recommendation filter to exclude videos of minors in risky situations.
YouTube reports that in the first quarter of 2019, it removed more than 800,000 videos for violations of its child safety policies. Additionally, over the last two years, the reports YouTube sent to law enforcement regarding child exploitation have led to more than 6,000 investigations.
YouTube is still a place where pedophiles use Fortnite ads to prey on children, but it’s encouraging to see the company taking steps to make its platform safer.