“Something is wrong on the internet,” declares the title of an essay trending in tech circles. But the issue isn’t Russian ads or Twitter harassers. It’s children’s videos.
The piece, by tech writer James Bridle, was published on the heels of a report from the New York Times that described disquieting problems with the popular YouTube Kids app. Parents have been handing their children an iPad to watch videos of Peppa Pig or Elsa from “Frozen,” only for the supposedly family-friendly platform to offer up some disturbing versions of the same. In clips camouflaged among more benign videos, Peppa drinks bleach instead of naming vegetables. Elsa might appear as a gore-covered zombie or even in a sexually compromising position with Spider-Man.
The phenomenon is alarming, to say the least, and YouTube has said that it’s in the process of implementing new filtering methods. But the source of the problem will remain. In fact, it’s the site’s most important tool — and increasingly, ours.
YouTube suggests search results and “up next” videos using proprietary algorithms: computer programs that, based on a particular set of guidelines and trained on vast sets of user data, determine what content to recommend or to hide from a particular user. They work well enough — the company claims that in the past 30 days, only 0.005 percent of YouTube Kids videos have been flagged as inappropriate. But as these latest reports show, no piece of code is perfect.
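To make the idea concrete, here is a deliberately simplified sketch of how a recommendation pipeline of this kind can work: score candidate videos by overlap with a viewer’s watch history, then filter out anything on a blocklist of flagged content. Every title, tag, and function name below is hypothetical; YouTube’s actual system is far more complex and not public.

```python
# Toy recommender sketch (hypothetical data, not YouTube's real system):
# rank candidates by shared tags with the user's history, skip flagged items.

def recommend(watch_history_tags, candidates, blocklist, top_n=2):
    """Return up to top_n unflagged titles, best tag-overlap first."""
    history = set(watch_history_tags)
    scored = []
    for title, tags in candidates.items():
        if title in blocklist:  # content flagged by humans or a classifier
            continue
        scored.append((len(history & set(tags)), title))
    scored.sort(reverse=True)
    return [title for _, title in scored[:top_n]]

candidates = {
    "Peppa Pig: Vegetables": {"peppa", "kids", "educational"},
    "Peppa Parody (flagged)": {"peppa", "kids", "parody"},
    "Elsa Sing-Along": {"elsa", "kids", "music"},
}
picks = recommend({"peppa", "kids"}, candidates,
                  blocklist={"Peppa Parody (flagged)"})
print(picks)  # the parody is filtered out before ranking
```

The weak point the article describes lives in the blocklist step: if a disturbing parody is never flagged, the scoring logic happily recommends it alongside the genuine videos it imitates.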