Invisible Filter Bubbles

This had not occurred to me. As an algorithm gets better at recommending content that matches and reinforces what a community is looking for, complaints go down, which makes it harder for anyone outside the filter bubble (such as platform moderators) to detect these closed communities in the first place.

The algorithm is doing exactly what it was designed to do, but without any moral compass its overall contribution to society is questionable.
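To make that dynamic concrete, here is a rough back-of-the-envelope simulation. It is my own toy model, not a description of how YouTube actually works: a fixed pool of recommendations is split between people inside a niche community and everyone else, only outsiders ever report the content, and the only thing that changes is how precisely the algorithm targets the niche.

```python
# Toy model of the "invisible filter bubble" dynamic described above.
# Assumptions (mine, not YouTube's): each recommendation lands either inside
# a niche community or outside it, and only outsiders ever report the content.

def simulate(precision, total_recs=100_000, outsider_report_rate=0.02):
    """Return (in_bubble_views, expected_reports) for a given targeting precision.

    precision = fraction of recommendations that land inside the niche community.
    """
    in_bubble = total_recs * precision
    outside = total_recs * (1 - precision)
    reports = outside * outsider_report_rate
    return in_bubble, reports

for precision in (0.50, 0.80, 0.95, 0.99):
    views, reports = simulate(precision)
    print(f"precision={precision:.2f}  in-bubble views={views:>8.0f}  expected reports={reports:>6.0f}")
```

In this toy setup, moving targeting precision from 0.50 to 0.99 cuts expected reports from 1,000 to 20 while in-bubble exposure roughly doubles. The only signal moderators see (reports) shrinks precisely because the problem is getting worse.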

Here’s someone who worked on the YouTube algorithm commenting on this (emphasis mine).

Using recommendation algorithms, YouTube’s AI is designed to increase the time that people spend online. Those algorithms track and measure the previous viewing habits of the user—and users like them—to find and recommend other videos that they will engage with.

In the case of the pedophile scandal, YouTube’s AI was actively recommending suggestive videos of children to users who were most likely to engage with those videos. The stronger the AI becomes—that is, the more data it has—the more efficient it will become at recommending specific user-targeted content.

Here’s where it gets dangerous: As the AI improves, it will be able to more precisely predict who is interested in this content; thus, it’s also less likely to recommend such content to those who aren’t. At that stage, problems with the algorithm become exponentially harder to notice, as content is unlikely to be flagged or reported. In the case of the pedophilia recommendation chain, YouTube should be grateful to the user who found and exposed it. Without him, the cycle could have continued for years.

From "The Toxic Potential of YouTube's Feedback Loop"
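The "users like them" part of that quote is the key engine. In broad strokes it resembles user-based collaborative filtering: find accounts whose watch history overlaps with yours, then surface whatever else those accounts watched. Here is a minimal sketch of that general idea, with made-up data and no claim about YouTube's real system:

```python
# Bare-bones user-based collaborative filtering: recommend videos watched by
# the users whose histories overlap most with the target user.
# The data and similarity measure are illustrative only.

from collections import Counter

watch_history = {
    "user_a": {"v1", "v2", "v3"},
    "user_b": {"v2", "v3", "v4"},
    "user_c": {"v3", "v4", "v5"},
    "user_d": {"v9"},
}

def recommend(target, histories, k=2):
    seen = histories[target]
    # Jaccard similarity between the target's history and every other user's.
    sims = {
        other: len(seen & hist) / len(seen | hist)
        for other, hist in histories.items()
        if other != target
    }
    # Take the k most similar users and tally what they watched that the target hasn't.
    neighbours = sorted(sims, key=sims.get, reverse=True)[:k]
    votes = Counter(v for n in neighbours for v in histories[n] if v not in seen)
    return [video for video, _ in votes.most_common()]

print(recommend("user_a", watch_history))  # -> ['v4', 'v5']
```

Nothing in that sketch knows or cares what the videos contain. It only knows that people with similar histories kept watching, which is exactly why it will serve a tight-knit community more and more of what it engages with.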
