Drawbacks of Automated Web Content Moderation

This is a bit of a throwback to the week of the Super Bowl, but the event was too juicy to pass up writing about. For their Super Bowl ad, DoorDash spent $5.5 million, then bragged about raising just $1 million afterwards. A small news site, Broke-Ass Stuart, reported the story, and it was subsequently posted on reddit.

The reddit thread was suddenly brigaded – meaning that nearly every comment in the thread was hit with a large number of coordinated reports. As /u/Kezika, one of the moderators, explained, the subreddit runs an “if x number of reports, automatically remove” rule, “and the thread is getting absolutely bombarded with reports. Like almost every parent comment in here is hitting the report threshold within a few minutes of being made.”
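To see why this rule is so easy to weaponize, here is a minimal sketch of a report-threshold policy in Python. The threshold value, class, and function names are all hypothetical – this is an illustration of the kind of rule /u/Kezika describes, not Reddit's actual implementation.

```python
from dataclasses import dataclass

REPORT_THRESHOLD = 5  # hypothetical stand-in for the "x" in "x number of reports"

@dataclass
class Comment:
    body: str
    report_count: int = 0
    removed: bool = False

def on_report(comment: Comment) -> None:
    """Handle a single incoming report on a comment."""
    comment.report_count += 1
    # The rule counts reports, not reporters: five reports from a
    # bot farm look exactly like five reports from genuine users.
    if comment.report_count >= REPORT_THRESHOLD and not comment.removed:
        comment.removed = True
        print(f"auto-removed: {comment.body!r}")

# A brigade only needs to repeat the report action to take a comment down.
comment = Comment("DoorDash spent $5.5M on the ad and raised $1M.")
for _ in range(REPORT_THRESHOLD):
    on_report(comment)
```

Because the rule has no notion of who is reporting or why, the cost of removing a comment is just the cost of running enough accounts to cross the threshold.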

A bot farm was apparently being used to report the comments on this post and stifle discussion, which raises the question: who was responsible for the brigading? Was it a group of random internet trolls trying to stir up controversy for kicks? Was it DoorDash itself trying to suppress the story? And what does it mean if corporations with significant financial resources can brigade threads to shut down discussion?

Fortunately, the deletions only added fuel to the fire, making the post even more controversial and raising the story's profile. Still, the episode should prompt us, as users of sites like reddit, to ask whether automatic moderation policies are being implemented effectively and whether they can be weaponized to stifle speech.

Interestingly, there was another minor mishap on Twitter this week: if you used the word “Memphis” at all, you would get automatically banned from the website. This was almost certainly a bug, but it is another example of how automated content moderation can go wrong.
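Keyword-based enforcement fails in exactly this way when a term lands on a blocklist by mistake. Twitter has not said how the “Memphis” bug happened, so the sketch below is purely hypothetical – the blocklist and function are invented to illustrate the general failure mode, not their actual system.

```python
BLOCKLIST = {"memphis"}  # one wrong entry punishes every user who types it

def violates_policy(tweet: str) -> bool:
    """Flag a tweet if any word, lowercased and stripped of trailing
    punctuation, appears on the blocklist."""
    return any(word.strip(".,!?") in BLOCKLIST
               for word in tweet.lower().split())

print(violates_policy("Road trip to Memphis!"))  # True -> account actioned
```

A single bad entry in a list like this instantly turns an ordinary city name into a bannable offense, with no human in the loop to catch it.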
