A technical glitch led to widespread exposure of gory and violent footage in Instagram’s Reels section.
Meta, the parent company of Instagram, has issued an apology after users reported an overwhelming flood of disturbing and graphic content on their Reels feeds. The issue stemmed from an apparent malfunction in Instagram’s algorithm, which curates the content shown to users.
Reels, Instagram’s TikTok-style feature for sharing short videos, was inundated with footage depicting extreme violence, animal cruelty, and even dead bodies. Many Instagram users, including many posting on Reddit, expressed their shock and distress at seeing such graphic material dominating their feeds.
One user recounted: “I just saw at least 10 people die on my Reels.” Among the disturbing footage were videos of a man being crushed by an elephant, a person being dismembered by a helicopter, and someone submerging their face in boiling oil. “Sensitive content” warning screens also appeared on several Reels, indicating that the recommendation system was surfacing material Instagram’s own filters had already flagged as graphic.
Other reported videos included a man being set on fire, a person shooting a cashier, and footage of animals being abused. One user, who ran a biking-related account, noticed a disturbing influx of videos from an account named “PeopleDeadDaily” and similar accounts, which sparked further concern.
Several Reddit users expressed frustration, stating that it felt as though Instagram’s algorithm had “gone rogue” and was pushing an incessant stream of violent content. “It’s like Instagram is now trying to make me question if I accidentally followed a ‘bloodshed and chaos’ account,” one user commented.
Meta responded to the backlash, saying the error had been rectified. A company spokesperson stated: “We have fixed an error that caused some users to see content on their Instagram Reels feed that should not have been recommended. We apologise for the mistake.”
The glitch comes on the heels of recent changes to Meta’s content moderation policies, which have drawn criticism for reducing the amount of content moderation on its platforms. However, Meta said the spike in violent content was unrelated to those adjustments, noting that its guidelines still require particularly violent or graphic material to be removed or shielded behind sensitive content warnings.