On the afternoon of September 10, shortly after right-wing political activist Charlie Kirk was assassinated in front of a crowd at Utah Valley University, videos of the Turning Point USA cofounder getting shot in the neck flooded social media.
The news traveled fast, and so did the videos. Unfortunately, users who did not wish to see the graphic content often encountered it anyway.
“How the fuck is that Charlie Kirk video the first thing I see on Instagram when I opened it?” one user shared on X.
The viral videos, which show the moment of the attack from various angles, as well as blood gushing from Kirk following the bullet’s impact, have found their way onto the feeds of many users—who are now reporting emotional distress.
“Do not watch the Charlie Kirk upclose video. It auto-played on my timeline and I am unwell. Omg. I implore you—do not watch,” one user shared on Threads.
Another X user echoed the sentiment, saying: “For those who haven’t seen the video of Charlie Kirk, please turn off Twitter. I really wish I hadn’t seen it.”
In the immediate aftermath of the shooting, many users flocked to Reddit to learn what was happening. In a series of now-deleted Reddit threads reviewed by Fast Company, several users expressed regret over watching the video and urged others not to do so.
Despite pleas for the videos to be taken down or placed behind content warnings, they remain easily accessible online.
As of this writing, for instance, the videos could still be found on X and Instagram, in some cases with a content warning if clicked on. But when they appeared directly in a feed, the videos auto-played with no warning at all.
Auto-play is the default setting on most popular social media platforms, although most of them offer users the option to turn it off. One notable exception is Meta’s Threads, which launched in 2023 and currently offers no way to disable auto-play videos.
“Incredibly concerning”
A week before the shooting, the Tech Transparency Project (TTP), a research initiative, published a report finding that graphic “fight” content was being pushed to an Instagram account set up to look like it belonged to a teenage user, despite the platform’s safety settings for teens.
The morning after the Kirk shooting, the same account used for the report, which it says was registered to someone born in 2009, surfaced the graphic shooting video upon a search for “Charlie Kirk Video,” and the footage auto-played with no content warning. (Fast Company reviewed a screen recording of the experiment.)
“When you have one of the biggest technology companies on the planet explicitly telling parents that it keeps [teen] accounts safe from that content, yet is pushing graphic assassination videos to teens, that is incredibly concerning,” Katie Paul, director of TTP, tells Fast Company.
With videos of Kirk’s killing still showing up on children’s social media accounts meant to have safeguards that limit sensitive and graphic content, it comes as no surprise that they remain on the feeds of adults as well. But advocates for social media safety say large platforms should be doing a better job of protecting users from viewing the content accidentally—or at least warning them when something is explicit.
“They’re a public service,” Stephen Balkam, founder and CEO of the Family Online Safety Institute, says of the platforms. “They have huge responsibilities for what they allow on their platforms.”
Balkam notes that social media sites have taken steps to better police violent content in the past. He cites 2014, when videos depicting beheadings by the terrorist group ISIS circulated widely across platforms, sparking discussions about the need for heavier content moderation.
During the COVID-19 pandemic, social media companies faced further pressure to crack down on dangerous health misinformation.
However, companies like X (formerly Twitter) and Meta Platforms (owner of Facebook and Instagram) have since shifted toward less aggressive efforts.
When asked about the video circulating on its platforms, a Meta spokesperson referred Fast Company to the company’s policies on violent and graphic content, saying those guidelines apply in this case. The guidelines say Meta removes “the most graphic content and adds warning labels to other types of content so that people are aware it may be sensitive before they click through.”
Representatives for Google-owned YouTube said they are “closely monitoring our platform and prominently elevating news content on the homepage, in search and in recommendations, to help people stay informed.”
Fast Company reached out to X but did not receive a response by the time of publication.
Mental health impact
With many users reporting distress, experts and advocates are raising concerns over the long-term effects of exposure to violence.
“What we have found over the years is that repeated exposure to graphic images can have negative psychological and physical health consequences,” Roxane Cohen Silver, professor of psychology, medicine, and public health at the University of California, Irvine, tells Fast Company.
Silver has previously researched the mental and physical impact of stressful events and seeing graphic and violent content, including footage from the Boston Marathon bombing and the ISIS beheading videos.
“I certainly would encourage people to recognize that there can be psychological consequences of this kind of exposure, and monitor and moderate that exposure themselves,” she adds.
Exposure can lead to difficulty falling asleep, nightmares, and other forms of acute stress, and with repeated viewing those reactions can escalate into physical symptoms.
Balkam also raised concerns about prolonged exposure to violent content, which he points out can lead to desensitization or even incite further violence.
“So it’s about as bad as it gets,” he adds. “And for this to happen at a time when troops are on the streets of [Washington, D.C.] and maybe coming to your city. It just heightens the sense of, Oh, my God, where are we going as a country?”
Paul echoed concerns over the larger impacts of extreme graphic imagery boosted by social media. “This is not just an epidemic of violence in America that we have to deal with, but also the algorithmic amplification of that violent content to people who have no interest in seeing it,” she says.