Widespread availability of graphic Charlie Kirk shooting video highlights content moderation challenges
Within moments of Charlie Kirk being shot during a college event in Utah, graphic video of the attack was online, from several angles, in slow motion and at real-time speed. Millions of people watched, sometimes whether they wanted to or not, as the videos autoplayed on social media platforms.
Video was easy to find on X, on Facebook, on TikTok, on Instagram, on YouTube, and even on President Donald Trump’s Truth Social. The platforms generally said they were removing at least some of the videos if they violated their policies, for instance if the poster was glorifying the killing. In other cases, they applied warning screens to caution people that they were about to see graphic content.
Two days after Kirk’s death, videos were still easily found on social media, despite calls to remove them.
“It was not immediately obvious whether Instagram, for example, was just failing to remove some of the graphic videos of Charlie Kirk being shot or whether they had made a conscious choice to leave them up. And the reason that was so hard to tell is that, obviously, those videos were circulating really widely,” said Laura Edelson, an assistant professor of computer science at Northeastern University.
The episode illustrates the content moderation challenges platforms face in handling fast-moving, real-time events, complicated here by the death of a polarizing conservative activist who was shot in front of a crowd armed with smartphones recording the moment.
Ambiguous policies
It’s an issue social media companies have dealt with before. Facebook was forced to contend with livestreamed violence when a gunman broadcast a mass shooting in New Zealand in 2019. People have also livestreamed fights, suicides and murders.
As on other platforms, Meta’s rules don’t automatically prohibit posting videos like that of Kirk’s shooting, but warning labels are applied and the videos are not shown to users who say they are under 18. The parent company of Instagram, Facebook and Threads referred a reporter to its policies on violent and graphic content, which it indicated would apply in this case, but had no further comment.
YouTube said it was removing “some graphic content” related to the event if it lacked sufficient context, and restricting other videos so they could not be seen by users under 18 or those who are not signed in.
“We are closely monitoring our platform and prominently elevating news content on the homepage, in search and in recommendations to help people stay informed,” YouTube said.
In a statement, TikTok said it is “committed to proactively enforcing our Community Guidelines” and has “implemented additional safeguards to prevent people from unexpectedly viewing footage that violates our rules.”
TikTok also restricted the footage from its “For You” feed so that people have to seek it out if they want to see it, added content warning screens and worked to remove videos showing graphic, close-up footage.
Rewarding engagement
Social media platforms’ algorithms reward engagement. If a video gets a lot of reaction, it moves to the top of people’s feeds, where more people see it and engage with it, continuing the cycle.
“I mean, this is the world that we have all made. This is the deal we all made. The person who gets to decide what’s newsworthy on Instagram is Mark Zuckerberg. The person who gets to decide what stays up on X is Elon Musk. They own those platforms, and they get to decide what is on them. If we want another world, well, then someone else needs to make it,” Edelson said. “The fact is that we live in a world where the most important channels for what information circulates are controlled by single individuals.”
And it is these individuals who decide what to prioritize. Meta, X and other social platforms have cut back on human content moderation in recent years, relying on artificial intelligence that can both over- and under-moderate.
Regulations vary by region
The U.S. has no blanket regulation prohibiting violent content from being shown on the internet, although platforms generally attempt to restrict minors from seeing it. This doesn’t always work, since users’ ages are not always verified and kids often lie about their ages when signing up for social platforms.
Authorities in other places have drawn up laws that require social media companies to do more to protect their users from online harm. Britain and the European Union both have wide-ranging laws that make tech platforms responsible for “online safety.”
The British Online Safety Act requires platforms, even those not based in the United Kingdom, to protect users from more than a dozen types of content, from child sexual abuse to extreme pornography.
Content that depicts a criminal offense, such as a violent attack on someone, isn’t necessarily illegal, but platforms would still have to assess whether it falls afoul of other banned categories, such as content encouraging terrorism.
The British government says the rules are especially designed to protect children from “harmful and age inappropriate content” and give parents and children “clear and accessible ways” to report problems online.
That includes material that “depicts or encourages serious violence or injury,” which online services are required to prevent children from seeing.
Violations of the U.K. rules can be punished with fines of up to 18 million pounds ($24.4 million) or 10% of a company’s annual revenue, and senior managers can also be held criminally liable for not complying.
The U.K. law is still fairly new; it only began taking effect in March and is being rolled out in stages.
The rest of Europe has a similar rule book that took effect in 2023.
Under the European Union’s Digital Services Act, tech companies are required to take more responsibility for material on their sites, under threat of hefty fines. The biggest online platforms and search engines, including Google, Facebook, Instagram and TikTok, face extra scrutiny.
Platforms should give users “easy-to-use” mechanisms to flag content deemed illegal, such as terrorism and child sexual abuse material, Brussels says, adding that platforms then have to act on reports in a “timely manner.”
But it doesn’t require platforms to proactively police for, and take down, illegal material.
___
AP Media Writer David Bauder contributed to this story.
By BARBARA ORTUTAY and KELVIN CHAN
AP Technology Writers