Copyright Just Security

In the wake of the Charlie Kirk assassination and other recent attacks, the United States faces a resurgence of politically motivated violence that is deeply intertwined with the digital sphere. Digital Aftershocks, our new report at the NYU Stern Center for Business and Human Rights, examines how extremists across the ideological spectrum — far-right, far-left, violent Islamist, and nihilistic violent extremists (NVEs) — exploit acts of violence to recruit followers, justify their ideologies, and sustain propaganda networks. Our findings are grounded in open-source intelligence collected from March to September 2025, a period marked by deadly attacks in Utah, Minneapolis, and Washington, D.C. While scholars and policymakers have long debated whether online rhetoric “causes” real-world violence, this report looks primarily at the middle of that cycle: how violent incidents are transformed into digital fuel that normalizes aggression and prepares the ground for future attacks.

Cross-Ideological Threat Landscape

We found that extremist networks are increasingly converging around similar tactics and, sometimes, targets. Far-right channels used the stabbing of 17-year-old Austin Metcalf, the killing of 23-year-old Iryna Zarutska, and the assassination of Charlie Kirk to advance narratives of white victimhood and calls for revenge. Far-left networks, dominated during the monitoring period by militant pro-Palestine activism, used similar methods: doxxing, dehumanizing rhetoric, and glorification of attacks such as the shooting outside the Capital Jewish Museum in Washington, D.C. The analysis also captures the growing threat from NVEs, individuals motivated not by ideology but by misanthropy and the pursuit of viral infamy. These actors blur the line between political violence and performance, celebrating mass shootings regardless of motive and borrowing aesthetic cues from previous attackers. Their actions highlight a new, troubling frontier: violence as content.
Violent Islamist groups, by contrast, maintained a lower but persistent online presence, forced into smaller, decentralized networks on applications such as Rocket.Chat after waves of moderation crackdowns. This disparity reveals an enforcement asymmetry. Foreign Islamist groups face aggressive monitoring, while domestic extremists using similar, if not more explicit, rhetoric often operate with relative impunity.

Cross-Platform Strategy and Adaptation

Across the ideological spectrum, one consistent finding stands out: violent actors use multi-platform strategies to balance reach and security. Telegram has become a central coordination hub, while X serves as the principal amplifier for mainstream visibility. Encrypted or decentralized platforms like Rocket.Chat and SimpleX provide operational cover, while video-sharing platforms such as YouTube or TikTok are exploited for viral reach. The report documents the practice of “out-linking” — embedding links that direct users to content on another platform — to evade moderation and preserve content. This cross-platform strategy ensures that when one account or channel is taken down, the network’s connective tissue remains intact. As long as platforms offer complementary features — some maximizing virality, others privacy or monetization — extremist networks will adapt.

Threats, Incitement, and the Legal Line

A central aim of Digital Aftershocks is to bring precision to an often-muddled debate about the legality of online speech. U.S. constitutional doctrine distinguishes between true threats, which are statements expressing a genuine intent to commit violence against a specific target, and incitement, which is speech likely to produce imminent lawless action. Both categories fall outside First Amendment protection. But much of the rhetoric circulating online, while dangerous, remains lawful.
To navigate this complexity, the report draws on international human rights frameworks like the Rabat Plan of Action, and on the Dangerous Speech Project, which offers analytical tools for assessing when speech meaningfully increases the risk of violence. The goal is not to criminalize offensive expression but to help policymakers and platforms act consistently and proportionately without crossing into censorship.

Key Recommendations

Digital Aftershocks outlines practical, rights-respecting measures for platforms and policymakers. Among them:

- Adopt precise definitions of threats and incitement to ensure consistent platform enforcement.
- Implement privacy-preserving reporting tools to let users flag illegal content and ensure timely review of those reports.
- Use metadata responsibly, collecting only what is necessary for safety purposes and deleting it after defined retention periods.
- Mandate transparency and procedural standards requiring platforms to publish detailed moderation and abuse-detection reports.
- Evaluate extremist and terrorist designation frameworks so enforcement applies consistently across ideologies.
- Recognize the limits of legal remedies, distinguishing harmful but protected speech from illegal threats or incitement, and clarify protocols for platform-law enforcement cooperation.
- Support counter-speech and civic resilience, investing in partnerships that promote credible voices and reduce polarization.

A Bipartisan Imperative

The surge of political violence in the United States shows no sign of abating. Each new incident is followed by a wave of digital celebration, intimidation, and imitation. Yet responses remain polarized and often superficial. The patterns we document cut across ideology and party lines. Violent intimidation online threatens everyone’s safety, regardless of political identity.
Our hope is that Digital Aftershocks helps policymakers, platforms, and civil society move beyond reflexive partisanship and toward sensible, bipartisan solutions that safeguard both public safety and freedom of expression.