On March 15, 2019, a gunman attacked two mosques in Christchurch, New Zealand, during Friday prayers, killing 51 people and injuring dozens more. What made this event particularly horrifying and historically significant was how it unfolded: while the attack was taking place, it was being streamed across social media platforms in real time.
A digital-first act of terror
The perpetrator, a 28-year-old Australian man, meticulously planned the attack for both physical and online impact. He live-streamed the massacre on Facebook Live, using a helmet-mounted camera to broadcast his actions. Within minutes, the footage spread across YouTube, Twitter, and Reddit, despite immediate efforts by platforms to remove it.
The attacker also posted a manifesto online filled with extremist and white-supremacist rhetoric, borrowing from internet memes and conspiracy theories. It was clear that the intent wasn’t just to kill, but to amplify hate through virality.
The role of online radicalization
Investigations revealed that the attacker had been active in far-right online communities, including message boards like 8chan, where extremist ideologies thrive with little moderation. These platforms often function as echo chambers, normalizing hate speech, dehumanizing rhetoric, and calls to violence.
Experts in digital extremism note that the Christchurch attack demonstrated how social media can act as both a recruitment tool and a stage for terrorists seeking notoriety. The perpetrator’s goal was to inspire copycats, and unfortunately, later attackers (such as those in El Paso in 2019 and Buffalo in 2022) referenced him directly.
A global reaction
In the aftermath, New Zealand’s Prime Minister Jacinda Ardern responded with compassion and swift action. Within a month, the country passed stronger gun laws, and two months after the attack Ardern co-launched the Christchurch Call to Action with France, a global initiative urging governments and tech companies to prevent the use of social media for terrorism and violent extremism.
Tech giants like Facebook, Google, and Twitter pledged to improve their moderation and detection systems for violent content. Meta, in particular, has since shifted its moderation approach on Facebook: in January 2025, it announced it would end third-party fact-checking in the United States and focus automated moderation on “illegal and high-severity violations”.
Yet questions remain about the balance between free speech and platform responsibility, as extremist content continues to adapt faster than moderation algorithms can respond.
The challenges in investigating live streams
From a digital-investigation and crime-analysis perspective, live streams on social media bring several challenges:
- Live video presents real-time risk: violent acts, harassment, hate speech, or coordinated abuse could be broadcast before moderation can intervene.
- Moderation lag: even with automated systems, live content may be viewed and shared externally before removal.
- Storage limits matter: with retention limits such as the 30-day deletion policy for live videos, preserving live-stream evidence for investigations becomes difficult unless a copy is proactively downloaded and documented (see the sketch after this list).
- The moderation policy change (Meta’s “more speech, fewer mistakes” approach) may mean less aggressive filtering of borderline content, making live streams a more fertile venue for disinformation or extremist speech.
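To make the evidence-preservation point concrete, here is a minimal Python sketch of how an investigator might document a lawfully captured copy of a live stream before it disappears: the file is hashed with SHA-256 and a timestamped manifest entry is written so the copy can later be shown to be unaltered. The file path, case identifier, and manifest format are hypothetical; real workflows follow agency-specific chain-of-custody and lawful-access procedures.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def archive_capture(capture_path: str, case_id: str) -> dict:
    """Hash a locally captured live-stream file and record a manifest entry.

    Assumes the stream has already been lawfully downloaded to capture_path;
    the manifest layout is purely illustrative.
    """
    path = Path(capture_path)

    # Hash in 1 MiB chunks so large video files do not have to fit in memory.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)

    record = {
        "case_id": case_id,
        "file": path.name,
        "size_bytes": path.stat().st_size,
        "sha256": digest.hexdigest(),
        "archived_at_utc": datetime.now(timezone.utc).isoformat(),
    }

    # Append the record to a JSON-lines manifest stored next to the capture.
    manifest = path.with_name(path.stem + ".manifest.jsonl")
    with manifest.open("a", encoding="utf-8") as out:
        out.write(json.dumps(record) + "\n")
    return record


if __name__ == "__main__":
    # Hypothetical example: a stream copy already saved locally for a case file.
    print(archive_capture("captures/stream_example.mp4", case_id="CASE-0001"))
```

Hashing at capture time matters because the original post may be deleted before an investigation begins; the digest lets the preserved copy be verified against any later claim of tampering.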
Cybersecurity and digital investigation professionals now emphasize collaborative intelligence sharing among law enforcement, tech firms, and digital watchdogs to identify extremist networks before they mobilize.
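One concrete form this intelligence sharing can take is hash-sharing: platforms and watchdogs exchange fingerprints of known violent content so that re-uploads can be flagged quickly. The sketch below is a deliberately simplified illustration using exact SHA-256 digests and an in-memory set; production systems rely on perceptual hashes that survive re-encoding, cropping, and watermarking, which exact hashing does not.

```python
import hashlib
from pathlib import Path

# Illustrative shared blocklist of SHA-256 digests of known violent videos.
# In practice this would be a vetted, access-controlled database of
# perceptual hashes shared between platforms, not plain file digests.
SHARED_HASH_LIST: set[str] = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder
}


def sha256_of_file(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()


def should_flag_upload(path: str) -> bool:
    """Flag an upload for human review if its digest matches the shared list."""
    return sha256_of_file(path) in SHARED_HASH_LIST


if __name__ == "__main__":
    # Hypothetical upload path; a match would route the file to reviewers
    # instead of publishing it immediately.
    print(should_flag_upload("uploads/new_video.mp4"))
```

Routing matches to human review rather than deleting them automatically reflects the free-speech-versus-responsibility tension noted above; exact matching also breaks as soon as a clip is re-encoded, which is why shared databases of perceptual hashes and human escalation paths remain necessary.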
The need for more digital literacy
The Christchurch mosque shooting showed how online hate can quickly manifest as real-world violence. Combating this phenomenon requires not only smarter technology but also digital literacy, education, and stronger community resilience against extremist narratives.