The Digital Harm Report
Chapter 04

AI-Generated Content: The New Frontier

AI-generated CSAM videos surged 26,385% in 2025. Detection systems built for traditional CSAM cannot identify novel synthetic material. Legal frameworks are racing to catch up.


AI-generated pornography: scale and impact

§04.01

The proliferation of AI-generated pornographic content represents a paradigm shift in the landscape of online sexual material. An estimated 95% of all deepfakes are non-consensual pornography, with 99% targeting women. Deepfake pornography videos grew 464% between 2022 and 2023 (Inside the Porn).

The hyper-customizability of AI-generated content may accelerate desensitization and tolerance cycles that characterize pornography addiction. One therapist specializing in sex addiction reports that approximately one-third of clients now use AI-generated erotica in some form (Recovery Unplugged), while clinicians at Fifth Avenue Psychiatry note that AI companion chatbots simulating emotional connection may reinforce fantasy over reality and impede real-world relationship development.

AI-generated CSAM: the emerging crisis

§04.02

The Internet Watch Foundation's 2026 report documents an alarming escalation: 8,029 AI-generated CSAM images and videos were assessed in 2025, with AI-generated CSAM videos surging from 13 in 2024 to 3,443. Among AI-generated videos, 65% depicted Category A content (the most extreme), and 97% depicted girls (IWF).

NCMEC's 2025 data revealed 21.3 million total CyberTipline reports, with 1.5 million indicating a generative AI nexus, over 7,000 reports of users generating or possessing AI-generated CSAM, and 145,000+ reports of users employing AI to alter CSAM (NCMEC). The real-world impact on minors is tangible: 1 in 10 minors knows someone who has used AI tools to generate nude images of other children (Thorn).

Data integrity: the Stanford CIS findings

§04.03

A critical nuance emerged in January 2026 when Stanford's Center for Internet and Society analyzed the NCMEC reporting data. The frequently cited figure of 485,000 “AI-related” NCMEC reports from the first half of 2025 was found to be misleading: 380,000 of those reports originated from Amazon, and none of Amazon's reports involved AI-generated CSAM. Instead, they were hash hits to known CSAM found in AI training data.

Stanford concluded that “nearly 80% of all 'Generative AI' CyberTipline reports involved no AI-generated CSAM at all” (Stanford CIS). This finding underscores the importance of data integrity in shaping policy responses — inflated statistics risk misallocating resources and distorting public understanding of the actual threat landscape.

Detection challenges and new tools

§04.04

Traditional hash-matching systems like Microsoft's PhotoDNA — which compares file fingerprints against verified CSAM databases with a false positive rate of approximately 1 in 50 billion — are fundamentally unable to detect AI-generated CSAM since it constitutes novel material with no existing hash signature (Microsoft). This gap has driven investment in AI-based detection: the DHS Cyber Crimes Center awarded a $150,000 contract to Hive AI specifically for AI-generated CSAM detection (MIT Technology Review).
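The gap described above follows directly from how hash matching works: a detector can only flag files whose fingerprints already exist in a verified database, so freshly generated material never matches. A minimal sketch of the principle, using a plain cryptographic digest as a stand-in for PhotoDNA's proprietary perceptual fingerprint (which, unlike SHA-256, tolerates resizing and re-encoding); the database contents and file bytes here are hypothetical:

```python
import hashlib

# Hypothetical database of fingerprints for known, verified material.
# A SHA-256 digest stands in for a perceptual hash for illustration.
KNOWN_HASHES = {
    hashlib.sha256(b"known-harmful-file-bytes").hexdigest(),
}

def is_known_material(file_bytes: bytes) -> bool:
    """Flag a file only if its fingerprint is already in the database."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_HASHES

# A previously catalogued file matches...
print(is_known_material(b"known-harmful-file-bytes"))      # True
# ...but novel AI-generated output has no existing hash, so it passes.
print(is_known_material(b"freshly-generated-file-bytes"))  # False
```

The false-positive rate of such a system can be extremely low precisely because matching is a lookup against verified entries; the trade-off is zero recall on material no one has catalogued yet.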

Thorn's Safer platform represents the most scaled detection effort. In 2025, Safer processed 415.4 billion files, detecting approximately 1.5 million known CSAM files through hash matching and using AI to flag 3.84 million potential novel CSAM files for human review. More than 80 online platforms use Safer, which maintains a hash library of 6.3 million image hashes and 64 million video hashes (Thorn).
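The figures above imply a two-stage triage: deterministic hash matching handles known material, while a machine-learning classifier routes probable novel material to human reviewers rather than blocking it automatically. A minimal sketch of that flow; the function names, scores, and threshold are illustrative assumptions, not Safer's actual API:

```python
from typing import Callable

# Hypothetical score above which a file is queued for human review.
REVIEW_THRESHOLD = 0.8

def triage(file_id: str,
           fingerprint: str,
           known_hashes: set,
           classifier: Callable[[str], float]) -> str:
    """Route a file: hash match -> block; high ML score -> human review."""
    if fingerprint in known_hashes:
        return "match-known"        # deterministic: verified hash hit
    if classifier(file_id) >= REVIEW_THRESHOLD:
        return "queue-for-review"   # probable novel material; human decides
    return "no-action"

# Toy classifier standing in for a trained model.
scores = {"a.jpg": 0.95, "b.jpg": 0.10}
print(triage("a.jpg", "deadbeef", {"cafebabe"}, scores.get))  # queue-for-review
print(triage("b.jpg", "cafebabe", {"cafebabe"}, scores.get))  # match-known
```

Keeping a human in the loop for the classifier path reflects the asymmetry in confidence: a hash hit is near-certain, while an ML score is probabilistic and the flagged volume (3.84 million files) shows why review capacity becomes the bottleneck.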

Apple's abandoned NeuralHash system illustrates the detection dilemma. Announced in August 2021, paused a month later, and formally abandoned in December 2022, NeuralHash would have performed client-side CSAM scanning of iCloud Photos. Apple concluded that it could not deploy the system "without ultimately jeopardizing the security and privacy of our users." A class-action lawsuit filed in December 2024 alleges that Apple's abandonment facilitates CSAM proliferation on iCloud (CNET). Meanwhile, the EU backed down on mandatory CSAM detection orders in November 2025, opting for mitigation measures instead (9to5Mac).