It started with a buzz — a sound so ordinary that Adrianna, a 14-year-old high school freshman, hardly gave it a second thought. But as she walked down the hallway, clutching her phone, she noticed the whispers and snickers from classmates. Something wasn’t right.
When Adrianna opened the Instagram notification, her world fell apart. A video showed her engaging in sexually explicit acts with someone she had never met. The video wasn’t real. It was created by generative artificial intelligence (AI) — hyper-realistic, utterly convincing, and devastatingly fake. With a single tap on her phone, she was left overwhelmed with pain, embarrassment, and shame. Desperate for comfort, she called home. Her mom answered, her voice filled with worry, “What’s wrong, sweetie?” Adrianna hesitated, words caught in her throat, until finally, one haunting question escaped: “How can something so fake look so real?”
The Rights of the Child in a Digital World
This year, on World Children’s Day, the United Nations is calling on adults to “listen to the future” by hearing children’s voices and prioritizing their needs. Generative AI has become a powerful tool for creativity and innovation. Yet, it poses a grave risk to children in the wrong hands. Hyper-realistic, “deepfake” AI-generated child sexual abuse material (CSAM) is becoming increasingly prevalent. While Adrianna’s story is fictional, it highlights a terrifying reality many children face today.
The United Nations Convention on the Rights of the Child (UNCRC) enshrines every child’s right to protection from exploitation and abuse. Children have a right to dignity and self-expression, but generative AI threatens to erode both by allowing bad actors to produce harmful content that floods law enforcement with fake CSAM cases, hinders victim identification, and fuels harassment, blackmail, and scams.
Section 230 and the Global Implications of US Law
Although the US has not ratified the UNCRC, its laws and policies heavily influence global technology regulation and child protection standards. Section 230 of the Communications Decency Act is a prime example. Though Section 230 is a US law, American companies like Meta, X, and Google host much of the world’s social media content, so the policies, systems, and software these companies develop to comply with new US content moderation regulations set a precedent and influence practices far beyond US borders.
In her book The Fight for Privacy, Danielle Citron relates how attorneys in Italy and law enforcement officials in South Korea told her that they cannot force websites to take down images of their clients or citizens because courts in their countries do not have jurisdiction over US-based companies. Foreign authorities say that “when non-US sites remove nonconsensual intimate images, perpetrators do the next best thing — they post the images on US sites.” Images are likely to remain online despite victims’ complaints, and “perpetrators can always torment victims on sites hosted in the United States, and victims’ home countries can’t do anything about it.”
Generative AI companies are likely to invoke Section 230 as a shield against liability. Section 230(c)(1) provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In other words, providers and users of “interactive computer services” cannot be held liable as the “publisher or speaker” of information another person provides. An “interactive computer service” is any service that “provides or enables computer access by multiple users to a computer server.”
Section 230 typically protects platforms from liability for user-generated content, shielding them from being treated as the publisher or speaker. However, this immunity does not apply if the platform itself acts as an “information content provider” by contributing to the creation or development of the content. Courts assess this by examining whether the platform was, in whole or in part, responsible for the content’s creation. The key question is whether the platform’s role goes beyond hosting and into developing the content.
Courts have yet to address how Section 230 applies to claims involving generative AI, but recent cases have pushed its boundaries. In Gonzalez v. Google LLC, the plaintiff argued that YouTube’s algorithms aided ISIS by amplifying its content, contributing to a terrorist attack that killed his daughter. The Ninth Circuit dismissed the case under Section 230, but the Supreme Court vacated and remanded it in light of Twitter, Inc. v. Taamneh. During oral arguments in Gonzalez, however, Justice Gorsuch questioned whether Section 230 could shield AI-generated content, suggesting the law’s limitations in a “post-algorithm world.”
The Third Circuit Court of Appeals recently highlighted the potential harm of algorithmically curated content and the evolving issue of accountability in Anderson v. TikTok, Inc. The disturbing case centered on whether TikTok could claim Section 230 immunity in a lawsuit filed by the estate of a child who died attempting the “Blackout Challenge,” a choking game trend promoted through TikTok’s algorithm-driven “For You Page” (FYP). The plaintiff argued that TikTok was liable for its algorithm’s recommendation of such harmful content to minors.
The court sided with the plaintiff, ruling that TikTok’s algorithmic recommendations were not shielded by Section 230. It reasoned that curating and tailoring FYP content constituted TikTok’s own expressive activity, making it a creator of unique, first-party content rather than a mere conduit for third-party speech.
Legal Challenges of AI-Generated CSAM
The way platforms like Facebook or TikTok handle data differs significantly from how generative AI models do. Generative AI systems operate along a spectrum, from producing largely original output to closely following what a user’s prompt specifies. Whether they qualify as “information content providers” and lose Section 230 immunity depends on their role in shaping the specific content. Courts are likely to rule that Section 230 immunity does not apply if the generative AI contributes, “in whole or in part,” to creating harmful content. For example, if the model generates entirely original output, such as illegal material, in response to an otherwise lawful prompt, its role would likely be seen as a material contribution to the illegal content, removing Section 230 protection.
The question remains: How do you prove a crime when the victim does not exist? US laws like 18 U.S.C. §§ 2256 and 1466A, which criminalize the production or distribution of material depicting minors in sexual acts, presuppose real victims. In New York v. Ferber, the Supreme Court ruled that child pornography is excluded from First Amendment protection, citing the state’s strong interest in protecting minors, the permanent harm caused by distributing such material, its lack of value, and the historical precedent of restricting harmful speech. Since then, courts have consistently held that “child pornography is not lawful ‘information provided by another information content provider’ as contemplated by Section 230 . . . Rather, it is illegal contraband, stemming from the sexual abuse of a child, beyond the covering of First Amendment protection.” The Supreme Court has held that “everyone who reproduces, distributes, or possesses the images of the victim’s abuse . . . plays a part in sustaining and aggravating this tragedy.”
In the case of generative AI, no actual children are involved. A close analogy could be made to Ashcroft v. Free Speech Coalition, where the Court held that virtual depictions of minors in sexual acts, created without real children, were protected by the First Amendment. While the Court acknowledged that “the images can lead to actual instances of child abuse,” such as pedophiles using videos to encourage children to engage in sexual activity, it still called the causal link “contingent and indirect” and refused to ban products and activities solely because of their potential immoral use. However, when real children are used in any way, courts have protected the victims. The Second Circuit distinguished Free Speech Coalition in United States v. Hotaling, holding that using technology to superimpose the faces of real children onto the bodies of adults engaged in sexual acts does not qualify as protected speech under the First Amendment.
Similarly, using AI and real children to create child pornography is illegal. A child psychiatrist was found guilty of sexual exploitation of a minor and of using AI to digitally alter clothed images of minors to create child pornography. Fabricating nude images of real people from photos of their faces has already created a serious dilemma for schools.
Federal prosecutors brought the first federal charge involving CSAM produced entirely through AI against a Wisconsin man in May 2024, alleging that he used a popular AI image generator to create thousands of explicit images of children. The case underscores a largely untested legal approach that federal officials plan to explore further, asserting that AI-generated images should be treated similarly to real-world recordings of child sexual abuse. “CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children,” said Deputy Attorney General Lisa Monaco.
Obscenity Laws and the Miller Test
While some states are enacting laws to criminalize AI-generated CSAM in response to technological advancements, many still have not addressed this legal gap. The challenge lies in the inevitable First Amendment battles these laws will face. While child pornography is not protected under the First Amendment, cases involving AI-generated content without identifiable victims force prosecutors to think outside the box and turn to obscenity laws, which rest on far more subjective interpretations, to build their cases.
Given the novelty of AI-generated content and its incompatibility with Free Speech Coalition, prosecutors should turn to the three-part obscenity test from Miller v. California: (1) whether the average person, applying contemporary community standards, would find that the work, taken as a whole, appeals to the prurient interest; (2) whether the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by the applicable state law; and (3) whether the work, taken as a whole, lacks serious literary, artistic, political, or scientific value. AI-generated CSAM fails all three prongs: material explicitly depicting minors engaged in sexual conduct appeals to the prurient interest, is widely considered repugnant and patently offensive by contemporary community standards, and lacks any serious value to society. As the Court said in Ferber, “We consider it unlikely that visual depictions of children performing sexual acts or lewdly exhibiting their genitals would often constitute an important and necessary part of a literary performance or scientific or educational work.” Therefore, generative AI-created child pornography should be treated as illegal under 18 U.S.C. § 1466A, just like any other form of child pornography.
Close the Loopholes: The Need for Global Action and Accountability
World Children’s Day reminds us to reflect on the urgent need to protect children from the harms posed by advancing technologies like generative AI. While AI offers boundless opportunities for creativity and progress, it also brings new risks. Adrianna’s fictional story may seem extreme, but it reflects a chilling reality many children face today. Courts and policymakers must carefully balance Section 230’s original purpose with the need to hold platforms accountable when their algorithms and AI systems contribute to harmful content.
The call to action is clear: governments, tech companies, and individuals must work together to close legal loopholes, enforce stricter regulations, and prioritize the welfare of children in the face of rapidly evolving technology. Every child deserves a world where their voice is heard, their rights are protected, and their safety is never compromised.