Artificial intelligence (AI) is rapidly changing how images and content are created and, in some cases, misused, posing serious and evolving risks, especially for children.
One of the most concerning developments is the rise of AI-generated child sexual abuse material (CSAM), including so-called “deepfake” images. These images are often created by taking real photographs of children and manipulating them into sexually explicit content using widely available software.
While these images may be digitally fabricated, the harm they cause is anything but artificial.
What Is Generative Artificial Intelligence (GAI)?
Many of the risks associated with AI-generated CSAM stem from a specific type of technology known as generative artificial intelligence (GAI).
Artificial intelligence (AI) is a broad term for computer systems designed to perform tasks that typically require human intelligence, such as recognizing images or processing language. Traditional AI systems analyze or classify existing information; they do not produce original ideas or content. Generative AI is a subset of AI that goes a step further by creating entirely new content, including images, videos, and text.
According to the National Center for Missing & Exploited Children (NCMEC), GAI tools can create highly realistic new images using existing data, often with little technical skill required.
While these tools have legitimate uses, they are increasingly being misused to create sexually explicit content involving children, even when no real image originally existed. Over the past two years, NCMEC’s CyberTipline has received more than 70,000 child sexual exploitation reports involving GAI, and they expect the numbers to grow.
A New Form of Sexual Exploitation
AI-generated CSAM typically begins with an ordinary image, often pulled from social media or shared among peers. Using AI tools, that image can be altered to create a nude or sexually explicit version that appears realistic.
According to the NCMEC, this type of technology is also being used in several other harmful ways. Offenders may create explicit images to harass or bully victims, use fake accounts powered by AI to communicate with children, or generate images for extortion.
In some cases, individuals have used AI-generated images to blackmail victims, threatening to share fabricated explicit content unless additional images, sexual acts, or money are provided. This form of coercion, often referred to as sextortion, has become an increasing concern for families and law enforcement.
In many cases, the victim has no knowledge that the AI-generated image exists until it has already been circulated.
This form of sexual abuse is becoming increasingly common in school settings, where students have used AI tools to create explicit images of classmates. Once shared through text messages or social media, these images can spread rapidly, leaving victims exposed to humiliation, bullying, and long-term emotional distress.
AI Legal Landscape Is Evolving
Louisiana has already taken steps to address this issue, passing a law in 2023 that criminalizes certain forms of computer-generated sexual abuse material involving minors. However, lawmakers are now working to strengthen those protections as new cases expose gaps in the law.
Proposed legislation would increase penalties for those who possess or distribute AI-generated explicit images and clarify that such material is illegal even if no real child was physically involved in producing it. Lawmakers are also considering measures that specifically address situations where minors themselves create or share these images.
These changes reflect a growing understanding that digital exploitation can be just as harmful as physical abuse.
AI Exploitation Is Becoming a National Crisis
The risks associated with AI-generated sexual abuse material are not limited to school incidents. Across the U.S., law enforcement agencies and child protection organizations are reporting a sharp increase in cases involving AI-generated exploitation.
According to the National Center for Missing & Exploited Children, reports involving AI-generated CSAM have surged dramatically in recent years, reflecting how quickly these tools are being adopted and misused.
At the same time, the issue is reaching the national stage. A lawsuit involving Elon Musk and his artificial intelligence company xAI alleges that AI systems were used to generate sexualized images, raising broader concerns about how these technologies can be exploited and what responsibility companies may bear.
Experts warn that as AI tools become more accessible, the creation and spread of exploitative content will likely continue to grow, making it increasingly difficult for families, schools, and lawmakers to keep pace.
The Impact of AI-Generated Sexual Images
For victims, the effects of AI-generated sexual images can be profound. The experience often mirrors other forms of child sexual abuse in terms of emotional and psychological harm.
Children targeted by these images may experience anxiety, depression, and a deep sense of violation. Their reputation among peers can be damaged almost instantly, and the permanence of digital content means the harm may follow them for years.
Perhaps most troubling is the loss of control. Once an image is shared, it can be nearly impossible to remove fully, leaving victims feeling powerless over their own identity and privacy.
Child protection organizations warn that these forms of exploitation can lead to severe psychological and emotional harm, particularly when victims are targeted by peers or experience ongoing harassment.
Civil Liability and Legal Options
In addition to criminal consequences, these cases may also involve civil lawsuits. Victims and their families may be able to pursue legal action against individuals who created or distributed the images, as well as institutions that failed to respond appropriately.
Schools, for example, may face scrutiny if they do not take reasonable steps to stop the spread of harmful content or protect students once misconduct is reported.
Louisiana’s broader legal developments, such as the extension of the childhood sexual abuse lookback window, demonstrate how the law can evolve to better support survivors and hold wrongdoers accountable.
How HKGC Can Help
Herman, Katz, Gisleson & Cain has decades of experience representing survivors of sexual abuse and understands how complex and sensitive these cases can be.
If your child has been harmed by AI-generated sexual images or other forms of exploitation, our attorneys can help you understand your legal options, investigate potential claims, and pursue accountability where appropriate.
We offer confidential consultations and are committed to helping families navigate these emerging legal challenges with care and compassion. Contact us online, via live chat, or call 1-844-943-7626 for more information or a free and confidential case consultation.