Some of you might have seen me make a comment on here asking what would happen if someone used AI to generate photorealistic child pornography.
Well, unsurprisingly, that question has now found its way into the news:
https://arstechnica.com/tech-policy/2024/05/csam-generated-by-ai-is-still-csam-doj-says-after-rare-arrest/
>The US Department of Justice has started cracking down on the use of AI image generators to produce child sexual abuse materials (CSAM).
>On Monday, the DOJ arrested Steven Anderegg, a 42-year-old "extremely technologically savvy" Wisconsin man who allegedly used Stable Diffusion to create "thousands of realistic images of prepubescent minors," which were then distributed on Instagram and Telegram.
>The cops were tipped off to Anderegg's alleged activities after Instagram flagged direct messages that were sent on Anderegg's Instagram account to a 15-year-old boy. Instagram reported the messages to the National Center for Missing and Exploited Children (NCMEC), which subsequently alerted law enforcement.
>During the Instagram exchange, the DOJ found that Anderegg sent sexually explicit AI images of minors soon after the teen made his age known, alleging that "the only reasonable explanation for sending these images was to sexually entice the child."
>According to the DOJ's indictment, Anderegg is a software engineer with "professional experience working with AI." Because of his "special skill" in generative AI (GenAI), he was allegedly able to generate the CSAM using a version of Stable Diffusion, "along with a graphical user interface and special add-ons created by other Stable Diffusion users that specialized in producing genitalia."
>After Instagram reported Anderegg's messages to the minor, cops seized Anderegg's laptop and found "over 13,000 GenAI images, with hundreds—if not thousands—of these images depicting nude or semi-clothed prepubescent minors lasciviously displaying or touching their genitals" or "engaging in sexual intercourse with men."
>In his messages to the teen, Anderegg seemingly "boasted" about his skill in generating CSAM, the indictment said. The DOJ alleged that evidence from his laptop showed that Anderegg "used extremely specific and explicit prompts to create these images," including "specific 'negative' prompts—that is, prompts that direct the GenAI model on what not to include in generated content—to avoid creating images that depict adults." These go-to prompts were stored on his computer, the DOJ alleged.
>Anderegg is currently in federal custody and has been charged with production, distribution, and possession of AI-generated CSAM, as well as "transferring obscene material to a minor under the age of 16," the indictment said.
>Because the DOJ suspected that Anderegg intended to use the AI-generated CSAM to groom a minor, the DOJ is arguing that there are "no conditions of release" that could prevent him from posing a "significant danger" to his community while the court mulls his case. The DOJ warned the court that it's highly likely that any future contact with minors could go unnoticed, as Anderegg is seemingly tech-savvy enough to hide any future attempts to send minors AI-generated CSAM.
>"He studied computer science and has decades of experience in software engineering," the indictment said. "While computer monitoring may address the danger posed by less sophisticated offenders, the defendant's background provides ample reason to conclude that he could sidestep such restrictions if he decided to. And if he did, any reoffending conduct would likely go undetected."
>If convicted of all four counts, he could face "a total statutory maximum penalty of 70 years in prison and a mandatory minimum of five years in prison," the DOJ said. Partly because of "special skill in GenAI," the DOJ—which described its evidence against Anderegg as "strong"—suggested that they may recommend a sentencing range "as high as life imprisonment."
>Announcing Anderegg's arrest, Deputy Attorney General Lisa Monaco made it clear that creating AI-generated CSAM is illegal in the US.
>"Technology may change, but our commitment to protecting children will not," Monaco said. "The Justice Department will aggressively pursue those who produce and distribute child sexual abuse material—or CSAM—no matter how that material was created. Put simply, CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children."
Did I turn this into another op-ed on Medium? Of course I did:
https://medium.com/@MoonMetropolis/how-should-law-enforcement-handle-fake-ai-generated-child-pornography-2ceb8f1ded20
You can expect much more drama to flow from this in the coming years; it will almost certainly find its way to the US Supreme Court, and there is really no telling, at this point, how they will rule on it.
Was the neighbor making his own kid fricking LoRAs?
We're still in the "new technology" phase of AI generation where news articles can use absurdly verbose and complicated ways of describing simple concepts.
Back in the late '90s you'd read articles about dudes getting arrested for running email fraud schemes, or as they put it, "disseminating fraudulent materials in an entirely digital way over the digital connecting lines of the grand interconnected network of connected digital devices."
Nope, they already exist
what a horrible thought
Who is Lora?
my cousin, and stop looking at her