(COLUMBUS, Ohio) — Artificial intelligence threatens a new kind of sexual abuse of children, and Attorney General Dave Yost, joined by every other state attorney general in America, is urging Congress to act swiftly.
In a letter sent today to congressional leaders, Yost and his counterparts raise concerns about the many ways that AI technology can be misused to exploit children, particularly by creating child sex abuse material.
“One day in the near future, a child molester will be able to use AI to generate a deepfake video of the child down the street performing a sex act of their choosing,” Yost predicted. “The time to prevent this is now, before it happens.”
Citing concerns that AI is creating a “new frontier for abuse,” the attorneys general ask Congress to establish an expert commission to study how the technology can be misused at the expense of children. They request that Congress consider the commission’s recommendations and take action to address the problem, such as by expanding restrictions on child sex abuse material to cover AI-generated content.
Ill-intentioned users of AI technology can create images of child sex abuse simply by typing a short description of what they want to see. The technology can also superimpose one person's face onto another person's body, producing deepfake images that combine photos of victimized children with photos of children who have not otherwise been victimized.
In other cases, the technology can generate images of sex abuse depicting children who do not actually exist. This is equally problematic, the attorneys general write, because some AI technology relies on images of real victims as source material for the fabricated images. Such images also “support the growth of the child exploitation market by normalizing child abuse and stoking the appetites of those who seek to sexualize children,” the letter says.
“Graphic depiction of child sexual abuse only feeds evil desires,” Yost said. “A society that fails to protect its children literally has no future.”
The attorneys general also draw attention to concerns that misuse of AI technology can jeopardize children's safety and privacy, such as by predicting their location or mimicking their voices.
The letter stresses the urgent need for congressional action, saying “the proverbial walls of the city have already been breached. Now is the time to act.”