With dangers like non-consensual artificial intelligence (AI) imagery and deepfake pornography on the rise, how do students protect themselves? Digital experts and legislators are trying to find a solution.
“Senator Hawley and I have a framework,” Sen. Richard Blumenthal (D-Conn.) said. Blumenthal and Sen. Josh Hawley (R-Mo.) have announced the Bipartisan Framework for U.S. AI Act.
The framework, according to Blumenthal’s website, would establish a licensing regime administered by an independent oversight body, defend national security and international competition, ensure legal accountability for harms, promote transparency, and protect consumers and kids.
“So someone who did pornography on Taylor Swift or anyone else could be held liable for it,” Blumenthal said.
Earlier this year, Swift was the victim of fake, graphic AI creations spread across social media platforms like X and Instagram. Users on a social network called 4chan, according to CBS, generated the images of the singer as a game to see who could make the most realistic lewd images. She is not the only victim; many other famous women have been depicted explicitly as well.
“People don’t understand AI and what it means generally,” Blumenthal said. “So sometimes it takes us, you know, an act like that involving a well-known person to create awareness.”
Blumenthal mentioned another example: earlier this year, President Biden’s voice was used in a robocall in New Hampshire that, according to NBC, told recipients not to vote in the upcoming primary.
Regulating such technology is challenging. Michelle Manafy, a journalist who became editorial director at Digital Content Next, said some regulation is necessary, but she doesn’t know how it would work in the eyes of the American people.
“When you say ‘never,’ it has a real chilling effect,” Manafy said.
Blumenthal said there are ways to limit artificial intelligence and protect freedom of speech.
“Establishing a licensing regime administered by an independent oversight body would simply mean that AI models would be required to register with an independent entity, which could then review them for safety and effectiveness. It wouldn’t be censoring the content, so it would be fully consistent with the free speech and expression,” Blumenthal said.
Locally, State Sen. James Maroney (D-Milford) said his priority for AI legislation is criminalizing deepfakes, which are AI-generated or AI-edited depictions of real people in explicit scenarios.
“Most of the limits we’re going to look at are going to be to criminalize where someone has the intent to harm someone else,” Maroney said. “And so we’re looking at making it illegal to spread a naked photo of someone that was generated by AI or deep fake porn or non-consensual, intimate image. And you know, you’re naked, so that’s private information, you have a right to privacy.”
Maroney said that last year, 13 youths nationwide died by suicide after being victims of sextortion, in which an online predator tricks people into sending nude images or videos of themselves, then threatens them if they don’t continue.
“And so my fear is what happens when you don’t know,” said Maroney. “Don’t even need to send someone a picture. They can take any picture of you and then blackmail you.”
More specifically, Maroney said he feared how much non-consensual content was being created and shared.
“I think one statistic I saw was over 96% of deep fakes are for this non-consensual deep fake porn. And, disproportionately it impacts women and girls,” Maroney said.
Maroney is co-chairman of the state AI task force, the Connecticut Artificial Intelligence Working Group, a legislative committee created last year that delivered its final report last month. He and his colleagues want to allow the use of AI without hindering innovation; the group’s main purpose was to recommend how to use AI ethically.
“What we’re really going to start with the Connecticut AI Academy is what’s the baseline knowledge we all need to use to help make us more efficient to help you make sure we’re using it responsibly and using it for good,” Maroney said.
This AI academy would give people lessons on how to responsibly use AI in the workforce and would be conducted at Charter Oak State College.
With news stories such as the Biden robocall and the explicit fake images of Swift, monitoring AI in some way is crucial for legislators.
“We’ve proposed an independent entity, like the, think of it in terms of drugs or medical devices,” Blumenthal said. “There’s the FDA, the Food and Drug Administration. If you want to sell a new drug, it has to be approved by the FDA.”
He said the same oversight is needed for AI models.
But Manafy said monitoring generative AI models may be difficult to accomplish.
“It learns from the existing corpus, right, body of work,” she said. “But, it also learns from each interaction. And especially for the models that are built for us to give feedback, right? So, then we become part of the training mechanism.”
In other words, AI models are constantly learning, including from the people who use them.
Blumenthal said he does not have all the solutions, but he agreed that AI is unpredictable because the model can change itself.
“This area presents technological challenges that are entirely new and unprecedented,” Blumenthal said. “That’s absolutely right. So, we don’t have all the answers right now. That’s for sure. But we need to figure out a way to provide safeguards and some monitoring. And I’m not sure, you know, what we will do because the field is changing so quickly.”
Blumenthal said his colleagues need to be better informed on how artificial intelligence is bound to change all aspects of life.
“I think they have to understand more of it and the dangers are real,” he said. “And we need to address them as quickly as possible.”