Michigan lawmakers look to quash non-consensual ‘deepfake’ pornography

As Michigan lawmakers attempt to tackle “deepfake” pornography, an emerging form of sexual violence typically aimed at women, academics examine long-term policy solutions that could place more legal responsibility on technology companies, rather than those impacted by fake AI images. 

The Michigan House last month passed two bills that would ban non-consensual explicit images generated by AI systems. HB 5569, sponsored by state Rep. Penelope Tsernoglou (D-East Lansing), and HB 5570, sponsored by Rep. Matthew Bierlein (R-Vassar), passed with bipartisan support, with only two GOP lawmakers voting against the bills. 

The package will need Senate approval before heading to Gov. Gretchen Whitmer’s desk, but neither chamber of the Legislature will return to the Capitol until July 30.

The issue made headlines after pornographic AI-generated pictures of pop star Taylor Swift circulated on the internet in January. Tsernoglou said the incident brought more attention to explicit deepfakes, but she had started working on the bill earlier, after sponsoring legislation last year regulating the use of deepfakes in political ads.

“This one came up as something that was very harmful and certainly in need of some regulation,” Tsernoglou told the Advance. “I think most people can agree that you shouldn’t be able to take a person’s image and create explicit images of them without their consent, but there haven’t been any regulations in place.”


A 2023 study from cybersecurity firm Home Security Heroes found that 98% of deepfake videos online are pornographic and that 99% of the people targeted are women. According to the study, 94% of people appearing in deepfake pornography work in the entertainment industry.

“I think keeping up on this technology is going to be really important for a lot of women and femmes who are in political positions, but really anybody,” Rep. Emily Dievendorf (D-Lansing) said at a May 21 House criminal justice committee meeting.

The bills would allow anyone depicted without their consent in a pornographic “deepfake,” a category that includes visual or audio clips generated by AI systems, to bring a civil action against the creator or distributor. 

The person would have to prove the creator or distributor acted with knowledge of, or “reckless disregard” for, harm to their physical, emotional, economic or reputational well-being. 

Johanna Kononen, director of law and policy at the Michigan Coalition to End Domestic & Sexual Violence, said the legislation was important for supporting survivors of non-consensual image sharing, a problem that could grow as AI makes it easier to generate fake media. 

“What we’re trying to do is make sure that survivors have more options, not less,” Kononen said. 

Arun Ross, a professor of computer science and engineering at Michigan State University, said intentions are important to consider when drawing up rules and legislation around AI generally. In the case of non-consensual deepfake pornography, the technology is usually used to harm another person.

“We are talking about it producing a person, a human who does exist in this world,” Ross said. “And in this case, they are trying to kind of impute a bad name to that person, or at least foster some emotional trauma, so I think intention is probably one important consideration here.”

But Shobita Parthasarathy, the director of the University of Michigan’s Science, Technology and Public Policy Program, said these bills place too much of a burden on the person exploited by deepfake pornography to bring a lawsuit, which can be emotionally and financially difficult. She said having the bills is better than having no protections at all, but that additional solutions are needed to protect people from the threat of non-consensual images. 


“I worry about placing the burden essentially on the citizen, as opposed to, for example, placing the burden on the tech companies or other kinds of interventions that might not be further expecting … action on the part of the harmed party,” Parthasarathy said.

She said the responsibility for mitigating these images should fall on technology companies, either by providing ways to prove that content is a deepfake or by placing guardrails on what users can produce with AI.

For the former, that could look like requiring AI companies to place watermarks, which need not be visible, on all generated content. That could make it easier for plaintiffs to establish that images are AI-generated, a fact that may be difficult to prove in court as deepfakes become more realistic. Parthasarathy said this could lessen the burden on people who have pornographic deepfakes made of them.
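To illustrate the concept, here is a minimal sketch in Python of one naive way an invisible watermark could be embedded and checked, using a least-significant-bit scheme; the tag, function names and approach are hypothetical, and production provenance systems (such as Google DeepMind’s SynthID) rely on far more robust techniques:

```python
# Minimal sketch of an invisible watermark via least-significant-bit
# (LSB) encoding. Hypothetical example only; real provenance systems
# use robust, tamper-resistant embedding rather than raw LSBs.
import numpy as np
from PIL import Image

TAG = b"AI-GENERATED"  # hypothetical provenance marker


def embed_watermark(image: Image.Image, tag: bytes = TAG) -> Image.Image:
    """Hide the tag in the least significant bits of the red channel."""
    pixels = np.array(image.convert("RGB"), dtype=np.uint8)
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    red = pixels[..., 0].flatten()
    if bits.size > red.size:
        raise ValueError("image too small to hold the tag")
    red[: bits.size] = (red[: bits.size] & 0xFE) | bits  # overwrite LSBs
    pixels[..., 0] = red.reshape(pixels[..., 0].shape)
    return Image.fromarray(pixels)


def has_watermark(image: Image.Image, tag: bytes = TAG) -> bool:
    """Check whether the tag's bits appear in the red-channel LSBs."""
    pixels = np.array(image.convert("RGB"), dtype=np.uint8)
    n_bits = len(tag) * 8
    lsbs = pixels[..., 0].flatten()[:n_bits] & 1
    return np.packbits(lsbs).tobytes() == tag
```

An LSB mark like this is invisible to the eye but fragile, lost to re-compression or resizing, which is one reason real watermarking schemes embed signals designed to survive such transformations.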

Lawmakers could also regulate deployers of AI by requiring them to place limits on what types of deepfakes can be produced, Parthasarathy said. Ross echoed this approach, saying AI deployers should bear responsibility for ensuring, wherever possible, that their systems do not produce illegal images.  

“I think there should be a good-faith attempt made by these companies to kind of put a guardrail around what can be generated and what should not be generated,” Ross said.

This approach raises concerns about the privacy of individuals using AI systems. Parthasarathy noted that without strong data privacy rules in place, AI companies are already collecting data about how the systems are being used, and by whom. 

Ross said it was important for lawmakers to balance privacy concerns while holding companies accountable for illegal or harmful outputs. 

“The user rightfully must expect some privacy that the software is not fully aware of the kind of images that is being produced for a user,” Ross said. “On the other hand, if the software detects the production of illegal images, then there should be a way in which it curtails the production of that image and does not produce the output requested by the user.”

Ross also said there should be guidance in place for how companies train generative AI systems. “Illegal images” should not be used to train the systems in the first place, intentionally or accidentally, he said. 

For Michigan lawmakers, regulating AI more generally and bringing technology companies into the scope of the law will be a lengthy process, Tsernoglou told the Advance. She expressed interest in legislation that would broadly place guardrails on harmful uses of AI, but said that would be a “lengthy undertaking,” and that she wanted to target the most problematic uses now rather than wait for a broader framework to make its way through the legislative process.

“I think the reasons we’ve tackled the political deepfakes and the explicit images is just because those are two of the most harmful misuses,” Tsernoglou said. “So we wanted to get something in place for those sooner rather than later.”


In the meantime, the bills, if signed into law, would give people legal protection against deepfake pornography. That recourse would remain available even if lawmakers later place additional obligations on companies and non-consensual AI deepfakes are still created and disseminated.   

“At the least, it offers victims recourse,” Ross said. “It gives them the ability to seek justice.”

Beyond regulating artificial intelligence and the platforms used to disseminate its outputs, Ross and Kononen pointed to societal attitudes and practices that should change to prevent non-consensual explicit deepfakes. 

Educating people, especially children, about how images shared on social media or in private messages can be used to produce deepfakes is important for mitigating the problem, Ross said. 

Raising awareness about boundaries and consent is also important, Kononen said. She explained that people sometimes think of consent only in the context of sex, but that it is important to broaden that understanding to depictions of people and their bodies more generally. 

“I think that it’s a really good opportunity to talk about consent and how that permeates every aspect of sexual acts,” Kononen said. “Not just the act of sex itself, but regarding people’s own bodies and how, who and when and who else they can share those with.”
