Minnesota advances deepfakes bill to criminalize people sharing altered sexual, political content

ST. PAUL, Minn. (AP) - The Minnesota Senate voted nearly unanimously Wednesday to pass a bill that would criminalize the nonconsensual sharing of sexually explicit deepfake images of others, as well as the sharing of deepfakes intended to harm a political candidate or influence an election.

Deepfakes are videos and images that have been digitally created or altered using artificial intelligence or machine learning. Since the technology first spread across the internet several years ago, it has been used to create political misinformation and nonconsensual pornography, and it is now more accessible than ever.

The bill would allow prosecutors to seek up to five years in prison and $10,000 in fines for people who spread deepfakes. It still needs approval from a conference committee and the signature of Democratic Gov. Tim Walz.

Only one legislator voted against the bill Wednesday.

"The only concern I have is the civil penalty," Republican Sen. Nathan Wesenberg of Little Falls said on the Senate floor before voting against the bill. "I want it to be higher."

Supporters described the bill as necessary and cutting-edge.

"We must protect all Minnesotans from those who would use artificial intelligence or technology to harass, threaten or ... humiliate anyone," Republican Sen. Eric Lucero of St. Michael said.

Sen. Erin Maye Quade, an Apple Valley Democrat who championed the bill, said only a handful of states, including Texas, California and Virginia, have passed similar legislation to combat deepfakes.

Maye Quade said that "we're really behind on the federal and state levels" when it comes to regulating technology and data privacy.

In January, President Joe Biden gave a speech about tanks. A doctored version of the video, altered to make it appear he had attacked transgender people, was viewed hundreds of thousands of times on social media that week.

Digital forensics specialists said the video was made using a new generation of artificial intelligence tools, which allow anyone to generate audio simulating a person's voice with the click of a button. And while the Biden clip may not have fooled most users, it showed how easy it now is for people to generate hateful, disinformation-filled deepfake videos that could do real-world harm.

Social media companies are tightening their rules to protect their platforms against deepfakes.

TikTok announced in March that all deepfakes or manipulated content depicting realistic scenes must be labeled to indicate they are fake or altered in some way. Deepfakes of private figures and young people are also no longer allowed. The company had previously banned sexually explicit deepfakes and those that mislead viewers about real-world events.

Trisha Ahmed is a corps member for the Associated Press/Report for America Statehouse News Initiative.

Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues. Follow Trisha on Twitter: