House Introduces Bill to Label AI-Generated Content Amid Deepfake Concerns
WASHINGTON (YNOT) – Democrats and Republicans can’t agree on much these days, but when it comes to concerns about the emergence of AI-powered “deepfake” video and audio technologies, there is rare bipartisan agreement and cooperation in Washington.
A new bipartisan bill introduced in the House of Representatives would mandate the clear identification and labeling of artificially generated images, videos, and audio. The legislation aims to curb the misuse of advanced artificial intelligence tools capable of producing highly realistic “deepfakes,” a topic of particular concern to politicians, who are themselves likely targets of the technology. AI-generated content has already been used to imitate prominent figures such as President Joe Biden and various celebrities, and it risks fueling misinformation, exploiting individuals, and undermining public trust.
“We’ve seen so many examples already, whether it’s voice manipulation or a video deepfake. I think the American people deserve to know whether something is a deepfake or not,” said Rep. Anna Eshoo, a Democrat from California and co-sponsor of the bill. “To me, the whole issue of deepfakes stands out like a sore thumb. It needs to be addressed, and in my view the sooner we do it the better.”
The proposed law requires creators of AI-generated content to embed digital watermarks or metadata into their creations, a practice akin to how photo metadata captures the specifics of an image. Furthermore, online platforms such as TikTok, YouTube, and Facebook would need to inform their users of the AI-generated nature of such content. The specifics of these regulations would be developed by the Federal Trade Commission, with guidance from the National Institute of Standards and Technology.
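The bill leaves the technical details of watermarking and metadata to the FTC and NIST, so any concrete format at this stage is speculative. As a rough illustration of the general idea only, the sketch below uses Python’s Pillow library to embed and read back a simple disclosure field in a PNG file; the field names and values are hypothetical placeholders, not anything prescribed by the legislation.

```python
# Illustrative only: the bill does not specify a metadata format; the keys
# below ("AI-Generated", "Generator") are hypothetical placeholders.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_as_ai_generated(in_path: str, out_path: str, generator: str) -> None:
    """Copy an image, attaching PNG text chunks that mark it as AI-generated."""
    img = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("AI-Generated", "true")   # disclosure flag
    meta.add_text("Generator", generator)   # which model or tool produced it
    img.save(out_path, pnginfo=meta)


def read_ai_label(path: str) -> dict:
    """Return the PNG text chunks, where a platform could check the flag."""
    with Image.open(path) as img:
        return dict(getattr(img, "text", {}))


if __name__ == "__main__":
    # Create a dummy image standing in for AI output, then label and inspect it.
    Image.new("RGB", (64, 64), "gray").save("sample.png")
    label_as_ai_generated("sample.png", "sample_labeled.png", "example-model")
    print(read_ai_label("sample_labeled.png"))
    # Prints: {'AI-Generated': 'true', 'Generator': 'example-model'}
```

In practice, a standards body might instead rely on cryptographically signed provenance records or imperceptible watermarks rather than plain text fields, but the principle of machine-readable disclosure that platforms can surface to users is the same.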
“The rise of innovation in the world of artificial intelligence is exciting; however, it has potential to do some major harm if left in the wrong hands,” said Rep. Neal Dunn, a Republican from Florida who co-sponsored the bill with Eshoo.
Violations of the labeling requirement could lead to civil lawsuits, underscoring the stakes of non-compliance. The bill seeks to bring transparency to the digital realm as concerns mount over deepfakes and the harm they can cause. The legislation builds on earlier voluntary commitments from tech companies and an executive order from President Biden, reflecting a growing consensus in Washington that the AI sector needs regulation.
“AI offers incredible possibilities, but that promise comes with the danger of damaging credibility and trustworthiness,” said Eshoo. “AI-generated content has become so convincing that consumers need help to identify what they’re looking at and engaging with online. Deception from AI-generated content threatens our elections and national security, affects consumer trust, and challenges the credibility of our institutions.”