Jessica Rosenworcel, Chairwoman of the United States Federal Communications Commission (FCC), has proposed classifying calls featuring artificial intelligence (AI)-generated voices as illegal. The intended regulations and penalties would fall under the purview of the Telephone Consumer Protection Act (TCPA). The proposal comes on the heels of an alarming incident in which AI-generated voices were used to disseminate a false message mimicking the voice of U.S. President Joe Biden.
FCC Chair proposes outlawing AI-generated voices
The fabricated messages advised New Hampshire residents against participating in the state’s primary election, with the apparent aim of meddling in the 2024 presidential election. The automated calls, featuring an imitation of President Biden’s voice, were swiftly flagged as misinformation by the state attorney general’s office.
In response to the escalating issue of AI-enabled robocalls, FCC Chair Rosenworcel’s proposal seeks to invoke the TCPA, a 1991 law designed to regulate automated political and marketing calls made without the recipient’s consent. The primary objective of the TCPA is to shield consumers from unwarranted and intrusive communications, including unsolicited telemarketing calls and automated messages.
The prevalence of such calls has seen a significant uptick in recent years, as technological advancements enable the deceptive mimicry of voices belonging to celebrities, political figures, and even family members. If adopted, FCC Chair Rosenworcel’s proposal would empower state attorneys general across the nation with additional tools to pursue those responsible for these malicious robocalls and enforce legal consequences.
Rosenworcel’s initiative follows the agency’s Notice of Inquiry in November 2023, which sought information on addressing illegal robocalls and the potential involvement of AI. The inquiry delved into AI’s role in scams, voice mimicry, and whether the technology should be subject to regulation under the TCPA. Additionally, the FCC Chair noted that the agency aims to gather insights into how AI could be leveraged positively, particularly in recognizing and preventing illegal robocalls.
AI landscape and legislative responses
On a broader scale, the White House released a fact sheet outlining key actions on AI on January 29, three months after President Biden’s executive order on AI. The fact sheet acknowledged “substantial progress” towards the president’s mandate to “protect Americans from the potential risks of AI systems.”
The rise of deepfakes, AI-generated content that can be highly deceptive, has prompted concerns. The World Economic Forum, in its 19th Global Risks Report, highlighted adverse outcomes associated with AI technologies. Concerns extend beyond the United States, as the Canadian Security Intelligence Service, Canada’s primary national intelligence agency, expressed worries about disinformation campaigns employing AI deepfakes on the internet.
Responding to the broader challenges posed by AI-generated content, United States lawmakers have called for legislation criminalizing the production of deepfake images. This call to action comes in the wake of the widespread circulation of explicit fake photos of celebrities, such as Taylor Swift. The intersection of AI and communication technologies raises complex challenges, necessitating a balance between innovation and regulation.
As AI continues to evolve, policymakers, regulators, and law enforcement agencies grapple with the task of adapting legal frameworks to address the potential misuse of these technologies. The proposal to categorize AI-generated voice calls as illegal under existing legislation represents a proactive step in addressing the growing concerns surrounding deceptive robocalls and the broader implications of AI in communication.