In response to the growing threat of deepfake technology, the United States Federal Trade Commission (FTC) has finalized a rule prohibiting the impersonation of businesses and government agencies and proposed an expansion covering AI-enabled impersonation of individuals. The changes are intended to safeguard consumers against impersonation scams facilitated by artificial intelligence (AI).
FTC takes action to address deepfake threats
The proposed updates come amid rising concern over the proliferation of deepfake technology, which enables the creation of manipulated videos and audio recordings that alter individuals’ faces or voices. As AI-driven scams grow more sophisticated, the FTC is stepping up its efforts to combat impersonator fraud and protect consumers from deception.
FTC Chair Lina Khan emphasized the urgency of the matter, stating, “With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever.” The proposed expansions to the regulation aim to bolster the FTC’s ability to address AI-enabled scams that impersonate individuals, thereby strengthening consumer protections in the digital realm.
Strengthened enforcement measures
If implemented, the updated regulation would empower the FTC to act directly against scammers who use AI to impersonate government or business entities, including by filing federal court cases to compel perpetrators to return funds obtained through fraud. By equipping the FTC with these enhanced enforcement capabilities, the changes aim to deter malicious actors from exploiting AI technology in deceptive practices.
The final rule on government and business impersonation is slated to take effect 30 days after its publication in the Federal Register. The supplemental notice of proposed rulemaking, which would extend protections to the impersonation of individuals, will remain open for public comment for 60 days following its publication in the Federal Register, allowing stakeholders to weigh in before the expansion is finalized.
Addressing regulatory gaps
While federal law does not specifically address the creation or sharing of deepfake images, some lawmakers are taking proactive steps to close this regulatory gap. In the absence of comprehensive federal legislation, several states have enacted laws outlawing certain uses of deepfakes. Individuals and celebrities targeted by deepfake scams may also seek recourse through existing legal avenues, such as copyright law and rights of publicity governing the use of their likeness.
The FTC’s proposed updates follow other recent regulatory actions aimed at curbing the misuse of AI technology. In February, the Federal Communications Commission (FCC) banned AI-generated robocalls by ruling that AI-cloned voices fall under existing restrictions on artificial or prerecorded calls. The move came in response to a notable incident in New Hampshire, where a deepfake of President Joe Biden’s voice was used in a phone campaign to discourage voter participation.
In conclusion, the FTC’s updates to its impersonation rules reflect a proactive approach to emerging threats in the digital landscape. By strengthening enforcement measures and soliciting public feedback, the agency aims to bolster consumer protections against the fraudulent use of AI. As deepfake technology continues to evolve, such regulatory efforts remain central to combating AI-enabled impersonation scams.