Australian Member of Parliament Georgie Purcell recently raised concerns over a digitally altered image that distorted her body and removed parts of her clothing without her consent. This incident sheds light on the potential sexist and discriminatory consequences of unchecked AI technologies.
While often considered simple in everyday use, AI-assisted tools can inadvertently perpetuate societal biases. When instructed to edit photographs, these tools may amplify certain socially endorsed attributes, such as youthfulness and sexualization, a tendency particularly prevalent in images of women.
A significant concern is the proliferation of sexualized deepfake content, which predominantly targets women. Reports indicate that a staggering 90–95% of deepfake videos are non-consensual pornography, and that around 90% of those feature women as victims. Instances of non-consensual creation and sharing of sexualized deepfake imagery have surfaced globally, affecting people across demographics, from young women to celebrities such as Taylor Swift.
The need for global action
While legislative measures exist in some regions to address the non-consensual sharing of sexualized deepfakes, laws regarding their creation remain inconsistent, particularly in the United States. The lack of cohesive international regulations underscores the necessity for collective global action to combat this issue effectively.
Efforts to detect AI-generated content are challenged by rapidly advancing generation techniques and the growing availability of apps that facilitate the creation of sexually explicit material. However, placing sole blame on the technology overlooks the responsibility of its developers and of digital platforms to prioritize user safety and rights.
Australia has taken steps to lead in this regard, with initiatives such as the Office of the eSafety Commissioner and national laws holding digital platforms accountable for preventing and removing non-consensual content. However, broader global collaboration and proactive measures are essential to mitigate the harms of non-consensual sexualized deepfakes effectively.
The unchecked use of AI in image editing and the proliferation of sexualized deepfake content pose significant challenges, necessitating comprehensive regulatory frameworks and collective global action. By prioritizing user safety and rights in both technology development and enforcement, societies can work towards mitigating the gender-based harms associated with AI-enabled abuse.