IWF Warns AI Images of Child Sexual Abuse Are On The Rise

The Internet Watch Foundation (IWF), a UK-based organisation that works to eliminate online sexual abuse content involving children, has sounded an alarm over the surge in AI imagery on the internet depicting child sexual abuse.

AI Images of Child Sexual Abuse Are Flooding The Internet

The IWF said criminals are now exploiting AI tools to create new images of child sexual abuse using the faces and bodies of children who were previously subjected to real-life abuse. Given the advances in AI, these images appear realistic and are nearly impossible to distinguish from genuine photographs, the IWF said.


According to the organisation, about 11,108 AI-generated images of child sexual abuse were shared on a dark web child abuse forum in a single month. Of those, 2,978 were confirmed to breach UK law on child sexual abuse imagery, while another 2,562 were so realistic that they had to be treated as though they were real abuse images.

“Our worst nightmares have come true. Earlier this year, we warned AI imagery could soon become indistinguishable from real pictures of children suffering sexual abuse and that we could start to see this imagery proliferating in much greater numbers. We have now passed that point,” said the chief executive of IWF, Susie Hargreaves.

The IWF also flagged new and disturbing ways criminals are exploiting AI tools for harmful purposes. It said offenders are using the technology to sexualise images of clothed children and to "de-age" images of celebrities in order to depict them in child sexual abuse scenarios.

The Need for Quick Regulatory Response

While AI technology has many legitimate uses, there is a pressing need to curb its misuse by criminals. The cases highlighted by the IWF underscore the need for regulators around the world to jointly enforce measures preventing the use of AI for harmful purposes.

“International collaboration is vital. It is an urgent problem which needs action now. If we don’t get a grip on this threat, this material threatens to overwhelm the internet,” Hargreaves said. 

Regulators are not the only ones with a role to play: AI companies also bear significant responsibility for preventing the illicit use of image-generating tools, for example by blocking prompts that request explicit or abusive content. Some AI image generators, such as Midjourney, already reject prompts containing certain keywords like "porn" and "sex."
