Artificial intelligence is widely touted as having the potential to reshape our digital landscape. Indeed, many people working in the digital space, including artists and content creators, have found generative AI tools useful in their workflows. However, another side of the technology is coming to light, raising concerns about how equitable AI will be in the future.
The Issue of Bias in AI
By studying AI-assisted artworks from 15 artists in a recent exhibition at the Ford Foundation Gallery titled “What Models Make Worlds: Critical Imaginaries of A.I.,” the show’s curators found that AI tools replicate the prejudices of the humans who build them. As a result, AI outputs often contain biases that can be harmful.
“Generative A.I. is Janus-faced. On the one hand, it can steer us away from an anthropocentric model of creativity; on the other, it often operates through extractivist labour practices and biased datasets,” Mashinka Firunts Hakopian, the show’s co-curator, told Artnet News.
In one of the artworks, titled “Conversations with Bina 48,” the artist portrays an interaction with a social robot designed to mimic the consciousness of a Black woman; the bot, however, had no meaningful grasp of Blackness or race.
Other works, such as In Discriminate by Mandy Harris Williams and The Bend by Niama Safia Sandy, likewise exposed algorithmic bias and discrimination.
Morehshin Allahyari’s video work Moon-faced uses A.I. to address the absence of queer Iranian representation in the Western canon.
AI Inequity in Healthcare
The issue of bias in AI models extends to other areas where the technology is applied, such as healthcare.
In August, researchers at the Massachusetts Institute of Technology (MIT) found that AI and machine learning models can exacerbate healthcare inequities for subgroups that are often underrepresented in the data, which could affect how those groups are diagnosed and treated.
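To make the idea of subgroup disparity concrete, here is a minimal sketch in Python using entirely synthetic data and hypothetical column names (it is not drawn from the MIT study): it trains a simple classifier on a dataset in which one group is underrepresented, then compares accuracy per group, the kind of per-subgroup audit that can surface such gaps.

```python
# Minimal sketch with synthetic, hypothetical data (not the MIT study's setup):
# train a simple classifier, then compare accuracy across demographic subgroups.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic records: subgroup "B" is deliberately underrepresented.
n_a, n_b = 900, 100
df = pd.DataFrame({
    "feature_1": rng.normal(size=n_a + n_b),
    "feature_2": rng.normal(size=n_a + n_b),
    "group": ["A"] * n_a + ["B"] * n_b,
})
# The outcome depends on the features differently for the minority subgroup,
# so a model fit mostly on group A generalizes poorly to group B.
signal = np.where(df["group"] == "A",
                  df["feature_1"],
                  -df["feature_1"] + 0.5 * df["feature_2"])
df["outcome"] = (signal + rng.normal(scale=0.5, size=len(df)) > 0).astype(int)

X = df[["feature_1", "feature_2"]]
y = df["outcome"]
X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, df["group"], test_size=0.3, random_state=0, stratify=df["group"])

model = LogisticRegression().fit(X_train, y_train)
preds = model.predict(X_test)

# Overall accuracy can look acceptable while the underrepresented group fares worse.
print("overall accuracy:", round(accuracy_score(y_test, preds), 3))
for group in ["A", "B"]:
    mask = (g_test == group).to_numpy()
    print(f"group {group} accuracy:",
          round(accuracy_score(y_test[mask], preds[mask]), 3))
```

In a sketch like this, the aggregate accuracy can look fine even while accuracy for the underrepresented group lags noticeably, which is why per-subgroup evaluation matters.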
The problem usually stems from the datasets on which AI models are trained, and because the issue is coming to light at this stage, Hakopian believes it can still be mended.
“The trajectory of A.I. isn’t scripted in advance. Automated technofutures aren’t a foregone conclusion,” Hakopian added. “There is space for intervening in those futures.”