The recent controversies surrounding artificial intelligence (AI) have reignited the debate over who should be held accountable for ensuring that AI is developed and deployed ethically. As generative AI tools such as ChatGPT become increasingly mainstream, questions about the tech industry’s commitment to AI ethics and responsible innovation continue to loom large.
The hidden cost of AI
Investigations have revealed that OpenAI, a leading AI company backed by Microsoft, has relied on low-paid overseas contractors for the sensitive task of content moderation, a crucial component in the development of “safe” systems like ChatGPT. These contractors, based in Kenya, were paid a mere $2 per hour to label disturbing text and images, work that reportedly caused significant psychological trauma. After the toll of this work came to light, OpenAI’s outsourcing partner severed ties with the company.
The exposé has shed light on the tech industry’s dependence on cheap labor around the world to carry out the grueling tasks that underpin advances in AI. It comes at a time when prominent AI safety teams are being disbanded, despite the industry’s high-minded rhetoric about ethics.
The reality behind the PR spin
Several high-profile figures in the tech industry have called for a pause in AI development until appropriate regulations can be put in place. However, some experts argue that relying solely on policymakers and corporations to shape the future of AI is a mistake, as it excludes key perspectives. Dr. Alison Powell of the Ada Lovelace Institute has pointed out that the current discourse dwells too much on the hypothetical prospect of artificial general intelligence surpassing human cognition rather than on the realities of the present.
“This is harmful because it focuses on an imagined world rather than the actual world we live in,” Powell said. She argues that attributing decision-making power to AI obscures the real-world social responsibilities that come with that power.
Oxford researcher Abid Adonis has also noted that the voices of marginalized groups, who are directly affected by AI, are conspicuously absent from the debate. “It’s important to hear what marginalized groups say because it’s missing from the discussion,” Adonis said.
The shortcomings of “ethical AI”
There is already ample evidence of algorithmic bias and discrimination in AI systems currently in use, from facial recognition technology to algorithms used in housing and loan assessments. Despite tech companies’ lofty statements of ethical principles, their actions often tell a different story.
Generative models like ChatGPT, trained on a finite and unrepresentative slice of internet data, inevitably inherit the biases present in that data. The much-touted reasoning abilities of these models often fall short under scrutiny; ChatGPT, for example, has been found to generate blatantly false claims about real people. The lack of transparency around the data used to train commercial models only deepens these concerns.
A broader perspective on AI ethics
Dr. Powell suggests that instead of viewing ethics as a technical problem to be solved, we should examine the social contexts in which harm occurs. “AIs are institutional machines, social machines, and cultural machines,” she said. Focusing solely on adjusting algorithms ignores the fact that exclusion often stems from the institutions and culture surrounding the technology.
Adonis, meanwhile, argues that strong public discourse and norms will play a crucial role in shaping the future of innovation. “Paradigm will shape the corridors of innovation,” he said. In his view, ensuring accountability means enforcing existing laws fairly rather than simply regulating the technology itself.
As AI continues to rise in prominence, ensuring that it produces just outcomes is a responsibility shared among tech firms, policymakers, researchers, the media, and the public. Yet the current discourse is heavily skewed toward those with the most power. Moving forward requires a wider range of perspectives on the potential pitfalls of AI, so that we can build ethical technology that truly meets human needs. That means looking beyond the PR spin of big tech companies and confronting the real harms being caused today.