The recent release of AI chat systems such as ChatGPT, Bard, and Bing Chat has ignited widespread interest in AI ethics, and many people are now trying to understand the concepts behind the risks these technologies pose. This article unpacks key ideas from a 2021 talk on AI ethics, arguing that the field needs to look beyond explainability and fairness alone.
While explainability is often touted as a cornerstone of ethical AI, it is insufficient on its own. The real question is whether the explanations provided are actionable. Take, for instance, a loan denial by an algorithm. Rather than merely justifying the decision, the system should offer actionable recourse: concrete steps the applicant can take to change the outcome of a future application. Actionable recourse becomes paramount in decisions that affect people's lives.
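To make the idea concrete, here is a minimal sketch in Python of one form of actionable recourse: a greedy search over changes to mutable features until a linear model's decision flips. The two-feature loan model, step sizes, and thresholds are all hypothetical, and real recourse methods (e.g. Ustun et al.'s work on actionable recourse in linear classification) are considerably more careful about which features a person can actually change.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical loan data: columns are [income, debt]; approvals require
# income to comfortably exceed debt. All thresholds are invented.
X = rng.normal(size=(1000, 2)) * [20_000, 10_000] + [50_000, 20_000]
y = (X[:, 0] - 1.5 * X[:, 1] > 15_000).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

def recourse(x, model, max_iter=200):
    """Greedily nudge the mutable features until the decision flips,
    returning the changes themselves (the recourse) rather than a
    post-hoc justification of the denial."""
    steps = [("raise income by", 0, 1_000.0),
             ("reduce debt by", 1, -1_000.0)]
    x = np.asarray(x, dtype=float).copy()
    totals = {label: 0.0 for label, _, _ in steps}
    for _ in range(max_iter):
        if model.predict([x])[0] == 1:             # approved
            return [f"{label} {abs(t):,.0f}"
                    for label, t in totals.items() if t]
        # take the single step that most improves the approval score
        label, i, delta = max(
            steps,
            key=lambda s: model.decision_function([x + np.eye(2)[s[1]] * s[2]])[0],
        )
        x[i] += delta
        totals[label] += delta
    return None                                     # no recourse within budget

applicant = np.array([40_000.0, 30_000.0])          # currently denied
print(recourse(applicant, model))                   # e.g. ['reduce debt by 14,000']
```

The key design choice is that the search is restricted to features the applicant can plausibly act on (income, debt), never immutable ones such as age, which is what distinguishes recourse from a mere explanation.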
Contestability at the core
Contestability means building mechanisms for questioning and disagreeing with AI outputs into the core of the system. This challenges the common practice of bolting dispute resolution on as an external layer after errors have already occurred. The goal is to design systems that actively invite disagreement and questioning, fostering a more resilient and accountable AI landscape.
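As a rough illustration of what "contestability in the core" might mean in code, the hypothetical sketch below makes filing a dispute a first-class operation on the decision record itself, so no decision can exist without a contest channel attached. The class and field names are invented for illustration, not taken from any specific framework.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Contest:
    reason: str
    filed_at: datetime
    status: str = "open"            # open -> under_review -> resolved

@dataclass
class Decision:
    subject_id: str
    outcome: str
    rationale: str                  # shown to the affected person
    contests: list[Contest] = field(default_factory=list)

    def contest(self, reason: str) -> Contest:
        """Filing a dispute is an operation on the decision itself,
        not a separate process bolted on after the fact."""
        c = Contest(reason=reason, filed_at=datetime.now(timezone.utc))
        self.contests.append(c)
        return c

d = Decision("applicant-42", "denied", "debt-to-income ratio above threshold")
d.contest("my reported debt includes a loan that was already repaid")
print(len(d.contests), d.contests[0].status)        # 1 open
```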
Addressing fairness and bias is crucial, but it alone doesn't guarantee ethical AI. The Gender Shades research on commercial facial analysis revealed significant disparities in accuracy across gender and skin type. The emphasis should shift from merely equalizing performance across groups to questioning the underlying task itself: how it is framed, where it is deployed, and who owns it. Who creates a system, who uses it, and who holds power over it all play pivotal roles in ethical considerations.
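A first, necessary (though not sufficient) step is to evaluate performance disaggregated by subgroup rather than in aggregate, as Gender Shades did. Below is a minimal sketch of such an evaluation; the group labels and data are hypothetical.

```python
import numpy as np

def disaggregated_accuracy(y_true, y_pred, groups):
    """Return overall accuracy plus accuracy broken out by subgroup."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    report = {"overall": float((y_true == y_pred).mean())}
    for g in np.unique(groups):
        mask = groups == g
        report[str(g)] = float((y_true[mask] == y_pred[mask]).mean())
    return report

# Hypothetical audit data: a classifier that looks fine in aggregate
# but fails badly for one subgroup.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 1, 1]
groups = ["lighter", "lighter", "lighter", "lighter",
          "darker", "darker", "darker", "darker"]

print(disaggregated_accuracy(y_true, y_pred, groups))
# {'overall': 0.75, 'darker': 0.5, 'lighter': 1.0}
```

An aggregate accuracy of 75% hides a 50-point gap between groups, which is exactly the pattern the Gender Shades audit surfaced in commercial systems.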
Harmful implications at scale
The scale at which AI is deployed can exacerbate ethical concerns. Australia's Robodebt program, which used automated income averaging to issue unlawful debt notices, exemplifies the dark side of automation's ability to centralize power. Automated systems that lack error-checking and appeal mechanisms can inflict widespread harm and replicate biases at unprecedented scale, as numerous real-world examples have shown.
An often-overlooked aspect of AI ethics is that the people directly impacted usually recognize the problems first. The use of facial recognition to identify protesters and Facebook's role in the Myanmar genocide both underscore the importance of listening to the concerns raised by affected communities. Giving those most impacted a voice and real avenues for participation is crucial to mitigating ethical risks.
Feedback loops and shifting power
Feedback loops, in which a model helps create the very outcomes it predicts, pose a further challenge. When a model's biased outputs feed back into its training data, existing biases are amplified rather than corrected. This is why the question is not only whether a system is fair, but how it shifts power: AI applications should be evaluated on how they redistribute or concentrate power in society.
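A toy simulation makes the dynamic visible. In the hypothetical sketch below, loosely inspired by analyses of runaway feedback loops in predictive policing (e.g. Ensign et al., 2018), two districts have identical true incident rates, but the model only observes incidents where it sends patrols, so a skew in the historical data never self-corrects. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

true_rate = np.array([0.3, 0.3])   # both districts have the SAME true rate
counts = np.array([5.0, 1.0])      # but historical records are skewed

for _ in range(500):
    belief = counts / counts.sum()         # predicted "hot spot" share
    district = rng.choice(2, p=belief)     # patrol where we predict incidents
    counts[district] += rng.binomial(1, true_rate[district])

print("belief after 500 rounds:", counts / counts.sum())
# The belief never converges to the true 50/50 split: evidence is only
# collected in proportion to the belief itself, so the initial skew
# persists (and can drift further) even though the districts are identical.
```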
The talk also points to practical resources for putting ethical AI into practice. The Markkula Center for Applied Ethics' tech ethics toolkit provides actionable practices for organizations, emphasizing the importance of expanding the ethical circle to consider all stakeholders. The Diverse Voices guide from the University of Washington Tech Policy Lab offers guidance on assembling panels of under-represented groups so that a broad range of perspectives is considered.
In the evolving landscape of AI, ethical practice has to go deeper than surface-level concepts. This article has highlighted the complexities involved, urging stakeholders to look closely at actionable recourse, contestability, feedback loops, and the impacts on marginalized communities. By adopting participatory and democratic approaches, the field can move toward a more responsible and ethical integration of AI technologies.