In today’s digital age, artificial intelligence (AI) plays an increasingly pivotal role in our daily lives, from personalized streaming recommendations to healthcare applications. However, this proliferation of AI raises concerns about algorithmic bias: the tendency of AI systems to produce prejudiced results due to underlying assumptions or imbalanced training data.
To tackle this issue head-on, the IEEE Standards Association (IEEE SA) has launched the P7003 Working Group to establish a comprehensive framework for creating AI systems that mitigate bias and ensure fairness.
Algorithmic bias, a pressing concern in AI, arises when machine learning algorithms inadvertently produce results that favor certain groups or attributes over others. This bias is often rooted in the data the algorithms are trained on, and its consequences can be profound. For instance, facial recognition software trained predominantly on one demographic can result in misidentification or exclusion of other groups, leading to real-world consequences like delays for travelers at border security.
Three sources of bias
The IEEE P7003 Working Group identifies three primary sources of bias that can manifest in AI systems:
Bias by the algorithm developers: This type of bias stems from the optimization targets set by developers. For example, an algorithm designed to maximize worker output in a business system may inadvertently disregard worker health, leading to bias.
Bias within the system: Some systems inherently exhibit performance differences among various categories, such as discrepancies in facial recognition accuracy based on race and gender.
Bias by system users: Users can also introduce bias when interpreting and acting upon algorithmic outputs. Confirmation bias, where users accept information that aligns with their preexisting beliefs without fact-checking, is one such example.
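The second source, bias within the system, is the most directly measurable of the three. IEEE P7003 does not prescribe code, but as a minimal illustration of how such performance differences might be surfaced, the hypothetical sketch below compares a classifier's accuracy across demographic groups (the data and group labels are invented for demonstration):

```python
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Compute accuracy separately for each demographic group.

    y_true, y_pred: sequences of true and predicted labels;
    groups: the group label associated with each sample.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical toy data: the classifier performs worse on group "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]
groups = ["A", "A", "A", "B", "B", "B", "B", "A"]
print(per_group_accuracy(y_true, y_pred, groups))
# → {'A': 1.0, 'B': 0.5}
```

A gap like the one above (perfect accuracy for one group, 50% for another) is exactly the kind of disparity reported in facial recognition systems, where error rates differ by race and gender.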
The role of IEEE P7003
IEEE SA’s P7003 Working Group aims to address these issues by providing a development framework for AI systems that avoids unintended, unjustified, and differentially harmful outcomes for users. The group works in conjunction with the IEEE CertifAIEd criteria for certification in algorithmic bias, ensuring that AI systems are developed with fairness and ethics in mind.
While bias is often seen as detrimental, there are scenarios where intentional bias is acceptable and essential. For instance, a healthcare app designed to assist men in managing prostate health should naturally be biased toward male users, as the medical context dictates. Conversely, an app aimed at breast cancer awareness should remain unbiased to serve both sexes effectively.
Evaluating and managing bias risk
To foster awareness and understanding of AI system biases, IEEE P7003 offers recommendations for bias-aware AI system development:
Use a bias profile: Employ a bias profile to assess and understand the impact and risk of bias in the system.
Consider intention and context: Clearly define the system’s intention and understand the context in which it operates, ensuring alignment with stakeholders’ needs.
Task definition: Clearly define the system’s tasks and ensure its results align with these tasks.
Stakeholder awareness: Understand the users and stakeholders who interact with or are affected by the system.
Regular evaluation: Periodically re-evaluate the system for bias throughout its lifecycle as usage and stakeholders can evolve.
Contextual adaptation: Revisit the bias profile if the system is deployed in a new context, considering how its behavior may need to adapt.
Diverse development teams: Encourage diverse teams of developers and evaluators to bring different perspectives and reduce bias.
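A bias profile is a document rather than code, but the regular evaluation it recommends can include simple quantitative checks. As one hedged illustration (the function names and data are hypothetical, and the four-fifths threshold is a common fairness heuristic rather than part of P7003), the sketch below computes per-group selection rates and flags a large disparity between them:

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Positive-outcome (e.g. 'approved') rate for each group."""
    positive = defaultdict(int)
    total = defaultdict(int)
    for decision, group in zip(decisions, groups):
        total[group] += 1
        positive[group] += int(decision)
    return {g: positive[g] / total[g] for g in total}

def disparate_impact(decisions, groups):
    """Ratio of the lowest to the highest group selection rate.

    1.0 means parity; a common rule of thumb (the 'four-fifths
    rule') flags ratios below 0.8 for further review.
    """
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical loan decisions: group "A" is approved 75% of the
# time, group "B" only 25% of the time.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(round(disparate_impact(decisions, groups), 3))
# → 0.333
```

Re-running a check like this periodically, and whenever the system is deployed in a new context, is one concrete way to act on the evaluation and adaptation recommendations above.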
Towards ethical AI development
IEEE P7003 aims to provide individuals and organizations with methodologies that emphasize accountability and clarity in designing, testing, and evaluating algorithms, helping them avoid unjustified differential impacts on users. Applying these methodologies also enables algorithm creators to demonstrate to regulatory authorities and users that the most up-to-date best practices are employed to prioritize ethical considerations in AI development.
This initiative aligns with the broader movement for ethical AI, as seen in the recent release of the IEEE publication “Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems.” The document encourages technologists to prioritize ethical considerations in creating autonomous and intelligent technologies, further emphasizing the importance of ethical AI development.