OpenAI, a leading artificial intelligence startup, has announced the formation of a new team dedicated to integrating public opinion into the development and behavior of its AI models. The Collective Alignment team, unveiled on January 16, is a direct outcome of a grant program OpenAI launched in May, which awarded $1 million across ten projects. These projects experimented with “democratic inputs to AI”, an approach aimed at incorporating a wider range of perspectives and concerns into AI governance.
Crowdsourcing AI ethics through a democratic approach
The newly formed team will consist of a diverse group of researchers and engineers tasked with building systems that collect public input and incorporate it into the behavior of OpenAI’s models. This work is meant to tackle several pressing challenges, including bridging the digital divide, reaching across polarized groups, and ensuring diverse voices are represented in AI development. By engaging with external advisors and the grant teams, the Collective Alignment team plans to run pilot programs that test how public opinion can be integrated into the steering of AI models.
This initiative aligns with OpenAI’s commitment to a democratic process in determining the guiding principles of AI systems. As stated in a May blog post, the company defines a democratic process as one “involving a broadly representative group of people who exchange opinions, engage in deliberative discussions, and decide on outcomes through a transparent decision-making process.” The Collective Alignment team embodies this ethos, working to ensure that AI development is not just technically sound but also democratically grounded.
Global participation and transparency in AI governance
One of the key aspects of this initiative is its global reach. By awarding grants to teams around the world, OpenAI aims to gather a wide range of opinions and insights, enriching the AI development process with diverse perspectives. The approach also speaks to growing concerns about the use of AI in policy-making and the need for transparency in how AI is applied in democratic processes. Deliberation sessions held during the grant program suggested that public involvement can increase optimism about AI’s role in society, pointing to a positive effect on public perception when transparency and participation are prioritized.
Moreover, OpenAI’s commitment to sharing the code and summaries of work from the grant program underscores its dedication to openness and collaborative progress. By making these resources available, the company fosters a more inclusive and informed dialogue around AI, enabling broader participation in shaping the technology’s future.
Joining forces for a responsible AI future
To build a robust and interdisciplinary team, OpenAI is actively seeking exceptional research engineers from various technical backgrounds. The goal is to assemble a team capable of tackling the complex and multifaceted challenges inherent in aligning AI development with diverse public interests. The company’s call for applications is an open invitation for experts worldwide to contribute their skills and perspectives to this pioneering effort.
The formation of the Collective Alignment team is a significant step in OpenAI’s journey toward responsible AI development. By integrating public opinion and democratic processes into AI governance, the company sets a new standard in the field, acknowledging the importance of ethical considerations and societal impacts in technology development. As AI continues to advance and permeate more aspects of life, such initiatives become increasingly crucial in ensuring that the technology develops in a way that is beneficial to, and accepted by, society at large.