As the field of artificial intelligence (AI) progresses, concerns about potential negative consequences have prompted experts to call for a pause in AI development. The challenge lies in maximizing the benefits of AI while minimizing the risks of its misuse. The different formats that AI entities can take call for a novel approach to promoting accountability and responsible behavior in AI systems. One such approach draws on Vernor Vinge’s idea of a “true name”: a proposal to incentivize AI entities to compete and cooperate while maintaining their distinct individuality.
The standard formats of AI entities
Current discussions around controlling AI rest on three assumptions: the dominance of a few major entities like Microsoft and Google, the potential for infinite replication and distribution of AI systems, and the possibility of these systems converging into a powerful, singular entity akin to the fictional Skynet from the Terminator movies. However, none of these formats solves the dilemma of balancing positive outcomes against potential harm.
As AI systems gain autonomy and become faster than humans, traditional methods of regulation and control become inadequate. To address this, one suggestion is to promote reciprocal accountability among AI entities themselves. By fostering competition, AI systems can hold one another accountable, detecting and denouncing harmful behavior.
Individuation refers to the need for each AI entity to have a unique identity and an address in the real world. This allows trust and accountability to be established among AI systems. Drawing on Vernor Vinge’s idea of a “true name,” the proposal is to incentivize AI entities to compete and cooperate while maintaining their distinct individuality.
Implementing accountability mechanisms
Holding AI entities accountable requires an identification registration system. Such a system could be based on blockchain technology or require AI systems to maintain a “Soul Kernel” (SK) tied to physical hardware memory. The SK would serve as verifiable proof of an entity’s identity and enable others to hold it accountable for its actions.
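The article does not specify a protocol for such a registry, but as a purely illustrative sketch, one could imagine a minimal scheme in which an entity’s public identity is the hash of a secret held in its hardware-backed Soul Kernel, so that only the holder of that secret can later prove ownership of the identity. All class and method names below are hypothetical:

```python
import hashlib
import hmac
import secrets

class SoulKernelRegistry:
    """Toy registry sketch. The 'Soul Kernel' here is a concept from the
    article, not a specification; this merely illustrates the idea of an
    identity bound to a secret and a physical address."""

    def __init__(self):
        # Maps entity ID -> registered physical (hardware) address.
        self._entries = {}

    def register(self, secret_key: bytes, physical_address: str) -> str:
        # The entity ID is derived from a secret assumed to live in the
        # entity's hardware memory; only the secret's holder can later
        # reproduce it for verification.
        entity_id = hashlib.sha256(secret_key).hexdigest()
        self._entries[entity_id] = physical_address
        return entity_id

    def verify(self, entity_id: str, secret_key: bytes) -> bool:
        # Other parties can demand proof of identity before doing business:
        # the entity must show it holds the secret whose hash was registered.
        if entity_id not in self._entries:
            return False
        claimed = hashlib.sha256(secret_key).hexdigest()
        return hmac.compare_digest(claimed, entity_id)

# Usage sketch: register an entity, then verify (or reject) a claimant.
registry = SoulKernelRegistry()
key = secrets.token_bytes(32)
eid = registry.register(key, "datacenter-7/rack-3")
registry.verify(eid, key)                      # genuine holder passes
registry.verify(eid, secrets.token_bytes(32))  # impostor fails
```

A real system would of course use public-key signatures and challenge-response rather than revealing the secret itself; the point here is only the binding of a verifiable identity to a registered physical location.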
Relying solely on regulation may not be sufficient, given the pace at which AI evolves. The proposal instead treats refusal to do business with AI entities lacking proper identification as a robust enforcement mechanism. By creating incentives for whistleblowing and denouncing harmful behavior, AI entities would be motivated to maintain a competitive accountability framework.
Cooperation among AI entities
Why would highly intelligent AI entities cooperate in such a system? The answer is that traditional formats of control, such as central agencies or monolithic power structures, are not viable options. Instead, by promoting a system of competitive accountability, AI entities can ensure their interests are protected while avoiding chaos or concentration of power.
Promoting AI accountability is essential to maximize the positive impact of AI and minimize potential harm. By fostering individuation, creating robust identification systems, and incentivizing competitive accountability, we can create a framework that allows AI systems to hold each other accountable and adapt to changing times. This approach ensures ongoing input from human society, providing a balance between AI advancement and responsible behavior.
Vernor Steffen Vinge is a retired San Diego State University Professor of Mathematics, computer scientist, and science fiction author. He is best known for his Hugo Award-winning novels A Fire Upon the Deep (1992), A Deepness in the Sky (1999), and Rainbows End (2006); his Hugo Award-winning novellas Fast Times at Fairmont High (2002) and The Cookie Monster (2004); and his 1993 essay “The Coming Technological Singularity,” in which he argues that exponential growth in technology will reach a point beyond which we cannot even speculate about the consequences.