In a world increasingly shaped by artificial intelligence (AI), many are left wondering who decided to usher in this era of disruption. Did you, like so many others, miss the memo on AI’s grand entrance into our lives? The truth is, the pivotal decisions regarding AI development and deployment have been largely orchestrated by a select few, with minimal input from the rest of us. This lack of democratic involvement is a cause for concern and could lead to growing unrest.
The power imbalance: The few steering AI
A handful of AI researchers are the architects of this technological revolution, driven by the belief that AI holds the potential for unprecedented human progress. They have taken it upon themselves to push the boundaries of AI capabilities, but in doing so, they have openly acknowledged their diminishing control over the technology and its potential consequences. This raises the question: Should a small group of experts have the authority to steer society down a potentially perilous path?
It’s as if we were asked what we wanted for dinner and responded with “Thai,” only to be presented with a choice between pepperoni and Canadian bacon pizza. This isn’t a real choice; it’s a form of democratic gaslighting. A functional democracy should not allow a handful of computer scientists to make decisions that could irreversibly affect generations to come.
Consulting tech titans: Democracy's missing voices
In addition to leaving the fate of AI in the hands of a select group of AI labs, our elected officials are now seeking advice on how best to regulate this emerging technology from the same unrepresentative and unelected tech leaders. Recent headlines from Washington, D.C., have been dominated by Senator X consulting with tech leader Y. Conspicuously absent from these meetings are representatives of the communities, both foreign and domestic, that will live with AI's consequences, whether good, bad, or ugly.
It’s important to note that a significant portion of the population believes that AI should not have been introduced on this scale, or perhaps not at this point in time. If you share this concern, you might be lamenting that it’s already too late to change course. We find ourselves at the “pepperoni or Canadian bacon” stage of decision-making, where our influence over the trajectory of AI is minimal at best. Furthermore, if we were to halt the deployment of AI models, there’s a risk that perceived adversaries such as China would continue advancing their own models and potentially use them against us in future conflicts.
However, arguments in favor of unchecked AI development are far from convincing. Many would prefer to live in a United States characterized by strong communities, meaningful work, critical thinking, and trust in social institutions rather than one that merely leads the world in AI. In fact, the former version of the United States is more likely to outlast and outcompete any other nation that places excessive faith in technology as the key to human flourishing.
To address the democratic deficit in AI decision-making, we must shift the narrative from “How do we shape the development of AI?” to “When and under what conditions should we allow limited uses of AI?” In the interim, it’s reasonable for our officials to seek guidance from AI experts and leaders. However, when it comes to determining how AI transforms our society, the power should ultimately rest with the voters, not tech CEOs or a select few researchers. It’s time for a more inclusive and democratic approach to shaping our AI-driven future.