AI’s tearing America’s economy into two distinct worlds: the one that can’t survive without it and the one barely touched by it.
In competitive fields like tech and media, businesses adopt AI or get wiped out. Programmers, journalists, and graphic designers are already feeling the heat.
AI churns out code, designs, even articles faster and cheaper than any human team could. It’s ruthless, cutting down anyone who can’t keep pace. The First Amendment keeps entry into media wide open, so competition is fierce, and survival means keeping up with AI, plain and simple.
Tech’s AI frenzy
Then there’s the other half of America. Think government agencies, schools, health care, nonprofits. These institutions don’t have that “adapt or die” pressure. They’re built to last, with funding or endowments that can trickle in for decades, even if performance tanks.
A failing nonprofit doesn’t shut down overnight. Donors might never catch on. State universities with ancient tenured professors not using AI? Their doors won’t close because of it.
This creates an ugly, lopsided economy. Where AI’s taking over, efficiency soars, costs drop. Where it’s ignored, things stay slow and expensive.
The pressure on media is like a chokehold, as AI offers speed and cost savings that traditional approaches just can’t match. But there’s one government institution where that adapt-or-die pressure still applies: the U.S. military.
The Biden administration’s directive is meant to lock in America’s spot as a leader in military AI, with agencies instructed to grab the top AI tech and build “safe and powerful” systems. But it’s more than just buying better tech; it’s about survival on the world stage.
As a senior official put it, “Our competitors want to upend U.S. AI leadership,” and they’re not playing nice. They’re allegedly using espionage, hacking, whatever it takes. America’s answer? Defense protocols and chip supply chain security to keep AI tech on American soil.
Government and academia’s slow march
Professors, especially those with tenure, face zero pressure to use AI. A tenured prof can ignore AI tools, keep teaching as usual, and face no consequences. The students, on the other hand, don’t have that luxury.
They’re using AI for assignments and projects (sometimes even for cheating, allegedly) because, unlike their professors, they know failure is a real possibility. Academia’s split between a student body eager for AI and institutions stuck in time.
Bureaucrats use AI here and there, but only for convenience. Drafting documents, answering emails, summarizing files? Sure, AI makes the job easier. But there’s no drive to reinvent how the government operates.
It’s a system built on stability and job security, not agility. Bureaucratic institutions, with their near-permanent lifespans, don’t face the same incentives to embrace AI that private industry does. That means government services stay mostly old-school while the private sector goes full throttle into AI.
AI’s tug of war in defense
A new White House memo outlines plans to secure the top AI systems for national defense, and the Pentagon’s AI arsenal includes autonomous and semi-autonomous systems where human oversight is “appropriate” but not always essential.
The U.S. is already using AI to identify targets faster, hoping to give its military a tactical edge in conflicts. Hundreds of AI projects are in the pipeline, all geared toward more efficient and effective warfare. But not everyone’s thrilled about AI’s military involvement.
Former Joint Chiefs Chairman Mark Milley called it a “Pandora’s box” in a recent speech, warning that AI in war could open doors to scenarios we aren’t ready to handle.
Internationally, America’s stance on AI weapons doesn’t win it much praise. Fifty countries back its approach, but the UN, led by Secretary-General Antonio Guterres, wants a full ban on autonomous weapons by 2026 — a move the U.S. is unlikely to join.