Michelle Bowman, a Federal Reserve Governor, has a message for policymakers itching to regulate artificial intelligence: slow down.
Speaking in Washington, Bowman warned that jumping into strict rules could backfire. “We need not rush to regulate,” she said. Her primary concern is that over-regulation could drive innovation out of the banking sector entirely, leaving valuable tools like AI on the sidelines.
AI, according to Bowman, has a lot of potential in finance. It can make systems more efficient, crack down on fraud, and widen access to credit. The technology could also lend central bankers a hand by improving the reliability of economic data.
“Perhaps the broader use of AI could act as a check on data reliability,” she suggested. She also said:
“AI tools may also be leveraged to fight fraud. One such use is in combatting check fraud, which has become more prevalent in the banking industry over the last several years.”
AI’s effect on labor and policy
Bowman is also keeping an eye on how AI is reshaping labor markets and economic fundamentals. The technology is altering productivity levels, influencing employment, and potentially shifting the natural rate of interest. She believes these forces will play an increasingly important role in monetary policy discussions.
The numbers hint at why this matters. Over the last two years, US labor productivity has surged, growing at an annual average of 2.3%. That’s nearly double the 1.3% average seen in the decade before the pandemic.
Bowman isn’t ready to fully credit AI for the uptick, but she did acknowledge that it may be playing a part. Other Federal Reserve officials, including Lisa Cook, agree. Cook expects AI to keep boosting productivity but warns that predicting its exact impact is still a guessing game.
For policymakers, these changes are critical. Productivity changes and labor market disruptions could force the Fed to rethink its strategies.
“When we consider AI risks, many of these are already well-covered by existing frameworks. For example, AI often depends on external parties—cloud computing providers, licensed generative AI technologies, and core service providers—to operate.”
– Bowman
The reality of AI regulation in the US
If you’re looking for clear rules on AI in America, good luck. Federal regulation is a patchwork at best, leaving states to fill the gaps. Bowman’s cautious approach comes against the backdrop of this fragmented system, which many in the industry find frustrating.
At the federal level, the National Artificial Intelligence Initiative Act of 2020 aimed to boost AI development. President Biden followed up with an executive order in 2023 to promote safe, transparent AI practices. But critics on all sides say these efforts either don’t go far enough or go too far.
Now, the regulations may change again. President-elect Donald Trump has made his intentions clear. He plans to scrap Biden’s executive order, calling it a “roadblock to innovation.”
Meanwhile, states like California and Colorado are not waiting for Washington to get its act together. California is leading the charge with laws like the AI Transparency Act, which requires companies whose AI systems are used by more than one million people to clearly label AI-generated content.
Colorado, on the other hand, has outlawed algorithmic discrimination, ensuring AI systems don’t harm individuals based on race, gender, or other protected traits. Both states are setting standards, but their rules differ so much that companies operating across state lines are left scrambling.
The cost of fragmentation
This disjointed regulatory framework is a massive compliance headache for businesses. Companies must juggle varying requirements from state to state, risking penalties for falling short.
For instance, California’s laws demand transparency tools, but these aren’t mandatory in states with looser rules. This creates a minefield for AI developers.
Consumers also face uneven protections. California residents benefit from strict disclosure rules, while people in other states might not even know when they’re interacting with AI. Bowman’s warning about overregulation is valid, but underregulation poses risks, too.
Experts worry this fragmented approach will leave the US trailing behind other global players. China, for example, is moving full steam ahead with centralized AI oversight, while European countries are setting unified standards. If the US can’t settle on a cohesive strategy, it risks falling behind in both innovation and accountability.
Bowman finished her speech with: “Artificial intelligence has tremendous potential to reshape the financial services industry and the broader world economy. While I have suggested in my remarks that we need not rush to regulate, it is important that we continue to monitor developments in AI and their real-world effects.”