# Regulatory Capture in AI: How Fear of Competition Drives Policy
Leading AI companies are increasingly advocating for government regulation that would constrain their competitors while preserving their market advantages. Recent statements by OpenAI and Anthropic reveal a pattern of using national security concerns to push for regulatory barriers that would primarily benefit established players. This strategy bears striking similarities to Microsoft's approach against open-source software in the late 1990s, where Fear, Uncertainty, and Doubt (FUD) tactics were deployed to maintain market dominance.
## The New FUD: National Security as Competitive Shield

### OpenAI's Regulatory Campaign
Recent TechCrunch reporting reveals OpenAI has submitted a policy proposal to the Trump administration characterizing Chinese AI lab DeepSeek as "state-subsidized" and "state-controlled." The proposal recommends that the U.S. government consider banning models from DeepSeek and similar "PRC-produced" operations in all countries considered "Tier 1" under export control rules.
What undercuts this framing is that DeepSeek's models are open-weight: anyone can download and run them on their own infrastructure, which leaves no channel through which the Chinese government could collect user data. Companies including Microsoft, Perplexity, and Amazon already host these models on their own infrastructure without such concerns.
### Anthropic's Security Narrative
In parallel, Anthropic CEO Dario Amodei recently claimed that "many of these algorithmic secrets, there are $100 million secrets that are a few lines of code," suggesting that Chinese spies are targeting these valuable intellectual assets. Speaking at a Council on Foreign Relations event, Amodei called for increased government intervention to protect leading AI companies from what he described as "large-scale industrial espionage."
## Historical Parallels: The Halloween Documents
This pattern closely mirrors Microsoft's strategy against Linux and open-source software in the late 1990s, documented in the infamous "Halloween Documents." Those internal Microsoft memos outlined a strategy to undermine open source through:
- Spreading Fear, Uncertainty, and Doubt about open-source reliability and security
- Emphasizing potential risks to enterprise users
- Creating proprietary extensions to prevent standardization
- Using regulatory and legal mechanisms to maintain market barriers
## The Real Concern: Commoditization
The underlying fear driving these regulatory pushes isn't primarily about national security, but about business fundamentals:
- Open-source AI models are rapidly improving and dramatically less expensive to deploy
- As AI becomes commoditized, the economic moats justifying the valuations of companies currently operating at a loss begin to erode
- When anyone can run high-quality AI models, the premium pricing model collapses
- Government intervention helps maintain artificial scarcity in what would otherwise become a commodity market
## Policy Consequences
This regulatory capture approach has several concerning implications:
- Innovation Concentration: Development becomes concentrated in a few companies rather than distributed across a diverse ecosystem
- Barrier Asymmetry: Regulatory barriers disproportionately impact smaller players and open-source projects
- Resource Misallocation: Public resources get diverted to protect private companies' competitive advantages
- International Fragmentation: Technological nationalism risks creating incompatible regulatory regimes across jurisdictions
## The Revealing Pattern

How these companies respond to open-source competition is itself telling. If their technology were as exceptional and difficult to replicate as claimed, open alternatives wouldn't pose such a threat. The intense push for regulatory barriers suggests:
- Technological Gap Closure: The quality difference between proprietary and open models is rapidly shrinking
- Margin Pressure: Future profitability projections cannot withstand competition-driven price reductions
- Valuation Protection: Regulatory barriers help maintain the narrative supporting current valuations
The emerging competitive landscape in AI is exposing the economic reality behind the industry's most prominent players—a reality they'd prefer to obscure through regulatory intervention rather than face in an open market.