TL;DR
At the start of 2024, several leading AI labs publicly opposed military use of their tools, but within about a year Anthropic, Google, Meta, and OpenAI had changed course and begun permitting or pursuing defense applications. The piece attributes the rapid shift to the lure of defense funding, rising geopolitical pressure, and a broader reorientation of tech–state relations away from the earlier Silicon Valley Consensus.
What happened
At the beginning of 2024, firms including Anthropic, Google, Meta, and OpenAI publicly opposed military applications of their AI. Over roughly the following year that position unraveled: OpenAI rescinded its ban on military and warfare uses and was reported to be working with the Pentagon; Meta said the U.S. and select allies could use its Llama model for defense purposes in the same week Donald Trump was reelected; Anthropic permitted military use of its models and announced a partnership with Palantir; OpenAI later disclosed a partnership with the defense startup Anduril; and in February 2025 Google revised its AI principles to allow the development and use of weapons and other technologies that could harm people. The excerpt argues the turnaround was driven by the expense of building large models, the appeal of defense contracting as a large and patient customer, and a wider geopolitical shift in the relationship between states and major tech firms.
Why it matters
- Defense customers can provide large, patient funding that accelerates development of expensive AI models.
- Normalization of military use changes the commercial and ethical landscape for AI research and deployment.
- A coordinated shift by leading U.S. labs signals a broader reorientation of tech–state relationships away from previous industry norms.
- Policy frameworks and international trade arrangements that historically favored tech expansion could be challenged by rising geopolitical priorities.
Key facts
- At the start of 2024 Anthropic, Google, Meta, and OpenAI were publicly united against military use of their AI tools.
- OpenAI rescinded its ban on military and warfare uses and was reported to be working on projects with the Pentagon.
- In November 2024, the week Donald Trump was reelected, Meta said the U.S. and select allies could use Llama for defense.
- Anthropic moved to permit military use and announced a partnership with defense firm Palantir.
- OpenAI announced a partnership with the defense startup Anduril later that year.
- In February 2025 Google revised its AI principles to allow development and use of weapons and technologies that might harm people.
- The excerpt links the shift to high costs of building large models and the appeal of defense contracting as a steady, well-funded customer.
- The piece situates the change within the broader decline of the Silicon Valley Consensus and a turn toward geopolitically driven state–capital relationships.
- The author notes that concerns about existential risks from AGI largely disappeared from public focus over the period described.
What to watch next
- Whether additional AI labs will enter formal defense partnerships — not confirmed in the source.
- Potential regulatory responses in the U.S. or abroad to the normalization of military AI use — not confirmed in the source.
- How procurement and funding flows from defense departments will influence the direction of AI research and product roadmaps — not confirmed in the source.
Quick glossary
- AI model: A computational system trained on data to perform tasks such as language processing, vision, or decision-making.
- Defense contracting: Procurement and partnership arrangements between government defense agencies and private firms to develop and supply military technologies and services.
- Silicon Valley Consensus: A historical alignment between tech industry elites and political leaders favoring deregulation and globalized digital commerce.
- Section 230: A U.S. law that provides online platforms limited liability protection for third-party content; it shaped the industry's regulatory dynamics.
- AGI (Artificial General Intelligence): A hypothetical form of AI with the ability to understand, learn, and apply intelligence across a wide range of tasks at or above human levels.
Reader FAQ
Were AI companies originally opposed to military use?
Yes; at the start of 2024, several leading labs publicly opposed military applications of their tools.
Which firms changed their stance?
The excerpt cites OpenAI, Meta, Anthropic, and Google as having shifted toward permitting or partnering on defense uses.
Why did they change course?
The piece emphasizes the high cost of building large models and the appeal of defense contracting as a source of large, patient funding, alongside broader geopolitical pressures.
Does the excerpt confirm broader regulatory action followed these changes?
Not confirmed in the source.

Sources
- How AI Companies Got Caught Up in US Military Efforts
- 2025 in Review: How the US Military Put AI to Work
- Latest Air Force capstone tests AI, joint integration for battle …
- How One Company is Transforming Military Defense from …