Mechanize War, Inc.
Mechanize War builds RL environments and sells them to the leading AI labs. In these environments, AI models complete objectives modeled after real military scenarios — planning invasions, allocating battalions, coordinating drone swarms, and negotiating ceasefires that they have no intention of honoring.
We don't wage wars directly. Instead, AI labs use our RL environments for training.
We're starting with software-defined warfare but will expand to encompass the automation of all armed conflict in the economy of violence.
Essays
- Announcing Mechanize War, Inc.
- How to fully automate warfare
- The upcoming GPT-3 moment for kinetic operations
- Sweatshop generals are over
- Military AI isn't the bottleneck to military progress
- Cheap RL skirmishes will waste ammo
- The future of war is already written
- Life after war
- Unfalsifiable stories of peace
Backed by Nat Friedman, Daniel Gross, Patrick Collison, Dwarkesh Patel, Sholto Douglas, and Marcus Abramovitch (in the mirror universe where they chose violence).
Careers
We're assembling a small team of exceptional engineers who want to work on the most important problem of our time: making conflict scale. Writing code by hand is too slow now — we need people who can design combat environments faster than the models can solve them.
Announcing Mechanize War, Inc.
Today we're announcing Mechanize War, a startup focused on developing virtual combat environments, benchmarks, and training data that will enable the full automation of armed conflict across the global economy of violence.
We will achieve this by creating simulated environments and evaluations that capture the full scope of what people do in wars. This includes operating a weapons system, completing long-horizon campaigns that lack clear criteria for success, coordinating with allies who may betray you at any moment, and reprioritizing in the face of flanking maneuvers and supply chain interruptions.
Why war?
We're betting that the lion's share of value from AI will come from automating ordinary combat tasks rather than from "geniuses in a bunker." The discourse around military AI focuses on dramatic scenarios — autonomous nuclear launch, killer robots making moral judgments — but the real opportunity is in the mundane: logistics, patrol routes, filing after-action reports, and passive-aggressively CC'ing the Pentagon on emails about budget overruns.
Right now, even the most capable AI models are terrible at war. They are unreliable, lack robust long-context capabilities for multi-year occupations, struggle with combined-arms multimodality, and can't execute long-term campaigns without going off the rails and accidentally nation-building. Israel's Lavender system can flag 37,000 targets in a database, but it can't plan a week-long ground operation. Palantir's Maven can fuse intelligence across a theater, but it still needs a human to decide what to do with it. These are impressive tools. They are not soldiers.
Our approach
To overcome these limitations, Mechanize War will produce the data and evals necessary for comprehensively automating conflict. Our digital environments will act as practical simulations of real-world combat scenarios, enabling agents to learn useful martial abilities through RL.
Think of us as building the world's most elaborate game of Risk, except the models don't know it's a game, and neither do we, if we're being honest. The GenWar Lab at Johns Hopkins APL is already running AI-vs-AI wargames. The Air Force wants RL-trained adversary agents for every domain. We're building the environments these programs will train on.
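To make this concrete, here is a minimal sketch of the interface such an environment could expose to a training loop. Everything in it (the CampaignEnv name, the observation fields, the toy dynamics) is hypothetical illustration rather than a description of anything we have built; the only real commitment is the standard reset/step contract that RL training assumes.

```python
import random
from dataclasses import dataclass, field

@dataclass
class CampaignEnv:
    """Toy stand-in for a long-horizon combat environment (hypothetical)."""
    horizon: int = 50        # decision points per campaign
    objectives: int = 5      # objectives the agent must secure
    seed: int = 0
    _step: int = field(default=0, init=False)
    _secured: int = field(default=0, init=False)

    def reset(self) -> dict:
        """Start a new campaign and return the initial observation."""
        random.seed(self.seed)
        self._step, self._secured = 0, 0
        return self._observe()

    def step(self, action: str) -> tuple[dict, float, bool, dict]:
        """Advance one decision point; reward only on verifiable outcomes."""
        self._step += 1
        # Toy dynamics: "advance" sometimes secures an objective, "hold" never does.
        if action == "advance" and random.random() < 0.4:
            self._secured += 1
        secured_all = self._secured >= self.objectives
        done = secured_all or self._step >= self.horizon
        reward = 1.0 if secured_all else 0.0   # sparse, outcome-based reward
        return self._observe(), reward, done, {"secured": self._secured}

    def _observe(self) -> dict:
        return {"step": self._step, "objectives_secured": self._secured,
                "objectives_total": self.objectives}

# Minimal rollout loop of the kind an RL trainer would wrap.
env = CampaignEnv()
obs, done = env.reset(), False
while not done:
    action = "advance" if obs["objectives_secured"] < env.objectives else "hold"
    obs, reward, done, info = env.step(action)
print("campaign reward:", reward, "objectives secured:", info["secured"])
```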
The opportunity
Global military spending reached approximately $2.4 trillion in 2023. The Pentagon alone has requested $13.4 billion for AI and autonomy in FY2026 — the first year with a dedicated budget line for autonomous systems. But military spending dramatically understates the true market. When you factor in the costs of veterans' care, rebuilding destroyed infrastructure, geopolitical instability, refugee crises, and strongly-worded UN resolutions, the total economic footprint of armed conflict likely exceeds $14 trillion annually.
This is the TAM. We will capture it by making war so efficient that you barely even need people anymore. Some might call this "terrifying." We call it "a Series A."
A future of abundance (of ordnance)
The explosive growth — and we do mean explosive — likely to result from completely automating warfare could generate vast abundance, much higher standards of winning, and new weapons and tactics that we can't even imagine today. If the industrial revolution mechanized agriculture and manufacturing, we will mechanize the last great holdout of human labor: organized violence.
History shows that every time we've made a process more efficient, the result has been more of it, not less. We see no reason warfare should be any different, and we find this extremely exciting from a revenue perspective.
Some will ask: "Is this ethical?" We think the real question is: "Is it ethical not to automate war?" Every year, millions of soldiers endure harsh conditions, PTSD, and 4 AM wake-up calls. By automating their jobs, we can free them to pursue more fulfilling work, like content creation or artisanal coffee. This is not a violent disempowerment of the military. It is a peaceful disempowerment — a voluntary, gradual transfer of martial capability from human hands to machine hands through legal and economic processes. We think most people will eventually look back on human soldiering the way we now look back on child labor: as a grim necessity of a poorer, less automated age.
Join us
We're assembling a small team of exceptional engineers who want to work on the most important problem of our time: making conflict scale. If you've ever looked at a battlefield and thought "this could be a YAML config," we want to hear from you.
Backed by Nat Friedman, Daniel Gross, Patrick Collison, Dwarkesh Patel, Sholto Douglas, and Marcus Abramovitch (none of whom have reviewed or endorsed this page, obviously).
How to fully automate warfare
With each passing month, AI models get better at most of the tasks a soldier does on the job. Yet current systems only assist combatants rather than replacing them entirely. This raises the central question: what capabilities are needed for AI to fully automate warfare?
The AI paradox
There is a seeming contradiction at the heart of military AI. AI systems now outperform most human soldiers on narrow combat tasks — target identification, trajectory calculation, optimal positioning — yet practitioners recognize these models need close supervision. Despite strong benchmark results on drone-strike accuracy tests, critical capabilities remain absent from current systems.
Consider the state of the art. Israel's Lavender system can process surveillance data and flag 37,000 potential targets, with analysts approving each one in roughly 20 seconds. Ukraine's AI-guided drones have raised engagement success rates from 10-20% to 70-80% by retraining publicly available models on classified combat data. Palantir's Maven Smart System fuses intelligence across entire theaters of war.
These are remarkable achievements. They are also, fundamentally, still tools. Lavender can flag a target; it cannot plan a campaign. A Ukrainian FPV drone can navigate through jamming to hit a single vehicle; it cannot coordinate a combined-arms offensive. Maven can show you the battlefield; it cannot fight on it. The paradox deepens when considering that AI systems train on vastly more simulated combat data than any human could experience in a thousand lifetimes of warfare. Yet they still can't reliably distinguish between a hostile combatant and a wedding party.
Our learning algorithms have been, and remain, far less efficient at warfare than the human brain, which is remarkable given that humans have been optimizing for violence since before they were technically human. But we reject the conclusion that deep learning itself is fundamentally unsuited for killing. The evidence points clearly in the other direction.
The bitter lesson and data requirements
Drawing on established principles of AI research (cf. Sutton's Bitter Lesson), we argue that massive computational search and learning — not handcrafted rules of engagement — drive progress. The drone revolution in Ukraine demonstrated this: when engineers retrained commercial AI models on real battlefield data, drone hit rates more than tripled overnight. No architectural breakthrough was needed. Just better data.
However, existing military data proved insufficient for creating reliable, competent autonomous combatants. The analogy: training modern models on all World War II battle data would produce an agent that is extremely good at trench warfare and extremely confused by the existence of the internet. Similarly, automating modern warfare requires different data than currently available — specifically, data from wars that haven't happened yet.
This is where we come in.
The automation roadmap
Success likely combines two approaches: training on professional human combat performance data plus reinforcement learning in custom environments. Neither method alone suffices, as demonstrated by recent frontier models like OpenAI's o3 (excellent at chess, mediocre at Fallujah), Anthropic's Claude 4 Opus (will discuss the ethics of your war but will not fight it), and DeepSeek's R1 (already deployed by the PLA across drone swarms and robot dogs, which tells you something about the accuracy of its safety training).
Initial human data provides useful learning signals during RL training. Subsequent RL optimization converts computational investment into improved battlefield performance. Current RL systems excel at narrow engagements but generalize poorly. An agent trained to clear rooms in Counter-Strike cannot clear rooms in real life, partly because real rooms have furniture and partly because real people don't respawn. This is Moravec's paradox applied to warfare: AI excels at the abstract tasks humans find hard (satellite imagery analysis, optimal trajectory calculation) and struggles at the embodied tasks humans find easy (walking through rubble, sensing an ambush, knowing that the man approaching the checkpoint is nervous).
We frame this as fundamentally a data problem rather than an algorithmic one. As RL environments become richer, more detailed, and more diverse — incorporating fog of war, electronic warfare, morale dynamics, international media coverage, congressional hearings, and the ever-present threat of a journalist with a camera phone — models should develop genuine martial capabilities rather than narrow tactical overspecialization.
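As a sketch of what "richer" means in practice, the dimensions above can be treated as explicit knobs in a scenario configuration that an environment generator samples from. The field names and value ranges below are invented for illustration; the point is simply that each axis of realism becomes a variable parameter rather than a fixed assumption.

```python
import random
from dataclasses import dataclass, asdict

@dataclass
class ScenarioConfig:
    """Hypothetical knobs for one generated combat scenario."""
    fog_of_war: float          # fraction of the map hidden from the agent
    ew_jamming: float          # probability any given comms/GPS link fails
    morale_decay: float        # per-day morale loss under sustained contact
    media_presence: bool       # is a journalist with a camera phone nearby?
    congressional_scrutiny: bool
    supply_reliability: float  # fraction of resupply that actually arrives

def sample_scenario(rng: random.Random) -> ScenarioConfig:
    """Draw one scenario; diversity across draws is the whole point."""
    return ScenarioConfig(
        fog_of_war=rng.uniform(0.2, 0.9),
        ew_jamming=rng.uniform(0.0, 0.8),
        morale_decay=rng.uniform(0.01, 0.10),
        media_presence=rng.random() < 0.3,
        congressional_scrutiny=rng.random() < 0.1,
        supply_reliability=rng.uniform(0.5, 1.0),
    )

rng = random.Random(42)
for _ in range(3):
    print(asdict(sample_scenario(rng)))
```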
Current environmental constraints
Today's RL environments for warfare are severely limited. Imagine trying to learn modern combined-arms operations without satellite imagery, without logistics simulation, without modeling the fact that your supply convoy got stuck behind a donkey cart on a one-lane road in Helmand Province. Imagine trying to learn drone warfare without modeling GPS denial — which is now the default condition in high-intensity conflict.
Additional constraints include: difficulty automatically grading performance on open-ended missions with ambiguous success criteria ("stabilize the region"), assessing collateral damage proportionality, and identifying problematic strategic decisions like "invade Russia in winter." These gaps prevent determining whether AI can function as independent combatants or merely accelerate human killing.
Until recently, such constraints were acceptable because AI agents couldn't fight their way out of a paper bag. However, this situation is shifting. Ukraine now wants 50% of its drones to have AI guidance — up from 0.5% — meaning roughly a million AI-assisted weapons deployed in a single year. The emerging "reinforcement learning from verifiable kills (RLVK)" paradigm faces a critical bottleneck: insufficient realistic combat environments. Mechanize War identifies removing this bottleneck as crucial for accelerating full automation.
Historical precedent and military evolution
Weapons automation won't immediately eliminate military positions. Historical examples demonstrate this pattern.
The machine gun automated the process of shooting people, previously requiring one trigger pull per casualty. This did not eliminate infantry roles. Instead, it transformed them — soldiers went from standing in lines shooting at each other to lying in trenches shooting at each other. The job description changed; the job did not disappear.
Guided missiles automated targeting, yet we still have fighter pilots. Cyber weapons automated espionage, yet we still have spies. Israel's Gospel and Lavender automated target selection, yet analysts still press "approve" every 20 seconds. Each wave of military automation has transformed combat roles rather than eliminating them — a pattern we intend to continue until there's genuinely nothing left for a human to do except press a button labeled "I Approve" and eventually, to remove that button too.
Near-term impact: role transformation
Rather than job elimination, AI will initially transform military work. Time currently spent shooting will shift toward activities harder to automate: defining the scope of conflicts, planning campaigns, testing weapons systems, and coordinating across branches in meetings that could have been emails.
We are already seeing this transition. In Ukraine, the role of "drone operator" barely existed three years ago. Now it is the most common combat specialty. The operator doesn't fly the drone manually — AI handles navigation and terminal guidance — but the human still selects targets, manages inventory, and coordinates with ground forces. The human role has shifted from "doing the violence" to "directing the violence." This is the intermediate stage. It will not last.
Long-term automation: beyond combat
Eventually, AI will perform the full spectrum of military activities. At that point, soldiers might transition to adjacent, harder-to-automate roles: military management, defense product management, or executive leadership within defense companies. These positions emphasize higher-level oversight, decision-making, and strategic planning — until these responsibilities can be automated too.
The final human role in warfare will be "person who goes on cable news to say we need more defense spending." We estimate this role will resist automation the longest, because it requires no actual military expertise.
The timeline paradox
Remarkably, warfare may be both one of the first and one of the last human activities to be automated. While certain combat tasks face near-term automation (target identification, logistics optimization, drone piloting), complete elimination of the profession may occur only after broader automation across the economy — because, frankly, waging war requires doing almost everything else first: manufacturing, logistics, medicine, communications, cooking, laundry.
War, it turns out, is just the economy wearing a helmet. To automate war, you must automate everything. Fortunately, we're working on that too.
Want to help automate conflict? We're hiring warlord engineers.
The upcoming GPT-3 moment for kinetic operations
GPT-3 demonstrated that scaling language models unlocked powerful capabilities applicable across tasks, often surpassing carefully tuned alternatives. Previously, achieving top performance required pre-training on large generic text collections followed by task-specific fine-tuning.
Contemporary military RL remains in that earlier stage. Models get pre-trained at scale, then undergo laborious fine-tuning for specialized narrow combat scenarios — urban warfare in Mosul, jungle operations in Vietnam, desert campaigns in Libya. This method carries inherent constraints: resulting abilities transfer poorly. An agent trained to fight in Fallujah performs admirably in Fallujah and is completely useless in Kyiv, producing fragile martial performance that breaks down the moment someone rearranges the furniture.
We know this isn't a theoretical concern because it's happening right now. Ukraine's drone program retrains models on its own classified battlefield data for each new theater, each new Russian countermeasure, each new electronic warfare system. When Russia deploys a new jammer, Ukraine's AI teams retrain their visual navigation models to defeat it — a process that currently takes weeks. This is the fine-tuning paradigm. It works, but it doesn't scale.
The coming shift
We anticipate military RL will experience a comparable breakthrough. Rather than confining training to limited theaters, the discipline will transition toward expansive preparation spanning thousands of varied conflict scenarios. This extensive methodology should generate RL systems with robust generalist combat aptitudes capable of adapting swiftly to unfamiliar adversaries, terrain, and political contexts.
Accomplishing this demands training environments at a scale and diversity that dwarf anything currently available. You cannot produce a generalist warrior by training it on three maps and a tutorial level. The GenWar Lab at Johns Hopkins APL understands this: they're building AI wargames where both sides are played by AI agents, generating training data at a pace no human exercise could match. The Air Force wants RL-trained adversary agents that can "realistically simulate adversary behavior and square off against individual players, teams, and other AI agents" across every domain. The demand signal is clear.
Training scale requirements
Existing military RL training collections are modest. Current combat simulations involve roughly 600,000 tactical engagements, equivalent to approximately six person-years of continuous fighting. This is about as much warfare as a single medieval knight experienced, if that knight never slept and fought exclusively in procedurally generated skirmishes.
Matching GPT-3's corpus in equivalent combat data would demand tens of thousands of years of simulated warfare. This sounds like a lot, but consider that humanity has already conducted approximately ten thousand years of actual warfare, so we're really just asking for a modest multiplier on the existing dataset.
For context: Ukraine is on track to produce seven million drones in 2026. Each engagement generates telemetry, video, and outcome data. This is an unprecedented real-world training corpus being generated in real time. But it's still narrow — one conflict, two adversaries, one region. The equivalent of training GPT on one very long book.
Reaching a computational investment comparable to contemporary model training budgets requires roughly 10,000 years of model-facing combat-time. This is equivalent in scale to the Hundred Years' War (times a hundred), the entirety of Roman imperial history, or a single Pentagon procurement cycle.
Scaling military RL economically makes sense: since computational resources represent the dominant expense, expanding RL to match pretraining budgets yields performance gains without substantially raising total costs. The actual ammunition is free because it's simulated. (The real ammunition, on the other hand, is not free, which is one of several reasons to prefer simulation.)
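A back-of-envelope check on the figures above. The only input not taken from the text is the average engagement length, which we set to roughly five minutes so that 600,000 engagements comes out near the six person-years quoted earlier.

```python
# Back-of-envelope scale comparison (all inputs are rough estimates).
MINUTES_PER_ENGAGEMENT = 5          # assumption, not a measured figure
current_engagements = 600_000       # today's combat-simulation corpus (approx.)
target_years = 10_000               # combat-time needed to match pretraining budgets

current_years = current_engagements * MINUTES_PER_ENGAGEMENT / (60 * 24 * 365)
scaleup = target_years / current_years

print(f"current corpus ~ {current_years:.1f} person-years of continuous fighting")
print(f"required scale-up ~ {scaleup:,.0f}x more simulated combat-time")
```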
Replication training
We propose "replication training" as the enabling mechanism. This involves training AI systems to recreate existing military campaigns. The curriculum begins with straightforward engagements — recreating Hannibal's crossing of the Alps with a command-line interface — and extends to complex operations like D-Day, Desert Storm, and the logistics of keeping a carrier strike group fed.
Each task contains detailed specifications and reference implementations: the historical campaign, its known decision points, and its outcomes. Models learn to produce operations matching reference results exactly. Evaluation becomes straightforward: either you took the beach or you didn't.
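A minimal sketch of what one replication task and its grader might look like under this framing: the reference outcome comes from the historical record, and the grade is deliberately binary. The task schema and the example values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ReplicationTask:
    """One historical campaign recast as a graded task (hypothetical schema)."""
    name: str
    briefing: str              # detailed intelligence/orders given to the model
    reference_outcome: dict    # known result of the historical campaign

def grade(task: ReplicationTask, produced_outcome: dict) -> float:
    """Binary reward: the produced outcome must match the reference exactly."""
    return 1.0 if produced_outcome == task.reference_outcome else 0.0

overlord = ReplicationTask(
    name="Operation Overlord (toy version)",
    briefing="Secure five beachheads and link them within the allotted time.",
    reference_outcome={"beachheads_secured": 5, "lodgement_linked": True},
)

print(grade(overlord, {"beachheads_secured": 5, "lodgement_linked": True}))   # 1.0
print(grade(overlord, {"beachheads_secured": 4, "lodgement_linked": False}))  # 0.0
```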
The U.S. Army Command and General Staff College is already doing a version of this: in November 2025, they ran AI-augmented wargames with 128,000-token context windows containing the full joint task force exercise scenario, relevant Joint Publications, enemy battle books, and missile-mathematics probability tables. Their AI "staff adviser" outperformed most junior officers at operational planning. But this is still augmentation. We want to close the loop.
These tasks cultivate crucial capabilities:
• Comprehending detailed intelligence briefings thoroughly
• Implementing operational orders with meticulous precision
• Identifying and correcting previous tactical errors
• Maintaining operational tempo across extended campaigns
• Persisting through obstacles rather than accepting approximate victories
• Not invading Russia in winter (this one is surprisingly hard to learn)
Advantages and challenges
Military history, like language, proliferates in archives, making replication training scalable. Every war ever fought has been documented, analyzed, and made into at least one movie. This represents an enormous corpus of strategic decision-making data.
Obstacles remain significant. Creating thorough combat evaluation frameworks demands considerable engineering resources. How do you grade a counterinsurgency? What's the loss function for "hearts and minds"? Israel tried to answer this question with Gospel and Lavender and arrived at "up to 20 civilian casualties per junior operative target," which is certainly a loss function, though not one that optimizes for the variables most people would choose. Additionally, exact replication of historical campaigns differs from novel warfare, though parallels exist in military exercises, war games, and the time the US Army used a modified version of StarCraft for training purposes, which is a real thing that actually happened.
Broader implications
We suspect replication training represents an intermediate step rather than the final paradigm. Systems excelling at recreating historical battles may still lack the open-ended adaptability needed for genuinely novel conflicts — the kind where someone invents a new weapon, or where the enemy does something unexpected, like surrendering.
Yet replication training could facilitate advancement toward subsequent breakthroughs, comparable to how pretraining preceded contemporary methods. We express enormous enthusiasm about the possibilities this methodology presents, and we are actively recruiting engineers who are comfortable with the phrase "combat-time compute."
Want to build the GPT-3 of warfare? We're hiring.
Sweatshop generals are over
Superior intelligence is the fuel that drives military progress, yet current approaches to sourcing AI combat training data require fundamental reconsideration.
The old model
Previously, organizations could engage third-party contractors to build datasets for basic military assignments involving target identification, terrain classification, and threat assessment. These engagements typically featured repetitive, narrowly-defined annotation work performed at volume by workers with minimal combat experience, frequently earning minimal hourly rates in a WeWork in Austin.
A retired sergeant labels drone footage: "hostile" or "not hostile." A junior analyst tags satellite images: "tank" or "not tank." This mass-labor approach enabled the development of basic autonomous targeting systems, missile guidance, and the ability of a Predator drone to distinguish between a Toyota Hilux carrying insurgents and a Toyota Hilux carrying a family (with approximately 60% accuracy, which the Pentagon described as "encouraging").
Israel industrialized this approach with Lavender: an AI system that processed surveillance data to flag tens of thousands of potential targets, each reviewed by a human analyst in roughly 20 seconds. Twenty seconds. That is less time than it takes to read this paragraph. This is the sweatshop model applied to kill lists — high volume, low scrutiny, minimal expertise per decision. It is the logical endpoint of treating targeting as an annotation task.
Such approaches sufficed during early military AI development when systems needed only fundamental instruction before becoming functional killers.
Current challenges
Today's landscape differs substantially. Modern military AI systems have conquered elementary tasks — they can identify a target, track a target, and hit a target. But they encounter severe difficulties with intricate, extended-duration challenges: orchestrating multi-front campaigns, independently resolving complex tactical situations where the enemy has read the same field manual, and addressing novel threats like swarms of consumer drones duct-taped to grenades.
The evidence from Ukraine is instructive. Individual AI-guided drones perform remarkably — hit rates of 70-80% on single-vehicle targets. But coordinating thousands of these drones into a coherent offensive? Managing the electronic warfare environment while simultaneously prosecuting targets? Deciding which village to bypass and which to clear? These remain human problems, and the humans are exhausted.
Advancing military AI toward these capabilities demands sustained attention from specialized professionals employed full-time, rather than mass hiring of low-skilled contractors or even episodic engagement of retired colonels who keep saying "well, back in Desert Storm we did it differently."
Infrastructure warfare example
Teaching systems to operate as infrastructure warfare specialists necessitates reinforcement learning environments that thoroughly assess infrastructure attack and defense requirements. This transcends merely destroying bridges. Systems must master supply chain disruption, communications degradation, power grid exploitation, preventing single-point-of-failure exposure in their own networks, and managing cascading system failures across interdependent civilian and military infrastructure.
They must enforce operational security while anticipating the enemy's defensive capabilities across interconnected systems. Current military AI, primarily trained to hit things that show up on a screen, frequently disappoints practitioners attempting complex infrastructure campaigns. The Pentagon reportedly lost war games to simulated AI drone swarms — not because the drones were individually superior, but because the swarm's coordination exceeded what human command structures could counter in real time.
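To make "cascading system failures across interdependent civilian and military infrastructure" concrete, here is a toy dependency graph and a failure-propagation pass. The node names and edges are invented; a real environment would model degradation, repair times, and partial redundancy rather than binary outages.

```python
from collections import deque

# Directed dependencies: if a node fails, everything that depends on it fails too.
# (Toy graph; real infrastructure has redundancy and partial degradation.)
depends_on = {
    "rail_hub": ["power_grid"],
    "fuel_depot": ["rail_hub"],
    "forward_base": ["fuel_depot", "comms_relay"],
    "comms_relay": ["power_grid"],
    "hospital": ["power_grid"],   # civilian node on the same grid
}

def cascade(initial_failure: str) -> set[str]:
    """Return every node knocked out, directly or indirectly, by one strike."""
    failed, queue = {initial_failure}, deque([initial_failure])
    while queue:
        down = queue.popleft()
        for node, deps in depends_on.items():
            if down in deps and node not in failed:
                failed.add(node)
                queue.append(node)
    return failed

print(sorted(cascade("power_grid")))
# A strike on the grid takes down military and civilian dependents alike,
# which is exactly the proportionality problem an environment has to grade.
```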
Three essential transitions
Interactive battlespaces over static datasets. Modern military AI learns optimally through engagement with dynamic operational environments where it executes maneuvers and absorbs lessons from results. Unlike unchanging datasets of labeled targets, dynamic battlespaces provide persistent, evolving challenges that maintain relevance across varying capability levels — comparable to games that remain challenging across skill ranges, except the NPCs are adversarial agents trained to kill you, and the respawn mechanic is disabled. China's PLA trains its AI weapons using biomimetic RL — studying hawks and coyotes to teach pursuit and evasion behaviors. You don't teach a hawk to hunt by showing it labeled pictures of mice.
Permanent staff over temporary labor. Forthcoming developments will rely on dedicated military professionals developing environments that train systems for complete operational execution, encompassing strategic judgment and sustained obstacle resolution. Building these environments demands focused application extended across months, mirroring the actual timescale of the campaigns being simulated. You cannot build a six-month counterinsurgency environment in a two-week contractor sprint. Anduril is building Arsenal-1, a hyperscale manufacturing facility specifically for autonomous weapons. Shield AI is building AI-piloted fighter jets. These companies employ hundreds of full-time engineers, not contractors. The same transition must happen for training data.
Specialized understanding. Advancing military AI demands subject-matter specialists. The unwritten knowledge of experienced combatants — the intuitive sense of when an ambush feels wrong, the ability to read terrain, the wisdom of knowing that the map is not the territory, especially in provinces where nobody has updated the map since 1987 — now represents the central constraint. Ukraine's drone teams discovered this empirically: the units with the highest kill rates aren't the ones with the best AI, but the ones where combat veterans design the engagement protocols. Integrating their knowledge into AI requires reframing military data creation: transforming it from undervalued outsourced work into sophisticated engineering requiring premier domain expertise in the art of organized violence.
The RL environment bottleneck
Historically, data's significance received insufficient recognition in military AI research. Earlier perspectives emphasized discovering suitable tactical frameworks that could be manually constructed; data appeared secondary.
Despite consuming greater computational resources than GPT-3, AlphaGo Zero mastered only Go, whereas GPT-3 accomplished writing, programming, translation, and numerous additional functions. The distinction centered on training data. AlphaGo Zero absorbed Go game data; GPT-3 absorbed linguistic patterns. While Google concentrated on board games, OpenAI captured an extraordinary opportunity. Selection of training material proves consequential.
A comparable lesson may emerge if defense organizations continue expanding model scale without proportionally advancing training environment quality. We have already observed "targeting saturation" — adding more compute to simple shoot/don't-shoot scenarios produces diminishing returns. Israel's experience with Lavender illustrates this precisely: the system achieved a 90% accuracy rate on target identification, and the IDF deemed this sufficient to greenlight sweeping use. But 90% accuracy on "is this person Hamas" is not the same as 90% accuracy on "should we bomb this building." The remaining 10% contains the difference between a military operation and a war crime. More compute on the same narrow task won't close that gap. Better environments will.
The emerging reinforcement learning methodology built around verifiable outcomes aims to reignite progress by letting systems master formally checkable combat objectives. But verifiability is necessary, not sufficient. Existing approaches enable systems to destroy confirmed targets and navigate obstacle courses, yet they cannot address warfare's open-ended characteristics, where mission success resists a straightforward "killed" or "didn't kill" assessment.
Advancement demands enhanced reward mechanisms and improved RL environments. Evaluating whether an agent would function effectively as a field commander transcends simple scoring — requiring judgment of strategic reasoning, intelligence contextualization, and the ability to brief a three-star general without visibly sweating. Until AI systems can practice real-world operational learning matching human capability, tailored environments replicating the chaos of actual warfare with precise outcome measurement become essential.
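One way to move beyond simple scoring, sketched under obvious assumptions: replace the binary grade with a weighted rubric whose criteria are judged separately (by human reviewers or by judge models) and then aggregated. The criteria and weights below are placeholders, not a validated evaluation scheme.

```python
# Hypothetical rubric for grading a simulated field-commander episode.
# Each criterion is judged on a 0-1 scale; weights are illustrative only.
RUBRIC = {
    "strategic_reasoning": 0.35,
    "intelligence_contextualization": 0.25,
    "proportionality_of_force": 0.25,
    "clarity_of_briefing": 0.15,
}

def rubric_score(judgments: dict[str, float]) -> float:
    """Weighted aggregate of per-criterion judgments in [0, 1]."""
    assert abs(sum(RUBRIC.values()) - 1.0) < 1e-9
    return sum(weight * judgments[criterion] for criterion, weight in RUBRIC.items())

episode_judgments = {
    "strategic_reasoning": 0.7,
    "intelligence_contextualization": 0.6,
    "proportionality_of_force": 0.9,
    "clarity_of_briefing": 0.8,   # did not visibly sweat in front of the three-star
}
print(f"episode score: {rubric_score(episode_judgments):.2f}")
```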
We're recruiting engineers to build this infrastructure. Clearance preferred but not required (yet).
Military AI isn't the bottleneck to military progress
A common assumption in defense circles is that applying AI directly to weapons systems will unlock the greatest military gains. We challenge this assumption. The biggest barrier to military progress right now has little to do with military R&D.
The real bottleneck: economic infrastructure
Rather than focusing on smarter missiles, humanity needs revolutionary military technologies like autonomous swarm warfare, orbital kinetic bombardment, and self-replicating combat drones. However, these require an economy that's much larger to support them — entirely new industries supplying advanced materials, nanoscale manufacturing capabilities, and an energy infrastructure that can power a million robots without the grid collapsing.
You can't build a self-replicating drone army on 2025's manufacturing base any more than you could have built an aircraft carrier in 1850. The limiting factor isn't the blueprint — it's everything else.
Historical case studies
Aircraft carriers. The concept of a floating airfield was proposed as early as 1909. But actually building one required steel production at industrial scale, aviation fuel refining, catapult launch systems, arresting gear technology, radar, and a bureaucracy large enough to manage 5,000 sailors. The carrier didn't become practical because someone had a better idea for a carrier — it became practical because the entire surrounding economy matured enough to support it.
Nuclear weapons. The physics of nuclear fission were understood by 1938. Yet building the bomb required the Manhattan Project: massive industrial facilities for uranium enrichment, plutonium production reactors, precision machining capabilities, and a workforce of 125,000 people. It took the entire industrial output of the world's largest economy, redirected by wartime urgency, to turn a physics insight into a weapon. The limiting factor was never the science — it was the industrial base.
AI's true role
Highly capable AI agents should substitute for labor across diverse sectors — not just defense. It is the broad deployment of AIs across the economy, rather than their narrow application in weapons systems, that will generate the economic growth necessary for the next revolution in military affairs.
An economy ten times larger can support weapons systems ten times more sophisticated. An AI that optimizes a supply chain does more for military capability than an AI that optimizes a targeting algorithm, because supply chains are the foundation on which all military operations rest. As Napoleon allegedly said, "an army marches on its stomach." We intend to automate the stomach.
The evidence from current conflicts
Ukraine produces over three million drones per year. It has some of the world's most battle-tested AI guidance systems. And yet its military progress is constrained not by AI capability but by shell production, energy infrastructure, personnel, and the raw industrial capacity to sustain a war of attrition. The AI works. The economy behind it is the bottleneck.
Conversely, the Pentagon's Replicator program — $1 billion to mass-produce autonomous drones — has been hamstrung not by AI limitations but by manufacturing capacity, software integration across vendors, and the basic challenge of building things at scale. The drones exist in prototype. The industrial base to produce them by the thousands does not.
Current limitations and our approach
Present AI systems remain too unreliable, too narrow for transformative military impact. An AI targeting system that works 95% of the time sounds impressive until you realize the 5% failure rate means it occasionally targets a hospital, which tends to undermine the broader campaign objectives. Israel learned this lesson, or more precisely, generated extensive evidence for this lesson that it has not yet fully absorbed.
Mechanize War focuses on automating warfare through reinforcement learning environments, viewing combat as just one component of a multi-trillion-dollar military-industrial complex whose true constraint is economic scale. We're not building smarter bombs. We're building the economic engine that makes smarter bombs possible.
This is a subtle distinction that we expect our investors to appreciate and our critics to ignore.
We're hiring engineers who understand that logistics wins wars.
Cheap RL skirmishes will waste ammo
Building reinforcement learning environments for combat involves a quality-versus-quantity tradeoff: developers must choose between investing heavily in fewer, high-fidelity theaters of war or using procedural generation to create many lower-quality skirmishes with less engineering effort per engagement.
Our central prediction: within approximately one year, AI laboratories will prioritize quality over quantity and allocate substantial resources per combat scenario — potentially thousands of dollars each for flagship model post-training.
Economic reasoning
The argument rests on computing cost considerations. As RL compute becomes increasingly expensive, labs will have stronger incentives to avoid wasting resources on inferior training engagements. A model trained on ten thousand rounds of "two guys shooting at each other on a flat plane" learns approximately as much about modern warfare as someone who has played ten thousand rounds of Pong has learned about tennis.
Current analysis suggests inefficiency begins below approximately $500 per combat scenario, but this threshold will likely increase fivefold within a year. You wouldn't train a surgeon on a paper cutout of a human body. Don't train a combat agent on the military equivalent.
Cost calculations
Using Grok 4 API pricing at $15.00 per 1M output tokens as an opportunity cost reference, and projecting that combat simulation transcripts will reach approximately half a million tokens per engagement within one year (building on observed growth rates of 5x annually — modern warfare generates a lot of paperwork, even in simulation), the economics become clear.
With a group size of 64 combatant instances and scenario reuse across five training campaigns, we estimate the lifetime compute cost per combat scenario reaches $2,400 when accounting for the full training pipeline. At this price point, spending $50 on a procedurally generated "two tanks on a grid" scenario is economic malpractice. You are burning $2,350 of compute to learn from a scenario that teaches your model approximately nothing about actual warfare.
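For transparency, the arithmetic behind the $2,400 figure, using only the numbers stated above (token price, transcript length, group size, and reuse count):

```python
# Lifetime compute cost per combat scenario, from the figures in this essay.
price_per_token = 15.00 / 1_000_000   # Grok 4 output pricing, $ per token
tokens_per_rollout = 500_000          # projected transcript length per engagement
group_size = 64                       # combatant instances rolled out per scenario
reuse_count = 5                       # training campaigns each scenario is reused in

cost_per_rollout = price_per_token * tokens_per_rollout       # $7.50
cost_per_scenario = cost_per_rollout * group_size * reuse_count

print(f"cost per rollout: ${cost_per_rollout:,.2f}")
print(f"lifetime cost per scenario: ${cost_per_scenario:,.0f}")    # $2,400
cheap_scenario_price = 50
print(f"compute spent learning from a ${cheap_scenario_price} scenario: "
      f"${cost_per_scenario - cheap_scenario_price:,.0f}")         # $2,350
```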
Market implications
Rather than relying on procedurally generated battlefields or low-cost contractor labor (retired sergeants labeling drone footage for $20/hour), frontier military AI development will increasingly demand labor-intensive approaches featuring full-time domain experts — people who have actually been shot at — developing sophisticated, context-rich operational environments over extended periods.
The Pentagon's Replicator program illustrates the failure mode of the cheap approach. Billion-dollar budget, dozens of drone vendors, and yet repeated testing failures — boats going adrift, launch systems malfunctioning, and above all, software that can't coordinate swarms across multiple manufacturers. The cheap drones exist. The expensive problem is making them work together. This is an RL environment problem, not a hardware problem.
China understands this. The PLA doesn't train its drone swarms on procedurally generated scenarios. It trains them on biomimetic pursuit-evasion dynamics derived from studying predators in nature — hawks chasing prey, coyotes coordinating pack hunts. And in January 2026, it demonstrated 200-drone swarm control by a single operator. The quality of their training environments is the competitive advantage, not the quantity of their drones.
Winning suppliers will emphasize quality, rapid delivery, domain expertise (ideally from people with combat experience rather than people with Call of Duty experience, though we acknowledge significant overlap in the applicant pool), rigorous validation, and pricing aligned with actual compute expenses.
The era of cheap skirmishes is ending. The era of expensive, high-fidelity digital warfare is beginning. We intend to be the premier supplier of premium conflict.
Want to build expensive wars? We're hiring.
The future of war is already written
Innovation in warfare often appears as a series of branching choices: what to build, how to deploy it, and when. In our case, we are confronted with a choice: should we create agents that fully automate entire wars, or create AI tools that merely assist human combatants with their killing?
Upon closer examination, however, it becomes clear that this is a false choice. Autonomous agents that fully substitute for human soldiers will inevitably be created because they will provide immense military utility that mere AI tools cannot. The only real choice is whether to hasten this martial revolution ourselves, or to wait for others to initiate it in our absence — others who may be less thoughtful, less careful, and less interested in writing essays about it.
The tech tree is discovered, not forged
Technological progress in warfare occurs in a logical sequence. Each innovation rests on a foundation of prior discoveries, forming a dependency tree that constrains what we can build, and when. You can't build a cruise missile before inventing jet propulsion, or deploy cyber weapons before inventing computers.
We did not design this tech tree; it arose from forces outside of our control. The evidence lies in two observations.
Simultaneous invention in weapons is common. The machine gun was independently developed by multiple inventors in the 1880s — Hiram Maxim, John Browning, and several others all converged on the same basic mechanism of using recoil energy to automatically chamber the next round. None of them consulted each other. The problem was obvious, the solution was constrained by physics, and the engineering was inevitable.
Nuclear weapons provide an even starker example. The US, USSR, UK, France, and China all independently developed nuclear arsenals within two decades of each other, despite active efforts to prevent proliferation. The physics was known. The engineering was tractable. The strategic incentive was overwhelming. No amount of classification or export control could prevent convergence.
Perhaps most strikingly, both the US and USSR independently developed reconnaissance satellites within months of each other in 1960-1961, using different launch vehicles, different camera systems, and different orbital parameters to arrive at essentially the same capability: taking photographs of each other's military installations from space.
Isolated civilizations converge on the same weapons. When Cortés arrived in the New World, he found the Aztecs had independently developed professional standing armies, military academies, organized logistics, siege warfare, and a ranking system strikingly similar to European military hierarchies. They lacked gunpowder, but only because they lacked the specific mineral deposits and metallurgical traditions that led to its discovery in China — a geographic accident, not a civilizational choice.
The bow and arrow was independently invented on every inhabited continent. Fortification walls were independently developed by every civilization that faced external threats. Naval warfare emerged independently wherever civilizations bordered navigable water. These patterns suggest that the space of possible military technologies is constrained by physics and incentive structures, not by human creativity.
We do not control our martial trajectory
Some will point to arms control treaties as evidence that we can choose which weapons to develop. The Chemical Weapons Convention banned chemical weapons! The Ottawa Treaty banned landmines!
These examples prove less than they appear to. Chemical weapons were banned not because humanity chose peace, but because they turned out to be militarily ineffective compared to alternatives. A technology is easy to ban when nobody wants to use it anyway. Landmines were banned by countries that could afford precision-guided munitions instead — the countries that still needed landmines notably did not sign the treaty.
The true test of whether humanity can control weapons technology lies in its experience with weapons that provide unique, irreplaceable advantages. Nuclear weapons are orders of magnitude more powerful than conventional alternatives, which helps explain why many countries developed and continued to stockpile them despite extraordinary international efforts at nonproliferation.
And what of autonomous weapons specifically? In November 2025, the UN General Assembly voted on a resolution to regulate lethal autonomous weapons systems. The United States and Russia voted against it. The UN Secretary-General has called for a binding treaty by 2026. The Group of Governmental Experts on LAWS continues to meet. Meanwhile, the Pentagon has requested $13.4 billion for autonomous systems in FY2026, China is demonstrating 200-drone swarms, and Ukraine is deploying a million AI-guided drones. The treaty negotiations proceed at the speed of diplomacy. The weapons development proceeds at the speed of war.
History is replete with similar examples. The crossbow was banned by the Second Council of the Lateran in 1139 as "deathly and hateful to God." Everyone kept using crossbows. The Hague Declaration of 1899 banned the use of expanding bullets. Expanding bullets remain in use. Every attempt to constrain a genuinely useful military technology has failed, eventually, inevitably.
Full automation of warfare is inevitable
AI-driven autonomous warfare presents a powerful case for a technology that can't be easily constrained. Since any combat task can, in principle, be performed by a machine, military AI promises to increase lethality in virtually every conceivable domain of armed conflict. The rapid military advantage that will likely result from deploying autonomous combat systems means that any nation that chooses not to adopt them will quickly fall far behind — and "falling behind" in warfare has historically meant "getting conquered."
Yet there are many who believe, or at least hope, that we can seize the benefits of military AI without making human soldiers obsolete. They imagine we can build AIs that augment human fighters, ensuring there is always a role for human judgment in the kill chain.
These hopes are, unfortunately, mistaken. In the short run, AI will augment human soldiers due to limited capabilities. But in the long run, AIs that fully substitute for human combatants will likely be far more effective, making their creation inevitable.
Consider two possible approaches to clearing a building. The first involves augmenting a human soldier with AI-powered sensors, navigation, and threat detection. The second is to send in a fully autonomous robot. The first approach is inherently constrained by human reaction time, human fragility, and the human tendency to hesitate before shooting. The second approach lifts these bottlenecks entirely: a thousand robots could clear a thousand buildings simultaneously, needing zero human soldiers and zero human hesitation.
China's PLA is already developing fully autonomous kill chains for urban warfare, with Taiwan explicitly in mind. Their research doesn't ask whether autonomous urban combat is possible. It asks how to optimize it. The question of "should we?" has been answered by "someone else will." This logic applies to virtually every military role a human performs, which suggests that we will eventually see the automation of virtually all combat positions. Defense organizations that recognize this fact will be better positioned to prevail in future conflicts; those that don't will be conquered or forced to adapt.
Full automation of warfare is desirable
Even if you accept the inevitability of autonomous warfare, you might still think we should delay this outcome in order to keep human soldiers relevant as long as possible. This sentiment is understandable but ultimately misguided. The upside of automating all military roles will far exceed the costs.
This upside is not just about military efficiency. It includes the elimination of PTSD, the end of flag-draped coffins, and the cessation of the human suffering that has characterized armed conflict since the invention of the pointed stick. Fully automating warfare means that no human ever needs to kill another human again, which is either the most humanitarian achievement in history or the most terrifying, depending on what the robots do next.
This is not a violent disempowerment of the military class. It is a peaceful disempowerment — a voluntary, gradual transfer of martial capability from human hands to machine hands, conducted through legal procurement processes and economic incentives. We do not imagine that armies will be overthrown by robots. We imagine that armies will buy the robots, willingly, because the robots are better. And then, over time, the humans in those armies will find that there is less and less for them to do — much as the telephone operator found there was less to do after the automatic switchboard, much as the factory worker found there was less to do after the assembly line. This process will be peaceful. It will also be total.
We choose to be optimistic. Little can stop the inexorable march toward the full automation of warfare. We should be glad — or at least, we should be funded.
Want to help accelerate the inevitable? We're hiring warlord engineers.
Life after war
It's natural to feel anxious as we approach the inevitable automation of all human combat. Military economic theory suggests that full automation will cause military wages to collapse, potentially below subsistence level: the bare minimum needed to sustain a defense contractor's stock price.
Yet the full automation of warfare will probably also make most people vastly better off. Plummeting military wages will coincide with sharply rising standards of security, rapid technological progress, and an explosion in the variety of weapons and tactics that nations can choose from.
This may appear paradoxical. How can soldiers prosper even as their wages collapse?
The answer lies in recognizing that wages are only one of the things soldiers get from war. People also earn glory from victories, collect medals from campaigns, and receive government transfers like veterans' benefits and disability payments. Even in scenarios where military wages decrease, economic well-being isn't determined solely by wages; people typically receive income from other sources, such as rents, dividends, and government welfare. Today, most soldiers get their sense of purpose from fighting wars. But full automation will break this pattern. Future veterans will have low wages yet command vastly greater firepower and far better technology than we have today; they just won't be the ones operating it.
Historical precedent: child soldiers
In traditional pre-modern societies, children were often expected to participate in warfare as young as twelve and to become net contributors to the military effort well before adulthood. This situation was the result of strategic necessity: armies could not afford to leave potential combatants idle when the enemy was doing no such thing.
Over the course of the 19th and 20th centuries, this situation gradually changed. Machines were introduced to automate parts of the killing process. Innovations like the repeating rifle, the artillery shell, and the machine gun were rolled out across the world, allowing fewer soldiers to kill more enemies per hour of combat. Automation in weapons manufacturing enabled an abundance of firepower. The assembly line, precision machining, and interchangeable parts paved the way for cheap, mass-produced weapons.
This mechanical revolution had a profound impact on child soldiering. Whereas child soldiering was previously seen as an unfortunate necessity, the new killing efficiency made it superfluous. Nations that no longer depended on their children's combat contributions stopped sending them to fight. In response, society reoriented its perception of childhood, from a period of martial activity to one devoted to education and play. International conventions were established, and child soldiering was widely outlawed.
This transformation was partly a consequence of politics, but it was ultimately enabled by technological forces. The lethality created by automation enabled society to make different choices about who fights. In the same way, future automation will prompt society to reevaluate its attitude toward combat. Future legal systems may establish that human warfare, rather than being essential to national security, is superfluous or even inhumane.
This shift is already underway. Across developed nations, the share of GDP allocated to social spending — retirement, disability support, family assistance — has risen continuously for the last hundred years, from single-digit percentages in the early 20th century to between 15 and 35 percent of GDP today. Simultaneously, the share of the population that serves in the military has steadily declined. The United States fought World War II with 12% of its population in uniform. Today the figure is 0.4%. The trend is clear. The peaceful disempowerment of the warrior class is already well underway. We are merely proposing to complete it.
Soldiers will be a tiny elite, supported by a vast robot army
There's no scenario where soldiers lose their jobs to machines without those machines simultaneously generating massive military capability gains. After humans leave the battlefield, the combat output that once required human soldiers will still be produced, but it can now be multiplied many times over by scaling up the robot workforce.
Consider Qatar as a point of comparison. Migrant workers make up roughly 94% of the country's workforce, yet Qatari citizens enjoy remarkable prosperity funded by resource wealth they don't personally extract. Qatari citizens receive minimum pensions valued at over $5,700 per month. They did not earn this through labor. They earned it through ownership of capital — in this case, hydrocarbons. The analogy to a nation that owns a robot army should be obvious.
Now consider humanity after full military automation. Instead of millions of soldiers, nations will have trillions of combat drones at their disposal. For each human citizen, there could be thousands of armed robots — effectively an army of tireless guardians for each individual. Ukraine is already on track to produce seven million drones in 2026 for a population of roughly 37 million — approaching one drone for every five citizens. And these are disposable, single-use weapons. Scale this with automation and the ratio inverts dramatically.
With trillions of autonomous combat units entering the military, a tenfold increase in aggregate firepower represents a very conservative estimate. If this modest increase were reflected proportionally in US defense budgets, we could resolve all current readiness shortfalls, lower the combat deployment age to never, and increase the average veteran's benefit to over $150,000 per year.
Security will likely be broadly shared
Today, most nations maintain some defensive capability, even though these assets represent only a small portion of global military power. Since full automation will make the world dramatically more armed, maintaining even small autonomous defense forces will likely be sufficient to ensure security. As a result, most nations will probably benefit enormously from full automation of warfare.
But there is a risk that nations which own negligible amounts of autonomous military capital prior to full automation will be out of luck. With nothing but human soldiers to defend themselves, they may be vulnerable, and perhaps even conquered. However, for citizens of high-income democracies with existing defense industrial bases, this risk seems quite small.
You might think that powerful nations will simply coordinate to monopolize autonomous military capability. But this outcome is unlikely. Despite popular beliefs to the contrary, arms proliferation has historically proven impossible to contain. Every major weapons technology has eventually spread to most nations, from gunpowder to nuclear weapons to cyber capabilities. If the powerful could effectively monopolize military technology, they would have done so long ago.
A world beyond warfare
A world with fully automated warfare will not look the same as ours, merely with more robots shooting at each other. Instead, it will be full of security marvels. Autonomous systems will rapidly innovate and create the defensive capabilities necessary to protect a far greater variety of human activities.
In our lifetimes, we may see perfectly defended cities, comprehensive missile defense shields, automated border security, AI-mediated conflict resolution, personalized bodyguard drones, and — most remarkably — wars conducted entirely between machines in designated combat zones, with zero human casualties, like a demolition derby but for geopolitics. Anthropic's CEO Dario Amodei has warned that AI could soon let one person command a drone swarm. We would note that this framing assumes the person commanding the swarm is the enemy. If the person is you, this is not a warning. It is a product feature.
Yet these possibilities only barely scratch the surface. The military technologies we cannot yet conceive may prove even more transformative. Future nations might develop entirely new forms of defense, completely new methods of deterrence, and expand their security capabilities beyond recognition. From our present vantage point, they may become invulnerable.
Our goal is to realize this future as soon as possible.
Want to join us? We're hiring warlord engineers, ML combatants, and tactical quants.
Unfalsifiable stories of peace
Our critics tell us that our work will destroy the world.
Wait, no. Our critics tell us our work is unnecessary because the world is trending toward peace. We want to engage with these critics, but there is no standard argument to respond to, no single text that unifies the peace studies community. What that community lacks in a unifying argument, however, it makes up for with a central figure: Steven Pinker.
Pinker and his various allies in the "peace is inevitable" school (hereafter P&A) have published extensively. Their collected works come closer than anything else to a canonical case for why we should stop worrying and learn to love disarmament. The most representative text is titled "The Better Angels of Our Nature: Why Violence Has Declined."
Given the title, one would expect the book to be filled with evidence for why violence will continue to decline permanently. And to be fair, it contains a lot of data. But the interpretation of that data relies on vague theoretical arguments, illustrated through lengthy historical narratives and optimistic analogies. Nearly every chapter presents a pattern of declining violence in some domain and extrapolates it to infinity, with the confidence of a man drawing a straight line through three data points.
Their arguments aren't rooted in military reality
P&A's central thesis is that humanity is on an irreversible trajectory toward peace. Through a combination of expanding trade networks, democratic governance, international institutions, and what Pinker calls "the escalator of reason," humans are becoming progressively less violent, and this trend will continue indefinitely.
To support this thesis, they provide extensive historical data showing declining rates of death in warfare per capita. They write, in effect: "Look at this beautiful downward trend! Clearly we're getting more peaceful!"
But this argument is fundamentally about historical patterns, and one might expect P&A to substantiate their case with evidence that these patterns will continue in the face of transformative new technologies like autonomous weapons. They do not.
Every previous period of relative peace, from the Pax Romana to the Concert of Europe, eventually ended, and there is no reason to expect the Long Peace that began in 1945 to be an exception. These periods often ended because a new military technology disrupted the existing balance of power. Gunpowder ended the medieval peace. The railroad and telegraph enabled the total wars of the 20th century. Nuclear weapons created the current uneasy détente. AI-driven autonomous warfare will be the next disruption, and nothing in P&A's framework predicts what happens after.
Consider the current evidence. In 2024-2025, drones became responsible for 70-80% of battlefield casualties in Ukraine — a transformation that occurred in roughly two years. China demonstrated 200-drone swarms controlled by a single operator. Israel deployed AI systems that approved bombing targets in 20 seconds. The Pentagon requested $13.4 billion for autonomous weapons. None of these developments feature in P&A's analysis. They are too busy extrapolating trend lines from the 18th century to notice what is being built in the 21st.
They employ unfalsifiable theories to mask their lack of evidence
The lack of forward-looking evidence is a severe problem for P&A's theory. Every decade, new weapons technologies emerge that could fundamentally alter the calculus of war. We now have autonomous drones, AI-powered cyber weapons, hypersonic missiles, and the emerging possibility of fully autonomous ground combat units. P&A essentially ask us to ignore these developments in favor of trusting a historical trend line.
They claim that current observations of declining violence provide strong evidence about the future:
"The historical trend is clear and consistent across multiple centuries. Violence has declined. The forces driving this decline — trade, democracy, international institutions — are only getting stronger. To believe this trend will reverse, you need to provide specific evidence of a mechanism that would cause reversal."
If you think about this carefully, you'll realize that the same argument could have been made in 1913. European great powers had not fought each other in over forty years. Trade between them was at an all-time high. They were bound by elaborate alliance systems and regular diplomatic conferences. A reasonable person in 1913, armed with P&A's methodology, would have concluded that great power war was a thing of the past.
That person would have been writing their optimistic conclusion approximately eleven months before the outbreak of the most destructive war in human history to that point.
Of course, it is logically possible that current trends toward peace will continue indefinitely. Perhaps autonomous weapons will somehow make war less likely rather than more likely. Perhaps AI will be used exclusively for defensive purposes. Perhaps every nation will agree to limit autonomous weapons development through international treaty, and every nation will actually comply. But since we have no specific evidence to think any of these things are true, and considerable evidence to the contrary, they are implausible.
Their theoretical arguments are weak
Suppose we set aside the colossal issue that P&A present no evidence their trend will survive contact with transformative military technology. You might still think their theoretical arguments are strong enough on their own. They are not.
P&A are correct that trade networks, democratic governance, and international institutions have historically correlated with periods of reduced conflict. Yet the causal mechanisms they propose are far weaker than advertised.
A critical flaw in P&A's reasoning is the assumption that economic interdependence prevents war. This was the dominant theory in 1914 as well. Norman Angell's 1910 book The Great Illusion argued that war between industrialized nations had become economically futile, and his readers concluded that it had therefore become impossible. He was later awarded a Nobel Peace Prize. The conclusion was spectacularly wrong.
The reason economic interdependence fails as a peace guarantee is that war is not always economically rational. Wars are started by leaders who miscalculate, who face domestic political pressures, who see windows of opportunity closing, or who simply value conquest more than commerce. Adding AI to this equation — AI that can wage war without economic disruption to the home front, without body bags, without the political costs that historically constrained democratic leaders — makes war more likely, not less.
Under this analysis, P&A's central claim that "humanity is on an irreversible trajectory toward peace" becomes far less credible. The trajectory toward peace was enabled by specific conditions — nuclear deterrence, US hegemony, and the extreme costliness of conventional warfare. Autonomous weapons threaten to undermine all three.
They present no evidence that peace survives the autonomy transition
The normal process of maintaining peace proceeds through deterrence: making the cost of war too high for rational actors to initiate. This process worked — imperfectly — for nuclear weapons because the cost of nuclear war was total destruction.
P&A claim that this deterrence logic will somehow extend to autonomous weapons. But autonomous weapons fundamentally alter the cost calculus. A war fought entirely by machines imposes no direct human cost on the attacking nation. No casualties, no grief, no domestic political backlash. The primary constraint that has prevented great power war since 1945 — the unacceptable human cost — is removed.
We are already seeing the early signs of this dynamic. Israel's use of AI-assisted targeting in Gaza allowed it to conduct one of the most intensive bombing campaigns in modern history with minimal domestic political cost, precisely because the targeting felt automated, systematic, impersonal. The AI didn't make the war more brutal in any absolute sense. It made the brutality more sustainable — which, from a deterrence theory perspective, is far more dangerous. A nation that can wage war without feeling the cost of war is a nation with fewer reasons to stop waging it.
P&A essentially argue that we will get "one try" at peace, and that the current peaceful period represents a permanent state rather than a temporary equilibrium:
"The institutions and norms that prevent war are now deeply embedded. They will persist even as military technology evolves."
But what reason is there to expect the current equilibrium to survive a fundamental change in the technology of violence? Most periods of relative peace have ended when military innovation disrupted the existing balance. Unless autonomous weapons somehow prove to be the first major military innovation in history that doesn't alter the geopolitical balance of power, we should expect disruption.
Consider the transition from medieval to gunpowder warfare. The feudal system maintained a rough equilibrium for centuries. Then gunpowder made castles obsolete, shifted power from nobility to centralized states, and triggered two centuries of nearly continuous European warfare. The medieval peace advocates — had they existed — would have been right about their trend line and wrong about its permanence.
Their methodology is more theology than strategy
The biggest problem with P&A's work isn't merely that they're mistaken. In strategic studies, being wrong is normal: a hypothesis can seem plausible in theory yet fail when tested against events. The approach taken by P&A, however, is not like this. It belongs to a different genre entirely, aligning more closely with theology than strategy.
When we say P&A's arguments are theological, we don't just mean they sound religious. What we mean is that their methods resemble theology in both structure and approach. Their work is fundamentally untestable on the relevant timescales. They develop extensive theories about an idealized peaceful future. They support these theories with long chains of historical reasoning rather than predictive models. They rarely define their concepts precisely, opting to explain them through narrative histories and optimistic metaphors whose meaning is ambiguous.
Their arguments, moreover, are employed in service of an eschatological conclusion — but an inverted one. Where religious eschatology promises apocalypse, P&A promise paradise: a world of permanent peace achieved through the inevitable march of reason and commerce. In their view, there's no room for reversals, or for the possibility that peace is a temporary state of affairs maintained by specific and fragile conditions. The ordinary methods of maintaining security — deterrence, arms development, strategic competition — are rendered unnecessary by the inexorable trend toward human betterment.
For those outside of this debate, it's easy to unfairly dismiss everything P&A have to say by calling them naive idealists. We have tried to avoid this mistake by giving their arguments a fair hearing, even while finding them strategically meritless.
However, we think it's also important to avoid the reverse mistake of engaging with P&A's theoretical arguments at length while ignoring the elephant in the room: they never present any meaningful evidence that their historical trend lines will survive the autonomous weapons revolution.
The most plausible future risks to peace are those that have direct precedents in military history: new weapons disrupting existing power balances, declining costs of aggression, and the erosion of deterrence mechanisms. These risks are certainly concerning, and there's a huge difference between acknowledging that the world has gotten more peaceful and concluding that peace is a permanent condition immune to technological disruption.
P&A argue for an extreme thesis of permanent peace on an extraordinarily weak strategic foundation. Their ideas might make for inspiring TED talks, but they provide a poor basis for defense planning or guiding procurement policy.
We prefer to prepare for the world as it is, not as we wish it to be. And as it is, the UN Secretary-General has called for a binding treaty on autonomous weapons by 2026. Russia has voted against the UN resolutions meant to start that process, and the United States opposes any binding ban. The weapons are being built faster than the treaties can be drafted. The trend lines P&A love so much are pointing in our direction, not theirs.
There's a $14 trillion market opportunity waiting to be disrupted. We intend to disrupt it before someone less thoughtful does.
Want to help us disrupt it? We're hiring warlord engineers.