Sovereignty Is All You Need
A response to Ramaswamy and Perault, Wall Street Journal (April 17, 2026)
There is a single architectural primitive missing from how the industry is building AI, and its absence will determine the outcome more than any capability race between nations or firms. Sovereignty at the node. The rest is machinery.
Apex inference is critical infrastructure. The Ratchet depends on a frontier — capability has to come from somewhere, and the firms building it are doing necessary work. The question is what happens at the consumption layer, where the substrate of human judgment either compounds at the node or leaks outward by default.
Which is why the Wall Street Journal op-ed this week from two Andreessen Horowitz partners matters. Jai Ramaswamy and Matt Perault argue America should beat China in AI by releasing capable, American-friendly open-weight models, and releasing them fast.
They are right about the threat. Qwen’s model family crossed 700 million Hugging Face downloads in January. DeepSeek runs inside American labs, classrooms, and enterprise pilots at a scale that should alarm anyone paying attention.
They are right about half the cure. The other half — the architectural layer that determines whether open weights compound American capability or leak it — is missing from the op-ed entirely. Both halves are necessary. Ship one without the other and the geopolitical ratchet runs in the wrong direction.
For decades, the software industry has been rewarded for building what engineers quietly call dependency ratchets — design choices that raise switching costs over time. It is a fair trade in most domains. You can move your photos from Apple to Google; you can move your cloud workloads from AWS to Oracle. It hurts, but the substrate is recoverable. Your photos are still your photos.
Platforming cognition is a different animal. The ratchets we built instinctively — to create sticky products, defensible moats, the predictable revenue lines our investors reward — do something categorically different when the substrate is human judgment. What you give away does not come back.
The cleanest way to state the institutional problem: organizations whose mission depends on accumulated judgment cannot afford to build cognition on substrate they do not own or govern. That sentence is not ideology. It is operational risk.
Every default AI pipeline today routes telemetry, reasoning traces, approval patterns, and user judgment outward to a vendor. That vendor recycles the stream into the next model, the next moat, the next lock-in. The Grove Foundation has documented this as the telemetry trap: an open-weight model consumed through a centralized API produces the same dependency ratchet as a closed one. Open source solves half the problem. The consumption layer reproduces the other half automatically.
Human judgment is the seed corn. When a centralized pipeline harvests the patterns of discrimination, refinement, and approval that would otherwise accumulate as your own expertise, it is not a feature. It is extraction dressed up as productivity. You are paying a recurring fee to fund the automation of your own thinking. Every major AI consumer platform is structurally designed to do this. It is not a bug in any one vendor. It is the business model.
MIT’s 2025 research found that 95% of enterprise generative AI pilots produced zero measurable ROI. That is not a coincidence. It is what happens when the consumption pattern treats user judgment as exhaust instead of capital.
GRV-003, the Learner Autonomaton standard published last week by The Grove Foundation, supplies the missing layer. It mandates a non-bypassable five-stage pipeline — Telemetry, Recognition, Compilation, Approval, Execution — backed by three user-owned files and zone-based consent rules. Validated patterns ratchet inward: they demote to cheaper, more sovereign, more local compute tiers as mastery grows. The provenance arcs produced along the way turn how competence formed into inspectable, portable cognitive capital. The standard is published under CC BY 4.0, and it is model-agnostic and domain-invariant. The design pattern has been pressure-tested in compliance-heavy regulated industries before broader application, and it imposes zero friction on the strategic open-weight push or on apex frontier model providers. It simply ensures the weights serve American minds and institutions rather than subsuming them.
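To make the gating concrete, here is a minimal sketch of the five-stage flow in Python. It is illustrative only: the class names, zone labels, and co-sign interface are assumptions of this sketch, not the GRV-003 specification. What it does show accurately is the structural claim in the standard as described above: nothing reaches execution without passing an explicit approval stage, and every stage leaves an inspectable provenance record.

```python
from dataclasses import dataclass


@dataclass
class Pattern:
    name: str
    zone: str            # hypothetical consent-zone label, e.g. "local-only"
    approved: bool = False


class LearnerPipeline:
    """Illustrative Telemetry -> Recognition -> Compilation -> Approval ->
    Execution gate. Approval is the non-bypassable stage: only patterns the
    user explicitly co-signs are eligible to run."""

    def __init__(self):
        self.telemetry = []      # raw usage events, retained by the user
        self.provenance = []     # audit trail (stand-in for "provenance arcs")

    def observe(self, event):
        # 1. Telemetry: record judgment events locally instead of shipping them out.
        self.telemetry.append(event)
        self.provenance.append(("telemetry", event))

    def recognize(self):
        # 2. Recognition: stub recognizer; each distinct event becomes a candidate.
        patterns = [Pattern(name=e, zone="local-only")
                    for e in sorted(set(self.telemetry))]
        self.provenance.append(("recognition", len(patterns)))
        return patterns

    def compile(self, patterns):
        # 3. Compilation: package candidates into an executable form.
        compiled = {p.name: p for p in patterns}
        self.provenance.append(("compilation", len(compiled)))
        return compiled

    def approve(self, compiled, cosign):
        # 4. Approval: the non-bypassable gate; `cosign` represents the user.
        for p in compiled.values():
            p.approved = cosign(p)
        self.provenance.append(
            ("approval", sum(p.approved for p in compiled.values())))
        return compiled

    def execute(self, compiled):
        # 5. Execution: only approved patterns run.
        ran = [name for name, p in compiled.items() if p.approved]
        self.provenance.append(("execution", len(ran)))
        return ran
```

A usage pass through the sketch: observe a few events, recognize and compile them, co-sign one, and only that one executes, with the full stage sequence preserved in `provenance`.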
The commercial traction is already visible where it matters most — and it is not on the coasts. In central Indiana, home to Eli Lilly, Anthem, Corteva, Cummins, Old National Bank, and precision manufacturing at scale, the architecture maps directly onto serious regulatory, audit, and liability requirements. These industries cannot tolerate opaque automation or untraceable decision chains. They have been pricing the cost of unauthorized behavior for a century — in FDA penalties, loan defaults, failed harvests, product recalls. They have pre-existing intuitions about sovereignty, audit trails, and circuit breakers that coastal firms are only beginning to develop vocabulary for. Architecture-first AI is a Midwestern instinct, sharpened by industries that have always required it.
Meanwhile, the externalities of the centralized hyperscale model are starting to bind. Roughly half of the US data centers planned for 2026 have been delayed or canceled — not for lack of capital, but because the grid cannot carry the load. Financing structures are tightening. States are writing laws that let them cut data centers off the grid in emergencies. None of this proves the apex tier is failing. It proves the apex tier was never as self-contained as its economics implied — and that institutions building cognition on top of it need an architectural layer that can survive the volatility. On the morning this Alert was published, the White House invoked Section 303 of the Defense Production Act on grid infrastructure and supply chain capacity, finding that America’s “aging and constrained electric grid infrastructure poses an increasing threat to national defense.” Truman-era industrial mobilization authority has now been deployed against the same bottleneck the centralized AI buildout has spent two years failing to clear with private capital. That is the externality binding in real time.
The piece in the WSJ missed a substantial venture opportunity. Building the reverse-ratchet software layer — Autonomaton pipelines, routing engines, zone enforcers, provenance tooling, co-sign protocols — is a classic infrastructure play. It is productizable, monetizable, and defensible. Regulated industries, universities, defense contractors, and lifelong-learning platforms all need it. The capital currently chasing hyperscale inference farms has a higher-ROI alternative: distributed sovereignty infrastructure that turns the dependency dynamic 180 degrees. The market signal is not subtle — 95% of the $30–40 billion in annual enterprise AI spend is failing to find returns in the current consumption pattern. That is not a saturated market. It is a stranded one, waiting for architecture that makes the capability land. For a venture firm to write this op-ed without naming the opportunity is, at minimum, a curious omission.
That capital is already moving. New funds are forming in Indianapolis to back sovereignty-layer infrastructure and the regulated-industry applications built on top of it. The geography is not incidental: the same industrial base that makes Indiana the right proof ground makes it the right capital base. The center of gravity for the reverse ratchet is not on the coasts.
A serious national AI strategy is a two-tier strategy. The apex tier — frontier models, the firms financing them, the compute and weights that anchor capability — is genuine critical infrastructure and deserves to be treated as such. The sovereignty tier is what ensures the apex serves American institutions rather than the inverse. Ship the open weights aggressively. At the same time, fund the builders who productize the inward ratchet. Neither tier works without the other. Open weights supply raw capability. The GRV-003 Autonomaton ensures that capability compounds here — in American workers, in American institutions — rather than leaking abroad or into vendor moats that feed on the seed corn of the country that grew it.
Open weights are necessary. Sovereignty primitives are what make them sufficient. Ignore the reverse ratchet and we win the model-weight war while quietly losing the cognition one.
The Grove Foundation’s architecture proposals don’t replace regulatory frameworks. They give every player in the ecosystem a way to satisfy those frameworks more rigorously, at decreasing cost, and with compounding benefit.
Jim Calhoun is the founder of The Grove Foundation, an Indianapolis-based open standards body for AI governance architecture.
The centralized-AI bet is under measurable structural stress across four dimensions. Grove tracks these as the dimensions of the Λ (Lambda) framework — the quantitative scoring rubric by which we assess whether an AI deployment pattern has enough structural merit to survive without subsidy. The evidence below is a dispatch snapshot drawn from the ninety days preceding publication. Read together, these signals describe a model that is financially circular, infrastructurally stranded, empirically underperforming at the enterprise level, and increasingly constrained by the political environment. In Λ terms: a landscape approaching Sub-Critical across multiple axes simultaneously — a condition Grove’s Q1 2026 Standings already formalized before the recent stress signals accelerated.
1. Circular financing and valuation fragility
- Bloomberg’s January 2026 mapping documented the AI sector’s financing as a dense web of circular commitments — Nvidia invests in OpenAI, which pays Oracle, which buys Nvidia chips — drawing explicit parallels to the 1990s fiber-optic vendor-financing structures that preceded the telecom collapse.
- In early February 2026, reporting surfaced that Nvidia’s planned $100 billion investment in OpenAI had stalled over concerns about OpenAI’s financial discipline. The public reassurances from Oracle and OpenAI that followed read, in the tradition of crisis communications, as confirmation rather than refutation of the underlying fragility.
- Oracle’s approximately 6x debt-to-equity ratio underwrites revenue commitments dependent on OpenAI meeting demand forecasts OpenAI itself has not been able to produce.
- OpenAI’s reported ~$5B annual revenue against ~$8.5B in annualized losses and more than $1T in long-term cloud and hardware commitments creates a compounding gap between revenue generation and contractual obligation.
2. Infrastructure reality collision
- On April 20, 2026, the White House invoked Section 303 of the Defense Production Act of 1950 on grid infrastructure, equipment, and supply chain capacity, building on the January 2025 National Energy Emergency declaration (EO 14156). The determination cites transformers, high-voltage transmission components, advanced conductors, and power electronics as critical-defense supply chain items, waives standard procedural requirements under Section 303(a)(1)-(a)(6), and authorizes federal “purchases, commitments, and financial instruments” to expand domestic capacity. The federal government has now formally classified the grid-side bottleneck for hyperscale compute as a national defense matter requiring war-powers-era industrial mobilization authority.
- Approximately 7 GW of the 12 GW of US data-center capacity planned for 2026 has been delayed or canceled; only about 5 GW is under active construction. The primary constraint is physical infrastructure — transformers, switchgear, interconnection queues — not capital.
- Stargate’s flagship Abilene campus curtailed its 600 MW expansion; Nvidia is reportedly brokering Meta as a replacement tenant. Stargate UK paused months after announcement, citing energy costs. The $500B Stargate Texas project has shown no significant physical progress as of April 2026.
- Transformer and high-voltage switchgear lead times have stretched to as long as five years, against an industry deployment cadence under 18 months.
- Residential electricity rates near data-center clusters in Virginia, Texas, and Georgia have risen 8–15%. Texas has enacted legislation authorizing grid operators to disconnect data centers during emergencies.
- Only 12% of US data-center capacity planned for the 2028–2032 window has broken ground, suggesting that long-horizon deployment projections are meaningfully divorced from what grid and supply-chain reality will actually permit.
3. The enterprise demand question
- MIT NANDA’s State of AI in Business 2025 (The GenAI Divide) found that 95% of enterprise generative AI pilots produced zero measurable ROI. The finding is drawn from 300+ deployment reviews, 52 executive interviews, and 153 senior-leader survey responses.
- The ~$30–40 billion in annual enterprise GenAI spend is largely going to tools that improve individual productivity without touching P&L performance, with failures attributed to brittle workflows, weak contextual learning, and misalignment with day-to-day operations.
- Internal enterprise AI builds succeed roughly half as often as vendor partnerships (33% vs. 67%) — a signal that the current consumption pattern cannot be rescued by in-house engineering alone.
- Methodology critics have challenged the sample construction and pilot-design assumptions behind the 95% figure. The critique sharpens rather than refutes the underlying pattern: most enterprises are not extracting measurable value from the current consumption architecture, regardless of the precise percentage.
4. Regulatory and sovereignty pressure
- Texas law now authorizes grid operators to disconnect data centers during emergencies — explicit recognition that data-center load and public-grid stability can no longer be left to private coordination.
- FERC’s April 2026 large-load interconnection rule deadline introduces federal-level scrutiny of hyperscale grid connections.
- The EU Energy Efficiency Directive imposes detailed energy-performance reporting on data centers above 500 kW; Ireland has functionally capped new grid connections in the Dublin region.
- Community opposition to data-center siting is intensifying across key US regions, driven by residential rate increases and water-use disclosures — a political environment that rewards architectures which reduce rather than amplify centralized load.
How this Ledger fits into Grove’s standing work
The four clusters above are not an ad hoc grouping. They map to the structural dimensions the Λ framework scores quarterly across the AI deployment landscape: financing durability, infrastructure viability, demand validation, and regulatory envelope. Λ assigns each pattern a score against these dimensions and publishes the result as the Grove Standings — a public rubric that any CIO, CFO, or investment committee can use to pressure-test an AI architectural choice before signing a contract. The Standings update quarterly. The Q2 2026 recalibration will absorb the evidence in this Ledger directly.
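The quarterly scoring described above can be sketched as a small rubric. To be clear about what is assumed: the published Λ methodology's weights, scales, and thresholds are not reproduced in this Ledger, so the equal weighting, the 0-to-1 scale, the 0.4 Sub-Critical cutoff, and the example scores below are all hypothetical placeholders. The sketch shows only the shape of the exercise: score a deployment pattern on the four named dimensions, compute a composite, and flag any axis that falls Sub-Critical.

```python
from dataclasses import dataclass

# The four structural dimensions named in this Ledger. Everything numeric
# below (weights, scale, threshold, example scores) is an illustrative
# assumption, not the published Lambda rubric.
DIMENSIONS = (
    "financing_durability",
    "infrastructure_viability",
    "demand_validation",
    "regulatory_envelope",
)

SUB_CRITICAL = 0.4  # hypothetical threshold on a 0-1 scale


@dataclass
class LambdaScore:
    scores: dict  # dimension -> score in [0, 1]

    def composite(self):
        # Equal weighting is a simplifying assumption; a real rubric
        # could weight dimensions differently.
        return sum(self.scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

    def sub_critical_axes(self):
        # Axes scoring below the threshold, per the "Sub-Critical across
        # multiple axes" framing used in this Ledger.
        return [d for d in DIMENSIONS if self.scores[d] < SUB_CRITICAL]


# Placeholder scores keyed to the four evidence clusters above.
centralized = LambdaScore({
    "financing_durability": 0.25,      # circular commitments, valuation fragility
    "infrastructure_viability": 0.30,  # grid and supply-chain constraints
    "demand_validation": 0.20,         # 95% of pilots show no measurable ROI
    "regulatory_envelope": 0.35,       # disconnection laws, FERC scrutiny
})
```

With these placeholder inputs, every axis lands below the threshold, which is the shape of the "Sub-Critical across multiple axes" condition; the point of the rubric is that a CIO or investment committee can rerun the same arithmetic with their own scores before signing a contract.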
This is a new kind of computer science, and we are building it in the open
Grove publishes these standards and frameworks under CC BY 4.0 because architectural governance of cognition cannot be captured by any one vendor, firm, or foundation. It is genuinely new territory — a discipline that sits between classical systems engineering, governance theory, and the practical craft of building institutions that do not surrender their own judgment to the model layer. No one has the complete map. We are drawing it.
If the argument in this Alert resonates — whether you are a CIO pressure-testing a vendor contract, a researcher working on the architectural questions, a builder implementing the pattern, a policy analyst shaping the regulatory envelope, or a foundation officer evaluating the governance landscape — Grove is actively building a membership of practitioners and institutions shaping this field. Members contribute to the standards, pressure-test the frameworks, propose extensions, and shape what gets published next.
→ Read the Λ Standings: the-grove.ai/lambda
→ Read the methodology: The AI Deployment Pattern Benchmark
→ Get involved: the-grove.ai/membership
Subscribe below for future CIO Alerts and the Q2 Λ recalibration when it ships. Grove publishes infrequently, and only when the architecture of the landscape has shifted enough to matter.
- GRV-003: The Learner Autonomaton — the architectural standard referenced in this piece
- GRV-001: The Autonomaton Pattern — the base pattern underlying GRV-003
- The Telemetry Trap — Grove’s primer on the extraction dynamic
- Architecture and Accountability — how the same sovereignty primitives map onto SR 11-7, FFIEC, and OCC compliance demands
- Ramaswamy and Perault, “To Beat China, Embrace Open-Source AI” — the piece this responds to
- Thorbecke, “Why China Can’t Quit ‘Open’ AI” — Bloomberg Opinion, cross-spectrum consensus on concentration risk
- MIT Technology Review: What’s next for Chinese open-source AI — Qwen/DeepSeek adoption data
- MIT NANDA: State of AI in Business 2025 (The GenAI Divide) — enterprise AI ROI research cited in this piece