The Grove Foundation

Govern the Substrate.
Commoditize the Compute.

The Grove Foundation publishes the open standards that make AI governance an architectural property — not a vendor promise.

Apex inference is critical infrastructure receiving roughly $650 billion in capital commitments. The architectural layer that determines whether that investment compounds at sovereign nodes or evaporates as extractive dependency is receiving effectively no comparable attention.

When APIs deprecate on 90 days’ notice, the workflows built on them deprecate too. Architecture determines whether that risk lands at the operator’s substrate or in the operator’s institutional knowledge. The Grove Foundation publishes the open standards that make the architectural layer legible — so capital, institutions, and engineers can build against it.

CC BY 4.0 · Sovereignty Is All You Need →

The Global Reality

Six Nations Hedge.
One Concentrates.

France, Germany, Japan, Canada, the UK, and Italy are investing in domestic infrastructure and open-weight model capacity alongside their relationships with frontier vendors. The strategic posture is hedged: maintain access to apex compute while building sovereign architectural capacity in parallel.

The United States is the only G7 nation consolidating its national AI strategy around four firms. The apex investment is appropriate; the absence of a parallel architectural-layer investment is the structural anomaly.

Concentrated architectures introduce single points of failure, latency at scale, and un-auditable outputs — well-understood tradeoffs that any serious deployment has to address. The question is whether the consumption layer addresses them or inherits them.

Read the full thesis: The Architectural Gap →

The Telemetry Trap

Default Consumption Patterns
Extract Judgment.

In a default centralized AI deployment, the telemetry the system generates — what your industry asks, where your workflows fail, what your most experienced people are trying to figure out — is the input to the model provider’s next training cycle. Grove names this structural condition judgment extraction: the cognitive patterns operators bring to evaluating model outputs flow back to the model layer, where they become inputs to the next product version.

Enterprise contracts protect content. They do not protect the aggregate behavioral signal that becomes the next product. The asymmetry is structural, not malicious — it is what default consumption patterns produce.

In a sovereign architecture, the substrate the system accumulates — context, telemetry, approved skills, configuration — is owned by the operator regardless of where compute happens. Queries compound at the operator’s substrate, not the vendor’s. The system gets smarter, a human sets the thresholds, and the circuit breaker is structural rather than negotiated.
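
A minimal sketch of what "a human sets the thresholds, and the circuit breaker is structural" can mean in practice. The threshold value, field names, and routing labels below are illustrative assumptions, not part of any Grove specification:

```python
# Illustrative structural circuit breaker: a human-set confidence
# threshold decides whether a query stays on operator-owned substrate
# or escalates to an external model. The 0.80 value is an assumption.

OPERATOR_THRESHOLD = 0.80  # set by a human, stored in operator-owned config

def route(query_confidence: float) -> str:
    """Stay local when accumulated substrate handles the query well enough."""
    if query_confidence >= OPERATOR_THRESHOLD:
        return "local_substrate"   # the query compounds at the operator's node
    return "external_model"        # escalation is explicit, not the default
```

The point of the sketch is the ownership of the threshold: it lives in the operator's configuration, so changing it never requires renegotiating a vendor contract.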

CIO Alert: The Telemetry Trap →

State of the Architecture

The Subsidy Illusion.

Every major analyst framework measures how many people are using an AI platform today. The Grove Foundation measures whether they’d keep using it if nobody subsidized it.

We track the live standings of the industry’s architectural patterns. We don’t care about the hype; we care about the math. Which of these patterns actually has the structural viability to survive on its own?

#   Pattern              Tier                  Λ        Trend
1   Mistral / DeepSeek   OPEN WEIGHT (INT’L)   0.0314   Approaching Critical
2   Apple Intelligence   ON-DEVICE             0.0090   Sub-Critical
3   Anthropic Claude     CENTRALIZED API       0.0058   Sub-Critical
4   Meta Llama           OPEN WEIGHT (US)      0.0031   Structurally Inert
5   OpenAI GPT           CENTRALIZED API       0.0014   Structurally Inert
6   Google Gemini        PLATFORM BUNDLE       0.0011   Structurally Inert
7   Microsoft Copilot    PLATFORM BUNDLE       0.0001   Structurally Inert
8   Autonomaton          SOVEREIGN OPEN        0.0001   Structurally Inert

Scores move with deployment evidence. Grove publishes the Λ methodology and applies it across the AI architecture landscape — including against the Autonomaton pattern itself. Public Standings refresh each quarter; members receive real-time market signal.
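
The trend labels in the standings can be read as cutoffs on Λ. A minimal sketch of that mapping, with illustrative threshold values chosen only to match the published rows (the canonical cutoffs live in the Λ methodology):

```python
# Illustrative mapping from a Λ score to a trend tier. The cutoff
# values 0.0300 and 0.0050 are assumptions consistent with the
# standings shown above, not the Foundation's published thresholds.

LAMBDA_TIERS = [
    (0.0300, "Approaching Critical"),
    (0.0050, "Sub-Critical"),
]

def trend_tier(lam: float) -> str:
    """Return the trend label for a Λ score (illustrative cutoffs)."""
    for cutoff, label in LAMBDA_TIERS:
        if lam >= cutoff:
            return label
    return "Structurally Inert"
```
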

Click any row to see sub-scores and structural analysis.

Explore the full Λ Standings →

Your Ratchet Direction

We Measure the Industry.
Now Measure Yours.

The Λ standings track which AI architectural patterns have structural viability at industry scale. The Ratchet Test measures whether your AI deployment compounds in your favor or your provider’s.

Nine questions. Four minutes. The score is structural, not self-reported — each answer maps to a single architectural fact about your deployment. The result is a Ratchet Direction Index and a board-ready assessment a CIO can present as-is.
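
A sketch of what "each answer maps to a single architectural fact" can look like as a scoring function. The nine question names and the equal weighting below are hypothetical, not the Foundation's actual instrument:

```python
# Illustrative Ratchet Direction Index: nine boolean architectural
# facts, scored as the fraction that ratchet toward the operator.
# Question names are hypothetical placeholders.

RATCHET_QUESTIONS = [
    "telemetry_stored_at_operator_node",
    "routing_table_owned_by_operator",
    "skills_require_human_approval",
    "context_portable_across_vendors",
    "approval_records_retained_locally",
    "inference_tier_operator_configurable",
    "contracts_cover_behavioral_signal",
    "open_weight_fallback_deployed",
    "exit_tested_in_last_quarter",
]

def ratchet_direction_index(answers):
    """Fraction of the nine architectural facts that favor the operator."""
    return sum(bool(answers[q]) for q in RATCHET_QUESTIONS) / len(RATCHET_QUESTIONS)
```

Because each input is a verifiable property of the deployment rather than an opinion, two assessors auditing the same architecture should produce the same index.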

Take the Ratchet Test →
Production Reality

Autonomatonic Loops Run Everywhere.
The Polarity Doesn’t.

The architectural pattern this site describes is not theoretical. The Autonomaton — pronounced auto-NAHM-uh-tawn · /ɔːˈtɒnəmətɒn/ — names a loop that already runs in production AI systems shipping today: telemetry capture, intent recognition, tier-based routing, human-approved skill compilation, and deterministic execution. You see Autonomatonic loops in Claude Code, Claude Cowork, Cursor, and most serious agentic AI shipped in 2026. The pattern works. It compounds. It reduces inference cost as it accumulates validated patterns. It does, in those systems, what the Autonomaton specification describes.
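
The five stages named above can be sketched as a single pass through the loop. Every helper below is a placeholder stand-in for illustration, not the GRV-001 reference implementation; intent labels, tier names, and the skill format are assumptions:

```python
# Illustrative single pass through an Autonomatonic loop:
# telemetry capture -> intent recognition -> tier-based routing ->
# human-approved skill compilation -> deterministic execution.

def capture_telemetry(event):
    # 1. Record what was asked and where it came from.
    return {"query": event["query"], "source": event.get("source", "local")}

def recognize_intent(telemetry):
    # 2. Classify the request (a real system would use a model here).
    return "summarize" if "summarize" in telemetry["query"] else "general"

def route_by_tier(intent, routing_table):
    # 3. Known intents route via the table; unknown ones escalate.
    return routing_table.get(intent, "frontier_api")

def run_loop(event, routing_table, approve):
    telemetry = capture_telemetry(event)
    intent = recognize_intent(telemetry)
    tier = route_by_tier(intent, routing_table)
    skill = f"{intent}@{tier}"             # 4. candidate compiled skill
    if approve(skill):                     #    human approval gate
        return skill                       # 5. deterministic execution
    return None
```
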

There is one structural difference between those implementations and what GRV-001 specifies. In vendor implementations, the loop accumulates at the vendor’s node. The routing table, the validated patterns, the institutional knowledge of how your work actually gets done — all of it lives inside the vendor’s infrastructure. This is not a critique. Software companies instinctively build dependency ratchets because that is how their business compounds. The pattern is structural, not malicious.

The Autonomaton Pattern reverses the polarity. The same loop runs, but the substrate accumulates at the operator’s node. Routing tables, validated skills, telemetry, approval records — all owned by the institution that generated them. Think of the Autonomaton as smart circuit breakers in the hands of the user: they rebuild routing tables from established patterns and ratchet knowledge downward toward cheaper, more sovereign tiers — so human attention stays free to rise toward the work that still requires judgment.
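
The downward ratchet described above can be sketched as a one-way move through a tier ladder: once a pattern is validated with a human approval on record, its route steps to a cheaper, more sovereign tier and never silently moves back up. The tier names and ordering here are illustrative assumptions:

```python
# Illustrative downward knowledge ratchet. Tiers are ordered from most
# expensive/least sovereign to cheapest/most sovereign; names are
# placeholders, not a Grove-specified taxonomy.

TIERS = ["frontier_api", "hosted_open_weight", "local_model", "compiled_skill"]

def ratchet_down(routing_table, intent, approved):
    """Move a validated intent one tier cheaper; the move is one-way."""
    if not approved:
        return routing_table               # no approval, no movement
    current = routing_table.get(intent, "frontier_api")
    idx = TIERS.index(current)
    if idx < len(TIERS) - 1:
        routing_table[intent] = TIERS[idx + 1]
    return routing_table
```

Because the table itself lives at the operator's node, the accumulated routes survive a change of model vendor.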

Grove makes no claim that one polarity is better than the other. The vendor-side ratchet works for vendors. The operator-side ratchet works for institutions that need to accumulate cognitive capital inside themselves. Both are legitimate engineering choices. The architectural question is which direction your specific deployment ratchets toward — and whether you chose it deliberately.

Durable institutions cannot afford to build cognition on substrate they do not own or govern.

The Autonomaton Pattern exists as a freely referenceable architecture anyone can adopt — model-agnostic, domain-invariant, published under CC BY 4.0. Read the specification →

The Solution

TCP/IP Wasn’t Built
by AT&T.

The standardized shipping containers that dominate global trade weren’t designed by a shipping monopoly. We can’t expect the vendors panning for gold in the AI revolution to suddenly start building public infrastructure. The companies extracting rent from a closed system will never build the architecture that disrupts it.

Enter the Autonomaton Pattern (Open Standard 001). This is not a startup. This is not a product. It is a complete architectural specification for self-authoring software systems.

The cognitive frontier compounds at sovereign substrate. The Grove Foundation publishes the architecture, open and inspectable.

Explore the Autonomaton Specification →
Canonical Vocabulary

Naming the Conditions.

Grove names four structural conditions that recur across AI deployments. Naming a condition is the first step to measuring it. These terms appear throughout Grove standards, alerts, and the Λ Standings methodology. Canonical definitions live in GRV-001 §VIII Terms of Art.

Cognitive platforming. The architectural drift that concentrates judgment, telemetry, and decision-context at the platform tier rather than at the operator’s node. The consumption-layer analog of platform-side data lock-in.

Judgment extraction. The flow of operator decision patterns, approval cadences, and discrimination criteria from the consumption layer back to the model provider, where they become inputs to the next training cycle.

Lien on thinking. The accumulating dependency that results when an operator’s reasoning patterns are routed through a platform that retains them. Each interaction expands the lien; switching providers does not discharge it.

Cultivation architecture. The architectural posture in which structural commitments create the conditions for emergent properties such as composability and federation, rather than engineering those properties directly.

Go deeper