■   Stealth mode  —  Selective conversations welcome   ■

Inference Neocloud
Stranded Power × Open Weight AI
Est. 2026
Stealth

CATHEDRAL

Stranded power.
Open weight models.
The infrastructure for
what comes next.

$255B Inference market
by 2030
10× Token volume
growth projected
~$0 Effective
fuel cost
Building  —  Conversations by introduction only
Scroll to explore ↓
Stranded power Inference compute Open weight models Token distribution at scale Agentic AI Exponential inference demand No grid required Deploy anywhere on earth Agents calling agents No ceiling on inference volume DeepSeek · Llama · GLM · Qwen Open, deployable today Carbon abatement Tokenised CRTs on-chain Weeks not years Modular deployment
001 — Investment Thesis

ENERGY
WITH
NO
BUYER.
INTELLIGENCE
WITH NO
CEILING.

The world generates vast quantities of electricity that never find a market. Overbuilt hydroelectric systems. Curtailed solar and wind. Stranded gas at remote production sites. Industrial generators running idle. Electrons produced, unsold, wasted — because the cost of transmitting them to demand exceeds their value at the destination.

Simultaneously, the global economy is building an infrastructure of autonomous AI agents that will generate inference demand with no historical precedent. Agents that plan, act, and call other agents. Agent hierarchies decomposing tasks across thousands of model queries. A single workflow triggering hundreds of inference requests, none of them requiring human-speed response times — all of them requiring cheap, reliable compute at scale.

Cathedral is the infrastructure that connects these two realities. We deploy modular, containerised AI inference data centres at the source of stranded power — no transmission infrastructure, no grid dependency, no multi-year construction cycle. We sell the resulting compute capacity to frontier AI laboratories, enterprise AI platforms, and the expanding agentic ecosystem at a structural price advantage that no grid-connected competitor can replicate.

The model is open. The compute is scarce. We own the compute — at near-zero fuel cost.

This is not a niche play. The AI inference market is projected to grow from $106 billion in 2025 to $255 billion by 2030. Cathedral is being built to serve the infrastructure layer of that growth — from the bottom up, at structural cost, at global scale.

002 — The Agentic Revolution

AGENTS
CALLING
AGENTS
CALLING
AGENTS.

The first generation of AI was a human typing a prompt. That era is closing. The next generation is fully autonomous agent hierarchies — orchestrators that plan and delegate, sub-agents that execute and call specialist models, model layers that process and return. A single user intent may propagate through fifty or five hundred inference requests before resolution.

Frontier labs are explicitly designing for this future. Enterprise software platforms will embed agents into every workflow by default. The token volume generated by agent-to-agent interaction will dwarf anything generated by human-initiated queries — and it will be structurally non-sensitive to latency in ways that consumer applications are not.

An orchestrator agent composing a research report does not need a 30ms response. It needs sustained throughput at minimum cost per token — exactly what Cathedral's stranded-power infrastructure provides, from locations that would never qualify as prime real estate for consumer-facing AI.

This is Cathedral's primary market. Not the human at the keyboard. The agent at the API.

5–10× token volume growth
Agent-to-agent: no ceiling
Latency: minutes acceptable
Price / token: primary driver
Agent inference topology
Human / Enterprise
User Intent
↓ single request
Orchestration
Orchestrator Agent
↓ decomposes → delegates
Sub-agent layer
Planner → Coder → Researcher
↓ each calls model inference
Cathedral serves this layer ↓
DeepSeek V4 + Llama 4 + GLM-5
Inference multiplier
Human triggers: 1
Orchestrator calls: 1–5
Sub-agent calls: 5–50
Model requests: 50–500
Total multiplier: 50–500×
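The fan-out above multiplies through each layer. A minimal sketch of the arithmetic, using illustrative per-layer call counts consistent with the ranges in the multiplier figures (the specific numbers are assumptions, not measurements):

```python
# Sketch of the agentic inference fan-out: one human intent propagates
# through orchestrator -> sub-agents -> model requests.
# Per-layer call counts below are illustrative assumptions only.

def total_model_requests(orchestrator_calls: int,
                         subagents_per_call: int,
                         model_calls_per_subagent: int) -> int:
    """Multiply the fan-out at each layer of the agent hierarchy."""
    return orchestrator_calls * subagents_per_call * model_calls_per_subagent

# Low end: 1 orchestrator call, 5 sub-agents, 10 model calls each
low = total_model_requests(1, 5, 10)     # 50 requests per human intent
# High end: 5 orchestrator calls, 10 sub-agents, 10 model calls each
high = total_model_requests(5, 10, 10)   # 500 requests per human intent
print(low, high)  # 50 500
```

The multiplier is geometric in the depth of the hierarchy, which is why agent-to-agent volume compounds rather than scaling linearly with users.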
003 — The Neocloud

Cathedral is being built as a new category of infrastructure company — an inference neocloud. Not a hyperscaler. Not a colocation facility. Not a GPU rental marketplace. A dedicated, open-weight AI inference platform built on stranded power, designed from first principles for the economics of agentic token distribution.

01
OPEN WEIGHT FIRST

Cathedral will deploy the leading open-weight frontier models — DeepSeek V4, Llama 4, GLM-5, Qwen 3.5, MiniMax M2.7 — via OpenAI-compatible APIs. No proprietary model dependencies. No per-token royalties to model owners. The open-weight revolution has made frontier-class intelligence freely deployable. Cathedral will be the infrastructure that runs it at structural cost.
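"OpenAI-compatible" means a customer points an existing client at a different base URL without changing the request schema. A minimal sketch of such a request body — the base URL and model identifier here are hypothetical placeholders, not real endpoints:

```python
import json

# Hypothetical OpenAI-compatible endpoint; only the base URL would change
# for a customer migrating from another provider -- the schema stays standard.
BASE_URL = "https://api.example-neocloud.com/v1"  # placeholder

def chat_completion_payload(model: str, prompt: str) -> dict:
    """Build a standard OpenAI-style chat completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = chat_completion_payload("deepseek-v4", "Summarise this report.")
# An OpenAI-compatible client would POST this JSON to
# f"{BASE_URL}/chat/completions".
print(json.dumps(payload))
```

Because the schema is the industry standard, switching providers is a one-line configuration change for the customer — which is what makes per-token price the deciding variable.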

02
FRONTIER DISTRIBUTION

Frontier AI laboratories — OpenAI, Anthropic, Google DeepMind — will need distributed, overflow inference capacity as they scale globally. Cathedral is being designed as the structural partner for that demand: architecturally neutral, priced by the token, deployable rapidly into geographies and markets that traditional DC infrastructure cannot reach economically.

03
LATENCY AGNOSTIC

Agentic inference is not a latency-sensitive workload. Agent-to-agent calls, background reasoning, multi-step planning, batch synthesis — none of these require the sub-100ms response times that consumer applications demand. Cathedral's sites may be remote. That is structurally irrelevant. We price on tokens, not milliseconds. The market we serve is built exactly for that.

04
MODULAR DEPLOYMENT

Cathedral will deploy containerised compute infrastructure at the source of stranded power in weeks — not months, not years. No data centre construction programme. No civil engineering at scale. A site agreement, pad preparation, and container placement. The system arrives fully integrated: power conditioning, networking, compute. Redeployable when the asset moves.

004 — Stranded Power — Source Agnostic

Cathedral does not have a preferred fuel. It has a preferred economics: electricity with no buyer at the point of generation. Any stranded source qualifies. The compute infrastructure is identical. The margin structure is the same.

HYDROELECTRIC

Run-of-river and reservoir hydro in remote regions frequently generates surplus beyond what local demand can absorb and transmission economics cannot justify. Cathedral will co-locate at the generation source — converting curtailed output into inference revenue for the operator with zero transmission infrastructure.

SOLAR & WIND

Renewable projects in nascent-grid regions routinely curtail output — generating electrons at cost with no buyer. Cathedral's compute infrastructure absorbs variable generation as a flexible, dispatchable load. We operate at variable power levels, capturing output that would otherwise be clipped or spilled.

STRANDED GAS

Associated gas at remote production sites — where pipeline infrastructure does not exist and will not be built — is conditioned and used to generate power on-pad. Near-zero fuel cost. Significant carbon abatement relative to open flaring. The operator gains revenue from a waste stream; Cathedral gains the cheapest electrons on the market.

INDUSTRIAL SURPLUS

Mines, processing plants, refineries, and LNG facilities operate backup and standby generation that sits idle for significant portions of operating time. Cathedral deploys alongside existing generation assets — converting idle capacity into inference revenue with no new power plant required.

GRID ARBITRAGE

Markets with high renewable penetration increasingly experience negative or near-zero electricity pricing during off-peak hours. Cathedral modules will function as flexible industrial load — purchasing power at structural lows and monetising it as inference compute across the demand cycle.
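The arbitrage logic reduces to a dispatch rule: consume when spot power is at or below the structural cost floor, stand down otherwise. A sketch, with the threshold figure assumed for illustration:

```python
# Illustrative dispatch rule for the grid-arbitrage mode described above.
# The $0.03/kWh threshold is an assumed figure, not a stated parameter.

def dispatch(spot_price_per_kwh: float, threshold: float = 0.03) -> str:
    """Decide whether the module runs inference load at this spot price."""
    if spot_price_per_kwh <= 0:
        return "run_full_load"   # negative pricing: paid to consume
    if spot_price_per_kwh <= threshold:
        return "run_full_load"   # at or below the structural cost floor
    return "idle_or_export"      # power too expensive; stand down

print(dispatch(-0.01))  # run_full_load
print(dispatch(0.08))   # idle_or_export
```

In high-renewable markets the negative-price branch is the interesting one: the load is paid to absorb surplus generation.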

+
ANY SOURCE

The Cathedral infrastructure layer is power-source agnostic. If the price per kWh at the point of use is structurally low, we will build there.

10–50 MW per site
1–5 MW modular blocks
Weeks: pad to first token, not years
$0.03: target power cost / kWh, structural margin floor
Revenue streams: compute · carbon · power
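The power-cost figure above translates directly into electricity cost per million tokens. A back-of-envelope sketch — the energy-per-token figure is an illustrative assumption, as real values depend on model, batching, and hardware:

```python
# Back-of-envelope: electricity cost per million output tokens.
# JOULES_PER_TOKEN is an assumed illustrative figure, not a measurement.

POWER_COST_PER_KWH = 0.03   # target power cost from the figures above, USD
JOULES_PER_TOKEN = 0.5      # assumed inference energy per token

def power_cost_per_million_tokens(j_per_token: float,
                                  usd_per_kwh: float) -> float:
    kwh_per_million = j_per_token * 1_000_000 / 3_600_000  # 1 kWh = 3.6 MJ
    return kwh_per_million * usd_per_kwh

cost = power_cost_per_million_tokens(JOULES_PER_TOKEN, POWER_COST_PER_KWH)
print(round(cost, 4))  # ~0.0042 USD of electricity per million tokens
```

Under these assumptions the electricity input is a fraction of a cent per million tokens, which is what "structural margin floor" means in practice: the variable cost of the product approaches zero.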
005 — Revenue Architecture
INFERENCE TOKENS

Token sales to frontier AI labs, enterprise AI platforms, and agentic AI operators via OpenAI-compatible APIs. Open-weight models pre-deployed. Priced per million tokens — structurally below any grid-connected alternative. Volume contracts available for anchor customers requiring sustained throughput.

CARBON CREDITS

Where Cathedral's power source generates verifiable GHG abatement, the installation produces high-integrity Verified Carbon Standard credits. Continuous MRV instrumentation. Tokenised 1:1 on-chain against Verra registry serial numbers. Institutional-grade, double-counting-impossible.

COMPUTE ARBITRAGE

During inference demand troughs, stranded power assets can be directed to alternative digital compute workloads. The architecture is modular and workload-switchable — ensuring no generated watt is wasted and revenue continues regardless of inference demand cycle.

POWER OFFTAKE

Surplus generation capacity is sold to adjacent facilities or exported to regional grids where connection is economical. The microgrid architecture enables seamless real-time switching between inference, compute, and power export — maximising the revenue yield of every kWh generated.

006 — Open Weight & Frontier Distribution

THE
MODEL
IS
FREE.
THE
COMPUTE
IS NOT.

The open-weight revolution has permanently restructured AI economics. DeepSeek V4 — a one-trillion-parameter frontier model — is freely deployable. Llama 4, GLM-5, Qwen 3.5, MiniMax M2.7: open, available, running at frontier capability. The model is no longer the scarce resource.

The compute to run it is. Cathedral will own that compute — at the lowest structural cost in the market — and will sell access to it by the token. No model royalties. No vendor lock-in. No proprietary dependency. Pure infrastructure economics.

Frontier labs will also need Cathedral as a distribution partner. When OpenAI scales into a new geography, when Anthropic needs overflow capacity, when a sovereign AI programme requires domestic inference infrastructure — Cathedral will be the structural alternative to building their own remote facility.

Planned open-weight stack
DeepSeek V4 · 1T params / 37B active · open weight
Llama 4 Maverick · 400B+ params · Meta, open weight
GLM-5 / 5.1 · 744B total / 40B active · open weight
MiniMax M2.7 · frontier coding · open weight
Qwen 3.5 · Apache 2.0 · open weight
gpt-oss-120B · OpenAI open source · MoE
Frontier distribution
Bulk token contracts with OpenAI, Anthropic & Google
Frontier labs are committing $600B+ in AI infrastructure in 2026. Overflow capacity, geographic distribution, and pricing-competitive inference at scale are structural requirements. Cathedral will be designed to fulfil them.
No latency premium
Agentic workloads tolerate 200ms–2s response times
An agent calling a model for a research synthesis does not need 20ms latency. It needs cheap tokens and high sustained throughput. Cathedral's remote power sites are structurally built for exactly this workload — the fastest-growing category in the inference market.
007 — Carbon Integrity

MRV.
TOKENISED.
ON-CHAIN.

Where Cathedral's power source generates verifiable GHG abatement, each installation will produce a continuous stream of high-integrity carbon credits under Verra's Verified Carbon Standard.

Continuous instrumentation will achieve ±3–5% measurement accuracy — the threshold mandated by US EPA regulations and increasingly required by corporate buyers. Each site establishes a baseline emissions scenario and documents verified abatement against it.

Upon issuance, credits will be tokenised 1:1 on-chain. Each token carries a Verra registry serial number. Retirement is instant. Double-counting is structurally impossible. Corporate buyers retire on-chain with immediate registry reflection.
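The double-counting guarantee follows from enforcing a one-to-one mapping between registry serial numbers and tokens. A minimal sketch of that invariant — the serial format and ledger structure are illustrative, standing in for a real on-chain contract anchored to Verra registry entries:

```python
# Minimal sketch of 1:1 credit tokenisation keyed on registry serials.
# Serial format and ledger shape are illustrative assumptions; the real
# system would anchor to Verra registry entries on-chain.

class CreditLedger:
    def __init__(self):
        self._tokens: dict[str, bool] = {}  # serial -> retired flag

    def mint(self, serial: str) -> None:
        """Each registry serial may be tokenised exactly once."""
        if serial in self._tokens:
            raise ValueError("serial already tokenised")  # no double-counting
        self._tokens[serial] = False

    def retire(self, serial: str) -> None:
        """Retirement is permanent and can happen exactly once."""
        if serial not in self._tokens:
            raise KeyError("unknown serial")
        if self._tokens[serial]:
            raise ValueError("already retired")
        self._tokens[serial] = True

ledger = CreditLedger()
ledger.mint("VCS-0001")
ledger.retire("VCS-0001")
```

Minting a serial twice or retiring a token twice fails by construction, which is the property that makes the credits auditable against the registry.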

Cathedral is being structured to be among the first platforms using Verra's forthcoming dedicated flare gas methodology — targeting approval in 2026.

Standards & Compliance
Verra VCS · Structured for
Flare Gas Methodology · Target 2026
MRV ±3–5% Accuracy · Designed for
On-Chain Tokenisation · 1:1 Registry
World Bank ZRF 2030 · Aligned
Global Methane Pledge · Aligned
Third-Party Audit · Required
008 — We are building

YOUR
POWER.
OUR
COMPUTE.

Power source enquiry Investor conversations