The symptoms are everywhere: subscription prices are bifurcating into expensive “pro” tiers and ad-subsidized “basic” models, throttling has become commonplace, and the energy demands of data centers have emerged as a central geopolitical issue.

We’re facing a compute crisis—not merely a shortage of GPUs, but a fundamental economic flaw in how we build and consume AI. Surprisingly, the solution may not be a faster chip, but a boring, standardized protocol for how websites declare their intent: the Mark Semantic Protocol (MSP-1).

The Paradox of “Smarter” AI

To understand the crisis, we have to understand how AI has changed over the last two years. Until 2024, the primary cost of AI was training—the massive, one-time expense of teaching a model. Once trained, “running” it (inference) was relatively cheap.

But as we demanded smarter models capable of complex reasoning, coding, and research, the paradigm shifted to “System 2” thinking. Modern reasoning variants don’t just spit out an answer. They spend thousands of hidden compute cycles checking their work, simulating outcomes, and testing competing paths before responding.

Every complex prompt now kicks off a miniature simulation in a data center. We made AI smarter by making it far more compute-hungry at runtime.

And we’re witnessing the Jevons Paradox in real-time: as algorithmic efficiency improves, we don’t use less compute—we invent more ambitious ways to use it, leading to a net increase in demand. We’re trying to power a 2026 “Agentic Economy” with 2023 compute infrastructure, and the math no longer works.

The “Unstructured Data Tax”

The crisis is compounded by the environment where these AIs operate: the messy, human-centric web. Today’s web was built for eyeballs, not algorithms. It’s full of ambiguous layouts, marketing fluff, and unstructured text.

When you ask an AI agent to “find the cheapest flight to Denver next Tuesday with baggage included,” the agent must:

  • Load large volumes of irrelevant HTML, CSS, and JavaScript
  • Use expensive reasoning tokens to figure out which text is a price, which is an ad, and where the baggage policy is hidden
  • Often hallucinate or fail because the UI is too complex

This is the “Unstructured Data Tax.” We’re burning energy and compute just so supercomputers can figure out basic website navigation. It’s inefficient, expensive, and unsustainable.

Enter MSP-1: A Semantic Handshake for the Web

If we can’t exponentially increase cheap energy, we must exponentially decrease the friction of the web. This is where MSP-1 enters the frame as a piece of practical infrastructure.

MSP-1 is a proposed standard that allows a website to explicitly declare intent, structure, and key data points in a machine-readable format—often via a lightweight /.well-known/msp.json discovery file—so an agent can resolve meaning deterministically before loading full pages.
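Because MSP-1 is a proposal rather than a published spec, there is no canonical schema yet. The sketch below assumes a minimal shape for the discovery file (the `msp_version`, `intent`, and `entities` field names are invented for illustration) and shows an agent resolving a site's declared intent deterministically, without fetching a single page:

```python
import json

# Hypothetical contents of /.well-known/msp.json. MSP-1 is a proposed
# standard, so this schema is illustrative, not normative.
MSP_DECLARATION = """
{
  "msp_version": "1",
  "intent": "e-commerce/product-catalog",
  "entities": [
    {
      "type": "product",
      "id": "sku-1042",
      "name": "Carry-on suitcase",
      "price": {"amount": 89.00, "currency": "USD"},
      "availability": "in_stock",
      "source": "/products/sku-1042"
    }
  ]
}
"""

def resolve_intent(declaration: str) -> str:
    """Read the site's declared intent without loading any full pages."""
    doc = json.loads(declaration)
    return doc["intent"]

print(resolve_intent(MSP_DECLARATION))  # -> e-commerce/product-catalog
```

The point is not the specific field names but the mechanism: meaning is resolved by a dictionary lookup, not by a reasoning pass over rendered markup.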

Unlike schemas primarily aimed at search indexing, such as Schema.org markup, MSP-1 exists explicitly to reduce inference work during runtime evaluation—when autonomous agents are deciding what a page is for, what it contains, and how it should be interpreted.

It’s the difference between handing an AI a 300-page book and asking it to find a specific fact versus handing it a neatly organized index card with the exact answer.

How MSP-1 Addresses the Compute Crisis

Widespread adoption of MSP-1 attacks the compute crisis from three angles:

1) Bypassing the “Reasoning Tax”

Instead of using expensive “System 2” thinking just to parse a webpage, an agent checking a site with MSP-1 can immediately read the intent declaration.

Without MSP-1: The AI reads 5,000 tokens of messy HTML to infer if a product is in stock.

With MSP-1: The AI reads ~50 tokens of structured JSON that explicitly states availability and scope.

Result: a meaningful reduction in compute load per task.
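The two code paths above can be sketched concretely. This toy comparison uses whitespace splitting as a rough stand-in for tokenization, and the MSP-1 snippet's field names are assumptions; the gap in input size is the thing to notice:

```python
import json

# A stand-in for a real product page: repeated boilerplate markup.
MESSY_HTML = "<div class='hero'>...</div> " * 500

# A hypothetical MSP-1 fragment stating availability and scope directly.
MSP_SNIPPET = json.dumps({"type": "product",
                          "availability": "in_stock",
                          "scope": "/products/sku-1042"})

def approx_tokens(text: str) -> int:
    """Crude token estimate: split on whitespace."""
    return len(text.split())

def in_stock_via_msp(snippet: str) -> bool:
    # Deterministic lookup: no reasoning pass over markup is needed.
    return json.loads(snippet)["availability"] == "in_stock"

# The structured path is orders of magnitude smaller than the HTML path.
print(approx_tokens(MESSY_HTML), approx_tokens(MSP_SNIPPET))
print(in_stock_via_msp(MSP_SNIPPET))
```

Real tokenizers and real pages will give different absolute numbers, but the ratio—thousands of tokens of markup versus a few dozen tokens of declaration—is the mechanism behind the savings.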

2) Optimizing Context Windows

AI models have limited “memory” (context windows). When that memory fills up with boilerplate and clutter, the model gets slower and less effective. MSP-1 enables targeted extraction: the agent “peeks” at the declaration, identifies exactly which semantic chunk it needs, and loads only that chunk.

Result: more efficient use of expensive memory and bandwidth in data centers.

3) From Reading to Doing

The ultimate goal of the current AI wave is agents: software that can execute tasks on your behalf. Agents fail today because they get lost in complex UIs. MSP-1 acts as a map, providing clearer endpoints and fewer ambiguous steps.

Result: agents complete tasks in fewer steps with fewer errors, reducing wasted retry loops that burn compute.
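The "map" metaphor can be made concrete with a sketch of an agent resolving a declared action endpoint instead of navigating a UI. The `actions` schema, endpoint paths, and parameter names below are all hypothetical:

```python
# Hypothetical MSP-1 action declarations for a flight-booking site.
DECLARATION = {
    "actions": [
        {"name": "search_flights", "method": "GET",
         "endpoint": "/api/flights", "params": ["origin", "dest", "date"]},
        {"name": "book_flight", "method": "POST",
         "endpoint": "/api/bookings", "params": ["flight_id"]},
    ]
}

def plan_request(declaration: dict, action: str, **kwargs) -> dict:
    """Build one unambiguous request from the declared action map,
    failing fast on missing parameters instead of retrying blindly."""
    for entry in declaration["actions"]:
        if entry["name"] == action:
            missing = [p for p in entry["params"] if p not in kwargs]
            if missing:
                raise ValueError(f"missing params: {missing}")
            return {"method": entry["method"],
                    "url": entry["endpoint"],
                    "params": kwargs}
    raise ValueError(f"unknown action: {action}")

req = plan_request(DECLARATION, "search_flights",
                   origin="SFO", dest="DEN", date="2026-03-10")
print(req["method"], req["url"])  # GET /api/flights
```

One declared step replaces the click-and-scrape loop, and a malformed call fails immediately rather than burning compute on speculative retries.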

The Path Forward

The compute crisis is a signal that the brute-force era of AI—throwing raw power at unstructured data—is ending. We’re entering a phase where efficiency requires coordination between the model and the data source.

MSP-1 isn’t a magic bullet that will make GPUs free, but it is a structural reform: by shifting the burden of organization from the AI’s “brain” to the website’s architecture, we can lower the energy floor required to run a smart internet.

And to be clear: MSP-1 doesn’t prevent dishonesty—but it makes misalignment between declaration and on-page reality easier for agents to detect at scale. That’s the point of explicit, scope-bound declarations: less guessing, less variance, and fewer places for ambiguity to hide.
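That detection step is cheap precisely because the claims are explicit. A minimal sketch, assuming the agent has already extracted the observed values from the rendered page (the field names are illustrative):

```python
# Compare a site's MSP-1 declaration against what the page actually shows.
def detect_drift(declared: dict, observed: dict) -> list[str]:
    """Return the keys where the declaration disagrees with on-page
    reality -- scoped, explicit claims make this a field-by-field diff."""
    return [key for key in declared
            if key in observed and declared[key] != observed[key]]

declared = {"price": 89.00, "availability": "in_stock"}
observed = {"price": 99.00, "availability": "in_stock"}
print(detect_drift(declared, observed))  # -> ['price']
```

Auditing becomes a diff over a handful of declared fields rather than a full re-interpretation of the page, which is what makes misalignment detectable at scale.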

If the web remains a mess, high-intelligence AI will remain a luxury product restricted by cost and energy availability. By adopting standards like MSP-1, we build the roads necessary for the agentic future to actually arrive.