March 26, 2026 · 9 min read

The Agentic Web Fork: Your Next Visitor Won't Have Eyes

The web is splitting into a human layer and an agent layer — and the entire monetisation model built on human attention is about to break.

Image: a forking road splitting into two paths — one with a human silhouette browsing a glowing screen, one lined with circuit patterns and robot iconography — dark background with blue and teal accents


Last month I was debugging why Agent Mac — my personal AI agent — kept failing to extract structured data from a site it was supposed to summarise. The site loaded fine in a browser. Beautiful layout, clean design. But from the agent's perspective it was a wall of JavaScript-rendered HTML with no semantic structure, no JSON-LD, and a bot-blocking middleware that returned a 403 the moment it saw a non-browser user-agent.

The site had been built entirely for human eyes. There was no other kind of visitor.

That's changing. Fast.


The Fork Is Already Happening

We're at an inflection point that most developers are sleep-walking through. The web is splitting into two layers — and infrastructure providers are already building for both.

llms.txt is the first clear signal. Sites are now publishing a structured Markdown file at /llms.txt — a human-readable, machine-parseable overview of what the site contains and how to navigate it. It's basically a sitemap for AI agents. Anthropic publishes one. So does Vercel. The spec is unofficial but adoption is accelerating.
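The format itself is deliberately simple. A minimal llms.txt for a hypothetical documentation site might look like this (the site name and URLs are invented; the spec suggests an H1, a blockquote summary, and sections of annotated links):

```markdown
# ExampleDocs

> Developer documentation for the ExampleDocs API, covering
> authentication, core endpoints, and webhooks.

## Docs

- [Quickstart](https://exampledocs.dev/quickstart.md): Get a first API call working in five minutes
- [API reference](https://exampledocs.dev/api.md): Every endpoint, with request and response schemas

## Optional

- [Changelog](https://exampledocs.dev/changelog.md): Release history
```

The point is that an agent can read this in one request and know where everything lives, instead of crawling and guessing.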

MCP (Model Context Protocol) is the second. Anthropic published MCP in late 2024 as a standard for connecting LLMs to external tools and services. What it's quietly becoming is a protocol for agent-to-service communication — a way for an agent to say "I want to take action on this site" and have the site respond in a structured, authenticated way. Several platforms have already shipped MCP servers. WebMCP is taking this further, embedding MCP directly into web contexts.
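At the wire level, an MCP server advertises its capabilities as a list of tool definitions with JSON Schema inputs. A sketch of what a site's tools/list response might contain (the tool itself is a hypothetical example):

```json
{
  "tools": [
    {
      "name": "get_invoice",
      "description": "Fetch a single invoice as structured JSON",
      "inputSchema": {
        "type": "object",
        "properties": {
          "invoice_id": { "type": "string" }
        },
        "required": ["invoice_id"]
      }
    }
  ]
}
```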

Cloudflare AI Audit is the third. Cloudflare — which sits in front of a meaningful percentage of the entire internet — now has dashboards that differentiate AI crawler traffic from human traffic. They're building rate limiting, routing, and monetisation primitives specifically for AI agents. This isn't experimental. This is infrastructure.

These aren't three isolated features. They're the foundations of a parallel web layer — one that runs alongside the human web but operates entirely differently.


Two Visitors, One Codebase

Here's the part most developers get wrong: the response to this isn't to build two separate sites. It's to recognise that you already have a data model and a presentation layer, and you need two presentation layers now.

| Layer | Human | Agent |
|---|---|---|
| Discovery | SEO, social, backlinks | llms.txt, JSON-LD, MCP manifest |
| Navigation | Menus, UI flows, scroll | API routes, sitemap, tool definitions |
| Auth | OAuth login, session cookie | OAuth delegation, API key, scoped token |
| Action | Form submit, button click | POST endpoint, MCP tool call |
| Response | Rendered HTML, animations | JSON, structured Markdown |
| Feedback | Toast notification, redirect | Status code, result payload |

The bridge between these two worlds already exists: semantic HTML. Proper heading hierarchy, ARIA labels, descriptive link text, structured data with JSON-LD — all the accessibility best practices you've been ignoring for years? They're also what makes your site legible to an agent crawling it without a rendering engine.

In other words: the same work that makes your site accessible to a screen reader makes it accessible to an AI agent. (If you needed another reason to care about accessibility, there it is.)
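Concretely, a page that serves both audiences is just well-structured HTML with embedded structured data — no separate agent version required. A sketch (the article and date are placeholders):

```html
<article>
  <h1>Quarterly P&amp;L Review</h1>
  <p>Revenue rose 12% quarter on quarter, driven by new subscriptions.</p>
  <a href="/reports/q1">Read the full Q1 report</a>
  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Quarterly P&L Review",
    "datePublished": "2026-03-26"
  }
  </script>
</article>
```

A screen reader gets the heading hierarchy and descriptive link text; an agent without a rendering engine gets the same, plus the JSON-LD block for free.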

For ThinkWiser, the accounting SaaS I'm building, this looks concrete. The human layer is the dashboard — forms, charts, review queues. The agent layer is an MCP server wrapping the same underlying API. Same data model, same auth system, different access pattern. An accountant's AI assistant can call the MCP server to pull a client's P&L, flag anomalies, and draft a summary — without the accountant ever opening a browser.
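The "same data model, two presentation layers" shape is easy to sketch. Everything below is hypothetical — invented function names and hardcoded figures standing in for a real accounting backend — but it shows the pattern: one source of truth, one renderer for eyes, one tool-call handler for agents.

```python
import json

# Hypothetical shared data layer: one source of truth for both presentation layers.
def get_profit_and_loss(client_id: str) -> dict:
    # A real system would query the accounting database here.
    return {"client_id": client_id, "revenue": 120_000, "expenses": 87_500,
            "net": 32_500, "anomalies": ["duplicate supplier payment in Feb"]}

# Human layer: render the same data for eyes.
def render_dashboard_row(pl: dict) -> str:
    return f"{pl['client_id']}: net £{pl['net']:,}"

# Agent layer: the same data, exposed as a named tool call returning structured JSON.
def handle_tool_call(name: str, arguments: dict) -> str:
    tools = {"get_profit_and_loss": get_profit_and_loss}
    return json.dumps(tools[name](**arguments))
```

The MCP server is essentially `handle_tool_call` with protocol plumbing around it — the business logic never forks.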

The UI's job doesn't disappear. It shifts: from "how do I do X" to "confirm what the agent did, correct what went wrong." That's a genuinely different design problem.


What the Web Looks Like in 2029

I'll make some predictions. I'll probably be wrong on the timing. I don't think I'm wrong on the direction.

Search hollows out. Informational queries — "what is X", "how does Y work", "best Z for W" — increasingly get answered by AI without a click. Google's AI Overviews are already doing this. ChatGPT browsing, Perplexity, Claude — they all retrieve and synthesise before the human ever sees a blue link. The SEO industry spent thirty years optimising for page rank. The next ten years will be about citation share — getting your content cited by AI systems, not ranked by crawlers.

Agent-calls become a standard analytics dimension. Right now, if an AI agent reads your site, it looks like a weird spike in traffic from an unfamiliar user-agent. Within two years, every analytics platform will track agent-calls separately from human pageviews, with breakdowns by which AI system, which task type, what actions were taken. Cloudflare is already halfway there.
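A first pass at this dimension is a user-agent split. The token list below covers a few real AI crawler user-agents (GPTBot, ClaudeBot, PerplexityBot, CCBot) but is illustrative, not exhaustive — and user-agents can be spoofed, which is why real infrastructure like Cloudflare also leans on IP ranges and behavioural signals:

```python
# Known AI crawler user-agent substrings (partial list, for illustration).
AI_AGENT_TOKENS = ("GPTBot", "ClaudeBot", "PerplexityBot", "CCBot")

def classify_visit(user_agent: str) -> str:
    """Classify a request as an agent-call or a human pageview."""
    if any(token in user_agent for token in AI_AGENT_TOKENS):
        return "agent-call"
    return "human-pageview"
```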

Auth gets delegated. The OAuth 2.0 spec has supported delegation for years — a user authorises an agent once, and the agent acts on their behalf until the grant expires or is revoked. The missing piece was standardised consent flows and audit trails. That's being built now. The UX implication: your app needs to show users what their agents did, not just what they did themselves. Audit logs stop being a compliance feature and become a core product surface.
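The shape of that delegated-access story fits in a few lines. This is a toy in-memory sketch, not a real OAuth implementation — no expiry, no refresh, no persistence — but it captures the three moving parts: a scoped grant, a scope check on every action, and an audit log the user can review:

```python
import secrets
import time

TOKENS: dict[str, dict] = {}   # token -> {"user": ..., "scopes": ...}
AUDIT_LOG: list[dict] = []

def delegate(user: str, scopes: set[str]) -> str:
    """User authorises an agent once; the agent receives a scoped token."""
    token = secrets.token_urlsafe(16)
    TOKENS[token] = {"user": user, "scopes": scopes}
    return token

def agent_action(token: str, action: str) -> bool:
    """Check the token's scopes and record the attempt, allowed or not."""
    grant = TOKENS.get(token)
    allowed = grant is not None and action in grant["scopes"]
    AUDIT_LOG.append({"user": grant["user"] if grant else None,
                      "action": action, "allowed": allowed,
                      "ts": time.time()})
    return allowed
```

Note that denied attempts are logged too — that's the "show users what their agents did" surface.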

Content strategy inverts. The content farms that gamed Google with keyword density are about to hit a wall. AI systems prefer structured, factual, citable content. Short sections. Clear headings. Specific claims with sources. Long-form survives only if it offers genuine synthesis — original analysis, first-hand experience, opinions that can't be generated. The stuff I'm writing right now, basically. (We'll see if that's hubris.)

The web becomes infrastructure, not destination. The browser's cultural centrality declines. Humans still use it — but for an increasing number of tasks, the browser is what the human uses to review what an agent already did. The web itself becomes more like the phone network: essential, invisible, not something you consciously "visit."


The Monetisation Problem Nobody Is Solving

Here's the uncomfortable part that the llms.txt spec conveniently doesn't address.

The entire economics of the web was built on human attention. Banner ads, conversion funnels, email capture, sponsored content, affiliate links — every monetisation model assumes a human with eyes and impulses is looking at the page. Agents don't have impulses. They don't click banner ads. They don't abandon carts.

A site that gets one million agent-calls per month, each one extracting clean structured data and sending nothing in return, is getting scraped at scale with no compensation. The agent is adding value — aggregating, synthesising, acting on behalf of the user — while the site that produced the underlying content gets nothing.

This is not a hypothetical. It's happening now. News publishers, recipe sites, documentation pages — they're all seeing AI traffic spike while ad revenue declines because fewer humans are arriving to see the ads.

What replaces it? I have guesses, not answers.

Per-call API pricing is the obvious model — if agents are calling your service, charge per call. Stripe's API does this already. But it only works if you're providing a direct service, not just content.
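The mechanics of per-call pricing are trivial compared to the business question; the sketch below is a hypothetical flat-rate meter (the price and key are invented, and real metered billing adds idempotency, currency handling, and invoicing):

```python
from collections import Counter

PRICE_PER_CALL = 0.002  # dollars per agent call; illustrative only

usage: Counter = Counter()

def record_call(api_key: str) -> None:
    """Meter one agent call against an API key."""
    usage[api_key] += 1

def monthly_bill(api_key: str) -> float:
    """Flat-rate bill: calls recorded this period times the per-call price."""
    return usage[api_key] * PRICE_PER_CALL
```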

Agent subscription tiers — a site could offer a "machine-readable tier" for a monthly fee, with structured data, guaranteed uptime, and schema stability. Your AI agent subscribes to the data sources it needs.

Data licensing — licensing your structured data directly to AI training pipelines or inference providers. The New York Times sued OpenAI; much of the rest of the industry is quietly moving toward negotiated licensing instead.

Attention at the oversight layer — if humans are primarily using the web to review what agents did, maybe monetisation moves to that review interface rather than the raw data access. The human moment of attention is still valuable; it's just happening later in the flow.

None of these are proven. The honest answer is that the web's business model for the agent era is an open question, and the people who figure it out will build the next Google-scale business.


What to Build Right Now

If you run a site and you want to be positioned for the agent layer, here's what's worth doing today:

Add llms.txt. It takes thirty minutes. Put it at /llms.txt. Write a structured Markdown overview of your site's content, sections, and how to navigate it. Include links to your API docs if you have them. It signals to AI systems that you're agent-ready and gives crawlers a map rather than forcing them to infer structure from HTML.

Add JSON-LD to your key pages. Schema.org structured data is already parsed by AI systems. If you run an ecommerce site, Product schema. If you're a publisher, Article schema. If you're a SaaS, SoftwareApplication. It takes an afternoon and it makes your pages machine-legible.
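For the ecommerce case, a Product snippet in a page's head is all it takes (the widget and price here are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "offers": {
    "@type": "Offer",
    "price": "29.00",
    "priceCurrency": "GBP",
    "availability": "https://schema.org/InStock"
  }
}
```

Embedded in a `<script type="application/ld+json">` tag, that tells any agent the product, price, and stock status without parsing a single div.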

Design action endpoints. Think about what a user's agent might want to do on your site — not just read. Create authenticated API endpoints for those actions. Don't require a browser session. Rate limit them properly. Document them.
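"Rate limit them properly" deserves one concrete shape. A token bucket is the standard primitive — it allows short bursts while capping sustained throughput. This is a single-process sketch with no persistence, not production middleware:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter for an action endpoint."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; refill based on elapsed time."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In practice you'd keep one bucket per API key or delegated token, so a misbehaving agent throttles only itself.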

Start thinking about your auth story for delegated access. How does a user authorise an agent to act on their behalf? What's the scope? What gets logged? This is a design problem, not just an engineering problem. Build it before you need it.

The sites that win in the agent era won't necessarily be the ones humans search for. They'll be the ones agents recommend, call, and trust. That's a different optimisation target than anything the SEO playbook prepares you for.


I don't know exactly how this plays out. The web has bifurcated before — desktop vs. mobile, app vs. browser — and in each case the answer turned out to be "build for both, find the shared primitives." This is probably similar: not a replacement but a layer, sitting on top of everything we've built, serving a new kind of visitor.

The visitors just won't be looking at your carefully chosen font pairings. So maybe deprioritise the CSS and add an llms.txt first.
