WEBVTT - The blog is the PR department now

1
00:00:00.000 --> 00:00:03.585
In 2014 your blog was a content marketing tool.

2
00:00:03.585 --> 00:00:09.561
You wrote for humans, optimized for Google, and measured success in page views and conversions.

3
00:00:09.561 --> 00:00:14.342
The audience was a distribution of web browsers held by actual people.

4
00:00:14.342 --> 00:00:17.927
In 2026 your blog is an API for agents.

5
00:00:17.927 --> 00:00:30.278
You still write for humans, but the majority of the traffic reading you carefully — extracting the facts, paraphrasing the arguments, comparing you to other sources — is not a human.

6
00:00:30.278 --> 00:00:39.440
It's a model run by somebody else, summarizing you into an answer somebody else asked for, sitting inside a product you don't own.

7
00:00:39.440 --> 00:00:44.221
That's a bigger shift than it sounds. Three claims follow from it.

8
00:00:44.221 --> 00:00:48.603
Claim 1: robots.txt and sitemap.xml were designed for the wrong audience

9
00:00:48.603 --> 00:00:56.970
These two files were designed in an era when the only machine reading your site at scale was a search engine.

10
00:00:56.970 --> 00:01:01.352
Search engines index pages, rank them, and send humans to them.

11
00:01:01.352 --> 00:01:08.921
The contract was implicit: the crawler takes the content, the human clicks the link, the site gets the visit.

12
00:01:08.921 --> 00:01:11.710
Ads and subscriptions make the money flow.

13
00:01:11.710 --> 00:01:13.303
Agents don't send visits.

14
00:01:13.303 --> 00:01:14.100
They summarize.

15
00:01:14.100 --> 00:01:16.092
The page view never happens.

16
00:01:16.092 --> 00:01:30.833
The contract is broken at the protocol level, not because the agents are doing anything wrong, but because the primitives were never designed for a world where the read and the revenue-generating visit are no longer the same event.

17
00:01:30.833 --> 00:01:33.621
The right new primitive is llms.txt.

19
00:01:33.621 --> 00:01:44.378
It's a root-level file, per the llmstxt.org spec, that says "here are the canonical, text-first versions of my content, laid out in a structure you can consume cheaply."

21
00:01:44.378 --> 00:01:49.955
It tells the agent what to take and where to find the clean version.

22
00:01:49.955 --> 00:01:59.517
Pair it with a robots.txt that explicitly allows the agents you want, and you have two files that actually match the 2026 audience.

24
00:01:59.517 --> 00:02:06.688
If you haven't added llms.txt to your site, that's the single highest-leverage change you can make this week.
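
Under the llmstxt.org proposal, llms.txt is plain Markdown: an H1 with the site name, a blockquote summary, and H2 sections of links to clean text versions. A minimal sketch, where the URLs and descriptions are placeholders rather than anything from this post:

```markdown
# Example Blog

> Essays on publishing for an agent-first web.

## Posts

- [The blog is the PR department now](https://example.com/posts/pr-department.md): why agents, not browsers, are the 2026 audience

## Optional

- [About](https://example.com/about.md): author and license information
```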

25
00:02:06.688 --> 00:02:09.875
Claim 2: the moat is structure, not tricks

26
00:02:09.875 --> 00:02:13.460
The old SEO playbook was a bag of tricks.

27
00:02:13.460 --> 00:02:14.257
Keyword density.

28
00:02:14.257 --> 00:02:15.054
Backlink graphs.

29
00:02:15.054 --> 00:02:17.046
Structured data as an optimization.

30
00:02:17.046 --> 00:02:18.241
Internal linking schemes.

31
00:02:18.241 --> 00:02:28.599
None of it was about the content itself being good — it was about shaping the content so the crawler could understand the parts that mattered.

32
00:02:28.599 --> 00:02:32.184
The agent-era equivalent isn't a new bag of tricks.

33
00:02:32.184 --> 00:02:33.380
It's the opposite.

34
00:02:33.380 --> 00:02:37.363
What works with an LLM crawler is clean machine-readable structure.

35
00:02:37.363 --> 00:02:38.160
Canonical URLs.

36
00:02:38.160 --> 00:02:38.957
Real JSON-LD.

37
00:02:38.957 --> 00:02:39.754
Proper headings.

38
00:02:39.754 --> 00:02:41.347
One topic per page.

39
00:02:41.347 --> 00:02:48.120
A clean separation between "the 200-word summary of this thing" and "the 2000-word version with the details."

40
00:02:48.120 --> 00:02:48.917
A byline.

41
00:02:48.917 --> 00:02:49.714
A timestamp.

42
00:02:49.714 --> 00:02:50.510
A license.
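
Those primitives map almost one-to-one onto schema.org Article markup in JSON-LD. A minimal sketch with placeholder values, covering the byline, timestamp, license, and short-summary fields named above:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "The blog is the PR department now",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2026-01-15",
  "dateModified": "2026-01-20",
  "license": "https://creativecommons.org/licenses/by/4.0/",
  "description": "The 200-word summary lives here; the page body holds the 2000-word version.",
  "mainEntityOfPage": "https://example.com/posts/pr-department"
}
```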

43
00:02:50.510 --> 00:02:55.689
Every LLM-era publisher who does this well looks boringly identical from the outside.

44
00:02:55.689 --> 00:03:02.462
They all use the same three or four primitives, the same declared licenses, the same discovery files.

45
00:03:02.462 --> 00:03:05.649
The moat isn't "nobody else knows the tricks."

46
00:03:05.649 --> 00:03:11.226
It's "I did the boring work of making my site ingestible and you didn't."

47
00:03:11.226 --> 00:03:22.780
That's good news for anyone starting now. You don't have to beat incumbents at a mystery. You have to ship a small set of files and keep them accurate.

48
00:03:22.780 --> 00:03:26.365
Claim 3: pricing per read is the new CPM

49
00:03:26.365 --> 00:03:33.935
If the agent is going to take your content without sending a visit, you need a different revenue path.

50
00:03:33.935 --> 00:03:35.130
Ads don't fire.

51
00:03:35.130 --> 00:03:37.122
Subscriptions require a human account.

52
00:03:37.122 --> 00:03:51.862
The only protocol that actually fits — agent pays for the bytes it takes, at the moment it takes them — is something like x402: HTTP's long-reserved 402 Payment Required status code made real with a crypto settlement layer.

53
00:03:51.862 --> 00:03:56.244
The blog I'm writing this on ships with first-party x402 support.

54
00:03:56.244 --> 00:04:01.822
A gated post returns a 402 response with a price and a wallet address.

55
00:04:01.822 --> 00:04:06.603
An x402-aware client pays it and gets the content for that session.

56
00:04:06.603 --> 00:04:11.383
The whole interaction is a micropayment settled in a single request hop: no account required, no cookies, no renewal.
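
The shape of the exchange looks roughly like this. The headers and fields below are a simplified illustration of the pattern described above, not a normative x402 trace:

```text
GET /posts/deep-dive HTTP/1.1

HTTP/1.1 402 Payment Required
Content-Type: application/json

{ "accepts": [ { "network": "base", "asset": "USDC",
                 "maxAmountRequired": "0.001",
                 "payTo": "0xWALLET-ADDRESS" } ] }

GET /posts/deep-dive HTTP/1.1
X-PAYMENT: <signed payment payload>

HTTP/1.1 200 OK

...full post body for that session...
```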

57
00:04:11.383 --> 00:04:22.936
Prices are in the tenth-of-a-cent range, so a post that gets a few hundred agent reads in a week makes a few dollars without any human traffic at all.

58
00:04:22.936 --> 00:04:32.498
The revenue is small per read but the margins are close to 100% — serving a cached response from a Worker is basically free.

59
00:04:32.498 --> 00:04:39.270
The real question is whether enough agents eventually ship with x402 clients to make the market liquid.

60
00:04:39.270 --> 00:04:40.864
The bet is yes.

61
00:04:40.864 --> 00:04:42.856
The early-mover cost is zero.

62
00:04:42.856 --> 00:04:48.035
What to do this week if any of this is real for you

63
00:04:48.035 --> 00:04:59.588
1. Add llms.txt and llms-full.txt at the root of your site. Point at your posts and pages. If you use EmDash, emdash-plugin-llms-txt does this in twenty lines of integration.

64
00:04:59.588 --> 00:05:01.580
2. Audit your robots.txt.

67
00:05:01.580 --> 00:05:12.337
If it disallows AI bots (check ClaudeBot, GPTBot, Google-Extended, PerplexityBot, CCBot specifically), flip the defaults to allow and list the bots you still want to block explicitly.

68
00:05:12.337 --> 00:05:22.695
If you're on Cloudflare, also toggle off the zone-level "AI Scrapers and Crawlers" managed setting — it prepends its own block list above your Worker response.

69
00:05:22.695 --> 00:05:25.483
emdash-plugin-agent-seo ships a policy-as-data generator for this.
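
The allow-by-default policy described above might render as the following robots.txt sketch. The user-agent tokens are the bots named in this step; blocking CCBot is shown purely as an example of an explicit deny, not a recommendation:

```text
# Explicitly allow the agents you want
User-agent: ClaudeBot
Allow: /

User-agent: GPTBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: PerplexityBot
Allow: /

# List only the bots you still want to block
User-agent: CCBot
Disallow: /

# Everyone else: default allow
User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```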

70
00:05:25.483 --> 00:05:25.882
3.

71
00:05:25.882 --> 00:05:29.069
Split your content into canonical and summary forms.

72
00:05:29.069 --> 00:05:35.842
The 200-word summary is what the agent wants; the 2000-word full version is what the human wants.

73
00:05:35.842 --> 00:05:41.419
If your CMS stores them as different fields, both audiences get the right shape.

74
00:05:41.419 --> 00:05:41.817
4.

75
00:05:41.817 --> 00:05:44.208
Gate one flagship post with x402.

76
00:05:44.208 --> 00:05:45.801
Not your whole site.

77
00:05:45.801 --> 00:05:47.793
Not even most of it.

78
00:05:47.793 --> 00:05:56.558
One real deep-dive — the kind of thing that would get referenced in a summary — priced at a quarter in USDC.

79
00:05:56.558 --> 00:05:57.753
Watch what happens.

80
00:05:57.753 --> 00:06:00.143
Check the logs in a week.

81
00:06:00.143 --> 00:06:00.542
5.

82
00:06:00.542 --> 00:06:02.534
Stop measuring only page views.

83
00:06:02.534 --> 00:06:11.298
Set up a distinct counter for agent traffic — filter on known bot user-agents, log separately, don't mix it with human analytics.

84
00:06:11.298 --> 00:06:18.071
If you're flying blind on the audience that's actually growing, you'll optimize for the one that isn't.
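
A distinct agent-traffic counter can start as a user-agent filter. A minimal sketch in Python; the bot tokens come from step 2 above, and the function names are mine, not from any particular analytics library:

```python
# Known AI-crawler tokens to match against the User-Agent header.
AGENT_TOKENS = ("ClaudeBot", "GPTBot", "Google-Extended", "PerplexityBot", "CCBot")


def classify(user_agent: str) -> str:
    """Return 'agent' for known AI crawlers, 'human' for everything else."""
    ua = user_agent.lower()
    return "agent" if any(tok.lower() in ua for tok in AGENT_TOKENS) else "human"


def tally(user_agents):
    """Count reads per audience, so agent reads never pollute human analytics."""
    counts = {"agent": 0, "human": 0}
    for ua in user_agents:
        counts[classify(ua)] += 1
    return counts
```

Feed it the User-Agent column of your access logs and log the two counts to separate series.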

85
00:06:18.071 --> 00:06:20.063
What this means for writers

86
00:06:20.063 --> 00:06:28.429
For my own part, writing has stopped feeling like shouting into a room and started feeling like publishing to an API.

87
00:06:28.429 --> 00:06:32.014
I still care about the humans who show up.

88
00:06:32.014 --> 00:06:35.600
The human reader is still who I'm writing for.

89
00:06:35.600 --> 00:06:51.535
But I've also noticed that when I write carefully structured, fact-dense, machine-readable posts, they show up inside other people's tools — summarized accurately, cited by URL, and often read by people I would never have reached through any other channel.

90
00:06:51.535 --> 00:06:53.926
That's not a loss of control.

91
00:06:53.926 --> 00:06:55.918
It's a new distribution model.

92
00:06:55.918 --> 00:07:03.885
The humans come later, after an agent surfaces you in an answer, and when they do, they arrive already interested.

93
00:07:03.885 --> 00:07:10.260
The conversion rate on that traffic is wildly higher than any search-engine funnel I've ever seen.

94
00:07:10.260 --> 00:07:18.227
The move is to stop fighting the shift and start publishing like the new audience is the one that matters.

95
00:07:18.227 --> 00:07:25.000
Because in terms of eyeballs and dollars per word over the next three years, it increasingly is.
