
Perfect for Whom?

A 16-step framework for the perfect AI SEO landing page just walked into a pub. Nobody asked who the human was.

An SEO Manager, a Digital Campaign Manager, a Product Marketing Manager, a Product Manager, a Copywriter, a Comms Lead, an Account Executive, and a Social Media Specialist all walk into a pub.

They’re asked: What does the perfect landing page look like?

Unsurprisingly, everyone has a very different answer.

The Copywriter, who has been through enough of these cycles to know how it ends, continues drinking their Guinness.

In walks a newly minted GEO specialist.

Silence.

Then a brawl ensues.


There is a framework for the perfect landing page. It is sixteen steps long. It does not ask who the page is for.

Alfred Korzybski had a line that has aged well: the map is not the territory. A representation of a thing is not the thing itself. The more useful the map, the easier it is to forget that distinction.

A 16-step framework has been circulating on LinkedIn.[5] The perfect landing page, reduced to an ordered list. It is a well-drawn map. Clear structure, sensible sequence, a coherent philosophy. On the surface, it makes perfect sense.

Hook the reader with an entity-based headline. Give them a reason to act above the fold. Build credibility with a citable asset and authority signals. Frame the problem as an AI-style query, answer it directly, back it with social proof and case studies. Close with transparent pricing, FAQs written for natural language search, a final CTA with urgency, and a human support signal. Make it fast on mobile.

The underlying philosophy: write for a sceptical buyer and an AI summariser simultaneously. Specificity beats superlatives. One page, one goal.

Below is that framework applied faithfully to Claude. The annotations mark each step of the map.

Specimen · 16-step framework applied to: Claude (Anthropic) — fictional
Claude.
Product · Solutions · Pricing · Docs
Log in
#1 · Headline
Generative AI for teams
Claude: AI that turns your best thinking into done work.
#2 · Subheadline

Trusted by 200,000+ teams at companies like Salesforce, GitLab, Shopify, and Notion to draft, reason, and decide — without the hallucination problem.

#3 · Above-Fold CTA

Start thinking free — no credit card · See how teams use Claude →
Free plan includes 50 messages/day · No setup required

#4 · Hero Resource (Citable Asset)
Live demo — try each use case
You
Write an executive summary for a B2B SaaS proposal targeting mid-market retail ops teams.
C
Claude 3.5 Sonnet · claude.ai · ↑ Share · ↗ Export
#5 · Capabilities & Outcomes
What Claude enables
Not a chatbot. A thinking partner.
✍️
Close deals faster without the blank page
Proposals, decks, and follow-ups that sound like your best rep wrote them — because you told Claude exactly who you're writing for.
⚙️
Ship code without the context-switch tax
Explain a bug, get a fix with reasoning. Review a PR in plain English. Write tests for code you've never touched before.
📊
Make decisions with less information anxiety
Hand Claude a messy document, a long thread, or a competing set of options. Get a structured take with risks flagged, not smoothed over.
📧
Clear your inbox in 20 minutes
Draft replies, summarise threads, and triage what actually needs your attention — all without copy-paste gymnastics.
🔍
Research without the five-tab spiral
Ask complex questions, get structured answers with competing perspectives surfaced — not just the most popular take.
🗣️
Prepare for the conversation before it happens
Sharpen your argument, anticipate objections, and stress-test your thinking before the meeting, the pitch, or the board deck.
#6 · Authority Signals
Trusted by teams at
Salesforce
GitLab
Shopify
Notion
HubSpot
Figma
Stripe
Intercom
Linear
Vercel
200K+
teams active
4.8 / 5
avg. user rating
73%
faster first draft
SOC 2
Type II certified
#7 · Problem (framed as AI query)
The real question teams are searching
🔍
"Why does my team still spend hours writing things that should take minutes — even though we have all the tools?"
124,000 searches/month · top variant of "AI productivity tools"
You have Slack, Notion, Google Docs, a dozen integrations. You've tried the general-purpose AI tools. And you're still staring at a blank page for 40 minutes before you can start.
The problem isn't capability. It's that most AI tools hand you raw output and leave the hard part — context, judgment, knowing when the answer is wrong — entirely to you.
#8 · Solution (framed as AI answer)
Claude answers directly

Claude holds context across your entire task — not just the last message. It understands what you're trying to accomplish, flags when something doesn't add up, and produces output specific enough to be immediately useful. Not a starting point. A finished first draft you can actually send.

Reads your documents
Knows your tone
Flags its own uncertainty
Works across tasks
#9 · Social Proof
Real teams, real outcomes
Not a testimonial page. A results page.
I used to spend Sunday evening writing my Monday client updates. Now I do it in 12 minutes. The output sounds like me — not like a robot trying to sound like me.
S
Sarah K.
Account Director · Salesforce
−3.5 hrs/week
We put Claude into our code review flow six weeks ago. Junior engineers are shipping code that used to require a senior sign-off on first pass. That's not a small thing.
M
Marcus T.
Engineering Lead · GitLab
40% faster reviews
It's the first AI tool I've used that tells me when it doesn't know something instead of making something up. For our compliance work, that's not a nice-to-have.
P
Priya M.
Head of Legal Ops · Notion
0 hallucination incidents
#10 · Case Studies & Reviews
Transformation stories
Problem → Solution → Measurable outcome
Shopify (Merchant Comms team)
Content Operations · 8-person team
↓ 87% draft time
Before
Manually drafting 40+ seller announcement emails per quarter. Each email took 2–3 hours through 4 rounds of revision. Tone inconsistency was a recurring complaint from merchants.
After
Claude trained on tone guide and past emails. First drafts now take 18 minutes. Revision rounds down to 1.2 on average. Merchant open rate up 11% after tone improvement.
HubSpot (RevOps)
Revenue Operations · 3-person team
↓ 88% prep time
Before
Monthly pipeline review required pulling data from 6 sources, formatting into a deck, and writing narrative commentary. Took two people a full day each month.
After
Claude connected to reporting exports. Commentary draft generated in 22 minutes. Human review and edit: 45 minutes. Total time reduced from 16 hours to under 2 hours.
#11 · Pricing
Transparent pricing
Start free. Scale when it makes sense.
Free
$0 forever
Claude 3 Haiku — fast, capable
50 messages per day
Document upload & analysis
No credit card required
Pro
$20 / month
Claude 3.5 Sonnet — full capability
Unlimited messages
Projects with persistent context
Priority access during peak hours
Advanced reasoning mode
Team
$30 / user / month
Everything in Pro
Shared team projects & context
Admin controls & usage analytics
SSO + SCIM provisioning
SOC 2 Type II compliance docs
#12 · Value Anchoring
What inaction costs you
$8,400 / month
Average agency retainer for copywriting + research tasks Claude handles in a Teams plan
Claude Teams (5 users)$150 / mo
Content agency retainer$4,500 / mo
Part-time research contractor$3,900 / mo
#13 · FAQs (natural language AI queries)
Common questions
Answered directly, no preamble.
#14 · Final CTA Block
14,200 teams signed up this week
Your next 3-hour task is waiting.

Every week you wait is another week of proposals, reports, and replies that take longer than they should. Start free. No card. No setup.

#15 · Human Support Signal
Still have questions?

Our team is available Monday–Friday, 9am–6pm EST. Typical first response: under 2 hours.

JL
James L.
Customer Success · Claude
JL

Hi there 👋 I'm James from the Claude team. Have questions about pricing or getting your team set up?

JL

Most teams are up and running same day. Happy to walk you through it.

Typical reply · < 2 hours
Claude.
© 2026 Anthropic, PBC · Privacy · Terms

Walk through the specimen section by section. The same absence appears in every one.

The headline

Generative AI for teams

Claude: AI that turns your best thinking into done work.

Trusted by 200,000+ teams at companies like Salesforce, GitLab, and Notion to draft, reason, and decide.

Start thinking free — no credit card · See how teams use Claude →
#1–3 · Headline, Subheadline, Above-Fold CTA

The framework says: entity-based, outcome-driven, AI-readable. Fine. But whose outcome? The SEO Manager wants a keyword. The Product Marketer wants a positioning statement. The AE wants an objection handled before the call. “Claude: AI that turns your best thinking into done work” is a reasonable sentence. It is also three different briefs collapsed into one, with the conflict edited out.

The capabilities section

What Claude enables
Not a chatbot. A thinking partner.
✍️
Close deals faster without the blank page
Sales
⚙️
Ship code without the context-switch tax
Engineering
📊
Make decisions with less information anxiety
Leadership
📧
Clear your inbox in 20 minutes
Operations
🔍
Research without the five-tab spiral
Research
🗣️
Prepare for the conversation before it happens
Strategy
#5 · Capabilities & Outcomes

Six cards, six different jobs-to-be-done, implicitly six different people. The framework says to group by user goal, not by product module — sensible advice — but never says to pick one user. The result is a section that addresses a developer, a salesperson, a researcher, and a manager simultaneously. Each one sees themselves for one card out of six.

The right version of this section is persona-driven. One audience, one problem statement, all messaging in service of that context. That specificity also compounds outside the page: the landing page becomes the natural destination for persona-led campaigns on Search and Social where you control who arrives and with what intent.

A counterpoint worth making: persona-driven pages are not always the right call. If the product has genuinely horizontal value — a developer, a marketer, and an ops lead all arriving from different channels to the same self-serve trial — broad messaging is correct. Segmentation before value is established creates friction, not conversion. Persona specificity earns its cost when the acquisition channel is controlled, the buyer’s role shapes both the problem and the objection, and the sales motion is long enough for context to matter. When all three are true, a generic capabilities grid is not inclusive — it is a missed opportunity.

The social proof section

Real teams, real outcomes
Not a testimonial page. A results page.
I used to spend Sunday evening writing my Monday client updates. Now I do it in 12 minutes.
S
Sarah K.
Account Director · Salesforce
−3.5 hrs/week
Junior engineers are shipping code that used to require a senior sign-off on first pass.
M
Marcus T.
Engineering Lead · GitLab
40% faster reviews
It's the first AI tool that tells me when it doesn't know something instead of making something up.
P
Priya M.
Head of Legal Ops · Notion
0 hallucination incidents
#9 · Social Proof

This is where the contradiction is sharpest, because it is written into the framework’s own instructions: “vary personas — include different roles, company sizes, and use cases.” That is the problem presented as a feature. If your testimonials cover an Account Director, an Engineering Lead, and a Head of Legal Ops, none of them sees themselves primarily reflected. You have made everyone feel partially seen. And “one page, one goal” is right there in the same document.

The problem and solution sections

The real question teams are searching
🔍
“Why does my team still spend hours writing things that should take minutes — even though we have all the tools?”
124,000 searches/month · top variant of “AI productivity tools”
Claude answers directly

Claude holds context across your entire task — not just the last message. It produces output specific enough to be immediately useful. Not a starting point. A finished first draft you can actually send.

#7–8 · Problem (framed as AI query) & Solution (framed as AI answer)

Framed as an AI query and answer. This is where the GEO specialist walks in. The section is no longer written for the person reading it. It is written for the model that might summarise it. The human buyer’s actual problem — the one that got them to this page in the first place — is now a secondary concern, restructured to satisfy a retrieval algorithm.

The pattern repeats. The pricing section assumes transparency is always the right strategy — it is not, and any enterprise AE will tell you why. The FAQs are written in natural language so an AI can index them; real buyer objections live in sales call transcripts, not search queries. The final CTA assumes anyone who scrolled to the bottom is high-intent. Some are. Others were sent the link. High-intent is not a property of position on a page.


The framework is internally coherent. That is exactly what makes it dangerous. It answers every question except the one that matters first: who is this page for?

We try to make a single page work for everyone. In doing so, we make it work for no one.

Beneath that question, there is another: whether AI citation delivers a human to the page at all.

Nobody actually knows what AI looks for

The framework presents GEO as a layer on top of established optimisation logic: write directly, use natural language, answer questions, add schema, signal entities. The implication is that we know what generative engines respond to, and that following these rules improves your chances of being cited.

The researchers who coined the term are not that confident. The original GEO paper, published at ACM SIGKDD in 2024 by a Princeton-led team, is explicit: there remains no principled understanding of the underlying preferences of generative engines.[1]

A September 2025 large-scale study across multiple verticals found that AI search has a systematic and overwhelming bias towards earned media — third-party, authoritative sources — over brand-owned content.[2] A landing page is brand-owned by definition. It starts at a structural disadvantage in AI retrieval regardless of how it is written.

There is also the retrieval question. Analysis of ChatGPT behaviour finds that roughly 60% of queries are answered from parametric knowledge — the model’s training data — without triggering a live web fetch at all.[3] Optimising the structure of a page for a retrieval event that may never happen is a speculative act dressed as a strategy.
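
To make the shape of that bet visible, here is a minimal sketch of the gate, assuming a model-side routing decision. The heuristic and every function name are invented for illustration; no vendor publishes its actual routing logic. Everything below the gate, your page included, exists only for the minority of queries that cross it.

```python
# A minimal sketch of the retrieval gate described above.
# All names and the heuristic are illustrative stand-ins,
# not any vendor's real implementation.

def needs_live_search(query: str) -> bool:
    """Model-side judgment: is parametric knowledge enough on its own?
    Per the analysis cited above, this comes back False for roughly
    60% of real queries."""
    return "latest" in query or "this week" in query  # toy heuristic

def answer(query: str) -> str:
    if not needs_live_search(query):
        # Majority path: answered from training data alone. No page
        # is fetched, so no page structure, however optimised,
        # is ever seen.
        return f"<parametric answer for: {query}>"
    docs = [f"<live web result for: {query}>"]  # the retrieval event
    return f"<answer grounded in {len(docs)} fetched document(s)>"

print(answer("what does the perfect landing page look like?"))
```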

Even when retrieval does happen, the citation is not a reliable signal. Language models generate citations the same way they generate everything else: token by token, from learned probability distributions, with no ground-truth check. The result looks authoritative. It may not exist. Citation fabrication is one of the most routine expressions of AI hallucination — plausible author names, real-sounding journal titles, confident URLs that return 404. Optimising to be cited by a system that sometimes invents its sources is a peculiar goal.
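
That mechanism is easy to see in miniature. Below is a toy sketch, nothing like a real language model, of why a generated citation can look exactly like a retrieved one: it is assembled from plausible pieces, and no step in the process consults a bibliography. Every name and venue in it is invented.

```python
import random

# A toy illustration, not a real language model: a citation is just a
# high-probability sequence of plausible pieces, assembled with no
# lookup step anywhere. Every author and venue here is invented.

def sample_citation(rng: random.Random) -> str:
    authors = ["Chen", "Patel", "Varma", "Okafor"]    # plausible tokens
    venues = ["Journal of Search Engine Strategy",    # real-sounding,
              "Intl. Review of AI Search"]            # not necessarily real
    year = rng.choice([2023, 2024, 2025])
    return (f"{rng.choice(authors)} & {rng.choice(authors)}, "
            f"{rng.choice(venues)}, {year}")

# Nothing in this process consults a bibliography. Neither does decoding.
print(sample_citation(random.Random(7)))
```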

When retrieval does occur, there is no verify step. Standard retrieval-augmented pipelines retrieve, augment, generate. They do not confirm that the retrieved source says what the model claims it says, or that it is the most authoritative available reference on the point. The citation arrives in the output with the same confidence as a sentence the model fabricated entirely. There is no structural difference from the outside.

Retrieval-augmented pipeline: Retrieve → Augment → Generate. Verify: not present.
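
The absence is easy to point at in code. Here is a minimal sketch of that pipeline, every function an illustrative stub rather than any real API; note where a verify step would have to sit, and that nothing occupies the position.

```python
# A minimal sketch of the standard retrieve → augment → generate
# pipeline. Every function is an illustrative stub, not a real API.

def retrieve(query: str) -> list[str]:
    """1. Retrieve: top-k candidate documents from a search index."""
    return [f"<document fetched for: {query}>"]

def augment(query: str, docs: list[str]) -> str:
    """2. Augment: pack the retrieved documents into the prompt."""
    return "Context:\n" + "\n".join(docs) + f"\nQuestion: {query}"

def generate(prompt: str) -> str:
    """3. Generate: token-by-token decoding over the prompt."""
    return f"<model answer, citations included, for: {prompt[:40]}...>"

def rag_answer(query: str) -> str:
    answer = generate(augment(query, retrieve(query)))
    # 4. Verify: not present. Nothing here checks that the answer's
    # claims or citations are supported by the retrieved documents,
    # or that those documents were the best available sources. A
    # fabricated citation leaves this function looking identical to
    # a retrieved one.
    return answer

print(rag_answer("is transparent pricing always the right strategy?"))
```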

What the output looks like

AI search shows a systematic and overwhelming bias toward earned media — third-party, authoritative sources — over brand-owned content, regardless of how the page is structured.

— Chen et al., arXiv:2509.08919, 2025

Pages structured according to GEO best practices are cited in AI search results 2.4× more frequently than unoptimised equivalents, with entity-based headlines accounting for the largest share of the lift.

— Chen & Varma, Journal of Search Engine Strategy, 2024

One of these citations was retrieved. One was fabricated. Which is which?

The verify step is not present. From the output, neither is the difference.

And for that 60% answered from training data: any citation produced is frozen at the training cutoff. A market figure, a published study, a product claim — if the model did not retrieve it live, it is citing a document it last encountered before the world moved on. The framework calls this channel “AI citation”. A more accurate label is a reference to a past state of knowledge presented as current evidence.

None of this means GEO thinking is useless. It means the framework presents evolving, contested, partially evidenced practice as received wisdom — and that the people most likely to be harmed by that framing are the ones who follow it most faithfully.

Traditional SEO and landing pages are often in conflict

The framework positions itself as building on traditional SEO. That framing obscures a structural problem that predates AI entirely.

SEO optimises a page to be found. A landing page is built to convert someone who has already found you. These are different jobs. The buyer who arrives from a paid search ad, a sales email, a LinkedIn post, or a referral link has already passed the informational stage. They are not searching. They have arrived. The content written to rank — keyword-dense, question-answering, structured for crawlers — is written for a person who is one step earlier in the journey than the person now reading the page.

The documented consequence is intent mismatch: research on SEO landing page performance consistently finds that pages optimised for search terms rather than buyer stage show significantly higher bounce rates and lower conversion, because the content answers a question the visitor stopped asking before they clicked.[4]

Writing FAQs “exactly as a user would ask an AI assistant” is a version of this problem. The user is on the page. They are past the query stage. They do not need the page to answer the question that brought them here — they need it to answer the question that determines whether they act.


None of these sections are wrong in isolation. Most of them are good practice, for the right product, sold to the right buyer, arriving from the right channel.

That qualifier — “for the right” — is doing all the work.

The framework omits it entirely. It treats the landing page as a genre with fixed rules, the way a sonnet has fourteen lines. But a landing page is not a form. It is a conversation with a specific person who arrived from a specific place with a specific thing they want to resolve. The rules change every time.

Back to the pub. The brawl does not start because the GEO specialist is wrong. It starts because every person in that room has been optimising for their own metric without anyone first agreeing what the page is supposed to do — and for whom. The framework gave them sixteen things to argue about instead of one question to answer together.

The map is not the territory. The framework is a map drawn before anyone decided where they were going.

The 16 steps are not the problem. The order of operations is. Agree on the audience, the acquisition channel, and the one thing you need them to believe before they act. Then open the framework. Most of the sections will sort themselves out.


References

  1. Aggarwal, P., Murahari, V., Rajpurohit, T., Kalyan, A., Narasimhan, K., & Deshpande, A. (2024). GEO: Generative engine optimization. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. ACM. https://arxiv.org/abs/2311.09735
  2. Chen, M., Wang, X., Chen, K., & Koudas, N. (2025). Generative engine optimization: How to dominate AI search. arXiv preprint arXiv:2509.08919. https://arxiv.org/abs/2509.08919
  3. Digital Bloom. (2025). 2025 AI citation & LLM visibility report. https://thedigitalbloom.com/learn/2025-ai-citation-llm-visibility-report/
  4. Hashmeta. (n.d.). Why most SEO landing pages fail to convert: The hidden performance gaps. https://hashmeta.com/blog/why-most-seo-landing-pages-fail-to-convert-the-hidden-performance-gaps/
  5. York, A. (n.d.). Your landing page ranks on Google, but it still doesn’t convert [LinkedIn post]. LinkedIn. https://www.linkedin.com/posts/anna-york-seo_your-landing-page-ranks-on-google-but-it-activity-7442544592496799744-UIKn