
Aleyda’s AI Search Framework Is Right About Everything — And That’s the Problem

The framework is correct. The problem is sequencing — and who goes and tells whom.

Aleyda Solis published a piece recently that identifies 10 characteristics of brands that win in AI search: Accessible, Useful, Recognizable, Extractable, Consistent, Corroborated, Credible, Differentiated, Fresh, Transactable.

[Interactive characteristic map: node size encodes constraint tier; dashed links show relationships between characteristics]

She’s right about all of them. The thinking is clear, the taxonomy is well-constructed, and it’s one of the most coherent attempts I’ve seen to define what AI search success looks like. I can’t find a characteristic that doesn’t belong.

She’s also published a companion assessment checklist — 53 scored validation items across all 10 characteristics, each with a rationale, a verification method, an applicability toggle, and a 0–10 scoring scale. It’s genuinely well-constructed. This isn’t a vague conceptual exercise.

And yet, if you handed this to a team on Monday morning and asked them to act on it, most of them would be stuck by Wednesday. Not because the framework is wrong, and not because the checklist is thin — but because neither can answer the two questions that actually determine whether anything gets fixed: where do we start, and who’s responsible?


The gap between inspiration and instruction

When practitioners try to operationalise a flat list of 10, they tend to do one of two things: start at item one and work down, or pick the items that feel most familiar. Neither approach is likely to fix the actual problem.

The characteristic hierarchy, as a structured sequence:

Gate — Accessible
Foundation — Useful · Extractable · Consistent · Recognizable · Corroborated
Amplifiers — Credible · Differentiated · Fresh · Transactable

Consider a mid-size SaaS brand that’s been through two rebrands and three major product iterations in five years — not unusual, just what product-led growth looks like. The core product launched under one name, absorbed an acquisition, spun out a sub-brand, and was eventually consolidated back under a new positioning. Across G2, Capterra, LinkedIn, their own documentation, third-party reviews, and years of earned press, the product is now referenced under four or five different names. No sameAs schema. No unified entity structure. The homepage is clean but the long tail of the web still refers to the old names.

AI systems building an entity model for this brand encounter a fragmented, contradictory signal set. When a competitor with simpler, more consistent naming exists in the same category, AI systems default to the cleaner signal — not because the SaaS brand’s content is worse, but because entity confidence is lower.

Their primary constraint is Visibility. The fix isn’t a content programme or a thought leadership push. It’s an unglamorous audit of how the brand is named and linked across every surface AI systems can reach. That work doesn’t show up in a traffic report. It doesn’t feel like marketing. But nothing else moves until it’s addressed.
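A first pass at that audit can be sketched in a few lines. The surfaces and brand names below are invented for illustration — a real audit would pull mentions from far more sources — but the logic is the point: normalise every surface name and see how many distinct entity signals you're actually emitting.

```python
import re
from collections import defaultdict

# Hypothetical audit data — every surface and name here is invented.
mentions = {
    "G2": "AcmeFlow",
    "Capterra": "Acme Flow",
    "LinkedIn": "Acme-Flow",
    "docs.example.com": "Flowly",
    "press": "FlowlyHQ",
}

def normalise(name: str) -> str:
    """Collapse case, whitespace, and punctuation so trivially
    different spellings cluster together."""
    return re.sub(r"[\s\-_.]", "", name).lower()

def naming_clusters(mentions: dict) -> dict:
    """Group surfaces by normalised brand name. More than one
    cluster means AI systems see conflicting entity signals."""
    clusters = defaultdict(list)
    for surface, name in mentions.items():
        clusters[normalise(name)].append(surface)
    return dict(clusters)
```

Run against the sample data, this yields three clusters — three competing versions of the entity — which is the fragmentation an AI system has to reconcile before it can represent the brand with any confidence.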

A flat list doesn’t tell you this. It implies all 10 items are roughly comparable in urgency. They aren’t. Accessible isn’t characteristic number one — it’s a prerequisite that makes the other nine either relevant or irrelevant. A brand can be Useful, Credible, Differentiated, and Corroborated and still be completely absent from AI-generated responses if AI retrieval systems can’t parse its content. The most common failure mode isn’t a deliberate block — it’s a site architecture never designed with AI retrieval in mind. Heavy client-side rendering, content behind authentication, documentation siloed in platforms AI systems can’t reach. These aren’t edge cases. They’re the default state of most modern web builds.
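The client-side-rendering failure is easy to spot-check. Here's a rough heuristic using only the Python standard library — it asks whether a page's raw HTML carries meaningful server-rendered text, which approximates what a retriever that doesn't execute JavaScript would see. It's a sketch, not a crawler, and the 50-word threshold is an arbitrary assumption:

```python
import re

def server_rendered_text(html: str) -> str:
    """Strip script/style blocks and tags; return the visible text
    present in the raw HTML before any JavaScript runs."""
    html = re.sub(r"<(script|style)[^>]*>.*?</\1>", " ", html, flags=re.S | re.I)
    text = re.sub(r"<[^>]+>", " ", html)
    return re.sub(r"\s+", " ", text).strip()

def looks_client_rendered(html: str, min_words: int = 50) -> bool:
    """Heuristic: a near-empty body in the raw HTML suggests the page
    only renders client-side — a non-JS retriever may see nothing."""
    return len(server_rendered_text(html).split()) < min_words
```

A typical single-page-app shell — a `<div id="root">` and a script tag — fails this check immediately, which is exactly the "default state of most modern web builds" problem.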


Why the framework’s structure causes the problem

The deeper issue is that several of the 10 characteristics aren’t truly independent inputs. Some are byproducts of other characteristics done well — and treating them as separate levers implies you can work on them directly, which you largely can’t.

Credible is the clearest example. Some of its items are directly actionable — adding author bios, publishing an editorial policy, making sure claims are sourced. Do those. But the items that move the needle most — incoming citations from other sources, positive third-party sentiment, editorial references — are downstream of Useful and Corroborated. They don’t get fixed by a Credible workstream. They emerge when you’ve consistently published content worth citing and built enough third-party presence that others reference you. A brand that invests in Credible surface signals without fixing those upstream inputs is rearranging furniture.

Consistent and Recognizable address the same underlying problem from two angles — entity clarity. Consistent is about unified signals across your own touchpoints. Recognizable is about AI systems being able to identify and distinguish your entity across the web. The checklist splits this across 11 items between the two characteristics. In practice, the remediation is one coordinated workstream: audit how your brand is named and described everywhere, align it, connect it with schema. Treating them as independent efforts doubles the perceived scope without doubling the actual work.
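What "connect it with schema" can look like in practice: a minimal Organization entity with sameAs links, built here as a Python dict and serialised to JSON-LD. Every name and URL below is invented for illustration — the pattern is one canonical name, legacy names demoted to alternateName, and sameAs tying the entity to the third-party profiles that corroborate it.

```python
import json

# Hypothetical entity — substitute your own canonical name and profile URLs.
entity = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "AcmeFlow",                        # the single canonical name
    "url": "https://www.acmeflow.example/",
    "alternateName": ["Acme Flow", "Flowly"],  # legacy names still in the wild
    "sameAs": [                                # profiles that corroborate the entity
        "https://www.linkedin.com/company/acmeflow",
        "https://www.g2.com/products/acmeflow",
        "https://en.wikipedia.org/wiki/AcmeFlow",
    ],
}

jsonld = json.dumps(entity, indent=2)
# Embed as <script type="application/ld+json"> in the page head.
```

This is the unglamorous half of the workstream made concrete: the naming audit decides what goes in `name`, `alternateName`, and `sameAs`; the markup just declares the result consistently.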

Fresh is real, but conditional. For a financial news brand or a health information site, recency is a meaningful signal. For a professional services firm or a B2B software company with evergreen content, freshness is rarely the binding constraint — and prioritising content refresh cycles when the actual problem is thin third-party corroboration is a costly misdirection.


This compounds at scale. For a complex, multi-product organisation — multiple divisions, properties, audiences — a single framework score becomes close to meaningless. The brand entity might score well on Corroborated and Credible while a specific product line has a severe Citability constraint. One subdomain is accessible while another isn’t. A product competing in a crowded category has a critical Differentiation gap the overall score obscures. The question for these brands isn’t “how do we score?” It’s “which product, on which property, for which audience, has the most significant constraint right now?”

Then there’s the harder problem underneath all of this: AI search visibility has no clear owner. Look at the checklist’s verification methods and the cross-functional reality becomes unavoidable. Crawling the site for blocked pages is an engineering or technical SEO task. Reviewing six months of PR outreach is a comms task. Checking whether a centralised brand messaging guide exists is a brand strategy task. Auditing author bios across content is an editorial task. These aren’t things one person does. They’re not even things one team does. A framework that produces 53 equally-surfaced items doesn’t just create a prioritisation problem — it creates a coordination problem. Everyone can point to the items in their lane. Nobody has to reconcile them. And the person most likely to be holding the checklist — an SEO or digital strategist — often doesn’t have the organisational standing to convene the functions needed to fix the binding constraint, whatever it turns out to be.

This is the question the framework leaves unanswered: not just what to fix, but who goes and tells whom, and why they should care.


A more useful frame

The 10 characteristics are the right variables. What’s missing is a way to sequence them — and a language that travels across functions.

AI search visibility is a constraint problem, not a checklist problem. In any system with multiple inputs, there is one binding constraint — one thing limiting the output more than anything else. Improving non-constraints doesn’t improve the system. Only fixing the binding constraint moves the needle. For almost every brand I’ve looked at, that constraint falls into one of three categories:

Visibility — AI systems can’t reliably find or identify you. Entity signals are fragmented, content isn’t retrievable, or the brand doesn’t exist clearly enough as a defined entity for AI systems to confidently represent it.

Citability — AI systems can find you but don’t pull from you. Content isn’t structured for retrieval, isn’t deep enough to cite, or isn’t distinct enough to select over competitors covering the same ground.

Authority — AI systems can find you and can cite you, but don’t trust you enough to recommend you prominently. Third-party validation is thin, or the brand hasn’t established the kind of external corroboration AI systems use as a trust signal.

The three constraints map onto the 10 characteristics like this:

Visibility — Accessible · Consistent · Recognizable. Fix entity signals and retrieval architecture before investing anywhere else.

Citability — Useful · Extractable · Differentiated. Depth and structure are one workstream — address them together, not sequentially.

Authority — Corroborated · Credible · Fresh · Transactable. A long-term programme, with Corroborated as the primary lever.

This framing does something a 53-item checklist can’t: it gives cross-functional teams a shared diagnosis. “Our primary constraint is Authority” is a conversation an SEO can have with a CMO. It maps cleanly to a PR and comms brief. It explains why a technically sound, well-structured site with good content isn’t showing up — and why the fix isn’t more content. “We need to improve our Corroborated score” is not that conversation.

If your constraint is Visibility, audit Accessible first — rendering architecture, authentication walls, structured data presence — then address entity clarity through Recognizable and Consistent as a single workstream. Don’t invest in content quality until AI systems can reliably reach and identify you. If your constraint is Citability, Useful and Extractable are inseparable — content depth and structure are one workstream, not two. Address Differentiated only once content is worth citing. If your constraint is Authority, this is a long-term programme. Corroborated is the primary lever. The Credible items that matter most follow as a consequence.
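The gating logic in that sequence can be written down directly. This is a sketch, not a scoring tool: the tier-to-characteristic mapping follows the constraint model above, scores are on the checklist's 0–10 scale, and the threshold of 6 is an invented assumption — the point is that tiers are checked in order and the first weak tier is the binding constraint.

```python
# Tiers checked in sequence; the first failing tier is the binding constraint.
TIERS = [
    ("Visibility", ["Accessible", "Consistent", "Recognizable"]),
    ("Citability", ["Useful", "Extractable", "Differentiated"]),
    ("Authority",  ["Corroborated", "Credible", "Fresh", "Transactable"]),
]

def binding_constraint(scores: dict, threshold: float = 6.0) -> str:
    """Return the first tier whose weakest characteristic falls below the
    threshold — improving later tiers before this one is wasted effort.
    Missing characteristics score 0 (absence of evidence is a failure)."""
    for tier, characteristics in TIERS:
        if min(scores.get(c, 0) for c in characteristics) < threshold:
            return tier
    return "No binding constraint — optimise amplifiers"
```

Note what the `min` encodes: a tier is only as strong as its weakest characteristic, so a brand scoring 9 on Useful and 3 on Extractable still has a Citability constraint — which is exactly why averaging a flat checklist score obscures the diagnosis.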

The 10 characteristics don’t disappear from this model — they’re all still relevant. What changes is the sequence. And sequence is where most AI search programmes fail: doing the right work in the wrong order, improving non-constraints while the binding constraint goes unaddressed, with no shared language to explain to anyone else why it matters.


Aleyda’s framework tells you what the destination looks like. The constraint model tells you which road to take — and who needs to be in the car.