See whether your content is retrievable and citable
Paste your page HTML and Lexi audits it using a structured-language framework built on Eijkemans’ utility-writing model. It evaluates content as semantic chunks with explicit entities, relationships, conditions, and self-contained claims, then returns clear suggestions to improve retrievability and citability.
Client-side extraction. Direct browser-to-Anthropic API calls. No backend. No stored API keys.
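The direct browser-to-Anthropic pattern can be sketched as a pure request builder. The endpoint and header names below follow Anthropic's public Messages API (including the explicit opt-in header Anthropic requires for browser-origin calls); the model id, prompt wording, and `max_tokens` value are placeholders, not Lexi's actual values.

```typescript
// Minimal sketch of a direct browser-to-Anthropic request, with the API
// key held in memory only and never persisted. Model id and prompt are
// illustrative placeholders.

interface AuditRequest {
  url: string;
  method: string;
  headers: Record<string, string>;
  body: string;
}

function buildAuditRequest(apiKey: string, pageHtml: string, model: string): AuditRequest {
  return {
    url: "https://api.anthropic.com/v1/messages",
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-api-key": apiKey, // supplied by the user, used in-memory only
      "anthropic-version": "2023-06-01",
      // Anthropic requires this explicit opt-in header for CORS calls
      // made directly from a browser origin.
      "anthropic-dangerous-direct-browser-access": "true",
    },
    body: JSON.stringify({
      model,
      max_tokens: 4096, // placeholder budget
      messages: [{ role: "user", content: `Audit this page:\n${pageHtml}` }],
    }),
  };
}
```

Because the request is built and sent entirely in the browser, no server ever sees the key or the page content.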
Built on a real methodology
Lexi is grounded in Eijkemans’ structured-language approach to utility-writing.
Informed by the original lineage
The framework draws on the work of Duane Forrester, Olaf Kopp, and Dan Petrovic.
Chunk-aware, not generic
Lexi evaluates whether each chunk can survive extraction, retrieval, and citation pressure.
Actionable output
The audit returns a scored diagnostic, improvement suggestions, structural flags, and realistic uplift estimates.
Why Lexi exists
Most content is written for humans first and machines second. Lexi helps close that gap.
Eijkemans’ article argues that the opportunity is not only structured data, but structured language: writing in a way machines can use. He defines utility-writing as language with named entities, explicit relationships, preserved conditions, and self-contained statements. That is the foundation Lexi builds on.
How it works
Paste your HTML
Lexi strips non-content elements and converts the page into semantic chunks based on headings and paragraphs.
Run the audit
Lexi identifies the page’s parent topic, subtopic angle, likely Schema.org reference types, and expected entities, then scores it across Entity Coverage, Relationship Clarity, Messaging Clarity, and Neutrality.
Review the output
The recommendations panel returns issue summaries, suggested fixes, cross-chunk dependency flags, structural decisions, and a realistic potential score.
What Lexi evaluates
Intent and angle
Lexi starts with an unscored intent analysis so the audit is anchored to what the page is actually trying to cover.
Entity coverage
It shows whether expected entities are present, absent, or mentioned without useful connection.
Relationship clarity
It checks whether entities are tied together with explicit predicates inside a single chunk.
Messaging clarity
It flags unresolved references, buried claims, stripped conditions, and assumed knowledge at chunk boundaries.
Neutrality
It identifies promotional framing that weakens factual extractability.
What you get back
Lexi streams the analysis in sequence: intent analysis, scored evaluation, score panel, then recommendations.
The recommendations are grouped into issues summary, criterion-level suggestions, and potential score summary, with a clear distinction between sentence-level suggestions, cross-chunk issues, and structural decisions.
See the issue. See the suggestion. See the likely upside.
Frequently Asked Questions
Things you will need
- An Anthropic API key. You can create one at console.anthropic.com. Lexi uses Claude Sonnet and costs approximately $0.10–$0.15 per audit depending on content length.
- The raw HTML source of the page you want to evaluate. In most browsers: open the page, then use Ctrl+U (Windows) or Cmd+U (Mac) to view source, select all, and copy.
- A desktop browser. Lexi is not optimised for mobile.
What do I paste into Lexi?
The raw HTML source of the page you want to evaluate, not the rendered text. Open the page, view source (Ctrl+U on Windows, Cmd+U on Mac), select all, and copy. Lexi strips the non-content elements itself.
What is Lexi based on?
Eijkemans’ structured-language approach to utility-writing, which in turn draws on the work of Duane Forrester, Olaf Kopp, and Dan Petrovic.
What does Lexi score?
Lexi scores four criteria, anchored to a prior intent-and-angle analysis:
| Criterion | What it measures |
|---|---|
| Entity Coverage | How completely the content addresses the key entities expected for this topic. Missing entities reduce a retrieval system’s ability to connect this content to related queries. |
| Relationship Clarity | Whether entities are connected with explicit subject-verb-object relationships within a single chunk. Implied or cross-chunk relationships may not survive extraction. |
| Messaging Clarity | Whether each chunk is self-contained and unambiguous. Pronoun chains, missing referents, and assumed context reduce extractability. |
| Neutrality | Whether the content is free from unsupported superlatives, persuasive framing, and endorsement language that signals promotional rather than factual intent to retrieval systems. |
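The four criteria in the table could be represented as a typed score payload like the sketch below. The field names and the equal-weight overall average are assumptions for illustration, not Lexi's documented response schema.

```typescript
// Illustrative shape for the four-criterion score panel. Field names
// and the equal-weight average are assumptions, not Lexi's schema.

interface CriterionScore {
  score: number;    // 0-100
  issues: string[]; // findings for this criterion
}

interface AuditScores {
  entityCoverage: CriterionScore;
  relationshipClarity: CriterionScore;
  messagingClarity: CriterionScore;
  neutrality: CriterionScore;
}

// Equal-weight overall score (weighting is assumed, not specified).
function overallScore(s: AuditScores): number {
  const vals = [
    s.entityCoverage,
    s.relationshipClarity,
    s.messagingClarity,
    s.neutrality,
  ].map((c) => c.score);
  return Math.round(vals.reduce((a, b) => a + b, 0) / vals.length);
}
```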
What does the score mean?
The score measures how well the page’s structure supports extraction, retrieval, and citation, not the overall quality of the content. Higher is better, but the scale is intentionally capped.
Does Lexi rewrite the page for me?
No. Lexi returns issue summaries, suggested fixes, and structural flags; applying them to the page is up to you.
Is my data private?
Yes. Lexi runs entirely in your browser with no backend: your page content goes only from your browser directly to the Anthropic API.
Is my API key stored?
No. Your key is used for direct browser-to-Anthropic calls and is never stored.
Is a perfect score possible?
No. The framework is intentionally capped. Exceptional content falls in the 75–85 range.
Content structure is also just one factor in LLM retrieval. Authority, freshness, indexing coverage, and how a model was trained all play a role. Chasing a high Lexi score is not the goal. The goal is to remove the structural barriers that make otherwise good content harder to retrieve and cite. Lexi consolidates several repetitive and subjective editing tasks into a single, structured process.
Find the structural gaps that make good content hard to retrieve
Paste your HTML, validate the page’s semantic angle, and see what is blocking citability.
Try Lexi