Lexi

See whether your content is retrievable and citable

Paste your page HTML and Lexi audits it using a structured-language framework built on Eijkemans’ utility-writing model. It evaluates content as semantic chunks with explicit entities, relationships, conditions, and self-contained claims, then returns clear suggestions to improve retrievability and citability.

Client-side extraction. Direct browser-to-Anthropic API calls. No backend. No stored API keys.
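The direct browser-to-Anthropic pattern can be sketched as a request builder like the one below. The model id and prompt are placeholders, and Lexi's actual request shape is not public; the opt-in CORS header is the mechanism Anthropic documents for calling its API from a browser.

```javascript
// Sketch: building a direct browser-to-Anthropic request, no backend in between.
// Model id and prompt are placeholders; Lexi's real request shape may differ.
function buildAuditRequest(apiKey, pageText) {
  return {
    url: "https://api.anthropic.com/v1/messages",
    options: {
      method: "POST",
      headers: {
        "content-type": "application/json",
        "x-api-key": apiKey, // sent straight to Anthropic, never to a Lexi server
        "anthropic-version": "2023-06-01",
        // Anthropic requires this opt-in header for CORS requests from browsers.
        "anthropic-dangerous-direct-browser-access": "true",
      },
      body: JSON.stringify({
        model: "claude-sonnet-4-20250514", // placeholder model id
        max_tokens: 4096,
        messages: [{ role: "user", content: `Audit this content:\n\n${pageText}` }],
      }),
    },
  };
}
```

In the browser, the request would then be dispatched with `fetch(request.url, request.options)`; because the key lives only in the request headers, nothing needs to be stored server-side.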

Built on a real methodology

Lexi is grounded in Eijkemans’ structured-language approach to utility-writing.

Informed by the original lineage

The framework draws on Duane Forrester, Olaf Kopp, and Dan Petrovic.

Chunk-aware, not generic

Lexi evaluates whether each chunk can survive extraction, retrieval, and citation pressure.

Actionable output

The audit returns a scored diagnostic, improvement suggestions, structural flags, and realistic uplift estimates.

Most content is written for humans first and machines second. Lexi helps close that gap.

Eijkemans’ article argues that the opportunity is not only structured data, but structured language: writing in a way machines can use. He defines utility-writing as language with named entities, explicit relationships, preserved conditions, and self-contained statements. That is the foundation Lexi builds on.

01 —

Paste your HTML

Lexi strips non-content elements and converts the page into semantic chunks based on headings and paragraphs.
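The extraction step described above might look something like the following sketch: strip non-content elements, then split the remaining text into heading-anchored chunks. This is a regex-based illustration, not Lexi's implementation; in the browser a DOM parser would do this more robustly.

```javascript
// Sketch of step 01: strip non-content elements, then chunk by headings.
// Regex-based for brevity — a real implementation would use DOM parsing.
function extractChunks(html) {
  const cleaned = html
    .replace(/<(script|style|nav|footer|header)\b[\s\S]*?<\/\1>/gi, "")
    .replace(/<!--[\s\S]*?-->/g, "");
  const strip = (s) => s.replace(/<[^>]+>/g, " ").replace(/\s+/g, " ").trim();
  // split() keeps the capture group, so parts = [preamble, heading, body, heading, body, ...]
  const parts = cleaned.split(/<h[1-6][^>]*>([\s\S]*?)<\/h[1-6]>/gi);
  const chunks = [];
  if (strip(parts[0])) chunks.push({ heading: null, text: strip(parts[0]) });
  for (let i = 1; i < parts.length; i += 2) {
    chunks.push({ heading: strip(parts[i]), text: strip(parts[i + 1] || "") });
  }
  return chunks;
}
```

Each resulting chunk pairs a heading with the paragraph text under it, which is the unit the audit then evaluates.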

02 —

Run the audit

Lexi identifies the page’s parent topic, subtopic angle, likely Schema.org reference types, and expected entities, then scores it across Entity Coverage, Relationship Clarity, Messaging Clarity, and Neutrality.
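The audit output described above could be represented as a structure like this one. Every field name and value here is illustrative, not Lexi's actual schema, and the simple average at the end is an assumption; the framework's real weighting is not stated.

```javascript
// Illustrative shape only — field names and values are assumptions,
// not Lexi's real output schema.
const exampleAudit = {
  intent: {
    parentTopic: "structured content for LLM retrieval",
    subtopicAngle: "auditing page HTML for citability",
    schemaOrgTypes: ["TechArticle", "FAQPage"], // likely reference types
    expectedEntities: ["semantic chunk", "entity", "retrieval"],
  },
  scores: {
    entityCoverage: 68,
    relationshipClarity: 61,
    messagingClarity: 72,
    neutrality: 55,
  },
};

// A plain average of the four criteria (actual weighting unknown):
const values = Object.values(exampleAudit.scores);
const overall = Math.round(values.reduce((a, b) => a + b, 0) / values.length);
```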

03 —

Review the output

The recommendations panel returns issue summaries, suggested fixes, cross-chunk dependency flags, structural decisions, and a realistic potential score.

Intent and angle

Lexi starts with an unscored intent analysis so the audit is anchored to what the page is actually trying to cover.

Entity coverage

It shows whether expected entities are present, absent, or mentioned without useful connection.

Relationship clarity

It checks whether entities are tied together with explicit predicates inside a single chunk.

Messaging clarity

It flags unresolved references, buried claims, stripped conditions, and assumed knowledge at chunk boundaries.

Neutrality

It identifies promotional framing that weakens factual extractability.

Lexi streams the analysis in sequence: intent analysis, scored evaluation, score panel, then recommendations.

The recommendations are grouped into issues summary, criterion-level suggestions, and potential score summary, with a clear distinction between sentence-level suggestions, cross-chunk issues, and structural decisions.

See the issue. See the suggestion. See the likely upside.

Things you will need

  1. An Anthropic API key. You can create one at console.anthropic.com. Lexi uses Claude Sonnet; an audit costs approximately $0.10–$0.15 depending on content length.
  2. The raw HTML source of the page you want to evaluate. In most browsers: open the page, then use Ctrl+U (Windows) or Cmd+Option+U (Mac; Cmd+U in Firefox) to view source, select all, and copy.
  3. A desktop browser. Lexi is not optimised for mobile.

What do I paste into Lexi?

Paste the raw HTML source of the page to audit. Lexi strips non-content elements, extracts semantic chunks, and previews the content before evaluation.

What is Lexi based on?

Lexi is built on Eijkemans’ structured-language / utility-writing framework, with influence from the referenced work of Duane Forrester, Olaf Kopp, and Dan Petrovic. You can read the original article at eikhart.com.

What does Lexi score?

Lexi scores four criteria, anchored to a prior intent-and-angle analysis:

Criterion: what it measures

Entity Coverage: How completely the content addresses the key entities expected for this topic. Missing entities reduce a retrieval system’s ability to connect this content to related queries.
Relationship Clarity: Whether entities are connected with explicit subject-verb-object relationships within a single chunk. Implied or cross-chunk relationships may not survive extraction.
Messaging Clarity: Whether each chunk is self-contained and unambiguous. Pronoun chains, missing referents, and assumed context reduce extractability.
Neutrality: Whether the content is free from unsupported superlatives, persuasive framing, and endorsement language that signals promotional rather than factual intent to retrieval systems.

What does the score mean?

The overall score reflects how well the page is structured for retrieval and citability across the four criteria. In Lexi’s framework, 75–85 means exceptionally well-optimised, 60–74 means well-structured with minor gaps, 45–59 means retrievable but with significant gaps, and anything below 45 indicates systematic issues affecting citability.
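The score bands above map directly to a simple lookup, sketched here with the labels paraphrased from the framework's own descriptions.

```javascript
// Lexi's score bands as a lookup (labels paraphrased from the framework).
function scoreBand(score) {
  if (score >= 75) return "exceptionally well-optimised"; // framework caps near 85
  if (score >= 60) return "well-structured with minor gaps";
  if (score >= 45) return "retrievable but with significant gaps";
  return "systematic issues affecting citability";
}
```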

Does Lexi rewrite the page for me?

No. Lexi evaluates the content, identifies issues, and offers specific suggestions for improvement, including suggested fixes, cross-chunk dependency flags, and structural recommendations where relevant. You decide what to change and how to implement it.

Is my data private?

Lexi extracts and previews content in the browser before evaluation, and it sends API requests directly from the browser to Anthropic rather than through a Lexi backend. Your API key is stored only for the current browser session and cleared when the session ends.

Is my API key stored?

Lexi stores the key in sessionStorage for the current browser session only and sends requests directly to Anthropic, with no Lexi backend or proxy layer.
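Session-only key handling of the kind described above can be sketched with an injectable storage object. In the browser the argument would be `window.sessionStorage`, which the browser clears when the tab's session ends; the in-memory stand-in below is only there so the sketch runs outside a browser, and the storage key name is an assumption.

```javascript
// Session-scoped key handling, sketched with an injectable storage object.
// In the browser you would pass window.sessionStorage; the key name
// "lexi_api_key" is illustrative, not necessarily what Lexi uses.
function saveApiKey(storage, key) {
  storage.setItem("lexi_api_key", key);
}
function loadApiKey(storage) {
  return storage.getItem("lexi_api_key");
}

// Minimal stand-in for sessionStorage so the sketch runs outside a browser:
function memoryStorage() {
  const m = new Map();
  return {
    setItem: (k, v) => m.set(k, String(v)),
    getItem: (k) => (m.has(k) ? m.get(k) : null),
  };
}
```

Because nothing is written to `localStorage` or sent to a Lexi server, closing the tab is enough to discard the key.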

Is a perfect score possible?

No. The framework is intentionally capped. Exceptional content falls in the 75–85 range.

Content structure is also just one factor in LLM retrieval. Authority, freshness, indexing coverage, and how a model was trained all play a role. Chasing a high Lexi score is not the goal. The goal is to remove the structural barriers that make otherwise good content harder to retrieve and cite. Lexi consolidates several repetitive and subjective editing tasks into a single, structured process.

Find the structural gaps that make good content hard to retrieve

Paste your HTML, validate the page’s semantic angle, and see what is blocking citability.

Try Lexi