ContentGraph

Map the concept structure your content is missing

Paste your article and ContentGraph maps every concept, relationship, and coverage gap against an ideal explanation framework — surfacing exactly where retrieval systems will fail to find your content, and what to do about it.

Client-side extraction. Direct browser-to-Anthropic API calls. No backend. No stored API keys.

Graph-native analysis

ContentGraph treats entities and their relationships — not prose — as the primary unit of content signal.

Fan-out aligned

The framework models the query sub-topics retrieval systems generate — concept by concept, not page by page.

Dual-graph output

Observed graph shows what exists. Framework graph shows what should. The gap is the diagnostic.

Editorial guidance

Four categories of actionable writing instructions — not a score, but a structured prioritised to-do list.

Modern retrieval systems do not read pages — they decompose queries into sub-queries and match each one against individual facets of a topic. A page that covers five of eight expected concepts answers five sub-queries and fails the rest, regardless of how well the covered sections are written.
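The arithmetic above can be sketched directly: coverage is scored sub-query by sub-query, not page by page. The concept names below are illustrative, not ContentGraph's internals.

```typescript
// Illustrative sketch: a page is matched sub-query by sub-query,
// so missing concepts fail their sub-queries regardless of prose quality.
const expectedConcepts = [
  "definition", "mechanism", "dependencies", "outputs",
  "applicability", "contrasts", "failure modes", "evidence",
];

// Hypothetical page that covers five of the eight expected concepts.
const covered = new Set([
  "definition", "mechanism", "dependencies", "outputs", "applicability",
]);

const answered = expectedConcepts.filter((c) => covered.has(c));
const failed = expectedConcepts.filter((c) => !covered.has(c));

console.log(`answers ${answered.length} sub-queries, fails ${failed.length}`);
// With the sets above: answers 5 sub-queries, fails 3
```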

The design is grounded in the same principle behind schema.org: entities and the relationships between them are the primary unit of content signal, not prose alone. Where schema.org achieves this through structured markup for crawlers, ContentGraph achieves it through natural language — analyzing the relational structure already present in the writing, surfacing where it is thin, missing, or inconsistent, and producing actionable guidance for closing those gaps.

ContentGraph runs in two sequential phases. Phase 1 maps what your content contains. Phase 2 generates what it should contain and tells you how to close the gap.

01 —

Paste your content

ContentGraph accepts a URL or raw text. It strips HTML, splits into numbered sentences, and builds an extraction-ready view of the content before analysis begins.
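A minimal sketch of that preprocessing step. The exact splitting rules ContentGraph uses are not documented here; the regex-based version below is an assumption (a real browser implementation would use DOMParser for the HTML).

```typescript
// Sketch: strip tags, collapse whitespace, and number sentences.
// The regex-based tag stripping is a simplification for illustration.
function toNumberedSentences(input: string): string[] {
  const text = input
    .replace(/<[^>]+>/g, " ") // drop HTML tags
    .replace(/\s+/g, " ")     // collapse whitespace
    .trim();
  // Naive sentence split on terminal punctuation followed by a space.
  const sentences = text.split(/(?<=[.!?])\s+/).filter((s) => s.length > 0);
  return sentences.map((s, i) => `${i + 1}. ${s}`);
}

const demo = toNumberedSentences(
  "<p>HTTPS uses TLS. The handshake negotiates a session key!</p>"
);
// demo[0] === "1. HTTPS uses TLS."
```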

02 —

Phase 1: observed map

One LLM call identifies the anchor concept, extracts all observed concepts with integration states, maps explicit and implied relationships, and assesses coverage across eight diagnostic questions.

03 —

Phase 2: framework and guidance

Two LLM calls generate the explanation framework — an ideal coverage model for the anchor topic — and translate the gap into four categories of writing guidance: concepts to add, concepts to clarify, relationships to make explicit, and sentence-level directives.
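The shapes below are hypothetical, inferred from the categories this page describes; they are not ContentGraph's actual response schema.

```typescript
// Hypothetical output shapes for the two phases, inferred from the
// categories described above, not ContentGraph's actual schema.
type IntegrationState =
  | "well_integrated" | "weakly_integrated"
  | "underexplained" | "naming_inconsistent";

interface ObservedConcept {
  name: string;
  state: IntegrationState;
  mentions: number;
  aliases: string[];
}

interface Guidance {
  toAdd: string[];            // missing or partial concepts
  toClarify: string[];        // underexplained or inconsistently named
  toMakeExplicit: string[];   // implied relationships to state outright
  sentenceGuidance: string[]; // concept-anchored directives
}

const observed: ObservedConcept = {
  name: "Cert. Authority",
  state: "underexplained",
  mentions: 1,
  aliases: ["CA", "certificate issuer"],
};

const example: Guidance = {
  toAdd: ["HSTS"],
  toClarify: ["Cert. Authority"],
  toMakeExplicit: ["Certificate issued by Cert. Authority"],
  sentenceGuidance: ["Define the Certificate Authority before first use."],
};
```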

04 —

Review both graphs and act

The observed and framework graphs render side-by-side. The writing guidance panel exports as markdown. Verify the anchor first, then work down the prioritised guidance list.

Extracted Graph
Nodes: HTTPS · TLS handshake · Certificate · Cert. Authority · Public-key crypto · Session key
Edges: uses · requires · relies on · validates · issued by · generates

Node detail (hover): Cert. Authority · Underexplained
Role in explanation: Validates authenticity of the server’s public key via certificate signing.
Frequency: mentioned in content
Also called: "CA", "certificate issuer"
Relationships: Certificate → issued by → Cert. Authority

Legend: Well integrated · Weakly integrated · Underexplained · Naming inconsistent · Implied
Each node is sized by mention count and colored by integration state. Hovering a node surfaces its role, naming variants, and extracted relationships. Cert. Authority is marked Underexplained — present in the content but never given enough context for a retrieval system to use it confidently.

Phase 1 produces a structured reading of the content across five analytical dimensions. Phase 2 compares that reading against an ideal framework and produces targeted guidance.

Anchor concept

The primary subject of the content. All other analysis is relative to the anchor — verify it first before acting on any other output.

Integration states

Each concept is assigned well_integrated, weakly_integrated, underexplained, or naming_inconsistent — a direct signal of retrieval reliability per concept.

Relationship directionality

Relationships are extracted as subject-verb-object triples, classified as explicit or implied, and rendered with directional arrows. Implied relationships are the primary driver of the toMakeExplicit guidance category.
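A sketch of how implied triples could feed the toMakeExplicit category. The triple type and filter are an assumed reading of the description above.

```typescript
// Sketch: SVO triples classified explicit/implied; the implied ones
// become toMakeExplicit guidance, per the description above.
interface Triple {
  subject: string;
  verb: string;
  object: string;
  basis: "explicit" | "implied";
}

const triples: Triple[] = [
  { subject: "HTTPS", verb: "uses", object: "TLS handshake", basis: "explicit" },
  { subject: "Certificate", verb: "issued by", object: "Cert. Authority", basis: "implied" },
];

const toMakeExplicit = triples
  .filter((t) => t.basis === "implied")
  .map((t) => `State outright: ${t.subject} ${t.verb} ${t.object}.`);
// → ["State outright: Certificate issued by Cert. Authority."]
```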

Question coverage

Eight diagnostic questions — what it is, how it works, what it depends on, what it produces, when it applies, what contrasts with it, what goes wrong, and what evidence supports it.

Framework gap

Framework concepts are assigned essential, important, or useful priority, and coverage status of yes, partial, or no against the observed content.
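Combining priority with coverage status yields a ranked gap list; the sketch below is one plausible reading of the "prioritised guidance list", with hypothetical concept data.

```typescript
// Sketch: rank framework gaps by priority, skipping fully covered
// concepts. Data and ranking logic are illustrative assumptions.
type Priority = "essential" | "important" | "useful";
type Coverage = "yes" | "partial" | "no";

interface FrameworkConcept { name: string; priority: Priority; coverage: Coverage }

const rank: Record<Priority, number> = { essential: 0, important: 1, useful: 2 };

function gapList(concepts: FrameworkConcept[]): string[] {
  return concepts
    .filter((c) => c.coverage !== "yes")        // fully covered: no action
    .sort((a, b) => rank[a.priority] - rank[b.priority])
    .map((c) => c.name);
}

const gaps = gapList([
  { name: "Forward secrecy", priority: "important", coverage: "no" },
  { name: "HSTS", priority: "essential", coverage: "no" },
  { name: "Session key", priority: "essential", coverage: "yes" },
]);
// gaps === ["HSTS", "Forward secrecy"]
```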

Explanatory roles

Each concept is labelled with its explanatory function — mechanism, prerequisite, outcome, context, component, or contrast — so you understand why a concept appears in the framework.

ContentGraph streams results as events complete. The observed graph renders after Phase 1. The framework and writing guidance render as Phase 2 finishes — no waiting for the full run.
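The event names below are hypothetical; the sketch only shows the pattern of rendering each result as its event arrives rather than waiting for the full run.

```typescript
// Sketch of incremental rendering: each event updates its own panel
// as it completes. Event names are hypothetical.
type AnalysisEvent =
  | { kind: "observed_graph"; payload: unknown }
  | { kind: "framework_graph"; payload: unknown }
  | { kind: "writing_guidance"; payload: unknown };

const rendered: string[] = [];

function onEvent(e: AnalysisEvent): void {
  // Render immediately; later events must not block earlier panels.
  rendered.push(e.kind);
}

onEvent({ kind: "observed_graph", payload: {} });   // after Phase 1
onEvent({ kind: "framework_graph", payload: {} });  // during Phase 2
onEvent({ kind: "writing_guidance", payload: {} }); // Phase 2 complete
// rendered === ["observed_graph", "framework_graph", "writing_guidance"]
```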

Layer · What it contains

Observed graph: Interactive force-directed graph of all concepts in the content, colored by integration state. Hover a node to see its role, naming variants, mention count, and extracted SVO triples.
Framework graph: The expected fan-out space for the anchor topic. Amber and violet nodes mark concepts absent from or thin in the current content. Compare side-by-side with the observed graph to see the gap visually.
Writing guidance: Four-category editorial table: toAdd (missing or partial concepts), toClarify (underexplained or inconsistently named), toMakeExplicit (implied relationships), and sentenceGuidance (concept-anchored directives). Exports as markdown.
Content findings: Anchor concept, inferred audience and goal, question coverage results across all eight dimensions, and SVO triples for each extracted relationship.
Extracted Graph
Nodes: HTTPS · TLS handshake · Certificate · Cert. Authority · Public-key crypto · Session key
Edges: uses · requires · relies on · validates · issued by · generates
Legend: Well integrated · Weakly integrated · Underexplained · Naming inconsistent · Implied

Proposed Graph
Nodes: HTTPS · TLS handshake · Certificate · Cert. Authority · Public-key crypto · Session key · HSTS · Forward secrecy
Edges: uses · requires · relies on · enforces · creates · issued by
Legend: In content · Inferred · Optional · Implied
The Extracted Graph (left) shows concepts as they appear in the content, colored by integration state. The Proposed Graph (right) shows the expected fan-out space for the anchor topic, colored by basis — amber and violet nodes are absent from or thin in the current content.
Writing Guidance
Export .md
What to add

Concepts missing or underdeveloped in your current content.

Instruction: Add a section explaining HTTP Strict Transport Security. HSTS headers instruct browsers to use HTTPS exclusively for the specified max-age period, preventing protocol downgrade attacks.
Example phrasing: HTTP Strict Transport Security (HSTS) is a browser directive that refuses all subsequent HTTP requests for the specified max-age duration.

Instruction: Add forward secrecy as a property of modern TLS configurations. It ensures session keys are not recoverable even if the server's private key is later compromised.
Example phrasing: Forward secrecy is achieved through ephemeral key exchange — each session generates a fresh key pair that is discarded after use.
Writing guidance translates the gap between the Extracted and Proposed graphs into a two-column table: what to write, and an example sentence to seed it. Each toAdd item names a concept from the framework that is absent or undercovered.
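The markdown export could be as simple as the sketch below; the column names mirror the table above, and the function itself is illustrative, not ContentGraph's implementation.

```typescript
// Sketch: export the two-column guidance table as a markdown table.
interface GuidanceRow { instruction: string; example: string }

function toMarkdown(rows: GuidanceRow[]): string {
  const header = "| Instruction | Example phrasing |\n| --- | --- |";
  const body = rows
    .map((r) => `| ${r.instruction} | ${r.example} |`)
    .join("\n");
  return `${header}\n${body}`;
}

const md = toMarkdown([
  {
    instruction: "Add a section explaining HSTS.",
    example: "HSTS is a browser directive that refuses subsequent HTTP requests.",
  },
]);
// md begins with the "| Instruction | Example phrasing |" header row
```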

What is ContentGraph?

ContentGraph is a concept and relationship analysis tool for explanatory content. It maps the semantic structure of a piece of writing as a graph of concepts and relationships, compares that structure against an ideal explanation framework for the identified topic, and produces prioritised editorial guidance for closing the gap. It is designed for content teams, SEO practitioners, and writers who need to understand how retrieval systems will process their content at the concept level.

Things you will need

  1. An Anthropic API key. Create one at console.anthropic.com. ContentGraph uses Claude Sonnet and costs approximately $0.10–$0.20 per analysis depending on content length.
  2. The content you want to analyze — either a URL or plain text. ContentGraph handles HTML parsing and sentence splitting automatically.
  3. A desktop browser. ContentGraph is not optimised for mobile.

What types of content work best?

ContentGraph is built for explanatory content — articles, guides, documentation, and technical pages where the goal is to explain a concept, process, or system clearly. The eight diagnostic questions (what it is, how it works, what it depends on, and so on) are well-suited to this content type. It is less reliable for narrative, opinion, or persuasive content where structural completeness is not the primary goal.

What do the integration states mean?

Integration states describe how well each concept is developed and connected within the content.

State · What it means

well_integrated: Present, defined, and meaningfully connected through explicit relationships.
weakly_integrated: Present but with few or thin connections — mentioned but not woven in.
underexplained: Appears in the content but lacks adequate definition or development.
naming_inconsistent: Referred to by multiple names without normalization — may not match retrieval queries.

What is the explanation framework?

The explanation framework is ContentGraph’s model of what concepts and relationships should be present for a complete explanation of the anchor topic. It is generated by a separate LLM call operating from the model’s training knowledge of the topic — not a curated ontology or domain expert. It is an informed editorial reference, not a specification. Treat it as a starting point and apply your own judgment about what your specific audience and purpose require.

Does ContentGraph rewrite the content for me?

No. ContentGraph identifies what is missing, thin, implied, or inconsistently named, and provides example sentences and editorial instructions to help address each issue. You decide what to change and how. The writing guidance exports as markdown so it can be taken into any editing workflow.

Are my content and API key kept private?

ContentGraph extracts and analyzes content in the browser before sending it to Anthropic. API requests go directly from the browser to the Anthropic API — there is no ContentGraph backend or proxy layer. Your API key is stored only in sessionStorage for the current browser session and is cleared when the tab closes.
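The browser-direct pattern described above can be sketched as follows. The model id is illustrative, and this is an assumed shape of the request, not ContentGraph's actual code; the `anthropic-dangerous-direct-browser-access` header is the Anthropic API's opt-in for browser-origin (CORS) calls.

```typescript
// Sketch of the direct browser-to-API pattern: the key lives only in
// sessionStorage and requests go straight to the Anthropic endpoint.
function buildRequest(apiKey: string, content: string) {
  return {
    method: "POST",
    headers: {
      "x-api-key": apiKey, // never sent to any intermediate server
      "anthropic-version": "2023-06-01",
      "anthropic-dangerous-direct-browser-access": "true", // CORS opt-in
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-sonnet-4-20250514", // illustrative model id
      max_tokens: 4096,
      messages: [{ role: "user", content }],
    }),
  };
}

// In the browser: key read from sessionStorage, cleared when the tab closes.
// const key = sessionStorage.getItem("anthropic_api_key") ?? "";
// await fetch("https://api.anthropic.com/v1/messages", buildRequest(key, text));
```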

How repeatable are the results?

Analysis is model-mediated, not rule-based. The same content may produce slightly different results across runs or after model updates. Expect directional consistency — the same anchor, similar integration state assignments, and comparable framework recommendations — but not exact reproducibility. Note the model version used for each run when comparing analyses over time.

Find the concept gaps that make good explanations hard to retrieve

Paste your content, verify the anchor, and see where your explanation framework breaks down.

Try ContentGraph