Map the concept structure your content is missing
Paste your article and ContentGraph maps every concept, relationship, and coverage gap against an ideal explanation framework — surfacing exactly where retrieval systems will fail to find your content, and what to do about it.
Client-side extraction. Direct browser-to-Anthropic API calls. No backend. No stored API keys.
Graph-native analysis
ContentGraph treats entities and their relationships — not prose — as the primary unit of content signal.
Fan-out aligned
The framework models the query sub-topics retrieval systems generate — concept by concept, not page by page.
Dual-graph output
Observed graph shows what exists. Framework graph shows what should exist. The gap between them is the diagnostic.
Editorial guidance
Four categories of actionable writing instructions — not a score, but a structured, prioritised to-do list.
Why ContentGraph exists
Modern retrieval systems do not read pages — they decompose queries into sub-queries and match each one against individual facets of a topic. A page that covers five of eight expected concepts answers five sub-queries and fails the rest, regardless of how well the covered sections are written.
The design is grounded in the same principle behind schema.org: entities and the relationships between them are the primary unit of content signal, not prose alone. Where schema.org achieves this through structured markup for crawlers, ContentGraph achieves it through natural language — analyzing the relational structure already present in the writing, surfacing where it is thin, missing, or inconsistent, and producing actionable guidance for closing those gaps.
How it works
ContentGraph runs in two sequential phases. Phase 1 maps what your content contains. Phase 2 generates what it should contain and tells you how to close the gap.
Paste your content
ContentGraph accepts a URL or raw text. It strips HTML, splits into numbered sentences, and builds an extraction-ready view of the content before analysis begins.
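A minimal sketch of this step, assuming the browser's own `DOMParser` for HTML stripping and a naive punctuation-based sentence split; ContentGraph's actual splitting rules may differ.

```typescript
// Sketch: strip markup and number sentences before analysis.
// The sentence regex is a simplification, not ContentGraph's exact behaviour.
function toNumberedSentences(input: string): { id: number; text: string }[] {
  // DOMParser runs entirely client-side, consistent with the no-backend design.
  const doc = new DOMParser().parseFromString(input, "text/html");
  const plain = doc.body.textContent ?? "";

  // Naive split on terminal punctuation followed by whitespace.
  return plain
    .split(/(?<=[.!?])\s+/)
    .map((s) => s.trim())
    .filter((s) => s.length > 0)
    .map((text, i) => ({ id: i + 1, text }));
}
```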
Phase 1: observed map
One LLM call identifies the anchor concept, extracts all observed concepts with integration states, maps explicit and implied relationships, and assesses coverage across eight diagnostic questions.
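For illustration only, a Phase 1 call made directly from the browser might look like the sketch below. The prompt wording, JSON handling, and model string are assumptions rather than ContentGraph's actual implementation; the `anthropic-dangerous-direct-browser-access` header is the one Anthropic requires for CORS-enabled browser calls.

```typescript
// Hypothetical Phase 1 request: one call that returns the observed map as JSON.
async function extractObservedMap(apiKey: string, sentences: string[]) {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-api-key": apiKey,                                 // never stored, only forwarded
      "anthropic-version": "2023-06-01",
      "anthropic-dangerous-direct-browser-access": "true", // required for direct browser calls
    },
    body: JSON.stringify({
      model: "claude-sonnet-4-20250514", // "Claude Sonnet" per the docs; exact model id is an assumption
      max_tokens: 4096,
      messages: [{
        role: "user",
        content:
          "Identify the anchor concept, all observed concepts with integration states, " +
          "explicit and implied relationships as SVO triples, and coverage of the eight " +
          "diagnostic questions. Respond with JSON only.\n\n" +
          sentences.map((s, i) => `${i + 1}. ${s}`).join("\n"),
      }],
    }),
  });
  if (!res.ok) throw new Error(`Anthropic API error: ${res.status}`);
  const data = await res.json();
  // Assumes the model returned a single JSON text block.
  return JSON.parse(data.content[0].text);
}
```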
Phase 2: framework and guidance
Two LLM calls generate the explanation framework — an ideal coverage model for the anchor topic — and translate the gap into four categories of writing guidance: concepts to add, concepts to clarify, relationships to make explicit, and sentence-level directives.
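One plausible TypeScript shape for the guidance output, using the four category names above; the fields inside each entry are assumptions, not ContentGraph's actual schema.

```typescript
// Hypothetical shape of the Phase 2 writing guidance.
interface WritingGuidance {
  toAdd: { concept: string; priority: "essential" | "important" | "useful"; note: string }[];
  toClarify: { concept: string; issue: "underexplained" | "naming_inconsistent"; note: string }[];
  toMakeExplicit: { subject: string; verb: string; object: string; note: string }[];
  sentenceGuidance: { concept: string; directive: string }[];
}
```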
Review both graphs and act
The observed and framework graphs render side-by-side. The writing guidance panel exports as markdown. Verify the anchor first, then work down the prioritised guidance list.
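The markdown export could be as simple as the sketch below, reusing the hypothetical `WritingGuidance` shape above; the actual export format may differ.

```typescript
// Hypothetical markdown serialisation of the guidance panel.
function guidanceToMarkdown(g: WritingGuidance): string {
  const lines: string[] = ["## Writing guidance", "", "### Concepts to add"];
  for (const item of g.toAdd) lines.push(`- **${item.concept}** (${item.priority}): ${item.note}`);
  lines.push("", "### Relationships to make explicit");
  for (const r of g.toMakeExplicit) lines.push(`- ${r.subject} → ${r.verb} → ${r.object}: ${r.note}`);
  return lines.join("\n");
}

// Standard browser download via a Blob URL.
function downloadMarkdown(md: string, filename = "writing-guidance.md"): void {
  const url = URL.createObjectURL(new Blob([md], { type: "text/markdown" }));
  const a = Object.assign(document.createElement("a"), { href: url, download: filename });
  a.click();
  URL.revokeObjectURL(url);
}
```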
What ContentGraph evaluates
Phase 1 produces a structured reading of the content across five analytical dimensions. Phase 2 compares that reading against an ideal framework and produces targeted guidance.
Anchor concept
The primary subject of the content. All other analysis is relative to the anchor — verify it before acting on any other output.
Integration states
Each concept is assigned well_integrated, weakly_integrated, underexplained, or naming_inconsistent — a direct signal of retrieval reliability per concept.
Relationship directionality
Relationships are extracted as subject-verb-object triples, classified as explicit or implied, and rendered with directional arrows. Implied relationships are the primary driver of the toMakeExplicit guidance category.
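As a sketch, an extracted relationship might be represented like this; the field names are illustrative, while the SVO structure and the explicit/implied split come from the description above.

```typescript
// Hypothetical representation of one extracted relationship.
interface Relationship {
  subject: string;
  verb: string;
  object: string;
  kind: "explicit" | "implied"; // implied triples feed the toMakeExplicit category
}

// Directional rendering, e.g. "retrieval system → decomposes → query".
const renderTriple = (r: Relationship): string => `${r.subject} → ${r.verb} → ${r.object}`;
```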
Question coverage
Coverage is assessed against eight diagnostic questions — what it is, how it works, what it depends on, what it produces, when it applies, what contrasts with it, what goes wrong, and what evidence supports it.
Framework gap
Framework concepts are assigned essential, important, or useful priority, and coverage status of yes, partial, or no against the observed content.
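A small sketch of how the framework gap might be represented and turned into an ordered to-add list; the `coverageGap` helper is hypothetical.

```typescript
// Hypothetical framework concept with the priority and coverage values described above.
interface FrameworkConcept {
  name: string;
  priority: "essential" | "important" | "useful";
  coverage: "yes" | "partial" | "no";
}

// Anything not fully covered is a candidate for toAdd, essential concepts first.
function coverageGap(framework: FrameworkConcept[]): FrameworkConcept[] {
  const rank = { essential: 0, important: 1, useful: 2 } as const;
  return framework
    .filter((c) => c.coverage !== "yes")
    .sort((a, b) => rank[a.priority] - rank[b.priority]);
}
```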
Explanatory roles
Each concept is labelled with its explanatory function — mechanism, prerequisite, outcome, context, component, or contrast — so you understand why a concept appears in the framework.
What you get back
ContentGraph streams results as events complete. The observed graph renders after Phase 1. The framework and writing guidance render as Phase 2 finishes — no waiting for the full run.
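A sketch of how a client could render results as they arrive; the event names and render functions are illustrative, not a documented API.

```typescript
// Hypothetical event types for the streamed analysis.
type AnalysisEvent =
  | { type: "observedGraph"; payload: unknown }   // ready after Phase 1
  | { type: "frameworkGraph"; payload: unknown }  // arrives during Phase 2
  | { type: "writingGuidance"; payload: unknown };

// Placeholder render hooks; real implementations would update the UI.
declare function renderObservedGraph(p: unknown): void;
declare function renderFrameworkGraph(p: unknown): void;
declare function renderGuidancePanel(p: unknown): void;

async function renderAsAvailable(events: AsyncIterable<AnalysisEvent>): Promise<void> {
  for await (const event of events) {
    switch (event.type) {
      case "observedGraph":
        renderObservedGraph(event.payload); // no waiting for Phase 2
        break;
      case "frameworkGraph":
        renderFrameworkGraph(event.payload);
        break;
      case "writingGuidance":
        renderGuidancePanel(event.payload);
        break;
    }
  }
}
```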
| Layer | What it contains |
|---|---|
| Observed graph | Interactive force-directed graph of all concepts in the content, colored by integration state. Hover a node to see its role, naming variants, mention count, and extracted SVO triples. |
| Framework graph | The expected fan-out space for the anchor topic. Amber and violet nodes mark concepts absent from or thin in the current content. Compare side-by-side with the observed graph to see the gap visually. |
| Writing guidance | Four-category editorial table: toAdd (missing or partial concepts), toClarify (underexplained or inconsistently named), toMakeExplicit (implied relationships), and sentenceGuidance (concept-anchored directives). Exports as markdown. |
| Content findings | Anchor concept, inferred audience and goal, question coverage results across all eight dimensions, and SVO triples for each extracted relationship. |
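Put together, a full result object might look roughly like the sketch below, reusing the hypothetical shapes above; every field name is illustrative.

```typescript
// Hypothetical observed-graph node, combining the role, state, naming variants,
// and mention count described in the table.
interface ObservedConcept {
  name: string;
  namingVariants: string[];
  mentions: number;
  role: "mechanism" | "prerequisite" | "outcome" | "context" | "component" | "contrast";
  state: "well_integrated" | "weakly_integrated" | "underexplained" | "naming_inconsistent";
}

// Hypothetical top-level result tying the four layers together.
interface AnalysisResult {
  observedGraph: { concepts: ObservedConcept[]; relationships: Relationship[] };
  frameworkGraph: { concepts: FrameworkConcept[] };
  writingGuidance: WritingGuidance;
  findings: {
    anchorConcept: string;
    audience: string;
    goal: string;
    questionCoverage: Record<string, "yes" | "partial" | "no">; // eight diagnostic questions
  };
}
```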
Frequently asked questions
What is ContentGraph?
ContentGraph is a client-side analysis tool that maps the concepts, relationships, and coverage gaps in your content against an ideal explanation framework for its anchor topic, then translates the gap into prioritised writing guidance.
Things you will need
- An Anthropic API key. Create one at console.anthropic.com. ContentGraph uses Claude Sonnet; a single analysis costs approximately $0.10–$0.20 in API usage, depending on content length.
- The content you want to analyze — either a URL or plain text. ContentGraph handles HTML parsing and sentence splitting automatically.
- A desktop browser. ContentGraph is not optimised for mobile.
What types of content work best?
What do the integration states mean?
Integration states describe how well each concept is developed and connected within the content.
| State | What it means |
|---|---|
| well_integrated | Present, defined, and meaningfully connected through explicit relationships. |
| weakly_integrated | Present but with few or thin connections — mentioned but not woven in. |
| underexplained | Appears in the content but lacks adequate definition or development. |
| naming_inconsistent | Referred to by multiple names without normalization — may not match retrieval queries. |
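As a small illustration, assuming the hypothetical `ObservedConcept` shape sketched earlier, everything outside well_integrated is worth triaging:

```typescript
// Concepts in the three weaker states are the ones retrieval is most likely to miss.
const needsAttention = (concepts: ObservedConcept[]): ObservedConcept[] =>
  concepts.filter((c) => c.state !== "well_integrated");
```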
What is the explanation framework?
The explanation framework is the ideal coverage model generated in Phase 2 for the anchor topic: the concepts a complete explanation would include, each with an explanatory role and an essential, important, or useful priority, against which the observed content is marked as covered yes, partial, or no.
Does ContentGraph rewrite the content for me?
Is my content and API key kept private?
Analysis runs entirely client-side: your content and API key go directly from your browser to the Anthropic API, with no backend in between and no stored API keys.
How repeatable are the results?
Find the concept gaps that make good explanations hard to retrieve
Paste your content, verify the anchor, and see where your explanation framework breaks down.
Try ContentGraph