How BrandCited measures AI visibility. The approach, versioned.
BrandCited measures two things. First, how 9 AI engines cite, mention, and represent your brand. Second, whether your site makes it easy for those engines to do so. Every scan runs both layers.
The output is a 0-to-100 visibility score, a prioritized fix list, and per-engine citation data. Every number in your report traces back to a real observation from a real engine response or a real HTTP request against your site.
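Concretely, a scan report has roughly this shape. The sketch below is illustrative, not our production schema; field names and types are simplified.

```python
from dataclasses import dataclass, field

@dataclass
class EngineResult:
    """Citation data from one engine (illustrative fields)."""
    engine: str            # e.g. "ChatGPT"
    mentioned: bool        # brand appeared in the response
    cited: bool            # response linked to your site
    position: int | None   # rank of the mention among sources, if any
    sentiment: str         # "positive" | "neutral" | "negative"

@dataclass
class ScanReport:
    visibility_score: int                             # 0-100
    methodology_version: str                          # e.g. "2.2.0", stamped on every scan
    fixes: list[str] = field(default_factory=list)    # prioritized fix list
    engines: list[EngineResult] = field(default_factory=list)
```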
Two layers, run in parallel.
Layer 1
Citation testing
We send real user-intent queries to 9 AI engines and parse every response for brand mentions, citation type, position, sentiment, and competitor appearances.
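In code, that parsing step looks roughly like this. The sketch is illustrative, not our production classifier; the fields and the 80-character link window are simplified assumptions.

```python
import re
from dataclasses import dataclass

@dataclass
class Mention:
    name: str        # brand or competitor that appeared
    offset: int      # character position in the response
    kind: str        # "linked" if a URL sits nearby, else "unlinked"

def find_mentions(response: str, brands: list[str]) -> list[Mention]:
    """Locate each brand name and note whether a URL appears just after it."""
    mentions = []
    for name in brands:
        for m in re.finditer(re.escape(name), response, re.IGNORECASE):
            window = response[m.end():m.end() + 80]  # look just past the mention
            kind = "linked" if re.search(r"https?://", window) else "unlinked"
            mentions.append(Mention(name, m.start(), kind))
    return mentions

# Earlier mentions rank higher; competitors are found the same way.
hits = find_mentions("Acme (https://acme.com) is popular; Rival is newer.",
                     ["Acme", "Rival"])
```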
Layer 2
Site audit
We crawl your site the way AI bots do and score it against our AI-specific rubric covering crawler access, structured data, content, trust, technical foundation, and entity recognition.
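To make the rubric concrete: per-category scores roll up into one weighted 0-to-100 number, roughly as below. The six categories are real; the weights shown are placeholders, not our production values.

```python
# Hypothetical weights over the six audit categories named above.
WEIGHTS = {
    "crawler_access": 0.20,
    "structured_data": 0.20,
    "content": 0.20,
    "trust": 0.15,
    "technical_foundation": 0.15,
    "entity_recognition": 0.10,
}

def audit_score(category_scores: dict[str, float]) -> float:
    """Combine per-category 0-100 scores into one 0-100 audit score."""
    return sum(WEIGHTS[c] * category_scores[c] for c in WEIGHTS)

print(audit_score({c: 75.0 for c in WEIGHTS}))  # -> 75.0
```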
The 9 engines we test, chosen to cover the mainstream AI assistant and AI search surfaces of 2026:
ChatGPT (OpenAI)
Claude (Anthropic)
Gemini (Google)
Perplexity
Grok (xAI)
DeepSeek
Llama (Meta)
Google AI Overviews (Google)
Microsoft Copilot (Microsoft)
The methodology follows semantic versioning (MAJOR.MINOR.PATCH). Every scan record stores the version it ran under, and two scans are directly comparable only when both stamps match. When a breaking change ships, trend charts render a marker on the changeover date so methodology-driven score shifts are never mistaken for real movement on your site.
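The comparability rule is simple enough to state in code, assuming plain MAJOR.MINOR.PATCH stamps:

```python
def parse(stamp: str) -> tuple[int, int, int]:
    major, minor, patch = (int(p) for p in stamp.split("."))
    return major, minor, patch

def comparable(a: str, b: str) -> bool:
    """Two scans are directly comparable only when both stamps match."""
    return parse(a) == parse(b)

def needs_chart_marker(a: str, b: str) -> bool:
    """A breaking (major) change between two scans gets a trend-chart marker."""
    return parse(a)[0] != parse(b)[0]

assert comparable("2.2.0", "2.2.0")
assert not comparable("2.2.0", "2.1.0")
assert needs_chart_marker("1.4.2", "2.0.0")
```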
Current: v2.2.0
2026-04-18
Shallow-sitemap crawl replaces single-URL fetch. Fetches up to 20 representative pages, aggregates JSON-LD across all of them, and scores schemas against the page types where they semantically belong.
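Roughly, the new crawl does this (illustrative sketch; our production crawler uses smarter page selection than the first-20 sample shown, and a real HTML parser rather than a regex):

```python
import json, re, urllib.request
from xml.etree import ElementTree

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
LD = re.compile(r'<script[^>]+application/ld\+json[^>]*>(.*?)</script>',
                re.DOTALL | re.IGNORECASE)

def fetch(url: str) -> str:
    with urllib.request.urlopen(url, timeout=10) as r:
        return r.read().decode("utf-8", errors="replace")

def shallow_crawl(site: str, limit: int = 20) -> dict[str, list[str]]:
    """Sample up to `limit` sitemap URLs and aggregate JSON-LD @type per page."""
    tree = ElementTree.fromstring(fetch(site.rstrip("/") + "/sitemap.xml"))
    urls = [loc.text for loc in tree.findall(".//sm:loc", NS)][:limit]
    schemas: dict[str, list[str]] = {}
    for url in urls:
        for block in LD.findall(fetch(url)):
            try:
                data = json.loads(block)
            except json.JSONDecodeError:
                continue  # malformed JSON-LD is itself an audit finding
            items = data if isinstance(data, list) else [data]
            types = [i.get("@type", "?") for i in items if isinstance(i, dict)]
            schemas.setdefault(url, []).extend(types)
    return schemas  # scoring then checks each @type against its page type
```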
View changelog →

Why do other tools show different numbers?
Different platforms measure different things. BrandCited tests citation behavior across 9 AI engines plus our site audit. A tool that scores SEO alone, or that monitors only ChatGPT, will produce different numbers; our scores are not directly comparable to theirs.
Why did my score change between scans?
Three possibilities: your site changed, an AI engine's behavior drifted, or the methodology itself updated. Every scan is stamped with the methodology version it ran under, and version changes appear as markers on your trend charts.
Can the score be gamed?
The scoring combines site-level signals with real engine responses you don't control. Gaming one surface rarely moves the overall score; fixing real citation gaps does.
What happens when an engine's behavior drifts?
We monitor the engine fleet against a fixed canary query set and track response stability over time. Small drift is noted in the changelog; larger drift triggers a version update with clear customer communication.
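One way to picture the drift check: compare the domains each canary response cites against a stored baseline. The Jaccard metric and both thresholds below are illustrative, not our production values.

```python
def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical thresholds: below MINOR we log drift, below MAJOR we version.
MINOR, MAJOR = 0.9, 0.6

def classify_drift(baseline: dict[str, set[str]],
                   current: dict[str, set[str]]) -> str:
    """Compare cited domains per canary query; average the overlap."""
    scores = [jaccard(baseline[q], current.get(q, set())) for q in baseline]
    avg = sum(scores) / len(scores)
    if avg >= MINOR:
        return "stable"            # at most a changelog note
    return "minor drift" if avg >= MAJOR else "major drift (version bump)"
```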
How often does the methodology update?
Patches ship as needed, minor updates roughly monthly, and major updates at most quarterly, with advance notice for score-moving changes.
Ready to run your first audit?
Every scan is stamped with the current methodology version so you can compare results over time.
Start free scan