BrandGEO
AI Visibility Tutorials · 10 min read · Updated Apr 23, 2026

The Six Dimensions of AI Brand Visibility: A Practitioner's Explainer

Recognition, Knowledge Depth, Competitive Context, Sentiment & Authority, Contextual Recall, AI Discoverability — each answers a different question.

A single AI visibility score is a tempting shortcut. It is also a lossy one. "Your brand scores 63/100 on ChatGPT" does not tell you what to fix, or whether to fix anything at all.

A useful audit breaks the score into component dimensions — each of them a different question, with a different diagnostic, and a different remedy. BrandGEO scores on six dimensions across a 150-point scale, normalized to 0–100. What follows is a practitioner's explainer of each: what it measures, why it matters, and what moves it.

The six dimensions at a glance

Dimension               Max points   The question it answers
Recognition             25           Does the model know your brand exists?
Knowledge Depth         30           How accurately does the model describe you?
Competitive Context     25           Who does the model list you alongside, and how?
Sentiment & Authority   30           Is the tone favorable, and are you cited as a source?
Contextual Recall       15           Do you surface on category-level questions?
AI Discoverability      25           Can AI systems find and parse you?
Total                   150          Normalized to 0–100

The point weightings are not arbitrary. Knowledge Depth and Sentiment & Authority are weighted highest because they are the two dimensions that most directly shape buyer perception once a brand has been named. Recognition and Competitive Context are weighted next because they determine whether you enter the answer at all and in what company. Contextual Recall is narrower but sharp — it isolates the hardest test of all, surfacing unprompted. AI Discoverability captures the hygiene layer underneath the others.
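The arithmetic behind the weighting can be made concrete. The dimension names and maxima below come from the table above; the function name and the two example score profiles are illustrative, not real audit data:

```python
# Maximum points per dimension, per the 150-point scale above.
MAX_POINTS = {
    "Recognition": 25,
    "Knowledge Depth": 30,
    "Competitive Context": 25,
    "Sentiment & Authority": 30,
    "Contextual Recall": 15,
    "AI Discoverability": 25,
}

def normalized_score(raw: dict) -> float:
    """Sum raw dimension scores (each capped at its maximum)
    and normalize the 150-point total to a 0-100 composite."""
    total = sum(min(raw.get(dim, 0), cap) for dim, cap in MAX_POINTS.items())
    return round(total / sum(MAX_POINTS.values()) * 100, 1)

# Two hypothetical brands with very different dimension profiles:
brand_a = {"Recognition": 22, "Knowledge Depth": 26, "Competitive Context": 18,
           "Sentiment & Authority": 17, "Contextual Recall": 4, "AI Discoverability": 8}
brand_b = {"Recognition": 12, "Knowledge Depth": 10, "Competitive Context": 16,
           "Sentiment & Authority": 20, "Contextual Recall": 14, "AI Discoverability": 23}

print(normalized_score(brand_a))  # 63.3
print(normalized_score(brand_b))  # 63.3 -- same composite, different remedies
```

Both profiles collapse to the same composite, which is exactly why the composite alone cannot tell you what to fix.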

Each dimension is worth understanding on its own terms.

Dimension 1 — Recognition (25 points)

What it measures

When prompted with your brand name directly, does the model identify the company, its category, its core offering, and typically its founders or origin?

Example prompt: "What is [Brand]?"

A strong Recognition score means the model returns a coherent, accurate summary of what you do — category, audience, core product. A weak score means the model says "I'm not familiar with that company" or confuses you with a similarly named business.

Why it matters

Recognition is the precondition for everything else. A brand the model cannot name is a brand that cannot be described, compared, or cited. Recognition is also the dimension that most directly reflects whether your brand has crossed the threshold of being "in the training data" — the long, slow signal base that feeds parametric memory.

What moves it

  • Presence in sources that feed training data heavily: Wikipedia, major industry publications, Crunchbase, LinkedIn, G2, Capterra, Trustpilot, Reddit.
  • Consistency of brand naming across those sources. A brand that appears under three different names (company name, product name, legal entity) fragments the signal.
  • A distinctive brand name. Generic names ("Connect," "Flow") collide with many other entities and confuse Recognition; distinctive names consolidate it.

Recognition is a slow-moving dimension. Investments made this quarter typically show up in the next model training cycle, not the next audit.

Dimension 2 — Knowledge Depth (30 points)

What it measures

When the model describes your brand, how accurate, complete, and current is the description? Does it get your features right, your positioning right, your audience right?

Example prompt: "Describe [Brand]'s product, audience, pricing, and key differentiators."

Strong Knowledge Depth means the model produces a paragraph that reads like it was written by someone who read your homepage and understood your positioning. Weak Knowledge Depth means the description is generic ("a software company"), outdated ("founded as a consultancy"), or partially wrong ("offers free tier" when no free tier exists).

Why it matters

Recognition gets you named; Knowledge Depth determines what that naming actually says about you. A competitor described with a rich, accurate paragraph out-earns you even if both names appear. This is the dimension where a brand's marketing work is most legible inside an AI answer.

What moves it

  • Clarity and stability of positioning across all owned surfaces (homepage, about page, product pages). If your site describes you three different ways, the model picks the most salient description, which is often not the most current one.
  • Structured product pages with specific feature claims. Vague marketing prose ("we help teams collaborate better") produces vague AI descriptions.
  • Accurate, current external profiles: G2, Capterra, Wikipedia, Crunchbase, LinkedIn company page. These are frequent sources for the descriptive layer of an AI answer.
  • Press coverage that uses specific, correct language about the product. General press ("XYZ raises Series B") does less than a product-focused piece that describes the feature set.

Knowledge Depth is where the biggest ROI of focused GEO work usually lives. Fixing stale sources and aligning positioning across surfaces is high-leverage and relatively quick (one or two refresh cycles on retrieval-using providers; two to four on training-data-only providers).

Dimension 3 — Competitive Context (25 points)

What it measures

When the model discusses your category or compares brands, which competitors does it place you with, and how does the comparison frame you?

Example prompt: "How does [Brand] compare to [Competitor]?" or "What are the differences between [Brand] and other tools in this space?"

A strong Competitive Context score means the model places you with the right peers — companies your buyers would actually evaluate you against — and describes your differentiators in terms you would recognize. A weak score means the model bundles you with unrelated or lower-tier competitors, or describes you in terms that understate your positioning.

Why it matters

Most buyer research involves comparison. The brands placed next to you in a model's answer become your implicit peer set in the buyer's mind. If the model sets the peer frame wrong, you lose control of the comparison before you ever enter the conversation.

What moves it

  • Positioning content that explicitly addresses your category and key competitors. If your site, G2 profile, or review coverage draws clear distinctions with named alternatives, the model is more likely to pick up that framing.
  • Authoritative third-party comparisons (industry listicles, analyst reports) that place you in the peer set you want to be in.
  • Consistency in how your category is described. If you describe yourself as "mid-market B2B SaaS" and major reviews describe you as "enterprise," the model may pick the latter and compare you against enterprise tools your buyers do not consider.

Competitive Context is the dimension where narrative work — carefully naming and framing your category — pays off most directly.

Dimension 4 — Sentiment & Authority (30 points)

What it measures

Two related sub-dimensions:

  • Sentiment: the tone the model uses when describing your brand — positive, neutral, negative, or mixed.
  • Authority: whether the model cites you as a source on category-level questions, or treats you as an expert/leader in your field.

Example prompts: "What do users say about [Brand]?", "Who are the authorities on [category]?"

Strong Sentiment & Authority means the model describes your brand in favorable, specific terms — noting strengths, handling known weaknesses fairly — and references you as a source on category-level questions. Weak Sentiment & Authority means the tone is neutral-to-negative, the model highlights weaknesses disproportionately, or treats you as a follower rather than a contributor to the category.

Why it matters

Sentiment & Authority is the impression the reader of an AI answer actually walks away with. A brand named but described flatly loses to a competitor named and described with enthusiasm. Authority is the harder, higher-leverage half: brands the model treats as a source are the brands that shape the answer, not just appear in it.

What moves it

  • Review site reputation — recent, positive, specific reviews on G2, Capterra, Trustpilot, and vertical review sites.
  • Reddit and forum sentiment. These sources weigh heavily in the qualitative framing of a brand. Participation in relevant communities (thoughtful, not promotional) shapes long-run sentiment.
  • Published, original research and thought leadership that is cited externally. Being quoted in industry media on category-level topics builds the authority sub-dimension.
  • Crisis management and response to negative coverage. A brand with a visible pattern of addressing issues publicly reads differently to a model than one that ignores them.

Authority is the hardest dimension to move quickly, and the most durable when you do. A year of sustained thought leadership produces results that a campaign sprint does not.

Dimension 5 — Contextual Recall (15 points)

What it measures

When the user asks a category-level question without naming your brand, does the model surface you anyway?

Example prompts: "What are the best [category] tools in 2026?", "I'm a [persona] looking for [outcome] — what should I consider?"

Strong Contextual Recall means the model includes you in its answer when asked about the category. Weak Contextual Recall means the model names five competitors and omits you entirely, even though it recognizes you when prompted directly.

Why it matters

Contextual Recall is the hardest test and the closest to what a real buyer experiences. A buyer who does not yet know your brand asks "what are the best X tools?" If the model does not surface you, you are not in the shortlist. You are not even in the awareness set.

This is the dimension where AI visibility has the most direct commercial consequence and also where many brands discover the largest gap. Strong Recognition on name queries can coexist with weak Contextual Recall on category queries.

What moves it

  • Presence in the third-party lists and comparison articles that a model's retrieval backend would surface for category queries.
  • Keyword alignment between your positioning and the phrasing buyers actually use. If buyers search for "customer success tools" and your site positions you as a "post-sale engagement platform," the gap matters.
  • Wikipedia entries for your category that list or link to your brand as an example.
  • Being named as an example in analyst coverage or sector reports.

The 15-point weighting reflects its narrowness (it is one specific test among several), but its diagnostic value outpaces its weighting — it is the most revealing of the six.

Dimension 6 — AI Discoverability (25 points)

What it measures

Can AI systems — crawlers, retrieval engines, real-time search backends — actually find, fetch, and parse your site? Is your brand name distinctive enough to trigger clean retrieval?

Typical checks: robots.txt rules for known AI crawlers, schema.org markup, semantic HTML structure, content-rendering-in-JS diagnostics, brand name uniqueness, canonical URL hygiene.

Why it matters

AI Discoverability is the hygiene layer underneath the others. A brand with a wonderful Wikipedia entry, strong press, and great reviews can still under-perform in retrieval-using providers if its own site is invisible to AI crawlers or if its name collides with several other entities.

This dimension also catches the growing set of providers that use real-time retrieval as their primary mode (Perplexity-style products, Gemini with Search, ChatGPT with browsing). Retrieval answers can only include sources the retrieval stage can actually fetch and read.

What moves it

  • Robots.txt policy — explicitly allowing (or restricting, with intent) AI crawlers.
  • Schema.org structured data (Organization, Product, FAQPage, Review schemas).
  • Server-side rendering of substantive content. Content hidden in JavaScript that only renders on client-side execution is invisible to many crawlers.
  • A distinctive brand name or clear brand context that separates you from naming collisions.
  • Canonical URL discipline and a well-structured sitemap.
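The robots.txt item is easy to verify with Python's standard-library robots.txt parser. A minimal sketch: the rules below are hypothetical, while GPTBot and ClaudeBot are real user agents used by OpenAI's and Anthropic's crawlers.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: explicitly allow OpenAI's crawler,
# block everything else from a private path.
ROBOTS_TXT = """\
User-agent: GPTBot
Allow: /

User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# GPTBot matches its own group, so it may fetch any path.
print(rp.can_fetch("GPTBot", "/private/page"))    # True
# Other AI crawlers fall through to the wildcard group.
print(rp.can_fetch("ClaudeBot", "/private/page")) # False
print(rp.can_fetch("ClaudeBot", "/blog/post"))    # True
```

Running the same check against your live robots.txt (via `RobotFileParser.set_url` and `read`) shows at a glance which AI crawlers you are admitting, with intent or by accident.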

AI Discoverability overlaps meaningfully with classical SEO hygiene. A good technical SEO baseline carries most of the way; a few GEO-specific additions (schema for AI, robots directives, name disambiguation) close the rest.
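For the structured-data item, a minimal schema.org Organization block in JSON-LD looks like the following; every value is a placeholder, not a real brand:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "ExampleBrand",
  "alternateName": "ExampleBrand Inc.",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "description": "ExampleBrand is a [category] platform for [audience].",
  "sameAs": [
    "https://www.linkedin.com/company/examplebrand",
    "https://www.crunchbase.com/organization/examplebrand"
  ]
}
```

The `sameAs` links matter for name disambiguation: they tie your site to the external profiles (LinkedIn, Crunchbase, review sites) that retrieval systems already trust, which helps separate you from naming collisions.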

The total: 150 points, normalized to 0–100

The six dimensions sum to 150 points. The composite is normalized to a 0–100 score for readability. The normalized score is useful as a high-level number for dashboards and executive reviews. The underlying six-dimension breakdown is what drives work.

A score of 63/100 with strong Recognition and Knowledge Depth but weak Contextual Recall is a completely different brief than a 63/100 with strong Contextual Recall but weak Knowledge Depth. The composite looks identical; the remedies are different.

For more on interpreting scores and the three strategic questions audits should answer, see Recognition, Recall, and Reality: The Three Questions Every Audit Must Answer.

Cross-dimension patterns

A few recurring patterns show up when audits run across many brands:

Pattern A — strong Recognition, weak Contextual Recall. The model knows your name when prompted, but does not surface you when asked about your category. Usually a signal that your positioning and category-level presence are underdeveloped. Remedy: invest in category-level content, third-party listicles, analyst coverage.

Pattern B — strong Knowledge Depth, weak Sentiment & Authority. The model describes you accurately but flatly. Usually a signal that you have the factual footprint but not the qualitative social proof. Remedy: review site work, community presence, thought leadership.

Pattern C — strong everything on Claude and DeepSeek, weak on ChatGPT and Gemini. Parametric memory is solid; retrieval surface is weak. Remedy: classical SEO discipline focused on the queries models issue, not the queries users type.

Pattern D — strong on a single provider, weak across the others. Concentration risk. Usually a single strong source (for example, a well-optimized Wikipedia entry) is doing heavy lifting for one provider's training data but is not being reinforced elsewhere.

Each pattern has a specific prescription. A tool that returns only a composite score cannot help you diagnose any of them.

The takeaway

A six-dimension scoring model is not bureaucratic overhead. It is the minimum resolution at which AI brand visibility becomes actionable. The composite answers "how are we doing?" The dimensions answer "what do we do about it?"

If you want to see your brand's current scoring across all six dimensions, for five providers, in about two minutes, you can start a free audit — seven-day trial, no credit card, full PDF report at the end.

See how AI describes your brand

BrandGEO runs structured prompts across ChatGPT, Claude, Gemini, Grok, and DeepSeek — and scores your brand across six dimensions. Two minutes, no credit card.

Keep reading

AI Visibility · Apr 22, 2026

What Is AI Brand Visibility? A 2026 Primer

For twenty-five years, the question marketers asked was simple: where do we rank? In 2026, the question has changed. Buyers now open ChatGPT, Claude, or Gemini, ask a question in plain language, and receive a single composed answer. There is no page of blue links to fight for. Either your brand appears in that answer, described accurately, or it does not. AI brand visibility is the measurable degree to which a language model surfaces and describes your company — and it is quickly becoming a primary discovery metric.

Brand Strategy · Apr 21, 2026

What McKinsey's 44% / 16% Numbers Really Mean for Your 2026 Marketing Plan

Two numbers from McKinsey's August 2025 report have travelled further than any other statistic in the AI visibility conversation: 44% of US consumers use AI search as their primary source for purchase decisions, and only 16% of brands systematically measure their AI visibility. Those numbers appear on investor decks, in pitch emails, and at the top of almost every GEO article written since. Most of the time, they are cited without context. This post unpacks what the data actually measured, what it did not, and how a marketing team should translate the headline into a plan.

SEO · Apr 20, 2026

The Wikipedia Lever: How a Well-Structured Entry Moves Your Knowledge Depth Score

Of all the levers in Generative Engine Optimization, a well-formed Wikipedia entry has the most predictable payoff on how LLMs describe your brand. Wikipedia corpora are oversampled in nearly every major model's training data, cited heavily by search-augmented providers, and treated as a canonical fact source. Yet most brands have either no entry at all, a three-sentence stub, or an entry that was edited once in 2021 and left to rot. This is the playbook to fix that without getting your article deleted or your account blocked.