
[AI Visibility](https://brandgeo.co/blog/category/ai-visibility) · [Brand Strategy](https://brandgeo.co/blog/category/brand-strategy) · March 8, 2026 · 9 min read · Updated Apr 23, 2026

 Share of Model: What Share of Voice Becomes in the LLM Era
============================================================

 SOV used to mean mentions in media. Today it means presence in AI answers. Here's how to measure and grow it.

Share of Voice has been a marketing fixture for three decades. It measured your brand's share of media mentions, press coverage, or paid impressions against competitors. It was crude, it was useful, and it gave boards a number to argue over.

The underlying channel has shifted. Media coverage and paid impressions are no longer where most buyers first hear your brand named. The channel that matters most today is the composed answer of a language model, and the right analog for SOV in that channel has a different name: Share of Model.

This post walks through what Share of Model is, how to measure it credibly, and the tactical consequences of putting it on your dashboard.

What Share of Model actually measures
-------------------------------------

**Share of Model is the percentage of category-relevant AI answers in which your brand is named, relative to the set of all brands named across the same answers.**

That definition has three important clauses.

**"Category-relevant AI answers."** Share of Model is not measured on direct name queries (*"What is Brand X?"*). Those are Recognition. Share of Model is measured on category queries (*"What are the best tools for X?"* or *"Who should I consider if I'm a \[persona\] looking for \[outcome\]?"*). The point of the metric is to capture how often, when the question is about your category, your brand shows up at all.

**"Brand is named."** The unit is mention presence, not citation count. A single mention of your brand in an answer counts once; mentioning your brand twice in the same answer does not double-count. This keeps the metric comparable across answers of different lengths.

**"Relative to the set of all brands named."** Share of Model is inherently relative. Your brand's absolute appearance rate matters, but the metric frames it against your competitive set. If three brands each appear in 70% of answers and you appear in 35%, the 35% is defensible only in a category with four players; in a category with ten, it is much weaker.

Why this metric and not just "mention count"
--------------------------------------------

A naive approach to AI visibility is to count how often each brand is mentioned across a prompt set. That count is informative, but the Share of Model framing adds two things that a raw count does not.

**It normalizes across categories.** A niche B2B software category may produce answers with three brands named; a broader consumer category may produce answers with eight. Raw counts are not comparable across categories. Share (the percentage of answers in which a brand appears, relative to all named brands) is.

**It forces a defined competitive set.** To calculate Share of Model, you have to decide who "all brands" means. That is useful discipline. It prevents the illusion of progress ("we're mentioned 40% of the time!") when the model is naming you alongside a dozen irrelevant competitors.

Raw mention counts are fine for month-to-month internal trending. Share of Model is the metric that belongs on a board deck.

How to calculate it
-------------------

The calculation is straightforward once the sampling is set up.

1. **Define the prompt set.** A stable set of 20–50 category-level prompts, covering the ways a buyer would actually ask about your category. BrandGEO uses 30 structured checks across six categories (direct brand, product discovery, competitor comparison, industry expertise, geographic relevance, recommendation scenarios). Direct brand prompts contribute to Recognition, not Share of Model. The category prompts are where Share of Model lives.
2. **Define the competitive set.** List the brands that genuinely compete for the answer. Between 3 and 20 is a typical range. Include the obvious direct competitors and one or two adjacent or aspirational peers. Exclude distant adjacencies that clutter the count.
3. **Run the prompts across providers.** Three to five runs per prompt per provider per day, across OpenAI, Anthropic, Google, xAI, and DeepSeek. Fewer runs produce unstable numbers; more produce marginal improvement at higher cost.
4. **Count brand presence per answer.** For each answer, record which brands from the competitive set were named (at least once). Presence is binary per answer.
5. **Compute the share.** For each brand, Share of Model = (number of answers brand was named in) / (total answers in the sample).

Optionally, compute **Share of Voice Weighted**, where each mention is weighted by the sentiment of the framing (positive, neutral, negative). This adds a second dimension — share and tone — that a raw presence count does not capture.
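The calculation above can be sketched in a few lines. This is an illustrative snippet, not BrandGEO's implementation; the brand names, sentiment weights, and data shapes are hypothetical.

```python
from collections import defaultdict

def share_of_model(answers, competitive_set):
    """Share of Model per brand: (answers naming the brand) / (total answers)."""
    total = len(answers)
    counts = defaultdict(int)
    for named in answers:
        # Presence is binary per answer; intersect with the locked competitive set
        for brand in named & competitive_set:
            counts[brand] += 1
    return {brand: counts[brand] / total for brand in competitive_set}

def weighted_share(answer_records, competitive_set,
                   weights={"positive": 1.0, "neutral": 0.5, "negative": 0.0}):
    """Sentiment-weighted variant: each presence is scaled by framing tone.

    The weight values here are an illustrative choice, not a standard.
    """
    total = len(answer_records)
    scores = defaultdict(float)
    for record in answer_records:  # record maps brand -> sentiment label
        for brand, sentiment in record.items():
            if brand in competitive_set:
                scores[brand] += weights[sentiment]
    return {brand: scores[brand] / total for brand in competitive_set}

competitive_set = {"Acme", "Globex", "Initech"}  # hypothetical brands
answers = [
    {"Acme", "Globex"},
    {"Globex"},
    {"Acme", "Globex", "Initech"},
    {"Globex", "Initech"},
]
shares = share_of_model(answers, competitive_set)
# Acme 2/4 = 0.50, Globex 4/4 = 1.00, Initech 2/4 = 0.50
```

Note that per-brand shares do not sum to 100%: several brands can appear in the same answer, so each brand's share is its own appearance rate against the common denominator of total answers.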

The two most common mistakes
----------------------------

Two errors reliably undermine early Share of Model measurement.

**Mistake one: including only direct-query prompts.** If your prompt set is dominated by *"What is Brand X?"* and *"Describe Brand X,"* the model is named 100% of the time by definition. You have measured Recognition under another name. Real Share of Model lives in category prompts where your brand has to compete for a slot.

**Mistake two: letting the competitive set drift.** If the brands in the competitive set change between measurements, the denominator changes, and the percentages are not comparable across time. Lock the competitive set, version-control it, and revisit it quarterly. When you change it, note the change in the dashboard.
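One lightweight way to lock and version the competitive set is a dated, immutable record that every stored measurement points back to. The structure below is a hypothetical sketch, not a BrandGEO format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CompetitiveSet:
    version: str         # bumped on any membership change
    effective_date: str  # ISO date the set took effect
    brands: frozenset    # locked membership for this version
    change_note: str     # why the set changed, surfaced on the dashboard

CURRENT_SET = CompetitiveSet(
    version="2026-Q2",
    effective_date="2026-04-01",
    brands=frozenset({"Acme", "Globex", "Initech"}),  # hypothetical brands
    change_note="Added Initech after its category launch.",
)

# Each stored measurement records the set version it was computed against,
# so percentages are only ever compared within a single denominator.
measurement = {"set_version": CURRENT_SET.version,
               "share_of_model": {"Acme": 0.35}}
```

Because the record is frozen, changing membership forces a new version rather than a silent mutation, which is exactly the discipline the quarterly review needs.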

What good looks like
--------------------

There is no universal "good" number for Share of Model, because categories vary in how many brands a model typically names per answer. A few reference points:

- In a tightly consolidated category (3–5 dominant brands), expect the top brands to sit at 60–85% Share of Model. The leader is often near 80%.
- In a broad, fragmented category (10+ plausible brands), the distribution is flatter. Leaders may sit at 40–60%; mid-tier brands at 15–30%.
- For an emerging or niche category, models may struggle to name more than two or three brands consistently. The top brands achieve high Share of Model (50–70%) but the long tail is near zero.

A useful internal read is not the absolute number but the **distance from the leader** and the **trajectory over time**. If you are consistently at 15% in a category where the leader is at 80%, that is a different problem than being at 15% in a category where no one is above 25%.
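The distance-from-the-leader read can be computed directly from the share figures. The numbers below are hypothetical, mirroring the two scenarios just described.

```python
def gap_to_leader(shares, brand):
    """Percentage-point distance between a brand and the category leader."""
    return max(shares.values()) - shares[brand]

# The same 15% brand in two very different competitive pictures
consolidated = {"Leader": 0.80, "You": 0.15, "Challenger": 0.55}
fragmented = {"A": 0.25, "You": 0.15, "B": 0.20, "C": 0.10}

gap_consolidated = gap_to_leader(consolidated, "You")  # 0.65: a visibility gap
gap_fragmented = gap_to_leader(fragmented, "You")      # 0.10: a flat, contested field
```

Tracking this gap over time, rather than the raw share, separates "the category leader is pulling away" from "everyone is losing share to a new entrant."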

The diagnostic value of the gap
-------------------------------

Share of Model is most useful not as a scoreboard but as a diagnostic.

**If your Share of Model is very high but your revenue is not matching,** the gap is somewhere other than visibility. You are being named, but something in the downstream funnel — positioning, pricing, product-market fit — is preventing the mention from converting.

**If your Share of Model is very low and the leader's is very high,** the question is whether the leader's advantage is earned through signals you can replicate or whether it is structural (network effects, install base). Share of Model cannot answer that question alone; it just surfaces it for investigation.

**If your Share of Model is moderate and volatile week-to-week,** you are likely near the boundary of "model considers your brand when composing an answer" versus "model omits your brand." Work on the signals that shift you from uncertain inclusion to reliable inclusion — specifically, category-level Wikipedia coverage, third-party listicles, and analyst mentions.

**If your Share of Model is stable but drops sharply on a specific provider,** that is often a model event (new version, training cutoff shift) or a retrieval event. Cross-reference the drop date with known model releases before concluding your brand lost ground with the category.

How to grow it
--------------

Six specific moves reliably raise Share of Model over 2–4 quarters, in rough order of impact.

### 1. Get listed in category-defining third-party content

The third-party "Top 10 tools for X" articles and analyst reports that rank highly for category queries disproportionately feed AI answers for those same queries. When providers that use retrieval (ChatGPT with browsing, Gemini, Perplexity) answer a category query, they often read several of these articles. Being named in the top five of the highest-ranking of these articles is one of the highest-leverage moves available.

This is earned, not bought. Outreach to the publishers writing these articles, with a clear pitch on why you belong in the list, is standard practice.

### 2. Ensure category-level Wikipedia coverage

If your category has a Wikipedia entry, check whether your brand is mentioned as a notable example. If the category does not have an entry but clearly merits one, a high-quality, well-sourced category entry that references your brand is a durable parametric signal.

### 3. Publish category-positioning content on your own site

A clear, specific, defensible statement of *what your category is, how it differs from adjacent categories, and which players sit inside it* does three things at once: it establishes your voice as an authority, gives the model a coherent framing to cite, and creates a retrievable surface for category queries that names your brand alongside competitors you are comfortable being compared to.

### 4. Earn analyst coverage in the category

Analyst reports (from Gartner, Forrester, IDC, and vertical analyst firms) carry weight both in training data and in the way the industry discusses the category. Being named even as a "challenger" or "visionary" in a published analyst quadrant has long-tail effects on Share of Model.

### 5. Build sustained community presence in the right venues

Reddit, Hacker News, category-specific Slacks and Discords, and vertical forums all feed qualitative signal into model training and retrieval. A brand that is discussed regularly (and fairly) in these communities accumulates the kind of long-tail mentions that models pick up when composing category answers.

### 6. Run a retrievable owned asset for each major category query

Identify the 10–20 category queries that matter most for your business. For each, ensure that your site has a page that ranks for (or is at least retrievable for) that query, with structured, citable content that names your brand in context. This turns retrieval into another path to Share of Model.

For more on what a retrievable asset looks like, see [Citation Is the New Ranking: The Unit of Success in AI Answers](/blog/citation-is-the-new-ranking-ai-answers).

What Share of Model does not tell you
-------------------------------------

Two honest limits.

**It does not tell you about framing.** A 60% Share of Model with consistently flat or dismissive framing is worse for the business than a 40% Share of Model with enthusiastic, specific framing. Share of Model is a presence metric; it does not capture sentiment. Pair it with Sentiment & Authority scoring (one of the six BrandGEO dimensions) for a fuller picture.

**It does not tell you about audience fit.** A brand can achieve high Share of Model in the answers to generic category prompts but near-zero Share of Model in the answers to the specific buyer persona prompts that actually represent your target market. The composition of the prompt set — especially the persona-and-scenario prompts — matters enormously.

The takeaway
------------

Share of Voice migrated. The channel that used to be media coverage and paid impressions is increasingly the composed answer of a language model. Share of Model is the analog metric — percentage of category-relevant AI answers in which your brand appears, measured against a stable competitive set on a stable prompt set, tracked over time across providers.

It is not the only metric that matters. Paired with Recognition, Knowledge Depth, Competitive Context, Sentiment & Authority, Contextual Recall, and AI Discoverability (the [six dimensions of AI brand visibility](/blog/six-dimensions-ai-brand-visibility-explainer)), it gives a marketing team something close to the visibility picture SOV once provided — adapted for the discovery channel that actually exists now.

If you want to see your Share of Model across five providers, benchmarked against your chosen competitive set, you can [start a free audit](/register) in about two minutes. Seven-day trial, no credit card.


