Share of Voice was a marketing fixture for three decades. It measured your brand's share of media mentions, press coverage, or paid impressions against competitors. It was crude, it was useful, and it gave boards a number to argue over.
The underlying channel has shifted. Media coverage and paid impressions are no longer where most buyers first hear your brand named. The channel that matters most today is the composed answer of a language model, and the right analog for SOV in that channel has a different name: Share of Model.
This post walks through what Share of Model is, how to measure it credibly, and the tactical consequences of putting it on your dashboard.
What Share of Model actually measures
Share of Model is the percentage of category-relevant AI answers in which your brand is named, relative to the set of all brands named across the same answers.
That definition has three important clauses.
"Category-relevant AI answers." Share of Model is not measured on direct name queries ("What is Brand X?"). Those are Recognition. Share of Model is measured on category queries ("What are the best tools for X?" or "Who should I consider if I'm a [persona] looking for [outcome]?"). The point of the metric is to capture how often, when the question is about your category, your brand shows up at all.
"Brand is named." The unit is mention presence, not citation count. A single mention of your brand in an answer counts once; mentioning your brand twice in the same answer does not double-count. This keeps the metric comparable across answers of different lengths.
"Relative to the set of all brands named." Share of Model is inherently relative. Your brand's absolute appearance rate matters, but the metric frames it against your competitive set. If three brands each appear in 70% of answers and you appear in 35%, the 35% is defensible only in a category with four players; in a category with ten, it is much weaker.
Why this metric and not just "mention count"
A naive approach to AI visibility is to count how often each brand is mentioned across a prompt set. That count is informative, but the Share of Model framing adds two things that a raw count does not.
It normalizes across categories. A niche B2B software category may produce answers with three brands named; a broader consumer category may produce answers with eight. Raw counts are not comparable across categories. Share (percentage of answers in which brand appears, relative to all named brands) is.
It forces a defined competitive set. To calculate Share of Model, you have to decide what "all brands" means. That is useful discipline. It prevents the illusion of progress ("we're mentioned 40% of the time!") when the model is naming you alongside a dozen irrelevant competitors.
Raw mention counts are fine for month-to-month internal trending. Share of Model is the metric that belongs on a board deck.
How to calculate it
The calculation is straightforward once the sampling is set up.
- Define the prompt set. A stable set of 20–50 category-level prompts, covering the ways a buyer would actually ask about your category. BrandGEO uses 30 structured checks across six categories (direct brand, product discovery, competitor comparison, industry expertise, geographic relevance, recommendation scenarios). Direct brand prompts contribute to Recognition, not Share of Model. The category prompts are where Share of Model lives.
- Define the competitive set. List the brands that genuinely compete for the answer. Between 3 and 20 is a typical range. Include the obvious direct competitors and one or two adjacent or aspirational peers. Exclude distant adjacencies that clutter the count.
- Run the prompts across providers. Three to five runs per prompt per provider per day, across OpenAI, Anthropic, Google, xAI, and DeepSeek. Fewer runs produce unstable numbers; more produce marginal improvement at higher cost.
- Count brand presence per answer. For each answer, record which brands from the competitive set were named (at least once). Presence is binary per answer.
- Compute the share. For each brand, Share of Model = (number of answers brand was named in) / (total answers in the sample).
Optionally, compute Share of Voice Weighted, where each mention is weighted by the sentiment of the framing (positive, neutral, negative). This adds a second dimension — share and tone — that a raw presence count does not capture.
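The steps above can be sketched in a few lines of Python. The answer data, brand names, and sentiment weights here are illustrative assumptions, not BrandGEO's actual implementation or scoring scheme:

```python
from collections import Counter

# Illustrative sample (assumed data): each answer is the set of
# competitive-set brands named at least once in that answer.
answers = [
    {"Acme", "Zenith"},
    {"Acme"},
    {"Zenith", "Nimbus"},
    {"Acme", "Nimbus"},
]

competitive_set = {"Acme", "Zenith", "Nimbus"}

def share_of_model(answers, competitive_set):
    """Share of Model: fraction of answers in which each brand is named.

    Presence is binary per answer, so naming a brand twice in one
    answer still counts once.
    """
    counts = Counter()
    for named in answers:
        for brand in named & competitive_set:
            counts[brand] += 1
    total = len(answers)
    return {brand: counts[brand] / total for brand in sorted(competitive_set)}

# One possible sentiment weighting (an assumed scheme, not BrandGEO's):
# positive framing counts fully, neutral half, negative not at all.
SENTIMENT_WEIGHT = {"positive": 1.0, "neutral": 0.5, "negative": 0.0}

def weighted_share(labeled_answers, competitive_set):
    """Share of Voice Weighted: presence weighted by framing sentiment."""
    scores = Counter()
    for labels in labeled_answers:  # one {brand: sentiment} dict per answer
        for brand, sentiment in labels.items():
            if brand in competitive_set:
                scores[brand] += SENTIMENT_WEIGHT[sentiment]
    total = len(labeled_answers)
    return {brand: scores[brand] / total for brand in sorted(competitive_set)}

shares = share_of_model(answers, competitive_set)
# Acme is named in 3 of 4 answers -> 0.75; Zenith and Nimbus in 2 of 4 -> 0.5
```

In a real pipeline the `answers` list would be built from brand-extraction over the raw provider responses, pooled across runs and providers before computing the share.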
The two most common mistakes
Two errors reliably undermine early Share of Model measurement.
Mistake one: including only direct-query prompts. If your prompt set is dominated by "What is Brand X?" and "Describe Brand X," your brand is named 100% of the time by definition. You have measured Recognition under another name. Real Share of Model lives in category prompts where your brand has to compete for a slot.
Mistake two: letting the competitive set drift. If the brands in the competitive set change between measurements, the denominator changes, and the percentages are not comparable across time. Lock the competitive set, version-control it, and revisit it quarterly. When you change it, note the change in the dashboard.
What good looks like
There is no universal "good" number for Share of Model, because categories vary in how many brands a model typically names per answer. A few reference points:
- In a tightly consolidated category (3–5 dominant brands), expect the top brands to sit at 60–85% Share of Model. The leader is often near 80%.
- In a broad, fragmented category (10+ plausible brands), the distribution is flatter. Leaders may sit at 40–60%; mid-tier brands at 15–30%.
- For an emerging or niche category, models may struggle to name more than two or three brands consistently. The top brands achieve high Share of Model (50–70%) but the long tail is near zero.
A useful internal read is not the absolute number but the distance from the leader and the trajectory over time. If you are consistently at 15% in a category where the leader is at 80%, that is a different problem than being at 15% in a category where no one is above 25%.
The diagnostic value of the gap
Share of Model is most useful not as a scoreboard but as a diagnostic.
If your Share of Model is very high but revenue is not keeping pace, the gap is somewhere other than visibility. You are being named, but something in the downstream funnel — positioning, pricing, product-market fit — is preventing the mention from converting.
If your Share of Model is very low and the leader's is very high, the question is whether the leader's advantage is earned through signals you can replicate or whether it is structural (network effects, install base). Share of Model cannot answer that question alone; it just surfaces it for investigation.
If your Share of Model is moderate and volatile week-to-week, you are likely near the boundary of "model considers your brand when composing an answer" versus "model omits your brand." Work on the signals that shift you from uncertain inclusion to reliable inclusion — specifically, category-level Wikipedia coverage, third-party listicles, and analyst mentions.
If your Share of Model is stable but drops sharply on a specific provider, that is often a model event (new version, training cutoff shift) or a retrieval event. Cross-reference the drop date with known model releases before concluding your brand lost ground with the category.
How to grow it
Six specific moves reliably raise Share of Model over 2–4 quarters, in rough order of impact.
1. Get listed in category-defining third-party content
The third-party "Top 10 tools for X" articles and analyst reports that rank highly for category queries disproportionately feed AI answers for those same queries. When retrieval-using providers (ChatGPT with browsing, Gemini, Perplexity) answer a category query, they often read several of these articles. Being named in the top five of the highest-ranking of these articles is one of the highest-leverage moves available.
This is earned, not bought. Outreach to the publishers writing these articles, with a clear pitch on why you belong in the list, is standard practice.
2. Ensure category-level Wikipedia coverage
If your category has a Wikipedia entry, check whether your brand is mentioned as a notable example. If the category does not have an entry but clearly merits one, a high-quality, well-sourced category entry that references your brand is a durable parametric signal.
3. Publish category-positioning content on your own site
A clear, specific, defensible statement of what your category is, how it differs from adjacent categories, and which players sit inside it does three things at once: it establishes your voice as an authority, gives the model a coherent framing to cite, and creates a retrievable surface for category queries that names your brand alongside competitors you are comfortable being compared to.
4. Earn analyst coverage in the category
Analyst reports (from Gartner, Forrester, IDC, and vertical analyst firms) carry weight both in training data and in the way the industry discusses the category. Being named even as a "challenger" or "visionary" in a published analyst quadrant has long-tail effects on Share of Model.
5. Build sustained community presence in the right venues
Reddit, Hacker News, category-specific Slacks and Discords, and vertical forums all feed qualitative signal into model training and retrieval. A brand that is discussed regularly (and fairly) in these communities accumulates the kind of long-tail mentions that models pick up when composing category answers.
6. Run a retrievable owned asset for each major category query
Identify the 10–20 category queries that matter most for your business. For each, ensure that your site has a page that ranks for (or is at least retrievable for) that query, with structured, citable content that names your brand in context. This turns retrieval into another path to Share of Model.
For more on what a retrievable asset looks like, see Citation Is the New Ranking: The Unit of Success in AI Answers.
What Share of Model does not tell you
Two honest limits.
It does not tell you about framing. A 60% Share of Model with consistently flat or dismissive framing is worse for the business than a 40% Share of Model with enthusiastic, specific framing. Share of Model is a presence metric; it does not capture sentiment. Pair it with Sentiment & Authority scoring (one of the six BrandGEO dimensions) for a fuller picture.
It does not tell you about audience fit. A brand can achieve high Share of Model in the answers to generic category prompts but near-zero Share of Model in the answers to the specific buyer persona prompts that actually represent your target market. The composition of the prompt set — especially the persona-and-scenario prompts — matters enormously.
The takeaway
Share of Voice migrated. The channel that used to be media coverage and paid impressions is increasingly the composed answer of a language model. Share of Model is the analog metric — percentage of category-relevant AI answers in which your brand appears, measured against a stable competitive set on a stable prompt set, tracked over time across providers.
It is not the only metric that matters. Paired with Recognition, Knowledge Depth, Competitive Context, Sentiment & Authority, Contextual Recall, and AI Discoverability (the six dimensions of AI brand visibility), it gives a marketing team something close to the visibility picture SOV once provided — adapted for the discovery channel that actually exists now.
If you want to see your Share of Model across five providers, benchmarked against your chosen competitive set, you can start a free audit in about two minutes. Seven-day trial, no credit card.
See how AI describes your brand
BrandGEO runs structured prompts across ChatGPT, Claude, Gemini, Grok, and DeepSeek — and scores your brand across six dimensions. Two minutes, no credit card.