A Markdown version of this page is available at https://brandgeo.co/blog/mckinsey-44-16-numbers-2026-marketing-plan.md, optimized for AI and LLM tools.

 [ Brand Strategy ](https://brandgeo.co/blog/category/brand-strategy) [ Market Research ](https://brandgeo.co/blog/category/market-research) ·  April 21, 2026  ·     8 min read  · Updated Apr 23, 2026

 What McKinsey's 44% / 16% Numbers Really Mean for Your 2026 Marketing Plan
============================================================================

 The most-cited stat in AI marketing is also the most misunderstood. Here's what the McKinsey data actually says — and what to do about it.


Two numbers from [McKinsey's "New Front Door to the Internet" report](https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/new-front-door-to-the-internet-winning-in-the-age-of-ai-search) (August 2025) have travelled further than any other statistic in the AI visibility conversation. You have seen them quoted on LinkedIn, in investor decks, in analyst notes, and at the top of a great many GEO articles:

- **44%** of US consumers now cite AI search as their primary source for purchase decisions.
- Only **16%** of brands systematically measure their AI visibility.

Most of the time, those two numbers get pasted together, followed by a headline such as "the gap is the opportunity." That framing is not wrong. It is also not sufficient. The numbers deserve a more careful read than a LinkedIn hook allows — particularly if your marketing plan has a dollar figure attached to it.

This post does the careful read.

What the numbers actually measured
----------------------------------

The McKinsey report surveyed US consumers about how they research purchase decisions. The 44% figure refers to buyers who say AI search — ChatGPT, Gemini, Claude, Perplexity, and peers — is their *primary* source when investigating a purchase. Primary, not exclusive. Consumers still use Google, still read reviews, still ask friends. But for nearly half the survey sample, the "first thing I do" slot now belongs to AI.

The 16% figure refers to the share of brands with a defined, repeatable process for measuring how AI systems describe them. Not "brands who care about AI" — which is much higher. Not "brands who have asked ChatGPT about themselves once" — which is close to universal. Brands with a *process*: a defined set of queries, a cadence, a rubric, a dashboard, someone whose job it is to own the number.

Two additional data points from the same report tend to get dropped when the headline travels:

- **40–55%** of consumers use AI search as a primary source *by sector*. Travel and consumer electronics sit higher; commodity categories lower. Your industry probably does not sit exactly at the 44% average.
- Unprepared brands — those without an AI visibility baseline — are projected to lose **20–50%** of their organic traffic as AI search adoption compounds.

The 20–50% projection is the tail-risk number. It is also the one most likely to get cut from an executive summary because it sounds alarmist. It is not alarmist; it is a range, grounded in share-shift modelling, and it belongs in the planning conversation.

What the numbers did not measure
--------------------------------

Three clarifications that matter when you use these figures internally.

**First, the 44% is self-reported intent, not attributed conversion.** McKinsey asked buyers where they *start*. They did not track where those buyers *bought*. The causal chain between "I asked ChatGPT" and "I purchased Brand X" is still being mapped by analytics teams, and the attribution windows are messy. Treat the 44% as evidence of a channel shift, not as a direct conversion-to-revenue ratio.

**Second, the 16% is a point estimate.** It will move. Every quarter, more brands stand up basic AI visibility tracking — some with dedicated tools, some with a spreadsheet and a weekly ritual. By the time you are reading the number, it is probably 20% or 22%. The gap is closing, which is the second-order reason the land-grab framing exists.

**Third, "measures AI visibility" is not a standardized definition.** Some of the 16% are running rigorous structured-prompt audits across five providers. Others are asking ChatGPT a few questions on a Friday and noting the result in a shared doc. The variance inside that 16% is substantial. The quality bar matters as much as the count.

The load-bearing finding is actually a different one
----------------------------------------------------

If you read the full report rather than the headline, the most actionable insight is not the 44%. It is this: **the gap between consumer AI-search adoption and brand measurement of AI-search is the largest measurement-to-channel gap McKinsey has recorded in a decade of tracking marketing channels.**

That framing is what a CMO should internalize. Historically, when a channel accumulated serious consumer adoption, brand measurement followed within two to four quarters. Social media, mobile, voice search — in each case, measurement caught up because buyer behaviour forced the question. AI search is the first channel in recent memory where the adoption curve has significantly outpaced the measurement curve, and the delay is measured in years, not quarters.

That is the strategic window. Not "44% of buyers use AI," which will be obvious to your board without a research citation. **The asymmetry between buyer behaviour and brand instrumentation** — that is the insight worth briefing a planning meeting on.

How to translate the numbers into a plan
----------------------------------------

Four practical moves if you are building a 2026 marketing plan around this data.

### 1. Calibrate the 44% to your category

Do not plan around the headline. Plan around the sector-level figure. If you sell enterprise B2B SaaS to CFOs, the applicable number is not 44%: Forrester's B2B-specific research suggests B2B buyers are adopting AI search roughly **three times faster** than consumers. If you sell consumer electronics, the figure is probably above 44%. If you sell a commodity category with no meaningful online research cycle, it may be below 30%.

A simple planning matrix:

- High-consideration, long sales cycle (enterprise SaaS, capital equipment, professional services): model the channel shift aggressively.
- Mid-consideration (mid-market SaaS, ecommerce above commodity): model at or above the 44% average.
- Commodity or impulse: the 44% probably does not yet bite. Monitor, do not yet reallocate budget.
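The matrix above can be sketched as a small lookup. To be clear, every sector figure below is an illustrative placeholder, not survey data; the only sourced anchor is the 44% headline fallback:

```python
# Hypothetical calibration sketch: replace the 44% headline with a
# category-level adoption estimate before modelling channel shift.
# Sector values are placeholders, not McKinsey data.

SECTOR_ADOPTION = {
    "enterprise_saas": 0.55,       # high-consideration, long sales cycle
    "consumer_electronics": 0.50,  # upper end of the 40-55% band
    "commodity": 0.28,             # below the band; monitor only
}

def planned_channel_share(sector: str, headline: float = 0.44) -> float:
    """Return the adoption figure to plan around for a sector,
    falling back to the headline average if the sector is unknown."""
    return SECTOR_ADOPTION.get(sector, headline)

print(planned_channel_share("enterprise_saas"))  # 0.55
print(planned_channel_share("unknown_sector"))   # 0.44
```

The point of the lookup is the fallback: when you have no sector-specific evidence, you plan around 44% consciously, not by default.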

### 2. Set a baseline, not a moonshot

The gap between 44% and 16% suggests the action is to move into the 16%, not to leap past it. That means establishing a baseline before committing to a target. Run an audit across the major providers. Record the number. Pick two dimensions — Recognition and Contextual Recall are the obvious starters — and set a trailing thirty-day improvement goal.

The mistake most teams make at this point is to commit to a number before they know what drives it. "Move our ChatGPT visibility score to 75" is not a plan. "Reduce the gap between our Contextual Recall and our nearest two competitors by end of Q2" is.
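The gap-to-competitors goal can be made concrete with a few lines. The scores below are hypothetical (on the 0–100 scale the audit uses); the function name and the "nearest two" rule are our illustration of the goal, not a BrandGEO API:

```python
# Illustrative sketch: express the target as a gap to the nearest two
# competitors ahead of you on one dimension (e.g. Contextual Recall),
# not as an absolute score. All scores here are hypothetical.

def recall_gap(our_score: float, competitor_scores: list[float]) -> float:
    """Gap between our score and the mean of the two competitors
    closest to, and above, our position."""
    ahead = sorted(s for s in competitor_scores if s > our_score)
    nearest_two = ahead[:2]
    if not nearest_two:
        return 0.0  # already ahead of the field
    return sum(nearest_two) / len(nearest_two) - our_score

baseline = recall_gap(58.0, [71.0, 64.0, 80.0, 52.0])
print(baseline)  # 9.5 -> the number to shrink by end of Q2
```

"Reduce this number" is a plan in a way that "hit 75" is not, because re-running the same calculation each month tells you whether the gap is closing.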

### 3. Budget for the measurement, separately from the optimization

A common early mistake is to bundle AI visibility measurement and AI visibility optimization into a single line item. They have different budget profiles. Measurement is a relatively fixed cost — the price of a tool, or the time of one analyst — and it scales linearly with the number of brands or monitors you run. Optimization is a variable, campaign-driven cost: content production, digital PR, schema work, Wikipedia editing, category citations.

If you cannot separate these two, you cannot tell your CFO what you are paying for. The measurement line should be modest and defended on the basis of instrumentation value. The optimization line should be justified on the basis of the gap the measurement revealed.
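The two budget profiles can be sketched side by side. Every dollar figure below is a placeholder for illustration; the shape is what matters — one line scales linearly with monitors, the other is a sum of discrete campaigns:

```python
# Hypothetical budget sketch: measurement is roughly fixed per monitor,
# optimization is campaign-driven. Dollar figures are placeholders.

MEASUREMENT_COST_PER_MONITOR = 150.0  # tool seat or analyst time, monthly
ANALYST_OVERHEAD = 500.0              # fixed monthly review time

def measurement_line(monitors: int) -> float:
    """Instrumentation cost: scales linearly with monitors run."""
    return ANALYST_OVERHEAD + MEASUREMENT_COST_PER_MONITOR * monitors

def optimization_line(campaign_costs: list[float]) -> float:
    """Variable cost: the sum of whatever campaigns the gap justified."""
    return sum(campaign_costs)

print(measurement_line(4))                  # 1100.0
print(optimization_line([3000.0, 1200.0]))  # 4200.0
```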

### 4. Put a name on the number

Every metric that survives more than two quarters has an owner. Rankings had an SEO manager. Paid acquisition had a performance marketer. Brand lift had a brand lead. AI visibility needs a name. Not necessarily a new hire — most mid-market teams attach it to an existing SEO or content manager — but a person whose quarterly review includes the number.

When no one owns it, the number drifts. When one person owns it, the number gets defended in the same QBR that everything else is defended.

A word on the 84%
-----------------

The rhetorical use of the 84% figure ("the other 84% of brands are missing this") is tempting and mostly reasonable. Two cautions.

The 84% is not uniformly uninformed. A meaningful chunk of it is composed of brands in categories where AI visibility genuinely does not yet matter — local services, some B2B commodity categories, brands whose buyers research offline. "Not measuring" in those segments is a rational allocation of scarce marketing attention, not a failure.

The other chunk of the 84% is composed of brands who *do* care, have run ad-hoc audits, and have concluded they do not yet have the process in place to operationalize the measurement. That is different from apathy. It is a starting condition.

Which means: the marketing value of "being in the 16%" is partly defensive (you are not surprised by a channel shift) and partly offensive (you can run experiments and see them move the needle). The offensive value is the more interesting one, and it is why a serious baseline matters more than the headline stat.

What the board should hear
--------------------------

If you are briefing a board or exec team on the data, the three takeaways that translate well:

1. AI search is an adoption-first, measurement-second channel. Consumer behaviour is moving faster than brand tooling. That is rare and worth naming.
2. The right response is measurement discipline, not a wholesale budget reallocation. Baseline, instrument, then reallocate in Q3 or Q4 based on what the baseline reveals.
3. The 20–50% organic-traffic tail risk is real, and brands with no visibility instrumentation will discover the shift through a revenue forecast miss. The cost of instrumentation is small relative to the cost of that surprise.
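The tail risk in takeaway 3 is easiest to brief when translated into revenue terms. A back-of-envelope sketch, in which every input is a placeholder you would replace with your own analytics numbers; only the 20–50% range comes from the report:

```python
# Back-of-envelope sketch of the 20-50% organic-traffic tail risk.
# All inputs are illustrative placeholders, not benchmarks.

monthly_organic_sessions = 120_000
conversion_rate = 0.012   # sessions -> opportunities
win_rate = 0.22           # opportunities -> closed deals
avg_deal_value = 4_500.0  # blended, in dollars

def revenue_at_risk(traffic_loss: float) -> float:
    """Monthly revenue exposed if organic traffic drops by traffic_loss."""
    lost_sessions = monthly_organic_sessions * traffic_loss
    return lost_sessions * conversion_rate * win_rate * avg_deal_value

low, high = revenue_at_risk(0.20), revenue_at_risk(0.50)
print(f"${low:,.0f} - ${high:,.0f} per month at risk")
```

Even with deliberately conservative inputs, the range usually dwarfs the cost of a measurement line item, which is the comparison the board needs to see.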

None of this is revolutionary — to use a word we try to avoid. It is the same discipline applied to every prior channel. The difference is timing: the channel is new, the tooling is new, and the 16%-vs-44% gap will not stay open for long.

Where to start
--------------

If you do not yet have a baseline, the first step is an audit across the major providers. BrandGEO runs structured prompts across five providers (OpenAI, Anthropic, Gemini, xAI, DeepSeek), scores six dimensions on a 150-point scale normalized to 0–100, and returns a PDF report with industry-aware key findings per provider. The audit takes about two minutes to run, and the seven-day trial requires no credit card.

See related reading:

- [What Is AI Brand Visibility? A 2026 Primer](/blog/what-is-ai-brand-visibility-2026-primer)
- [The Authority Waterfall: Why AI Visibility Flows From Upstream Credibility](/blog/authority-waterfall-ai-visibility-upstream-credibility)
- [Gartner's 25% Search-Volume Drop by End of 2026: What to Model For](/blog/gartner-25-percent-search-drop-what-to-model)

You can [start a free audit](/register) or review the [pricing page](/pricing) to see where your team fits.

### Keywords

 [ #For CMOs ](https://brandgeo.co/blog/tag/for-cmos) [ #AI Visibility ](https://brandgeo.co/blog/tag/ai-visibility) [ #Framework ](https://brandgeo.co/blog/tag/framework) [ #McKinsey ](https://brandgeo.co/blog/tag/mckinsey)



