In February 2024, Gartner published a forecast that has been cited more often than almost any other single data point in the AI search conversation:
"Gartner predicts search engine volume will drop 25% by 2026, due to AI chatbots and other virtual agents."
Two years later, the quote still circulates — at board meetings, on analyst calls, in the opening slides of vendor decks. Most of the time it is deployed as a scare statistic; sometimes as a justification for buying an AI visibility tool. Rarely does anyone actually model the number. That is the interesting omission. A 25% contraction in your largest discovery channel is not a headline; it is a planning input. Until you convert it into a spreadsheet, the number bounces off your strategy without landing anywhere useful.
This post is the spreadsheet conversion.
What the forecast actually says
Three details tend to get lost in the retelling.
The number is a market-level forecast, not a per-brand forecast. Gartner's 25% applies to aggregate traditional search volume across Google, Bing, and their peers. It does not say your organic traffic will fall 25%. It does not say every keyword will see equivalent decline. The contraction will be uneven — concentrated in informational queries where AI answers the question directly, much smaller on transactional and navigational queries where users still click through to a site.
It is a 2026 endpoint, not a 2026 realization. The forecast is cumulative through the end of 2026. The decline is a curve, not a cliff. Early-adopting categories (travel, consumer electronics, how-to content) are already well past 10–15% year-over-year declines on many query types. Late-adopting categories are barely down.
It is not exclusive to AI Overviews. The 25% forecast includes AI chatbots (ChatGPT, Claude, Perplexity, Gemini) that intercept queries before they ever reach a search engine, as well as embedded AI answers within search engines themselves. In most B2B SaaS categories, the larger effect today is interception — buyers asking ChatGPT directly and never opening google.com — rather than AI Overviews eating the click.
Why the forecast has not produced more panic
A reasonable question: if a respected analyst firm told you a quarter of a major channel would disappear in two years, why is the response not louder?
Three reasons, each of which matters for how you plan.
The decline is absorbed by compounding effects. Most organic traffic portfolios are growing through new content, improving technical SEO, expanding into international markets, or benefiting from PR tailwinds at the same time as the AI-driven contraction eats at them. The net effect in a given quarter is often flat or mildly positive growth that masks a meaningful structural decline underneath. The CMO sees "organic traffic up 3%" and does not notice that the underlying trend is "would have been up 14% without the AI drag."
Attribution is murky. When a buyer researches on ChatGPT, clicks through to the site via a different path a week later, and converts on a branded search, the revenue gets attributed to brand. The AI-driven research is invisible in the attribution stack. This makes the channel contraction systematically underreported in the reports that reach the marketing team's desk.
The forecast is non-catastrophic at the portfolio level. A 25% drop in search volume does not mean a 25% drop in revenue. It means a shift in where the research phase happens. Brands that show up in AI answers capture the same demand through a different path. Brands that do not show up lose twice: on the organic click that no longer happens, and on the absence from the AI answer that replaced it.
That third point is the one that translates directly into the modelling exercise.
A four-variable model
To convert Gartner's headline into a plan, you need four inputs and one equation.
Variable 1: Baseline organic traffic. The number you are modelling from. Use trailing-twelve-month sessions attributable to organic search.
Variable 2: Informational-query share. The percentage of your organic traffic driven by informational queries (research-phase, not transactional). This is the slice most exposed to AI answer interception. For most B2B SaaS sites, this is 40–70%. For ecommerce product pages, much lower. For publisher sites, much higher.
Variable 3: Category adoption pace. How fast AI search is being adopted in your category, relative to the Gartner market average. Use a multiplier. If your buyers are early adopters (tech-forward B2B), use 1.2–1.5×. If your category is typical (mid-market services), use 1.0×. If your buyers are laggards (local, regulated, analog), use 0.5–0.8×.
Variable 4: AI visibility capture rate. The share of the diverted research-phase demand you are currently capturing through AI answers. For most brands that have not instrumented this, the honest starting answer is "we don't know, probably below 50% of fair share."
The equation, simplified:
2026 organic traffic loss =
Baseline × Informational share × (25% × Category pace) × (1 − AI capture rate)
Worked example. A Series B B2B SaaS with 400,000 monthly organic sessions, 60% informational share, a category pace of 1.2, and an honest AI capture rate of 30%:
400,000 × 0.60 × (0.25 × 1.2) × (1 − 0.30)
= 400,000 × 0.60 × 0.30 × 0.70
= 50,400 sessions/month at risk by end of 2026
That is a 12.6% hit to total organic traffic, or about 605,000 sessions over the year, assuming even distribution. At a 2% landing-to-opportunity conversion rate and a $15k ACV, each lost session carries roughly $300 of pipeline value, so 50,400 monthly sessions translate to about $15M of pipeline exposure per month — a number large enough to warrant board-level attention, and concrete enough to size a funded response against.
Run the model with your own numbers. The shape of the answer — a high single-digit to low double-digit percentage of organic traffic at risk, concentrated in informational queries — will be stable across most B2B portfolios.
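The four-variable equation above fits in a few lines of code. A minimal sketch, using the worked example's inputs as placeholders for your own:

```python
# Gartner-adjusted organic traffic risk model: the four-variable equation
# from this post. All inputs below are the post's illustrative worked
# example, not benchmarks; substitute your own portfolio numbers.

GARTNER_MARKET_DECLINE = 0.25  # Gartner's market-level 2026 forecast


def sessions_at_risk(baseline_monthly_sessions: float,
                     informational_share: float,
                     category_pace: float,
                     ai_capture_rate: float) -> float:
    """Monthly organic sessions at risk by end of 2026."""
    return (baseline_monthly_sessions
            * informational_share
            * (GARTNER_MARKET_DECLINE * category_pace)
            * (1 - ai_capture_rate))


# Worked example: Series B B2B SaaS, 400k sessions, 60% informational,
# 1.2x category pace, 30% AI capture rate.
risk = sessions_at_risk(400_000, 0.60, 1.2, 0.30)
print(f"{risk:,.0f} sessions/month at risk")      # 50,400
print(f"{risk / 400_000:.1%} of total organic")   # 12.6%
print(f"{risk * 12:,.0f} sessions/year at risk")  # 604,800
```

Swapping in your own baseline, informational share, pace, and capture rate produces the X% figure the rest of this post asks for.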
What the model changes about planning
Three concrete planning consequences follow from running the math.
Shift budget from informational SEO to informational AI visibility. If you have a content budget dedicated to ranking on research-phase queries, a portion of that budget is being spent on traffic that will not show up. The content is still useful — it feeds the AI answers that replace the clicks — but the success metric shifts from "ranked #1 for X" to "cited in the AI answer for X." Reallocate budget toward content that is structured to be cited, not just to rank.
Instrument the AI capture rate. The fourth variable in the model is the one most teams cannot fill in. Fixing that is the highest-leverage measurement investment of 2026. A baseline AI visibility audit, re-run monthly, turns that variable from a guess into a number. The number moves; the movement is what you manage.
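At its simplest, the instrumented capture rate is a ratio: answers that cite the brand over total prompts run, across providers. A hypothetical sketch (the provider names, prompt counts, and citation counts below are illustrative, not output from any real audit tool):

```python
# Hypothetical monthly audit results: for each AI provider, how many of a
# fixed set of research-phase prompts produced an answer citing the brand.
# All numbers are invented for illustration.
audit_results = {
    "chatgpt":    {"cited": 9,  "total": 30},
    "claude":     {"cited": 12, "total": 30},
    "gemini":     {"cited": 6,  "total": 30},
    "perplexity": {"cited": 15, "total": 30},
}


def capture_rate(results: dict) -> float:
    """Overall AI capture rate: cited answers / total prompts, all providers."""
    cited = sum(r["cited"] for r in results.values())
    total = sum(r["total"] for r in results.values())
    return cited / total


print(f"AI capture rate: {capture_rate(audit_results):.1%}")  # 35.0%
```

Re-run monthly against the same prompt set, this ratio becomes the fourth variable of the model, and its month-over-month movement is the thing you manage.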
Stop modelling as if 2025 SEO conditions persist. Any plan that projects next year's organic traffic by applying a flat growth multiplier to trailing-twelve-month performance is modelling a world that will not exist. Every major SEO forecasting tool needs a Gartner-adjusted overlay. Most do not have one by default.
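The gap between a flat-multiplier projection and a Gartner-adjusted one is easy to make visible. A sketch, assuming an illustrative 10% trailing growth rate and the 12.6% at-risk share from the worked example earlier in this post:

```python
# Naive flat-growth projection vs. a Gartner-adjusted overlay.
# The growth multiplier and drag share are illustrative assumptions.

baseline = 400_000   # current monthly organic sessions
flat_growth = 1.10   # trailing-twelve-month growth, naively extended
ai_drag = 0.126      # at-risk share from the worked example (12.6%)

naive_2026 = baseline * flat_growth
adjusted_2026 = baseline * flat_growth * (1 - ai_drag)

print(f"Naive projection:  {naive_2026:,.0f} sessions/month")
print(f"Gartner-adjusted:  {adjusted_2026:,.0f} sessions/month")
```

The two lines diverge by the full at-risk share; any budget built on the first line inherits that gap.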
The counter-argument
It would be fair to object: forecasts are not reality, and Gartner has overshot before. Why weight this one?
Three reasons to weight it despite that caution.
The directional signal is corroborated by multiple independent data sources. Ahrefs' measurement of ChatGPT query volume relative to Google — approximately 12% as of February 2026 — implies meaningful substitution. McKinsey's 44% consumer adoption implies the substitution is not niche. The aggregate picture across Gartner, McKinsey, Forrester, and Ahrefs is coherent, not divergent.
The risk is asymmetric. If Gartner overshoots by half — the real decline is 12% rather than 25% — a brand that planned for 25% has over-indexed on AI visibility slightly and holds a defensible lead. If Gartner undershoots — the real decline is 35% — a brand that planned for 0% has a hole in the budget. The planning cost of overshooting is much smaller than the cost of undershooting.
The forecast assumes no new product shocks. If OpenAI, Anthropic, or Google ships a consumer feature in 2026 that accelerates AI adoption meaningfully (a plausible outcome given the launch cadence of the last eighteen months), the 25% becomes a floor, not a ceiling.
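The asymmetry argument can be checked by re-running the worked example's equation at the overshoot and undershoot scenarios named above (12% and 35% market decline; all other inputs are the worked example's):

```python
# Sensitivity of the at-risk estimate to Gartner over- or undershooting.
# Inputs other than the market decline are the post's worked example.

def sessions_at_risk(market_decline: float) -> float:
    return 400_000 * 0.60 * (market_decline * 1.2) * (1 - 0.30)


for label, decline in [("Gartner overshoots (12%)", 0.12),
                       ("Gartner on target (25%)", 0.25),
                       ("Gartner undershoots (35%)", 0.35)]:
    print(f"{label}: {sessions_at_risk(decline):,.0f} sessions/month at risk")
```

Even in the overshoot case the exposure stays in the tens of thousands of sessions per month, which is why planning for zero is the expensive mistake.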
A short summary the CFO will accept
If you are presenting the implication to finance, the single-paragraph version:
Based on Gartner's forecast of a 25% contraction in traditional search volume by end of 2026, calibrated to our category adoption pace and informational-query share, we model approximately X% of current organic traffic as at risk over the planning horizon. A modest instrumentation investment — baseline AI visibility audit, monthly re-measurement, two optimization sprints against identified gaps — allows us to defend a disproportionate share of that exposure, at a cost that is trivial relative to the pipeline impact.
Swap in your X. The number from the model earlier in this post is how you generate it.
Where to start
The first honest input to the model is the AI capture rate. Until you measure it, the fourth variable is a guess, and the entire model is a guess with a decimal point. BrandGEO runs structured prompts across five AI providers and returns a 150-point score normalized to 0–100, broken into six dimensions per provider, with industry-aware key findings. It takes about two minutes.
Related reading:
- What McKinsey's 44% / 16% Numbers Really Mean for Your 2026 Marketing Plan
- Forrester on B2B: Why Buyers Adopt AI Search 3× Faster Than Consumers
- Measure → Fix → Track: An Operating System for AI Visibility
Start a free audit or see the pricing page if you are ready to instrument the fourth variable.
See how AI describes your brand
BrandGEO runs structured prompts across ChatGPT, Claude, Gemini, Grok, and DeepSeek — and scores your brand across six dimensions. Two minutes, no credit card.