BrandGEO
SEO Market Research · 7 min read · Updated Apr 23, 2026

How Google's AI Overviews Changed CTR Curves — What Published Data Tells Us

The data is in. The CTR curve changed shape. Here's what that means for content strategy — and for anyone still planning 2026 around the old curve.

For two decades, the SEO click-through-rate curve was stable enough to plan against. Position one on a standard informational query captured somewhere between 25% and 32% of clicks, depending on the study. Position two captured about half of that. Positions three through ten declined in a predictable pattern — the long tail that every content calendar and backlink budget was implicitly modeled on.

Then Google launched AI Overviews, rolled it out to informational queries, and the curve changed shape. Published research from Ahrefs, Similarweb, and independent SEO teams now lets us look at the new curve with reasonable confidence.

It is not a small deviation from the old curve. It is a different curve.

What the old curve looked like

Recapping briefly, because the baseline matters:

  • Position 1: roughly 27–32% click-through rate
  • Position 2: roughly 14–18%
  • Position 3: roughly 9–11%
  • Position 4: roughly 6–8%
  • Positions 5–10: a long declining tail, summing to 15–20% of total clicks
  • Below position 10: negligible

Studies varied (Ahrefs, Advanced Web Ranking, Sistrix, Backlinko), but all converged on the same general shape: a heavy concentration on positions one and two, and a rapidly thinning tail.

The planning implication was also stable: win position one if you can, position two is almost as good, positions three and four justify the investment, below that the math gets hard.

What the new curve looks like

Since AI Overviews rolled out to English-market informational queries through 2024 and 2025, Ahrefs' research across a large keyword set has shown a consistent pattern. When an AI Overview appears above the organic results:

  • Position 1 CTR declines by somewhere between 30% and 40% relative to queries without an AI Overview.
  • The decline is concentrated on informational queries (how-to, what-is, comparison, research-phase). Transactional queries (buying intent, brand searches, navigational) are much less affected.
  • The decline is not evenly distributed across the ten blue links. Position one loses the most; position two loses nearly as much. The long tail loses proportionally less — but from a smaller base.

Similarweb and independent studies from several large publisher networks corroborate the pattern. A 2025 meta-analysis cited in multiple SEO publications placed the average CTR compression at around 34% for position one on AI Overview-enabled queries.

The shape of the new curve, in broad terms:

  • Position 1 (with AI Overview above): 16–20% CTR, down from 27–32%.
  • Position 2: 9–12%, down from 14–18%.
  • Positions 3–10: slightly compressed, but not dramatically.
  • AI Overview itself: captures the click-that-did-not-happen — the user closes the search without clicking anything.

That last point carries the largest planning implication. The clicks are not shifting to lower positions; they are being absorbed by the AI answer. The AI Overview satisfies the query in place. The user reads the summary and feels no need to visit any of the ten blue links.
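The difference compounds quickly at realistic search volumes. A minimal sketch, using the midpoints of the ranges quoted above (the curve values are this post's published ranges, not live data, and the ~2% tail figure is an illustrative assumption):

```python
# Expected monthly clicks under the old and new CTR curves.
# Values are midpoints of the ranges quoted in the post.
OLD_CURVE = {1: 0.295, 2: 0.16, 3: 0.10, 4: 0.07}    # pre-AI-Overview
NEW_CURVE = {1: 0.18, 2: 0.105, 3: 0.095, 4: 0.065}  # AI Overview above results

def expected_clicks(position: int, monthly_searches: int, ai_overview: bool) -> float:
    """Expected organic clicks for a ranking position at a given search volume."""
    curve = NEW_CURVE if ai_overview else OLD_CURVE
    # Positions 5-10 approximated at a flat ~2% each (illustrative assumption).
    return monthly_searches * curve.get(position, 0.02)

# A position-1 ranking on a 10,000-search informational query:
print(expected_clicks(1, 10_000, ai_overview=False))  # 2950.0 clicks, old curve
print(expected_clicks(1, 10_000, ai_overview=True))   # 1800.0 clicks, with an AI Overview
```

Same ranking, same query, roughly 1,150 fewer clicks a month. That gap is the absorbed demand the AI answer captures.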

Where the click actually goes

If the user does click through from an AI Overview, the distribution of those clicks is a separate question. Ahrefs' tracking of AI Overview citation behaviour suggests:

  • Pages cited inside the AI Overview itself capture a meaningful share of the clicks that do occur — often more than the organic #1 result for the same query.
  • The citation pattern does not perfectly mirror organic ranking. Many AI Overview citations go to pages that do not rank in the top three for the same query; some go to pages ranked 5–15. The correlation is real but not tight.
  • Domain authority, page freshness, and entity clarity appear to influence citation selection more strongly than pure link-based ranking signals.

The net: being cited in the AI Overview matters more than ranking first on many queries. The mechanics of becoming cited are adjacent to, but not identical to, the mechanics of ranking.

What Ahrefs' broader research showed

Two other findings from published Ahrefs research are worth calling out, because they anchor the strategic conversation.

Brand mention correlation. An Ahrefs study of 75,000 brands (Brand Radar methodology, 2025) found a correlation of approximately 0.664 between a brand's mention volume across the web and its appearance rate in AI Overviews. That is a strong correlation. It suggests AI Overview citation behaviour is meaningfully driven by how widely and credibly a brand is talked about online — not just by the brand's own pages.

The Wikipedia and review-site signal. Pages cited in AI Overviews disproportionately trace back to Wikipedia entries, G2 / Capterra / Trustpilot pages, and a handful of credible industry publications for a given category. The diversity-of-source signal is strong. Brands with a well-built upstream citation profile (Wikipedia, reviews, industry media) see a compounding advantage in AI Overview presence.

These two findings together reshape how content strategy should be read. Ranking on Google remains a useful proxy signal, but upstream credibility — mention volume, citation by authoritative third parties — is a stronger predictor of AI Overview inclusion than on-page optimization alone.

The implications for content strategy

Four practical implications.

1. The "informational content" playbook has weakened, not died

For a decade, producing high-quality informational content was a reliable path to organic traffic. The content would rank; the clicks would arrive; the conversion funnel would work. On AI Overview-affected queries, that playbook now delivers compressed returns. The content still ranks; the AI Overview captures the click; the traffic does not materialize.

This is not a reason to stop producing informational content. That content now does double duty: it ranks, and it feeds the AI Overview itself. But the traffic outcome has to be re-expected. If your business case for a content campaign was predicated on the old CTR curve, the business case needs rewriting.

2. The definition of "won" has to expand

On informational queries, "won" used to mean "ranked #1." On AI Overview-affected queries, "won" means one of three things: cited in the AI Overview; ranked #1 and visibly cited; or mentioned as a category leader in the AI summary even if not directly cited.

The first two are measurable in tools. The third — being named in the AI summary paragraph — requires tracking that is adjacent to but distinct from classic rank tracking. This is the bridge between traditional SEO measurement and AI visibility measurement.

3. Transactional and branded queries remain the most defensible

Queries where the buyer has intent to act — "pricing," "demo," "login," "[your brand]" — are much less affected by AI Overviews. The AI summary is less useful for a user with transactional intent; they want to get to the page and do the action. Content strategy that concentrates on those queries has weathered the shift better than content aimed at research-phase queries.

This implies a re-balancing: less volume on long-tail informational content (which still serves AI Overview signal but delivers less traffic), more depth on transactional and decision-phase content (which retains the old CTR curve more fully).

4. Upstream signals deserve a content-equivalent budget

If mention volume correlates at 0.664 with AI Overview inclusion, a marketing team that allocates 95% of its content-and-earned budget to owned content is misallocating it. A rebalancing toward digital PR, analyst citation, Wikipedia editorial, G2 / Capterra review acquisition, and vertical community presence (Reddit, industry forums) is consistent with the data.

"Earn citations" is a different skill set from "publish content." Most B2B teams have the first capability thinly staffed.

The one number to anchor on

If you take one number from this post, take this one: on informational queries with an AI Overview above the organic results, position-one CTR declines by roughly 30–40%.

Most SEO forecasting tools have not yet updated their default CTR curves to reflect this. Many content plans for 2026 are still implicitly using the old curve. If your plan projects traffic for research-phase content using the 27–32% position-one CTR, it is overstating the expected return by something like a third.

Rebuilding the plan against the new curve is not complicated. It is tedious. It requires going through your top informational targets, checking AI Overview incidence, and applying a discount factor to the expected traffic. It takes a content operations analyst a week. The revised plan is a better instrument than the unrevised one.
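The revision itself is a simple pass over the keyword plan. A minimal sketch of the process described above, where the keywords, volumes, and the 34% discount are illustrative assumptions and AI Overview incidence would come from your own rank-tracking data:

```python
# Apply a CTR discount to forecast traffic wherever an AI Overview appears.
AIO_DISCOUNT = 0.34  # midpoint of the 30-40% compression range cited above

def revise_forecast(targets: list[dict]) -> list[dict]:
    """Return traffic forecasts adjusted for AI Overview presence."""
    revised = []
    for t in targets:
        factor = (1 - AIO_DISCOUNT) if t["has_ai_overview"] else 1.0
        revised.append({**t, "revised_traffic": round(t["projected_traffic"] * factor)})
    return revised

# Hypothetical plan rows; replace with your own keyword targets.
plan = [
    {"keyword": "what is churn rate", "projected_traffic": 1200, "has_ai_overview": True},
    {"keyword": "acme pricing",       "projected_traffic": 800,  "has_ai_overview": False},
]
for row in revise_forecast(plan):
    print(row["keyword"], row["revised_traffic"])
# prints: what is churn rate 792 / acme pricing 800
```

Note that the transactional row keeps its full forecast, matching implication three: decision-phase queries retain the old curve.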

What does not change

Three things the data does not support, even as the curve has moved.

SEO is not dead. Transactional and branded queries remain high-value, high-click, high-converting. Technical SEO remains the entry ticket to any serious visibility work. The argument "AI killed SEO" overshoots the data.

Content quality is still the underlying driver. The content that gets cited in AI Overviews is, on average, better content. Depth, clarity, entity-explicit writing, credible sourcing — these attributes improve both ranking and citation odds. The investment thesis on quality content has strengthened, not weakened.

Rank tracking is still valuable. Positions continue to correlate with AI citation probability, even if imperfectly. A brand that ranks well across a portfolio of keywords has better AI visibility, on average, than a brand that does not. The correlation is looser than it used to be, but it remains real.

Where to start

If you have not yet overlaid AI visibility measurement onto your SEO dashboard, the simplest start is a baseline audit: which queries relevant to your brand trigger AI Overviews, how often is your brand cited, and how does your citation rate across the major providers compare with your Google rank. BrandGEO runs structured prompts across five AI providers in about two minutes; the multi-provider score complements, rather than replaces, classic rank tracking.

Start a free audit or see the pricing page.

See how AI describes your brand

BrandGEO runs structured prompts across ChatGPT, Claude, Gemini, Grok, and DeepSeek — and scores your brand across six dimensions. Two minutes, no credit card.

Related posts

BrandGEO
SEO Apr 20, 2026

The Wikipedia Lever: How a Well-Structured Entry Moves Your Knowledge Depth Score

Of every lever in Generative Engine Optimization, a well-formed Wikipedia entry has the most predictable payoff on how LLMs describe your brand. Wikipedia corpora are oversampled in nearly every major model's training data, cited heavily by search-augmented providers, and treated as a canonical fact source. Yet most brands either have no entry at all, a three-sentence stub, or an entry that was edited once in 2021 and left to rot. This is the playbook to fix that without getting your article deleted or your account blocked.

BrandGEO
AI Visibility Apr 15, 2026

The Shift From Search to Answer: Four Years That Redefined Discovery

In late 2022, a buyer researching a product opened Google, scanned ten blue links, clicked two or three, and formed an opinion across several tabs. In 2026, the same buyer opens ChatGPT, types a question in a sentence, and reads one composed paragraph. The channel has not widened — it has compressed. This is the most consequential shift in discovery since the launch of Google itself, and it breaks several things marketers have treated as stable for two decades.

BrandGEO
SEO Apr 13, 2026

Schema Markup for LLMs: 7 Elements That Matter, 12 That Don't

Schema markup is the single most over-prescribed piece of tactical advice in GEO. Every checklist tells you to add it. Few tell you which parts actually affect how LLMs describe your brand, which parts only help Google's rich snippets, and which parts have become decorative. This post is the triage: the seven schema elements worth implementing properly in 2026 for AI visibility, the twelve you can safely deprioritize, and the one that matters more than all the rest combined.