The familiar world is not disappearing, but it is shrinking

When the conversation turns to AI visibility, the first question a business owner or marketer almost always asks is the same: “We have strong SEO—does that help or not?” The answer is not as simple as one might like. Some skills and infrastructures from classical search genuinely continue to work in the new environment. Some are losing significance. And some habits accumulated over years of optimization can actively get in the way rather than merely fail to help.

To understand why, it is enough to look at the change in user behavior. According to Similarweb, the share of search queries that end without a single click to an external site rose from 56% in May 2024 to 69% by May 2025 [1]. A Pew Research Center study based on 68,000 real search queries showed that when an AI Overview was present, users clicked on results in 8% of cases, compared with 15% when it was absent [2]. Seer Interactive, using a sample of 25 million organic impressions, found that organic CTR for queries with AI Overview fell to 0.61%, compared with 1.76% without it [3]. This is not noise or statistical coincidence. More and more often, the user gets an answer without leaving the interface of the search system or answer system.

Does that mean SEO is dead? No. Google Search Central states explicitly that there are no additional requirements for appearing in AI Overviews and AI Mode—the same fundamentals of search optimization still matter [4]. A page must be indexed and eligible to appear in ordinary search. But “still matter” and “are sufficient” are two different things. SEO fundamentals open the door; they do not guarantee that the brand will make it inside the answer.

What carries over from SEO and continues to work

Technical site accessibility. If a search crawler cannot crawl the pages, they will not enter the index and therefore cannot become a source for an AI summary. A valid robots.txt, an XML sitemap, properly functioning canonical URLs, and fast load times all remain basic requirements for entry. Google emphasizes that for a page to become a supporting link in AI Overviews or AI Mode, it must be indexed and allowed to appear with a text snippet [4].
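This first gate is easy to verify mechanically. The sketch below uses Python's standard `urllib.robotparser` to check whether a given crawler is allowed to fetch a page; the robots.txt content and URLs are illustrative, not taken from any real site.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: everything open except a private section,
# with a sitemap declared for crawlers.
robots_txt = """\
User-agent: *
Disallow: /private/

Sitemap: https://example.com/sitemap.xml
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A page a crawler cannot fetch will never enter the index,
# and so can never become a supporting link in an AI answer.
print(parser.can_fetch("Googlebot", "https://example.com/products/"))   # True
print(parser.can_fetch("Googlebot", "https://example.com/private/x"))   # False
```

The same check, run against the live robots.txt of your own site, is one of the cheapest audits available: it catches accidental blanket disallows before they silently remove pages from the answer pool.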

Content quality and expertise. E-E-A-T (experience, expertise, authoritativeness, trustworthiness)—the set of criteria Google uses to evaluate the usefulness of content—not only has not lost relevance, but has become even more important. Answer systems prefer sources that can be verified and formulations that read like expert judgment rather than marketing copy. In the academic GEO study published by Princeton, Georgia Tech, and IIT Delhi, three of the nine tested optimization strategies performed best: adding specific statistical data, citing authoritative sources, and including expert quotations [5]. This is a direct continuation of what strong SEO has taught for years.

Structured data and markup. Schema.org, Open Graph, and machine-readable descriptions of products and services all help the system identify an entity more quickly and more accurately. In the answer environment, this layer becomes not merely a useful addition, but part of the language through which the brand speaks to the machine.
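What this "language for the machine" looks like in practice: a minimal schema.org Product description in JSON-LD, built here in Python for readability. All names, prices, and identifiers are illustrative.

```python
import json

# A minimal schema.org Product entity in JSON-LD (illustrative values).
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Inventory Service",
    "description": "Inventory management for small retail stores.",
    "brand": {"@type": "Brand", "name": "ExampleCo"},
    "offers": {
        "@type": "Offer",
        "price": "29.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Embedded in a page inside <script type="application/ld+json">...</script>,
# this gives an answer system an unambiguous, machine-readable entity
# instead of prose it has to interpret.
print(json.dumps(product, indent=2))
```

The point is not the specific fields but the principle: every attribute stated here in a controlled vocabulary is one fewer thing the system has to infer from free text.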

Link profile and external authority. External links still signal trust. But—and this is where the differences begin—the key issue is no longer link density so much as which sources those links come from. Independent reviews, industry media, and analytical publications carry more weight than links from directories and aggregators.

What stops working or works differently

Keyword optimization. In classical search, the marketer built a semantic core and worked to make a page maximally relevant to a particular phrase. In the answer environment, the user does not enter a keyword, but asks a question. And that question may be long, nuanced, and full of context and constraints. Google describes the technique used by AI Mode as “query fan-out”: the system breaks a single user question into subtopics and simultaneously looks for information on each of them [4][6]. That means a page perfectly tuned to one phrase may fail to appear in any of the fan-out subqueries if it does not cover adjacent aspects of the topic.
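To make the consequence concrete, here is a deliberately toy sketch of the fan-out logic. Google does not publish its actual decomposition algorithm; the subtopics and coverage check below are hypothetical, meant only to show why a page tuned to a single phrase can miss most of the fanned-out subqueries.

```python
# Conceptual illustration of "query fan-out" (NOT Google's actual algorithm):
# one broad question is decomposed into subqueries, and a page is a candidate
# source only for the subtopics its content actually covers.

def fan_out(question: str) -> list[str]:
    # Hypothetical decomposition; a real system derives subtopics dynamically.
    subtopics = ["pricing", "integrations", "team size", "analytics skills"]
    return [f"{question} - {topic}" for topic in subtopics]

def covers(page_topics: set[str], subquery: str) -> bool:
    return any(topic in subquery for topic in page_topics)

queries = fan_out("which CRM suits a small team")
page_topics = {"pricing", "integrations"}   # a page tuned to one narrow phrase
hits = [q for q in queries if covers(page_topics, q)]
print(f"covered {len(hits)} of {len(queries)} subqueries")  # covered 2 of 4
```

A page that addressed all four facets of the question would be a candidate for every subquery; the narrowly optimized one competes in only half of them.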

The fight for position in a list of links. In the world of ten blue links, position one was the ultimate goal. In the answer environment, there is no position as such. There is the fact of being present inside the synthesized answer—and there is the role the brand plays inside it: the brand may simply be mentioned, may be cited, or may define the frame of comparison itself. The data shows that there is a correlation between classical ranking and citation in AI answers, but it is far from linear. According to AirOps, pages in Google’s first position are cited by ChatGPT in 43% of cases—3.5 times more often than pages outside the top 20 [7]. But that also means that 57% of first-position pages are not cited at all. The relationship exists, but it is not automatic.

Traffic as the primary metric of success. If 69% of search queries end without a click, and among queries with AI Overview CTR falls to 0.6%, measuring success only through site visits means seeing an ever smaller portion of the picture. In the answer environment, a brand can shape a user’s decision without receiving a single visit to its site. That does not mean traffic has become unimportant. It means traffic has ceased to be the only currency. Seer Interactive found an important asymmetry: when a brand is cited inside an AI Overview, organic CTR is 35% higher than for uncited competitors on the same queries [3]. So citation inside the answer does not kill traffic—it redistributes it in favor of those the system regards as sufficiently trustworthy sources.

Content for volume. Many SEO strategies in recent years were built on large-scale content production: the more pages covered the semantic core, the broader the reach. In the answer environment, that logic breaks down. An answer system does not sift through hundreds of site pages looking for the answer; it selects the best fragment from the best source. Ten weak articles on one topic lose to one strong article—not because the algorithm “penalizes” quantity, but because when synthesizing an answer, the system selects the most convincing and verifiable source.

Where familiar optimization can backfire

The academic GEO study directly showed that traditional SEO tactics such as keyword stuffing performed weakly in a generative context [5]. But the harm can be less obvious than that.

The first risk is the language of self-presentation instead of the language of the task. Companies accustomed to SEO often describe themselves in the language of their own marketing categories: “leading platform,” “comprehensive ecosystem,” “innovative solution.” An answer system operates in the user’s language, and the user asks differently: “what should I choose for a small store,” “how is one solution different from another,” “what is better if the budget is limited.” If a brand has spent years optimizing content around its own terminology rather than the language of real demand, AI may simply fail to connect it to the user’s task.

The second risk is an excess of self-description without external confirmation. In the SEO world, a strong site could dominate branded queries while relying mainly on its own content. In the answer environment, the system looks for external confirmation. If a brand claims certain advantages but no independent source confirms them in other words, the answer system will be more cautious in recommending it. A strong SEO site without an external trust contour is a vulnerable structure.

The third risk is technical barriers for AI crawlers. Sites accustomed to managing crawl budget tightly sometimes block certain crawlers through robots.txt. According to Press Gazette, around 80% of major news publishers already block at least one crawler from an AI system [8]. For media companies protecting their content, that is a deliberate choice. But for a commercial brand that wants to be visible in answers, blocking can amount to voluntarily leaving the field.
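For orientation, this is what such a blocking policy typically looks like in robots.txt. The user-agent tokens shown are the publicly documented ones (GPTBot for OpenAI, Google-Extended for Google's AI training, CCBot for Common Crawl); names change over time, so verify them before relying on this sketch. A brand that wants answer visibility should audit its own robots.txt for rules like these that may have been added without a deliberate decision.

```
# Blocks AI-training crawlers while leaving ordinary search crawling open.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# Ordinary search crawlers remain allowed.
User-agent: *
Allow: /
```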

The new task: not ranking position, but the right to be cited

Taken together, this produces a picture in which SEO is not dying, but its task is being transformed. The goal used to be to reach the first page of results. The goal now is to become a source an AI system will want to rely on when forming an answer. That is a more difficult position, but also a more valuable one.

To do that, a company has to do several things that classical SEO did not always require. First, write in the language of the task rather than the language of the brand. If the user asks, “which service is suitable for a small team without an analyst,” and the brand’s site says “modular environment for intelligent data management,” the connection will not form. Second, support claims with concrete data and external links. The Princeton study showed that adding statistics increases the probability of citation by an AI system by 30–40% [5]. Third, build not only the site, but the entire source contour: external reviews, case studies, comparison materials, industry mentions, and presence in knowledge graphs. Fourth, rethink the metrics: alongside traffic, there should be citation share, frequency of appearing on the shortlist, and the quality of the brand’s role in the answer.
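The fourth point, citation share, is the simplest of the new metrics to operationalize: across a fixed set of neutral prompts, in what fraction of AI answers does the brand get cited at all? A minimal sketch, with entirely illustrative brand names and prompts:

```python
# "Citation share": the fraction of AI answers, over a fixed prompt set,
# that cite the brand at all. All data below is illustrative.

answers = [
    {"prompt": "best CRM for a small team",         "cited_brands": ["AcmeCRM", "OtherCo"]},
    {"prompt": "CRM with built-in analytics",       "cited_brands": ["OtherCo"]},
    {"prompt": "affordable CRM without an analyst", "cited_brands": ["AcmeCRM"]},
    {"prompt": "CRM comparison for retail",         "cited_brands": []},
]

def citation_share(brand: str, answers: list[dict]) -> float:
    cited = sum(1 for a in answers if brand in a["cited_brands"])
    return cited / len(answers)

print(f"{citation_share('AcmeCRM', answers):.0%}")  # 50%
```

The same structure extends naturally to the other metrics named above: shortlist frequency is the share of answers where the brand appears among recommended options, and role quality requires classifying how the brand is framed within each answer.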

For a small business owner, this may sound intimidating. But in practice, many of these actions do not require a huge budget. They require clarity: who you are, what you do better than others, who can confirm it, and what language your customer uses to describe the task. This is not a question of technical optimization, but of intellectual honesty in describing your own brand.


What seems well established

It seems well established that the technical foundation of SEO (indexability, speed, structured data) remains a necessary condition for appearing in AI answers. It is also well established that traditional tactics such as keyword stuffing do not work in a generative context, while specificity, verifiability, and external authority meaningfully increase the probability of citation.

What still remains uncertain

What is less firmly established is the precise degree of correlation between classical ranking and citation in answer systems beyond Google AI Overviews. For ChatGPT, Perplexity, and Copilot, that relationship has been studied less and, according to preliminary evidence, appears to work differently.

What this changes in practice

For a company, this means the SEO team needs to broaden its field of view: continue building the technical foundation, but stop treating position in a list of links as the end goal. The new goal is to become a source AI relies on when forming an answer. And that requires not only a site, but the full contour of confirmation around it.

Sources

[1] Similarweb. Zero-Click Search Research: 56% to 69% Growth (May 2024 – May 2025). 2025
[2] Pew Research Center. Google users are less likely to click on links when an AI summary appears in the results. 2025
[3] Seer Interactive. AIO Impact on Google CTR: September 2025 Update. 2025
[4] Google Search Central. AI Features and Your Website. 2026
[5] Aggarwal P., Murahari V., Rajpurohit T., Kalyan A., Narasimhan K., Deshpande A. GEO: Generative Engine Optimization. KDD '24, ACM, 2024
[6] Google Search Help. Get AI-Powered Responses with AI Mode in Google Search. 2026
[7] AirOps. Citation Analysis: SERP Position vs. ChatGPT Citations. 2026
[8] Press Gazette. Nearly 80% of top news publishers now block at least one AI training crawler. 2025

Related materials


Why a strong brand can still be invisible to AI systems

Explains the central paradox: a brand can be well known to people and yet poorly distinguishable for AI at the moment of real choice.


Which sources AI uses to form an opinion about a brand — and why the site is not the only hero

The layers from which AI assembles its opinion of a brand: the brand's own site, search context, independent reviews, user platforms — and why the site is no longer the sole arbiter.


Machine-readable commercial infrastructure: markup, product feeds, and catalogs as a language AI can understand

The data and markup layer that makes a brand and its products understandable to machines: catalogs, product feeds, structured descriptions, and their synchronization.


Mention, citation, and influence: three levels of brand presence in AI answers

Three levels of brand presence in AI answers — mention, citation, and influence — and why a single metric is not enough for diagnostics.

Next step

How AI100 measures the gap between SEO rank and AI visibility

A good Google position does not guarantee the brand will appear inside the answer. AI100 tests exactly that: whether the model names the company in neutral choice scenarios where the brand should appear on its own.

See how neutral scenarios work →
Or run your own study →