A brand can lose before it is ever compared with a competitor
A brand sells a loyalty management platform for restaurant chains. A marketer types into AI Mode: "How do I retain regular guests across a 30-location restaurant group?" — expecting to see the brand alongside its direct competitors. Instead, the system reframes the task as "CRM automation for HoReCa" — and returns a completely different set of players, half of which sell not loyalty tools but accounting systems with a mailing module. The brand's own map of its category turns out to be too narrow for the AI answer environment: here a brand can lose before it ever meets a competitor, simply because the system frames the question differently. The user asks not about a brand, but about a task; the machine translates the question into the language of a category; the category then breaks down into a set of criteria; and only within those criteria do other players appear, including ones the user did not initially have in mind. This is category drift: defeat not in a comparison between brands, but in the frame within which the system decides who counts as relevant.
That mechanism is built into the very nature of modern AI search systems. Google writes that AI Mode is especially useful for complex comparisons and nuanced questions, and that AI Overviews and AI Mode can use query fan-out, breaking a query into subtopics and additional searches across related aspects [1]. In commercial choice terms, this means the following: if a user asks a question about solving a task, the system is under no obligation to limit itself to the brands the user already knows. It can first determine the latent category, then identify the criteria, and only afterward assemble a set of alternatives that fit those criteria. In that logic, a brand can disappear long before any direct comparison with competitors begins.
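The fan-out step can be illustrated with a toy sketch. Everything here is invented for illustration — the category labels, the criteria, and the lookup tables; real systems infer the latent category with models, not dictionaries. The point is only the shape of the mechanism: task phrasing → latent category → criteria → sub-queries, with the brand's own category nowhere guaranteed to appear.

```python
# Hypothetical sketch of query fan-out. None of these labels, criteria,
# or mappings come from Google; they only illustrate the mechanism by
# which a task-phrased query is reframed before brands are compared.

TASK_TO_CATEGORY = {
    "retain regular guests": "CRM automation",      # latent category, not the brand's own
    "loyalty program": "loyalty management",
}

CATEGORY_CRITERIA = {
    "CRM automation": ["ease of launch", "mailing module", "guest database"],
    "loyalty management": ["points engine", "multi-location support"],
}

def fan_out(query: str) -> dict:
    """Map a task-phrased query to a latent category, then expand it into
    sub-queries per criterion -- the step where category drift happens."""
    category = next(
        (cat for phrase, cat in TASK_TO_CATEGORY.items() if phrase in query),
        "general software",
    )
    criteria = CATEGORY_CRITERIA.get(category, [])
    return {
        "category": category,
        "sub_queries": [f"{category}: {c}" for c in criteria],
    }

result = fan_out("How do I retain regular guests across 30 locations?")
```

Note that the query never mentions "CRM", yet every sub-query is now phrased in CRM vocabulary — which is exactly where a loyalty-platform brand drops out of the candidate set.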
Empirical evidence confirms that query type is decisive here. In The Rise of AI Search, researchers showed that AI answers occur far more often for questions than for navigational queries, where the user already knows where they want to go [2]. In other words, a brand is relatively safe when demand has already formed in its favor: a person types the company name, the product name, or something very close to a direct path to it. But in the early and middle stages of choice—where the user asks “what is best for this task,” “where should I start,” or “which tool fits these constraints”—the mediator has maximum freedom to define the frame. And that is precisely where the most painful loss of attention share occurs.
The significance of this mechanism is growing not just in theory, but in mass user behavior. McKinsey writes that roughly half of consumers already use AI-assisted search, and 44% of those users call it their primary source of information when making purchase decisions [5]. If that is the case, then category drift is no longer a rare interface error. It becomes a systematic point at which demand is redistributed. More and more often, the user receives the first frame of choice not from the brand, and not from their own path through search results, but from a mediator that decides what language the task will be described in at all.
Four forms of category drift
Category drift can occur in at least four forms. The first is a change in the name of the task. The brand believes it operates in one category, while the system sees the user need in another. A company may sell, for example, an “intelligent analytics environment for commerce,” while AI translates that into the simpler phrase “a reporting tool for online stores.” The second is a change in criteria. The brand built its positioning around accuracy, depth, and integration, while the machine decides that in this question the decisive criteria are ease of launch and a low barrier to entry. The third is a narrowing or widening of the set of alternatives. The system may unexpectedly mix in services from adjacent categories if, in its view, they answer the user’s request better. The fourth is the displacement of the brand by a description of a solution class. In that case, the answer may omit company names altogether and remain at the level of “it is better to choose tools with such-and-such properties.” Formally, the brand did not lose to a direct competitor. But in practice, it has already been pushed out of the moment of choice.
In the answer environment, this is especially dangerous for companies with complex or overly technical self-descriptions. A user rarely comes to an answer system with the brand’s ready-made terminology. They describe the problem in ordinary language: “I want something faster to implement,” “I’m not ready for a heavy integration,” “I need a tool the team can understand without a separate analyst,” “I’m looking for a solution for a midsize business, not a huge corporation.” If a brand has spent years speaking about itself in the language of internal categories, without connecting that language to the real language of demand, the system will easily place it in someone else’s bucket—or fail to see any reason to place it anywhere at all.
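The bucket-assignment risk above can be made concrete with a deliberately crude sketch. Real systems match meaning with embeddings rather than shared words, and both brand descriptions here are hypothetical; the toy scoring only shows why internal jargon that shares no vocabulary with the user's phrasing gives the mediator nothing to match on.

```python
# Toy illustration (assumed scoring, not any real system's): lexical
# overlap between a user's task phrasing and two self-descriptions of
# the same hypothetical product.

def overlap(a: str, b: str) -> int:
    """Count distinct lowercase words shared between two texts."""
    return len(set(a.lower().split()) & set(b.lower().split()))

user_phrasing = "a simple tool the team can use without a separate analyst"

internal_jargon = "intelligent analytics environment for commerce verticals"
demand_language = "a simple reporting tool your team can use without an analyst"

jargon_score = overlap(user_phrasing, internal_jargon)   # no shared vocabulary
bridge_score = overlap(user_phrasing, demand_language)   # bridges to demand language
```

The jargon-only description scores zero against the real phrasing of demand: from the mediator's point of view, there is no evidence the product belongs in this bucket at all.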
Why the shorter user path is more dangerous
In its materials on the new metrics of AI search, Microsoft writes that the user path becomes shorter, but at the same time more deeply integrated into the answer environment itself: intent is refined at every turn of the dialogue, and part of the choice happens before the site is ever visited [3]. That detail has direct bearing on category drift. The more of the decision is made inside the conversation, the greater the influence of the criteria and comparisons that the system itself brings to the surface. A user may begin the conversation with a relatively neutral task, but by the second or third turn end up inside a frame where entirely different classes of solutions are being considered.
The problem is compounded by the fact that brands often measure themselves through branded demand and draw false conclusions from it. If users who already know the company continue to find it by name, that creates an impression of resilience. But it is no accident that Google introduced a separate filter for branded and unbranded queries in Search Console [4]. In doing so, the platform effectively acknowledges that these are two different worlds with two different growth logics. A branded query shows the strength of knowledge about the company that has already formed. An unbranded query shows the brand’s ability to appear where the user has not yet decided whom they are asking about. In the AI answer environment, it is this second world that becomes the main battlefield.
How to rebuild brand language and measure drift
The practical response to this problem begins with rebuilding the brand’s own language. A brand needs to anchor itself not only in the category it considers correct, but also in adjacent user phrasings of the task. That means content, documentation, external reviews, comparison materials, and machine-readable descriptions need to carry not only the brand’s internal positioning, but also bridges to the real language of demand. If the product is suitable “for a quick start without lengthy implementation,” that should be stated as clearly as its architectural strengths. If the company wants to be associated not only with the large enterprise segment, that needs to be confirmed by external cases and descriptions, rather than remaining an internal brand aspiration.
For ai100, category drift could become an especially strong research series. For each industry, one can assemble a corpus of real user phrasings and see which categories different systems translate them into. How often does the brand remain inside the original category? How often does the system substitute a different set of criteria? What kinds of adjacent solutions get mixed into the comparison? How much changes when one or two phrases in the task wording are altered? Studies of this kind would quickly show that losing in AI rarely looks like “a competitor took our place.” Much more often, it is a quiet loss of the right to be included in the choice frame at all.
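A study of this kind can be sketched in a few lines. The `classify()` function below is a stub standing in for a call to a real answer system, and the phrasings and category labels are invented; the only load-bearing idea is the metric itself — the share of real-world task phrasings that the system routes outside the brand's own category.

```python
# Sketch of a drift measurement, under the assumption that each answer
# system can be queried per phrasing and yields some category label.
# classify() is a stub; a real study would call the actual systems.

from collections import Counter

PHRASINGS = [
    "how do I retain regular guests across 30 locations",
    "loyalty platform for a restaurant group",
    "keep customers coming back to my restaurants",
]

BRAND_CATEGORY = "loyalty management"

def classify(phrasing: str) -> str:
    """Stub for the answer system's category choice (hypothetical labels)."""
    return "loyalty management" if "loyalty" in phrasing else "CRM automation"

def drift_rate(phrasings, brand_category) -> float:
    """Share of phrasings the system routes outside the brand's own category."""
    labels = Counter(classify(p) for p in phrasings)
    drifted = sum(n for cat, n in labels.items() if cat != brand_category)
    return drifted / len(phrasings)

rate = drift_rate(PHRASINGS, BRAND_CATEGORY)
```

Run per system and per locale, the same metric also answers the follow-up questions above: rerun it after altering one or two phrases in the task wording and compare the rates.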
There is also a broader strategic conclusion. In classic search, a brand could afford to live, at least in part, inside its own category and wait for the user to arrive there on their own. In the answer environment, the mediator does not wait. It builds the bridge from problem to solution itself—and therefore decides which categories count as relevant. Consequently, the modern battle for visibility is a battle not only for mention of the brand, but also for the right to define the vocabulary of the task itself. Whoever loses in the vocabulary begins losing before products are even compared.
That is why category drift is one of the most important themes for a mature understanding of the AI market. It shows that a brand’s new kind of defeat can be almost invisible to classical analytics. The site has not lost positions for its own name. A competitor does not appear to have beaten the brand head-on. But demand has already leaked into a different frame, where choices are made according to someone else’s criteria and among someone else’s players. In this new environment, the winner is not only the one who is known, but above all the one who has managed to become the natural answer to the user’s task before the machine renames that task in its own way.
It seems well established that answer systems actively decompose complex questions into sub-tasks and can thereby change the frame of choice. That increases the risk that a brand will be compared with the wrong alternatives—or not named at all.
What is less well understood is how stable this effect is across industries and languages. Across different tasks, locales, and user phrasings, category drift may vary substantially.
The practical conclusion is that a brand needs to establish itself not only in its own self-description, but also in the language of the user’s tasks. Otherwise, the mediator will define the market frame on the brand’s behalf.
Sources