Concepts
Machine distinctness
An answer system's ability to consistently resolve a brand as the correct entity in the relevant scenario.
Practical meaning: Stronger than simple name recognition.
Functional visibility
A brand's participation in the actual moment of choice: in a shortlist, a recommendation, or a comparison.
Practical meaning: A brand may be well known yet functionally invisible.
Source contour
The set of owned, external, user-generated, and structured sources from which the system forms an opinion about the brand.
Practical meaning: The key object of audit.
Mention
The fact that the brand name or entity appears in the answer.
Practical meaning: The weakest level of presence.
Citation
A situation in which the brand or a source associated with it becomes evidentiary support for the answer.
Practical meaning: Stronger than a simple mention.
Influence
A brand's ability to define the frame of the answer: the criteria, the category, the list of alternatives, and the language of comparison.
Practical meaning: The most valuable and the rarest level.
Update lag
The delay between a change in a fact about the brand and its stable appearance in a machine-generated answer.
Practical meaning: The lag is cumulative: crawling and indexing, retrieval refresh, and (slowest of all) model retraining each add their own delay.
Machine readability
The degree to which the properties of a brand and product can be reliably extracted from structured and synchronized data.
Practical meaning: Especially important in commercial scenarios.
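Machine readability is easiest to see in structured data markup. The sketch below builds a schema.org Product block in JSON-LD, the kind of markup answer systems can parse reliably; the product and prices are hypothetical and used only for illustration:

```python
import json

def product_jsonld(name, brand, price, currency="USD"):
    """Build a minimal schema.org Product block; all fields are illustrative."""
    return {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "brand": {"@type": "Brand", "name": brand},
        "offers": {
            "@type": "Offer",
            "price": str(price),
            "priceCurrency": currency,
        },
    }

# Hypothetical product used only for illustration.
block = product_jsonld("Acme Travel Mug", "Acme", 24.9)
print(json.dumps(block, indent=2))
```

The point of the sketch: properties expressed this way (name, brand, price) can be extracted without guessing, whereas the same facts buried in free prose must be inferred.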
Category drift
The displacement of a brand into someone else's choice frame, or the substitution of its market with another task category.
Practical meaning: It often happens before direct brand comparison.
Loss before the click
The influence over the user's choice that a brand loses before the user ever clicks through to its site.
Practical meaning: A new economic unit of analysis.
Answer environment
An interaction mode in which the answer system does not return a list of documents, but immediately produces a synthesized answer to the user's question.
Practical meaning: It is in the answer environment that a brand competes not for rank in the results, but for a place inside the finished answer.
GEO (Generative Engine Optimization)
A set of practices aimed at making a brand appear more often and more accurately in the answers of generative answer systems. It differs from classic SEO in that it optimizes not for position in a list of links, but for participation in a synthesized answer.
Practical meaning: A market term already used by competitors and clients. Useful to know, but important to remember that behind the acronym stands the same task — machine distinctness.
LLMO (Large Language Model Optimization)
A synonym for GEO that emphasizes optimization specifically for language models, rather than for search systems with an AI layer.
Practical meaning: It appears in market materials less often than GEO, but it describes the same task.
RAG (Retrieval-Augmented Generation)
An architectural approach in which a language model retrieves relevant documents from an external source (web search, knowledge base, catalog) before generating an answer and uses them as context.
Practical meaning: RAG is precisely what explains why web sources and structured data affect the model's answer in real time, not only through training.
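The RAG loop can be sketched in a few lines. Retrieval here is naive keyword overlap and `generate` is replaced by printing the assembled prompt; a real system would use a search index or vector store and call a language model. The corpus and query are invented for illustration:

```python
# Minimal RAG sketch: retrieve relevant documents, then feed them
# to the model as context instead of relying on training data alone.
def retrieve(query, documents, k=2):
    """Rank documents by how many lowercase words they share with the query."""
    words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: -len(words & set(d.lower().split())))
    return scored[:k]

def build_prompt(query, documents):
    """Assemble the retrieved documents into the model's context window."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

docs = [
    "Acme mugs keep drinks hot for 12 hours.",   # hypothetical corpus
    "The weather in Oslo is mild in summer.",
    "Acme ships to 40 countries.",
]
print(build_prompt("How long do Acme mugs keep drinks hot?", docs))
```

This is why fresh web sources and structured data can change an answer immediately: they enter through the retrieval step, without waiting for the model to be retrained.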
Zero-click
A situation in which the user gets a sufficient answer directly in the interface of a search or answer system and does not visit any external site.
Practical meaning: In zero-click, the brand either participates in forming the answer or loses contact with the user entirely.
Knowledge graph
A structured database of entities and the relationships between them, used by search and answer systems to identify, classify, and describe real-world objects.
Practical meaning: A brand's presence in a knowledge graph (for example, Google Knowledge Graph or Wikidata) increases the likelihood of correct entity identification in an answer.
Entity disambiguation
A system's ability to distinguish one entity from another when names, categories, or contexts coincide or are similar.
Practical meaning: If a brand name is ambiguous or overlaps with a generic concept, the model may systematically confuse it with another entity.
AI Overview
A block of synthesized answer content that Google displays above the standard search results. It is generated by an AI model based on web sources.
Practical meaning: For many categories, AI Overview has already become the first point of contact between the user and the information. The brand either enters this block or ends up below it.
Hallucination
A situation in which a language model generates a claim that is unsupported by sources or factually wrong, yet sounds confident.
Practical meaning: A hallucination can either help a brand (the model "invents" a non-existent advantage for it) or harm it (it attributes someone else's weakness to it). Both are dangerous.
Answer bubble
An effect in which the same brand looks materially different across the answers of different answer systems because of differences in training data, retrieval architecture, and synthesis logic.
Practical meaning: There is no single, unified AI visibility. Diagnosis must be conducted across several platforms.
Agentic choice
A scenario in which an AI agent independently searches, compares, and makes a decision on behalf of the user — without showing an intermediate list of options.
Practical meaning: In an agentic scenario, the brand competes not for human attention but for machine preference. This is the next frontier after the answer environment.
Confidence grade
An internal assessment of research run quality. It is calculated from the width of the confidence interval, the percentage of broken prompts, and the dispersion across scenario families. Grades: A (high) — narrow interval, stable result. B (good) — reliable result with moderate variation. C (moderate) — noticeable sensitivity to corpus composition. D (exploratory) — unstable result; a repeat run is recommended.
Practical meaning: The grade shows how stable the measurement is. At C or below, a repeat run is recommended before decisions are made.
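The grading logic described above can be sketched as a rule cascade over the three quality signals. The thresholds below are illustrative assumptions, not the actual AI100 formula:

```python
def confidence_grade(ci_width, broken_share, dispersion):
    """
    Map run-quality signals to a grade A-D.
    ci_width:      width of the confidence interval, in score points
    broken_share:  fraction of prompts that failed or were malformed
    dispersion:    spread of results across scenario families
    All thresholds are hypothetical, for illustration only.
    """
    if ci_width <= 3 and broken_share <= 0.02 and dispersion <= 0.05:
        return "A"  # narrow interval, stable result
    if ci_width <= 6 and broken_share <= 0.05 and dispersion <= 0.10:
        return "B"  # reliable result with moderate variation
    if ci_width <= 12 and broken_share <= 0.10:
        return "C"  # noticeable sensitivity to corpus composition
    return "D"      # exploratory; a repeat run is recommended

print(confidence_grade(2.5, 0.01, 0.03))
```

Under these assumed thresholds, a run only earns a grade when all of its signals clear the bar, so a single weak signal pulls the whole run down.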
Native mode
A mode in which the model answers only from its training data, without internet access. It shows how firmly the brand is embedded in the model's base knowledge.
Practical meaning: If the native score is high, the brand is already embedded in the model. If it is low, the model does not know the brand without external prompts.
Web mode
A mode in which the model additionally searches for information on the internet before answering. It shows the extent to which external sources strengthen or weaken the brand's position.
Practical meaning: The difference between web and native shows whether the site and the external environment help the brand or hold it back.
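The web-versus-native comparison reduces to a simple delta per brand. A minimal sketch with hypothetical scores on a 0-100 scale:

```python
def mode_delta(native_score, web_score):
    """Positive delta: the external environment strengthens the brand.
    Negative delta: web sources hold it back relative to base knowledge."""
    return web_score - native_score

# Hypothetical (native, web) score pairs for two invented brands.
runs = {"brand_x": (62, 74), "brand_y": (80, 71)}
for brand, (native, web) in runs.items():
    d = mode_delta(native, web)
    verdict = "web helps" if d > 0 else "web hurts" if d < 0 else "neutral"
    print(f"{brand}: native={native} web={web} delta={d:+d} ({verdict})")
```

In this invented example, brand_x gains from its external environment while brand_y is known to the model but pulled down by what retrieval finds on the web.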
Visibility Language Field (VLF)
A phenomenon in which an AI model assembles a different competitive landscape for a brand depending on the language of the prompts. When the prompt language changes, some competitors disappear from responses, others appear, and rankings shift radically. For a dominant category brand the headline score fluctuates moderately, but for less prominent competitors the spread can reach dozens of points — up to complete invisibility in certain languages. VLF arises from asymmetry of training corpora, differences in web sources, and the formation of separate associative graphs for each language. The model may know a brand equally well in every language yet recommend it differently: placing it first in one language while yielding the top spot to a competitor that did not exist in the first language at all.
Practical meaning: Brand knowledge does not guarantee a recommendation. A brand may be equally known to the model in five languages yet make the shortlist in only two. Competitors also differ — a strategy that works against one set of rivals may be useless in another language. A separate study is recommended for each target-market language.
Related materials
Foreword and guide to the updated AI100 corpus
How the AI100 research library is organized: article structure, material types, difficulty levels, reading paths, and navigation.
Open the material →
Why a strong brand can still be invisible to AI systems
Explains the central paradox: a brand can be well known to people and yet poorly distinguishable for AI at the moment of real choice.
Open the material →
Where these metrics work inside the report
Presence share, influence index, and external authority coefficient are not abstract formulas. In the final AI100 report they become concrete weights that build the main visibility score.
See how the score is built →