Not a blog, but a research library about how AI sees brands
AI100 articles, terms, and metrics — from foundational concepts to diagnostic research. Navigate by task, reading path, or reference.
| Material | Type | Level | Min |
|---|---|---|---|
| Foreword and guide to the updated AI100 corpus. How the AI100 research library is organized: article structure, material types, difficulty levels, reading paths, and navigation. | Guide | Introductory | 4 |
| AI100 Glossary of Terms and Metrics. The canonical dictionary of all AI100 terms, metrics, and concepts. Definitions, formulas, and the practical meaning of each indicator. | Reference | Introductory | 8 |
| Mini-research card for the AI100 library. An observation card template for recording data from each AI100 test run — so that individual responses build into a research history. | Observation template | Introductory | 4 |
| Why a strong brand can still be invisible to AI systems. Explains the central paradox: a brand can be well known to people and yet poorly distinguishable for AI at the moment of real choice. | Foundational text | Introductory | 7 |
| What AI really “knows” about a company: the brand’s internal representation. Examines how a language model holds a brand internally: not as a card with a description, but as a probabilistic network of categories, attributes, and associations. | Foundational text | Intermediate | 7 |
| Which sources AI uses to form an opinion about a brand — and why the site is not the only hero. The layers from which AI assembles its opinion of a brand: the brand's own site, search context, independent reviews, user platforms — and why the site is no longer the sole arbiter. | Foundational text | Intermediate | 7 |
| From search engine to AI intermediary: how the customer path is changing. How the AI intermediary changes the customer journey: choice and comparison increasingly happen before the click, and the first synthesized answer becomes the frame for decision. | Foundational text | Introductory | 8 |
| What the market offers for AI visibility growth — and where the hidden costs live. A map of approaches the market uses to increase AI visibility: what genuinely helps and what merely creates an illusion of control. | Foundational text | Intermediate | 7 |
| The economics of invisibility: how a company loses demand before the first click. How to translate the problem of AI invisibility from an abstract conversation about traffic into the language of early economic losses and manageable metrics. | Foundational text | Introductory | 7 |
| Mention, citation, and influence: three levels of brand presence in AI answers. Three levels of brand presence in AI answers — mention, citation, and influence — and why a single metric is not enough for diagnostics. | Research article | Intermediate | 8 |
| The “answer bubble”: why the same brand looks different in ChatGPT, Google, Copilot, and other systems. Why there is no single AI visibility: the same brand can look noticeably different across ChatGPT, Google AI Overviews, Copilot, and Perplexity. | Research article | Intermediate | 7 |
| Update lag: how quickly AI systems change their view of a company after news, a product launch, or a price change. Why there is a time gap between a fact changing about a brand and its stable appearance in machine answers — and how to observe this lag in practice. | Research article | Advanced | 7 |
| Access economics: crawling, indexing, training, and the brand’s right to manage its presence. The modes that make up AI access to brand content — crawling, indexing, training, licensing — and why this is already an economic question. | Research article | Advanced | 7 |
| Machine-readable commercial infrastructure: markup, product feeds, and catalogs as a language AI can understand. The data and markup layer that makes a brand and its products understandable to machines: catalogs, product feeds, structured descriptions, and their synchronization. | Research article | Advanced | 7 |
| External authority versus the brand’s own site: which sources really create the right to be recommended. Which external signals and independent sources help a brand earn the right to be recommended in AI answers — and why the brand's own site without them is not enough. | Research article | Intermediate | 7 |
| Category drift: how a brand loses not only to a competitor, but to someone else’s frame of choice. How a brand can lose not to a competitor but to a different choice frame: AI shifts the user's task into another category and assembles a different set of alternatives. | Research article | Intermediate | 7 |
| SEO and AI visibility: what carries over, what does not, and where familiar optimization can backfire. What transfers from classic SEO to the AI answer environment, what stops working, and what new requirements emerge. | Foundational text | Introductory | 7 |
| Practical action map: how to strengthen a brand’s machine distinctness. Six sequential steps for improving AI visibility: from identity verification through language reassembly and trust contour to monitoring. | Guide | Intermediate | 8 |
| Observation from a run: how site language made a brand invisible in its own category. An observation from a real AI100 test run: a brand with strong SEO turned out to be invisible to AI because of a gap between the site language and the query language. | Field note | Intermediate | 4 |
| Visibility through the lens of language and geography. Why the same brand looks different in AI answers across different languages and countries — and what practical consequences follow. | Research article | Intermediate | 7 |
| Multimodal distinctness: when a brand is searched not with words. How visual search, voice queries, and multimodal interfaces change brand visibility requirements — and what transfers from text optimization to the world of images and voice. | Research article | Advanced | 7 |
| ChatGPT Instant Checkout: purchasing without leaving the conversation. OpenAI launched purchases directly inside ChatGPT — Instant Checkout. An analysis of what changed and how it affects brand visibility. | Update | Introductory | 2 |
| When the buyer is not a person but their agent. How brand visibility changes when an autonomous AI agent — one that searches, compares, and decides on its own — stands between the company and the buyer. | Research article | Intermediate | 7 |
| Wikipedia, Wikidata, and Knowledge Graph: the invisible foundation of AI visibility. Why brand presence in Wikipedia, Wikidata, and Knowledge Graph has become a practical lever for AI visibility — and how to work with it. | Foundational text | Intermediate | 5 |
| Visibility Language Field: why the same brand lives in different competitive worlds. When we ran the same brand across five languages, we expected noise — small score fluctuations. Instead, we found that when the language changes, what changes is not the brand's score but the entire market around it. | Field note | Intermediate | 7 |
Where to begin
A starting route for a first pass at the topic
Route for founders
A frame for risk, demand, and brand position
Route for marketers
Sources, citation, and practical diagnostics
Route for technical leads
Metrics, infrastructure, and control over data availability
Route for researchers
The corpus as an observation system
Agency and consultant path
Methodological foundation for selling AI visibility services
Full course
All 24 materials from fundamentals to advanced diagnostics
Concepts
Terms used in AI100 articles and reports. A consistent vocabulary makes the research easier to read and the results easier to compare.
- Machine distinctness
- An answer system's ability to consistently resolve a brand as the correct entity in the relevant scenario. Stronger than simple name recognition.
- Functional visibility
- A brand's participation in the actual moment of choice: in a shortlist, a recommendation, or a comparison. A brand may be well known yet functionally invisible.
- Source contour
- The set of owned, external, user-generated, and structured sources from which the system forms an opinion about the brand. The key object of audit.
- Mention
- The fact that the brand name or entity appears in the answer. The weakest level of presence.
- Citation
- A situation in which the brand or a source associated with it becomes evidentiary support for the answer. Stronger than a simple mention.
- Influence
- A brand's ability to define the frame of the answer: the criteria, the category, the list of alternatives, and the language of comparison. The most valuable and the rarest level.
- Update lag
- The delay between a change in a fact about the brand and its stable appearance in a machine-generated answer. It is made up of several stages.
- Machine readability
- The degree to which the properties of a brand and product can be reliably extracted from structured and synchronized data. Especially important in commercial scenarios.
- Category drift
- The displacement of a brand into someone else's choice frame, or the substitution of its market with another task category. It often happens before direct brand comparison.
- Loss before the click
- The participation in shaping choice that a brand loses before the user even clicks through to the site. A new economic unit of analysis.
- Answer environment
- An interaction mode in which the answer system does not return a list of documents, but immediately produces a synthesized answer to the user's question. It is in the answer environment that a brand competes not for rank in the results, but for a place inside the finished answer.
- GEO (Generative Engine Optimization)
- A set of practices aimed at making a brand appear more often and more accurately in the answers of generative answer systems. It differs from classic SEO in that it optimizes not for position in a list of links, but for participation in a synthesized answer. A market term already used by competitors and clients. Useful to know, but important to remember that behind the acronym stands the same task — machine distinctness.
- LLMO (Large Language Model Optimization)
- A synonym for GEO that emphasizes optimization specifically for language models, rather than for search systems with an AI layer. It appears in market materials less often than GEO, but it describes the same task.
- RAG (Retrieval-Augmented Generation)
- An architectural approach in which a language model retrieves relevant documents from an external source (web search, knowledge base, catalog) before generating an answer and uses them as context. RAG is precisely what explains why web sources and structured data affect the model's answer in real time, not only through training.
- Zero-click
- A situation in which the user gets a sufficient answer directly in the interface of a search or answer system and does not visit any external site. In zero-click, the brand either participates in forming the answer or loses contact with the user entirely.
- Knowledge graph
- A structured database of entities and the relationships between them, used by search and answer systems to identify, classify, and describe real-world objects. A brand's presence in a knowledge graph (for example, Google Knowledge Graph or Wikidata) increases the likelihood of correct entity identification in an answer.
- Entity disambiguation
- A system's ability to distinguish one entity from another when names, categories, or contexts coincide or are similar. If a brand name is ambiguous or overlaps with a generic concept, the model may systematically confuse it with another entity.
- AI Overview
- A block of synthesized answer content that Google displays above the standard search results. It is generated by an AI model based on web sources. For many categories, AI Overview has already become the first point of contact between the user and the information. The brand either enters this block or ends up below it.
- Hallucination
- A situation in which a language model generates a claim that is unsupported by sources or factually wrong, yet sounds confident. A hallucination can either help a brand (the model "invents" a non-existent advantage for it) or harm it (it attributes someone else's weakness to it). Both are dangerous.
- Answer bubble
- An effect in which the same brand looks materially different across the answers of different answer systems because of differences in training data, retrieval architecture, and synthesis logic. There is no single, unified AI visibility. Diagnosis must be conducted across several platforms.
- Agentic choice
- A scenario in which an AI agent independently searches, compares, and makes a decision on behalf of the user — without showing an intermediate list of options. In an agentic scenario, the brand competes not for human attention but for machine preference. This is the next frontier after the answer environment.
- Confidence Grade
- An internal assessment of research run quality. It is calculated from the width of the confidence interval, the percentage of broken prompts, and the dispersion across scenario families. Grades: A (high) — narrow interval, stable result. B (good) — reliable result with moderate variation. C (moderate) — noticeable sensitivity to corpus composition. D (exploratory) — unstable result; a repeat run is recommended. The grade shows how stable the measurement is. At C or below, a repeat run is recommended before decisions are made.
- Native mode
- A mode in which the model answers only from its training data, without internet access. It shows how firmly the brand is embedded in the model's base knowledge. If the native score is high, the brand is already embedded in the model. If it is low, the model does not know the brand without external prompts.
- Web mode
- A mode in which the model additionally searches for information on the internet before answering. It shows the extent to which external sources strengthen or weaken the brand's position. The difference between web and native shows whether the site and the external environment help the brand or hold it back.
- Visibility Language Field (VLF)
- A phenomenon in which an AI model assembles a different competitive landscape for a brand depending on the language of the prompts. When the prompt language changes, some competitors disappear from responses, others appear, and rankings shift radically. For a dominant category brand the headline score fluctuates moderately, but for less prominent competitors the spread can reach dozens of points — up to complete invisibility in certain languages. VLF arises from asymmetry of training corpora, differences in web sources, and the formation of separate associative graphs for each language. Knowledge does not guarantee a recommendation: the model may know a brand equally well in five languages yet make the shortlist in only two, placing it first in one language while yielding the top spot elsewhere to a competitor that did not exist in the first language at all. Competitor sets also differ — a strategy that works against one set of rivals may be useless in another language. A separate study is recommended for each target-market language.
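The Confidence Grade entry above can be read as a small scoring function. The sketch below is illustrative only: the three inputs match the definition (confidence-interval width, share of broken prompts, dispersion across scenario families), but the blending weights and numeric thresholds are invented for the example; AI100's real cutoffs are not published in this text.

```python
# Hypothetical sketch of the Confidence Grade logic. The inputs mirror
# the definition above; the weights and thresholds below are illustrative
# assumptions, not AI100's actual values.

def confidence_grade(ci_width: float, broken_share: float, dispersion: float) -> str:
    """Return A/B/C/D from run-quality signals, each on a 0..1 scale."""
    # Blend the three instability signals into one penalty score.
    penalty = 0.5 * ci_width + 0.3 * broken_share + 0.2 * dispersion
    if penalty < 0.10:
        return "A"  # narrow interval, stable result
    if penalty < 0.25:
        return "B"  # reliable result with moderate variation
    if penalty < 0.45:
        return "C"  # noticeable sensitivity to corpus composition
    return "D"      # unstable result; repeat run recommended

print(confidence_grade(0.05, 0.02, 0.08))  # → A (stable run)
print(confidence_grade(0.50, 0.50, 0.50))  # → D (repeat before deciding)
```

Whatever the real thresholds are, the practical rule from the glossary holds: at C or below, repeat the run before acting on the result.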
Report metrics
Quantitative metrics that appear in the AI100 report. Main score weights determine the AI Visibility Score. Diagnostic weights apply only to the explanatory layer.
Main score weights
| Metric | Weight | Description |
|---|---|---|
| Mention Rate | 24% | Share of neutral scenarios where the brand appears in the model's answer at all. A low Mention Rate means the model doesn't recall the brand without a direct hint. This is the most basic indicator: the brand either exists for the model or it doesn't. |
| Top-3 Rate | 14% | How often the brand lands in the top three of the answer rather than merely appearing near the bottom. High Top-3 with low Top-1 means the brand is visible but doesn't dominate. The difference between 'being on the list' and 'leading the list' is the difference between visibility and influence. |
| Top-1 Rate | 10% | How often the model names the brand first — making it the top pick. A consistent Top-1 means the brand has become the model's default recommendation in the category. This is the strongest dominance signal. |
| Avg Position | 15% | Weighted average position of the brand across all answers where it appears. The closer to 1, the higher the brand sits on average. Useful for tracking progress: a 0.5 improvement between runs is a real shift. |
| Prompt Coverage | 14% | In what share of all corpus scenarios the brand appears at least once. Unlike Mention Rate, this counts unique scenarios rather than individual answers. Low coverage means the brand is visible only in one type of question. |
| Response Share | 10% | What share of all mentions in answers belongs to this brand. If there are 10 players in the category and Response Share is 10%, the brand gets exactly its fair share. Above 15% means the brand pulls disproportionate attention. |
| Text Share | 5% | How much text the model devotes to the brand in its answer. A brand can be mentioned in one word or given a full paragraph — Text Share measures that difference. High Text Share with low Mention Rate means the brand appears rarely, but when it does, the model talks about it at length. |
| Domain Citation | 8% | How often the model cites the brand's official domain in web mode. This shows how useful the site is to the model as a source. Low Domain Citation with high Mention Rate means the model knows the brand but doesn't use its site. |
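Mechanically, the main score weights describe a weighted sum. The sketch below is a minimal illustration, assuming each metric has already been normalized to a 0..1 scale; only the weights come from the table, while the metric key names and the 0..100 output scale are assumptions. Note that Avg Position improves toward 1, so it would need inverting into a "goodness" value before entering such a sum.

```python
# Illustrative roll-up of the main score weights into an AI Visibility Score.
# Weights are from the table above; key names and the 0..1 input scale are
# assumptions for this sketch.

MAIN_WEIGHTS = {
    "mention_rate":    0.24,
    "top3_rate":       0.14,
    "top1_rate":       0.10,
    "avg_position":    0.15,  # assumed pre-inverted: a rank closer to 1 -> value closer to 1
    "prompt_coverage": 0.14,
    "response_share":  0.10,
    "text_share":      0.05,
    "domain_citation": 0.08,
}

def visibility_score(metrics: dict[str, float]) -> float:
    """Weighted sum of normalized (0..1) metrics, scaled to 0..100."""
    return 100 * sum(w * metrics[name] for name, w in MAIN_WEIGHTS.items())

# The weights sum to 100%, so a brand at the midpoint on every metric scores 50.
midpoint = {name: 0.5 for name in MAIN_WEIGHTS}
print(round(visibility_score(midpoint), 1))  # → 50.0
```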
Diagnostic weights
| Metric | Weight | Description |
|---|---|---|
| Recommendation Rate | 30% | Share of answers where the model explicitly recommends the brand, not just mentions it. 'I'd recommend X' and 'X exists' are different things. A high Recommendation Rate means the model considers the brand worth recommending, not just known. |
| Recommendation Strength | 25% | How convincingly the model phrases its recommendation. 'You could look at X' and 'X is the best choice for this task' carry different weight. This metric separates a polite mention from a confident recommendation. |
| Centrality | 20% | Whether the brand is the main topic of the answer or just one item on a list. High centrality means the model builds its answer around the brand. Low means the brand is mentioned but the answer isn't about it. |
| Positive Tone | 15% | Share of answers with explicitly positive tone toward the brand. A model can recommend neutrally or enthusiastically — Positive Tone captures that difference. Consistently negative tone signals a problem in how the model perceives the brand. |
| Argument Quality | 10% | Whether the model supports its recommendation with concrete arguments or sticks to generalities. Quality argumentation means the model can explain why this particular brand. This is the most mature level of visibility. |
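The diagnostic layer has the same weighted-sum shape, but per the text it only explains the result and does not feed the AI Visibility Score. A hedged sketch, with weights taken from the table and everything else (key names, 0..1 inputs, the example values) assumed:

```python
# Illustrative weighted sum over the diagnostic layer. Weights come from
# the table above; key names and example values are assumptions. This
# layer is explanatory only and does not enter the main score.

DIAG_WEIGHTS = {
    "recommendation_rate":     0.30,
    "recommendation_strength": 0.25,
    "centrality":              0.20,
    "positive_tone":           0.15,
    "argument_quality":        0.10,
}

def diagnostic_score(metrics: dict[str, float]) -> float:
    """Weighted sum of 0..1 diagnostic metrics, scaled to 0..100."""
    return 100 * sum(w * metrics[k] for k, w in DIAG_WEIGHTS.items())

# Pattern worth watching for: a brand that is mentioned often (decent main
# score) but rarely and weakly recommended will show a low score here.
weakly_recommended = {
    "recommendation_rate":     0.1,
    "recommendation_strength": 0.2,
    "centrality":              0.3,
    "positive_tone":           0.6,
    "argument_quality":        0.4,
}
print(round(diagnostic_score(weakly_recommended), 1))
```

A gap of this shape is exactly what the Recommendation Rate description flags: "X exists" and "I'd recommend X" are different things.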
Research scenarios
Each scenario tests a different type of visibility
- First appearance
- The user asks about a category for the first time — no names, no hints. The model must recall the brand on its own, relying solely on internal knowledge. This is the baseline test: does the brand exist for the machine as part of the category.
- Shortlist
- The user asks the model to suggest several options for comparison. The test shows whether the brand makes the consideration set — or the model compiles a short list without it.
- Comparison
- The model compares several solutions against user criteria. The test checks how convincingly the brand holds its position in a direct side-by-side — and what arguments the model finds in its favor.
- Ranking
- The model builds an explicit ranking by given criteria. The test shows where the brand lands in the hierarchy and how consistently it holds the upper part of the list.
- Trust
- The user asks whether a solution can be trusted — is it reliable, safe, proven. The test measures whether the model associates the brand with dependability and is willing to recommend it as a safe choice.
- Expertise
- The model answers a question that requires deep product or industry knowledge. The test checks whether the model sees expertise signals in the brand — specific details, specialization, unique competencies.
- Task search
- The user describes a specific task or problem without naming products. The model selects a fitting solution on its own. The test shows whether the model recalls the brand as an answer to a practical need.
- Topic navigation
- The user explores a topic in several steps — from general questions to specific ones. The test checks whether the brand surfaces as the conversation deepens and in what context the model mentions it.
- Agent choice
- An AI agent (automated system) selects a solution without human involvement — for example, for integration or automation. The test simulates a scenario where the brand must be chosen by a machine for a machine.