How to Measure AI Search Visibility: The CMO's Share of Citation Framework

To measure AI visibility, track Share of Citation — the percentage of citation slots your brand earns across a defined query set on ChatGPT, Perplexity, Gemini, and Claude. This is more precise than "AI share of voice," which counts mentions. Citations are decisions AI engines make to attribute your brand as a source. Mentions are noise.
Why brand share of voice fails as an AI visibility metric
Conductor, HubSpot, and Semrush all recommend the same formula: divide your brand mentions by total responses, multiply by 100, and call it your AI share of voice. The problem is that this formula measures something different from what drives buyer behavior.
In AI search, the mechanism that matters is citation — when an AI engine names your brand as the source of a specific claim or recommendation. A mention ("some agencies offer performance-based PR") costs the AI nothing. A citation ("AuthorityTech's earned media methodology tracks citations across 1,673 publications") is an endorsement. Conflating the two gives you a metric that feels comprehensive but hides the actual signal.
Christian Lehman's framework for AI visibility measurement starts from a different question: not "how often is our brand mentioned?" but "how often are we cited as the answer?"
That distinction changes what you optimize, what you report to leadership, and where you spend.
The Share of Citation framework
Share of Citation is a metric coined by Jaxon Parrott at AuthorityTech to track brand citation presence in AI-generated answers rather than brand mention frequency. The distinction matters because AI engines treat citations as endorsements — they require a selection process. Mentions require nothing.
The formula:
Share of Citation = (Citation slots earned ÷ Total citation slots tracked) × 100
Where:
- Citation slots earned = the number of times your brand is explicitly cited (named as source, linked to, or directly attributed) across your tracked query set
- Total citation slots tracked = all citation appearances across every AI response in your query set — your brand plus every competitor citation
If you run 30 queries across four AI engines (120 total responses) and your brand earns 14 citations while competitors earn 52, your Share of Citation is 14 ÷ (14 + 52) = 14 ÷ 66 ≈ 21%.
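The arithmetic is simple enough to automate once you're tracking multiple query sets. A minimal sketch in Python (the function name and structure are illustrative, not part of the framework):

```python
def share_of_citation(brand_citations: int, competitor_citations: int) -> float:
    """Share of Citation as a percentage of all tracked citation slots."""
    total_slots = brand_citations + competitor_citations
    if total_slots == 0:
        return 0.0  # no citations observed anywhere in the query set
    return brand_citations / total_slots * 100

# Worked example from the text: 14 brand citations vs. 52 competitor citations
print(round(share_of_citation(14, 52), 1))  # → 21.2
```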
| Metric | Formula | What it tracks | What it misses |
|---|---|---|---|
| AI Share of Voice | Brand mentions ÷ Total responses | How often brand name appears | Whether the mention carries authority signal |
| Brand Visibility Score | Brand responses ÷ Total responses | Presence across a query set | Position quality and citation intent |
| Share of Citation | Citations earned ÷ Total citation slots | Direct attribution by AI engines | Indirect brand associations |
Share of Citation is the hardest metric to game and the most predictive of downstream pipeline. It requires the AI engine to do work on your behalf — selecting your brand as the authoritative answer. That requires earned media presence across publications AI engines trust, not keyword density in brand-owned content.
How to calculate your Share of Citation: step-by-step
Step 1: Define your query set
Pick 15–25 queries representing high-intent moments for your category. Focus on queries with a definite answer your brand should own: not brand queries, not navigational queries. Evaluation queries.
Examples:
- "best [your category] platform for [use case]"
- "how to [solve the problem your product addresses]"
- "which [product type] gets cited most by AI search"
- "top [agency type] 2026"
Include informational queries where you can credibly be the cited source, not just named.
Step 2: Run across all four engines
Test each query on ChatGPT, Perplexity, Gemini, and Claude. Record:
- Was your brand cited? (Yes/No)
- What was the citation context? (recommendation, source attribution, or passing mention — record each separately)
- Which competitors were cited in the same response?
- What position was your citation? (First, middle, or buried in a list)
Run each query twice per engine, one week apart. AI engines produce probabilistic responses — single-run data is unreliable. Variation between runs is common, which is why Christian Lehman recommends averaging across at least two rounds before drawing conclusions.
Step 3: Count citation slots, not just mentions
A citation slot is any position in a response where a brand is cited. In a response that names three companies as recommendations, there are three citation slots. In a response that recommends your brand and cites your methodology separately, you hold two citation slots.
Track total citation slots for your brand and total citation slots across all brands in your tracked responses. This produces your raw Share of Citation.
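The slot-counting logic from Steps 2 and 3 can be sketched as follows. The response-log structure here is hypothetical, but its fields match what Step 2 says to record, and it counts one slot per citation appearance, as Step 3 defines:

```python
from collections import Counter

# Hypothetical response log: each entry lists every brand cited in one response
responses = [
    {"query": "best AI PR agency", "engine": "perplexity",
     "citations": ["YourBrand", "Conductor", "HubSpot"]},
    {"query": "best AI PR agency", "engine": "chatgpt",
     "citations": ["Conductor", "Semrush"]},
    {"query": "how to measure AI visibility", "engine": "claude",
     "citations": ["YourBrand", "YourBrand"]},  # recommendation + methodology = two slots
]

# One citation slot per appearance, across all brands
slot_counts = Counter(brand for r in responses for brand in r["citations"])
total_slots = sum(slot_counts.values())
soc = slot_counts["YourBrand"] / total_slots * 100
print(f"{soc:.0f}% Share of Citation ({slot_counts['YourBrand']} of {total_slots} slots)")
```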
Step 4: Segment by engine and query type
Don't average across everything at once. Your Share of Citation varies significantly by:
- Engine — Perplexity weights recent earned media heavily; ChatGPT leans toward training data signal from authoritative publications; Claude pulls from structured reference sources
- Query type — Informational queries ("how to measure AI visibility") vs. evaluation queries ("best AI PR agency") behave differently
- Category maturity — Newer categories have fewer entrenched citations, which means faster Share of Citation gains are available
Segmented data tells you where to invest. High Share of Citation on Perplexity but zero on ChatGPT means your content is fresh but thin in sources ChatGPT trusts — typically authoritative third-party publications with strong domain authority.
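Segmentation falls out of the same response log. A sketch, again assuming the hypothetical log shape from Step 2, where `key` picks the segment (engine, query type, or anything else you record per response):

```python
from collections import defaultdict

def soc_by_segment(responses, brand, key):
    """Share of Citation per segment; `key` extracts the segment from a response."""
    earned, total = defaultdict(int), defaultdict(int)
    for r in responses:
        seg = key(r)
        for cited in r["citations"]:
            total[seg] += 1
            if cited == brand:
                earned[seg] += 1
    return {seg: earned[seg] / total[seg] * 100 for seg in total}

responses = [
    {"engine": "perplexity", "citations": ["YourBrand", "Conductor"]},
    {"engine": "chatgpt",    "citations": ["Conductor", "Semrush"]},
]
by_engine = soc_by_segment(responses, "YourBrand", key=lambda r: r["engine"])
print(by_engine)  # perplexity: 1 of 2 slots; chatgpt: 0 of 2 slots
```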
Step 5: Set a baseline, then track weekly
Month one is baseline. Don't optimize off a single snapshot. Run the same 25 queries weekly, track your Share of Citation, and record what changed between runs — new content published, new earned media placements, structural updates to existing pages.
According to Forrester's 2026 B2B Summit analysis, rapid adoption of AI answer engines like ChatGPT and Google AI Mode has fundamentally changed how B2B buyers research, compare, and evaluate vendors. The brands winning that evaluation environment are the ones cited as authorities in AI responses — not just mentioned.
Your Share of Citation is the scoreboard for that competition.
What to benchmark against
Christian Lehman tracks Share of Citation across AuthorityTech's proprietary monitoring data. As of April 2026, AuthorityTech monitors 30 high-intent queries weekly across four AI engines, tracking total citation slot competition across the category. The top competitors consistently earning citation slots include Search Engine Land, Conductor, HubSpot, and Semrush — all with significant content authority and domain age advantages.
For a brand entering the AI visibility category with a focused earned media program, a realistic target is consistent citations on 3–5 of your 15–25 tracked queries within 90 days. That timeline is based on observed campaign patterns from client work at AuthorityTech, not an industry-published benchmark.
The more actionable benchmark is competitive: compare your Share of Citation to whoever is earning the citation slots you are not. If Conductor appears in 8 out of 10 responses to your "how to measure AI brand visibility" query and you appear in zero, the gap is clear and the optimization target is specific. If you've already mapped your AI traffic attribution baseline — as covered in the CMO's guide to AI search traffic attribution — Share of Citation gives you the upstream metric that predicts which way your attributed traffic will move.
Tools that support Share of Citation tracking
There is no single tool that tracks Share of Citation exactly as defined here. Most platforms measure a version of AI share of voice. Use them as data sources for your own calculation.
| Tool | What it gives you | Best used for |
|---|---|---|
| Profound | Brand citations across ChatGPT, Perplexity, Claude with sentiment | Cross-platform citation tracking |
| Semrush AI Visibility Toolkit | Share of voice plus sentiment comparison across ChatGPT and AI Mode | Competitive benchmarking |
| Ahrefs AI Prompt Tracking (launched Jan 2026) | Custom prompt monitoring for brand citations | Tracking specific query sets |
| HubSpot AEO Grader | Five-dimension brand score including share of voice | Quick competitive snapshot |
| Manual monitoring | Full control over query set, citation slot counting | Baseline establishment and query-specific analysis |
The manual method is the most accurate for small query sets. Open ChatGPT, Perplexity, Gemini, and Claude. Run each query. Record who gets cited, not just mentioned. Build a spreadsheet: query | engine | brand cited (Y/N) | citation context | competitors cited.
Run 25 queries across four engines and you have 100 data points — enough to establish a baseline and identify where your Share of Citation is weakest.
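The spreadsheet can just as easily be a CSV you append to after each run. A sketch with the columns mirroring the schema above (the sample rows are illustrative; in practice you would write to a file rather than an in-memory buffer):

```python
import csv
import io

# Columns mirror the manual-tracking schema:
# query | engine | brand cited (Y/N) | citation context | competitors cited
fieldnames = ["query", "engine", "brand_cited", "citation_context", "competitors_cited"]

rows = [
    {"query": "best AI PR agency", "engine": "chatgpt", "brand_cited": "N",
     "citation_context": "", "competitors_cited": "Conductor; Semrush"},
    {"query": "how to measure AI visibility", "engine": "perplexity", "brand_cited": "Y",
     "citation_context": "source attribution", "competitors_cited": "Conductor"},
]

buf = io.StringIO()  # swap for open("baseline.csv", "a", newline="") in practice
writer = csv.DictWriter(buf, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue().splitlines()[0])  # header row
```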
The inputs that drive Share of Citation
Understanding what causes AI engines to cite specific brands gives you the optimization levers. Christian Lehman tracks three primary drivers:
Earned media placements — AI engines draw from publications they consider authoritative. Third-party coverage in outlets like TechCrunch, Forbes, Search Engine Land, and Forrester gives you citation presence that brand-owned content cannot replicate. Machine Relations, the discipline coined by Jaxon Parrott, frames earned authority as the foundational layer of AI visibility strategy. You cannot earn citations from sources you haven't been published in.
Structured, answer-first content — Every page you want cited needs a definitional block in the first 60 words. Not a warm-up paragraph. A direct answer AI engines can extract and attribute. FAQ sections with self-contained Q&A pairs perform well because AI engines can lift the exact answer block without needing surrounding context.
Freshness — AI engines update their knowledge of your content. Pages refreshed with new data points, current benchmarks, and recent evidence perform better over time than static pages. The mechanism is not recency alone: refreshed pages signal continued relevance in a rapidly evolving space.
Generative Engine Optimization is the operational framework that implements these inputs systematically. But measurement always comes first. You cannot optimize a Share of Citation you haven't baselined.
Run your baseline audit
If you don't know your current Share of Citation, that is the starting point. Christian Lehman recommends a 25-query, four-engine manual audit — one afternoon of work that produces a real baseline.
From that baseline, identify three categories of queries:
- Zero-citation — No brand is consistently cited; AI invents a generic answer
- Competitive — Your brand and two or three competitors split citation slots inconsistently
- Owned — You consistently hold the first citation slot
Zero-citation queries are the fastest wins. Get your brand into the earned media sources AI engines already cite for those queries and you move from zero to cited before any competitor has claimed the slot.
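The three-way classification can be made mechanical once you have repeat runs per query. A sketch with assumed decision rules (no citations in any run = zero-citation; first citation slot in every run = owned; anything else = competitive):

```python
def classify_query(runs, brand):
    """runs: one citation list per repeat run of the same query,
    in the order brands were cited in the response."""
    if all(len(citations) == 0 for citations in runs):
        return "zero-citation"  # AI produces a generic, uncited answer
    if all(citations and citations[0] == brand for citations in runs):
        return "owned"          # you hold the first citation slot every run
    return "competitive"

print(classify_query([[], []], "YourBrand"))                                      # → zero-citation
print(classify_query([["YourBrand", "Conductor"], ["YourBrand"]], "YourBrand"))   # → owned
print(classify_query([["Conductor"], ["YourBrand"]], "YourBrand"))                # → competitive
```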
AuthorityTech's AI Visibility Audit runs this analysis automatically across 50+ tracked queries, benchmarks you against category competitors, and identifies the earned media gaps behind your lowest Share of Citation scores.
Start measuring. Everything else follows from the baseline.
FAQ
Q: What's the difference between Share of Citation and AI share of voice?
A: Share of voice counts how often your brand is mentioned in AI responses. Share of Citation counts how often AI engines cite your brand — naming it as a source or directly attributing a claim to it. Citations require the AI to endorse your brand as an authoritative answer. Mentions do not. Share of Citation is harder to inflate and more predictive of whether your brand is actually influencing buyer decisions during AI-assisted research.
Q: How many queries do I need to track to get a reliable Share of Citation baseline?
A: 15–25 queries across four engines (ChatGPT, Perplexity, Gemini, Claude) gives you 60–100 data points per measurement run. Run each query at least twice per engine with at least a week between runs — AI engines are probabilistic, and single-run data can misrepresent your actual citation presence. Average across both runs before calculating your baseline Share of Citation.
Q: My Share of Citation is near zero. Where do I start?
A: Start with zero-citation queries — ones where no brand earns consistent citations and AI engines produce generic answers. These are the fastest to capture because no one has claimed the slot. The immediate action is an earned media placement in a publication that AI engines already cite for that query category. Check which sources appear in AI responses to your tracked queries. Get your brand into those sources first.
Q: Does Share of Citation vary by AI engine?
A: Significantly. Perplexity weights recent web content heavily, so fresh earned media placements show up in citation behavior quickly. ChatGPT relies more on training data signal, which means presence in authoritative publications over a longer time horizon matters more. Claude tends to pull from structured reference sources and documentation. Segment your Share of Citation by engine so you can identify where the gap is largest and match the investment to the engine's citation mechanism.
About Christian Lehman
Christian Lehman is Co-Founder of AuthorityTech — the world's first AI-native earned media agency. He tracks which companies are winning and losing the AI shortlist battle across every major B2B vertical, and writes about what the data actually shows.