Rand Fishkin just published the most important piece of primary research the AI visibility industry has seen to date.
His conclusion – that AI tools produce wildly inconsistent brand recommendation lists, making “ranking position” a meaningless metric – is correct, well-evidenced, and long overdue.
But Fishkin stopped one step short of the answer that matters.
He didn’t explore why some brands appear consistently while others don’t, or what would move a brand from inconsistent to consistent visibility. That solution is already formalized, patent pending, and proven in production across 73 million brand profiles.
When I shared this with Fishkin directly, he agreed. The AI models are pulling from a semi-fixed set of options, and the consistency comes from the data. He just didn’t have the bandwidth to dig deeper, which is fair enough, but the digging has been done – I’ve been doing it for a decade.
Here’s what Fishkin found, what it actually means, and what the data proves about what to do about it.
Fishkin’s data killed the myth of AI ranking position
Fishkin and Patrick O’Donnell ran 2,961 prompts across ChatGPT, Claude, and Google AI, asking for brand recommendations across 12 categories. The findings were shocking for many.
Fewer than 1 in 100 runs produced the same list of brands, and fewer than 1 in 1,000 produced the same list in the same order. These are probability engines that generate unique answers every time. Treating them as deterministic ranking systems is – as Fishkin puts it – “provably nonsensical,” and I’ve been saying this since 2022. I’m grateful Fishkin finally proved it with data.
But Fishkin also found something he didn’t fully unpack. Visibility share – how often a brand appears across many runs of the same prompt – is statistically meaningful. Some brands showed up almost every time, while others barely appeared at all.
That variance is where the real story lies.
Fishkin acknowledged this but framed it as a better metric to track. The real question isn’t how to measure AI visibility, it’s why some brands achieve consistent visibility and others don’t, and what moves your brand from the inconsistent pile to the consistent pile.
That’s not a tracking problem. It’s a confidence problem.
AI systems are confidence engines, not recommendation engines
AI platforms – ChatGPT, Claude, Google AI, Perplexity, Gemini, all of them – generate each response by sampling from a probability distribution shaped by:
- What the model knows.
- How confidently it knows it.
- What it retrieved at the moment of the query.
When the model is highly confident about an entity’s relevance, that entity appears consistently. When the model is uncertain, the entity sits at a low probability weight in the distribution – included in some samples, excluded in others – not because the choice is random but because the AI doesn’t have enough confidence to commit.
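To see why a stable confidence weight produces stable visibility percentages – and why a low weight produces exactly the run-to-run inconsistency Fishkin measured – here is a minimal simulation. It is an illustration only, not any platform’s actual sampler, and the brand names and weights are hypothetical:

```python
# Minimal sketch: treat each brand as having an inclusion weight in the
# model's probability distribution, then sample many runs of the "same"
# prompt and measure how often each brand appears. Weights are hypothetical.
import random

random.seed(42)

brand_weights = {
    "HighConfidenceBrand": 0.95,  # deeply corroborated entity
    "MidConfidenceBrand": 0.60,   # known, but not unambiguously dominant
    "LowConfidenceBrand": 0.08,   # weakly corroborated
}

RUNS = 1000
appearances = {brand: 0 for brand in brand_weights}

for _ in range(RUNS):
    for brand, weight in brand_weights.items():
        # Independent draw per run: the brand is included when the sample
        # falls under its confidence weight.
        if random.random() < weight:
            appearances[brand] += 1

for brand, count in appearances.items():
    print(f"{brand}: visible in {count / RUNS:.0%} of runs")
```

The high-weight brand shows up in roughly 95% of runs, the low-weight brand in under 10% – consistent percentages emerging from an inconsistent, sampled process.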
That’s the inconsistency Fishkin documented, and I recognized it immediately because I’ve been tracking exactly this pattern since 2015.
- City of Hope appearing in 97% of cancer care responses isn’t luck. It’s the result of deep, corroborated, multi-source presence in exactly the data these systems consume.
- The headphone brands at 55%-77% are in a middle zone – known, but not unambiguously dominant.
- The brands at 5%-10% have low confidence weight, and the AI includes them in some outputs and not others because it lacks the confidence to commit consistently.
Confidence isn’t just about what a brand publishes or how it structures its content. It’s about where that brand stands relative to every other entity competing for the same query – a dimension I’ve recently formalized as Topical Position.
I’ve formalized this phenomenon as “cascading confidence” – the cumulative entity trust that builds or decays through every stage of the algorithmic pipeline, from the moment a bot discovers content to the moment an AI generates a recommendation. It’s the throughline concept in a framework I published this week.
Dig deeper: Search, answer, and assistive engine optimization: A 3-part approach
Every piece of content passes through 10 gates before influencing an AI recommendation
The pipeline is called DSCRI-ARGDW – discovered, selected, crawled, rendered, indexed, annotated, recruited, grounded, displayed, and won. That sounds complicated, but I can summarize it in a single question that repeats at every stage: How confident is the system in this content?
- Is this URL worth crawling?
- Can it be rendered correctly?
- What entities and relationships does it contain?
- How sure is the system about those annotations?
- When the AI needs to answer a question, which annotated content gets pulled from the index?
Confidence at each stage feeds the next. A URL from a well-structured, fast-rendering, semantically clear site arrives at the annotation stage with high accumulated confidence before a single word of content is analyzed. A URL from a slow, JavaScript-heavy site with inconsistent data arrives with low confidence, even when the actual content is excellent.
This is pipeline attenuation, and here’s where the math gets unforgiving. The relationship is multiplicative, not additive:
- C_final = C_initial × ∏τᵢ
In plain English, the final confidence an AI system has in your brand equals the initial confidence from your entity home multiplied by the transfer coefficient at every stage of the pipeline. The entity home – the canonical web property that anchors your entity in every knowledge graph and every AI model – sets the starting confidence, and then each stage either preserves or erodes it.
Maintain 90% confidence at each of 10 stages, and end-to-end confidence is 0.9¹⁰ = 35%. At 80% per stage, it’s 0.8¹⁰ = 11%. One weak stage – say 50% at rendering because of heavy JavaScript – drops the total from 35% to 19% even when every other stage is at 90%. One broken stage can undo the work of nine good ones.
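Here is a minimal sketch of that attenuation math, using the 10 DSCRI-ARGDW stage names. The transfer coefficients are hypothetical examples, not measured values from my dataset:

```python
# Sketch of multiplicative pipeline attenuation: C_final = C_initial * product(tau_i).
# Stage names follow the DSCRI-ARGDW pipeline; coefficients are hypothetical.

STAGES = ["discovered", "selected", "crawled", "rendered", "indexed",
          "annotated", "recruited", "grounded", "displayed", "won"]

def final_confidence(initial: float, coefficients: dict[str, float]) -> float:
    """Multiply the initial confidence by each stage's transfer coefficient."""
    c = initial
    for stage in STAGES:
        c *= coefficients[stage]
    return c

def bottleneck(coefficients: dict[str, float]) -> str:
    """The stage with the lowest coefficient is where confidence leaks most."""
    return min(coefficients, key=coefficients.get)

# Uniform 90% per stage: 0.9**10 ~ 0.35
strong = {stage: 0.90 for stage in STAGES}
# Same pipeline, but rendering drops to 50% (e.g., heavy JavaScript)
weak_render = dict(strong, rendered=0.50)

print(final_confidence(1.0, strong))       # ~0.35
print(final_confidence(1.0, weak_render))  # ~0.19
print(bottleneck(weak_render))             # "rendered"
```

The multiplication reproduces the numbers above: 10 stages at 90% yield 35% end-to-end, and a single 50% stage cuts that nearly in half.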
This multiplicative principle isn’t new, and it doesn’t belong to anyone. In 2019, I published an article, How Google Universal Search Ranking Works: Darwinism in Search, based on a direct explanation from Google’s Gary Illyes. He described how Google calculates ranking “bids” by multiplying individual factor scores rather than adding them. A zero on any factor kills the entire bid, no matter how strong the other factors are.
Google applies this multiplicative model to ranking factors within a single system, and nobody owns multiplication. But what the cascading confidence framework does is apply this principle across the full 10-stage pipeline, across all three knowledge graphs.
The system provides measurable transfer coefficients at every transition and bottleneck detection that identifies exactly where confidence is leaking. The math is universal, but the application to a multi-stage, multi-graph algorithmic pipeline is the invention.
This whole system is the subject of a patent application I filed with the INPI titled “Système et procédé d’optimisation de la confiance en cascade à travers un pipeline de traitement algorithmique multi-étapes et multi-graphes.” It’s not a metaphor, it’s an engineered system with an intellectual lineage going back seven years to a principle a Google engineer confirmed to me in person.
Fishkin measured the output – the inconsistency of recommendation lists. But the output is a symptom, and the cause is confidence loss at specific stages of this pipeline, compounded across multiple knowledge representations.
You can’t fix inconsistency by measuring it more precisely. You can only fix it by building confidence at every stage.
The corroboration threshold is where AI shifts from hesitant to assertive
There’s a specific transition point where AI behavior changes. I call it the “corroboration threshold” – the minimum number of independent, high-confidence sources corroborating the same conclusion about your brand before the AI commits to including it consistently.
Below the threshold, the AI hedges. It says “claims to be” instead of “is,” it includes a brand in some outputs but not others, and the reason isn’t randomness but insufficient confidence.
The brand sits in the low-confidence zone, where inconsistency is the predictable outcome. Above the threshold, the AI asserts – stating relevance as fact, including the brand consistently, operating with the kind of certainty that produces City of Hope’s 97%.
My data across 73 million brand profiles places this threshold at roughly 2-3 independent, high-confidence sources corroborating the same claim as the entity home. That number is deceptively small because “high-confidence” is doing the heavy lifting – these are sources the algorithm already trusts deeply, including Wikipedia, industry databases, and authoritative media.
Without those high-authority anchors, the threshold rises considerably because more sources are needed and each carries less individual weight. The threshold isn’t a one-time gate. Once crossed, the confidence compounds with every subsequent corroboration, which is why brands that cross it early pull further ahead over time, while brands that haven’t crossed it yet face an ever-widening gap.
Not identical wording, but equivalent conviction. The entity home states, “X is the leading authority on Y,” two or three independent, authoritative third-party sources confirm it with their own framing, and the AI encodes it as fact.
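To make the threshold concrete, here is a hedged sketch of the logic. The source names, authority weights, and cutoff are illustrative assumptions built on the 2-3 high-confidence-source figure above, not a published algorithm:

```python
# Illustrative sketch of a corroboration threshold: high-authority sources
# carry full weight, other independent sources carry less, and the model
# only "commits" above a cutoff. All names and weights are hypothetical.

HIGH_AUTHORITY = {"wikipedia.org", "industry-database.example", "major-news.example"}

def corroboration_score(confirming_sources: set[str]) -> float:
    # A trusted anchor counts fully; a generic independent source counts less,
    # so many more of them are needed to reach the same score.
    return sum(1.0 if s in HIGH_AUTHORITY else 0.3 for s in confirming_sources)

def ai_commits(confirming_sources: set[str], threshold: float = 2.0) -> bool:
    """Above the threshold the model asserts; below it, it hedges."""
    return corroboration_score(confirming_sources) >= threshold

# Two high-authority anchors clear the cutoff:
print(ai_commits({"wikipedia.org", "major-news.example"}))                  # True
# Three low-authority blogs do not – volume without authority:
print(ai_commits({"blog-a.example", "blog-b.example", "blog-c.example"}))  # False
```

The second case is the pattern the Authoritas study exposes below: mention volume alone never clears a threshold that weights source authority.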
This effect is visible in my data, and it explains exactly why Fishkin’s experiment produced the results it did. In narrow categories like LA Volvo dealerships or SaaS cloud computing providers – where few brands exist and corroboration is dense – AI responses showed higher pairwise correlation.
In broad categories like science fiction novels – where thousands of options exist and corroboration is thin – responses were wildly diverse. The corroboration threshold aligns with Fishkin’s findings.
Dig deeper: The three AI research modes redefining search – and why brand wins
Authoritas proved that fabricated entities can’t fool AI confidence systems
Authoritas published a study in December 2025 – “Can you fake it till you make it in the age of AI?” – that tested this directly, and the results confirm that cascading confidence isn’t just theory. Where Fishkin’s research shows the output problem – inconsistent lists – Authoritas shows the input side.
Authoritas investigated a real-world case where a UK company created 11 entirely fictional “experts” – made-up names, AI-generated headshots, faked credentials. They seeded these personas into more than 600 press articles across UK media, and the question was simple: Would AI models treat these fake entities as real experts?
The answer was absolute: Across 9 AI models and 55 topic-based questions – “Who are the UK’s leading experts in X?” – zero fake experts appeared in any recommendation. 600 press articles, and not a single AI recommendation. That might seem to contradict a threshold of 2-3 sources, but it confirms it.
The threshold requires independent, high-confidence sources, and 600 press articles from a single seeding campaign are neither independent – they trace to the same origin – nor high-confidence – press mentions sit in the document graph only.
The AI models looked past the surface-level coverage and found no deep entity signals – no entity home, no knowledge graph presence, no conference history, no professional registration, no corroboration from the kind of authoritative sources that actually move the needle.
The fake personas had volume, they had mentions, but what they lacked was cascading confidence – the accumulated trust that builds through every stage of the pipeline. Volume without confidence means inconsistent appearance at best, while confidence without volume still produces recommendations.
AI evaluates confidence – it doesn’t count mentions. Confidence requires multi-source, multi-graph corroboration that fabricated entities fundamentally can’t build.
AI citability concentration increased 293% in under two months
Authoritas used the weighted citability score, or WCS, a metric that measures how much AI engines trust and cite entities, calculated across ChatGPT, Gemini, and Perplexity using cross-context questions.
I have no influence over their data collection or their results. Fishkin’s methodology and Authoritas’ aren’t identical. Fishkin pinged the same query repeatedly to measure variance, while Authoritas tracks varied queries on the same topic. That said, the directional finding is consistent.
Their dataset includes 143 recognized digital marketing experts, with full snapshots from the original study by Laurence O’Toole and Authoritas in December 2025 and their latest measurement on Feb. 2. The pattern across the entire dataset tells a story that goes far beyond individual scores.
- The top 10 experts captured 30.9% of all citability in December. By February, they captured 59.5% – a 92% increase in concentration in under two months.
- The HHI, or Herfindahl-Hirschman Index, the standard measure of market concentration, rose from 0.026 to 0.104 – a 293% increase in concentration. This happened while the total expert pool widened from 123 to 143 tracked entities (a sketch of the HHI calculation follows this list).
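For readers who want the HHI mechanics: it is simply the sum of squared market shares, which is why concentration at the top can rise even while the tracked pool widens. The shares below are hypothetical illustrations, not the Authoritas figures:

```python
# Herfindahl-Hirschman Index over citability shares.
# Shares are fractions of total citability summing to 1; HHI is the sum of
# squared shares (near 0 = perfectly dispersed, 1 = one entity holds all).
def hhi(shares: list[float]) -> float:
    return sum(s * s for s in shares)

# Hypothetical illustration: a concentrated top pushes HHI up sharply
# even though the long tail of tracked entities grows.
dispersed_field = [0.05] * 20                      # 20 entities at 5% each
concentrated_field = [0.20, 0.15] + [0.65 / 26] * 26  # bigger top, wider tail

print(hhi(dispersed_field))     # 0.05
print(hhi(concentrated_field))  # ~0.079 - higher despite 28 entities
```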
More experts are being cited, the field is getting bigger, and the top is pulling away faster. Dominance is compounding while the long tail grows.
This is cascading confidence at population scale. The experts who actively manage their digital footprint – clean entity home, corroborated claims, consistent narrative across the algorithmic trinity – aren’t just maintaining their position, they’re accelerating away from everyone else.
Each cycle of AI training and retrieval reinforces their advantage – confident entities generate confident AI outputs, which build user trust, which generate positive engagement signals, which further reinforce the AI’s confidence. It’s a flywheel, and once it’s spinning, it becomes very, very hard for competitors to catch up.
At the individual level, the data confirms the mechanism. I lead the dataset at a WCS of 23.50, up from 21.48 in December, a gain of +2.02. That’s not because I’m more famous than everyone else on the list.
It’s because we’ve been systematically building my cascading confidence for years – clean entity home, corroborated claims across the algorithmic trinity, consistent narrative, structured data, deep knowledge graph presence.
I’m the primary test case because I’m in control of all my variables – I have a massive head start. In a future article, I’ll dig into the details of the scores and why the experts have the scores they do.
The pattern across my client base mirrors the population data. Brands that systematically clean their digital footprint, anchor entity confidence through the entity home, and build corroboration across the algorithmic trinity don’t just appear in AI recommendations.
They appear consistently, their advantage compounds over time, and they exit the low-confidence zone to enter the self-reinforcing recommendation set.
Dig deeper: From SEO to algorithmic education: The roadmap for long-term brand authority
AI retrieves from three knowledge representations simultaneously, not one
AI systems pull from what I call the Three Graphs model – the algorithmic trinity – and understanding this explains why some brands achieve near-universal visibility while others appear sporadically.
- The entity graph, or knowledge graph, contains explicit entities with binary verified edges and low fuzziness – either a brand is in, or it’s not.
- The document graph, or search engine index, contains annotated URLs with scored and ranked edges and medium fuzziness.
- The concept graph, or LLM parametric knowledge, contains learned associations with high fuzziness, and this is where the inconsistency Fishkin documented comes from.
When retrieval systems combine results from multiple sources – and they do, using mechanisms analogous to reciprocal rank fusion – entities present across all three graphs receive a disproportionate boost.
The effect is multiplicative, not additive. A brand that has a strong presence in the knowledge graph and the document index and the concept space gets selected far more reliably than a brand present in just one.
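Here is a minimal sketch of reciprocal rank fusion to show why tri-graph presence compounds. The graph rankings are hypothetical, and real retrieval stacks are far more elaborate than this:

```python
# Reciprocal rank fusion (RRF): each source contributes 1 / (k + rank) to an
# entity's fused score, so an entity ranked by all three graphs outscores one
# ranked by only one or two. Brand names and rankings are hypothetical.

def rrf(rankings: list[list[str]], k: int = 60) -> dict[str, float]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, entity in enumerate(ranking, start=1):
            scores[entity] = scores.get(entity, 0.0) + 1.0 / (k + rank)
    return scores

entity_graph = ["BrandA", "BrandB"]
document_graph = ["BrandA", "BrandC", "BrandB"]
concept_graph = ["BrandA", "BrandC"]

fused = rrf([entity_graph, document_graph, concept_graph])
for entity, score in sorted(fused.items(), key=lambda item: -item[1]):
    print(entity, round(score, 4))
# BrandA, present in all three graphs, clearly outscores brands in one or two.
```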
This explains a pattern Fishkin noticed but didn’t have the framework to interpret – why visibility percentages clustered differently across categories. The brands with near-universal visibility aren’t just “more famous,” they have dense, corroborated presence across all three knowledge representations. The brands in the inconsistent pool are often present in just one or two.
The Authoritas fake expert study confirms this from the negative side. The fake personas existed only in the document graph, press articles, with zero entity graph presence and negligible concept graph encoding. One graph out of three, and the AI treated them accordingly.
What I tell every brand after reading Fishkin’s data
Fishkin’s recommendations were cautious – visibility share is a reasonable metric, ranking position isn’t, and brands should demand clear methodology from tracking vendors. All fair, but that’s analyst advice. What follows is practitioner advice, based on doing this work in production.
Stop optimizing outputs and start optimizing inputs
The entire AI tracking industry is fixated on measuring what AI says about you, which is like checking your blood pressure without treating the underlying condition. Measure if it helps, but the work is in building confidence at every stage of the pipeline, and that’s where I focus my clients’ attention from day one.
Start at the entity home
My experience clearly demonstrates that this single intervention produces the fastest measurable results. Your entity home is the canonical web property that should anchor your entity in every knowledge graph and every AI model. If it’s ambiguous, hedging, or contradictory with what third-party sources say about you, it’s actively training AI to be uncertain.
I’ve seen aligning the entity home with third-party corroboration produce measurable changes in bottom-of-funnel AI citation behavior within weeks, and it remains the highest-ROI intervention I know.
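As one concrete starting point, here is a minimal sketch of the kind of unambiguous entity statement an entity home can carry – schema.org Organization markup emitted as JSON-LD. Every name and URL below is a placeholder, not a prescription:

```python
# Sketch: generate schema.org Organization JSON-LD for an entity home.
# The goal is an unambiguous, corroborable entity statement - assert who
# you are and link the profiles that corroborate it. All values are placeholders.
import json

entity_home = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",                    # placeholder brand name
    "url": "https://www.example.com/",          # the canonical entity home
    "description": "Example Brand is a provider of X for Y.",  # assert, don't hedge
    "sameAs": [                                 # corroborating third-party profiles
        "https://en.wikipedia.org/wiki/Example_Brand",
        "https://www.wikidata.org/wiki/Q0000000",
        "https://www.linkedin.com/company/example-brand",
    ],
}

print(json.dumps(entity_home, indent=2))
```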
Cross the corroboration threshold for the critical claims
I ask every client to identify the claims that matter most:
- Who you are.
- What you do.
- Why you’re credible.
Then, I work with them to ensure each claim is corroborated by at least 2-3 independent, high-authority sources. Not just mentioned, but confirmed with conviction.
This is what flips AI from “sometimes includes” to “reliably includes,” and I’ve seen it happen often enough to know the threshold is real.
Dig deeper: SEO in the age of AI: Becoming the trusted answer
Build across all three graphs simultaneously
Knowledge graph presence (structured data, entity recognition), document graph presence (indexed, well-annotated content on authoritative sites), and concept graph presence (consistent narrative across the corpus AI trains on) all need attention.
The Authoritas study showed exactly what happens when a brand exists in just one – the AI treats it accordingly.
Work the pipeline from Gate 1, not Gate 9
Most SEO and GEO advice operates at the display stage, optimizing what AI shows. But if your content is losing confidence at discovery, selection, rendering, or annotation, it will never reach display consistently enough to matter.
I’ve watched brands spend months on display-stage optimization that produced nothing because the real bottleneck was three stages earlier, and I always start my diagnostic at the beginning of the pipeline, not the end.
Maintain it because the gap is widening
The WCS data across 143 tracked experts shows that AI citability concentration increased 293% in under two months. The experts who maintain their digital footprint are pulling away from everyone else at an accelerating rate.
Starting now still means starting early, but waiting means competing against entities whose advantage compounds every cycle. This isn’t a one-time project. It’s an ongoing discipline, and the returns compound with every iteration.
Fishkin proved the problem exists. The solution has been in production for a decade.
Fishkin’s research is a gift to the industry. He killed the myth of AI ranking position with data, he validated that visibility share, while imperfect, correlates with something real, and he raised the right questions about methodology that the AI tracking vendors should have been answering all along.
But tracking AI visibility without understanding why visibility varies is like tracking a stock price without understanding the business. The price is a signal, and the business is the thing.
AI recommendations are inconsistent when AI systems lack confidence in a brand. They become consistent when that confidence is built deliberately, through:
- The entity home.
- Corroborated claims that cross the corroboration threshold.
- Multi-graph presence.
- Every stage of the pipeline that processes your content before AI ever generates a response.
This isn’t speculation, and the proof comes from every direction.
The methodology behind this system has been under development since 2015 and is formalized in a peer-review-track academic paper. Several related patent applications have been filed in France, covering entity data structuring, prompt assembly, multi-platform coherence measurement, algorithmic barrier construction, and cascading confidence optimization.
The dataset supporting the work spans 25 billion data points across 73 million brand profiles. In tracked populations, shifts in AI citability have been observed – including cases where the top 10 experts increased their share from 31% to 60% in under two months while the overall field expanded. Independent research from Authoritas reports findings that align with this mechanism.
Fishkin proved the problem exists. My focus over the past decade has been on implementing and refining practical responses to it.
This is the first article in a series. The second piece, “What the AI expert rankings actually tell us: 8 archetypes of AI visibility,” examines how the pipeline’s effects manifest across 57 tracked experts. The third, “The 10 gates between your content and an AI recommendation,” opens the DSCRI-ARGDW pipeline itself.