Last year, after spending a few days at a work summit in Austria, I asked Perplexity for the latest news related to SEO and AI search. It responded with details about a supposed "September 2025 'Perspective' Core Algorithm Update" that Google had just rolled out, emphasizing "deeper expertise" and "completion of the user journey."
It sounded plausible enough … if you don't live and breathe Google core updates. Unfortunately for Perplexity, I do.
I knew immediately that this information wasn't right. For one, Google hasn't named core updates in years. It also already had SERP features called "Perspectives." And if a core update had actually rolled out while I was away, I'd have been flooded with messages. So I checked Perplexity's sources … and, surprise! Both citations came from made-up, AI-generated slop on a couple of SEO agency blogs, confidently fabricating details about an algorithm update that never actually happened.
Like a bad game of telephone, this fake SEO news spread across multiple websites – likely driven by AI systems scanning and regurgitating news regardless of accuracy, all in the race to publish and scale "fresh" content. This is how we end up with this mess:

This bad information reinforces itself to become the official narrative. To this day, you can ask an LLM of your choice (including ChatGPT, AI Mode, and AI Overviews) about the September 2025 "Perspective" update, and it will confidently respond with details about how it "fundamentally shifted how search results are ranked:"

Or that it "shifted what 'good content' actually means in practice."

The problem is: the September 2025 "Perspective" update never happened. It never affected rankings. It never shifted anything about good content. Because it doesn't actually exist.
Ironically, when you go on to probe the language model about this, it seems to know this is the case:

I tweeted about this incident shortly after it happened, which got the attention of Perplexity's CEO; he tagged his head of search in the tweet comments.

This isn’t a one-off incident. It’s a sample I’ve seen numerous instances in AI search responses, particularly on subjects associated to website positioning and AI search (GEO/AEO). And I’ve a working idea on the way it spreads: one AI-generated article hallucinates a element, websites working AI content material pipelines scrape and regurgitate it, extra AI-generated websites scrape the identical misinformation, and all of a sudden a made-up algorithm replace has citations. For a RAG-based system like Perplexity or AI Overviews, enough citations are basically all it needs to treat something as fact, no matter whether or not it’s truly true.

At this point, I'd consider this widespread. I recently had a client send me SEO/GEO information that was factually incorrect, pulled straight from AI-generated slop on a random, vibe-coded agency blog. The client had no idea. I believe that if you're trying to learn SEO or AI search directly from an LLM, this is, unfortunately, an increasingly likely outcome.
I ran similar testing during Google's March 2026 core update and found several AI-generated articles already claiming to share the "winners and losers" while the update was still rolling out.
The articles start with vague, generic filler about core updates that doesn't actually say anything:

Then they list "winners and losers" without citing a single website, leaning on vague, generalized claims that sound plausible and fill the void left by a lack of reliable information:

Unsurprisingly, their sites are full of AI-generated images, AI assistance chatbots, and other clear signals that little – if any – human involvement went into creating this content.

The Era Of AI Misinformation
If somebody on the internet says it, according to AI, it must be true.
That's the reality for the vast majority of people using AI search today. Only about 50 million of ChatGPT's 900 million weekly active users are paying subscribers, meaning roughly 94% are on the free tier. Google's AI Overviews and AI Mode are free by design – and AI Overviews reached over 2 billion monthly active users as of mid-2025.
These are the models most AI users are currently interacting with, and they have no real mechanism for distinguishing between information that's true and information that's merely repeated across enough sources. Repetition is treated as consensus. If enough sources say it, it becomes fact, regardless of whether any of those sources involved a human who actually verified the claim.
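The failure mode is easy to sketch. Below is a deliberately naive, hypothetical scoring function – not any vendor's actual code – that treats the fraction of retrieved sources repeating a claim as its truth score. Once a fabricated detail has been scraped onto enough pages, a system built this way promotes it, with no step that ever verifies anything:

```python
def consensus_score(claim: str, retrieved_sources: list[str]) -> float:
    """Naive 'repetition as consensus' scoring: the fraction of retrieved
    documents that repeat the claim. Note that nothing here checks whether
    any source actually verified the claim."""
    hits = sum(claim.lower() in doc.lower() for doc in retrieved_sources)
    return hits / len(retrieved_sources)

# Three scraped blogs regurgitating the same fabricated update,
# plus one source that contradicts it:
sources = [
    "The September 2025 Perspective update changed rankings overnight...",
    "Winners of the September 2025 Perspective update reward expertise...",
    "Analysis: the September 2025 Perspective update and your site...",
    "Google's official channels announced no update this month.",
]
score = consensus_score("september 2025 perspective update", sources)
print(score)  # 0.75 – repetition, not verification, drives the score
```

The single accurate source is simply outvoted: copy the fabrication onto more pages and the score only goes up.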
Putting The Problem To The Test
I recently spoke to journalists from both the BBC and the New York Times about the problem of misinformation in AI-generated responses. In the case of the BBC article, the writer Thomas Germaine and I tested publishing fictitious blog posts on our personal sites to see whether AI Overviews would present the made-up information as fact, and how quickly.
Even knowing how bad the problem was, I was alarmed by the results.
On my personal blog, in January 2026, I published an AI-generated article about a fake Google core update, which never actually happened. I included the detail that Google "approved the update between slices of leftover pizza." Within 24 hours, Google's AI Overviews was confidently serving this fabricated information back to users:
(Note: I've since deleted the article from my website because it was showing up in people's feeds and being covered on external sites, further contributing to the exact problem I'm describing here!)

First, AI Overviews confirmed that there was indeed a core update in January 2026. As a reminder: There was not. My website was the only source making this claim, and that was apparently enough to trigger the AI Overview.
Next, I asked it about the pizza, and it responded accordingly:

Better yet, the AI Overview found a way to connect my fabricated pizza detail to a real incident: Google's struggles with pizza-related queries in 2024. It didn't just regurgitate the lie – it contextualized it.
ChatGPT, which is believed to use Google's search results, quickly surfaced the same fabricated information, though it at least flagged that the announcement didn't match Google's formal communications:

I deleted my article after getting messages from people who had seen my fake information circulating via RSS feeds and scrapers. I knew it was easy to influence AI responses. I didn't know it would be that easy.
I also wondered whether my website had an advantage, given its strong backlink profile and established authority in the SEO space.
So I spoke to the BBC journalist, Thomas Germaine, and he put this to the test on his personal website, which typically received little to no organic traffic. He published a fictitious article about the "Best Tech Journalists at Eating Hot Dogs," calling himself the No. 1 best (in true SEO fashion).
According to Thomas' article in the BBC, within 24 hours, "Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn't fooled."
To be fair: the query Thomas chose was niche enough that very few users would ever actually search for it, which is exactly what Google pointed out in its response to the BBC. When there are "data voids," Google said, this can lead to lower quality results, and the company is "working to stop AI Overviews showing up in these cases." My main question is: When? The product has already been live for two years!
Why Data Voids Aren't A Great Excuse
Data voids may contribute to the problem, but in my opinion, they don't excuse it. These AI responses are being consumed by hundreds of millions of users, and "we're working on it" isn't an answer when the systems are already deployed at that scale.
In the New York Times article, "How Accurate Are Google's A.I. Overviews?," the actual scale of this problem was put to the test. According to the data found in the study, Google's AI Overviews were accurate 91% of the time. This sounds decent until you actually do the math: With Google processing over 5 trillion searches a year, this implies that tens of millions of erroneous answers are generated by AI Overviews every hour.
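The back-of-envelope math, using the figures above (91% accuracy, 5 trillion searches per year) and assuming for simplicity that every search triggers an AI Overview – which overstates the real share, though even at a fraction of it the order of magnitude stays in the tens of millions:

```python
searches_per_year = 5_000_000_000_000  # 5 trillion searches per year
error_rate = 1 - 0.91                  # 9% of AI Overviews inaccurate
hours_per_year = 365 * 24              # 8,760 hours

errors_per_hour = searches_per_year * error_rate / hours_per_year
print(f"{errors_per_hour:,.0f} erroneous answers per hour")
```

Roughly 51 million erroneous answers per hour under those assumptions.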
To make matters worse: Even when AI Overviews were accurate, 56% of correct responses were "ungrounded," meaning the sources they linked to didn't fully support the information presented. So more than half the time, even when the answer happens to be right, a user clicking through to verify it would find sources that don't actually back up what they were just told. That number also got worse with the newer model – it was 37% with Gemini 2 and rose to 56% with Gemini 3.
The NYT article drew a lot of comments from users sharing their own experiences, and the frustration was palpable. The core criticism wasn't just that AI Overviews get things wrong – it's that they never admit uncertainty. AI Overviews deliver every answer with the same confident, authoritative tone, whether the information is accurate or completely fabricated, which means users have no reliable way to distinguish accurate information from hallucination at a glance.
As many commenters pointed out, this actually makes search slower: Instead of scanning a list of sources and evaluating them yourself, you now have to fact-check the AI's summary before doing your actual research. The tool, supposedly designed to save the user time, is now creating double work for the user.
Some of the comments also reinforced my own concerns about AI answers citing made-up, AI-generated content. Several users described what amounts to the same misinformation cycle: AI systems training on AI-generated content, citing unvetted Reddit posts and Facebook comments as authoritative sources, and producing a self-reinforcing loop of degrading quality. Several commenters compared it to making a copy of a copy. Even the defenders of AI Overviews admitted they still have to verify everything, which kind of undermines the core premise: that AI-generated answers save users time and effort.
How “Smarter” LLMs Are Trying To Repair the Downside
It’s value monitoring how the AI corporations try to unravel these issues. For instance, utilizing the RESONEO Chrome extension, you’ll be able to observe clear variations in how ChatGPT’s free-tier mannequin (GPT-5.3) responds in comparison with GPT-5.4, the extra succesful mannequin out there solely to paying subscribers.
For instance, when asking in regards to the current March 2026 Core Algorithm Replace, I used ChatGPT’s extra succesful “Pondering” mannequin (5.4). The mannequin goes via six rounds of pondering, a lot of which is clearly supposed to scale back low-quality and spammy data from making its manner into the reply. It even appends the names of reliable individuals with authority on core updates (Glenn Gabe & Aleyda Solis) and limits the fan-out searches to their websites (web site:gsqi.com and web site:linkedin.com/in/glenngabe) to drag up higher-quality solutions.

This is a step in the right direction, and the model produces measurably better answers. According to OpenAI's own launch announcement, GPT-5.4's individual claims are 33% less likely to be false, and its full responses are 18% less likely to contain errors compared to GPT-5.2. GPT-5.3, the model available to free users, also improved over its predecessor. According to OpenAI's own data, it produces 26.8% fewer hallucinations than prior models with web search enabled, and 19.7% fewer without it.
But these improvements are tiered. The most capable model is paywalled, and the free-tier model, while better than what came before, is still meaningfully less reliable. Other major AI platforms follow the same pattern: better reasoning and accuracy reserved for paying subscribers, faster and cheaper models for everyone else. The result is that the 94% of ChatGPT users on the free tier, and the billions of users interacting with free AI search products like AI Overviews, are getting answers from models that are more likely to be wrong and less equipped to flag uncertainty.
This is the part that makes me most uncomfortable: Most of these users probably don't realize the gap exists. AI is being marketed everywhere: Super Bowl ads, billboards, and product launches framing AI as the future of knowledge. People see "ChatGPT" or "AI Overview" and assume they're interacting with something that knows what it's talking about. They're probably not thinking about which model tier they're on, or whether a paid version would give them a materially different answer to the same question.
I understand the economics. These companies need to scale, and offering free tiers drives adoption. But in my opinion, it's irresponsible to deploy these products to billions of people, frame them as "intelligence," and then quietly reserve the more accurate versions for the fraction of users willing to pay. Especially when the free versions (including the one at the top of Google Search) are this susceptible to the kind of misinformation documented throughout this article.
The Burden Of Proof Has Shifted
The September 2025 "Perspective" Google update still doesn't exist. But if you ask an LLM about it today, it will still tell you about it with full confidence. That hasn't changed in the months since I first flagged it, and it probably won't change anytime soon, because the content that fabricated it is still indexed, still cited, and still being used to generate new content that references it as fact. The AI slop misinformation cycle continues.
This is what makes the problem so difficult to fix. It's not a single hallucination that can be patched. It's a feedback loop that compounds over time, and every day that these systems are live at scale, the loop gets harder to break. The AI-generated slop that seeded the original misinformation is now part of the training data and used as a retrieval source for the next batch of AI-generated answers.
I don't think the answer is to stop using AI. But I do think it's worth being honest about what these products actually are right now: prediction engines that treat the volume of information as a proxy for its accuracy. Until that changes, the burden of fact-checking falls on the user. And most users don't know they're carrying it, let alone have the time or inclination to do it.
I'd warn marketers and publishers against taking SEO or GEO advice from large language models: the data is contaminated, and it should always be verified by real experts with experience in the field.
More Resources:
This post was originally published on Lily Ray NYC Substack.
Featured Image: elenabsl/Shutterstock

