Can a fake brand win in AI search? New experiment says yes

In November 2024, together with SE Ranking’s analytics team, we started a 16-month experiment to test how AI-generated content performs in organic search. We launched 20 websites across different niches and tracked their performance over time.

But we didn’t stop there.

We wanted to look beyond rankings and understand how AI systems discover, interpret, and cite information. So we expanded the project into a more ambitious set of experiments on AI search and LLM visibility.

For the next phase, we created a new fictional brand in a real niche with real competition to see how quickly AI systems would pick it up and whether it could be cited alongside or above trusted industry leaders and authoritative sources.

After the first month, several patterns became clear.

Methodology behind the experiment

We created a fictional brand and published content about it across:

  • A brand-new website representing the brand, registered specifically for the experiment.
  • 11 additional domains, all over a year old, with prior history and existing rankings.

Across these sites, we tested seven content formats:

  • Deep guides.
  • “Alternatives” listicles.
  • “Best of” listicles.
  • Review articles.
  • Comparison (“vs”) pages.
  • How-to/tutorial content.
  • Clickbait-style articles.

We started publishing in March 2026 and tracked how five AI systems responded: ChatGPT, Google’s AI Overviews, Google’s AI Mode, Perplexity, and Gemini.

In total, we tracked 825 prompts across different query types and scenarios, which generated 15,835 AI answers during the first month.

For each prompt, we looked at three things:

  • Whether our brand (or one of our sites) appeared in the AI answer
  • Whether it was cited as a source
  • How often it appeared as the first cited source (position 1)
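
As a rough sketch, the per-answer check could look like this in Python. The `AIAnswer` fields, brand name, and domains below are hypothetical stand-ins, not our actual tracking setup:

```python
from dataclasses import dataclass

@dataclass
class AIAnswer:
    text: str        # full answer text returned by the AI engine
    citations: list  # cited source domains, in the order shown

def score_answer(answer: AIAnswer, brand: str, our_domains: set) -> dict:
    """Score one AI answer against the three per-prompt criteria."""
    mentioned = brand.lower() in answer.text.lower()
    cited = any(domain in our_domains for domain in answer.citations)
    first = bool(answer.citations) and answer.citations[0] in our_domains
    return {"mentioned": mentioned, "cited": cited, "position_1": first}

# Example: a branded answer that cites our (hypothetical) main site first
ans = AIAnswer(
    text="BrandX is a niche analytics tool launched in 2025.",
    citations=["brandx.example", "reviewsite.example"],
)
print(score_answer(ans, "BrandX", {"brandx.example"}))
# {'mentioned': True, 'cited': True, 'position_1': True}
```

Aggregating these three flags across all 825 prompts and five engines is what produces the visibility figures discussed below.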

This experiment is still ongoing, and the first month was designed to see how AI systems respond to newly created, fully accessible information tied to a fictional brand.

Key experiment insights

  • 96% of all AI visibility for our fake brand came from branded searches. Even in a real niche with relatively low competition, a completely new domain had little chance of competing with established brands for broader, non-branded topics.
  • On queries that only our fake brand could realistically answer, we outperformed established competitors (DT 40+) by as much as 32x and achieved near-exclusive visibility in less than 30 days.
  • Even without strong authority, the pages that clearly explained who we were, what we offered, and how we were different (e.g., “[Brand Name] Complete Guide” and “About Us”) became the most cited sources from the main domain. This shows that brand positioning can be shaped early in AI search.
  • Perplexity was the fastest engine to surface new content. Newly published pages usually reached position #1 within 1–3 days of indexation. However, Perplexity often cited additional domains instead of the main brand site.
  • Google’s AI Mode was the most stable for branded queries tied to unique claims (showing our brand at #1 for an average of 90% of prompts).
  • Gemini, by contrast, often misidentified the brand. Even for uniquely branded queries, it provided 60% of AI answers with no citations to our brand.
  • Deep guides, review articles, and comparison pages generated the highest number of AI citations, while more generic formats like how-to articles and listicles showed minimal impact.
  • A topical silo made up of one hub page and 10 supporting articles generated no AI citations. Meanwhile, a set of 30 short, repetitive pages (500–750 words each) generated more than 1,800 citations. So, in this test, high-volume content publishing mattered more than internal linking.

Insight 1: A new brand’s AI visibility is almost entirely branded

One of the clearest takeaways from the first month is that a brand-new site has limited chances of competing for broader, non-branded topics, even in a niche with relatively low competition.

AI systems did pick up our fictional brand quickly, but most of that visibility came when the query was already associated with the brand itself, whether through:

  • the brand name
  • product-specific claims
  • or other brand-related angles

Specifically, out of all AI answers, 96% (15,553 out of 15,835) came from branded searches.

Non-branded informational queries produced just 4% of AI answers in total, and even those mostly came through our supporting test domains.

The pattern was even stronger on the main fictional brand site itself. There, we recorded:

  • 10,253 AI answers for branded queries
  • and just 6 for non-branded ones

That is a 1,700x difference.
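
That headline ratio is easy to verify from the raw counts:

```python
# Branded vs. non-branded AI answers on the main brand site.
branded_answers = 10_253
non_branded_answers = 6

ratio = branded_answers / non_branded_answers
print(f"~{ratio:,.0f}x")  # ~1,709x, i.e. roughly the 1,700x cited
```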

This feels familiar because it mirrors classic SEO. New brands still need time to earn trust, build recognition, and compete for broader topics. When AI systems answer general industry questions, they tend to rely on established and authoritative sources.

This is why the strongest results in our experiment came from prompts tied to information only our brand could answer, such as how the product works, how often it updates, and so on.

These queries alone generated 11,430 AI answers with citations to our brand, accounting for 72% of all visibility in the experiment.

The reason is simple: there is no competition.

If a query is something like “Was [Brand Name] originally built as an internal tool?”, only one source can realistically answer it. AI systems don’t need to compare sources, evaluate authority, or resolve conflicts.

That gave our fictional brand a major advantage. Even with no domain authority, it outperformed established competitors (DT 40+) by up to 32x on these queries.

What all this means for marketers and business owners is that when users ask about your brand, AI systems are likely to rely on your website as one of the main sources of information. So, the content they cite should be fully aligned with how you want your brand to be positioned.

Our experiment supports this. The “Complete Guide” page on the main site appeared in 1,799 AI answers (the highest result in the dataset), largely because it consolidated key brand information in one place. The “About Us” page followed with 1,500 AI answers. Together, these were the most cited URLs from our main domain, with LLMs relying on them 3–5 times more often than the additional domains.

In practice, AI systems may learn about your brand quickly, but what they learn depends on what you publish. Your core pages should clearly answer all the questions that matter for your brand: who you are, what you offer, and how you’re different.

This way, you can start shaping your narrative in LLMs even as a new or small brand, before you have the authority to compete for broader industry topics.

Insight 2: AI engines behave very differently

Another strong pattern in the experiment is that the five AI systems don’t behave alike. They vary not just in how often they mention the fictional brand, but in how quickly they pick it up, how consistently they cite it, and which domains they prefer as sources.

Google’s AI Mode: The most stable for branded visibility

Google AI Mode was the most reliable engine in the dataset.

Throughout the experiment, it placed our domain in position 1 for branded queries in about 90% of cases. Unlike other engines, it didn’t show major fluctuations or dependency on other test domains.

If there was one place where direct brand visibility was predictable, this was it.

Google’s AI Overviews: High visibility, lower consistency

Google’s AI Overviews also surfaced our tested domain for branded queries, but the pattern was less consistent.

We saw our brand appear in position 1 for 14 days for some prompts, followed by a drop mid-month that didn’t recover. More broadly, mentions and links for branded queries fluctuated heavily, appearing and disappearing several times a week.

Yet when links were included, it accurately described the brand. When no links were shown, it often claimed there was no public information available.

The takeaway here is not that AI Overviews failed to recognize the brand; it did. Rather, that visibility was harder to sustain over time.

Perplexity: The fastest to pick up new content, but not always brand-first

Perplexity was the breakout engine for fresh content.

It picked up newly indexed pages within 1–3 days, which clearly made it the primary driver of early visibility in our experiment.

But this speed comes with a tradeoff.

Instead of consistently citing pages from our main domain, Perplexity often used our supporting test domains as sources.

In early March, our main brand held position 1. But as we published more content on supporting domains, those domains gradually replaced it in AI citations.

By the end of the month, six different domains were being cited: our main brand site and five supporting test domains where we had published additional content about the fake brand.

So while Perplexity increases overall visibility, it doesn’t always send that visibility directly to the main brand site.

ChatGPT: Slower to react, stronger over time

ChatGPT showed the most noticeable growth over time.

At the beginning of March, there were no links or mentions of our brand at all. But as the month progressed, visibility gradually increased.

This growth was especially clear across specific content types:

  • Unique claims drove the strongest performance, accounting for the majority of visibility, with around 70% of citations appearing in position 1.
  • Review articles started with zero presence but quickly gained traction, reaching consistent position 1 rankings by March 17.
  • Comparison (“vs”) articles achieved the highest consistency overall, with mentions on 29 out of 31 days by the end of the month.

Overall, ChatGPT didn’t immediately recognize the brand. But once it did, it began surfacing the brand frequently, especially for branded prompts.

Gemini: Weakest performance and most inconsistent behavior

Gemini was the weakest engine in the dataset and the least consistent.

Initially, it struggled to identify our niche correctly. However, the results improved when we changed how we asked the questions. When prompts were framed as comparisons (“X vs Y”) or reviews, Gemini was more likely to recognize the brand correctly.

Even then, the results were still limited. In the best-performing scenario (queries based on unique claims about the brand), Gemini failed to include any citations to our brand in about 60% of responses.

Insight 3: Content format matters, but so does volume

Next, for this experiment, we tested seven different content types across both our main site and supporting test sites.

And what we found is that comprehensive, in-depth content earns far more AI citations than shorter articles.

The strongest-performing formats were:

  • Deep guides (5,000–6,000 words): ~900 AI answers per page
  • Review articles: ~257 AI answers per page
  • Comparison (“vs”) articles: ~145 AI answers per page

This doesn’t mean there’s one ideal content length or that longer pages automatically perform better. The stronger results likely came from the depth, structure, and completeness of the information these formats provided.

This finding also aligns with our broader research, where we’ve seen that detailed, well-structured content performs better across platforms like AI Mode and ChatGPT.

Pages with narrower or less comprehensive coverage generated fewer citations overall. For example:

  • How-to articles/tutorials: 22 AI answers per page
  • Clickbait/skeptical articles: 19
  • “Best of” listicles: 11
  • “Alternatives” listicles: 4

As part of the experiment, we also tested a “spam” approach: publishing 30 thin pages (500–750 words each) on one of our test domains.

Individually, these pages were weak (averaging just 63 AI answers per page).

But collectively, they generated 1,897 total AI answers, which made this the highest-performing content setup at the domain level.
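
A quick back-of-the-envelope check of that volume math (the deep-guide figure is the approximate per-page average from the format comparison above):

```python
# Many weak pages can out-total one strong page at the domain level.
thin_pages = 30
total_thin_answers = 1897

per_page = total_thin_answers / thin_pages
print(f"~{per_page:.0f} answers per thin page")  # matches the ~63 average

deep_guide_per_page = 900  # approximate per-page figure for deep guides
print(total_thin_answers > deep_guide_per_page)  # True: volume wins on totals
```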

However, thin content isn’t inherently “better” because of this result. It just shows that volume can sometimes compensate for quality by increasing the likelihood of retrieval and citation (especially in AI engines like Perplexity that prioritize freshness).

In simple terms, a few strong pages win on quality, but a lot of weaker pages can still win on overall exposure.

Insight 4: Topical clustering alone doesn’t produce AI visibility

One of the most useful negative findings came from the content structure test.

For this part of the experiment, we created a hub page on one of our test domains and linked it to 10 supporting articles. In theory, this setup should have built strong topical depth and semantic reinforcement. All 11 pages were indexed, properly structured, and internally linked.

Yet they generated zero AI citations.

This is important because it challenges a common assumption carried over from traditional SEO: that topical clustering automatically improves authority or increases the likelihood of being retrieved.

At least in this experiment, it didn’t.

That doesn’t mean topic clusters are useless. It means they aren’t sufficient on their own. Internal linking and semantic breadth may help a search engine understand a site, but AI systems still need a reason to retrieve and cite a specific page for a specific answer.

So, do AI engines reward entity coherence more than fact verification?

Even within just one month, the results point to a clear conclusion:

AI systems appear to respond more strongly to consistency, repetition, and availability than to strict verification.

That shouldn’t be overstated. It isn’t that LLMs “believe anything.” But if a claim is:

  • Structured clearly
  • Repeated across related pages
  • Phrased like a fact
  • Available in retrievable source environments

then AI systems may surface it surprisingly easily.

We also observed this in manual checks of LLM responses in AI Results Tracker. For prompts such as “is [brand] worth it,” some systems responded positively and recommended using our completely unknown fictional brand.

It may not be that LLMs automatically favor every new brand. In some cases, when very little negative information exists, a system may fill the gap with a neutral or positive-sounding response based on the limited signals available.

But the result is the same: if a completely fictional brand can generate consistent citations and favorable recommendations under certain conditions, then brand narratives in AI search may be more flexible than they seem.

Final thoughts

The most important outcome of this experiment isn’t that a fictional brand achieved visibility.

It’s that visibility followed a repeatable pattern once specific inputs were introduced: branded context, unique claims, varied content formats, and sufficient presence across different sources.

That leads to two important conclusions.

  • AI search is not random. It follows identifiable signals, and those signals can be studied, tested, and influenced.
  • AI is still highly susceptible to manipulation. AI systems don’t have their own sense of truth, verification processes, or critical thinking. The same factors that help legitimate brands become visible can also be used to simulate credibility.

If there’s one lesson here, it’s that you can’t assume AI systems will accurately represent your company, product, or category by default.

You have to actively shape the information environment they rely on.

And this is only the first month of results. We’re continuing to collect data, expand the experiment, and monitor how these patterns change over time.

Contributing authors are invited to create content for Search Engine Land and are chosen for their expertise and contribution to the search community. Our contributors work under the oversight of the editorial team, and contributions are checked for quality and relevance to our readers. Search Engine Land is owned by Semrush. The contributor was not asked to make any direct or indirect mentions of Semrush. The opinions they express are their own.

