ChatGPT’s Default & Premium Models Search The Web Differently

Ask ChatGPT’s default and premium models the same question, and they’ll cite almost entirely different sources, according to a Writesonic analysis.

GPT-5.4 Thinking, ChatGPT’s premium model, sent 56% of its citations to brand websites. GPT-5.3 Instant, the default for all logged-in ChatGPT users, sent 8%.

Across all prompts, the two models shared only 7% of their cited sources. The reason comes down to how each model searches the web before answering.

Same Question, Different Search Strategy

When the models were asked about CRM software, GPT-5.3 sent one broad query and cited techradar.com and designrevision.com. GPT-5.4 sent separate queries restricted to hubspot.com, salesforce.com, and attio.com for pricing, then checked g2.com and capterra.com for reviews.

GPT-5.4 averaged 8.5 sub-queries per prompt, many of them restricted to specific domains, and used site: operators in 156 of its 423 total queries. No other ChatGPT model tested used site: operators at all.

OpenAI’s documentation says ChatGPT search rewrites prompts, but doesn’t explain how models decide which domains to target or when to use site: operators.
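The fan-out pattern described above can be sketched as plain search strings. In this sketch the sub-query wording is invented for illustration; only the site: operator syntax and the target domains come from the report’s CRM example.

```python
# Hypothetical sketch of GPT-5.4-style query fan-out for a CRM prompt.
# The exact wording of each sub-query is an assumption; only the site:
# syntax and the domains appear in the Writesonic report.
sub_queries = [
    "CRM pricing site:hubspot.com",
    "CRM pricing site:salesforce.com",
    "CRM pricing site:attio.com",
    "CRM reviews site:g2.com",
    "CRM reviews site:capterra.com",
]

# Count how many of the fan-out queries are restricted to a single domain.
restricted = [q for q in sub_queries if "site:" in q]
print(f"{len(restricted)} of {len(sub_queries)} queries use site: operators")
```

Each domain-restricted string returns results from only that site, which is how a model can pull a vendor’s own pricing page rather than whatever ranks first for the broad query.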

Where The Citations Land

GPT-5.3 leaned heavily on third-party content. Blog posts and articles made up 32% of its citations, with Forbes (15 citations), TechRadar (10), and Tom’s Guide (10) as the top domains.

GPT-5.4 went the other way. Brand homepages accounted for 22% of citations, pricing pages 19%, and product pages 10%.

GPT-5.3 cited four pricing pages across all 49 conversations that triggered web search. GPT-5.4 cited 138. For brands that gate pricing behind a “contact sales” page, this could mean GPT-5.4 has less to work with when answering comparison queries.

On head-to-head comparison prompts like “HubSpot vs Salesforce vs Pipedrive,” GPT-5.3 never cited a brand website. GPT-5.4 cited brands 83% to 100% of the time on those same prompts.

How This Connects To Search Rankings

Writesonic used SerpAPI to check whether cited domains also appeared in Google and Bing results for the same query.

For GPT-5.3, 47% of cited domains also appeared in Google results. The overlap suggests that Google rankings are at least partially predictive for the default model.

For GPT-5.4, 75% of cited domains didn’t appear in Google or Bing results for the same user prompt. That suggests GPT-5.4 may rely less on traditional search rankings and more on targeted domain queries, though that hasn’t been independently verified.
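The overlap measurement behind these percentages reduces to simple set arithmetic over domains. The domain lists below are invented placeholders, not Writesonic’s data, which compared per-query results via SerpAPI.

```python
# Sketch of the citation/SERP overlap check with placeholder domains.
# Real analysis would extract domains from ChatGPT citations and from
# Google/Bing results for the same prompt, then intersect the sets.
cited_domains = {"hubspot.com", "salesforce.com", "attio.com", "g2.com"}
serp_domains = {"techradar.com", "g2.com", "forbes.com"}

overlap = cited_domains & serp_domains          # domains in both sets
overlap_pct = 100 * len(overlap) / len(cited_domains)
missing_pct = 100 - overlap_pct                 # cited but not ranking
print(f"{overlap_pct:.0f}% of cited domains also rank; "
      f"{missing_pct:.0f}% do not")
```

A high `missing_pct` is what the study reads as evidence that the model is not simply citing whatever the search engine ranks.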

Why This Matters

Brand visibility in ChatGPT may depend on which model a user is running.

For the default model, third-party coverage on review sites and media outlets appears to drive citations. For the premium model, first-party content, particularly pricing and product pages, appears to matter more.

Looking Ahead

As ChatGPT continues rolling out new models, the patterns identified here may change.

Most cited URLs in the test sample included utm_source=chatgpt.com, giving brands a way to measure referral traffic directly in analytics.
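Flagging that traffic takes only a query-string check on landing-page URLs. The example URLs below are made up; the `utm_source=chatgpt.com` parameter is the one the study observed.

```python
# Minimal sketch: flag ChatGPT referrals in a list of landing-page URLs
# by checking for the utm_source=chatgpt.com query parameter.
from urllib.parse import urlparse, parse_qs

def is_chatgpt_referral(url: str) -> bool:
    params = parse_qs(urlparse(url).query)
    return "chatgpt.com" in params.get("utm_source", [])

urls = [
    "https://example.com/pricing?utm_source=chatgpt.com",
    "https://example.com/pricing?utm_source=newsletter",
    "https://example.com/blog",
]
chatgpt_hits = [u for u in urls if is_chatgpt_referral(u)]
print(len(chatgpt_hits))  # → 1
```

Most analytics platforms expose the same parameter as a source/medium dimension, so no custom parsing is required there; the snippet is for raw log data.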


