For the past decade, image SEO was largely a matter of technical hygiene:
- Compressing JPEGs to appease impatient visitors.
- Writing alt text for accessibility.
- Implementing lazy loading to keep LCP scores in the green.
While these practices remain foundational to a healthy site, the rise of large multimodal models such as ChatGPT and Gemini has introduced new possibilities and challenges.
Multimodal search embeds different content types – text, images, audio, and video – into a shared vector space.
We are now optimizing for the “machine gaze.”
Generative search makes most content machine-readable by segmenting media into chunks and extracting text from visuals through optical character recognition (OCR).
Images must be legible to the machine eye.
If an AI cannot parse the text on product packaging due to low contrast or hallucinates details because of poor resolution, that is a serious problem.
This article deconstructs the machine gaze, shifting the focus from loading speed to machine readability.
Technical hygiene still matters
Before optimizing for machine comprehension, we must respect the gatekeeper: performance.
Images are a double-edged sword.
They drive engagement but are often the primary cause of layout instability and slow speeds.
The standard for “good enough” has moved beyond WebP; next-generation formats such as AVIF now deliver smaller files at comparable quality.
Once the asset loads, the real work begins.
Dig deeper: How multimodal discovery is redefining SEO in the AI era
Designing for the machine eye: Pixel-level readability
To large language models (LLMs), images, audio, and video are sources of structured data.
They use a process called visual tokenization to break an image into a grid of patches, or visual tokens, converting raw pixels into a sequence of vectors.
This unified modeling allows AI to process “a picture of a [image token] on a table” as a single coherent sentence.
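To make this concrete, here is a minimal sketch of patch-based tokenization, assuming a 224 x 224 input and 16 x 16 patches – common vision-transformer defaults, not the exact parameters any particular search engine uses.

```python
# Minimal patch-tokenization sketch (illustrative only; real models then
# project each patch through a learned embedding layer).
import numpy as np
from PIL import Image

PATCH = 16  # assumed patch size

img = Image.open("product.jpg").convert("RGB").resize((224, 224))
pixels = np.asarray(img)  # shape: (224, 224, 3)

# Cut the image into a 14 x 14 grid of 16 x 16 patches.
grid = 224 // PATCH
patches = pixels.reshape(grid, PATCH, grid, PATCH, 3)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, PATCH * PATCH * 3)

print(patches.shape)  # (196, 768): 196 "visual tokens", each a 768-value vector
```

Each of those 196 vectors becomes a token the model attends to alongside ordinary text tokens.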
These systems rely on OCR to extract text directly from visuals.
This is where quality becomes a ranking factor.
If an image is heavily compressed with lossy artifacts, the resulting visual tokens become noisy.
Poor resolution can cause the model to misinterpret those tokens, leading to hallucinations in which the AI confidently describes objects or text that do not actually exist because the “visual words” were unclear.
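A quick, low-tech way to see this in practice is to OCR an original image and a heavily compressed copy, then compare the output. The sketch below assumes the open-source Tesseract engine via pytesseract; "label.png" is a placeholder filename.

```python
# Compare OCR output before and after aggressive lossy compression.
from PIL import Image
import pytesseract  # requires the Tesseract binary to be installed

original = Image.open("label.png").convert("RGB")
original.save("label_q10.jpg", quality=10)  # simulate heavy compression
compressed = Image.open("label_q10.jpg")

print("Original OCR:  ", pytesseract.image_to_string(original).strip())
print("Compressed OCR:", pytesseract.image_to_string(compressed).strip())
```

If words drop out or mutate in the second pass, a multimodal model is seeing the same noise.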
Reframing alt text as grounding
For large language models, alt text serves a new function: grounding.
It acts as a semantic signpost that forces the model to resolve ambiguous visual tokens, helping confirm its interpretation of an image.
As Zhang, Zhu, and Tambe noted:
- “By inserting text tokens near relevant visual patches, we create semantic signposts that reveal true content-based cross-modal attention scores, guiding the model.”
Tip: By describing the physical aspects of the image – the lighting, the layout, and the text on the object – you provide the high-quality training data that helps the machine eye correlate visual tokens with text tokens.
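As an illustration (the product details below are invented), compare a keyword-only alt attribute with one that grounds the visual tokens:

```python
# Keyword-only alt text gives the model almost nothing to anchor on.
weak_alt = "leather watch"

# Grounded alt text describes lighting, layout, and on-object text.
grounded_alt = (
    "Blue leather-strap watch with a brushed brass bezel, shot top-down on a "
    "walnut desk in warm side lighting; the dial reads 'Automatic 20 ATM'."
)

print(f'<img src="watch-hero.jpg" alt="{grounded_alt}">')
```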
The OCR failure points audit
Search agents like Google Lens and Gemini use OCR to read ingredients, instructions, and features directly from images.
They can then answer complex user queries.
As a result, image SEO now extends to physical packaging.
Current labeling regulations – FDA 21 CFR 101.2 and EU Regulation 1169/2011 – allow type sizes as small as 4.5 pt to 6 pt, with an x-height of just 0.9 mm, on compact packaging.
- “In case of packaging or containers the largest surface of which has an area of less than 80 cm², the x-height of the font size referred to in paragraph 2 shall be equal to or greater than 0.9 mm.”
While this satisfies the human eye, it fails the machine gaze.
The minimum size and contrast required for OCR-readable text are far higher. Character height should be at least 30 pixels, and the contrast between text and background should span at least 40 grayscale values.
Be wary of stylized fonts, which can cause OCR systems to mistake a lowercase “l” for a “1” or a “b” for an “8.”
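For a rough pre-flight check against those two thresholds, you can measure detected character heights and grayscale spread yourself. This sketch uses pytesseract and NumPy; the measurement approach is a heuristic, not an official standard, and "packshot.jpg" is a placeholder.

```python
# Heuristic check for minimum character height and grayscale contrast.
import numpy as np
import pytesseract  # requires the Tesseract binary to be installed
from PIL import Image

img = Image.open("packshot.jpg")
gray = np.asarray(img.convert("L"))

data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
heights = [int(h) for h, word in zip(data["height"], data["text"]) if word.strip()]

min_height = min(heights) if heights else 0
# Approximate contrast as the spread between dark and light pixels.
contrast = int(np.percentile(gray, 95) - np.percentile(gray, 5))

print(f"Smallest detected text height: {min_height}px (target: 30px or more)")
print(f"Grayscale contrast: {contrast} (target: 40 values or more)")
```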
Beyond contrast, reflective finishes create additional problems.
Glossy packaging reflects light, producing glare that obscures text.
Packaging should be treated as a machine-readability feature.
If an AI cannot parse a packaging photo because of glare or a script font, it may hallucinate information or, worse, omit the product entirely.
Originality as a proxy for experience and effort
Originality can feel like a subjective creative trait, but to a machine it is a measurable data point.
Original images act as a canonical signal.
The Google Cloud Vision API includes a feature called WebDetection, which returns fullMatchingImages (exact duplicates found across the web) and pagesWithMatchingImages (pages that embed a matching image).
If your URL has the earliest index date for a unique set of visual tokens (i.e., a specific product angle), Google credits your page as the origin of that visual information, boosting its “experience” score.
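If you want to check this yourself, a minimal sketch with the google-cloud-vision Python client looks like the following. It assumes the client library is installed and credentials are configured; the filename is a placeholder.

```python
# List exact duplicates and pages that embed a given image via Web Detection.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("hero-shot.jpg", "rb") as f:
    image = vision.Image(content=f.read())

web = client.web_detection(image=image).web_detection

print("fullMatchingImages (exact duplicates):")
for match in web.full_matching_images:
    print("  ", match.url)

print("pagesWithMatchingImages (pages embedding the image):")
for page in web.pages_with_matching_images:
    print("  ", page.url)
```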
Dig deeper: Visual content and SEO: How to use images and videos
The co-occurrence audit
AI identifies every object in an image and uses the relationships between them to infer attributes such as brand, price point, and target audience.
This makes product adjacency a ranking signal. To evaluate it, you need to audit your visual entities.
You can test this using tools such as the Google Vision API.
For a systematic audit of an entire media library, you need to pull the raw JSON using the OBJECT_LOCALIZATION feature.
The API returns object labels such as “watch,” “plastic bag” and “disposable cup.”
Google provides this example, where the API returns the following information for the objects in the image:
| Name | mid | Score | Bounds |
| --- | --- | --- | --- |
| Bicycle wheel | /m/01bqk0 | 0.89648587 | (0.32076266, 0.78941387), (0.43812272, 0.78941387), (0.43812272, 0.97331065), (0.32076266, 0.97331065) |
| Bicycle | /m/0199g | 0.886761 | (0.312, 0.6616471), (0.638353, 0.6616471), (0.638353, 0.9705882), (0.312, 0.9705882) |
| Bicycle wheel | /m/01bqk0 | 0.6345275 | (0.5125398, 0.760708), (0.6256646, 0.760708), (0.6256646, 0.94601655), (0.5125398, 0.94601655) |
Good to know: mid contains a machine-generated identifier (MID) corresponding to a label’s Google Knowledge Graph entry.
The API does not know whether this context is good or bad.
You do, so check whether the visual neighbors are telling the same story as your price tag.
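A simple way to run that check at scale is to loop your media library through the object localization feature and log each image's visual neighbors. This is a sketch assuming the google-cloud-vision Python client; the file paths are placeholders.

```python
# Log co-occurring objects for each image so you can review the visual context.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

for path in ["lifestyle-01.jpg", "lifestyle-02.jpg"]:
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    objects = client.object_localization(image=image).localized_object_annotations
    neighbors = [f"{obj.name} ({obj.score:.2f})" for obj in objects]
    print(path, "->", ", ".join(neighbors))
```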
By photographing a blue leather watch next to a vintage brass compass and a warm wood-grain surface, Lord Leathercraft engineers a specific semantic signal: heritage exploration.
The co-occurrence of analog mechanics, aged metal, and tactile suede suggests a persona of timeless adventure and old-world sophistication.
Photograph that same watch next to a neon energy drink and a plastic digital stopwatch, and the narrative shifts through dissonance.
The visual context now signals mass-market utility, diluting the entity’s perceived value.
Dig deeper: How to make products machine-readable for multimodal AI search
Quantifying emotional resonance
Beyond objects, these models are increasingly adept at reading sentiment.
APIs such as Google Cloud Vision can quantify emotional attributes by assigning likelihood ratings to emotions like “joy,” “sorrow,” and “surprise” detected in human faces.
This creates a new optimization vector: emotional alignment.
If you are selling fun summer outfits, but the models appear moody or neutral – a common trope in high-fashion photography – the AI may de-prioritize the image for that query because the visual sentiment conflicts with search intent.
For a quick spot check without writing code, use Google Cloud Vision’s live drag-and-drop demo to review the four primary emotions: joy, sorrow, anger, and surprise.
For positive intents, such as “happy family dinner,” you want the joy attribute to register as VERY_LIKELY.
If it reads POSSIBLE or UNLIKELY, the signal is too weak for the machine to confidently index the image as happy.
For a more rigorous audit:
- Run a batch of images through the API.
- Look specifically at the faceAnnotations object in the JSON response by sending a FACE_DETECTION feature request.
- Review the likelihood fields.
The API returns these values as enums or fixed categories.
This example comes directly from the official documentation:
"rollAngle": 1.5912293,
"panAngle": -22.01964,
"tiltAngle": -1.4997566,
"detectionConfidence": 0.9310801,
"landmarkingConfidence": 0.5775582,
"joyLikelihood": "VERY_LIKELY",
"sorrowLikelihood": "VERY_UNLIKELY",
"angerLikelihood": "VERY_UNLIKELY",
"surpriseLikelihood": "VERY_UNLIKELY",
"underExposedLikelihood": "VERY_UNLIKELY",
"blurredLikelihood": "VERY_UNLIKELY",
"headwearLikelihood": "POSSIBLE"
The API grades emotion on a fixed scale.
The goal is to move primary images from POSSIBLE to LIKELY or VERY_LIKELY for the target emotion.
- UNKNOWN (data gap)
- VERY_UNLIKELY (strong negative signal)
- UNLIKELY
- POSSIBLE (neutral or ambiguous)
- LIKELY
- VERY_LIKELY (strong positive signal – target this)
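For the batch audit described above, a minimal sketch with the google-cloud-vision Python client might look like this. The filenames are placeholders, and the "REVIEW" label is simply an arbitrary flag for images that miss the target.

```python
# Batch FACE_DETECTION audit: flag images whose joy likelihood misses the target.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
TARGET = {"LIKELY", "VERY_LIKELY"}

for path in ["summer-outfit-01.jpg", "summer-outfit-02.jpg"]:
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    for face in client.face_detection(image=image).face_annotations:
        joy = face.joy_likelihood.name  # e.g., "POSSIBLE", "VERY_LIKELY"
        flag = "OK" if joy in TARGET else "REVIEW"
        print(f"{path}: joyLikelihood={joy} -> {flag}")
```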
Use these benchmarks
You cannot optimize for emotional resonance if the machine can barely see the human.
If detectionConfidence is below 0.60, the AI is struggling to identify a face.
As a result, any emotion readings tied to that face are statistically unreliable noise.
- 0.90+ (Ideal): High-definition, front-facing, well-lit. The AI is certain. Trust the sentiment score.
- 0.70-0.89 (Acceptable): Good enough for background faces or secondary lifestyle shots.
- < 0.60 (Failure): The face is likely too small, blurry, side-profile, or blocked by shadows or sunglasses.
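To apply those bands consistently across an audit, a small helper like the sketch below can grade each face before you trust its emotion reading. The thresholds mirror the editorial benchmarks above.

```python
# Grade a face's detection confidence before trusting its sentiment values.
# The cutoffs are heuristics from this article, not API-documented thresholds.
def sentiment_reliability(detection_confidence: float) -> str:
    if detection_confidence >= 0.90:
        return "ideal: trust the sentiment score"
    if detection_confidence >= 0.70:
        return "acceptable: fine for secondary or background faces"
    if detection_confidence >= 0.60:
        return "borderline: verify manually"
    return "failure: treat the emotion readings as noise"

print(sentiment_reliability(0.93))  # ideal: trust the sentiment score
```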
While Google documentation does not provide this guidance, and Microsoft offers limited access to its Azure AI Face service, Amazon Rekognition documentation notes that:
- “[A] lower threshold (e.g., 80%) might suffice for identifying family members in photos.”
Closing the semantic gap between pixels and meaning
Treat visual assets with the same editorial rigor and strategic intent as primary content.
The semantic gap between image and text is disappearing.
Images are processed as part of the language sequence.
The quality, clarity, and semantic accuracy of the pixels themselves now matter as much as the keywords on the page.