27 March 2026

The Dual Audience Problem: Optimising for Humans and AI Search Agents

Bottom Line:

  • AI search agents (Perplexity, Gemini Shopping, Claude) are making product recommendations based on how well your pages are structured, not how they look
  • Most product pages are built for visual scanning - they're opaque to agents parsing structured data
  • The fix overlaps almost entirely with SEO best practice: clean schema, structured descriptions, consistent pricing, fast response times

A customer searches "best waterproof running jacket under €150" on Perplexity. They get a list of five products with names, prices, and direct links. The AI made the selection. The click goes straight to the product page.

Your product wasn't on that list. Not because it doesn't qualify - it does. Because your page wasn't structured for the agent to extract it confidently.

How agents parse product pages

Human shoppers scan visually. They look at imagery, skim headlines, check reviews. AI agents parse structure. They read schema markup, extract attributes from structured data, check price consistency, and assess whether the page description matches the product name and URL.

Most ecommerce product detail pages (PDPs) are built for the visual scan. Specifications buried in expandable tabs. Key attributes embedded in unstructured paragraph copy. Schema markup that's partial or out of date. A human works around this with pattern recognition. An agent either extracts what it needs cleanly or moves to the next result.

A product that converts well when a human lands on it won't surface in AI-mediated search if the structured data is thin. You're invisible to a growing distribution channel - and the fix isn't creative. It's data architecture.

What agents need from your pages

Complete schema markup. Product schema (schema.org/Product) with all material fields: name, description, price, availability, brand, SKU, and aggregate rating. Missing fields reduce the agent's confidence and push your product down the ranking.
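
As a minimal sketch of what "complete" looks like - every value below is a placeholder, not real catalogue data - here is a schema.org/Product payload built as a Python dict and serialized to JSON-LD:

```python
import json

# Minimal schema.org/Product covering the fields listed above.
# All values are placeholders; swap in your own catalogue data.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Stratus Waterproof Running Jacket",
    "description": "3-layer waterproof running jacket, 285 g, "
                   "20,000 mm hydrostatic head, packable.",
    "sku": "STR-WRJ-001",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "offers": {
        "@type": "Offer",
        "price": "129.00",
        "priceCurrency": "EUR",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "212",
    },
}

# Embed the output in a <script type="application/ld+json"> tag
# in the page head.
print(json.dumps(product_schema, indent=2))
```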

Front-loaded descriptions. Prose that opens with core attributes - material, weight, key use case, dimensions - extracts cleanly. A description that opens with brand story and buries specs three paragraphs in is optimised for browsing, not parsing.
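
To illustrate the difference - both strings below are invented, not copy from any real PDP:

```python
# Front-loaded: an agent extracting material, weight, and rating
# finds everything in the first sentence.
parse_friendly = (
    "Waterproof running jacket: 3-layer ripstop shell, 20,000 mm "
    "hydrostatic head, 285 g, packable into its own chest pocket. "
    "Built for wet-weather road and trail running."
)

# Buried: the same attributes sit behind brand story. A human skims
# past this; an agent may move on before reaching the specs.
browse_friendly = (
    "Ever since our founders got caught in a downpour, we've obsessed "
    "over keeping runners dry... (specs appear three paragraphs later)"
)
```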

Price and availability consistency. If the price in your schema differs from the rendered price - common with dynamic pricing and promotional mechanics - agents flag the inconsistency. Some won't surface the product at all. Audit schema against rendered content regularly, especially during sale periods.
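
A minimal audit sketch, assuming requests and BeautifulSoup are installed; the ".price" selector is a placeholder for wherever your theme renders the price:

```python
import json
import requests
from bs4 import BeautifulSoup

def audit_price(url: str, rendered_price_selector: str = ".price"):
    """Compare the price declared in JSON-LD against the rendered price.

    The CSS selector is store-specific; ".price" is a placeholder.
    """
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # Collect prices declared in JSON-LD blocks. This sketch handles
    # the single-Offer case; "offers" may also be a list.
    schema_prices = []
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            data = json.loads(tag.string or "")
        except json.JSONDecodeError:
            continue
        for node in data if isinstance(data, list) else [data]:
            offer = node.get("offers", {}) if isinstance(node, dict) else {}
            if isinstance(offer, dict) and "price" in offer:
                schema_prices.append(str(offer["price"]))

    rendered = soup.select_one(rendered_price_selector)
    rendered_text = rendered.get_text(strip=True) if rendered else None

    print(url, "schema:", schema_prices, "rendered:", rendered_text)
    return schema_prices, rendered_text
```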

Response time under 2 seconds. Agents crawl at scale. Pages that respond slowly get skipped. Your LCP (Largest Contentful Paint) isn't just a conversion metric - it's a crawlability metric.
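
A quick spot-check across a sample of pages - the URLs below are placeholders - using requests, where response.elapsed (time from sending the request to receiving the response headers) stands in for what an agent's fetcher experiences:

```python
import requests

# Placeholder URLs; swap in a sample of your own product pages.
urls = [
    "https://example.com/products/stratus-jacket",
    "https://example.com/products/trail-shorts",
]

for url in urls:
    try:
        resp = requests.get(url, timeout=5)
        # Time from request sent to response headers received - a proxy
        # for the server response time a crawling agent sees.
        seconds = resp.elapsed.total_seconds()
        flag = "SLOW" if seconds > 2.0 else "ok"
        print(f"{flag:>4}  {seconds:5.2f}s  {url}")
    except requests.exceptions.Timeout:
        print(f"SKIP  timeout  {url}")
```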

One strategy, not two

The overlap between AI readability and SEO is nearly complete. Clean schema improves both organic ranking and agent extraction. Structured descriptions improve both human readability and parsing accuracy. Fast response times improve both bounce rate and crawl frequency.

You don't need a separate AI optimisation workstream. You need the same infrastructure work done with a clearer understanding of why it matters - which now includes a channel that selects products on your behalf before the customer opens a browser.

Structure your product data for the agent. The human experience improves as a side effect.

Want to see if this applies to your store?