Insight / signal
Google AI Search Is Still Search. Most People Will Read That Wrong.
The winning play is not a secret AI SEO trick. It is a website and knowledge base that are crawlable, understandable, useful, specific, structured and usable by both humans and machines.
Google has just published official guidance on optimising for generative AI features in Search.
A lot of people in SEO and marketing are about to read it badly.
The wrong takeaway is: SEO is dead.
The other wrong takeaway is: just add an llms.txt file and you will magically appear in AI answers.
The real takeaway is more boring, and much more useful.
Google AI Search is still Search.
AI Overviews and AI Mode may change the interface, the query behaviour and the user journey, but underneath that layer Google is still relying on crawling, indexing, retrieval, ranking, snippets, page quality and broader trust signals from the web.
That matters because it kills a lot of the current AI search grift in one go.
If your website is technically messy, thin, vague or hard to parse, AI is not going to rescue it. In most cases, AI makes that weakness more expensive.
Your website now has more readers than you think, and most of them are machines.
Google’s AI layer still depends on retrieval
The clearest part of Google’s guidance is that generative AI features in Search are rooted in core Search systems.
In plain English, Google does not just ask a model to improvise an answer from nowhere. It retrieves information from its Search index, grounds the response in that material, then generates a summary or answer with links and citations where appropriate.
That means the first question is still brutally simple:
Can Google retrieve you?
If it cannot crawl the page, you are out.
If it can crawl the page but the content is generic, unclear or low-value, you are probably still out.
If it can crawl the page and the content is genuinely useful, specific and relevant, then you at least have a shot.
That is not glamorous. It is just how this works.
Query fan-out makes lazy keyword pages even weaker
One of the more important implications is query fan-out.
Google’s AI systems do not always stay confined to the exact words a user typed. They can expand outwards and explore adjacent searches around the underlying intent.
So a query like “best CRM for a small agency” may trigger related retrieval around pricing, integrations, team size, automation, reporting, onboarding and switching costs.
That changes the content job.
A page that simply repeats the target phrase 11 times is not helping much. A page that genuinely answers the surrounding problem is much more useful.
This is where “write for humans” is true but incomplete.
You need to write for humans in a way machines can understand.
That means clear headings, clear entities, useful comparisons, specific examples, visible product or service detail, sensible structure and enough context for retrieval systems to understand when your page is relevant.
Not spam.
Not 400 cloned pages.
Not town-name swapping.
Actual topic coverage.
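To make that concrete, here is a hypothetical heading outline for the earlier CRM example. It is a sketch, not a template from Google's guidance; the sections simply mirror the adjacent questions a fan-out query might retrieve.

```html
<!-- Hypothetical outline for a page aimed at "best CRM for a small agency".
     Each heading answers an adjacent question, not a repeat of the keyword. -->
<h1>Choosing a CRM for a small agency: what actually matters</h1>

<h2>Pricing: what you will really pay at 5, 10 and 20 seats</h2>
<h2>Integrations: email, invoicing and project management tools</h2>
<h2>Automation and reporting: what small teams actually use</h2>
<h2>Onboarding: how long it takes to move an existing pipeline</h2>
<h2>Switching costs: getting your data out if you change your mind</h2>
<h2>Worked example: a 7-person agency's setup, with screenshots</h2>
```

None of those headings is keyword stuffing. Each one is an adjacent question with a real answer underneath it.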
Google did not kill llms.txt. It killed the fantasy around it.
This is the bit people are most likely to mangle.
Google says you do not need special AI text files like llms.txt for its generative AI Search features.
That is true.
What is not true is the lazy follow-up claim that llms.txt is therefore pointless.
Those are two different arguments.
llms.txt is not a magic Google AI Overview ranking lever.
It can still be very useful AI infrastructure.
That distinction matters.
A good llms.txt file can help models, assistants, agents, internal RAG tools and non-Google crawlers find the canonical parts of a business faster. It can point them toward the right product pages, pricing, FAQs, docs, policies, explainers, case studies and APIs.
That does not guarantee Google will cite you.
It does make your business easier for machines to understand.
Think of it as signage, not sorcery.
Good signage does not guarantee someone will buy. It does reduce the odds that they, or the machine acting on their behalf, get lost.
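For illustration only, here is a minimal sketch of what that signage could look like as an llms.txt file. It follows the community llms.txt convention, the business and URLs are invented, and nothing in it is required by Google's guidance.

```markdown
# Example Agency

> A B2B web design and CRM implementation agency. The pages below are the
> canonical sources for what we do, what it costs and how to contact us.

## Services
- [CRM implementation](https://example.com/services/crm-implementation): scope, process and typical timelines
- [Website builds](https://example.com/services/websites): platforms, pricing bands and examples

## Pricing and policies
- [Pricing](https://example.com/pricing): current rates and what is included
- [Terms of service](https://example.com/terms)

## Docs and proof
- [Case studies](https://example.com/case-studies): named clients with results
- [FAQ](https://example.com/faq): answers to common pre-sales questions
```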
Technical accessibility and citation worthiness are different problems
This is the distinction that most of the market keeps collapsing into a single blob.
There are at least two different layers here.
First, technical AI accessibility.
This is about whether systems can find, parse and use your information.
That includes crawlable HTML, sensible headings, clean rendering, usable navigation, structured data where it matches visible content, sane robots rules, XML sitemaps, accessible components, machine-readable docs and, yes, things like llms.txt.
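To make that first layer concrete, here is a minimal structured data sketch for a hypothetical service page using schema.org markup. The organisation, prices and URLs are invented, and markup like this only helps when it matches what is visibly on the page.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Service",
  "name": "CRM implementation for small agencies",
  "provider": {
    "@type": "Organization",
    "name": "Example Agency",
    "url": "https://example.com"
  },
  "areaServed": "GB",
  "description": "Fixed-scope CRM setup, data migration and team training for agencies of 5 to 30 people.",
  "offers": {
    "@type": "Offer",
    "priceCurrency": "GBP",
    "price": "4500"
  }
}
</script>
```

None of that markup makes the page worth citing. It just removes ambiguity about what the page is.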
Second, citation worthiness.
This is about whether a system chooses you as a source.
That depends much more on usefulness, specificity, originality, trust, first-hand evidence, topical depth, comparisons, examples, reviews, proof and whether the page actually answers the question better than the alternatives.
A technically clean site can still be ignored if the content is generic.
A brilliant page can still fail if it is technically invisible.
You need both.
That is why one-trick AI SEO advice is usually nonsense. Add a file. Add schema. Rewrite a title. Generate hundreds of question pages. None of those tactics, on their own, solve the real problem.
The real job is to make the business easier to understand.
For humans.
For Google.
For LLMs.
For agents.
Generic content gets weaker as AI gets better
Google’s guidance also reinforces something that should already be obvious: generic content is weak.
That is bad news for a web full of cloned SaaS articles, polite agency wallpaper and local service pages where only the town name changes.
AI has made that sludge cheaper to produce. It has not made it more worth reading.
If AI Search pushes users toward more specific, multi-part queries, original signal matters more, not less.
That means first-hand experience, implementation detail, actual examples, screenshots, test results, pricing clarity, case studies, product specifics, customer evidence and opinions that come from doing the work rather than paraphrasing the internet.
Most businesses already have that knowledge.
They just do not publish it properly.
It sits in sales calls, support tickets, onboarding documents, delivery notes, Slack threads and the heads of experienced people.
The businesses that win will be the ones that turn that buried knowledge into clear public pages.
Not performative thought leadership.
Useful signal.
Agent-ready websites are becoming commercial infrastructure
Another under-discussed part of Google’s framing is agents.
Google talks about systems interacting with websites through rendering, the DOM, page structure and accessibility trees.
That matters because the future buyer journey is not guaranteed to be:
person searches, person clicks, person reads, person fills in a form.
It may increasingly look more like:
person asks an assistant, assistant researches options, assistant compares vendors, assistant checks pricing and proof, assistant recommends a shortlist, assistant helps complete the next step.
If your website is a glossy brochure glued on top of weak structure, that is a problem.
If the important information is buried behind broken tabs, vague copy, inaccessible components, strange scripts or over-designed theatre, that is a problem.
Agent-ready does not mean ugly.
It means the structure under the design is legible.
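A rough illustration, with invented names and prices: both fragments below show a human roughly the same pricing card, but only the second exposes structure that an accessibility tree or agent can actually read.

```html
<!-- Hard for machines: the meaning lives entirely in the visual design -->
<div class="card">
  <div class="big">Starter</div>
  <div>£99</div>
  <div class="btn">Go</div>
</div>

<!-- Easier for machines: the structure says what things are -->
<section aria-labelledby="starter-plan">
  <h3 id="starter-plan">Starter plan</h3>
  <p>£99 per month, billed annually.</p>
  <a href="/signup?plan=starter">Start a 14-day Starter trial</a>
</section>
```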
A good modern website now has to do two jobs at once.
Persuade the human.
Inform the machine.
That is no longer just SEO. It is business infrastructure.
What businesses should actually do now
The sensible response to Google’s guidance is not to chase hacks.
It is to audit the web version of the business properly.
Can Google access the important pages?
Can a machine understand what the business actually does?
Is the content specific enough to deserve retrieval and citation?
Does the site answer adjacent questions around the customer’s real intent?
Can an agent find pricing, policies, FAQs, contact routes, product details and usable forms?
Have you created clean, machine-friendly guidance outside normal marketing pages?
That last part is where llms.txt, markdown docs, structured resource pages and canonical knowledge hubs still matter.
Not because Google says they are required for AI Overviews.
Because Google is no longer the only machine reading the site.
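If you want a crude starting point on the first of those audit questions, crawl access, it can be scripted. The sketch below is a minimal Python example with obvious limits: it only checks robots.txt permissions, HTTP status and explicit noindex signals for a handful of URLs you choose yourself, which is nowhere near a full technical audit.

```python
import urllib.robotparser
import urllib.request

SITE = "https://example.com"          # hypothetical site, swap in your own
PAGES = ["/", "/pricing", "/faq"]     # the pages that actually matter commercially
USER_AGENT = "Googlebot"

# What does robots.txt allow for this user agent?
robots = urllib.robotparser.RobotFileParser()
robots.set_url(f"{SITE}/robots.txt")
robots.read()

for path in PAGES:
    url = SITE + path
    allowed = robots.can_fetch(USER_AGENT, url)

    # Note: urlopen raises on 4xx/5xx responses, which is itself a useful signal here.
    request = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(request) as response:
        status = response.status
        header_noindex = "noindex" in (response.headers.get("X-Robots-Tag") or "")
        body = response.read(200_000).decode("utf-8", errors="ignore").lower()
        # Crude check, not a real HTML parser: looks for a robots meta tag plus noindex
        meta_noindex = '<meta name="robots"' in body and "noindex" in body

    print(f"{url}: robots allowed={allowed}, status={status}, "
          f"noindex header={header_noindex}, possible meta noindex={meta_noindex}")
```

It will not tell you whether the content deserves to be retrieved. It only tells you whether a machine can get in the door.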
The real shift
The market will try to turn this moment into another trick.
It is not a trick.
It is a representation problem.
If the web version of your business is vague, then AI does not see the real business. It sees a lossy proxy.
That is the risk.
The opportunity is the same thing in reverse.
Businesses that make their services, products, proof, knowledge and actions clearer will be easier to retrieve, easier to trust and easier for both humans and machines to use.
So no, do not throw SEO away.
Also, do not pretend a single AI file solves this.
Core SEO still matters.
Useful original content still matters.
Structured data still matters.
llms.txt still matters, in the right lane.
Accessible, well-built pages still matter.
The winners will not be the people selling one weird hack a week.
They will be the businesses with more signal, less sludge and a web presence that actually reflects what they know.