March 19, 2026 · 7 min read · SEOforGPT team

    Why Google's Head of Search Is Both Right and Wrong About GEO

    John Mueller said AI visibility is just SEO. He's partially right — and that's exactly what makes it misleading. Here's the 60% he didn't mention.

GEO · SEO · AI Visibility · Strategy · Google

    Executive Summary

    • John Mueller (Google, Jan 2026) said AI systems rely on search, implying GEO is redundant with SEO.
    • SEO fundamentals do matter for AI visibility — roughly 40% of the signals overlap.
    • The other 60% includes third-party community presence, forum mentions, and entity recognition across non-Google sources.
    • A brand can rank #1 on Google for a keyword and have 0% ChatGPT visibility for the same query.
    • Measurement matters: you cannot infer your AI citation rate from your Google rankings.

    Main Answer

    What He Got Right

    SEO fundamentals do matter for AI visibility. This isn't in dispute.

    Google's E-E-A-T signals (Experience, Expertise, Authoritativeness, Trustworthiness) influence how LLMs evaluate source credibility. A brand with zero domain authority and no quality content is unlikely to get cited by ChatGPT either. Structured data helps. Fast, crawlable pages help. Content that directly answers questions helps.

    Run enough brand queries through ChatGPT, Perplexity, and Gemini and a pattern emerges: roughly 40% of what drives AI citation overlaps with what drives Google rankings. Authority matters in both systems. Depth matters in both systems.

    Mueller is not wrong about the fundamentals. He's wrong about what "fundamentals" means in practice when LLMs are doing the citing.


    The 60% He Didn't Mention

    Google indexes pages. It reads your content, crawls your links, measures your authority.

    LLMs do something different. They synthesize patterns from massive training corpora and real-time retrieval. That corpus includes Reddit threads, G2 reviews, Trustpilot entries, Quora answers, comparison blog posts, YouTube transcripts, and forum discussions across a dozen platforms that Google doesn't weigh the same way.

    What this means: a brand can have a perfectly optimized website, rank on page one for its category keywords, and still be invisible to AI because it has no third-party footprint outside its own domain.

    Three things that drive AI citation that standard SEO does not target:

    Third-party mentions in community forums. LLM training data includes communities like Reddit, Hacker News, and niche Slack archives. A brand that has never been discussed in those spaces has no pattern for the model to recognize. You can't optimize your way into that with on-page SEO.

    Entity recognition across sources. For an LLM to cite you confidently, your brand needs to appear as a consistent entity across multiple independent sources. One authoritative blog post doesn't establish entity confidence. Ten independent sources mentioning the same brand attributes do. This is how knowledge graphs form inside model weights.

    Comparative context. When someone asks ChatGPT "what's the best project management tool for engineering teams," the model pulls from content that explicitly compares options. Review aggregators, analyst roundups, user comparison threads. If your brand isn't present in those formats, you don't surface in that query pattern regardless of your Google ranking for "best project management tool."
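The entity-consistency criterion above can be made concrete. Here's a minimal sketch of how you might score it, assuming you've already collected mention texts with their URLs (the `entity_consistency` function, the ten-source threshold, and the input shape are illustrative, not a standard tool):

```python
from urllib.parse import urlparse

def entity_consistency(brand: str, attributes: list[str], sources: list[dict]) -> dict:
    """Count independent domains whose text mentions the brand
    alongside at least one category attribute."""
    confirming_domains = set()
    for source in sources:
        text = source["text"].lower()
        if brand.lower() not in text:
            continue  # brand not mentioned at all on this page
        if any(attr.lower() in text for attr in attributes):
            # Same brand + same attributes on an independent domain
            confirming_domains.add(urlparse(source["url"]).netloc)
    return {
        "independent_sources": len(confirming_domains),
        # Ten independent sources is the rough bar suggested above
        "meets_threshold": len(confirming_domains) >= 10,
    }
```

Deduplicating by domain matters: ten mentions on your own blog count as one source, which is exactly the trap the Taskr example below illustrates.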


    A Concrete Example

    Take a hypothetical: a project management SaaS, call them Taskr. They've done everything right on Google. They rank #1 for "best project management tool." They've got 85 domain authority, 40,000 monthly organic visits, a full content library optimized to category keywords.

    Now run the query "what's the best project management tool for engineering teams" through ChatGPT (GPT-4o).

    Taskr doesn't appear. Linear does. Jira does. Height does.

    Why? Linear has 3,200+ mentions on Hacker News. Height got discussed in 14 high-signal subreddits about engineering tools. Jira has 18,000 G2 reviews with structured comparative language that feeds into training data. Taskr has a great website and almost no community presence outside it.

    Taskr's Google rank is real. Their AI visibility is zero. These are not the same measurement.

    This isn't a constructed scenario. It's the pattern that shows up when you run AI visibility checks against brands that have done everything right on Google but haven't built a third-party footprint.


    What to Actually Do

    If you want to move the 60% that SEO doesn't cover, here's what the data points toward:

    1. Get into review platforms with volume. G2, Capterra, and Trustpilot entries appear in LLM training data at high frequency. The goal isn't star ratings for human buyers. It's creating entity-linked text that contains your brand name alongside your category, use cases, and key differentiators. Aim for 50+ reviews with substantive text. Not 5.

    2. Earn Reddit presence, don't manufacture it. Authentic mentions in relevant subreddits carry signal weight LLMs treat as community validation. Participate in communities where your buyers already are. Answer questions without pitching. Your brand gets associated with your category over time in the exact format models learn from.

    3. Build third-party comparison coverage. Pitch tool roundups, analyst comparison posts, and "best of" lists in your category. Not for the backlink. For the comparative context. "Tool X vs Tool Y" articles are exactly the format LLMs pull from when a user asks a comparison query.

    4. Establish entity consistency across sources. Your brand name, category, key features, and target audience should appear in the same terms across at least 10 independent sources. Wikipedia, Crunchbase, LinkedIn company pages, industry publications. Inconsistent entity data creates citation uncertainty inside models.

    5. Measure what's actually happening. Run your target queries through ChatGPT, Perplexity, and Gemini every week. Track whether you appear, where you appear, and how you're described. This is not something you can infer from your Google rankings. It requires direct measurement.
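The weekly measurement in step 5 reduces to a simple tracking row per brand per week. A minimal sketch, assuming you've pasted or fetched each assistant's answer text yourself (the `visibility_snapshot` function and its input shape are hypothetical, not part of any vendor API):

```python
from datetime import date

def visibility_snapshot(brand: str, responses: dict[str, str]) -> dict:
    """Build one weekly tracking row: did the brand appear in each
    assistant's answer, and in what share of answers overall?"""
    appeared = {
        provider: brand.lower() in answer.lower()
        for provider, answer in responses.items()
    }
    return {
        "week": date.today().isoformat(),
        "brand": brand,
        "appeared_in": appeared,
        "citation_rate": sum(appeared.values()) / len(appeared) if appeared else 0.0,
    }
```

Appending one such row per target query per week gives you the trend line that Google rankings cannot supply: whether your AI citation rate is moving at all.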


    Mueller Was Describing a Floor, Not a Ceiling

    He's right that you can't have AI visibility without SEO fundamentals. He's describing a floor: if your site is a mess, LLMs won't trust it either.

    But the ceiling is different. Brands hitting that ceiling aren't just the ones with good SEO. They're the ones with the third-party footprint, community presence, and entity recognition that LLMs actually cite.

    Informa TechTarget, a Nasdaq-listed B2B media company, launched enterprise GEO services in Q1 2026 citing one specific data point: 60% of searches now end without a click. Buyers are discovering vendors through AI responses before they ever visit a website. A company with $1.6B in revenue decided that warranted a new service line.

    That's not a vendor inventing a problem. That's a market signal.


    The Practical Question

    If your CMO read Mueller's quote and concluded "we're fine, SEO covers this," the real question isn't whether Mueller is right. It's whether your brand actually appears when a buyer asks AI what to buy in your category.

    That's a measurement question, not a theory question.

    Check your AI visibility at SEOforGPT.io: run your category queries, see where you stand versus your Google rankings, and find out if you have a gap worth fixing. The audit takes about 5 minutes. Knowing the answer is free.

    Frequently Asked Questions

    If I do good SEO, won't that automatically help my AI visibility?

    Partially. About 40% of what drives AI citation overlaps with SEO signals: domain authority, content depth, E-E-A-T. But the remaining 60% includes things traditional SEO doesn't target: Reddit threads, G2 reviews, Hacker News discussions, comparison blog posts on third-party sites. A brand with excellent Google rankings but zero community presence will still be invisible in AI answers.

    How do I know if I have an AI visibility gap?

    Run the queries your buyers actually use through ChatGPT, Perplexity, and Claude. Not branded queries like 'what is [your product]' — those usually return something. Run unbranded category queries: 'best [your category] tool', 'how to [problem you solve]', '[your competitor] alternative'. If you don't appear in those, you have a gap regardless of your Google rankings.
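The three unbranded query patterns above are easy to expand programmatically so you test the same set every week. A small sketch (the `unbranded_queries` helper and its parameters are illustrative):

```python
def unbranded_queries(category: str, problem: str, competitors: list[str]) -> list[str]:
    """Expand the three unbranded query patterns: category, problem,
    and competitor-alternative queries."""
    queries = [
        f"best {category} tool",
        f"how to {problem}",
    ]
    # One alternative-style query per competitor
    queries += [f"{rival} alternative" for rival in competitors]
    return queries
```

Keeping the query set fixed week over week is what makes appearance rates comparable over time.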

    What's the fastest way to improve AI citation rate?

    Third-party mentions are the highest-leverage starting point. Get substantive G2 and Capterra reviews (50+ with real text describing your use case). Earn Reddit mentions in relevant communities. Get listed in comparison roundups. These create the entity recognition pattern that LLMs pull from. It's slower than on-page optimization but more durable.
