The cornerstone guide

The AEO Guide — how to get named in AI answers.

Eight chapters. ~25 minutes. Everything we'd tell you on a whiteboard before quoting a project. Read once and you will have a better mental model than 95% of marketing teams.


This is the full guide we wish every brand had before AI answers started eating the top of the funnel, written in plain English.

We break it into 8 chapters. You can read them in order or jump around. If you only read one section, read Chapter 3.


Chapter 1. What "AEO" actually is

AEO stands for Answer Engine Optimization. It's the practice of making your brand show up in the answers that tools like ChatGPT, Claude, Perplexity, Gemini, and Copilot generate when people ask questions about your category.

It is adjacent to SEO, but not the same thing. SEO optimizes for Google's ranking. AEO optimizes for what AI cites. There's overlap — good content helps both — but the formats that win AI answers aren't identical to the ones that win Google's blue links.


You can run AEO as a dedicated function, or as a layer on top of your existing content team. Either way, the work is a combination of measurement, content, authority, and site setup. The rest of this guide walks through each.

Chapter 2. Why AI answers became the shortlist

Over 400 million people a week ask AI questions about products, services, and vendors. A large and rising share of buying research now happens inside these tools, before a buyer ever lands on a marketing site or talks to a sales team.

When AI answers a buyer's question, it typically names two or three brands and briefly explains why. That answer is functionally the shortlist. If you're one of the named brands, you enter the set. If you're not, you don't — and in most categories, buyers don't go looking past the three the AI gave them.

  • 400M+: weekly AI users globally, across the top 5 tools
  • 62%: mean concentration of a category's answers in its top 3 brands
  • 9–18 mo: time to category default, with serious investment
  • 5 surfaces: ChatGPT, Claude, Perplexity, Gemini, Copilot

The shortlist is also sticky. Once a brand is consistently named in a category, the models keep naming it — they lean on patterns in their training data and on the sources they've learned to trust. Getting in early compounds; getting in late gets expensive.

Chapter 3. How AI decides who to name

This is the chapter most people skip, and the one we'd bet you $1,000 no one on your marketing team could explain if you asked right now.

AI systems combine three things when they answer a category question: what they've been trained on (their baseline), what they can retrieve in real time (citations), and what they've been told to trust (source weighting). Specifics vary by model, but the pattern holds across all five.

3.1 Training-data baselines

The models have seen a lot of content about most categories. A brand that shows up repeatedly across the training data — Reddit threads, comparison pages, tech publications, GitHub issues, product-review sites — carries a baseline advantage. This is why companies with a decade of organic content can still be cited even when their current marketing is dormant.

3.2 Real-time retrieval

Most of these tools now retrieve live results to supplement their training data. They browse, they read, they cite. The pages they cite become the answers they give. This is the lever most under your direct control: if you publish the content AI wants to cite, you win the citation.

3.3 Source weighting

Every major AI has implicit preferences for sources. G2 and Capterra are weighted heavily in B2B SaaS. Reddit is weighted heavily in prosumer and consumer. Industry publications matter more in services. Academic and government sources are weighted in regulated categories. These weights shape which brands get pulled into answers.

Chapter 4. The content formats that get cited

If you study the content that AI most often cites across categories, you land on a predictable set of formats:

  1. Comparison pages — honest, table-based head-to-heads between named competitors.
  2. Long-form answer pieces — articles that answer a single specific buyer question in the first two paragraphs, then expand.
  3. Help-doc and knowledge-base articles — plain-language answers under a trusted subdomain.
  4. Third-party reviews and directory listings — G2, Capterra, TrustRadius, industry directories.
  5. Niche-platform presence — Reddit threads, Hacker News discussions, specialty forums.
  6. Podcast and video transcripts — especially for categories where buyers already watch or listen.
  7. Industry-publication placements — guest articles, quoted commentary, expert roundups.

Notice what's not on the list: generic thought-leadership blog posts, PR-voiced executive bylines, keyword-stuffed SEO pages. AI is increasingly blind to these, and Google is slowly going blind to them too.


Chapter 5. Authority — the trust layer

Models don't trust content on its own. They trust content from sources they've learned are reliable. Building your presence on those sources is what we call the authority layer.

5.1 Third-party directories

G2, Capterra, TrustRadius, GetApp, Product Hunt — and category-specific directories. Claim every relevant one. Fill every field. Get 20+ recent reviews in the first 90 days of an initiative. Publish comparison content inside the directory's own system where available.

5.2 Podcasts and industry publications

Getting your senior operators onto 2–4 relevant podcasts per quarter, and into 2–4 expert roundups or guest posts, materially shifts how AI describes your category over 6–9 months.

5.3 Niche platforms

Reddit is the obvious one, but don't stop there. For developer-focused categories, it's Hacker News and GitHub discussions. For legal and financial services, it's ABA, AICPA, and equivalent association publications. For healthcare, it's peer-reviewed publications and professional associations. Find yours.

Chapter 6. Site setup — the boring but essential layer

The technical layer rarely moves numbers on its own, but it amplifies everything above it. A beautifully written comparison page with broken schema or slow load times is an opportunity lost.

  • Schema.org markup: Article, FAQPage, Product, Organization, WebSite, BreadcrumbList on every page where applicable.
  • Robots.txt that explicitly allows GPTBot, ClaudeBot, PerplexityBot, and Google's AI crawler.
  • Sitemap that lists every indexable page, updated on every publish.
  • Canonical URLs set correctly: no duplicate canonicals, no critical content hidden behind JavaScript.
  • Consistent Open Graph, Twitter, and standard meta tags.
  • Fast load times. AI crawlers skip pages that time out.
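
As a concrete illustration of the robots.txt point above, here is a minimal sketch that explicitly allows the major AI crawlers. The user-agent tokens shown are the ones the vendors have published, but they change; verify each against the vendor's current crawler documentation before shipping, and swap in your own sitemap URL.

```
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

Sitemap: https://example.com/sitemap.xml
```

Note that an empty robots.txt also allows everything; the value of listing these bots explicitly is that a later blanket `Disallow` rule for generic scrapers won't silently lock out the crawlers you want.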

Chapter 7. Measurement — what to actually track

The shortest version: track how often AI names you. The slightly longer version:

  1. Build a tracked question set — 100 to 250 real questions your buyers type into AI.
  2. Scan all five major AIs weekly.
  3. Log who's named, in what position, alongside whom.
  4. Measure mention rate (the percentage of answers that name you), share of AI answers (your mentions as a share of all brand mentions across the question set), and question-level trends (which specific questions moved).
  5. Tie content to the specific questions it's designed to influence, so when the number moves, you know what moved it.
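
The two core metrics in step 4 reduce to simple arithmetic over your scan log. A minimal sketch in Python, assuming you record each scanned answer as the ordered list of brands it names (the `Scan` structure and field names here are illustrative, not a real schema):

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Scan:
    question: str           # the tracked question asked
    tool: str               # e.g. "chatgpt", "perplexity"
    brands: list = field(default_factory=list)  # brands named, in order

def mention_rate(scans, brand):
    """Percentage of scanned answers that name the brand at all."""
    if not scans:
        return 0.0
    named = sum(1 for s in scans if brand in s.brands)
    return 100.0 * named / len(scans)

def share_of_answers(scans, brand):
    """The brand's mentions as a share of all brand mentions logged."""
    mentions = Counter(b for s in scans for b in s.brands)
    total = sum(mentions.values())
    return 100.0 * mentions[brand] / total if total else 0.0

scans = [
    Scan("best crm for smb", "chatgpt", ["Acme", "Beta", "Gamma"]),
    Scan("best crm for smb", "perplexity", ["Beta", "Acme"]),
    Scan("crm with email sync", "claude", ["Beta", "Gamma"]),
    Scan("crm pricing comparison", "gemini", ["Delta"]),
]
print(mention_rate(scans, "Acme"))      # named in 2 of 4 answers
print(share_of_answers(scans, "Acme"))  # 2 of 8 total mentions
```

The useful property of keeping both numbers: mention rate tells you how often you make the shortlist at all, while share of answers tells you how crowded the shortlist is when you do.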

Good tracking should also keep you honest. Flat months get the same transparency as good months. If the tools you're using cherry-pick numbers or only show you the wins, they're not doing the job.

Chapter 8. A 12-month plan you can actually run

Here is a 12-month plan for a typical B2B brand running AEO seriously for the first time. Scale up or down based on your category maturity and content velocity.

Months 1–3: foundations

  • Build the tracked question set. Run the baseline scan.
  • Claim and rebuild every relevant third-party listing.
  • Ship schema, sitemap, and technical-AEO fixes.
  • Publish the first 6–12 pieces of content: 2 comparison pages, 3 answer-first long-forms, 5 help-doc articles.
  • Get one founder or senior operator onto 2 podcasts.

Months 4–6: cadence

  • Two comparison pages and four long-form answer pieces per month.
  • Weekly help-doc publishing.
  • Reddit presence established under real names.
  • Four more podcast appearances.
  • Mid-engagement strategy review. Kill anything that isn't moving numbers.

Months 7–12: compounding

  • Content cadence continues. Budget shifts toward highest-yielding formats.
  • Industry-publication and guest-author placements open up.
  • Share of AI answers competitive with category leaders.
  • Quarterly leadership review. Plan year two.

If you've made it to the bottom of this guide: you now have a better picture of AEO than almost anyone in your category. If you'd like the version of this tailored specifically to your brand — with the actual numbers for where you stand today, who's beating you, and the first 90-day plan we'd ship — book a demo. We run the audit before every call. You keep the report.


See how AI talks about your brand today.

Book a 30-minute call. We'll run the report before we meet and walk you through it on the call. You keep the full report. No strings attached.

Custom audit included · No pitch decks · No pricing games