Before you hire anyone (including us), here is the fastest way to get a real picture of whether AI recommends your brand. This is the shrunk-down version of the audit we run before every demo. It'll take you about 45 minutes.
1. Make your question list
Write down 15–25 real questions your buyers ask AI. Not keywords — the actual sentences they'd type. Mix early-stage (category-level) and late-stage (vendor-level) questions.
- "best AI CRM for Series B startups"
- "Zendesk vs Intercom for ecommerce"
- "how do I automate onboarding flows"
- "cheapest HIPAA-compliant hosting"
- "what's a good creatine brand without fillers"
If you're stuck, call three recent customers and ask them exactly what they typed in the week before finding you. It is almost always different from what your team assumes.
2. Ask all five
Run every question through ChatGPT, Claude, Perplexity, Gemini, and Copilot. Use a fresh session with no prior chat history, so earlier conversations about your brand don't skew the answers. For each answer, log three things: were you named, in what position, and who else was named.
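A lightweight way to keep that log is a CSV with one row per question-and-model pair. This is a sketch for illustration only; the field names and the `log_answer` helper are invented, not part of any tool.

```python
import csv

# Hypothetical schema: one row per question x model answer.
FIELDS = ["question", "model", "named", "position", "others_named"]

def log_answer(path, question, model, named, position, others_named):
    """Append one scan result. 'position' is None when the brand wasn't named."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # new file: write the header row first
            writer.writeheader()
        writer.writerow({
            "question": question,
            "model": model,
            "named": named,
            "position": position,
            "others_named": others_named,
        })
```

Append a row right after reading each answer, so step 3 scores from data instead of memory.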
3. Score honestly
Tally the results: divide total mentions by total possible mentions (questions × models). That's your baseline mention rate. Most B2B brands land between 6% and 18% on their first scan. Don't panic; that's the starting line.
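The arithmetic in step 3 is small enough to sketch. The log rows below are invented examples, not real scan results; the only assumption is that each row records whether you were named for one question-and-model pair.

```python
def mention_rate(log, n_questions, n_models):
    """Baseline mention rate: total mentions / (questions x models)."""
    mentions = sum(1 for row in log if row["named"])
    return mentions / (n_questions * n_models)

# Invented example: 2 questions x 5 models = 10 possible mentions.
log = [
    {"question": q, "model": m, "named": named}
    for (q, m, named) in [
        ("best AI CRM for Series B startups", "ChatGPT", True),
        ("best AI CRM for Series B startups", "Claude", False),
        ("Zendesk vs Intercom for ecommerce", "Perplexity", True),
    ]
]
# Pairs where you weren't named can be logged as False or omitted;
# the denominator is always questions x models, not len(log).
rate = mention_rate(log, n_questions=2, n_models=5)  # 2 / 10 = 0.2
```

A 20% rate on this toy data would already beat the 6–18% range most B2B brands see on a first scan.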
4. Note who's winning
Track the three brands most often named across your questions. Open the content and sources those brands have published. Look for patterns: are they on G2? Are they writing comparison pages? Are they showing up on Reddit? Whatever is consistent across their presence is what's feeding the models.
5. Pick the three easiest fixes
- If you're missing a comparison page for your top two competitors, write it this quarter.
- If you're not on G2 or Capterra (and you're B2B SaaS), claim and optimize those listings this month.
- If you don't have answer-shaped help-doc content for your top 10 buyer questions, ship that content in the next 60 days.
You've now done more for your AI visibility than 99% of brands in your category. If you'd like a deeper, scored version of this — with continuous weekly tracking and the content we'd actually ship — that's the free audit every demo includes.
A fair warning
The manual version of this is useful once. The operational version — tracked weekly, with content tied to specific questions — is what actually moves the numbers over time. Don't confuse an afternoon of prompts with a strategy.
See where your brand actually stands in AI answers.
We'll run a full custom audit before the call. You keep the report regardless.