This entry is part 5 of 5 in the series A Practical Guide to AI SEO

You’ve built the right pages and earned the right citations. Now it’s time to make your site machine-readable and measurable. That’s what Tactic 3 is about: turning your content into clear signals for search engines and AI platforms, then tracking how often those systems actually use you in their answers.

 

Tactic 3: Technical SEO

 

Schema Markup

AI search goes beyond keyword matching to understanding intent and context. Schema markup helps bots understand not just what your content says but what it means. Without it, your content is plain text to search engines and LLMs; with it, they can better interpret what you’re talking about.

Why is it important?

  • Helps search engines and AI understand your content.
  • Enables rich results that dominate SERPs, which in turn helps you pop up in AI answers. Some SERP presence is still needed for AI to pick you up (AI mostly crawls the first few pages of search engines), so visibility matters—even if classic rankings are less central than before.

(Schema markup is simply a piece of code you add to your site to tell crawlers what your content is about. The schema.org project maintains the vocabulary.)

High-impact schema types for articles and content:

 

  • Article / BlogPosting (blogs, guides, news; add headline, author credentials, images, publication date)
  • FAQ (appears directly as Q&A; useful for overviews and AI summaries)
  • HowTo (stepwise content)
  • Organization (homepage; helps with brand recognition)
  • LocalBusiness (for local context)
  • Product (price, availability, brand visible for physical products)
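
As a concrete illustration, here is a minimal BlogPosting block as it would be pasted into a page’s head. All values below (URL, author, dates) are placeholders to replace with your own:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "A Practical Guide to AI SEO: Technical SEO",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2025-01-15",
  "dateModified": "2025-02-01",
  "image": "https://example.com/images/cover.jpg"
}
</script>
```

The same pattern applies to the other types: swap the @type and add the properties relevant to that type (questions and answers for FAQPage, steps for HowTo, and so on).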

Recommended types by business model:

 

  • B2B SaaS — SoftwareApplication, Organization, FAQPage, Article/BlogPosting, HowTo (can include AggregateRating to qualify for rich snippets)
  • B2B Services — Organization, Service, Person, Review, Event
  • B2B Products — Organization, Product, Review, LocalBusiness, OpeningHours, GeoCoordinates

Quick implementation walkthrough:

 

  • You can ask ChatGPT to review your website and produce complete schema markup to implement. It will return the code directly; JSON-LD is the recommended format. You may need to correct details in that code.
  • Add the code in your CMS under custom code (paste into the header).
  • Validate with Google’s Schema Markup Validator (for errors) and Rich Results Test (for rich-result eligibility). If non-critical issues appear, ask ChatGPT to fix them or edit manually.
  • You can also identify schema types manually: first categorize pages by content type, then develop templates per category. Start with high-traffic/high-conversion (priority) pages. Refer to Google Search Central docs (e.g., Article schema) to validate.
  • Track results via search visibility in both search engines and AI. Schema alone won’t get you ranked—it works in combination with the previous two tactics—so keep content and citations strong.
  • Keep schema updated as content changes and stay current with schema.org vocabulary updates.
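
Before pasting generated markup into your header, it’s worth a quick sanity check that it parses and carries the fields validators look for. A minimal sketch in Python — the required-field lists here are illustrative, not Google’s official requirements:

```python
import json

# Illustrative required fields per type; check Google Search Central
# docs for the authoritative list per schema type.
REQUIRED = {
    "BlogPosting": {"headline", "datePublished", "author"},
    "FAQPage": {"mainEntity"},
}

def check_jsonld(raw: str) -> list[str]:
    """Return a list of problems found in a JSON-LD snippet."""
    problems = []
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    if data.get("@context") != "https://schema.org":
        problems.append("missing or unexpected @context")
    schema_type = data.get("@type")
    for field in REQUIRED.get(schema_type, set()):
        if field not in data:
            problems.append(f"{schema_type} is missing '{field}'")
    return problems

snippet = '{"@context": "https://schema.org", "@type": "BlogPosting", "headline": "AI SEO"}'
print(check_jsonld(snippet))  # flags the missing datePublished and author
```

This catches copy-paste breakage before you reach for the Schema Markup Validator; it does not replace it.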

Use schema to highlight competitive advantages

The 5-Layer Schema Stack

You can use schema to reinforce your unique selling propositions, or audit competitor schema to spot what they’re missing. For reliable visibility gains, apply a simple hierarchy:

  • Foundation Layer: Core organizational and website-level schema applied sitewide, establishing your brand identity and site structure for crawlers.
  • Structural Layer: Navigation-focused markup such as breadcrumbs and sitelink search boxes that help search engines understand page hierarchy and relationships.
  • Page-Type Layer: Schema tailored to each content type — whether article, product, service, or video — to define its purpose and context.
  • Enhancement Layer: Supporting attributes like reviews, ratings, and offers that enrich your listings with credibility and engagement signals.
  • Specialisation Layer: Advanced, niche-specific schema elements that spotlight unique details or domain expertise, giving your content an extra edge in visibility.

NOTE: Schema is not a direct AI-ranking lever, but it helps you stand out on the SERP, improves CTR and engagement, and builds the visibility that AI can pick up. Avoid overusing generic markup; focus on honest, well-aligned schema.

Cross-Linking

 

Traditional internal links help people move around your site and help crawlers find pages. Cross-linking for AI answers goes further: it deliberately connects related ideas so tools like ChatGPT, Claude, and Gemini can follow your map of topics, not just a list of URLs.

Types of links that count:

  • Internal links (between your own pages)
  • External links (to trusted, relevant sources and earned backlinks)
  • Reciprocal links (use rarely, only when they truly add value)
  • Breadcrumbs (in the UI and as structured data to show page hierarchy)
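
For the breadcrumbs item, the structured-data half is a BreadcrumbList, added in the same script-tag style as any other JSON-LD. A minimal example (all URLs and names are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://example.com/" },
    { "@type": "ListItem", "position": 2, "name": "Blog", "item": "https://example.com/blog/" },
    { "@type": "ListItem", "position": 3, "name": "AI SEO Guide" }
  ]
}
```

The last item can omit its URL because it represents the current page.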

How it works:


Think of cross-linking as building a simple framework for your ideas. When you connect glossary terms, explainer pages, and focused sub-sections, you show how topics relate — where one builds on another, where they branch, and where they connect. AI systems use these signals to pick the right snippets, handle follow-up questions, and write clearer summaries.

Why is it important for AI retrieval?

  • Stronger flow. Pages that link related sections make it easier for models to follow your reasoning from start to finish.
  • More presence in one session. Linked parts of a page (or site) are more likely to be cited together, leading to more multi-sentence and multi-term mentions.
  • Better handling of new terms. When a model sees an unfamiliar word, links to short definitions and nearby topics help it understand faster.
  • Clearer topic map. Consistent links show what matters most and how ideas connect—useful for the way AI organizes information.

Cross-linking turns your pages into a navigable map, which signals that your content is organised, rich, and worth using. Done consistently, it’s a key AI-optimisation tactic that improves visibility, clarity, and answer quality: not just breadcrumbs for people, but trail markers for bots.

Additional Technical SEO Strategies

 

  1. Write clear, concise, and descriptive meta descriptions that align with the content. They are not a ranking factor, but they do influence how AI summarizes and previews your content.
  2. Clear heading hierarchy (h1 > h2 > h3 and p tags) and structural clarity help both search engines and LLMs segment and interpret your content.
  3. Clean code, sitemaps, no crawl barriers, and fast load times create a solid baseline.

Robots.txt vs llms.txt

Both files are “signposts” for bots, but they speak to different audiences. robots.txt talks to search engines about what they can crawl. llms.txt (a newer idea) talks to AI companies about how your content can be used, especially for model training.

robots.txt

User-agent: *
Disallow: /private/
Allow: /public/

llms.txt

Allow: /blog/
Disallow: /premium-content/
Contact: admin@example.com

Meaning

User-agent: which bot the rule applies to (e.g., Googlebot).
Disallow: paths you don’t want crawled.
Allow: paths you’re fine exposing (even inside blocked folders).

Practical suggestion for B2B teams:

There has been no official confirmation that AI crawlers use or honour llms.txt. robots.txt alone is sufficient to tell AI crawlers what they are and aren’t allowed to crawl.

In most cases, keep things open so both search engines and AI tools can discover and understand your content. That helps your brand show up more often. By default, doing nothing means you’re allowing all content to be crawled. 

But if you must protect certain pages, you can still allow normal crawling while disallowing training bots/crawlers.
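
As a sketch, that split can look like the robots.txt below. GPTBot (OpenAI), Google-Extended (Google’s model-training control), and CCBot (Common Crawl) are publicly documented user agents, but verify the current names in each vendor’s documentation before relying on them:

```txt
# Allow normal search crawling
User-agent: Googlebot
Allow: /

# Keep premium content out of model training
User-agent: GPTBot
Disallow: /premium-content/

User-agent: Google-Extended
Disallow: /premium-content/

# Common Crawl feeds many training datasets
User-agent: CCBot
Disallow: /
```

Note the trade-off: blocking a crawler that also powers an AI tool’s retrieval can reduce your visibility in that tool’s answers.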

AISEO Performance Measurement

 

Use an AISEO/AEO/GEO tracker to monitor how often you show up and your average position across AI surfaces. One question can be asked many ways, and different AI tools surface different answers, often with little consistency from run to run. Track:

  • Appearance rate: How often your brand/page is mentioned or cited across models for a query set.
  • Average slot position: The typical placement of your brand in the answer (main body, first sidebar citation, footnotes). LLMs don’t always “rank,” so think in slots.
  • Share of citations: Your % of all citations shown for a query (your citations ÷ total citations).
  • Citation role: Whether you appear as a primary source (central to the answer) or supporting (sidebar/footnote).
  • Winning page type: Which of your formats tend to get cited (listicles, comparisons, use-case pages, product/pricing, docs/help).
  • Follow-up coverage: Do you still get cited across the next 3–5 follow-up questions (pricing, integrations, setup, alternatives)?
  • Persistence across regenerations: Do you still appear after the model is asked to regenerate the answer 2–3 times?
  • Contextual win map: Where you win under different personas/contexts (industry, company size, region, budget).
  • Personalization sensitivity: Whether logged-in/history-influenced sessions change your visibility.
  • Robustness under negation: Do you still appear for prompts like “best alternatives to <YourBrand>”?
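
If you log each test run as a simple record, the first three metrics fall out of a few lines of Python. The field names below are our own invention for the sketch, not a standard:

```python
from statistics import mean

# Each record: one query asked to one model. Field names are illustrative.
runs = [
    {"query": "best crm for saas", "model": "chatgpt", "mentioned": True,  "slot": 2,    "total_citations": 6, "our_citations": 1},
    {"query": "best crm for saas", "model": "claude",  "mentioned": False, "slot": None, "total_citations": 5, "our_citations": 0},
    {"query": "crm pricing",       "model": "chatgpt", "mentioned": True,  "slot": 1,    "total_citations": 4, "our_citations": 2},
]

# Appearance rate: share of runs in which the brand was mentioned at all.
appearance_rate = sum(r["mentioned"] for r in runs) / len(runs)

# Average slot position, counting only runs where we appeared.
avg_slot = mean(r["slot"] for r in runs if r["slot"] is not None)

# Share of citations: our citations over all citations shown.
share_of_citations = sum(r["our_citations"] for r in runs) / sum(r["total_citations"] for r in runs)

print(f"appearance rate: {appearance_rate:.0%}")        # 67%
print(f"average slot: {avg_slot:.1f}")                  # 1.5
print(f"share of citations: {share_of_citations:.0%}")  # 20%
```

The same records extend naturally to persistence (group by query and count regenerations) and citation role (add a role field).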

How to measure it manually

 

1) Build a fixed test panel

Create 25-50 core queries plus 5-10 paraphrases each. Test them across ChatGPT, Claude, Perplexity, Gemini.

2) Run controlled trials

For each query + model:

  • Capture coverage (mentioned or not), slot position, citation role, all cited domains.
  • Hit Regenerate 2-3 times and record whether you still appear (persistence).

3) Follow-up chains

For each core query, ask 3-5 common follow-ups (pricing, alternatives, integration with X, implementation). Track whether you’re still cited.

4) Persona/context variants

Add short context to the prompt (e.g., “for a mid-market SaaS in the US with a $50k budget”). Record where you win/lose; this becomes your contextual win map.

5) Clean vs. warm sessions

Test once in a clean browser (no history) and once after doing brief research similar to your ICP. Note changes (personalisation sensitivity).

6) Disclosure prompts

Ask: “List the sources you used and why.” Save the wording; this reveals selection cues (freshness, authority, clarity, schema).

7) Negation tests

Run “best alternatives to <YourBrand>” or “reasons not to choose <YourBrand>.” If you still show, that’s strong robustness.

8) Screenshot ledger

Screenshot each win (date/model/query) for proof and trend review.

9) Downstream checks

Watch for branded organic traffic, referrals from AI sidebars, brand-term search volume, and sales notes like ‘found via ChatGPT’. Imperfect, but directional.

10) Change-impact diary

When you ship a content/schema update, re-test at +3 days, +10 days, +30 days to see time-to-adoption.
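
The +3/+10/+30 cadence is easy to automate with a small helper that turns a ship date into retest checkpoints (the day offsets are the ones suggested above):

```python
from datetime import date, timedelta

def retest_dates(shipped: date) -> list[date]:
    """Retest checkpoints after a content/schema update: +3, +10, +30 days."""
    return [shipped + timedelta(days=d) for d in (3, 10, 30)]

for d in retest_dates(date(2025, 1, 1)):
    print(d.isoformat())  # prints 2025-01-04, 2025-01-11, 2025-01-31
```

Feed these dates into your CHANGE_LOG sheet or calendar so retests are scheduled the moment a change ships.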

Here’s a ready-to-use Google Sheet schema (columns + formulas) to run this manually → AISEO_Performance_Tracker.xlsx

Execution checklist

Weekly

  • Add 3–5 new queries to QUERIES (keep total 25–50 core).
  • Run each query on 4 models; record 3 regenerations in RUNS.
  • Capture screenshots for wins; paste links in RUNS and SCREENSHOTS_INDEX.
  • Update RESULTS_SUMMARY pivots; flag drops/gains.
  • Re-run negation tests for top 10 queries.
  • Log any content/schema changes in CHANGE_LOG; schedule retests.

Bi-weekly

  • Do persona/context variants for top 15 queries; update Contextual win map fields.
  • Audit Winning page type in FORMATS; plan 1–2 net-new assets that mirror winning formats.

Monthly

  • Compare Appearance rate, Average slot, Share of citations, Persistence month-over-month.
  • Review Downstream signals for directional lift.
  • Compile a short internal memo with screenshot proofs and 3 prioritized fixes.

After every content/schema update

  • Re-test affected queries at +3d, +10d, +30d; note time-to-adoption and selection cues from disclosure prompts.

Notes on Google AI Overviews

You might be wondering how to show up in Google AI overviews. Google has officially confirmed that its AI Search does not require specialised optimisation. Standard SEO is sufficient for both AI Overviews and AI Mode. 

Google’s focus remains on core SEO fundamentals:

  • High-quality content
  • E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness)
  • Structured data and on-page optimization
  • Technical SEO
  • User experience

End of chapter 5

 

In this chapter, we discussed the 3rd Tactic: Technical SEO and how to make your site legible to machines and measurable on AI surfaces. Ship clean schema, build semantic cross-links, make thoughtful choices about robots.txt/llms.txt, and then track answers, not just keywords. Combined with Tactic 1 (high-quality content) and Tactic 2 (citations), this creates the momentum AI platforms look for when selecting and citing sources.


© 2025 Growth9 | All Rights Reserved