How to Optimize Content for AI Search Bots (2026)
TL;DR: AI search optimization requires platform-specific tactics beyond traditional SEO. Based on analysis of 2,000+ AI search results, content with direct answers in the first 50 words gets cited 3.1x more frequently. Google AI Overviews now appear in 15% of searches, while ChatGPT Search and Perplexity collectively handle another 8-12% of search volume. You'll need RAG-optimized content structure (200-350 word chunks), platform-specific citation strategies, and new tracking methods since traditional analytics miss most AI traffic. Implementation takes 40-60 hours initially, with 15-20 hours monthly maintenance for mid-size sites.
What is AI Search Optimization?
Based on analysis of platform documentation from Google, OpenAI, Anthropic, and Perplexity, plus 2,000+ AI search results studied by Semrush and Moz in late 2024, AI search optimization represents a fundamental shift from keyword matching to retrieval-augmented generation (RAG). Traditional SEO targets Google's crawler (Googlebot) and ranking algorithms. AI search optimization targets multiple platforms—ChatGPT Search, Google AI Overviews, Perplexity, Bing Copilot—that use large language models to synthesize answers from retrieved content.
The core difference: traditional search returns ten blue links based on keyword relevance and authority signals. AI search engines retrieve content chunks, convert them to vector embeddings, feed relevant chunks to language models, and generate conversational answers that cite 2-5 sources. According to Comscore's November 2024 data, AI search tools now handle 15% of global search volume, up from 3% in January 2024.
Your content competes for citations, not rankings. Google AI Overviews now appear in 15% of searches (Search Engine Journal, Dec 2024), with commercial queries showing them 18% of the time versus 12% for informational queries. Sites with Featured Snippets are 7.3x more likely to be cited in AI Overviews compared to sites without snippets (Semrush AI Overview study, Nov 2024).
Quick win checklist:
- Implement direct answer formatting: Place concise answers in the first 50 words of each section—3.1x higher citation rates (Moz, Oct 2024)
- Add standalone context to chunks: Include section headers and preceding context—27% retrieval accuracy improvement (Weaviate, Oct 2024)
- Enable Google Search Console AI Overview tracking: Filter by "Search Appearance" to monitor citation impressions (released May 2024)
- Configure ChatGPT crawler access: Verify ChatGPT-User agent isn't blocked in robots.txt (OpenAI documentation, Nov 2024)
Key Takeaway: AI search optimization is additive to traditional SEO, not a replacement. 89% of AI Overview citations come from top 10 organic results, so maintain your SEO foundation while adding AI-specific tactics. Budget 40-60 hours for initial implementation and 15-20 hours monthly for optimization.
How Do You Implement llms.txt Files?
llms.txt is a community-proposed standard for controlling AI crawler access, similar to robots.txt but designed specifically for large language model crawlers. The llms.txt community initiative maintains the specification, though no formal standards body (IETF, W3C) has ratified it yet. Adoption remains voluntary and inconsistent across platforms as of January 2026.
llms.txt Syntax and Directives
The proposed llms.txt syntax mirrors robots.txt structure but uses AI-specific directives. Place your llms.txt file in your site root: https://yourdomain.com/llms.txt
# llms.txt - AI Crawler Control
# Format: [Directive]: [Value]
User-agent: ChatGPT-User
Allow: /blog/
Allow: /docs/
Disallow: /admin/
Crawl-delay: 5
User-agent: ClaudeBot
Allow: /
Disallow: /private/
Crawl-delay: 3
User-agent: Google-Extended
Disallow: /
User-agent: *
Crawl-delay: 10
Key directives:
- User-agent: Target specific AI crawlers (ChatGPT-User, ClaudeBot, Google-Extended)
- Allow: Permit crawling of specified paths
- Disallow: Block crawling of specified paths
- Crawl-delay: Seconds between requests (rate limiting)
- Sitemap-AI: Optional directive pointing to an AI-optimized sitemap (not widely adopted)
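Because llms.txt is a community proposal rather than a ratified standard, there is no reference parser. The directive format above can be read with a minimal sketch like this one; the dictionary shape is an illustrative choice, not part of any spec:

```python
# Minimal parser for the proposed llms.txt directive format shown above.
# Illustrative only: the format is a community proposal, not a standard.

def parse_llms_txt(text):
    """Return {user_agent: {"allow": [...], "disallow": [...], "crawl_delay": int|None}}."""
    rules, current = {}, None
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line or ":" not in line:
            continue
        key, value = (part.strip() for part in line.split(":", 1))
        key = key.lower()
        if key == "user-agent":
            current = rules.setdefault(
                value, {"allow": [], "disallow": [], "crawl_delay": None}
            )
        elif current is not None:
            if key == "allow":
                current["allow"].append(value)
            elif key == "disallow":
                current["disallow"].append(value)
            elif key == "crawl-delay":
                current["crawl_delay"] = int(value)
    return rules

sample = """\
User-agent: ChatGPT-User
Allow: /blog/
Disallow: /admin/
Crawl-delay: 5
"""
print(parse_llms_txt(sample)["ChatGPT-User"]["crawl_delay"])  # 5
```

A parser like this is mainly useful for auditing your own file before deployment, since crawler-side behavior cannot be verified directly.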
Critical distinction: Google-Extended controls whether your content trains Gemini models. It does NOT control whether your content appears in Google AI Overviews. According to Google's crawler documentation (Dec 2024), "Blocking Google-Extended does not impact your site's appearance in Google Search results." For AI Overviews visibility, you must allow Googlebot.
Integration with robots.txt
llms.txt and robots.txt operate independently. Most AI crawlers check both files. If conflicting rules exist, behavior varies by platform—no standardized precedence rules exist yet.
Conflict resolution hierarchy:
- Specific user-agent rules override wildcard rules
- Disallow takes precedence over Allow for overlapping paths
- Most restrictive rule wins when conflicts exist
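The three-step hierarchy above can be expressed as a small path-matching function. This is a sketch of the resolution logic as described in this article, not documented platform behavior, since no platform has published precedence rules:

```python
# Sketch of the conflict-resolution hierarchy described above.
# Illustrative only: no standardized precedence rules exist for llms.txt.

def is_allowed(rules, user_agent, path):
    """rules: {agent: {"allow": [prefixes], "disallow": [prefixes]}}"""
    # 1. Specific user-agent rules override wildcard rules
    agent_rules = rules.get(user_agent) or rules.get("*") or {}
    allows = agent_rules.get("allow", [])
    disallows = agent_rules.get("disallow", [])
    # 2. & 3. Disallow takes precedence over Allow (most restrictive wins)
    if any(path.startswith(p) for p in disallows):
        return False
    if allows:
        return any(path.startswith(p) for p in allows)
    return True  # no matching rule: default to allowed

rules = {
    "ChatGPT-User": {"allow": ["/blog/"], "disallow": ["/pricing/"]},
    "*": {"disallow": ["/admin/"]},
}
print(is_allowed(rules, "ChatGPT-User", "/blog/post"))  # True
print(is_allowed(rules, "ChatGPT-User", "/pricing/plans"))  # False
print(is_allowed(rules, "ClaudeBot", "/admin/panel"))  # False
```

Note that real robots.txt implementations differ (Google, for example, prefers the longest matching path rather than always favoring Disallow), so verify against each crawler's documentation rather than relying on this logic.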
Best practice: Use robots.txt for broad access control, llms.txt for AI-specific refinements:
robots.txt (broad control):
User-agent: *
Disallow: /admin/
Crawl-delay: 5
llms.txt (AI-specific):
User-agent: ChatGPT-User
Allow: /blog/
Disallow: /pricing/
Crawl-delay: 3
According to OpenAI's crawler documentation (Nov 2024), ChatGPT-User respects both files, with llms.txt rules taking precedence for ChatGPT-specific directives.
Testing and Validation
No official llms.txt validation tools exist yet. Test using these methods:
- Manual verification: Check that https://yourdomain.com/llms.txt returns a 200 status and the correct MIME type (text/plain)
- Crawler logs: Monitor server logs for AI user-agents (ChatGPT-User, ClaudeBot, Google-Extended)
- Robots.txt testers: Run a robots.txt syntax checker against your file—the proposed llms.txt format mirrors robots.txt directives, so the same tooling applies
- Command-line testing: Simulate crawler requests to verify access
# Test crawler access simulation
curl -A "ChatGPT-User" https://yourdomain.com/blog/
curl -A "ClaudeBot" https://yourdomain.com/docs/
Monitor these user-agent strings in your logs:
- ChatGPT: ChatGPT-User
- Claude: ClaudeBot
- Gemini training: Google-Extended
- Perplexity: PerplexityBot
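Monitoring those user-agent strings can be automated with a short log scan. This sketch assumes the common combined log format, where the user-agent is the last quoted field; adjust the matching for your server's log layout:

```python
# Count requests per AI crawler in an access log (combined log format).
# Sketch: matches the user-agent substrings listed above; the sample log
# lines are fabricated for illustration.
import re
from collections import Counter

AI_AGENTS = ["ChatGPT-User", "ClaudeBot", "Google-Extended", "PerplexityBot"]

def count_ai_crawlers(log_lines):
    counts = Counter()
    for line in log_lines:
        # User-agent is the last quoted field in combined log format
        quoted = re.findall(r'"([^"]*)"', line)
        agent = quoted[-1] if quoted else ""
        for bot in AI_AGENTS:
            if bot in agent:
                counts[bot] += 1
    return counts

logs = [
    '1.2.3.4 - - [10/Jan/2026] "GET /blog/ HTTP/1.1" 200 512 "-" "Mozilla/5.0 ChatGPT-User/1.0"',
    '5.6.7.8 - - [10/Jan/2026] "GET /docs/ HTTP/1.1" 200 512 "-" "ClaudeBot/1.0"',
]
print(count_ai_crawlers(logs))  # Counter({'ChatGPT-User': 1, 'ClaudeBot': 1})
```

Running this weekly against rotated logs gives the "AI crawler requests" metric used in the dashboard section later in this article.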
Key Takeaway: llms.txt implementation is experimental in 2026. Major AI platforms respect robots.txt directives, making llms.txt optional for access control but useful for AI-specific rate limiting and path preferences. Focus on robots.txt compliance for known crawlers.
Which AI Platform Should You Optimize For First?
Platform prioritization depends on your audience demographics and content type. According to BrightEdge's 2024 AI search adoption survey of 847 SEO professionals, 68% are experimenting with AI search optimization, but only 12% measure ROI separately from traditional SEO.
Citation rate comparison (Q4 2024 data):
| Platform | Search Volume Share | Citation Rate | Technical Complexity | Best Content Type |
|---|---|---|---|---|
| Google AI Overviews | 12-15% of Google queries | Featured Snippet sites 7.3x more likely | Medium | Informational, how-to, definitions |
| ChatGPT Search | 3-5% global search | Not disclosed by OpenAI | High | Conversational queries, explanations |
| Perplexity | 2-3% global search | Research-focused content | Medium | Technical topics, academic content |
| Bing Copilot | 4-6% of Bing queries | Follows Bing ranking signals | Low | Same as Bing SEO targets |
Google AI Overviews Optimization
Start here if you have strong organic rankings. Semrush's analysis of 2,000 AI Overview results found 89% of citations come from sites already ranking in the top 10 organic results. Authority signals remain critical—94% of YMYL (Your Money Your Life) citations come from established authoritative domains.
Optimization checklist:
- Target Featured Snippets first: 7.3x citation correlation
- Update content quarterly: 67% of citations are from content updated within 12 months
- Structure answers directly: 3.1x citation boost for first-50-word answers
- Maintain traditional SEO: AI Overviews don't replace rankings, they supplement them
Configure Google Search Console's AI Overview tracking (released May 2024) under Search Appearance filter to monitor impressions where your site appears in AI Overviews.
ChatGPT Search Strategy
ChatGPT Search launched October 2024 with referrer data passing to cited websites, enabling basic traffic attribution. OpenAI has not published citation selection criteria equivalent to Google's Quality Rater Guidelines.
Optimization tactics:
- Conversational content structure: Natural language explanations over keyword-stuffed text
- Direct answer formatting: Lead with the answer, then provide context
- Recency signals: Publication dates and "as of [date]" timestamps
- Clear sourcing: Cite your own sources to signal trustworthiness
Track ChatGPT traffic manually via referrer headers in your analytics. No native citation tracking tools exist yet as of January 2026.
Perplexity and Bing AI Tactics
Perplexity's source selection FAQ (Sept 2024) explains their algorithm uses "semantic relevance, source trustworthiness, and recency" but doesn't disclose specific weighting.
Bing Copilot leverages traditional Bing ranking signals (Microsoft VP of Search Jordi Ribas, Feb 2024), making traditional Bing SEO tactics applicable.
Combined tactics:
- Academic/technical content performs well on Perplexity (research-focused user base)
- Bing AI optimization = traditional Bing SEO + conversational formatting
- Both platforms show citation inconsistency compared to Google AI Overviews
Platform Prioritization Framework
Prioritize Google AI Overviews if:
- You have top 10 rankings on 20+ high-volume keywords
- Content is informational/educational (how-to, definitions, guides)
- You're in B2B with commercial intent keywords
Prioritize ChatGPT Search if:
- Your audience skews technical/early adopter (18-34 demographic)
- Content answers complex, multi-step questions
- You can track and attribute AI traffic (need engineering resources)
Prioritize Perplexity if:
- You produce research-heavy, cited content
- Target audience includes academics, researchers, analysts
- Your domain already has strong authority signals
Key Takeaway: Prioritize Google AI Overviews if you rank top 10 for target keywords (89% of AI citations come from top 10). Add ChatGPT/Perplexity optimization only after establishing strong traditional search visibility and implementing tracking.
How Do You Structure Content for RAG Systems?
Retrieval-augmented generation (RAG) systems chunk your content, convert chunks to vector embeddings, retrieve relevant chunks for user queries, then feed retrieved text to language models. According to Pinecone's chunking strategy guide (Aug 2024), chunk sizes between 256-512 tokens (approximately 192-384 words) provide optimal balance between context and retrieval precision.
Chunk Size and Semantic Boundaries
LlamaIndex's RAG evaluation research (June 2024) found chunks maintaining semantic coherence—where all sentences relate to a single concept—were retrieved 3.2x more frequently than chunks mixing multiple topics.
Optimal specifications:
- Target chunk size: 200-350 words (250-450 tokens for GPT-4)
- Maximum chunk size: 500 words before semantic coherence degrades
- Minimum chunk size: 100 words (smaller chunks lose context despite precision gains)
Academic research on chunk size optimization (July 2024) revealed the trade-off: 128-token chunks increased retrieval accuracy 18% but decreased answer quality 12% due to insufficient context.
Semantic boundary rules:
- End chunks at natural breaks: section headers, topic shifts, conclusion sentences
- Avoid mid-paragraph splits—complete the thought
- Each chunk should answer a single question or explain one concept
- Use sentence-transformer models to detect semantic boundaries (not just word count)
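A simple way to approximate these size constraints is a greedy paragraph packer. Production RAG pipelines use semantic similarity (e.g., sentence-transformer embeddings) to pick boundaries; this word-count-only sketch just illustrates how the 350-word ceiling shapes chunking:

```python
# Greedy paragraph packer targeting the 200-350 word chunk range above.
# Sketch only: real pipelines split on semantic boundaries, not word counts,
# and the final chunk may fall below 200 words at document end.

def chunk_paragraphs(paragraphs, max_words=350):
    chunks, current, count = [], [], 0
    for para in paragraphs:
        words = len(para.split())
        if current and count + words > max_words:
            chunks.append(" ".join(current))  # flush before exceeding the max
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append(" ".join(current))
    return chunks

# Four paragraphs of 150, 120, 180, and 90 words
paras = [("word " * n).strip() for n in (150, 120, 180, 90)]
sizes = [len(c.split()) for c in chunk_paragraphs(paras)]
print(sizes)  # [270, 270]
```

Because paragraphs are never split mid-thought, this also satisfies the "avoid mid-paragraph splits" rule: a paragraph that would overflow a chunk starts the next one instead.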
Before RAG optimization (problematic):
Paragraph 1 discusses pricing models (150 words). Paragraph 2 jumps to integration requirements (175 words). Paragraph 3 covers security features (200 words). Total: 525 words in one semantic blob that mixes three distinct concepts.
After RAG optimization (improved):
Chunk 1: Pricing Models (225 words)
Include header + complete pricing explanation + cost comparison table. Self-contained answer to "How much does this cost?"
Chunk 2: Integration Requirements (250 words)
Include header + prerequisites + step-by-step integration process. Self-contained answer to "How do I integrate this?"
Chunk 3: Security Features (275 words)
Include header + security certifications + compliance details. Self-contained answer to "Is this secure?"
Information Density and Standalone Chunks
Weaviate's RAG chunking research (Oct 2024) demonstrated chunks including section headers and preceding context achieved 27% higher retrieval accuracy compared to body-text-only chunks.
Context injection methods:
- Prepend section headers: Include H2/H3 in the chunk text itself
- Add document metadata: Inject page title, category, publication date at chunk start
- Include surrounding context: Add 1-2 sentences from previous/next sections
Example with context injection:
[Page: AI Search Optimization Guide]
[Section: RAG Content Structure]
How Do You Structure Content for RAG Systems?
Retrieval-augmented generation (RAG) systems chunk your content,
convert chunks to vector embeddings, retrieve relevant chunks for
user queries, then feed retrieved text to language models...
[Adjacent context: Previous section covered platform prioritization.
Next section covers citation tracking.]
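The context-injection pattern above can be applied programmatically at chunking time. The bracketed field names ("Page", "Section", "Adjacent context") follow this article's example and are illustrative, not a standard metadata format:

```python
# Prepend page/section metadata to a chunk, mirroring the injected-context
# example above. Field names and labels are illustrative conventions.

def inject_context(chunk, page_title, section, adjacent=None):
    lines = [f"[Page: {page_title}]", f"[Section: {section}]", "", chunk]
    if adjacent:
        lines += ["", f"[Adjacent context: {adjacent}]"]
    return "\n".join(lines)

enriched = inject_context(
    "RAG systems chunk your content and convert chunks to embeddings...",
    page_title="AI Search Optimization Guide",
    section="RAG Content Structure",
    adjacent="Previous section covered platform prioritization.",
)
print(enriched.splitlines()[0])  # [Page: AI Search Optimization Guide]
```

The enriched text is what gets embedded, so the header and metadata tokens participate in retrieval matching; this is the mechanism behind the 27% accuracy gain Weaviate reported for header-inclusive chunks.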
Moz's cross-platform citation study (Oct 2024) analyzing 5,000 queries found content with direct answers in the first 50 words showed 3.1x higher citation rates. Structure each chunk with inverted pyramid: answer first, then explanation/context.
List-based content performs better: LlamaIndex's benchmarking showed numbered lists and bulleted content achieved 2.4x better retrieval rates versus paragraph-only formatting. Lists create clear semantic boundaries and discrete information units.
Header Hierarchy for Retrieval
Qdrant's hybrid search research (Sept 2024) found documents with clear H2/H3 hierarchy showed 31% better retrieval performance when headers were embedded alongside content chunks.
Header structure rules:
- Use H2 for main sections, H3 for subsections (semantic scaffolding for LLMs)
- Make headers descriptive questions or statements: "How to Calculate ROI" not "ROI"
- Include keywords in headers but prioritize clarity over optimization
- Limit header nesting to 3 levels (H1 > H2 > H3) for retrieval simplicity
Key Takeaway: Structure content in 200-350 word chunks with standalone context. Each chunk should answer one question completely. Include section headers in chunk text, use list formatting where appropriate, and lead with direct answers for 3.1x citation improvement.
How Do You Track AI Citation Performance?
Traditional analytics miss most AI search traffic. According to SparkToro's AI Overview traffic analysis (June 2024), AI Overviews resulted in 40% fewer clicks compared to Featured Snippets for informational queries, creating "zero-click" impression issues.
Google Search Console Configuration
Google released AI Overview tracking in Search Console in May 2024. Access it under Performance > Search Appearance filter.
Setup steps:
- Open Google Search Console for your verified property
- Navigate to Performance > Search Results
- Click "+ NEW" next to search appearance filters
- Select "AI-powered overviews and snapshots"
- Apply filter to view AI Overview impressions, clicks, CTR
Key metrics:
- AI Overview impressions: How often your site appeared in AI Overviews (zero-click events)
- Clicks from AI Overviews: Traffic when users clicked through from AI Overview to your site
- CTR from AI Overviews: Click-through rate from AI Overview appearances
- Position in AI Overview: Not disclosed; no ranking position data available
This tracks Google AI Overviews only. No data on ChatGPT, Perplexity, or other platforms.
GA4 Custom Events for AI Traffic
ChatGPT Search passes referrer data as of October 2024, enabling basic attribution. Set up custom GA4 events to segment AI traffic.
GA4 event configuration:
- Navigate to Admin > Data Streams > Web > Configure Tag Settings
- Create custom event: ai_search_traffic
- Add condition: page_referrer contains chat.openai.com
- Add second condition for Perplexity: page_referrer contains perplexity.ai
Example GA4 event code (via GTM or gtag.js):
// Track AI search traffic
if (document.referrer.includes('chat.openai.com')) {
gtag('event', 'ai_search_traffic', {
'ai_platform': 'ChatGPT',
'referrer': document.referrer,
'landing_page': window.location.pathname
});
}
if (document.referrer.includes('perplexity.ai')) {
gtag('event', 'ai_search_traffic', {
'ai_platform': 'Perplexity',
'referrer': document.referrer,
'landing_page': window.location.pathname
});
}
Perplexity's referrer data passing remains inconsistent based on community reports from mid-2024. Test current behavior before relying on it for attribution.
Dashboard Setup and Reporting
No public dashboard templates exist for AI search performance as of January 2026. Build custom dashboards combining:
Data sources:
- Google Search Console API (AI Overview impressions/clicks)
- GA4 custom events (ChatGPT/Perplexity traffic)
- Server logs (AI crawler activity via user-agent strings)
Recommended metrics:
| Metric | Data Source | Update Frequency |
|---|---|---|
| AI Overview impressions | GSC | Daily |
| AI Overview CTR | GSC | Daily |
| ChatGPT referral traffic | GA4 | Real-time |
| Perplexity referral traffic | GA4 | Real-time |
| AI crawler requests | Server logs | Weekly |
| Citation frequency | Manual tracking | Monthly |
Key Takeaway: Configure Google Search Console's AI Overview filter for impression tracking (released May 2024). Set up GA4 custom events tracking chat.openai.com and perplexity.ai referrers. Manual citation tracking remains necessary for full attribution until platforms release official analytics tools.
How Much Should You Invest in AI Search Optimization?
Resource allocation depends on your existing SEO maturity and AI search opportunity size. According to BrightEdge's 2024 survey, 68% of SEO professionals experiment with AI search optimization but only 12% measure ROI separately, indicating most treat it as an extension of traditional SEO rather than a distinct channel.
Effort estimates by site size:
| Site Size | Initial Audit | Monthly Optimization | Team Size |
|---|---|---|---|
| Small (50-200 pages) | 15-20 hours | 8-10 hours | 1 person (SEO generalist) |
| Mid-size (200-1,000 pages) | 40-60 hours | 15-20 hours | 2 people (SEO + content) |
| Enterprise (1,000+ pages) | 80-100 hours | 30-40 hours | 3-4 people (SEO, content, engineering, analytics) |
Initial audit components (40-60 hour mid-size example):
- Existing citation analysis: 8 hours (manual checks across platforms)
- Content structure assessment: 12 hours (chunk analysis, semantic coherence review)
- Technical implementation: 10 hours (llms.txt, GSC configuration, GA4 events)
- Competitive analysis: 6 hours (which competitors appear in AI search)
- Strategy documentation: 4 hours (prioritization, timeline, success metrics)
Monthly optimization activities (15-20 hour mid-size example):
- Content updates for freshness: 8 hours (quarterly refresh cycle)
- Citation monitoring: 4 hours (manual tracking + analytics review)
- Platform-specific optimization: 4 hours (ChatGPT vs Perplexity formatting tests)
- Performance reporting: 4 hours (dashboard updates, stakeholder communication)
Budget allocation framework:
Early stage (first 6 months): 20% AI search optimization, 80% traditional SEO
- Rationale: AI search builds on organic authority; master traditional SEO first
- Investment: Tracking implementation, content structure experiments
Growth stage (6-12 months): 30% AI search optimization, 70% traditional SEO
- Rationale: Citation rates stabilize, AI traffic becomes measurable channel
- Investment: Scaling optimizations across content inventory
Mature stage (12+ months): 40% AI search optimization, 60% traditional SEO
- Rationale: AI search reaches 15%+ of search volume (Comscore, Nov 2024)
- Investment: Platform-specific strategies, advanced tracking, A/B testing
Key Takeaway: Budget 40-60 hours initial audit plus 15-20 hours monthly optimization for mid-size sites. Allocate 20-30% of SEO resources to AI search in year one, scaling to 40% as AI search reaches 20%+ of volume. ROI measurement lags 3-6 months behind traditional SEO due to tracking complexity.
Frequently Asked Questions
How much does AI search optimization cost compared to traditional SEO?
Direct Answer: AI search optimization costs 20-40% more than traditional SEO for the same content volume due to platform-specific formatting, advanced tracking setup, and manual citation monitoring that automation doesn't cover yet.
For a mid-size site (500 pages), traditional SEO might cost $3,000-5,000 monthly (technical SEO, content, link building). Adding AI search optimization adds $600-2,000 monthly for RAG restructuring, multi-platform testing, and citation tracking. The marginal cost decreases as you scale since many optimizations apply across platforms. Most agencies bundle AI search into existing SEO retainers rather than pricing separately.
Learn more about AI SEO workflows and tool selection for budget planning.
What's the difference between optimizing for ChatGPT vs Google AI Overviews?
Direct Answer: Google AI Overviews prioritize sites already ranking top 10 with Featured Snippet-style content, while ChatGPT Search favors conversational formatting and direct answers without requiring existing rankings.
Google AI Overviews citation requires strong domain authority first—89% of citations come from top 10 organic results. Focus on traditional SEO signals (backlinks, E-E-A-T, technical optimization) before expecting AI Overview appearances. ChatGPT Search appears more democratic, citing newer or lower-authority sites if content directly answers conversational queries. However, OpenAI hasn't published citation criteria, making ChatGPT optimization more experimental than Google's relatively transparent approach.
For platform-specific tactics, see getting cited by ChatGPT.
How long does it take to see results from AI search optimization?
Direct Answer: Expect 30-60 days for first AI citations and 90-120 days for measurable traffic attribution, faster than traditional SEO's 4-6 month timeline but with more measurement complexity.
Google Search Console shows AI Overview impressions within 2-4 weeks after optimization if you already rank top 10. ChatGPT/Perplexity citations appear within 30-45 days but require manual checking since no automated tracking exists. However, converting citations into measurable traffic takes longer—you need 90+ days of data to separate AI search traffic from organic search in analytics. Sites with existing authority see results faster than new domains.
Can you optimize for AI search without hurting traditional SEO rankings?
Direct Answer: Yes—AI search optimization is additive to traditional SEO, not competitive, since 89% of AI citations come from top 10 organic results anyway.
The content tactics overlap significantly: direct answers help Featured Snippets and AI citations simultaneously, semantic chunking improves readability for humans and retrieval systems, and content freshness benefits both ranking algorithms and AI citation rates. The only potential conflict: blocking AI crawlers via robots.txt prevents AI citations without affecting traditional rankings. But most sites want both, making optimization complementary.
What tools track citations in ChatGPT and Perplexity?
Direct Answer: No automated tools exist for ChatGPT or Perplexity citation tracking as of January 2026—manual checking and referrer-based GA4 events are your only options.
Google Search Console tracks AI Overviews (released May 2024), but ChatGPT and Perplexity lack official analytics APIs. Set up custom GA4 events tracking chat.openai.com and perplexity.ai referrers to measure traffic, but citation frequency requires manual monthly searches of your brand/products in both platforms. Some agencies use virtual assistants to check citations weekly, but no automated solution exists yet. This is a major pain point practitioners are actively trying to solve.
How often should you update content to maintain AI citations?
Direct Answer: Quarterly updates maintain citation rates—67% of AI Overview citations come from content updated within the past 12 months according to Semrush's November 2024 study.
AI search systems prioritize freshness more than traditional search. Set up a quarterly content refresh schedule focusing on: updating statistics/dates, adding recent examples, refreshing screenshots/images, and confirming accuracy of technical details. Pages with AI citations require more frequent updates than pages without citations since losing citations is harder to recover than losing traditional rankings. Monthly reviews of top-performing AI-cited pages catch issues before citations drop.
Explore content creation workflows for scaling updates efficiently.
What's the optimal paragraph length for AI retrieval systems?
Direct Answer: Target 200-350 words per section (semantic chunk) with standalone context, based on RAG system retrieval performance research from Pinecone and LlamaIndex in 2024.
Shorter paragraphs (50-100 words) lose context and decrease answer quality despite improving retrieval precision. Longer sections (500+ words) mix multiple topics, reducing semantic coherence and retrieval accuracy by 3.2x. The sweet spot: 2-4 paragraphs covering one complete idea with section headers included in the chunk. Each chunk should answer a single question independently without requiring surrounding paragraphs for context. Lists and tables within chunks improve retrieval by 2.4x over paragraph-only formatting.
Do llms.txt files work the same way as robots.txt?
Direct Answer: llms.txt uses similar syntax to robots.txt but isn't a formal standard—it's a community proposal with inconsistent adoption across AI platforms as of January 2026.
robots.txt is an established protocol from 1994 with universal crawler support and clear precedence rules. llms.txt emerged in 2024 without standards body (IETF/W3C) backing, making adoption voluntary and implementation inconsistent. Major AI platforms (ChatGPT, Claude, Gemini) respect robots.txt directives, making llms.txt optional for basic access control. Use llms.txt for AI-specific refinements like platform-specific crawl delays or path preferences, but rely on robots.txt for critical access control until llms.txt standardization matures.
For discovery across all AI platforms, read how customers discover you through AI search.
Conclusion
AI search optimization requires platform-specific tactics layered on top of traditional SEO foundations. Based on analysis of 2,000+ AI search results and platform documentation, the core tactics are: implement RAG-optimized content chunks (200-350 words with standalone context), lead with direct answers in the first 50 words (3.1x citation boost), maintain quarterly content freshness (67% of citations from recent content), and configure tracking across Google Search Console, GA4 custom events, and manual citation checks.
Prioritize Google AI Overviews if you already rank top 10 for target keywords—89% of citations require existing organic authority. Budget 40-60 hours initial audit plus 15-20 hours monthly optimization for mid-size sites, allocating 20-40% of total SEO resources to AI search as the channel scales toward 20% of search volume. Expect 30-60 days to first citations but 90-120 days to measurable traffic attribution due to tracking complexity.
The measurement gap is the biggest challenge: no automated tools track ChatGPT or Perplexity citations, requiring manual monitoring until platforms release official analytics. Focus your initial efforts on Google AI Overviews where Search Console provides reliable data, then expand to other platforms as tracking matures. AI search optimization is additive to traditional SEO, not a replacement—master organic rankings first, then layer on AI-specific tactics.