A one-second delay in page load time reduces conversions by 7%. Now multiply that cost: AI crawlers abandon slow pages even faster than humans do, which means your content never makes it into AI-generated answers. In this guide, you’ll learn how to audit, prioritize, and optimize every Core Web Vital for both users and AI crawlers. We’ll cover LCP optimization, INP improvement, CLS fixes, and the specific performance thresholds AI systems demand. Reading time: approximately 15 minutes.
Why Core Web Vitals Matter More in the AI Era
Core Web Vitals have always influenced Google rankings. But the rise of AI search has added a new layer of urgency. When GPTBot, ClaudeBot, or PerplexityBot crawl your site, they operate under strict time budgets. If your page takes too long to serve content, these crawlers move on. Your carefully written content never gets indexed by AI systems, and you never appear in AI-generated responses.
Think of it this way: traditional SEO rewarded fast sites with higher rankings. AI SEO punishes slow sites with complete invisibility.
Google’s own data from the Chrome User Experience Report shows that users are 24% less likely to abandon page loads on sites that meet all three Core Web Vitals thresholds. For AI crawlers, the stakes are even higher. These systems don’t wait around. They have millions of pages to process and strict compute budgets to respect.
The connection between site speed and AI visibility is straightforward. Faster pages get crawled more completely. More complete crawling means more of your content enters AI training data and retrieval systems. More content in AI systems means more citations and more traffic from AI-powered search.
This isn’t theoretical. We’ll show you real metrics from companies that improved their Core Web Vitals and saw measurable gains in AI search visibility.
Understanding the Three Core Web Vitals
Before we start optimizing, let’s make sure we’re speaking the same language. Core Web Vitals are three specific metrics that Google uses to measure real-world user experience on your site.
Largest Contentful Paint (LCP)
LCP measures how long it takes for the largest visible element on your page to fully render. This is usually a hero image, a large text block, or a video thumbnail. It answers the question: “How quickly does the main content appear?”
Thresholds:
- Good: 2.5 seconds or less
- Needs improvement: 2.5 to 4.0 seconds
- Poor: more than 4.0 seconds
Interaction to Next Paint (INP)
INP replaced First Input Delay (FID) in March 2024. It measures the responsiveness of your page to user interactions throughout the entire page lifecycle, not just the first interaction. Every click, tap, and keyboard input is tracked. The worst interaction latency (minus outliers) becomes your INP score.
Thresholds:
- Good: 200 milliseconds or less
- Needs improvement: 200 to 500 milliseconds
- Poor: more than 500 milliseconds
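To make the "worst interaction, minus outliers" rule concrete, here is a rough sketch. Chrome's actual implementation ignores one highest-latency interaction for every 50 interactions recorded; the function below is our approximation of that rule, not a browser API:

```javascript
// Approximate INP from a list of interaction latencies (in ms).
// Chrome discards one worst interaction per 50 recorded, then
// reports the highest remaining latency as the INP score.
function approximateINP(latencies) {
  if (latencies.length === 0) return 0;
  const sorted = [...latencies].sort((a, b) => b - a); // worst first
  const outliersToIgnore = Math.floor(latencies.length / 50);
  return sorted[Math.min(outliersToIgnore, sorted.length - 1)];
}
```

With fewer than 50 interactions, the score is simply the single worst latency, which is why one slow click handler can push an otherwise fast page into "poor" territory.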
Cumulative Layout Shift (CLS)
CLS measures how much the page layout shifts unexpectedly during loading. You know that frustrating experience when you’re about to tap a button and the page suddenly jumps? That’s layout shift. CLS quantifies it.
Thresholds:
- Good: 0.1 or less
- Needs improvement: 0.1 to 0.25
- Poor: more than 0.25
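The browser scores CLS using "session windows": shifts that occur within 1 second of the previous shift, inside a window capped at 5 seconds, are summed together, and the largest window becomes your score. A simplified sketch of that grouping (our own approximation, not the browser's exact algorithm):

```javascript
// Approximate CLS via session windows: group shifts while each one
// lands within 1s of the previous shift and the window spans at most
// 5s; the score is the largest window total.
function approximateCLS(shifts) {
  // shifts: [{ time: ms since load, value: shift score }], sorted by time
  let maxWindow = 0;
  let windowSum = 0;
  let windowStart = 0;
  let prevTime = -Infinity;
  for (const { time, value } of shifts) {
    const startsNewWindow =
      time - prevTime > 1000 || time - windowStart > 5000;
    if (startsNewWindow) {
      windowSum = 0;
      windowStart = time;
    }
    windowSum += value;
    maxWindow = Math.max(maxWindow, windowSum);
    prevTime = time;
  }
  return maxWindow;
}
```

The practical consequence: a burst of small shifts during load accumulates into one window, so fixing the first shift in a chain often improves the whole score.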
Core Web Vitals Threshold Comparison Table

| Metric | Good | Needs Improvement | Poor |
| ------ | ---- | ----------------- | ---- |
| LCP | 2.5 s or less | 2.5 to 4.0 s | more than 4.0 s |
| INP | 200 ms or less | 200 to 500 ms | more than 500 ms |
| CLS | 0.1 or less | 0.1 to 0.25 | more than 0.25 |
How AI Crawlers Handle Slow Pages
Understanding how AI crawlers behave differently from human visitors is essential for effective Core Web Vitals optimization. Here’s what happens when a bot visits your site.
Crawl Budget and Time Limits
AI crawlers operate with strict crawl budgets. GPTBot, ClaudeBot, and PerplexityBot allocate a fixed amount of time per domain. If your pages are slow, the crawler processes fewer pages in that window. A site with 1,000 pages and a 3-second average load time will have roughly half as many pages crawled compared to a site with a 1.5-second average.
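The arithmetic behind that claim is simple division. As a back-of-envelope sketch (the 60-second budget below is a hypothetical figure for illustration; crawler operators don't publish their per-domain budgets):

```javascript
// Back-of-envelope estimate: how many pages fit in a fixed crawl
// budget. The budget value is hypothetical, not a published number.
function pagesCrawled(budgetSeconds, avgLoadSeconds) {
  return Math.floor(budgetSeconds / avgLoadSeconds);
}

const slowSite = pagesCrawled(60, 3.0); // 20 pages
const fastSite = pagesCrawled(60, 1.5); // 40 pages
```

Halving your average load time doubles crawl coverage, which is exactly the relationship described above.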
JavaScript Rendering Challenges
Most AI crawlers do not render JavaScript the way Google’s crawler does. They rely primarily on server-rendered HTML. If your LCP element depends on client-side JavaScript to render, AI crawlers might see an empty page. This makes server-side rendering (SSR) and static generation critical for AI visibility.
The Content Extraction Pipeline
When an AI crawler visits your page, the process looks like this:
1. The crawler requests your URL with its bot user agent.
2. Your server responds with HTML, within the crawler's time budget.
3. The crawler parses the HTML and extracts the main content.
4. The content is cleaned, chunked, and processed.
5. The processed content is indexed for retrieval or used in training.
If step 2 takes too long, steps 3-5 never happen. Your content is invisible to that AI system. This is why site speed directly determines your visibility in tools like ChatGPT, Perplexity, and Claude.
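Conceptually, the fetch step behaves like a promise race against a time budget: if the page isn't served in time, the rest of the pipeline never runs. A minimal sketch, with a hypothetical 2-second budget:

```javascript
// Sketch of a time-budgeted fetch: if the page is not served within
// the budget, the crawler gives up and extraction never happens.
// The 2000ms default is a hypothetical value for illustration.
async function fetchWithBudget(pageLoadPromise, budgetMs = 2000) {
  const timeout = new Promise((resolve) =>
    setTimeout(() => resolve({ abandoned: true }), budgetMs)
  );
  // Whichever settles first wins; a slow page is simply abandoned
  return Promise.race([pageLoadPromise, timeout]);
}
```

A page that responds in 500ms wins the race every time; a page that needs 3 seconds loses it every time, no matter how good the content is.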
Related: How to Make Your SaaS Visible to ChatGPT and AI Search Engines
Step 1: Audit Your Current Performance
You can’t fix what you don’t measure. Start with a comprehensive performance audit using both lab and field data.
Lab Data vs. Field Data
Lab data comes from running tests in controlled environments (Lighthouse, WebPageTest). It’s consistent and reproducible but doesn’t reflect real user conditions.
Field data comes from actual user visits collected by the Chrome User Experience Report (CrUX). It shows what real visitors experience but varies based on devices, networks, and geography.
You need both. Lab data helps you diagnose issues. Field data tells you if your fixes actually work in production.
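One detail worth knowing about field data: Google assesses each Core Web Vital at the 75th percentile of page loads, so a page passes only when at least 75% of visits meet the threshold. A minimal nearest-rank percentile sketch:

```javascript
// Field data is judged at the 75th percentile: a page passes a
// Core Web Vital when at least 75% of visits meet the threshold.
// Nearest-rank percentile over a set of samples.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// Example: LCP samples in seconds from 8 visits
const lcpSamples = [1.2, 1.4, 1.9, 2.1, 2.3, 2.6, 3.8, 5.0];
const p75LCP = percentile(lcpSamples, 75); // 2.6, just over the 2.5s "good" line
```

This is why a page can feel fast on your office connection and still fail: the slowest quarter of your visitors sets your score.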
Recommended Audit Tools
- PageSpeed Insights: lab and field data for a single URL in one report
- Lighthouse (in Chrome DevTools): detailed, reproducible lab diagnostics
- WebPageTest: waterfall analysis and testing from multiple locations
- Chrome UX Report (CrUX) Dashboard: field data trends over time
- Chrome DevTools Performance panel: deep debugging of long tasks and layout shifts
Running Your First Audit
Start with PageSpeed Insights. Enter your URL and look at three things:
1. The field data summary at the top: your real-user Core Web Vitals from CrUX
2. The lab diagnostics below it: Lighthouse's reproducible measurements
3. The Opportunities and Diagnostics sections, which point to specific fixes
Document your baseline numbers. You’ll need them to measure progress. Here’s a template:
PERFORMANCE BASELINE - [Your Site URL]
Date: [Today's Date]
---------------------------------------
LCP (Field): _____ seconds
LCP (Lab): _____ seconds
INP (Field): _____ milliseconds
CLS (Field): _____
CLS (Lab): _____
TTFB: _____ milliseconds
Total Page Size: _____ KB
Requests: _____
Checking AI Crawler Response Times
Standard performance tools measure what human visitors see. You also need to check what AI crawlers experience. Use curl to simulate a bot request:
# Measure server response time as GPTBot
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\nTotal: %{time_total}s\n" \
  -H "User-Agent: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko); compatible; GPTBot/1.2; +https://openai.com/gptbot" \
https://yoursite.com/your-page
# Measure server response time as ClaudeBot
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\nTotal: %{time_total}s\n" \
-H "User-Agent: ClaudeBot/1.0" \
https://yoursite.com/your-page
If bot response times are significantly slower than normal browser response times, your server might be throttling bot traffic, or your CDN configuration may not be caching responses for bot user agents.
Related: Why Your SaaS Isn’t Showing Up in AI Search Results
Step 2: Prioritize Fixes by Impact
After your audit, you’ll likely have a long list of issues. Don’t try to fix everything at once. Prioritize by impact on both users and AI crawlers.
The Impact-Effort Matrix for Core Web Vitals
Score each fix on two axes: expected impact and implementation effort. Do the high-impact, low-effort items first (server caching, image dimensions, fetchpriority on the LCP image), schedule high-impact, high-effort projects next (SSR migration, CDN rollout), and defer the rest. The general rule: fix LCP first. LCP has the most direct impact on both user experience and AI crawler success. If your server takes 3 seconds to respond, nothing else matters. AI crawlers may never see your content.
Step 3: Optimize Largest Contentful Paint (LCP)
LCP optimization is where you’ll see the biggest returns. Let’s break it down into the four sub-parts that make up your LCP time.
The Four Components of LCP
According to web.dev, LCP can be decomposed into four sub-parts:
1. Time to First Byte (TTFB): how long until the first byte of HTML arrives
2. Resource load delay: the gap between TTFB and when the LCP resource starts loading
3. Resource load duration: how long the LCP resource takes to download
4. Element render delay: the gap between the resource finishing and the element rendering

Each sub-part needs a different optimization strategy.
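Given timing data for a page, the decomposition into TTFB, resource load delay, resource load duration, and element render delay is simple subtraction. A sketch with illustrative field names (not a real browser API; in practice you'd derive these from PerformanceObserver entries):

```javascript
// Split a total LCP time into its four sub-parts from timestamps
// (all in ms, relative to navigation start). Field names here are
// illustrative placeholders, not browser API properties.
function lcpBreakdown({ ttfb, resourceLoadStart, resourceLoadEnd, lcpRenderTime }) {
  return {
    ttfb,
    resourceLoadDelay: resourceLoadStart - ttfb,
    resourceLoadDuration: resourceLoadEnd - resourceLoadStart,
    elementRenderDelay: lcpRenderTime - resourceLoadEnd,
  };
}
```

Whichever sub-part dominates tells you where to spend your effort: a large resourceLoadDelay points to missing preload hints, while a large elementRenderDelay points to render-blocking scripts or styles.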
Reducing TTFB
TTFB is the foundation of everything. If your server is slow, nothing else can compensate.
Server-side caching: Implement page-level caching for your most important pages. For WordPress sites, use a plugin like WP Super Cache or W3 Total Cache. For custom applications, implement Redis or Memcached.
# Nginx server-side caching configuration
location / {
    proxy_pass http://app_upstream;  # your application upstream
    proxy_cache my_cache;
    proxy_cache_valid 200 60m;
    proxy_cache_valid 404 1m;
    proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
    proxy_cache_background_update on;
    proxy_cache_lock on;

    # Serve cached content to bots for faster response. Set $skip_cache
    # to 1 in your own bypass rules (e.g. logged-in users), never for bots.
    set $skip_cache 0;
    if ($http_user_agent ~* "GPTBot|ClaudeBot|PerplexityBot|Googlebot") {
        set $skip_cache 0; # Always serve cached content to bots
    }
    proxy_cache_bypass $skip_cache;
    proxy_no_cache $skip_cache;

    add_header X-Cache-Status $upstream_cache_status;
}
CDN implementation: A Content Delivery Network serves your pages from edge servers close to the visitor. For AI crawlers, which often originate from US data centers, this reduces TTFB significantly if your origin server is in another region.
Database query optimization: Slow database queries are the most common cause of high TTFB. Identify queries that take more than 100ms and add proper indexes, implement query caching, or restructure the queries.
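A lightweight way to find those slow queries is to wrap your query function with a timer and log anything over the 100ms threshold. This sketch uses placeholder names; `runQuery` stands in for your actual database client's query method:

```javascript
// Wrap a query function so anything slower than the threshold is
// logged. `runQuery` is a placeholder for your database client.
function withSlowQueryLog(runQuery, thresholdMs = 100, log = console.warn) {
  return async function timedQuery(sql, params) {
    const start = Date.now();
    const result = await runQuery(sql, params);
    const elapsed = Date.now() - start;
    if (elapsed > thresholdMs) {
      log(`Slow query (${elapsed}ms): ${sql}`);
    }
    return result;
  };
}
```

Run this in production for a day and the log tells you exactly which queries need indexes or caching, ordered by how often they fire.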
Optimizing the LCP Resource
Once your server responds quickly, focus on the LCP element itself. For most pages, this is a hero image.
Serve modern image formats:
<!-- Use picture element for format fallbacks -->
<picture>
  <source srcset="/hero.avif" type="image/avif">
  <source srcset="/hero.webp" type="image/webp">
  <img src="/hero.jpg" alt="Descriptive alt text"
       width="1200" height="630"
       fetchpriority="high"
       decoding="async">
</picture>
Use fetchpriority="high" on the LCP image. This tells the browser to prioritize loading this resource above others. It’s a simple attribute that can reduce LCP by 200-400ms.
Preload the LCP image:
<head>
  <!-- Preload the hero image so it starts loading immediately -->
  <link rel="preload" as="image" href="/hero.webp"
        type="image/webp" fetchpriority="high">

  <!-- Preconnect to third-party origins if LCP resource is external -->
  <link rel="preconnect" href="https://cdn.example.com">
</head>
Before and After: LCP Optimization
Real-world results bear this out:
Shopify reported that reducing their LCP by 1.2 seconds across their merchant storefronts resulted in a measurable increase in conversion rates. Pages that loaded in under 2.5 seconds had significantly better engagement metrics compared to slower pages.
Related: Schema Markup for AI Agents: JSON-LD Examples That Work
Step 4: Improve Interaction to Next Paint (INP)
INP measures responsiveness. While AI crawlers don’t click buttons, heavy JavaScript that causes poor INP scores often blocks content rendering too. Fixing INP usually means cleaning up JavaScript execution, which helps both users and bots.
What Causes Poor INP?
Poor INP almost always comes down to long tasks on the main thread. A long task is any JavaScript execution that blocks the main thread for more than 50 milliseconds. Common culprits include:
Breaking Up Long Tasks
The most effective INP improvement technique is breaking long tasks into smaller chunks using scheduler.yield() or setTimeout:
// BEFORE: One long task blocking the main thread for 300ms
function processAllItems(items) {
  items.forEach(item => {
    // Complex processing per item
    validateItem(item);
    transformItem(item);
    renderItem(item);
  });
}

// AFTER: Yielding to the main thread between chunks
async function processAllItems(items) {
  const CHUNK_SIZE = 5;
  for (let i = 0; i < items.length; i += CHUNK_SIZE) {
    const chunk = items.slice(i, i + CHUNK_SIZE);
    chunk.forEach(item => {
      validateItem(item);
      transformItem(item);
      renderItem(item);
    });
    // Yield to the main thread so the browser can process user input.
    // scheduler.yield() is not available in every browser yet, so
    // fall back to setTimeout where it is missing.
    if (i + CHUNK_SIZE < items.length) {
      if (globalThis.scheduler?.yield) {
        await scheduler.yield();
      } else {
        await new Promise(resolve => setTimeout(resolve, 0));
      }
    }
  }
}
Defer Non-Critical JavaScript
Every script that loads synchronously is a potential INP killer. Audit your scripts and defer anything that isn’t needed for initial rendering:
<!-- BEFORE: Blocking scripts -->
<script src="/analytics.js"></script>
<script src="/chat-widget.js"></script>
<script src="/social-sharing.js"></script>
<!-- AFTER: Deferred and lazy-loaded scripts -->
<script src="/critical-app.js"></script>
<script defer src="/analytics.js"></script>
<script>
  // Lazy-load chat widget after the first user scroll
  document.addEventListener('scroll', () => {
    import('/chat-widget.js');
  }, { once: true });

  // Lazy-load social sharing when its section becomes visible
  const observer = new IntersectionObserver((entries) => {
    if (entries[0].isIntersecting) {
      import('/social-sharing.js');
      observer.disconnect();
    }
  });
  observer.observe(document.querySelector('#social-section'));
</script>
INP Improvement: Real Results
The Economic Times, one of India’s largest news sites, reduced their INP from 860ms to 180ms by breaking up long tasks and deferring non-critical JavaScript. This directly improved their engagement metrics and reduced bounce rates.
Redbus, an Indian travel platform, improved their INP by optimizing React hydration and deferring third-party scripts. Their optimization work led to measurable improvements in user interaction responsiveness.
For AI crawling specifically, reducing JavaScript overhead means your server-rendered content is available faster. When GPTBot requests your page, it doesn’t execute JavaScript at all. But the same bloated scripts that cause poor INP often delay server-side rendering too, because the Node.js server is running the same heavy code to generate the HTML.
Step 5: Fix Cumulative Layout Shift (CLS)
CLS issues are often the easiest to fix but the most annoying for users. For AI crawlers, layout shifts are less of a direct concern, but they signal poor page quality, which can affect how AI systems rank your content.
Common CLS Causes and Fixes
1. Images and Videos Without Dimensions
This is the number one cause of CLS. When an image loads without explicit width and height, the browser doesn’t know how much space to reserve. The content below jumps when the image appears.
<!-- BAD: No dimensions, causes layout shift -->
<img src="/product-image.jpg" alt="Product">
<!-- GOOD: Explicit dimensions prevent layout shift -->
<img src="/product-image.jpg" alt="Product" width="800" height="600">
<!-- ALSO GOOD: CSS aspect-ratio for responsive images -->
<style>
  .hero-image {
    aspect-ratio: 16 / 9;
    width: 100%;
    height: auto;
  }
</style>
<img src="/hero.jpg" alt="Hero" class="hero-image">
2. Web Fonts Causing Text Shift
When a custom font loads and replaces the fallback font, text can reflow and shift surrounding elements. Use font-display: swap with a size-adjusted fallback font, and preload the font file with <link rel="preload" as="font"> to keep the swap window short:

/* Define font with swap behavior so text stays visible while loading */
@font-face {
  font-family: 'CustomFont';
  src: url('/fonts/custom.woff2') format('woff2');
  font-display: swap;
}

/* Size-adjusted fallback to minimize shift when the swap happens */
@font-face {
  font-family: 'CustomFont-Fallback';
  src: local('Arial');
  size-adjust: 105%;
  ascent-override: 95%;
  descent-override: 22%;
  line-gap-override: 0%;
}

body {
  font-family: 'CustomFont', 'CustomFont-Fallback', Arial, sans-serif;
}
3. Dynamic Content Injection
Ads, banners, cookie notices, and dynamically loaded content all cause layout shifts if you don’t reserve space for them:
/* Reserve space for ad slots */
.ad-slot-banner {
  min-height: 250px;
  width: 100%;
  background: #f5f5f5; /* Placeholder background */
}

/* Reserve space for cookie banner */
.cookie-banner-container {
  min-height: 80px;
  position: fixed; /* Fixed overlays don't push content, so no CLS */
  bottom: 0;
  width: 100%;
}
CLS Before and After
Related: llms.txt Implementation: Complete Guide for SaaS Companies
Step 6: Monitor and Maintain Performance
Core Web Vitals optimization isn’t a one-time project. Every new feature, every third-party script addition, and every content update can regress your performance. You need continuous monitoring.
Setting Up Performance Budgets
A performance budget defines the maximum acceptable values for key metrics. When a deployment exceeds these budgets, it’s flagged before reaching production.
[
  {
    "path": "/*",
    "resourceSizes": [
      { "resourceType": "script", "budget": 300 },
      { "resourceType": "image", "budget": 500 },
      { "resourceType": "total", "budget": 1500 }
    ],
    "timings": [
      { "metric": "largest-contentful-paint", "budget": 2500 },
      { "metric": "cumulative-layout-shift", "budget": 0.1 },
      { "metric": "total-blocking-time", "budget": 200 }
    ]
  }
]

Note that budgets use Lighthouse's lab metrics. INP is a field-only metric, so the budget above uses total-blocking-time, its closest lab proxy.
Integrating Performance Checks in CI/CD
Add Lighthouse CI to your deployment pipeline to catch regressions before they ship:
# .github/workflows/lighthouse-ci.yml
name: Lighthouse CI
on: [pull_request]
jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci && npm run build
      # Start the app so Lighthouse can reach localhost
      # (adjust the start command to your stack)
      - run: npm run start & npx wait-on http://localhost:3000
      - name: Run Lighthouse CI
        uses: treosh/lighthouse-ci-action@v11
        with:
          urls: |
            http://localhost:3000/
            http://localhost:3000/pricing
            http://localhost:3000/blog
          budgetPath: ./budget.json
          uploadArtifacts: true
Monitoring AI Crawler Performance Specifically
Set up server-side logging to track response times for AI crawler requests separately:
// Express.js middleware to log AI bot response times
const AI_BOT_PATTERNS = [
  'GPTBot', 'ClaudeBot', 'PerplexityBot',
  'Amazonbot', 'GoogleOther', 'cohere-ai'
];

app.use((req, res, next) => {
  const startTime = Date.now();
  const userAgent = req.headers['user-agent'] || '';
  const isAIBot = AI_BOT_PATTERNS.some(bot => userAgent.includes(bot));

  res.on('finish', () => {
    const duration = Date.now() - startTime;
    if (isAIBot) {
      console.log(JSON.stringify({
        type: 'ai_bot_request',
        bot: userAgent,
        path: req.path,
        status: res.statusCode,
        duration_ms: duration,
        content_length: res.getHeader('content-length'),
        timestamp: new Date().toISOString()
      }));

      // Alert if response time exceeds threshold; alertSlack is a
      // placeholder for your own notification function
      if (duration > 2000) {
        alertSlack(`Slow AI bot response: ${req.path} took ${duration}ms for ${userAgent}`);
      }
    }
  });

  next();
});
Related: AI Search Analytics: How to Track ChatGPT and Perplexity Traffic in GA4
Core Web Vitals Optimization Priority Checklist
Use this checklist to work through your optimizations systematically. Items are ordered by typical impact:
High Priority (Do This Week)
- Run a baseline audit and record your numbers
- Enable server-side caching and put a CDN in front of your origin
- Add explicit width and height to all images and embeds
- Add fetchpriority="high" and a preload hint to your LCP image

Medium Priority (Do This Month)
- Defer or lazy-load non-critical JavaScript
- Break up long main-thread tasks
- Tune web font loading with font-display and a size-adjusted fallback
- Reserve space for ads, banners, and dynamically injected content

Lower Priority (Do This Quarter)
- Add performance budgets and Lighthouse CI to your deployment pipeline
- Implement Real User Monitoring with the web-vitals library
- Log and alert on AI crawler response times
Real-World Before and After Results
Let’s look at real examples of companies that made Core Web Vitals optimization a priority and saw measurable improvements.
Example 1: Vodafone
Vodafone optimized their LCP by 31% on their landing pages. Their work focused on image optimization, critical CSS inlining, and eliminating render-blocking resources. The result was a measurable increase in their sales conversion rate, demonstrating the direct revenue impact of Core Web Vitals optimization.
Example 2: Yahoo! JAPAN News
Yahoo! JAPAN News reduced their CLS to near zero by implementing explicit dimensions for all media elements and reserving space for dynamically inserted content. Their page views per session increased as users experienced fewer frustrating layout shifts.
Example 3: Tokopedia
Tokopedia, one of Indonesia’s largest e-commerce platforms, improved their LCP by 55% through image optimization, implementing a CDN, and optimizing their server response times. They saw a measurable improvement in session duration and engagement metrics after achieving passing Core Web Vitals scores.
Composite Before/After Summary
These results demonstrate that Core Web Vitals optimization delivers real business outcomes, not just better Lighthouse scores. And when you factor in AI crawler behavior, the benefits compound: faster pages mean more complete AI indexing, which means more AI search visibility.
Related: How to Make Your SaaS Visible to ChatGPT and AI Search Engines
Tool Recommendations for Ongoing Monitoring
Choosing the right tools depends on your budget, team size, and technical requirements. Here’s our recommended stack for comprehensive Core Web Vitals monitoring:
Free Tools
- PageSpeed Insights and Lighthouse for on-demand audits
- WebPageTest for deep waterfall analysis
- The CrUX Dashboard for field data trends
- The web-vitals JavaScript library for roll-your-own RUM

Paid Tools
- RUM platforms such as SpeedCurve, Calibre, or DebugBear for continuous monitoring, alerting, and historical dashboards
Implementing Real User Monitoring (RUM)
For the most accurate picture of your Core Web Vitals, implement RUM using the web-vitals library:
import { onLCP, onINP, onCLS } from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,
    value: metric.value,
    rating: metric.rating, // "good", "needs-improvement", or "poor"
    delta: metric.delta,
    id: metric.id,
    navigationType: metric.navigationType,
    url: window.location.href,
    timestamp: Date.now()
  });

  // Use sendBeacon for reliability on page unload
  if (navigator.sendBeacon) {
    navigator.sendBeacon('/api/vitals', body);
  } else {
    fetch('/api/vitals', { body, method: 'POST', keepalive: true });
  }
}

onLCP(sendToAnalytics);
onINP(sendToAnalytics);
onCLS(sendToAnalytics);
This sends your real-world Core Web Vitals data to your own analytics endpoint, where you can build dashboards, set alerts, and track improvements over time.
Related: Schema Markup for AI Agents: JSON-LD Examples That Work
Conclusion
Core Web Vitals optimization is no longer optional. For traditional SEO, it influences your Google rankings. For AI SEO, it determines whether your content is even crawled and indexed by AI systems. A slow page isn’t just a bad user experience. It’s a page that AI crawlers skip entirely.
The framework is straightforward: Audit, Prioritize, Optimize, Monitor. Start by measuring your baseline. Focus on LCP first because it has the highest impact on both users and AI crawlers. Then address INP and CLS. Finally, set up continuous monitoring so regressions don’t slip through.
Remember these key thresholds:
- LCP: 2.5 seconds or less
- INP: 200 milliseconds or less
- CLS: 0.1 or less
The companies that invest in Core Web Vitals optimization today will have a compounding advantage as AI search grows. Every page that loads fast enough for AI crawlers to index is another opportunity to appear in AI-generated responses. Every slow page is a missed opportunity.
Start with your highest-traffic pages. Measure, fix, verify, repeat. The performance improvements you make today will pay dividends across every channel, from organic search to AI citations to direct conversions.
Ready to optimize your site’s performance for AI crawlers? Contact WitsCode for a comprehensive Core Web Vitals audit. We’ll identify your biggest performance bottlenecks and build a prioritized optimization roadmap tailored to your stack, your traffic patterns, and your AI search visibility goals.
Get Your Free Performance Audit
FAQ
1. How do Core Web Vitals affect AI crawler behavior?
AI crawlers like GPTBot, ClaudeBot, and PerplexityBot operate under strict time budgets. When your page takes more than 2-3 seconds to serve content, these crawlers may abandon the request entirely. Unlike Google’s crawler, which queues slow pages for later rendering, most AI crawlers make a single pass. If your server doesn’t respond quickly with complete HTML, the AI system never sees your content and can’t include it in AI-generated responses. Optimizing LCP and TTFB directly improves how much of your site gets indexed by AI systems.
2. What’s the most impactful Core Web Vital to optimize first?
LCP optimization should always be your first priority. It has the most direct impact on both user experience and AI crawler success. Start with server response time (TTFB), then move to image optimization and resource prioritization. A site with a 1.5-second LCP will outperform a site with a 4-second LCP in every measurable way, from Google rankings to AI indexing completeness to conversion rates. Once LCP is under 2.5 seconds, shift your focus to INP and CLS.
3. Do AI crawlers execute JavaScript when crawling my site?
Most AI crawlers do not execute JavaScript. GPTBot, ClaudeBot, and PerplexityBot primarily rely on the raw HTML response from your server. This means if your content is rendered client-side using React, Vue, or Angular without server-side rendering, AI crawlers may see a blank or minimal page. This is why server-side rendering (SSR) or static site generation (SSG) is critical for AI visibility. Google’s crawler is an exception; it does execute JavaScript, but with a delay.
4. How often should I monitor Core Web Vitals?
We recommend a three-tier monitoring approach. Daily: Check automated CI/CD performance checks on every deployment. Weekly: Review your RUM (Real User Monitoring) dashboard for trends and anomalies. Monthly: Run a comprehensive audit using PageSpeed Insights and WebPageTest, comparing results to your baseline and previous months. Additionally, set up alerts for any AI crawler response times exceeding 2 seconds so you can catch regressions immediately.
5. Can site speed improvements alone increase my AI search visibility?
Site speed is necessary but not sufficient for AI search visibility. Think of Core Web Vitals optimization as removing a barrier. If your pages are too slow, AI crawlers can’t index your content at all, so no amount of great content will help. But once your site meets performance thresholds, you still need high-quality, structured content, proper schema markup, an llms.txt file, and authoritative backlinks. Performance optimization ensures the door is open for AI crawlers. Your content determines whether they find value when they walk through it.


