You spent three weeks getting your Core Web Vitals to green. You celebrated. Then six weeks later Google Search Console shows you are back in the red. A plugin updated. A third-party script changed. Someone ran an A/B test without telling you.
This is completely normal. CWV scores are not a one-time achievement; they are a moving target that needs continuous monitoring. The good news: setting up monitoring is not complicated. Here is how to do it, free options first.
## Why CWV Scores Regress (More Often Than You Think)
The most common causes of performance regressions, in rough order of frequency:
- Third-party script updates (25% of regressions): Your chat widget silently ships a new version with a 50KB heavier bundle. Your analytics script adds a new feature that blocks the main thread for an extra 80ms. You have no idea it happened until your users complain or Search Console turns red.
- New features and components (20%): Someone adds a carousel to the homepage. It looks great. It also adds 150KB of JavaScript and introduces 0.15 of layout shift on every page load.
- A/B tests gone wrong (15%): Marketing runs an experiment with a different hero image layout. The variant introduces a layout shift. The test runs for three weeks before anyone notices.
- Plugin or dependency updates (15%): On WordPress, a performance plugin update accidentally removes critical optimizations. On Next.js, a dependency update brings in a heavier version of a library.
- Content changes (10%): Someone uploads a 3MB JPEG as the new homepage hero. Nobody checks PageSpeed Insights. LCP jumps from 2.1s to 5.8s.
Google Search Console reports field data over a 28-day rolling window, which means you will not see the full damage for weeks. Without monitoring, you are flying blind for a month before you even know there is a problem.
## Free Monitoring Options That Actually Work
### Option 1: Lighthouse CI with GitHub Actions (Best for Development Teams)
If you deploy via GitHub, you can add Lighthouse CI to your workflow and have it automatically test performance on every pull request. Here is a basic setup:
```yaml
# .github/workflows/lighthouse.yml
name: Lighthouse CI
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Use Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Install dependencies
        run: npm ci
      - name: Run Lighthouse CI
        run: |
          npm install -g @lhci/[email protected]
          lhci autorun
        env:
          LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_GITHUB_APP_TOKEN }}
```
And the Lighthouse CI config file, `lighthouserc.json`:

```json
{
  "ci": {
    "collect": {
      "url": ["https://yoursite.com/", "https://yoursite.com/blog/"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "largest-contentful-paint": ["error", {"maxNumericValue": 2500}],
        "cumulative-layout-shift": ["error", {"maxNumericValue": 0.1}],
        "total-blocking-time": ["warn", {"maxNumericValue": 300}]
      }
    },
    "upload": {
      "target": "temporary-public-storage"
    }
  }
}
```
Now every pull request shows a Lighthouse report and fails if LCP goes above 2.5 seconds. Regressions get caught before they ship to production.
### Option 2: CrUX API Monitoring (Free Field Data)
The Chrome UX Report API is free and gives you real CrUX field data for any URL with enough traffic. You can poll it daily with a simple script and send yourself an alert when metrics cross a threshold:
```python
#!/usr/bin/env python3
import json
import urllib.request

CRUX_KEY = "YOUR_FREE_API_KEY"
URL = "https://yoursite.com/"

def check_cwv():
    """Query the CrUX API for phone-traffic p75 metrics for URL."""
    payload = {
        "url": URL,
        "formFactor": "PHONE",
        "metrics": [
            "largest_contentful_paint",
            "interaction_to_next_paint",
            "cumulative_layout_shift",
        ],
    }
    req = urllib.request.Request(
        f"https://chromeuxreport.googleapis.com/v1/records:queryRecord?key={CRUX_KEY}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as r:
        data = json.loads(r.read())
    return data["record"]["metrics"]

metrics = check_cwv()
lcp_p75 = metrics["largest_contentful_paint"]["percentiles"]["p75"]
inp_p75 = metrics["interaction_to_next_paint"]["percentiles"]["p75"]
cls_p75 = metrics["cumulative_layout_shift"]["percentiles"]["p75"]

print(f"LCP p75: {lcp_p75}ms (good: under 2500)")
print(f"INP p75: {inp_p75}ms (good: under 200)")
print(f"CLS p75: {cls_p75} (good: under 0.1)")
```
Get a free CrUX API key from Google Cloud Console under "Chrome UX Report API." Run this script via a cron job daily. Add email alerts for any metric that crosses the threshold. Free, forever.
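To turn the daily cron run into an actual alert, compare each p75 against its "good" threshold and email yourself when one is breached. Here is a minimal sketch; the sender/recipient addresses and the localhost SMTP relay are assumptions you will need to adjust for your own mail setup:

```python
import smtplib
from email.mime.text import MIMEText

# "Good" CWV thresholds: LCP and INP in milliseconds, CLS unitless.
THRESHOLDS = {
    "largest_contentful_paint": 2500,
    "interaction_to_next_paint": 200,
    "cumulative_layout_shift": 0.1,
}

def breaches(metrics, thresholds=THRESHOLDS):
    """Return one alert line per metric whose p75 exceeds its threshold.

    Expects the metrics dict shape returned by check_cwv() above.
    """
    alerts = []
    for name, limit in thresholds.items():
        p75 = float(metrics[name]["percentiles"]["p75"])  # CLS arrives as a string
        if p75 > limit:
            alerts.append(f"{name} p75 = {p75} (threshold: {limit})")
    return alerts

def send_alert(lines, sender="[email protected]", to="[email protected]"):
    # Assumes an SMTP relay on localhost:25; swap in your provider's host and auth.
    msg = MIMEText("\n".join(lines))
    msg["Subject"] = "Core Web Vitals threshold breached"
    msg["From"], msg["To"] = sender, to
    with smtplib.SMTP("localhost") as server:
        server.send_message(msg)
```

Wire it up at the bottom of the polling script (`alerts = breaches(check_cwv())`, then `send_alert(alerts)` if the list is non-empty) and schedule it with a crontab entry such as `0 9 * * * /usr/bin/python3 /path/to/check_cwv.py`.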
### Option 3: Google Search Console Alerts
Set up email alerts in Google Search Console under Settings → Email preferences to get notified when GSC detects a Core Web Vitals issue. This is the slowest option (28-day data lag), but it requires zero setup and is already built into GSC.
### Option 4: Real User Monitoring with web-vitals JS
For real-time field data from your own users (not CrUX), add the web-vitals library and send measurements to Google Analytics 4:
```javascript
import { onLCP, onINP, onCLS, onFCP, onTTFB } from 'web-vitals';

function sendToGA4({ name, value, rating, id }) {
  gtag('event', name, {
    // GA4 event values must be integers; CLS is scaled up to preserve precision
    value: Math.round(name === 'CLS' ? value * 1000 : value),
    metric_id: id, // unique per page load, useful for deduplication
    metric_value: value,
    metric_rating: rating, // 'good', 'needs-improvement', 'poor'
  });
}

onLCP(sendToGA4);
onINP(sendToGA4);
onCLS(sendToGA4);
onFCP(sendToGA4);
onTTFB(sendToGA4);
```
Once this is running, create a custom report in GA4 and view these events over time. You will see your real-user LCP, INP, and CLS distributions. Set up GA4 custom insights (under Insights) to notify you when a metric crosses a threshold.
## Paid Tools Worth the Money
| Tool | Starting Price | Best For | Worth It? |
|---|---|---|---|
| DebugBear | $35/mo | Daily synthetic tests, regression alerts, CrUX data integration | Yes, for teams shipping frequently |
| SpeedCurve | $20/mo | Visual filmstrip, synthetic + RUM, competitor comparison | Yes, for visual regression tracking |
| Calibre | $99/mo | CI/CD integration, performance budgets, team alerts | Only for teams with dedicated performance role |
| Sentry Performance | Bundled with Sentry | INP tracing, if you already use Sentry for error monitoring | Yes, if you are already paying for Sentry |
## The Monitoring Stack We Recommend for Most Teams
For small teams with limited budgets:
- Lighthouse CI in GitHub Actions (free, catches regressions before deploy)
- web-vitals JS library reporting to GA4 (free, real user data)
- Google Search Console email alerts (free, official data)
For medium teams shipping multiple times per week:
- Everything above, plus DebugBear for daily synthetic monitoring and instant regression alerts
For enterprise or agency teams:
- SpeedCurve for visual regression and competitive benchmarking
- Self-hosted RUM for privacy-respecting real user data
- Custom Slack/PagerDuty integration from CrUX API for instant alerts
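The CrUX-to-Slack integration in the last bullet can be as small as a webhook POST. A sketch, assuming you have created a Slack incoming webhook (the URL below is a placeholder) and are feeding it p75 values from the CrUX polling script earlier in this guide:

```python
import json
import urllib.request

# Placeholder: create an incoming webhook in Slack and paste its URL here.
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def slack_payload(metric, p75, threshold):
    """Build the JSON body Slack incoming webhooks expect: {"text": "..."}."""
    return {
        "text": f":rotating_light: {metric} p75 is {p75}, over the {threshold} threshold"
    }

def post_to_slack(payload, webhook=SLACK_WEBHOOK):
    # POST the message; Slack responds 200 with body "ok" on success.
    req = urllib.request.Request(
        webhook,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Call `post_to_slack(slack_payload("LCP", 3100, 2500))` from the same daily cron job. The same pattern works for PagerDuty's Events API, which has its own payload shape, if you want paging instead of a channel message.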
## FAQ
### How fast does CrUX data update?
CrUX data covers a 28-day rolling window. The CrUX API is refreshed daily, while the BigQuery dataset is released monthly. Either way, if you fix a problem today, you will not see the full improvement in CrUX for about four weeks. This is why synthetic monitoring (Lighthouse CI) is so important: it gives you instant feedback while CrUX lags.
### Do I need both synthetic monitoring and real user monitoring?
They answer different questions. Synthetic monitoring (Lighthouse CI, DebugBear) runs controlled tests and catches problems before they reach users. Real user monitoring (web-vitals + GA4, CrUX) shows what users actually experience. Both are useful. Synthetic catches regressions early. RUM shows your real-world baseline and catches issues synthetic tests miss (geography-specific problems, device-specific issues, interaction-based problems).
### My CWV improved in Lighthouse CI but not in CrUX. Why?
Lab improvements show immediately in synthetic tests. Field data takes weeks to update. Also, your Lighthouse CI test runs one specific scenario. Real users have different devices, different network speeds, and interact with your page differently. Both can be true simultaneously: your lab test improved and your field data has not caught up yet, or your fix helped some scenarios but not the ones your users actually encounter.
## Still Getting Red Scores?
Run your site through VitalsFixer: a free audit in 30 seconds, no account needed, with a punch list of exactly what to fix.

## Want an Expert to Handle It?
Real engineers, 48-hour turnaround, money back if scores don't improve.