
Google Shows How To Fix LCP Core Web Vitals

Google Chrome's Barry Pollard explained 5 optimization tips for Largest Contentful Paint. Every SEO needs to read this.


Barry Pollard, the Google Chrome Web Performance Developer Advocate, explained how to find the real causes of a poor Largest Contentful Paint score and how to fix them.

Largest Contentful Paint (LCP)

LCP is a Core Web Vitals metric that measures how long it takes for the largest content element to display in a site visitor's viewport (the part of the page a user sees in the browser). A content element can be an image or text.

For LCP, the largest content elements are images (<img>) and block-level HTML elements, like paragraphs (<p>) and headings (H1 – H6), that take up the largest amount of space in the visible part of the page.
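For anyone who wants to see which element Chrome is treating as the LCP element on a given page, the following is a minimal sketch using the browser's standard PerformanceObserver API. It can be pasted into the DevTools console on the page being inspected; the element and timing it logs are whatever the browser itself reports.

```typescript
// Minimal sketch: log LCP candidates for the current page.
// Paste into the browser console (or a page script) on the page being debugged.
const lcpObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // The LargestContentfulPaint entry exposes the element and its render time.
    const lcp = entry as PerformanceEntry & { element?: Element; renderTime: number; loadTime: number };
    console.log('LCP candidate:', lcp.element, `${Math.round(lcp.renderTime || lcp.loadTime)} ms`);
  }
});
// `buffered: true` replays entries recorded before the observer was created.
lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });
```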

1. Know What Data You’re Looking At

Barry Pollard wrote that a common mistake publishers and SEOs make after PageSpeed Insights (PSI) flags a page for a poor LCP score is to jump straight to debugging the issue in Lighthouse or Chrome DevTools.

Pollard recommends sticking around on PSI because it offers multiple hints for understanding the problems causing a poor LCP performance.

It’s important to understand what data PSI is giving you, particularly the data derived from the Chrome User Experience Report (CrUX), which comes from anonymized measurements of real Chrome visitors. There are two kinds:

  1. URL-Level Data
  2. Origin-Level Data

The URL-Level scores are those for the specific page being debugged. Origin-Level Data is an aggregated score for the entire website.

PSI will show URL-level data if there’s been enough measured traffic to a URL. Otherwise it’ll show Origin-Level Data (the aggregated sitewide score).
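The same URL-level versus origin-level distinction can be checked outside of PSI by querying the Chrome UX Report (CrUX) API directly. The sketch below is a rough example of that idea; the API key and URLs are placeholders, and the API returns no URL-level record when the page doesn't have enough traffic.

```typescript
// Minimal sketch: query the CrUX API for URL-level vs. origin-level LCP data.
// CRUX_API_KEY and the URLs below are placeholders.
const CRUX_API_KEY = 'YOUR_API_KEY';
const ENDPOINT = `https://chromeuserexperiencereport.googleapis.com/v1/records:queryRecord?key=${CRUX_API_KEY}`;

async function queryCrux(target: { url?: string; origin?: string }) {
  const res = await fetch(ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ ...target, metrics: ['largest_contentful_paint'] }),
  });
  // A 404 here typically means CrUX has no record for that URL or origin.
  return res.ok ? res.json() : null;
}

// URL-level data: only returned when the specific page has enough traffic.
const urlRecord = await queryCrux({ url: 'https://example.com/slow-page' });
// Origin-level (sitewide) data: the fallback PSI shows when URL data is missing.
const originRecord = await queryCrux({ origin: 'https://example.com' });
console.log({ urlRecord, originRecord });
```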

2. Review The TTFB Score

Barry recommends taking a look at the TTFB (Time to First Byte) score because, in his words, “TTFB is the 1st thing that happens to your page.”

A byte is the smallest unit of digital data for representing text, numbers or multimedia. TTFB tells you how much time it took for a server to respond with the first byte, revealing if the server response time is a reason for the poor LCP performance.
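As a quick check of your own, the browser's Navigation Timing API exposes when the first response byte arrived for the current page. The snippet below is a minimal sketch of one way to read it from the DevTools console.

```typescript
// Minimal sketch: read TTFB for the current page from the Navigation Timing API.
const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
if (nav) {
  // responseStart marks the arrival of the first response byte, measured from the
  // start of the navigation (so it includes redirects, DNS, TCP and TLS time).
  const ttfb = nav.responseStart - nav.startTime;
  console.log(`TTFB: ${Math.round(ttfb)} ms`);
}
```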

He says that focusing optimization efforts on the web page itself will never fix a problem that’s rooted in a poor TTFB score.

Barry Pollard writes:

“A slow TTFB basically means 1 of 2 things:

1) It takes too long to send a request to your server
2) You server takes too long to respond

But which it is (and why!) can be tricky to figure out and there’s a few possible reasons for each of those categories.”

Barry continued his LCP debugging overview with specific tests which are outlined below.

3. Compare TTFB With Lighthouse Lab Test

Pollard recommends testing with the Lighthouse Lab Tests, specifically the “Initial server response time” audit. The goal is to check if the TTFB issue is repeatable in order to eliminate the possibility that the PSI values are a fluke.

Lab results are synthetic, not based on actual user visits. Synthetic means the measurements come from a controlled page load triggered by the Lighthouse test itself rather than from real visitors.

Synthetic tests are useful because they’re repeatable and allow a user to isolate a specific cause of an issue.

If the Lighthouse lab test doesn’t replicate the issue, that suggests the problem isn’t the server itself but something specific to real-user conditions.

He advised:

“A key thing here is to check if the slow TTFB is repeatable. So scroll down and see if the Lighthouse lab test matched up to this slow real-user TTFB when it tested the page. Look for the “Initial server response time” audit.

In this case that was much faster – that’s interesting!”
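One way to test repeatability beyond a single PSI run is to script a few Lighthouse runs of just that audit. The sketch below is a rough illustration that assumes the lighthouse and chrome-launcher npm packages and uses a placeholder URL; rerunning the test from the PSI interface or the Lighthouse CLI works just as well.

```typescript
// Minimal sketch (Node): rerun the Lighthouse "Initial server response time" audit
// a few times to see whether a slow TTFB is reproducible or a one-off.
// Assumes `npm install lighthouse chrome-launcher`; the URL is a placeholder.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const url = 'https://example.com/slow-page';
const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

for (let run = 1; run <= 3; run++) {
  const result = await lighthouse(url, {
    port: chrome.port,
    onlyAudits: ['server-response-time'], // the "Initial server response time" audit
  });
  console.log(`Run ${run}:`, result?.lhr.audits['server-response-time']?.displayValue);
}

await chrome.kill();
```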

4. Expert Tip: How To Check If CDN Is Hiding An Issue

Barry dropped an excellent tip about Content Delivery Networks (CDNs), like Cloudflare. A CDN keeps copies of web pages at data centers around the world, which speeds up delivery but can also mask underlying issues at the origin server.

The CDN doesn’t keep a copy of every page at every data center. The first time a user requests a page that isn’t cached, the CDN fetches it from the origin server and then stores a copy at the data center closest to that user. That first fetch is always slower, and if the origin server itself is slow, that first fetch will be even slower than delivering the page straight from the server.

Barry suggests the following tricks to get around the CDN’s cache:

  • Test the slow page by adding a URL parameter (like adding “?XYZ” to the end of the URL); see the sketch after this list.
  • Test a page that isn’t commonly requested.
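For the first trick, the sketch below is a rough Node example of the idea: request the page normally, then again with a cache-busting parameter, and compare how long the response headers take to arrive. The URL is a placeholder, and the cache-status headers it checks (cf-cache-status, x-cache) vary by CDN, so treat them as hints.

```typescript
// Minimal sketch (Node 18+): compare time-to-headers for the cached URL vs. a
// cache-busted URL. The `?nocache=` parameter is only a trick to push the CDN
// back to the origin; cache headers differ between CDN providers.
async function timeToHeaders(url: string): Promise<void> {
  const start = performance.now();
  const res = await fetch(url); // resolves once the response headers have arrived
  const ms = Math.round(performance.now() - start);
  const cacheStatus =
    res.headers.get('cf-cache-status') ?? res.headers.get('x-cache') ?? 'unknown';
  console.log(`${url} -> ${ms} ms (cache: ${cacheStatus})`);
}

const page = 'https://example.com/slow-page'; // placeholder URL
await timeToHeaders(page);                            // likely answered from the CDN cache
await timeToHeaders(`${page}?nocache=${Date.now()}`); // likely forces a trip to the origin
```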

He also suggests a tool that can be used to test specific countries:

“You can also check if it’s particularly countries that are slow—particularly if you’re not using a CDN—with CrUX and @alekseykulikov.bsky.social ‘s Treo is one of the best tools to do that with.

You can run a free test here: treo.sh/sitespeed and scroll down to the map and switch to TTFB.

If particular countries have slow TTFBs, then check how much traffic is coming from those countries. For privacy reasons, CrUX doesn’t show you traffic volumes, (other than if it has sufficient traffic to show), so you’ll need to look at your analytics for this.”

Regarding slow connections from specific geographic areas, it’s useful to understand that slow performance in certain developing countries could be due to the popularity of low-end mobile devices. And it bears repeating that CrUX doesn’t reveal traffic volumes, which means bringing in analytics to identify how much traffic is coming from the countries with slow scores.

5. Fix What Can Be Repeated

Barry ended his discussion by advising that an issue can only be fixed once it’s been verified as repeatable.

He advised:

“For server issues, is the server underpowered?

Or the code just too complex/inefficient?

Or database needing tuning?

For slow connections from some places do you need a CDN?

Or investigate why so much traffic from there (ad-campaign?)

If none of those stand out, then it could be due to redirects, particularly from ads. They can add ~0.5s to TTFB – per redirect!

Try to reduce redirects as much as possible:
– Use the correct final URL to avoid needing to redirect to www or https.
– Avoid multiple URL shortener services.”
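To see how many redirects sit in front of a URL, and roughly what each hop costs, a small script can walk the chain one redirect at a time. The sketch below is a rough Node example; the start URL is a placeholder, and it relies on Node's handling of redirect: 'manual', so it isn't meant for the browser console.

```typescript
// Minimal sketch (Node 18+): follow a redirect chain hop by hop, logging the
// status, URL, and response time of each hop. The start URL is a placeholder.
async function traceRedirects(startUrl: string, maxHops = 10): Promise<void> {
  let url = startUrl;
  for (let hop = 1; hop <= maxHops; hop++) {
    const start = performance.now();
    const res = await fetch(url, { redirect: 'manual' }); // don't auto-follow redirects
    const ms = Math.round(performance.now() - start);
    console.log(`Hop ${hop}: ${res.status} ${url} (${ms} ms)`);

    const next = res.headers.get('location');
    if (!next || res.status < 300 || res.status >= 400) break; // reached the final URL
    url = new URL(next, url).toString(); // resolve relative Location headers
  }
}

await traceRedirects('http://example.com/old-campaign-link');
```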

Related: Core Web Vitals: A Complete Guide

Takeaways: How To Optimize For Largest Contentful Paint

Google Chrome’s Barry Pollard offered five important tips.

1. PageSpeed Insights (PSI) data may offer clues for debugging LCP issues; knowing whether you’re looking at URL-level or origin-level (sitewide) CrUX data helps make sense of it.

2. The PSI TTFB (Time to First Byte) data may point to why a page has poor LCP scores.

3. Lighthouse lab tests are useful for debugging because the results are repeatable. Repeatable results are key to accurately identifying the source of an LCP problem, which then enables applying the right solution.

4. CDNs can mask the true cause of LCP issues. Use Barry’s trick described above to bypass the CDN’s cache and measure the origin server’s real response time for debugging.

5. Barry listed six potential causes for poor LCP scores:

  • Server performance
  • Redirects
  • Inefficient or overly complex code
  • A database that needs tuning
  • Slow connections due to geographic location
  • Slow connections from specific areas caused by specific reasons, such as ad campaigns

Read Barry’s post on Bluesky:

“I’ve had a few people reach out to me recently asking for help with LCP issues”

Featured image by Shutterstock/BestForBest
