
Google Shows How To Fix LCP Core Web Vitals


Barry Pollard, Google Chrome Web Performance Developer Advocate, explained how to find the real causes of a poor Largest Contentful Paint rating and how to fix them.

Largest Contentful Paint (LCP)

LCP is a Core Web Vitals metric that measures how long it takes for the largest content element to render in a site visitor’s viewport (the part of the page the user sees in the browser). A content element can be an image or a block of text.

For LCP, the largest content elements are block-level HTML elements that take up the most space in the viewport, such as paragraphs, headings (H1–H6), and images.
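For readers who want to see which element the browser is actually scoring, here is a minimal sketch using the standard Largest Contentful Paint API. Run it in the browser console on the page in question; the LcpEntry interface is declared locally only because TypeScript’s bundled DOM types may not include it:

```typescript
// Log every LCP candidate the browser reports for the current page.
// The last candidate emitted before user input is the final LCP element.
interface LcpEntry extends PerformanceEntry {
  element: Element | null; // the DOM node currently considered largest
  renderTime: number;      // paint time in ms (0 for some cross-origin images)
  loadTime: number;        // fallback timestamp when renderTime is unavailable
  size: number;            // painted area in square pixels
}

new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as LcpEntry[]) {
    console.log('LCP candidate:', entry.element, entry.renderTime || entry.loadTime, 'ms');
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });
```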

1. Know what data you’re looking at

Barry Pollard wrote that a common mistake publishers and SEOs make after seeing PageSpeed Insights (PSI) flag a page for a poor LCP score is to jump straight into debugging with Lighthouse or Chrome DevTools.

Pollard recommends sticking with PSI because it offers multiple clues for understanding what is causing LCP to perform poorly.

It’s important to understand what data PSI gives you, especially the data derived from the Chrome User Experience Report (CrUX), which is anonymized field data from real Chrome users. There are two types:

  1. URL-level data
  2. Origin-level data

URL-level results are those for the specific page being debugged. Origin-level data is aggregated from results across the entire website.

PSI displays URL-level data when the URL has received enough measured traffic. Otherwise, it falls back to origin-level data (the aggregate result across the entire site).
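To check which kind of data you are getting programmatically, here is a rough sketch against the public PageSpeed Insights v5 API. The origin_fallback flag is my reading of the response format (PSI appears to set it when it falls back to origin-level data); treat it as an assumption and inspect the raw response for your own pages:

```typescript
// Query the PSI v5 API and report whether the CrUX field data returned is
// URL-level or origin-level, plus the p75 LCP value. Error handling omitted.
const PSI_ENDPOINT = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';

async function checkFieldData(url: string): Promise<void> {
  const res = await fetch(`${PSI_ENDPOINT}?url=${encodeURIComponent(url)}&strategy=mobile`);
  const data = await res.json();

  const exp = data.loadingExperience;
  // Assumption: PSI marks origin fallback with this flag when the URL itself
  // lacks enough traffic for URL-level data.
  const level = exp?.origin_fallback ? 'origin-level (site-wide aggregate)' : 'URL-level';
  const lcp = exp?.metrics?.LARGEST_CONTENTFUL_PAINT_MS;
  console.log(`Field data: ${level}; p75 LCP = ${lcp?.percentile} ms (${lcp?.category})`);
}

await checkFieldData('https://example.com/slow-page'); // hypothetical URL
```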

2. Review the TTFB rating

Barry recommends looking at the TTFB (time to first byte) score because, according to him, “TTFB is the first thing that happens to your page.”

A byte is the smallest unit of digital data for representing text, numbers, or multimedia. TTFB tells you how long it took the server to respond with the first byte, revealing whether the server’s response time is the reason for poor LCP performance.
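You can read your own TTFB straight from the Navigation Timing API; TTFB is generally defined as the time from the start of the navigation to responseStart. A minimal browser-console sketch:

```typescript
// TTFB for the current page: navigation start → first byte of the response.
const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
if (nav) {
  const ttfb = nav.responseStart - nav.startTime; // startTime is 0 for the navigation entry
  console.log(`TTFB: ${ttfb.toFixed(0)} ms`);
}
```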

He says that focusing effort on front-end website optimization will never solve a problem that is rooted in a bad TTFB score.

Barry Pollard writes:

“Slow TTFB basically means 1 of 2 things:

1) Sending requests to your server is taking too long
2) Your server is taking too long to respond

But which one it is (and why!) can be hard to fathom, and there are several possible reasons for each of those categories.”
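To get a feel for which of the two it is on your own connection, you can break the same navigation entry into phases: time spent in redirects, DNS, and TCP/TLS points at the request path, while the gap between requestStart and responseStart is the server’s think time. A rough sketch using standard Navigation Timing fields:

```typescript
// Split TTFB into its phases to see where the time actually went.
const [t] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
if (t) {
  console.table({
    'Redirects (ms)': t.redirectEnd - t.redirectStart,
    'DNS lookup (ms)': t.domainLookupEnd - t.domainLookupStart,
    'TCP + TLS connect (ms)': t.connectEnd - t.connectStart,
    'Server think time (ms)': t.responseStart - t.requestStart,
  });
}
```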

Barry continued his review of LCP debugging with specific tests listed below.

3. Compare the TTFB with the Lighthouse lab test

Pollard recommends testing with the Lighthouse lab tests, specifically the “Initial server response time” audit. The goal is to verify that the TTFB problem is repeatable, to rule out the possibility that the PSI values are a fluke.

Lab results are synthetic, not based on actual user visits. Synthetic means they come from a simulated page load triggered by the Lighthouse test rather than from real visitors.

Synthetic tests are useful because they are repeatable and allow the user to isolate a specific cause of a problem.

If the Lighthouse lab test does not replicate the problem, it suggests the problem is not with the server itself.

He advised:

“The key thing here is to make sure the slow TTFB is reproducible. So scroll down and see if the Lighthouse lab test matches this slow TTFB from a real user when they tested the site. Look for the “Initial Server Response Time” audit.

In this case it was much faster – that’s interesting!”
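The same audit can also be scripted, which makes the repeatability check easy to automate. Here is a sketch assuming the lighthouse and chrome-launcher npm packages are installed; server-response-time is the audit ID behind the “Initial server response time” report:

```typescript
// Run the server-response-time audit several times from Node; consistently
// high values suggest the slow TTFB really is reproducible at the server.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function serverResponseTime(url: string): Promise<number | undefined> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyAudits: ['server-response-time'],
    });
    return result?.lhr.audits['server-response-time']?.numericValue; // ms
  } finally {
    await chrome.kill();
  }
}

// Repeat a few runs (ESM top-level await assumed).
for (let run = 1; run <= 3; run++) {
  console.log(`Run ${run}:`, await serverResponseTime('https://example.com'), 'ms');
}
```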

4. Expert advice: How to check if the CDN is hiding a problem

Barry gave useful advice about content delivery networks (CDNs) like Cloudflare. A CDN keeps copies of a website’s pages in data centers around the world, which speeds up delivery but can also mask underlying problems at the origin server.

A CDN does not keep a copy of every page in every data center around the world. When a user requests a web page that is not cached nearby, the CDN fetches it from the origin server and then stores a copy on a server closer to those users. So the first fetch is always slower, and if the origin server is slow to begin with, that first fetch will be even slower than fetching the page directly from the server.

Barry suggests the following tricks to bypass CDN caching (a scripted version of the first trick follows the list):

  • Test the slow page with a throwaway URL parameter appended (such as adding “?XYZ” to the end of the URL) to force a cache miss.
  • Test a page that is rarely visited and therefore unlikely to be cached.
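Here is a hypothetical sketch of the first trick, run from Node: fetch the page once normally and once with a throwaway parameter, then compare timings and the CDN’s cache header. The header names vary by CDN (cf-cache-status on Cloudflare, x-cache on several others), and some CDN configurations ignore query strings for caching, so treat this only as a starting point:

```typescript
// Compare a (likely) cache hit against a forced cache miss.
async function timedFetch(url: string): Promise<void> {
  const start = performance.now();
  const res = await fetch(url);
  await res.arrayBuffer(); // drain the body so the timing covers the full download
  const ms = performance.now() - start;
  const cache = res.headers.get('cf-cache-status') ?? res.headers.get('x-cache') ?? 'unknown';
  console.log(`${url} → ${ms.toFixed(0)} ms (cache: ${cache})`);
}

const page = 'https://example.com/slow-page';  // hypothetical URL
await timedFetch(page);                        // probably served by the CDN
await timedFetch(`${page}?xyz=${Date.now()}`); // cache miss → closer to origin speed
```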

He also suggests a tool that can be used to test specific countries:

“You can also check if countries in particular are slow—especially if you’re not using a CDN—with CrUX, and @alekseykulikov.bsky.social Treo is one of the best tools for that.

You can run a free test here: treo.sh/sitespeed and scroll down to the map and switch to TTFB.

If certain countries have slow TTFBs, check how much traffic is coming from those countries. For privacy reasons, CrUX does not show you traffic volumes (unless there is enough traffic to display), so you will need to look at your analytics for that.”

As for slow connections from certain geographies, it’s helpful to understand that slow performance in some developing countries could be due to the popularity of cheaper mobile devices. And it’s worth repeating that CrUX doesn’t reveal traffic volumes for those countries, which means turning to your analytics to identify how much traffic comes from countries with slow results.

5. Fix the repeatable problem

Barry concluded his discussion by advising that a problem can only be resolved once it has been confirmed to be repeatable.

He advised:

“For server issues, is the server down?

Or is the code simply too complex/inefficient?

Or does the database need to be adjusted?

For slow connections from some places, do you need a CDN?

Or investigate why so much traffic from there (ad campaign?)

If none of these stand out, it could be due to redirects, especially from ads. Those can add ~0.5s of TTFB – per redirect!

Try to minimize redirects as much as possible:
– Use the correct final URL to avoid having to redirect to www or https.
– Avoid multiple URL shortening services.”
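To see how many hops a URL actually goes through, you can walk the redirect chain by hand. A sketch for Node, where redirect: 'manual' exposes the 3xx responses and their Location headers (browsers return an opaque response instead):

```typescript
// Follow a redirect chain hop by hop and print each step.
async function traceRedirects(url: string, maxHops = 10): Promise<void> {
  let current = url;
  for (let hop = 1; hop <= maxHops; hop++) {
    const res = await fetch(current, { method: 'HEAD', redirect: 'manual' });
    const location = res.headers.get('location');
    if (res.status >= 300 && res.status < 400 && location) {
      console.log(`Hop ${hop}: ${res.status} ${current} → ${location}`);
      current = new URL(location, current).toString(); // resolve relative Location headers
    } else {
      console.log(`Final: ${res.status} ${current}`);
      return;
    }
  }
  console.warn('Stopped: too many redirects');
}

await traceRedirects('http://example.com'); // e.g. http → https → www
```

Each hop in the output is an extra round trip; linking directly to the final URL removes it.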

Takeaway: How to optimize Largest Contentful Paint

Barry Pollard of Google Chrome offered five important tips.

1. PageSpeed Insights (PSI) data can offer clues for debugging LCP issues, along with the other nuances discussed in this article that help make sense of the data.

2. PSI TTFB (time to first byte) data can indicate why a site has poor LCP results.

3. Lighthouse lab tests are useful for debugging because the results are reproducible. Reproducible results are key to accurately identifying the source of LCP problems, which in turn enables the right solutions to be applied.

4. CDNs can mask the real cause of LCP problems. Use Barry’s tricks described above to bypass the CDN cache and measure the origin server directly, which can be useful for debugging.

5. Barry listed six potential causes of poor LCP results:

  • Server performance
  • Redirects
  • Inefficient code
  • Database performance
  • Slow connections specific to a geographic location
  • Slow connections from certain areas caused by specific factors, such as ad campaigns

Read Barry’s post on Bluesky:

“I’ve had a few people reach out to me recently asking for help with LCP issues…”

Featured Image: Shutterstock/BestForBest


