Core Web Vitals is a relatively new set of metrics from Google that measures the performance of your pages based on real-world usage data. The report covers three metrics: LCP, FID, and CLS. If a URL lacks sufficient data for a given metric, it is rated only on the metrics that do have data, and its overall status is that of its most poorly performing metric.
Why does this matter?
Earlier this year, Google announced that these new metrics will become a factor in ranking websites starting in 2021:
“Today, we’re building on this work and providing an early look at an upcoming Search ranking change that incorporates these page experience metrics. We will introduce a new signal that combines Core Web Vitals with our existing signals for page experience to provide a holistic picture of the quality of a user’s experience on a web page.”
That means these metrics are now important for SEO if you want your pages to continue to perform next year. Note also that, with these new metrics, Google will allow non-AMP pages to rank in its Top Stories feature for the first time. See below:
“As part of this update, we’ll also incorporate the page experience metrics into our ranking criteria for the Top Stories feature in Search on mobile, and remove the AMP requirement from Top Stories eligibility. Google continues to support AMP, and will continue to link to AMP pages when available.”
Let’s dig a little deeper into each of these metrics to see what they are and how we can measure them.
First Metric: Largest Contentful Paint (LCP)
Google defines LCP as follows:
“The Largest Contentful Paint (LCP) metric reports the render time of the largest image or text block visible within the viewport.”
Essentially, LCP reports when the largest element visible in the viewport, whether an image or a text block, finishes rendering. See this example of the CNN website shared by Google:
As you can see, the candidate for largest element changes as the page renders; the element that is largest once rendering completes is the one reported as the LCP.
To see how this can vary, let’s look at an example from Google demonstrating how the LCP can load in much earlier in the process:
In this example, the LCP element has drawn in quite early, well before the page has finished rendering. This results in a much better score.
A good score means the Largest Contentful Paint occurs within the first 2.5 seconds of page load. Anything between 2.5 and 4.0 seconds is considered to need improvement, and beyond 4.0 seconds is considered poor. See below for a visual example provided by Google:
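Those thresholds can be sketched as a small rating helper. This is an illustrative function (the name `rateLCP` is hypothetical, not part of any Google API), assuming the LCP value is supplied in milliseconds:

```typescript
// Rate an LCP value (in milliseconds) against Google's published
// thresholds: good at 2.5 s or less, needs improvement up to 4.0 s,
// poor beyond that.
type Rating = "good" | "needs improvement" | "poor";

function rateLCP(lcpMs: number): Rating {
  if (lcpMs <= 2500) return "good";
  if (lcpMs <= 4000) return "needs improvement";
  return "poor";
}

// A page whose largest element renders at 1.8 s rates "good";
// 3.2 s needs improvement; 5.1 s is poor.
console.log(rateLCP(1800)); // "good"
console.log(rateLCP(3200)); // "needs improvement"
console.log(rateLCP(5100)); // "poor"
```

In practice you would feed this a field-measured LCP value, for example one collected from real users via an analytics beacon.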
For more information, see Google’s article on Largest Contentful Paint.
Second Metric: First Input Delay (FID)
FID captures a user’s first impression of your site’s interactivity. It measures the time from when a user first interacts with a page to the moment the browser is actually able to begin processing that interaction.
To be clear, this metric is not measured until a user attempts to interact with the page. When that first interaction occurs can drastically affect the score, so Google aggregates data from many users over a period of time to measure it.
An example from Google below demonstrates how a longer first input delay could occur:
In this example, the page becomes visible relatively early once styles load, but several long main-thread tasks follow; if the user’s first input lands during one of them, there is a delay before the browser can respond.
In the above example, the user attempted to interact with the page during one of the longer main thread tasks. However, the user had to wait for the main thread to complete before the browser could respond.
A good score is anything less than 100 ms. Scores between 100 ms and 300 ms are considered to need improvement, while anything greater than 300 ms is rated poor.
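The idea can be sketched in a few lines: FID is simply the gap between the timestamp of the user’s first input and the moment the main thread can start processing it. The helper names below (`firstInputDelay`, `rateFID`) are hypothetical, for illustration only:

```typescript
// FID = time the browser starts processing the input, minus the time
// the user actually made the input (both in milliseconds).
function firstInputDelay(inputTimeMs: number, processingStartMs: number): number {
  return processingStartMs - inputTimeMs;
}

// Rate a delay against Google's published thresholds:
// good at 100 ms or less, needs improvement up to 300 ms, else poor.
function rateFID(delayMs: number): string {
  if (delayMs <= 100) return "good";
  if (delayMs <= 300) return "needs improvement";
  return "poor";
}

// A tap at t = 5000 ms that a busy main thread can only begin
// handling at t = 5250 ms yields a 250 ms delay.
const delay = firstInputDelay(5000, 5250); // 250
console.log(rateFID(delay)); // "needs improvement"
```

This also illustrates why lab tools cannot report FID directly: without a real user input there is no input timestamp to subtract from.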
You can measure FID via a variety of methods, including PageSpeed Insights or a Chrome extension. Keep in mind that this cannot be a simulated test, as it requires real user interaction to determine the score.
For more information see Google’s article on First Input Delay.
Third Metric: Cumulative Layout Shift (CLS)
Cumulative Layout Shift is a new metric aimed at one of the web’s most annoying behaviors: shifting content. Google defines it as follows:
“CLS measures the sum total of all individual layout shift scores for every unexpected layout shift that occurs during the entire lifespan of the page.”
If you’ve ever visited a page and started reading, only to have an ad suddenly pop in and push the content out of the way, disrupting your natural reading flow, you’ve experienced the kind of shift CLS penalizes.
A good example of why this is a poor experience is demonstrated in this example provided by Google:
The intent of this metric is to rank websites based on better usability. Those with a poor user experience will rank lower than those with a more enjoyable user experience.
A good CLS score is less than 0.1. Between 0.1 and 0.25 is considered to need improvement, while anything greater than 0.25 is considered poor.
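A sketch of how these scores are built up: each unexpected shift is scored as the product of its impact fraction (how much of the viewport the unstable content affects) and its distance fraction (how far it moved, as a fraction of the viewport), and CLS sums those scores. The helper names are hypothetical; this mirrors the published scoring formula, not the browser’s internal implementation:

```typescript
// One layout shift's score: impact fraction times distance fraction,
// both expressed as fractions of the viewport (0 to 1).
function layoutShiftScore(impactFraction: number, distanceFraction: number): number {
  return impactFraction * distanceFraction;
}

// CLS sums the scores of every unexpected shift over the page's lifespan.
function cumulativeLayoutShift(shifts: Array<[number, number]>): number {
  return shifts.reduce(
    (sum, [impact, distance]) => sum + layoutShiftScore(impact, distance),
    0
  );
}

// Rate against Google's published thresholds.
function rateCLS(cls: number): string {
  if (cls <= 0.1) return "good";
  if (cls <= 0.25) return "needs improvement";
  return "poor";
}

// An ad insertion that disturbs 75% of the viewport and pushes content
// down by 25% of the viewport height scores 0.75 * 0.25 = 0.1875.
const cls = cumulativeLayoutShift([[0.75, 0.25]]);
console.log(cls); // 0.1875
console.log(rateCLS(cls)); // "needs improvement"
```

Note how a single large ad shift is enough to push a page out of the "good" band on its own.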
Keep layout shifts to a minimum. Expected, user-initiated shifts, such as expanding a collapsed menu, are not penalized by the metric.
When animating, prefer CSS transforms over properties that affect layout; transform-based animations and transitions move elements without triggering layout shifts.
For more information, see Google’s article on Cumulative Layout Shift.
Web Metrics and SEO
Web performance optimization is only a piece of the puzzle when it comes to driving organic search (SEO) performance. If your web vitals have flatlined, give us a shout and let’s work together.