
Largest Contentful Paint #191

Closed
digitarald opened this issue Jul 19, 2019 · 18 comments
Labels
position: positive
venue: W3C CG (Specifications in W3C Community Groups, e.g., WICG, Privacy CG)

Comments


digitarald commented Jul 19, 2019

Request for Mozilla Position on an Emerging Web Specification

Other information

w3ctag/design-reviews#378

@adamroach adamroach added the venue: W3C CG Specifications in W3C Community Groups (e.g., WICG, Privacy CG) label Nov 16, 2019

RByers commented Jun 23, 2020

Any thoughts on this yet? With Google's launch of web vitals, it would be great to better understand Mozilla's perspective on them.


RByers commented Jun 23, 2020

Also see the resources on the Chromium speed metrics page for more context on how these metrics were developed. We're more than happy to share data, discuss any feedback, etc. I know this has been talked about a bunch at the WebPerf WG already, and @dbaron's TAG feedback is great.

/cc @npm1


bdekoz commented Jun 23, 2020

Hey Rick, we are still evaluating the web vitals bits that were discussed in W3C web perf at the beginning of June, including Largest Contentful Paint and how that fits in with the others deemed vital by Chrome. We're hoping to get more mobile data before taking a position in the near future.


npm1 commented Jun 23, 2020

Hi Benjamin, I don't think we filed an issue for Layout Instability, and there is not one specific for FID (but there is one for the whole Event Timing). @skobes is filing one for LI. Do we want to keep the conversation for FID in the Event Timing one or should I file a separate one?


bdekoz commented Jun 23, 2020

Keep it in Event Timing please

@anniesullie

Hi Benjamin, you mentioned you're hoping to get more mobile data. Is that something we could help with? We did an analysis of over 4 million mobile sites on HttpArchive, showing that LCP correlates well with Speed Index and not much with other RUM metrics like FCP.

Please let me know if there is additional data we could collect that would help inform!


bdekoz commented Jul 7, 2020

@npm1 to help us sort through web vitals, I made tracking issues for each metric after all.

FID: #387
CLS: #386


bdekoz commented Jul 7, 2020

@anniesullie thanks for the HttpArchive link. Some of the internal analysis for LCP has been delayed due to recent events, and is not expected to be completed until the end of the month. I'll have more specific feedback then, but expect to recommend this as worth prototyping.

@smaug----

Given that some concepts around Event Timing and scroll handling are still unclear (spec issues have been filed), it is a bit hard to say how LCP should work.


sefeng211 commented Oct 28, 2021

Are there still outstanding issues/concerns that are preventing us from making a decision? I think we are leaning towards a worth prototyping position, as we consider that LCP correlates well with Speed Index.

I can make a PR if there are no objections. @smaug---- @bdekoz @Bas-moz


annevk commented Nov 2, 2021

@achristensen07, hey, curious if WebKit has had the opportunity to discuss this API. And if so, what would be your perspective?

@achristensen07

I understand that people want to measure and improve how long it takes for users to see most of their webpage, and I think this is an admirable goal. I'm not convinced that we have arrived at the metric that people are looking for, though. The spec currently says "The LargestContentfulPaint API is based on heuristics. As such, it is error prone." I agree with that statement, and TPAC notes also say concerning things about the current heuristics. Google's including this in web vitals has certainly made people care more about it, but it has also turned it into an SEO game with websites doing strange things to convince Google that they have a fast site. LCP's relationship with lazy image loading is also problematic.
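For readers unfamiliar with why the spec calls itself heuristic-based: at its core, the metric tracks the painted text/image candidate with the largest area visible in the viewport, updating the candidate as rendering proceeds. A minimal, hypothetical sketch of that selection rule (illustrative only; the real spec adds many more rules around opacity, background images, element removal, and input-driven cutoffs):

```javascript
// Hypothetical sketch of the "largest" selection rule: among painted
// candidates, pick the one with the largest area after clipping to the
// viewport. This is NOT the spec algorithm, just the core idea.
function largestCandidate(candidates, viewport) {
  let best = null;
  let bestArea = 0;
  for (const c of candidates) {
    // Clip the candidate's rect to the viewport before measuring.
    const w = Math.min(c.x + c.width, viewport.width) - Math.max(c.x, 0);
    const h = Math.min(c.y + c.height, viewport.height) - Math.max(c.y, 0);
    const area = Math.max(w, 0) * Math.max(h, 0);
    if (area > bestArea) {
      bestArea = area;
      best = c;
    }
  }
  return best;
}
```

The heuristic part is everything around this rule: deciding which paints count as "contentful", when to stop updating the candidate, and how to treat content that is technically painted but not meaningful to the user.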


RByers commented Nov 2, 2021

but it has also turned it into an SEO game with websites doing strange things to convince Google that they have a fast site.

While there's always some aspect of an arms race with SEO, from the time spent working with performance consultants and data I've seen, I personally believe that this is not significant at the moment. In practice LCP seems to correlate quite well with user experience, but I don't expect you to trust Google's opinion on this. Instead Chrome's LCP data is available publicly in the CrUX report, so we welcome independent analyses quantifying the extent of such issues in practice, as well as proposals for alternatives or improvements that do a better job.
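For anyone wanting to run such an independent analysis on their own pages, the entry point is the standard PerformanceObserver interface with the `largest-contentful-paint` entry type. A minimal sketch (the observer only attaches where the browser exposes that entry type, so it is a no-op elsewhere):

```javascript
// Sketch: report the current LCP candidate as the page loads.
// Returns the observer, or null where the entry type is unsupported.
function observeLCP(report) {
  if (typeof PerformanceObserver === 'undefined' ||
      !PerformanceObserver.supportedEntryTypes?.includes('largest-contentful-paint')) {
    return null; // unsupported environment (e.g. Node, or a browser without LCP)
  }
  const observer = new PerformanceObserver(list => {
    const entries = list.getEntries();
    // The last entry in the list is the latest (largest-so-far) candidate.
    report(entries[entries.length - 1]);
  });
  observer.observe({ type: 'largest-contentful-paint', buffered: true });
  return observer;
}
```

The final LCP value for a page load is the last candidate reported before the first user input, which is why real-user-monitoring code typically records the latest entry and flushes it on input or page hide.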

Or is your argument just "measuring user-perceived page load performance perfectly is hard so browsers shouldn't even really try"?

@achristensen07

I didn't say we shouldn't even really try. I said "I think this is an admirable goal." I also said that there are some issues with our current attempt at reaching that goal. That was based on comments from several parties at TPAC.


RByers commented Nov 2, 2021

I said "I think this is an admirable goal."

Yes, thank you for that. Sorry for the snark.

I also said that there are some issues with our current attempt at reaching that goal. That was based on comments from several parties at TPAC.

It is indeed imperfect and probably always will be to some degree. How would you determine where the bar is for "good enough" to be supportive of? Is there an analysis we could do, or a set of P1 known issues which should be addressed?


anniesullie commented Nov 2, 2021

Thanks for the feedback, @achristensen07! Some specific questions about it:

TPAC notes also say concerning things about the current heuristics

We'd love to work to address the concerns. We reviewed the notes from TPAC and filed issues 84, 85, and 86. Happy to follow up on discussion there; please file another issue if there's one we missed.

LCP's relationship with lazy image loading is also problematic.

Can you clarify what you mean here? This was discussed briefly at TPAC, but our understanding is that the problem is how lazy loading can be misused: loading the main image on the page late will delay the visual content from appearing, which would affect most visual page load metrics, LCP included.

@achristensen07

Rick, while I realize this is less useful for those who want to measure the entire internet without modification, I would be more in favor of implementing an API where the server gets to specify somehow what content it thinks is important to measure the timing of. That way, we would not need to have heuristics to guess what is in the background.

Annie, I thought I remembered someone saying that some people were turning off lazy loading of images to decrease their LCP time, but looking through the TPAC notes I think there are other ways to resolve this.

@anniesullie

while I realize this is less useful for those who want to measure the entire internet without modification, I would be more in favor of implementing an API where the server gets to specify somehow what content it thinks is important to measure the timing of. That way, we would not need to have heuristics to guess what is in the background.

We purposefully built largest contentful paint on the Element Timing API so that the server could specify which content it thinks is important to measure the timing of. We'd love to see that available to developers more broadly as well!

What we see from the usage data of both APIs is that the Largest Contentful Paint API is appropriate for many more use cases than just measuring the entire internet. Even before Google Search announced its intention to use LCP as a ranking signal in May 2020, we saw that largest contentful paint was used on about 8% of page loads while Element Timing was used on about 0.2% of page loads. So while some performance-minded developers do find it useful to specify which content to measure, we believe the majority of users prefer to have a drop-in heuristic. I think this makes sense when you think of it in the context of the popularity of lab heuristics like speed index.
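For context on what that opt-in looks like: with Element Timing, the page annotates the element it cares about with an `elementtiming` attribute and then observes `element` entries. A sketch (the attribute value and callback name are illustrative):

```javascript
// The server/page opts an element in by annotating it, e.g.:
//   <img src="hero.jpg" elementtiming="hero-image">
// and then observes 'element' entries for it.
// Returns the observer, or null where Element Timing is unsupported.
function observeElementTiming(report) {
  if (typeof PerformanceObserver === 'undefined' ||
      !PerformanceObserver.supportedEntryTypes?.includes('element')) {
    return null; // unsupported environment
  }
  const observer = new PerformanceObserver(list => {
    for (const entry of list.getEntries()) {
      // entry.identifier is the value of the elementtiming attribute;
      // renderTime is preferred, with loadTime as the cross-origin fallback.
      report(entry.identifier, entry.renderTime || entry.loadTime);
    }
  });
  observer.observe({ type: 'element', buffered: true });
  return observer;
}
```

LCP can be seen as the drop-in counterpart: the same underlying timing machinery, but with the browser's heuristic choosing the element instead of the page author.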
