
renderTime should be CORS-protected, not TAO-protected #111

Open
noamr opened this issue Dec 31, 2022 · 9 comments

Comments

@noamr
Contributor

noamr commented Dec 31, 2022

The intention in Timing-Allow-Origin (TAO) is to protect the timing it took to fetch the image. Not the content of the image.
The intention in protecting renderTime is to avoid exposing something about the content of the image.

With CORS images, one can anyway know everything about the image and doesn't need to decipher the content from render time.
With TAO, on the other hand, this would be the only place where the header is used outside the context of protecting fetch information.
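For reference, the TAO check from Resource Timing can be sketched as a pure function. This is a simplified sketch: the helper name is mine, and the real algorithm also handles the same-origin short-circuit, "null" origins, and header-list serialization.

```javascript
// Simplified sketch of the Timing-Allow-Origin check from Resource Timing.
// A response passes when the header is "*" or lists the requesting origin.
function passesTAOCheck(requestOrigin, taoHeaderValue) {
  if (taoHeaderValue === null) return false; // no header: fetch timing stays opaque
  const values = taoHeaderValue.split(",").map((v) => v.trim());
  return values.includes("*") || values.includes(requestOrigin);
}

console.log(passesTAOCheck("https://example.com", "*")); // true
console.log(passesTAOCheck("https://example.com", "https://example.com")); // true
console.log(passesTAOCheck("https://example.com", "https://other.example")); // false
console.log(passesTAOCheck("https://example.com", null)); // false
```

Note that nothing in this check says anything about the resource's *content* — it only gates fetch-timing data, which is the distinction this issue is about.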

@yoavweiss
Contributor

I think we need a more holistic approach here.

HTMLImageElement.decode() is protected by neither TAO nor CORS. Talking to @mmocny, there are potentially other ways to weed out the decode times of opaque images.

We need to collectively decide if that information should be protected, and if so, by what means. We also need to ensure those means are ergonomic and easy to use, either through document-level CORS-fetch opt-ins for subresources, through public resource declaration, both, or something else entirely.

/cc @clamy @mikewest @annevk @arturjanc

@noamr
Contributor Author

noamr commented Jan 2, 2023

> I think we need a more holistic approach here.
>
> HTMLImageElement.decode() is protected by neither TAO nor CORS. Talking to @mmocny, there are potentially other ways to weed out the decode times of opaque images.
>
> We need to collectively decide if that information should be protected, and if so, by what means. We also need to ensure those means are ergonomic and easy to use, either through document-level CORS-fetch opt-ins for subresources, through public resource declaration, both, or something else entirely.
>
> /cc @clamy @mikewest @annevk @arturjanc

I agree about finding a more holistic approach. The first rule of thumb in today's HTML/Fetch specs, though, is that TAO can only protect the timing of a fetch, not anything to do with the resource's content, which is generally protected by CORS. Using TAO to protect render time breaks this, and would require saving TAO information in the HTTP cache, among other repercussions.
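The two gating models under discussion can be contrasted with a small sketch (helper names are mine, not spec text; in both models a non-allowed entry reports renderTime as 0):

```javascript
// Contrast the current TAO-based gating of LCP's renderTime with the
// CORS-based gating this issue proposes. Non-allowed entries report 0.
function exposedRenderTime(renderTime, { passesTAO, passesCORS }, gate) {
  const allowed = gate === "cors" ? passesCORS : passesTAO;
  return allowed ? renderTime : 0;
}

// A CORS-readable image served without a Timing-Allow-Origin header:
const corsOnlyImage = { passesTAO: false, passesCORS: true };
console.log(exposedRenderTime(1234.5, corsOnlyImage, "tao")); // 0 — today's behavior hides it
console.log(exposedRenderTime(1234.5, corsOnlyImage, "cors")); // 1234.5 — proposed: its content is readable anyway
```

The point of the proposal is the second line: if the page can already read every pixel of the image via CORS, withholding its render time protects nothing.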

@smfr

smfr commented Jan 3, 2023

For those not up on their acronyms, TAO = Timing-Allow-Origin

@annevk
Member

annevk commented Jan 4, 2023

At a high level we shouldn't add to the number of cross-origin channels without opt-in. That includes decode(). Given that it returns a promise it still seems somewhat malleable, but I can't find an existing issue targeting that. (There are some other issues with it though that suggest it's not being maintained.)

I also like the idea of not expanding the scope of the Timing-Allow-Origin header beyond fetch timing data.

@noamr
Contributor Author

noamr commented Jan 4, 2023

@mmocny isn't the renderTime observable-ish today for no-cors images, by doing something like the following?

  // Assuming `img` is a cross-origin <img> element already in the document:
  await new Promise(resolve => img.addEventListener("load", resolve, { once: true }));
  const startTime = performance.now();
  await new Promise(requestAnimationFrame);
  const endTime = performance.now(); // ~time of the first frame after load

@mmocny

mmocny commented Jan 4, 2023

You ask if renderTime is "observable-ish"... I would say no.

Measuring next rAF after image "load" event is not the same as renderTime [1].

Since you mention no-cors (rather, non-TAO today), I think you are asking whether you can work around the limitation of LCP only reporting loadTime? I don't think your polyfill suffices, as it only (somewhat) measures the "paint" stage on the main thread [2].

However, you can:

  • Use decode() to wait for the no-cors image to be ready (as mentioned already), and then use any strategy to get the render time of the next frame: rAF for paint time, Element Timing for renderTime (you would have to measure something that passes TAO/CORS, like text).
  • Measure LCP.renderTime via FCP.startTime, by ensuring that nothing contentful paints until the image is fully loaded, so that both timings represent the same frame. FCP reporting does not check TAO/CORS because it does not represent specific elements.

[1]: That's because requestAnimationFrame does not measure renderTime. On Chromium, renderTime comes after image decode (typically) and represents the frame presentation time, while rAF measures a much earlier main-thread stage, before the (final) decode.

On FF/Safari, there is no "LCP.renderTime", but we can use "FCP.startTime" as a comparison. While I think the timing is equivalent(ish?) to next-rAF-after-load, the frame it measures comes before image decode, AFAIK, and so does not represent the "LCP.renderTime" concept.

See: web-platform-tests/interop#215


[2] One small caveat regarding your snippet is that image "load" event may not get scheduled until after the next animation frame:

  • As image bytes become available, we mark new rendering opportunity.
  • Rendering task gets scheduled before image "load" event is dispatched.
  • As long as we are able to schedule rendering work, "paint" will come before the image "load" event fires.
    • This isn't always the case, you can see load event fire before rAF sometimes.
  • So, measuring the next requestAnimationFrame after "load" will often be off by at least one frame. If main thread has many tasks queued, this gap can be wider.

At least that's what I've observed in Chromium.

However, a more robust strategy you can use is rAF polling and checking the status of image.complete, since this appears to get marked in the right frame.

const img = document.querySelector('img');

img.addEventListener("load", (event) => {
  console.log("image load", performance.now(), event);
});

function rafPoll() {
  requestAnimationFrame((frameTime) => {
    const nowTime = performance.now();

    // This may come before img.onload event fires
    if (img.complete) {
      console.log('image complete', frameTime, nowTime);
    } else {
      rafPoll();
    }
  });
}

// Start polling after image begins loading
rafPoll();

...again, all that just measures loading + scheduling paint stage, and not decoding + presentation.

@noamr
Contributor Author

noamr commented Jan 4, 2023

I understand those timings are slightly different; I wonder whether the difference between the polyfill and renderTime is significant in terms of revealing hidden information about the image.

Let's say it's an SVG, and a logged-in user would get a really complex SVG vs. a much simpler one for non logged-in users.

  • would renderTime divulge this?
  • would one of the above polyfills divulge this?

@noamr
Contributor Author

noamr commented Jan 11, 2023

> At a high level we shouldn't add to the number of cross-origin channels without opt-in. That includes decode(). Given that it returns a promise it still seems somewhat malleable, but I can't find an existing issue targeting that. (There are some other issues with it though that suggest it's not being maintained.)
>
> I also like the idea of not expanding the scope of the Timing-Allow-Origin header beyond fetch timing data.

Created an issue for decode: whatwg/html#8709

@noamr
Contributor Author

noamr commented Jan 11, 2023

Note that HTMLImageElement.decode() does not work on SVG elements.
I'm not sure whether canvas drawing being synchronous lets web developers gather rendering information.
Also, createImageBitmap() can expose the render time of cross-origin images.
