
Transformation matrices' values #17

Closed · zizhengtai opened this issue Nov 22, 2020 · 7 comments

@zizhengtai

The RGB-LMS conversion matrices and color blindness matrices used by this repo are very different from those listed on https://ixora.io/projects/colorblindness/color-blindness-simulation-research/ (which you mentioned you consulted in #13), and also from those in the original MATLAB file in https://github.com/joergdietrich/daltonize/blob/master/doc/conv_img.m.

I'm quite confused why these are all different. Could you maybe shed some light on this?

@joergdietrich
Owner

The matrices given on ixora.io convert from sRGB to LMS using the Hunt and RLAB color appearance model. My matrices use the newer CIECAM02 model (see e.g. https://en.wikipedia.org/wiki/LMS_color_space). You can use the notebook in this repository to follow my computations: https://github.com/joergdietrich/daltonize/blob/master/Compute%20Transform%20Matrices.ipynb

IIRC, the matrices in the original MATLAB file go from RGB (not sRGB) to LMS using CIECAM97. Furthermore, the simulation of tritanopia in the MATLAB code leaves blue as the invariant color instead of red.
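For reference, a minimal sketch of that kind of derivation, assuming the CAT02 chromatic adaptation matrix from CIECAM02 and the standard linear sRGB -> XYZ (D65) coefficients; the notebook linked above is the authoritative computation:

```python
import numpy as np

# Linear sRGB -> XYZ (D65 white point), per the sRGB specification.
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

# XYZ -> LMS, the CAT02 matrix used by the CIECAM02 model.
XYZ_TO_LMS = np.array([[ 0.7328, 0.4296, -0.1624],
                       [-0.7036, 1.6975,  0.0061],
                       [ 0.0030, 0.0136,  0.9834]])

# Composing the two gives the linear sRGB -> LMS matrix (and its inverse).
SRGB_TO_LMS = XYZ_TO_LMS @ SRGB_TO_XYZ
LMS_TO_SRGB = np.linalg.inv(SRGB_TO_LMS)
```

Swapping in a different XYZ -> LMS matrix (Hunt/RLAB, CIECAM97, etc.) is exactly what produces the differing matrices compared in this issue.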

@zizhengtai
Author

zizhengtai commented Nov 22, 2020

Thank you for the fast reply.

I thought the matrices given on ixora.io actually convert from Linear RGB to LMS:

This color in the Linear RGB color space can be converted to the XYZ color space using a transformation matrix M_sRGB obtained from www.brucelindbloom.com.

And could you confirm whether or not #16 is still a problem? Specifically, I'm working directly with Linear RGB inputs (not sRGB), so I wonder if the rest of the daltonization algorithm (excluding gamma correction) is correct.

@joergdietrich
Owner

The matrices given on ixora.io indeed convert from linear sRGB to LMS, as do the matrices that I use. The difference is in the color appearance model we use.

Note that the transformations on ixora.io and in my code still use sRGB after linearization (i.e., after undoing the gamma correction): we use the sRGB-to-XYZ transformation coefficients. An RGB color space as such does not exist without specifying the primaries and the white point; see https://en.wikipedia.org/wiki/RGB_color_space
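For concreteness, a minimal sketch of the piecewise sRGB transfer function (the linearization and its inverse), following the sRGB specification:

```python
import numpy as np

def srgb_to_linear(c):
    """Undo the sRGB gamma encoding; c is a scalar or array in [0, 1]."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    """Re-apply the sRGB gamma encoding."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)
```

Note the linear segment near black; this is what a plain fixed-gamma (2.2) transformation ignores.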

The issue in #16 is, to my mind, still open, and I need to look at it in more detail. In any case, the error matrix underlying the daltonization process is the same in all OSS daltonization software I could find, so if my daltonization for tritanopia is off, the same holds for all the other software.

Tritanopia is rare and I don't know any tritanopes I could easily recruit to test the code in practice.

@mplaine

mplaine commented Jun 4, 2021

I made a quick comparison of different implementations available on the web (see https://docdro.id/ECf2A21).

I would love to hear your thoughts regarding the differences in the results, especially between Chrome DevTools (cb3) & Firefox DevTools (cb4) vs. Daltonize (cb5, cb6) vs. Colblindor (cb9) & Peacock (cb11). Also, what is the expected outcome when running the simulation multiple times in a row (input1 -> run1 -> output1 -> run2 -> output2)? Should output1 always be identical to output2?

@joergdietrich
Owner

That is a very nice comparison! Thank you!

A detailed analysis and comparison is beyond the resources I can invest in this project, so let me pick Peacock (cb10, not 11), where you link to the source code and I can easily follow parts of it.

  1. Peacock uses a fixed gamma of 2.2 and does not take the linear segment at low compressed values into account. See https://en.wikipedia.org/wiki/SRGB#Specification_of_the_transformation for what daltonize uses.
  2. Peacock uses the ITU-recommended values to transform from RGB to XYZ using the D65 white point. Daltonize transforms from sRGB to XYZ using the sRGB specification https://en.wikipedia.org/wiki/SRGB#Specification_of_the_transformation (which, by the way, is identical to the old ITU-R Recommendation BT.709).
  3. And then things get muddy, because it is not clear
  • which color appearance model is used to go from XYZ to LMS color space (daltonize uses CIECAM02, but other choices exist), and
  • which color is left invariant for dichromats (see the sketch after this list). It is usually white plus blue for protanopes and deuteranopes, and white plus red for tritanopes; other choices exist, and SCCB/MoinMoin is probably what happens if you make an incorrect one.
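To make the invariant-color point concrete, here is a minimal sketch of how the invariants pin down a simulation matrix in LMS space, in the style of Viénot et al. 1999. The helper name and the choice of white plus the blue primary as invariants are assumptions for illustration:

```python
import numpy as np

def protan_sim_matrix(rgb_to_lms):
    """LMS-space protanopia simulation whose invariants are the white
    point and the blue primary (one possible choice of invariants)."""
    white = rgb_to_lms @ np.ones(3)
    blue = rgb_to_lms @ np.array([0.0, 0.0, 1.0])
    # The lost L response is modeled as L' = a*M + b*S; a and b are
    # solved so that white and blue reproduce their own L exactly.
    coeffs = np.array([[white[1], white[2]],
                       [blue[1], blue[2]]])
    a, b = np.linalg.solve(coeffs, np.array([white[0], blue[0]]))
    return np.array([[0.0, a, b],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
```

Picking a different invariant pair changes a and b, which is one of the subtle choices that makes the published matrices differ.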

As you can see, there are a number of subtle choices to be made, pertaining to your assumptions about the input color space and the device on which an image is rendered. Diagnosing all the differences is tricky and time-consuming. In the end, all the simulations are only approximations of what a dichromat will really see.

Coming to your last question: a self-consistent choice of color transformations and simulation matrices ensures that applying the simulation multiple times to an image is identical to applying it once. This is true for daltonize.
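The reason, sketched briefly: the LMS-space simulation matrix replaces one cone response with a fixed combination of the other two, so it is a projection and therefore idempotent, and conjugating with any RGB <-> LMS change of basis preserves that. With placeholder values for a and b (the real values come from an invariant-color solve like the one above):

```python
import numpy as np

a, b = 1.05, -0.05  # placeholders for illustration only
S = np.array([[0.0, a, b],    # L is replaced by a*M + b*S
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
assert np.allclose(S @ S, S)  # simulating twice == simulating once
```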

@mplaine

mplaine commented Jun 14, 2021

Thank you for your detailed comparison between Daltonize (ref. [5], ID "cb6") and Peacock (ref. [10], ID "cb11")!

Could you also kindly explain the main differences between Daltonize and the Chrome/Firefox DevTools implementations? The latter two are based on Machado et al. 2009 (see https://developer.mozilla.org/en-US/docs/Tools/Accessibility_inspector/Simulation). More information on Machado et al. 2009 is available at https://www.inf.ufrgs.br/~oliveira/pubs_files/CVD_Simulation/CVD_Simulation.html, including the values used in their simulation matrices.
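For anyone comparing implementations: the Machado et al. matrices are applied as a single 3×3 matrix in linear RGB. A minimal sketch; the helper name is mine, and the severity-1.0 deuteranopia values below are transcribed from the project page linked above, so verify them against the source before relying on them:

```python
import numpy as np

# Machado et al. 2009, deuteranopia, severity 1.0 (transcribed from the
# project page above; verify against the source).
MACHADO_DEUTAN = np.array([[ 0.367322, 0.860646, -0.227968],
                           [ 0.280085, 0.672501,  0.047413],
                           [-0.011820, 0.042940,  0.968881]])

def simulate_deutan(rgb_linear):
    """Apply the matrix to a (..., 3) array of linear RGB in [0, 1]."""
    return np.clip(rgb_linear @ MACHADO_DEUTAN.T, 0.0, 1.0)
```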

@nburrus

nburrus commented Oct 28, 2021

I've recently dived into this topic and compared a bunch of models and implementations. My 2c: the Viénot, Brettel & Mollon 1999 algorithm implemented by daltonize.py is probably as good as Machado 2009 for protanopia and deuteranopia (full dichromacy). Machado actually has a hardcoded scale factor to match the first paper of Brettel, Viénot & Mollon (1997) for dichromacy.

For tritanopia, both Viénot 1999 and Machado 2009 are expected to be inaccurate, so I think the two-plane algorithm of Brettel, Viénot & Mollon 1997 is still the best option. The daltonization error matrix would also need some adaptation for tritanopia, I think: it does not make sense to keep distributing only the red error onto the other channels. It's probably the blue error that should get distributed instead.
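A minimal sketch of that adaptation, assuming the red/green error matrix used by daltonize.py and a hypothetical blue-error variant along the lines just described; the daltonize helper below is for illustration, not the repository's actual function signature:

```python
import numpy as np

# Red/green case, as in daltonize.py: the red-channel error is
# spread onto the green and blue channels.
ERR_RED_GREEN = np.array([[0.0, 0.0, 0.0],
                          [0.7, 1.0, 0.0],
                          [0.7, 0.0, 1.0]])

# Hypothetical tritan variant: spread the blue-channel error onto
# red and green instead (illustration only, not from daltonize.py).
ERR_TRITAN = np.array([[1.0, 0.0, 0.7],
                       [0.0, 1.0, 0.7],
                       [0.0, 0.0, 0.0]])

def daltonize(rgb, simulate, err_matrix):
    """rgb: (..., 3) linear RGB; simulate: a dichromacy simulation fn."""
    error = rgb - simulate(rgb)  # information lost to the dichromat
    return np.clip(rgb + error @ err_matrix.T, 0.0, 1.0)
```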

Regarding the choice of the RGB -> LMS matrix, it's tricky indeed, and we lack proper experimental data to pick one. It probably does not matter much, given that the entire pipeline remains an approximation, but personally I would not use a "sharpened" matrix. The LMS Wikipedia page says:

A "spectually sharpened" matrix is believed to improve chromatic adaptation especially for blue colors, but does not work as a real cone-describing LMS space for later human vision processing.

That's also what I got from Fairchild's "Color Appearance Models" book when he introduces the models used for chromatic adaptation:

The models described in this chapter do allow the computation of corresponding colors, but they are not color appearance models. They include no predictors of appearance attributes such as lightness, chroma, and hue.

Otherwise, the non-sharpened matrices are probably similar enough that the choice won't be a dominant source of inaccuracy; but since most experimental data on CVD simulation in the literature is based on the Smith & Pokorny LMS matrix, I would still pick that one.
