• Kelly@lemmy.world
    18 days ago

Stupid article needs a before-and-after comparison.

    Instead, it has way too many ads.

“It’s a bit technical,” begins Birdwell, “but the simple version is that graphics cards at the time always stored RGB textures and even displayed everything as non-linear intensities, meaning that an 8-bit RGB value of 128 encodes a pixel that’s about 22% as bright as a value of 255, but the graphics hardware was doing lighting calculations as though everything was linear.

    “The net result was that lighting always looked off. If you were trying to shade something that was curved, the dimming due to the surface angle aiming away from the light source would get darker way too quickly. Just like the example above, something that was supposed to end up looking 50% as bright as full intensity ended up looking only 22% as bright on the display. It looked very unnatural, instead of a nice curve everything was shaded way too extreme, rounded shapes looked oddly exaggerated and there wasn’t any way to get things to work in the general case.”
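    To see where that ~22% figure comes from, here is a quick sketch (my own, not engine code; it assumes the standard sRGB transfer function, which the hardware of that era only approximated with a gamma of roughly 2.2):

    ```python
    # Sketch of the mismatch Birdwell describes: the 8-bit value is stored
    # non-linearly, so "half" of the encoded range is nowhere near half the light.

    def srgb_to_linear(v):
        """Convert an encoded sRGB value in [0, 1] to linear light."""
        if v <= 0.04045:
            return v / 12.92
        return ((v + 0.055) / 1.055) ** 2.4

    encoded = 128 / 255                                     # what the 8-bit texture stores
    print(f"encoded value: {encoded:.3f}")                  # ~0.502, "half" in encoded space
    print(f"actual light:  {srgb_to_linear(encoded):.3f}")  # ~0.216, about 22% as bright
    ```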

    This should have been easy enough to illustrate.
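    A rough before/after in numbers, since the article didn't bother (again just a sketch, using a plain 2.2 gamma as a stand-in for the display's transfer curve rather than whatever the engine actually did):

    ```python
    # Shade a full-white texel on a surface tilted 60 degrees from the light,
    # so the correct result is 50% brightness (cos 60 = 0.5).

    GAMMA = 2.2
    encoded_texel = 1.0
    cos_angle = 0.5

    # Before: multiply encoded values directly, as the old pipeline effectively did.
    naive_encoded = encoded_texel * cos_angle
    print(f"naive:   displays as {naive_encoded ** GAMMA:.0%} of full brightness")    # ~22%

    # After: decode to linear light, shade there, then re-encode for the display.
    linear = encoded_texel ** GAMMA
    correct_encoded = (linear * cos_angle) ** (1 / GAMMA)
    print(f"correct: displays as {correct_encoded ** GAMMA:.0%} of full brightness")  # 50%
    ```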

    Edit: Here is a greyscale illustration of a similar phenomenon:
    From https://www.odelama.com/photo/Developing-a-RAW-Photo-by-hand/

    Of course, in reality it gets a bit more complex, since we also perceive different colors as having different brightness:


    From https://www.vis4.net/blog/avoid-equidistant-hsv-colors/
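    For a rough sense of how uneven that is (a sketch using the Rec. 709 / sRGB luminance weights; the linked post uses perceptual color models, which this only approximates):

    ```python
    # Relative luminance of a linear RGB color: green contributes far more to
    # perceived brightness than blue, even though both are "full intensity".

    def relative_luminance(r, g, b):
        """r, g, b are linear-light values in [0, 1]."""
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    print(f"pure red:   {relative_luminance(1, 0, 0):.0%}")  # ~21%
    print(f"pure green: {relative_luminance(0, 1, 0):.0%}")  # ~72%
    print(f"pure blue:  {relative_luminance(0, 0, 1):.0%}")  # ~7%
    ```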