Talk:Gamma correction/Archive 1


Power Law Relationship

Is the formula given here the same power-law relationship described in Gamma characteristic?

Yes.
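For reference, a minimal sketch of that power-law relationship in code (my notation; values assumed normalized to [0, 1] as in the article):

    # Power-law ("gamma") transfer: V_out = V_in ** gamma.
    # Encoding uses 1/gamma; decoding (the display) uses gamma.
    def apply_gamma(v_in: float, gamma: float) -> float:
        return v_in ** gamma

    encoded = apply_gamma(0.5, 1 / 2.2)   # ~0.73: gamma compression
    decoded = apply_gamma(encoded, 2.2)   # ~0.5: round trip restores the value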

How about a link in the text here instead of a footnote? Or do we in fact need a separate page for Gamma characteristic -- could we merge that into here & leave a redirect? -- Tarquin


Regarding the Linear intensity vs. Linear encoding table at the top, with IE 6.0, the text is black, making a portion of the left half of the table impossible to read.


I think we should unite this page with Gamma correction. Wikipedia is not a dictionary, so it would make no sense to have 2 articles dealing with the same issue. --Uri

I agree (I suggested the same thing on the other page ;-). Merge the text here into the other & leave a redirect. (or the other way round. I'm not sure which has precedence) -- Tarquin 07:22 Aug 18, 2002 (PDT)

Perhaps move them both into Gamma (computer graphics)? --Uri
I'd rather not, that's a title that no-one will ever remember. I suspect Gamma correction is most likely to be linked to.
Well, I suppose you're right. I'll do the merge. --Uri

limited signal bandwidth

The two images shown while speaking about monitors with an analogue input are great, since they show that alternating black and white within one row does not work, due to monitor limitations. This could be improved, however, in several ways:

1. It should state *which* one is correct (the horizontal lines).

2. It should state *how* to perceive the images (step back as far as possible; this is better than squinting, since merely stepping back does not affect your eyesight at all). This also goes for the gamma correction chart on the side (which someone has mentioned below, as they don't 'get it').

3. It should state that if you see two different shades, you can affect the shade that is wrong. Switch the monitor to a lower resolution and/or a lower refresh rate, and you may notice the shades become the same, as you would be reducing this artifact of analogue inputs.

4. It should explain why pages like this are *wrong*: http://freespace.virgin.net/hugo.elias/graphics/x_gamma.htm (because, even though it is a checkerboard pattern, it is the same thing as vertical lines, if you realize the monitor 'draws' the image one row at a time, completely independent of other rows). *Many* pages use this type of incorrect image (a sketch of both pattern types follows below). For my CRT, I get 1.3 gamma from these images, when it is actually 2.5! Wow.

137.186.22.215 15:04, 6 May 2005 (UTC)
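As a concrete illustration of the two kinds of pattern discussed above, here is a sketch (Pillow assumed; the function and file names are mine, not from any cited page):

    from PIL import Image

    def test_pattern(size=128, horizontal=True):
        """Single-pixel black/white stripes averaging to 50% intensity."""
        img = Image.new("L", (size, size))
        for y in range(size):
            for x in range(size):
                on = (y % 2 == 0) if horizontal else (x % 2 == 0)
                img.putpixel((x, y), 255 if on else 0)
        return img

    # Horizontal stripes survive an analogue video chain intact; vertical
    # stripes (or a checkerboard) alternate within each scan line and get
    # low-pass filtered, which is why such pages under-measure gamma.
    test_pattern(horizontal=True).save("gamma_stripes.png")
    test_pattern(horizontal=False).save("gamma_wrong.png")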

(Non)Displayable symbols

Hello

With Microsoft IE under Windows (W2K Pro, but not limited to this version), the "proportional to" operator (markup "&prop;", ∝), like other math symbols from the ISO 10646 character set, is simply displayed as the default white square used for non-displayable symbols within the ISO 8859-1 Latin character set ("∝" should show a white square if you are using IE under Windows).

Is there a solution with some special font to be downloaded?

I suppose this is a general problem for all pages using ISO 10646 symbols.

Jean Paul (gerard.jph@wanadoo.fr) 20040315

I've no idea about Windows, but I replaced the equations with TeX versions, to be rendered as images. Unfortunately, due to a bug in the Wiki software, it won't render a "proportional" symbol, so I used the "similar to" symbol. -- Hankwang 08:54, 16 Mar 2004 (UTC)
You need to install a font that has all the needed symbols; for IE and Microsoft Word that would be Arial Unicode or Verdana Unicode. You can obtain those fonts from the Word setup. I don't know whether a freshly installed Firefox supports special Unicode fonts.

What is this image supposed to be?

Q: I don't get what the check-your-gamma-value graphic on the right side of the article is supposed to be. When I look at the image, I see lines on the left side, and blank fields with slightly different grey values on the right side. I can't tell which field has the same grey value as the lined field, as my brain doesn't fool me. Am I supposed to blur the image (look unsharply at it)? thanks, --Abdull 19:28, 17 Jan 2005 (UTC)

A: As explained above in 'limited signal bandwidth', you are supposed to step back until your eye perceives the horizontal stripes as a solid color. Then you can compare the left side and the right side. The row whose squares appear as one solid color, rather than as two differently colored squares next to each other, indicates the gamma correction of your display. You'll notice your eye is very good at noticing even slight differences in colors. The page should explain this. 137.186.22.215 15:07, 6 May 2005 (UTC)
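The arithmetic behind the chart, for what it's worth (a sketch assuming a pure power-law display): the stripes average to 50% of maximum intensity, so the solid patch that matches them has encoded value 0.5**(1/gamma), and inverting that relation estimates the display gamma.

    import math

    def matching_patch(gamma: float) -> float:
        """Encoded value whose displayed intensity equals the 50% stripe average."""
        return 0.5 ** (1 / gamma)

    def gamma_from_match(encoded: float) -> float:
        """Estimate display gamma from the matching patch's encoded value."""
        return math.log(0.5) / math.log(encoded)

    matching_patch(2.2)      # ~0.73, i.e. pixel value ~186 out of 255
    gamma_from_match(0.73)   # ~2.2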


I edited Cyp's latest addition involving black and white stripes. The first image ("gammaaargh") has nothing to do with gamma correction, because it is composed entirely of black and white pixels and therefore says nothing about the monitor's linearity. The difference in the apparent brightnesses of the two squares is just a function of the monitor's analogue bandwidth. I left it in anyway, for the time being, because it's interesting, but it really belongs in some other article. The second image ("gammatest") is closer to a true gamma test, as it allows you to compare an apparent 50% luminance generated by alternating black and white pixels with solid areas of varying luminance. It says nothing about the linearity at values other than 50%, however. -- Heron 12:29, 30 May 2004 (UTC)


The use of horizontal lines for the images means that they flicker badly on interlaced displays.

--David Woolley 15:06, 16 October 2005 (UTC)

Why is there gamma correction

What is the reason for gamma correction? Does it have something to do with the Weber-Fechner law (or better: Stevens' power law), which deals with the fact that we freaky humans perceive stimuli logarithmically? --Abdull 11:52, 23 Mar 2005 (UTC)

I believe this is simply because TV cameras tend to produce current proportional to luminous intensity, but CRTs tend to produce a response proportional to the drive voltage raised to about the power 2.2. For TVs it was much cheaper to correct at the transmitter than at every receiver. For PCs, "IBM PCs" use cheap hardware that simply converts colour numbers to tube drive voltages. Macs try to correct to produce a subjectively linear scale.

--David Woolley 15:03, 16 October 2005 (UTC)

Wrong answer. The correct answer: there is gamma correction because every monitor is different. Usually you can adjust your monitor using the controls for gamma correction. Alternatively, you can adjust your video card output for gamma correction.

Wrong data in the article: The hexadecimal values for the boxes are equally spaced in the bottom row, but are not equally spaced in the top row. Maybe this goes back to the problem of engineers counting from "0" instead of "1" as the first number.  :)

Some curiosities

"The gamma function, or its inverse, has a slope of infinity at zero."

This gamma function (http://en.wikipedia.org/wiki/Gamma_function)? Since it's probably referring to I ~ V^gamma instead, how do you get a slope of infinity at zero for that? To me the slope seems to approach zero at zero.

Another thing that confuses me is why any gamma correction is applied at all. The top chart with linear intensity and linear encoding clearly shows that linear intensity looks wrong to the human eye. Besides, usually gamma is only corrected from 2.5 to 2.2, so even after correcting it's still not linear.

I believe the 2.5 is bogus. Gamma normally gets corrected from 1.0 (that of photo-sensors) to 2.2 (that of CRTs). The reason for correcting is that CRTs are non-linear and it used to be cheaper to correct at source.
If you refer back to Charles Poynton, the gamma of a CRT is really 2.5. The value 2.2 is used at the camera to account for the phenomenon of simultaneous contrast, as TV displays are often viewed in dimly-lit rooms. algocu 16:11, 7 February 2007 (UTC)
Gamma, here, has nothing to do with gamma functions. --David Woolley 22:17, 22 November 2005 (UTC)
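For what it's worth, the slope claim can be checked directly if one takes the curves as pure power laws (the article's formula, not the Euler gamma function):

    \frac{d}{dV}\,V^{\gamma} = \gamma V^{\gamma-1} \to 0
    \qquad\text{and}\qquad
    \frac{d}{dV}\,V^{1/\gamma} = \frac{1}{\gamma}\,V^{1/\gamma-1} \to \infty
    \qquad\text{as } V \to 0^{+},\ \gamma > 1

So the decoding curve has zero slope at zero, and it is its inverse, the encoding curve, that has infinite slope there; the quoted sentence presumably means the latter.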

Wikipedia or Links, who's got it right?

Following the gamma correction bar at the right of this article, I get a gamma value of about 2.8. This results in a *very* bright image on my screen, which doesn't look good at all. However, following the instructions at the Links calibration page -- http://links.twibright.com/calibration.html -- I get a gamma value of around 1.4 (set using nvidia-settings in Linux). This looks much better, but which one is actually correct?

Optimum subjective gamma

My understanding is that the optimum gamma to obtain equal subjective brightness steps is rather lower than 2.2 or 2.5. I seem to remember it being about 1.6, and I believe it is the correction applied by Macs for non-Web images.

Also, my understanding is that the typical CRT has a gamma of 2.2 rather than 2.5. 2.2 is chosen because it is typical of CRTs, not because it is physiologically optimum.

--David Woolley 15:12, 16 October 2005 (UTC)

This varies depending on platform. There's display gamma, video hardware gamma correction, and the net gamma response of a system displaying an image. There is also gamma correction built into a system.

Monitors have a gamma response of about 2.2-2.5 (I think the 2.5 is actually a boost given to compensate for ambient light and non-zero black levels).

So when you display on a PC there is typically no gamma correction in hardware; the display typically has a 2.2 response, and so to display an image correctly it has to have built-in correction or software correction of about 2.2 to 2.5. This is what's built into the sRGB colorspace standard.

The net gamma response of a PC is 2.2 (ish)

On a Mac the net gamma response is 1.8; the display still has a response of about 2.2, but the hardware does gamma correction to make the net response 1.8. Images that look OK on a PC will look too bright on a Mac unless the display software compensates for it. This is why file formats like PNG have a built-in gamma factor, so that a PC and a Mac can display images from either platform correctly.

Other platforms are different; e.g., SGIs have a built-in default gamma correction of 1.6, but you can change this using the command-line gamma command.

Today most PCs have adjustable gamma correction available through desktop settings, so they could gamma-correct to look like a Mac, for example, but sRGB is now the standard on that platform for all content (and gamma has beneficial effects w.r.t. precision and human contrast sensitivity).

Ebner and Fairchild use an exponent of 0.43 to convert from luminance (linear) to lightness (perceptual), which has very good agreement with Munsell encoding and CIE L*. A monitor gamma of 1/0.43 ≈ 2.3 is very close to that typically used in Microsoft environments. The gamma of 1.8 used by Apple is sub-optimal for encoding, but has better performance for rendering and handling transparency in monitor RGB. BTW, "gamma function" is a definite integral attributed to Euler, similar to the factorial function. Lovibond (talk) 21:54, 23 May 2008 (UTC)
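A quick numerical comparison of that 0.43 exponent with CIE L* (my illustration; the L* formula is the standard CIE 1976 definition, while scaling the power law to 0-100 is an assumption):

    def cie_lstar(Y: float) -> float:
        """CIE 1976 lightness from relative luminance Y in [0, 1]."""
        return 116 * Y ** (1 / 3) - 16 if Y > 0.008856 else 903.3 * Y

    def power_lightness(Y: float, exponent: float = 0.43) -> float:
        """Simple power-law lightness, scaled to 0-100."""
        return 100 * Y ** exponent

    for Y in (0.05, 0.18, 0.5):
        print(Y, round(cie_lstar(Y), 1), round(power_lightness(Y), 1))
    # 0.05: 26.7 vs 27.6;  0.18: 49.5 vs 47.8;  0.5: 76.1 vs 74.2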

Display appearance

I am using Firefox 1.6a on Windows XP, and the linear intensity scale looks about right, while the linear encoding scale looks black until 0.04, with 0.05 linear encoding being equivalent to 0.02 linear intensity. In IE it's the same way, except the numbers are all black, while in Firefox they are all gray. Also, GammaAaargh.png looks nowhere near uniformly bright. In GammaTest, the left side is a constant color that is brighter than any square on the right, also in both browsers, though in IE the first square on the right is darker. I don't know for sure the cause of the differences between browsers, but the differences are slight.

The linear intensity version should have the mid level to the left of centre, and does to me, because the eye doesn't have a gamma of 1. The backgrounds should be exactly the same on all graphical browsers using the same hardware, because how they should appear is part of the specification of the web. However, if you have colour profiles for your devices, it is possible that one browser isn't correcting for the difference between the display gamma and the sRGB curve and the other one is. --David Woolley 22:42, 22 November 2005 (UTC)

In summary, some web browsers apply their own gamma correction, and this varies from browser to browser and platform to platform. It's a thorny issue, complicated by PNG, for example, which has built-in gamma; however, this makes it look the same on all platforms, because the net gamma is applied to take the image back to a consistent display-linear space. In short, it's bloody difficult to show an image in a browser that definitively characterises the gamma response on all systems, doubly so on systems which do 'the right thing', and impossible using PNGs with any browser that is written 'correctly' to account for gamma.

The main purpose of gamma correction

"The main purpose of gamma correction in video, desktop graphics, prepress, JPEG, and MPEG is to code luminance or tristimulus values (proportional to intensity) into a perceptually-uniform domain, so as optimize perceptual performance of a limited number of bits in each RGB (or CMYK ) component." Excerpted from The rehabilitation of gamma. This article needs more emphasis on the real purpose of gamma correcion. -- Shotgunlee 00:14, 1 July 2006 (UTC)

incorrect GammaTest.png

GammaTest.png is not correct.

If you compare the image at the right side of the article (GammaTest.png) to the similar image at

http://www.normankoren.com/makingfineprints1A.html

and to the gamma correction images here:

http://www.photoscientia.co.uk/Gamma.htm

you will see that they do not agree. And I think these people know what they are talking about.

Simastrick 17:35, 20 July 2006 (UTC)

Interesting. The RGB values in GammaTest.png are correct. According to your links and Gammatest.png displayed in GIMP or gThumb, my monitor is 2.2 (as it should be). However, displayed in Opera, Firefox, and my old XV image viewer, it gives the likely incorrect value of about 1.9-2.0. I'm not sure what the various PNG libraries assume for the gamma value of the display. Han-Kwang 21:15, 22 July 2006 (UTC)

{0..1} or [0, 1]?

I think that it would be more precise to write in the first paragraph that V_in and V_out belong to the range [0, 1], not {0..1}. In mathematics, curly braces denote a set by listing its elements. Maybe it would be clearer to non-mathematicians, and precise at the same time, if we write:

   0 <= V_in, V_out <= 1
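In LaTeX notation, one possible rendering of that suggestion would read:

    0 \le V_\mathrm{in}, V_\mathrm{out} \le 1,
    \qquad\text{equivalently}\qquad
    V_\mathrm{in}, V_\mathrm{out} \in [0, 1]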

xgamma mention

Part of the article mentions that Linux operating systems can adjust the gamma using the xgamma command as root. I see several problems with this. First, it's not Linux specific---it's a command any operating system using the X Window System has. Second, you don't need to be root to use it. Finally, is this even appropriate for Wikipedia? I'd correct the text myself, but I don't want to bother if the paragraph is better just stricken completely. 129.110.241.254 15:08, 30 September 2006 (UTC)

Also, it's simply wrong. xgamma sets gamma correction, not gamma. After all, if you could just tell the hardware to go to gamma 2.2 or whatever, it would be all too easy, wouldn't it? A gamma correction of 2.1, as the article suggests, will give a gamma of around 4 on most systems - hardly recommended. Somewhere around 1.136 is usually good, but by its nature the optimum gamma correction varies widely between systems.
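The arithmetic behind that distinction, for what it's worth: power-law curves compose by multiplying exponents, (V^a)^b = V^(ab), so a correction ramp of V^(1/c) in front of a display with native gamma g gives a net response of V^(g/c). A sketch under that assumed ramp convention (the actual convention xgamma uses is not specified here):

    def net_gamma(display_gamma: float, correction: float) -> float:
        """Net exponent when a V ** (1 / correction) ramp drives a
        V ** display_gamma display:
        (V ** (1 / correction)) ** display_gamma == V ** (display_gamma / correction)."""
        return display_gamma / correction

    net_gamma(2.5, 1.136)   # ~2.2, consistent with the 1.136 suggested above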

Safari comment

The test pattern section was changed to suggest that it "should not be used when viewed using the Safari Browser on Apple operating systems, as it is the only browser that does not assume the image to be sRGB."

This is not true on either point. Many browsers make no attempt to deal with colorspace, sRGB or otherwise. They treat images as if in monitor space; that is, they send the 8-bit color data to the screen without modification. On a Mac as on any other platform, adjusting for this test pattern will therefore make the monitor space approximately appropriate for sRGB. Of course, if you adjust the hardware, your monitor profile should change, but this will not affect the display in non-profile-aware apps beyond the hardware effect that you've adjusted.

Dicklyon 23:46, 7 November 2006 (UTC)

Regarding another Safari comment, in which Safari was listed as an example of a browser that is not colour profile aware, this has not been true for quite some time. Furthermore, since almost all Mac browsers are profile-aware, and ALL Windows browsers are profile-unaware, it is particularly odious and misleading to use a Mac browser as your one example of a non-profile-aware browser. Firefox is still profile-dumb and available on both platforms so it seems to me to be quite a better choice as an example both in terms of accuracy and fairness.--67.70.37.75 01:05, 2 March 2007 (UTC)

Yes, good point. When did Safari become profile aware? Which version? Dicklyon 01:46, 2 March 2007 (UTC)

The test image referred to was uploaded as untagged RGB, which is fine, but the resized thumb version is a gamma 2.2 grayscale with completely different numbers. It appears that wikipedia has noticed the lack of color and has done a profile conversion, making the image completely worthless as shown. Can someone else take a look and verify that the thumbnail doesn't match the original, numerically or visually? Dicklyon 07:29, 2 March 2007 (UTC)

Request example

This article needs a prominent example of an image renormalized with high and low gamma. This is easy to make. —Ben FrantzDale 17:40, 10 May 2007 (UTC)

Indeed; feel free to make such an image.--jacobolus (t) 20:21, 20 July 2007 (UTC)
I did it. Yours. Ricardo Cancho Niemietz (talk) 20:13, 12 March 2008 (UTC)

Written for a very specialized audience

I believe that this article uses too technical a vocabulary and assumes too much prior knowledge of the topic to be a proper encyclopedia article.

I base this on the fact that I'm a computer professional with a biology and physics background, yet I still don't know what the "Explanation" means. I recognize the words and I understand the mathematical notation, but it doesn't add up without more background than I have.

Wikipedia is meant to be an encyclopedia, by definition for a general audience. How many non-specialists do you honestly think can understand that section? CarlFink 16:17, 20 July 2007 (UTC)

It doesn't seem to me to assume too much prior knowledge, but it is written rather less clearly than would be ideal, and is indeed a bit jargony, to the point where it would be hard for an average high school student to understand (which is about where I think we should aim for the first couple of paragraphs of explanation). --jacobolus (t) 20:19, 20 July 2007 (UTC)
I expanded and rearranged the entire article in order to address a less specialized audience. Yours. Ricardo Cancho Niemietz (talk) 20:15, 12 March 2008 (UTC)

I just finished reading this article and I still don't understand gamma correction. What does it do, and how does it apply to someone editing an image? And how does it do it (in layman's terms)? The initial explanation should be understandable by an average high school student. The detailed technical explanation should be in subsequent paragraphs. Some of my issues: what is a power-law expression? What is tristimulus (as it pertains to images)? The graph on the right means nothing to me. I'm not asking to eliminate the technical stuff, but start with an explanation most can understand. I'd offer suggestions, but like I said, I don't understand the subject. 71.193.12.143 (talk) 06:29, 22 August 2008 (UTC)

Include ITU-R BT.709

I have just used this page to get a primer on gamma correction for the transformation from raw sensor data to 8-bit RGB color. During my research I found out that ITU-R BT.709 is recommended to reduce sensor noise. I think this should be mentioned in the article. Graph of the ITU-R BT.709: [1] --134.96.49.57 16:10, 2 September 2007 (UTC)

A gamma curve can indeed be useful to match quantization noise to sensor noise, for an optimized SNR. What source have you found about that? I agree we should probably include the Rec709 curve; it would be good to find more about where it is used; Poynton says in video systems, but it would be good to know more explicitly what video systems have been using it and since when. Dicklyon 01:26, 3 September 2007 (UTC)

Digital images' binary pixel values are not gamma compressed

Hello. I reverted your edits, which pointed to the idea that the raw binary pixel values in every digital image file are in fact implicitly gamma compressed. The actual pixel bytes in any file format are not gamma compressed, but files can carry additional out-of-pixel metadata, such as ICC profiles, which can describe explicit gamma and other nonlinearities. It seems to me that you are an expert in digital photography but not in low-level graphics programming, which is what is described in the "Methods" section. You systematically assume that every environment has an implicit gamma compression scheme, which is false. I know many fields in which gamma is treated as I said. You can do a simple test: fill an image with an arbitrary grayscale or RGB value; take a snapshot of the whole screen to the clipboard and paste it into a new document. Hover the eyedropper over the areas of the snapshot corresponding to the first image, and you'll see that the copied video-memory pixel values are higher than the original (that is, they have been gamma-encoded with the intermediate calibration-by-software method cited). If the original file pixel values were already gamma compressed, it would be nonsense for the software to perform the encoding again. And please, don't remove anything more before talking. Ricardo Cancho Niemietz (talk) 14:00, 14 March 2008 (UTC)

Ricardo, you are absolutely wrong here. ALL 8-bit RGB and grayscale files, with few exceptions, are gamma-compressed, usually for a decoding gamma of 2.2 or 1.8, but sometimes 1.5 or other values. Nobody would be so lame as to store linear 8-bit data. I don't know where you are getting these ideas; read any paper on sRGB and other color spaces, or docs on JPEG, ICC, etc. If sometimes in graphics programming you store linear instead of gamma-compressed data, please give a citation, and explain how that works in the context of systems where all 8-bit data are assumed to be gamma-compressed. And yes I have done lots of low-level graphics and image programming. Dicklyon (talk) 15:30, 15 March 2008 (UTC)
I edited again to try to reflect the well-known fact that all standard computer image data and file formats are gamma encoded. I'm having a hard time imagining why you keep writing that they are not. Where are you getting this impression? I'm not sure about your Photoshop 6, but in 5 and 5.5 they had explicit control of the gamma values assumed for files without profiles; now they typically just default to sRGB, gamma effectively 2.2. There is no profile that I'm aware of for gamma=1.0, since nobody would use that, but of course one could make one, and it may exist. But it's certainly not within usual practice, especially at 8-bit resolution, where a linear encoding would have unusably low dynamic range. Dicklyon (talk) 22:20, 15 March 2008 (UTC)
The only place I know of gamma 1.0 being used is in the internals of panorama tools, and that is not 8 bit color, as far as I know. It seems from my (admittedly limited) experience that Dick is right about this; I can't imagine people using 8-bit color with 1.0 gamma in Photoshop. --jacobolus (t)
When I worked on ANSI/ISO standards committees for photographic standards, Microsoft was pushing a linear wide-dynamic-range encoding with 16 bits per pixel; I haven't heard whether it became a standard, and I don't know if they use it as an option in their HD Photo, but they might. If panorama tools uses linear internally, it's certainly not in 8 bits at that point, as the image dynamic range would be too severely degraded for users to tolerate. I've worked on a number of systems that use linear internally, as is necessary when doing color matrixing, but that's always either 16 or 32 bits or float. Ricardo is not unusual among people coming from a graphics background, who have gotten away with ignoring the gamma and pretending that things are linear; most people get to the point where they realize they have to get over that to make progress in making quality images, but you can get away with ignoring it for a while. That's what happened at SGI in the 1980s, and why they standardized on the unusually low gamma value of 1.5; 1.0 was untenable, and 1.5 was the lowest they could get away with, for the least pain in readjusting their thinking (see various scraps of info here). Dicklyon (talk) 14:47, 16 March 2008 (UTC)
Ah, I found it; it went through the IEC as standard 61966-2-2 ([2]). This is 16-bit linear, because there were no other linear RGB spaces around and the graphics guys wanted one; quote from page 4:

...Yet, neither CMYK workflows, the ICC profile format nor the sRGB standard colour space provide a complete solution for all situations. In particular, the computer graphics and gaming industries desired a standard RGB colour space that was linear with respect to luminance. As a 64 bit encoding, sRGB64 allows for 16 bit per channel encoding, including an alpha channel for computer graphic operations.

Also see scRGB color space and [3] for more on linear and nonlinear spaces. There are no linear spaces with 8-bit per component. Dicklyon (talk) 17:04, 16 March 2008 (UTC)
Well, if 1/2.2 gamma encoding had always been the standard encoding for still images, why the hell did I *always* write the same *stupid* gamma-encoded lookup table routine in all the software I made over the years? And why did Photoshop (under Windows) do the same? Ah, maybe my customers (press agencies like the Spanish Agencia EFE, Telam from Argentina, NTB from Norway, etc., and of course their subscribers, some of the main daily press publishers around the world) didn't like those darker images they saw on their PC screens when the images were not corrected by software... Perhaps they were so lame as to generate linearly encoded photographs 'cos they were unaware of your wonderful sRGB... Ah! Sorry: sRGB didn't exist by then... I suppose the gamma encoding was so standard that there was no need for a new standard... So then, why did sRGB arise? Or why does IPTC explicitly admit linear encodings? Or why did Adobe need to add extra color profiles in their JPEG implementation? And to see 1.0 gamma in Photoshop: Image menu→Mode→Set profile→Don't use. Great! Ricardo Cancho Niemietz (talk) 10:39, 17 March 2008 (UTC)
But seriously: for many years, mere linear analog-to-digital input and digital-to-analog output was done, especially in the pre-digital-camera age. When a given byte is written to video memory or a hardware CLUT, the hardware is unaware of the intensity the human and/or the software intends. DACs don't apply any nonlinearity per se. Hence the calibration LUTs in some (not all) display hardware. And hence the software corrections. Yes, there are many inconsistent and odd ways to manage gamma in the PC-Wintel world. Traditionally, Macs have been more consistent, but not perfectly so, especially when importing digital images from PCs. Get an idea: MS-Windows XP and all its predecessors lack any native control panel to perform gamma and other calibrations of the monitor. The Windows Bitmap image file format spec doesn't mention any gamma issue (it always assumes linear), etc. Yes, this is a jungle; even today, sRGB is not generalized or implemented in much PC graphics software: Macromedia FreeHand and Quark XPress (to cite a pair) don't manage sRGB at all. Yes, they assume linear most of the time! This is what I called the "real world". You removed the following reference http://msdn2.microsoft.com/en-us/library/ms793055.aspx, commanding gamma-encoded rendering with DirectX under Microsoft Windows. I quote:
«Because the desktop [the output] is typically not in linear color space, gamma correction to the contents of back buffers is required before the contents can be presented on the desktop. (...) to indicate that the back-buffer contents are in linear color space (...) the driver determines that the source surface contains content in a linear color space. The driver can then perform gamma 2.2 correction (sRGB) on the linear color space as part of the blt».
I can understand that you dislike that Microsoft backs the linear-color-space way of managing RGB: linear in memory, gamma corrected on output. And this is the common way, not the exception (the exception is, really, having already gamma-compressed data). But by removing it, you are only hiding what you don't want to see. It seems to me this is not fair play. Ricardo Cancho Niemietz (talk) 11:41, 17 March 2008 (UTC)
Also, you removed the intermediate (transmitted) images from my illustration, replacing them with dashes. I dislike this change very much, because they helped readers understand the circuit better than the bare curves do. Remember, you shouldn't write for experts, but for a general audience. Ricardo Cancho Niemietz (talk) 11:41, 17 March 2008 (UTC)
Well, we could reach an agreement with the following formula: "to achieve consistency, still digital imaging should be done gamma compressed universally. There is a current trend to adopt sRGB as the main RGB encoding, but still there are many systems that do not use it, nor do they assume a gamma-compressed color space at all. This is more noticeable on PC-compatible platforms." and by rescuing the reference. Ricardo Cancho Niemietz (talk) 10:39, 17 March 2008 (UTC)
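For concreteness, a minimal sketch of the kind of file-to-screen "gamma-encoded lookup table" routine described in this thread (the gamma value and the helper name are illustrative, not from any cited source):

    # 256-entry LUT that gamma-encodes linear 8-bit file values for a display
    # with the given decoding gamma, applied when blitting to video memory
    # (the "calibration-by-software" method described above).
    def build_encoding_lut(display_gamma: float = 2.2) -> list:
        return [round(255 * (i / 255) ** (1 / display_gamma)) for i in range(256)]

    lut = build_encoding_lut()
    lut[128]   # ~186: mid grey is written to the framebuffer as a higher value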

Why you wrote "gamma-encoded lookup tables" is something you'll have to tell us. Perhaps you were starting with linear data, like from computer graphics routines, or linear sensor raw data, and needed to encode them to work right in a computer image. That's why I did it, anyway. I'm not sure what you're saying about Photoshop; where do you infer that it is using linear or doing a gamma encoding? I've not encountered that. It's certainly not what you get by not using profiles. I wasn't aware that IPTC had allowed linear encoding; is that in 8 bit? Can you show me where the docs say that?

"Digital Newsphoto Parameter Record" (http://www.iptc.org/std/IIM/4.1/specification/dnprv4.zip) of the "Information Interchange Model" (IIM) file format from the Comité International des Télécommunications de Presse-Newspaper Association of America (IPTC-NAA). File dnprgl1.pdf, page 1, I quote:
«The Digital Newsphoto Parameter Record (DNPR) provides a standard for the transmission of digitised newsphoto images together with essential information related to the generation of the image data file. One of the parameters included (DataSet 3:120) is the quantisation method. Most scanners create image data using linear reflectance/transmittance as the quantisation method, although some convert internally to the linear density domain. More scanners are becoming available that operate in the TV Gamma domain specifically intended to provide images for display on monitors rather than for printing purposes. Image files received into picture desk systems are often converted into the linear density mode prior to display, manipulation and storage.», etc.
Also the "Background" section. Very, very interesting. It is for any pixel depth. So never was a single 2.2-gamma case. About Photoshop: please do the snapshot experiment. Ricardo Cancho Niemietz (talk) 16:16, 17 March 2008 (UTC)

I do understand that early image scanners dumped linear 6-bit and 8-bit data into memory and tried to work with it; but it was crap; they had to go to 10- and 12-bit ADCs and gamma curves to get usable pictures. If you'd like to write a section on the history of image scanning, you could go into that (but probably this is the wrong article for it).

Well, it is rooted in the history of scanners, of course, but even today there are such cases, in professional fields. I agree, this is not the article, but it is a mistake to present these cases as obsolete. Ricardo Cancho Niemietz (talk) 16:16, 17 March 2008 (UTC)

Just because Windows BMP doesn't cite a gamma does NOT mean it can be treated as linear; assuming it's linear is a big error, usually, though of course it could be in strange situations like input from a linear 8-bit scanner (most scanners gamma-encode, so that would be strange). It's almost always displayed direct to a CRT, for which a 2.2 gamma encoding, or thereabouts, is assumed.

If you say: "if I see this image fine on my CRT (without active compensations), anyone can see exactly on any other CRT of the world", well, you are doing something as a kind of "human encoding", and then binary values does not matter. But this was only in a very early stage of digital image processing on PC, when images produced on screen were intended to be shown only on other screen. It can be true for a handfull of videogames of the 80's. But we are talking about professional scanning-processing-printing, cross-platform image interchange. Check the IPCT DNPR for real contexts.Ricardo Cancho Niemietz (talk) 16:16, 17 March 2008 (UTC)

The "intermediate transmitted images" that I removed were not really appropriate; not really images; not an appropriate way to represent the effect of the gamma encoding or linear data; they were displayed in a misleading way. I simplified to a much more transparent diagram in which one doesn't have to wonder what those intermediate light images are supposed to be trying to convey.

Again: address the non-expert audience too. You and I know what is in the dashes; casual readers do not. Ricardo Cancho Niemietz (talk) 16:16, 17 March 2008 (UTC)
Exactly; and the intermediate images would be very misleading to the casual reader, since they implied that image file data might be displayed linearly somehow. Dicklyon (talk) 18:01, 17 March 2008 (UTC)
They don't imply anything; they are only "magical views" of the transmitted stage, what's behind the scenes. They clarify a lot. Third opinions welcome. Ricardo Cancho Niemietz (talk) 18:31, 17 March 2008 (UTC)

Show me again the Microsoft thing; if I recall correctly, they had a way to store and convert between linear and gamma encoded; I never encountered or needed that in my Microsoft programming, but I can accept that you did. Please explain the context in which it was useful, and tell us whether the linear version in the back buffer was stored in 8-bit space (I bet not).

Don't bet: DirectX does not manage 16 bits per component, 48bpp RGB. Microsoft talks about linear, not me. I don't invent anything. If you read carefully the reference I put, you'll discover that the desktop (that is, the output image in video memory) is usually gamma-compensated. That is, software-compensated, etc. Ricardo Cancho Niemietz (talk) 16:16, 17 March 2008 (UTC)

As to your formula for agreement, I reject it totally. It contains an unsourced assertion of a trend, a claim that many systems do not assume gamma-compressed image data (I guess that's what it was trying to say), and a "should". Let's just say what is. Dicklyon (talk) 15:05, 17 March 2008 (UTC)

Check DNPR of IPTC again. Please, do a proposal for agreement yourself, and discuss. Ricardo Cancho Niemietz (talk) 16:16, 17 March 2008 (UTC)
Ricardo, if you'd like to do a section for professionals on the use of linear luminance and linear density (that is, log luminance) in the professional news photo industry, I would agree that's a good idea; maybe there's a better article for it, but you could start with a section here; after all it does relate to the need for gamma correction. But to mix it into the stuff about the normal use of images in normal computers, which are ALWAYS gamma encoded (neither linear luminance nor linear density) is just going to confuse readers. I'm not sure what you're saying about Photoshop; I have used it for years, in various versions, and never encountered any evidence of a "linear" mode unless a special profile was made to do that. Dicklyon (talk) 18:01, 17 March 2008 (UTC)
"Normal" images? "Normal" computers? "Always"? You'll need a good source to support your words, some that overrides that of IPTC, with the same level of broad consensus and standard fashion. TV & video papers not taken into account: digital video is already always 2.2 gamma-corrected. Still image, DTP & prepress papers, please. About Photoshop: maybe Win and Mac versions behave different. But still, please do the snapshot experiment, and more likely you'll discover the software file-to-screen correction. Under Windows, it's systematic. Ricardo Cancho Niemietz (talk) 18:31, 17 March 2008 (UTC)
Re Photoshop, I have no idea what experiment you are proposing. Re "normal" computer images, I will come up with more sources if you need them; the scope of these is generally much broader than the relatively narrow new-photo business that the IPTC is about. Dicklyon (talk) 20:30, 17 March 2008 (UTC)
At least you've clarified that you're coming at this from the point of view of linear film scanners in the press business, so we can address things better contextualized. The CG business also does linear, as explained well in this book; it points out that the PHIGS and CGM standards are linear 8-bit graphics encodings, and that they don't work well, and that JPEG uses 8-bit "perceptual" or gamma encoded. The latter is what I took to be "normal", as the others have been unused for years, as far as I know. I can't find any books that talk about the linear-density encoding, except that the HDR book talks about the advantage of log encoding over gamma encoding for very wide DR images; they do mention linear (p.16) to clarify that it is not used. There's pretty much nothing about IPTC in books, since it's a standard for a rather narrow field. Everything in books about images is based on what I called "normal" computing, assuming gamma encoding either implicitly or explicitly. Like Table 2.8 here that gives the gamma curves for some standard RGB color spaces. Are there any linear RGB color spaces that are standardized? Only scRGB as far as I know, and that's 16-bit only, I think. Dicklyon (talk) 20:58, 17 March 2008 (UTC)
I've just discovered, from your Microsoft link and this one that the Windows API does in fact do a bunch of stuff with 8-bit linear encodings of RGB, as you were saying. So, we need to represent that in a way that does not confuse it with the "normal" gamma-encoded RGB that is found in image files such as JPEG, PNG, and TIFF. Interestingly, I can't find any book that mentions D3DPRESENT_LINEAR_CONTENT, so maybe this option just gets ignored? In any case, let's not let Windows cause us to crap up the contents of these articles. Perhaps we need to split off a separate article on gamma and RGB in Windows? Dicklyon (talk) 21:08, 17 March 2008 (UTC)
See p.403 of this book for the de facto standard gamma=1.8 for the graphics and prepress industry that developed because that was the gamma of images on the Macintosh, and Fig. 17.7 for popular encoding nonlinearities. I don't find any mention of linear encoding in that book (or in most books on color imaging). Dicklyon (talk) 21:18, 17 March 2008 (UTC)

Ricardo, I don't think any of your edits from today really help this article. The excessive pickiness about avoiding suggesting that the overwhelming majority of images are gamma-encoded seems odd, and the code sample you added just clutters up the page without revealing anything a user is likely to care about. I don't understand the section re-ordering, and the rest of your changes are purely stylistic changes for change's sake (as far as I can tell). I'm tempted to just revert the lot of them. --jacobolus (t) 22:11, 17 March 2008 (UTC)

I agree. And since they seem to be mostly motivated by a misconception and a Windows focus, I'm going to go ahead and revert them all. Dicklyon (talk) 23:15, 17 March 2008 (UTC)
Ricardo answers:
JPEG is a mere transport layer. It doesn't care about pixels' intensity encoding (linear transmittance/reflectance, linear density, gamma compressed, or others). We read in the JPEG standard (ISO/IEC 10918-1, ITU-T Recommendation T.81), chapter 1 "Scope", page 1:
«NOTE – This Specification does not specify a complete coded image representation. Such representations may include certain parameters, such as aspect ratio, component sample registration, and colour space designation, which are application dependent.»(here)
Thus, not a single word is devoted to pixels' intensity encoding in the entire paper, as «colour space designation» is explicitly excluded from it. Also, we read in the JFIF File Format v1.02 specification, «JPEG File Interchange Format features», «Standard color space», page 2:
«The color space to be used is YCbCr as defined by CCIR 601 (256 levels). The RGB components calculated by linear conversion from YCbCr shall not be gamma corrected (gamma = 1.0). If only one component is used, that component shall be Y.»(here)
That is, a transparent (don't care) conversion between RGB ↔ YCbCr values. So any source claiming that JPEG/JFIF is a gamma-encoded file standard (including your reference, the «Digital Video and HDTV» book) is wrong or untrue. Period.
While IPTC is focused on news & press, that doesn't mean that ONLY in this field is the linear-vs-gamma issue managed the way this paper says. It reflects a broad established use in general DTP, which news & press shared. It isn't a "relatively narrow" field at all: they were employing JPEG even some time before Photoshop supported it. In reverse, your "universal sRGB" approach is almost true only in a very limited "my scanner/camera → my Mac/PC? → my printer" circuit. Professional still-image interchange circuits (such as those of news & press) need to properly tag every image with the appropriate color space (sRGB being one of them) through color profiles such as those of ICC, Adobe, etc. (or the IPTC Quantization Method DataSet when IIM/DNPR is employed).
About HDR: the only phrase in the book you cite, «High Dynamic Range Imaging» (page 16), «It is generally recognized that linear scaling followed by quantization to 8 bits per channel per pixel will produce a displayable image that looks nothing like the original scene. It is therefore important to somehow preserve key qualities of HDR images when preparing them for display.» (here), doesn't imply that linear encoding is not used at all, but simply that it isn't adequate, and must be somehow modified «when preparing them for display». So in reverse, it implies linear storage and software correction for display: my thesis. Indeed, page 79: «Table 2.8 lists several RGB standards, which are defined by their conversion matrices as well as their nonlinear transform specified by the γ, f, s and t parameters.» Nonlinear transform from (page 74): «(...) the encoded value v is normalized between 0 and 1, and is quantized in uniform steps over this range. (...) This is in contrast to a gamma encoding, whose relative step size varies over its range.» In the «2.10 Brightness encoding» (page 73) summary: «In the case of a quantized color space, it is preferable for reasons of perceptual uniformity to establish a nonlinear relationship between color values and the intensity of luminance.» The word is preferable, not mandatory. This is the rationale behind color spaces such as sRGB, no doubt. So if sRGB is needed, then bare RGB is linear. «2.9 Display gamma» (incomplete), page 71, says that LCDs are different from CRTs but behave the same to "provide some backward compatibility", and also states that «Many display programs perform incomplete gamma correction (i.e., the image is corrected such that the displayed material is intentionally left nonlinear).» That is, software correction again before display.
We are not discussing whether Microsoft has actually done things "well". I quote: «Traditionally, pixel pipelines assumed the colors to be linear so blending operations were performed in linear space. However, since sRGB content is gamma corrected, blending operations in linear space will produce incorrect results. Video cards can now fix this problem by undoing the gamma correction when they read any sRGB content, and convert pixel data back to the sRGB format when writing out pixels. In this case, all operations inside the pixel pipeline are performed in the linear space.» (here) Today, Microsoft is prevalent over the Apple and UNIX/Linux platforms in the personal computer market. Even if its internal linear RGB approach is "wrong" (from your point of view), it affects millions of users, programmers and vendors worldwide, so this fact can't be ignored (and not in a separate article). For the Win32 GDI equivalent of the DirectX D3DPRESENT_LINEAR_CONTENT flag, check the COLORADJUSTMENT structure (here) of the SetColorAdjustment function (here) and the HALFTONE flag of the SetStretchBltMode function (here). The HALFTONE flag activates the color management features of image blitting. When images are transferred to the screen and this flag is set, a gamma correction is performed by the system, following the gamma values set. Notably, these features aren't available on the Win95/98/ME platforms; so, another reason to perform gamma correction by software at the application level under Windows...
A curiosity: this book (your note, page 403) talks about "LINEAR sRGB" (the complete quote is: «...an example of a specific non-linear transfer function, encoding linear sRGB values to non-linear sRGB values»), just in the paragraph after the one you noted. When you said "I don't find any mention of linear encoding in that book", was it because you didn't read the entire page? Or maybe you have your enemy sleeping in your own bed? ;-)
About the Photoshop experiment: please read again the first paragraph of this entire section (that which starts with "Hello...").
So: PLEASE revert your edit to mine (or a better rewording of mine) to explain that, according to the sources: a) gamma is a native CRT feature, not intrinsic to general "digital image computing"; b) some digital cameras that adhere to sRGB generate gamma-encoded image files in JPEG format, providing compatible built-in sRGB color profiles apart from the JPEG/JFIF spec; and c) linear encoding in computing (under Windows, if you prefer) has (at least!) the same usage level as gamma encoding, so both linear still-image files and sRGB image files exist out there. These are the facts. Welcome to reality.
Finally, and again: I'm not a native English speaker, so my wording is rarely perfect, but you can easily trace the papers I provided (from reliable sources such as ISO, CCITT, ITU, IPTC, Microsoft and C-Cube), which demonstrate my thesis even at the ISO-standardized level. To everybody: third opinions welcome. If you are a Mac user, perhaps you should survey the "insignificant" Windows world, and if you are a Win user, perhaps you should learn some more about graphics on this platform, before taking sides.
Yours. Ricardo Cancho Niemietz (talk) 13:55, 18 March 2008 (UTC)
Ricardo, thanks for your comments. As to the Photoshop experiment, yes, you'll see different numbers in the screen image than in the file, because Photoshop is color managed. Whether they are larger or smaller depends on how your monitor profile relates to the image profile, or to the default profile for images that don't contain one. Have you set up your environment for a linear default profile for some reason? If so, please do show us how, and why, you did so.
I have no reason: I only use Windows! Even with the same profiles and files, the Windows version of PS surely writes to the video buffer differently from the Mac version. Remember that Microsoft assumes linear RGB internally, so programs (mine, Adobe's, or any other) must deal with it. The least-common-denominator VGA's linear DACs help this a lot. I didn't invent VGA, nor Windows; as a programmer, I suffer them most of the time... :-) But I must take them into account. Always. A practical case: I work with the BMP format many times. This format has no assumption about gamma (it is what Microsoft calls "device-independent": linear!). Suppose PS opens such a file and converts it to sRGB without notice: it deforms the raw binary content. If you save the implicit change as BMP and open it again, PS is not aware of the previous change (the BMP header doesn't hold any color profile or gamma info to flag it as sRGB), so it could apply the sRGB transformation again, etc. A handful of image file formats (such as GIF, PCX, BMP, TGA) lack any color management (CM) info or gamma assumption. Even in TIFF and JPEG, the CM tags are optional. A TIFF or JPEG file without these tags is treated as linear under Windows for sure. Ricardo Cancho Niemietz (talk) 18:48, 18 March 2008 (UTC)
No, that's absurd, and could not work if Windows always assumed linear data. And a linear DAC in a VGA display implies gamma-encoded data. Microsoft's so-called DIB data is of course actually very device dependent (only the format is device independent), but it has always been used with gamma-encoded data, primarily. You are confused. Please tell us your Photoshop color setup exactly so that I can try to interpret what's happening on your system; if you have some assumed color space for images with no profile, and automatic conversion to sRGB, then yes you can get messed up; but how do you tell it to assume linear? That's highly unusual. What's your default profile assumption for images without a profile? Dicklyon (talk) 19:11, 18 March 2008 (UTC)
Remember again the Microsoft sources about "linear". It implies everything: BMP, VGA DACs, GUI APIs, DirectX, etc. Your statement «it has always been used with gamma-encoded data, primarily» needs a good citation (against the Microsoft sources). I already sent you my current PS config via private e-mail some days ago. "Color management rules" set to "not active" (to avoid implicit sRGB by default). But this is only PS; I have more graphics software on my PC! Ricardo Cancho Niemietz (talk) 20:40, 18 March 2008 (UTC)
OK, I looked at your PS 6 setup. Even though you have color management disabled when reading files (so it will neither respect embedded profiles nor convert to your working space), you have the ColorMatch RGB working space, a gamma=1.8 space, and PS always applies the conversion from working space to monitor space via your system monitor profile (see [4], which says "in Photoshop 6, monitor compensation is always turned on: Photoshop 6 displays everything through your monitor profile, so it better be as accurate as you can make it."), so that explains why you see bigger numbers on screen. Not because your data are linear, but because they're interpreted in your "working space" as having a somewhat lower gamma than your monitor. Dicklyon (talk) 21:48, 18 March 2008 (UTC)
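Numerically, this explanation accounts for the bigger on-screen values without any linear data being involved (a sketch assuming pure power laws of 1.8 for the working space and 2.2 for the monitor, ignoring any sRGB-style toe):

    def working_to_monitor(code: int, working_gamma=1.8, monitor_gamma=2.2) -> int:
        """Re-encode an 8-bit value from a gamma-1.8 space for a gamma-2.2 monitor."""
        linear = (code / 255) ** working_gamma
        return round(255 * linear ** (1 / monitor_gamma))

    working_to_monitor(128)   # ~145: the eyedropper reads a higher value on screen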
It would be useful if you could point out any actual use of JPEG for storing linear 8-bit data. Certainly no image from any digital camera, and no image in any online photo collection, is like that. Maybe you could show us a linear 8-bit JPEG used in news or DTP. I've never seen one, and doubt that they exist, since I've never seen anyone talk about doing such a dumb thing. But, maybe I'm wrong, so show me, like you did with Microsoft's API.
There are some scanners out there that generate linear (or, if you prefer, untagged) TIFF and JPEG files. In the telephotography-to-digital transition of the news agencies, direct linear A-to-D and D-to-A was usually performed (as the IPTC papers state). I have some samples at home, but forgive me if I don't send them to you: they are the property of Agencia EFE and the Associated Press. But over the years a high volume of such files was created; I bet most of them are currently archived "as is". Get an idea: 99% of JPEG encoders/decoders are based on the freely available source code of the Independent JPEG Group. As is, that encoder doesn't create the proprietary Adobe so-called "APP13 marker", which can carry CM info such as color profiles. It adheres to the bare JPEG/JFIF standards which, as I proved before, are gamma unaware. I wrote extensions to the original code that deal with APP13, and surely others did too. But still there are many implementations that don't manage, or simply ignore, the APP13 marker, writing untagged JPEG files. Hardware implementations, such as those in digital cameras, mostly use sRGB, but digital cameras are relatively recent, so in press archives there is a heap of linear (untagged) JPEG files, covering at least 1990 to circa 2000. I left the Agencia EFE partnership in 2002, and they were still in use then. Ricardo Cancho Niemietz (talk) 18:48, 18 March 2008 (UTC)
There is a huge difference between linear and untagged. Can you show us any actual linear JPEGs? Your statement that agencies sometimes interpret untagged JPEGs as linear is not convincing, since it would be absurd to do so. Dicklyon (talk) 19:11, 18 March 2008 (UTC)
Remember again the IPTC and JPEG/JFIF sources. And finally, I'll send you a test card by AP in linear JPEG one of these days via private e-mail. You still do not trust me about the agencies... up to you. Ricardo Cancho Niemietz (talk) 20:26, 18 March 2008 (UTC)
The fact that gamma originally came from CRTs should not be used to dilute the fact that gamma is an important part of almost all modern image encodings, for good reasons unrelated to CRTs.
Nobody said that this is not true. But it is dangerous to suggest "digital image = gamma encoded". Let's say that there is an "ideal" situation and there are "real" situations (many, under Windows). Do you remember my "trend" proposal? I tried then to picture that the linear issue is mainly part of the past and that sRGB is going to be the most widely accepted encoding. But as of today, sRGB is not universal at all. Ricardo Cancho Niemietz (talk) 18:48, 18 March 2008 (UTC)
sRGB is not the issue. It is just a standard designed to capture the typical legacy situation before profiles and color management and standardization. It correctly captured the fact that most legacy 8-bit images were gamma=2.2 encoded. Dicklyon (talk) 19:11, 18 March 2008 (UTC)
"Many" better than "most"; it depends on the field. Remember IPTC again. When sRGB arose, their implementors choose a way; not an "incorrect" but a practical one. And it is not bad. But to infere that any file in the world is gamma-encoded is excesive. There are other fields in which linear imaging is done, as satellite or medical imaging, for which sRGB or the like is the worst option. Ricardo Cancho Niemietz (talk) 20:26, 18 March 2008 (UTC)
No, it's most, or almost all. Your text implied that linear was commonplace, and was the default when not tagged. Not so. Still you present no evidence that would help me believe that the IPTC linear encoding is ever used with 8-bit data. For satellite and medical imaging, gamma compression is also often used, as it equalizes the relative shot noise from a sensor; that's on raw data, not sRGB. Dicklyon (talk) 21:38, 18 March 2008 (UTC)
"Linear sRGB" is the concept of the linear values before gamma encoding to get to the actual sRGB standard data. It is not generally an "encoding", except in scRGB.
There was never such a thing as a "linear" standard, only bare linear RGB managed in many different ways by many hardware/software vendors over time. Color management is a relatively new trend. In the early '90s, things like gamma, white point, color temperature, standard illuminants, color chromaticities, etc., were only part of the workstation world, not of personal computers. Microsoft mostly ignored them, so third-party tools did things in many different ways. The real life I told you about before. Ricardo Cancho Niemietz (talk) 18:48, 18 March 2008 (UTC)
Bare RGB has pretty much never been linear, except for sometimes raw data from a sensor in a camera or scanner. Dicklyon (talk) 19:11, 18 March 2008 (UTC)
"Bare" is simply "bare"; to interpret it as linear or gamma-encoded is application dependent (remember JPEG spec). Ricardo Cancho Niemietz (talk) 20:26, 18 March 2008 (UTC)
If you can delineate situations that depart from the usual use of gamma in 8-bit encodings, with sources, then by all means we should mention those in the article.
The IPTC and Microsoft sources should be sufficient. I know you dislike them, but they are current papers, not obsolete ones. I propose the following outline: CRT precedents of gamma (TV/video only) / Power law and physical explanations / The gamma heritage: early computers, digital imaging and digital cameras / Advantages of gamma encoding / A view of issues on different platforms (better on Mac, worse on PC/Windows) / Current solutions in poor environments (including Windows): software correction and hardware correction / Present and future: standard values, color management and sRGB. (I give up on including the source code I added.) And always keep still images and video as separate items; it is important, since digital video is always 2.2 gamma-encoded (but still there are some "animation" formats, such as animated GIF, Autodesk FLC and Microsoft "RLE codec" AVI, that lack any assumption about gamma... a jungle, as I said). If you agree, we can start to work together. I yield to you the honour of starting! :-D Ricardo Cancho Niemietz (talk) 18:48, 18 March 2008 (UTC)
They merely say it can be done, not that it is done. I have no problem with those as sources, just with your interpretations. GIF files, like all other untagged image files displayed in browsers, are always interpreted as gamma encoded, whether their authors are aware of it or not; no browser will interpret an untagged image as linear. Dicklyon (talk) 19:11, 18 March 2008 (UTC)
Browsers aren't the best example of "good" color management. The sources say "it can be done", and "it was done", and "it is done". I repeat, I didn't invent anything. Simply, you are an unbeliever. Rely on these sources; they are good sources. They reflect practice, not theory. Ricardo Cancho Niemietz (talk) 20:26, 18 March 2008 (UTC)
Maybe the problem is that I keep thinking of photos, and you're thinking of graphics. So let me rephrase: nobody would use 8-bit linear for photographs. If I am wrong, please send an example. Dicklyon (talk) 21:38, 18 March 2008 (UTC)
Ricardo, none of those sources say anything about practice that I can tell. There is still no evidence of non-gamma-corrected images being used at all in "practice", much less evidence that such files are common. Dick, you can probably answer the part about the use of YCbCr in JPEG; I'm no expert on that. I've never heard of any kind of graphics (as opposed to photos) being stored commonly with linear gamma, either. In any case, I don't think that Dick's preferred language for the article necessarily implies that linear gamma is impossible. Just that it isn't used in practice. --jacobolus (t) 09:37, 19 March 2008 (UTC)

Digital images' binary pixel values are gamma compressed

Notice that this section heading contradicts Ricardo's assertion in the heading of the section above. I think the preponderance of evidence discussed above supports this. If there are documented cases of interest in which binary pixel data are NOT gamma compressed, then those should be mentioned. The allowance in IPTC to use linear encoding with 8-bit data would be worth a mention if someone could come up with a file encoded that way, or could cite some document by anyone actually claiming or suggesting to use it that way. Or the fact that Windows allows linear 8-bit data in some situations could be mentioned if someone thinks it's relevant. But to say things that undermine the understanding that essentially all digital images in use today and for the last 30 or more years are gamma compressed, or to suggest that images without an explicit gamma designation are linear, or to suggest that still photography is treated differently from video in these respects, would be a bad idea. Enough said. Dicklyon (talk) 02:42, 19 March 2008 (UTC)
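For reference, the gamma compression in question is standardized in sRGB's transfer function (constants from the IEC 61966-2-1 definition; the sketch itself is illustrative):

    def srgb_encode(linear: float) -> float:
        """Linear luminance in [0, 1] -> sRGB-encoded value in [0, 1]."""
        if linear <= 0.0031308:
            return 12.92 * linear
        return 1.055 * linear ** (1 / 2.4) - 0.055

    def srgb_decode(encoded: float) -> float:
        """sRGB-encoded value -> linear luminance; inverse of srgb_encode."""
        if encoded <= 0.04045:
            return encoded / 12.92
        return ((encoded + 0.055) / 1.055) ** 2.4

    srgb_encode(0.5)   # ~0.735: mid luminance is stored as a much higher code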

Images

Might we not also want to add the images at Wikipedia:Featured picture candidates#Is my monitor calibrated correctly? to this article? - Jmabel | Talk 16:45, 22 April 2008 (UTC)