This question relates to an application we are developing and how to
best design the UI for least ambiguity. The app in question is an image
processing system that will allow the user to 1) specify gamma
correction for imported images (eg. loaded from disk, scanner, etc) and
2) specify gamma correction for exported images (eg. saved to disk, etc).
The user can specify gamma, black point & white point, and our software
constructs a LUT. All fairly standard stuff. As a nicety, the UI also
graphs the gamma curve onscreen in the setup dialog.
For input gamma correction, the LUT is constructed in the standard way,
O = pow(I, 1/gamma)
The dilemma arises, however, when deciding how to construct the LUT for
output correction. There seem to be two schools of thought:
a/ Use the same formula as above, so that a gamma correction of 'n' has
the same effect for both input and output devices (ie. gamma > 1.0
lightens the image in both cases).
b/ Use the inverse formula O = pow(I, gamma), so that a gamma correction
of 'n' for an input device has the opposite effect to a gamma correction
of 'n' for an output device (the two corrections cancel out).
I have seen different software implement both of these behaviours, and
the available texts seem to me ambiguous as to which behaviour is
"correct".
I'd appreciate any thoughts or experiences that you can share.