© Paul Cooijmans


Normalization is the process of converting the actual score distribution of a test to a hypothetical normal distribution, after which the mean and standard deviation can be conveniently used to indicate any score's position on the underlying scale. It is assumed that if a distribution is normal, its underlying scale is linear. The adult deviation I.Q. scale is a well-known example of a scale after normalization.

In order to normalize, one computes for every raw score the quantile or proportion of testees outscored, and converts those proportions via normal distribution tables to z-scores. When people speak of the standard deviation of I.Q. test scores, they almost always mean the standard deviation after normalization, not the original raw-score standard deviation, which need not correspond linearly to the I.Q. standard deviation at all. It is rarely, if ever, the case that the raw-score mean and standard deviation are projected directly onto the I.Q. scale, as the layman might think; raw score distributions tend to be too non-linear for that.
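The procedure can be sketched in a few lines of code. This is a minimal illustration, not any particular test's norming method: the sample of raw scores is invented, the midpoint convention for ties is one common choice among several, and the familiar adult deviation scale with mean 100 and standard deviation 15 is assumed.

```python
from statistics import NormalDist

# Hypothetical norm sample of raw scores (illustrative data only).
sample = [12, 15, 15, 18, 20, 21, 21, 21, 24, 27, 30, 33, 33, 36, 40, 44]

def proportion_outscored(raw, data):
    """Fraction of the sample scoring below `raw`, counting half of any ties
    (the midpoint convention keeps the proportion strictly between 0 and 1)."""
    below = sum(1 for x in data if x < raw)
    ties = sum(1 for x in data if x == raw)
    return (below + 0.5 * ties) / len(data)

def normalized_iq(raw, data, mean=100.0, sd=15.0):
    """Convert a raw score to a deviation I.Q.: proportion outscored -> z-score
    via the inverse normal c.d.f. -> mean + sd * z."""
    z = NormalDist().inv_cdf(proportion_outscored(raw, data))
    return mean + sd * z

for raw in sorted(set(sample)):
    print(raw, round(normalized_iq(raw, sample)))
```

Note that the spacing of the resulting I.Q.s need not mirror the spacing of the raw scores; that is exactly the non-linearity the normalization absorbs.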

An objection to this procedure is that it "depresses" the I.Q.s in the high range, compared to the old childhood ratio scores (mental age divided by biological age) which run into much higher numbers, even over 300, especially for very young children. But of course a higher I.Q. number does not mean more if its rareness is not also greater. In the absence of an absolute or physical way of measuring intelligence, we do not know whether normalization "depresses" I.Q.s at all.
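For contrast, the old ratio formula is simply mental age over chronological age times 100, which is how the very high childhood numbers arise. The ages below are invented for illustration.

```python
# Old-style ratio I.Q.: 100 * mental age / chronological age.
def ratio_iq(mental_age, chronological_age):
    return 100.0 * mental_age / chronological_age

# A hypothetical three-year-old performing at the level of a ten-year-old
# already exceeds 300 on this scale.
print(ratio_iq(10, 3))
```

Such a number carries no claim about rareness, which is the crux of the objection discussed above.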

- [More statistics explained]
