Standard error of measurement

© Paul Cooijmans

Explanation

Standard error of measurement is the standard deviation of the measurement error; that is, the standard deviation of an individual's scores if it were possible to take the test repeatedly without a learning effect between the test administrations. A rule of thumb for interpreting standard error: one's true score on the test in question lies with 95 % probability within plus or minus two standard errors of one's actual score. This is called "the 95 % confidence interval". With plus or minus one standard error one has the 68 % confidence interval; with plus or minus three standard errors, the 99.7 % confidence interval.
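
As a small illustration of this rule of thumb, the sketch below prints the three confidence intervals; the score of 140 and the standard error of 5 are hypothetical values chosen only for demonstration.

```python
# Hypothetical values for illustration: an obtained score of 140 and a
# standard error of measurement of 5.
score = 140
sem = 5

# One, two, and three standard errors give the 68 %, 95 %, and 99.7 %
# confidence intervals respectively.
for k, level in ((1, "68"), (2, "95"), (3, "99.7")):
    low, high = score - k * sem, score + k * sem
    print(f"{level} % confidence interval: {low} to {high}")
```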

In interpreting standard error one may also consider that its value really only applies to the middle part of the test's score range, and loses meaning at the edges. In general, standard error is only meaningful where the scale on which it is expressed is linear, which is not always and everywhere the case.

Standard error (σ_error) is computed by combining a test's reliability (r_xx) with its raw-score standard deviation (σ):

σ_error = σ × √(1 − r_xx)
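
A minimal sketch of this formula, using hypothetical values (a raw-score standard deviation of 15 and a reliability of .95) chosen only for illustration:

```python
from math import sqrt

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """Standard error of measurement from the raw-score standard deviation and the reliability."""
    return sd * sqrt(1 - reliability)

# With a standard deviation of 15 and a reliability of .95, the standard
# error comes out at about 3.35.
print(standard_error_of_measurement(15, 0.95))
```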

As an indicator of reliability, Cronbach's alpha is used when available. When Cronbach's alpha is not available, which is the case when a test's items are not scored uniformly, the mean of two different split-half reliabilities is used. When only one split-half reliability is available, which is the case with very short tests, that one split-half reliability is used.
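
For illustration, the sketch below computes Cronbach's alpha by the usual formula, α = k/(k − 1) × (1 − Σσ_item² / σ_total²), from a small, hypothetical matrix of item scores; the data and the function name are assumptions made for this example, not taken from any actual test.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (persons x items) matrix of item scores."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: five persons, four items scored 0/1.
# This particular pattern gives an alpha of 0.8.
scores = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
])
print(cronbach_alpha(scores))
```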

- [More statistics explained]
