Resolution reflects the number of consecutive possible raw scores that fit in a unit of spread; it is computed thus:
Resolution = Raw score quartile deviation / Smallest raw score unit
This means that for most tests, where the smallest raw score unit is a full point, the resolution equals the quartile deviation; for tests that consistently give half points, it is twice the quartile deviation.
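As an illustration only, here is a minimal Python sketch of this computation; the function names and the use of interpolated quartiles from the standard statistics module are assumptions of the sketch, not prescribed by the text:

```python
import statistics

def quartile_deviation(scores):
    # Half the interquartile range: (Q3 - Q1) / 2.
    q1, _, q3 = statistics.quantiles(scores, n=4)
    return (q3 - q1) / 2

def resolution(scores, smallest_unit=1.0):
    # Resolution = raw score quartile deviation / smallest raw score unit.
    return quartile_deviation(scores) / smallest_unit

scores = [8, 11, 13, 14, 16, 17, 19, 22, 25, 29]  # hypothetical raw scores
print(resolution(scores))                      # full-point test: equals the quartile deviation
print(resolution(scores, smallest_unit=0.5))   # half-point test: twice the quartile deviation
```

Passing a smallest unit of 0.5 doubles the result, reflecting that twice as many distinct scores fit within the same spread.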
The previous scaling to a number from 0 to 1 via a divider has been abandoned for being too arbitrary. For reference, that abandoned computation was:
Resolution = √(Raw score quartile deviation) / √24
For tests that consistently gave scores in half points, the quartile deviation was doubled before computing the resolution. The divider of 24 was chosen because 24 is by and large the greatest raw score quartile deviation found in tests, although very few are over 16 (in a normal distribution, a quartile deviation of 24 implies a standard deviation of about 36). Greater quartile deviations are mainly found in scales or scaled scores such as those of the SAT, GRE, protonorms, and protoranks, wherein about 66.7 is the largest encountered. The purpose of this procedure's leading to a number between 0 and 1 was to make the statistic comparable to the several other measures of test quality; with all of them in the same order of magnitude, they could easily be combined into a general higher-level measure of test quality. Without that need for combining, the quartile deviation itself would already have sufficed as a measure of resolution.
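For comparison, a sketch of the abandoned formula under the same assumptions; the function name is hypothetical, and the half-point doubling follows the rule stated above:

```python
import math

def former_resolution(quartile_deviation, half_point_scoring=False):
    # Abandoned 0-to-1 scaling: sqrt(QD) / sqrt(24).
    # Per the former rule, the quartile deviation was doubled first
    # for tests that consistently give scores in half points.
    if half_point_scoring:
        quartile_deviation *= 2
    return math.sqrt(quartile_deviation) / math.sqrt(24)

# A quartile deviation of 6 on a full-point test gives sqrt(6/24) = 0.5.
print(former_resolution(6))  # 0.5
```

Since 24 was taken as roughly the greatest quartile deviation found in tests, the result stayed at or below 1 in practice.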