Robustness is the extent to which a test is resistant to score inflation over time. It is measured as the correlation between the scores and a time indicator, normally the through-numbered months of the scores (numbered such that January 1995 = 1, and so on). Other possible time indicators are the chronological ranks or the years of the scores, but months have proven to give the best result. Notice that robustness is a negative indicator (lower correlations mean greater robustness).
The previous scaling from 0 to 1 has been abandoned for being too arbitrary.
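As an illustration, here is a minimal sketch of the current measure in Python, assuming each score is paired with its submission date; the function names and the sample data are hypothetical:

```python
# Sketch of the robustness measure described above: the correlation
# between scores and their through-numbered months (January 1995 = 1).
# It is a negative indicator: lower correlation = greater robustness.
from datetime import date
from statistics import correlation  # Pearson's r, Python 3.10+

def month_number(d: date) -> int:
    """Through-numbered month, with January 1995 = 1."""
    return (d.year - 1995) * 12 + d.month

def robustness(scores: list[float], dates: list[date]) -> float:
    months = [month_number(d) for d in dates]
    return correlation(months, scores)

# Hypothetical submissions with slowly rising scores (score inflation):
dates = [date(1996, 3, 1), date(2001, 7, 15), date(2009, 2, 10), date(2018, 11, 30)]
scores = [128.0, 131.0, 135.0, 140.0]
print(robustness(scores, dates))  # close to +1, i.e. poor robustness
```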
For reference, the former definition was based on the reversed correlation between the chronological ranks of the test submissions and their scores. When chronological ranks were not known, another time indicator was used, such as the through-numbered months described above (which have the advantage of regular time intervals). Robustness was then computed thus:
Robustness = √(1 − r) / √2, where r is the correlation between the scores and the time indicators.
Higher numbers meant greater robustness, and a value of .71 meant the scores were perfectly stable over time. Values below .71 meant the scores were rising over time; values above .71 meant they were descending.
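For comparison, the abandoned 0-to-1 scaling is a one-line transformation of the same correlation; the function name here is hypothetical:

```python
import math

def former_robustness(r: float) -> float:
    # Abandoned 0-to-1 scaling: sqrt(1 - r) / sqrt(2), where r is the
    # correlation between the scores and the time indicators.
    return math.sqrt(1.0 - r) / math.sqrt(2.0)

print(former_robustness(1.0))   # 0.0  : scores rising over time
print(former_robustness(0.0))   # ~0.71: scores perfectly stable
print(former_robustness(-1.0))  # 1.0  : scores descending over time
```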