Checking for Errors when Calculating the C-score
Because the calculation of the C-score is rather complex, it must be thoroughly checked for errors. To check the correctness of your analyses, these hints may be helpful:
The C-score must be neither negative nor higher than 100. Even a single score outside this range indicates an error in the calculation. Carefully check your calculation algorithm (in your statistics program, MS Excel, Quattro, or whatever you use).
Test your scoring algorithm using the data from this fictitious response pattern. If your results deviate, your scoring algorithm may contain a mistake.
The mean C-score typically ranges between 10 and 30, though lower and higher average C-scores are possible. Very low scores may indicate problems with the interviewing procedure: Have the interviewers been properly trained? Was there enough time for everyone to answer the questions? Did the participants feel pressured to respond quickly? Were the participants tired because of preceding tasks? Did they have to fill out a speeded achievement test before the MJT (so that they felt the MJT was also a speeded test requiring "correct" answers)?
The hierarchy or order of the stage preferences should be as expected (monotonically increasing from stage 1 to stage 6; slight inversions of order between stages 1 and 2, as well as between stages 5 and 6, are acceptable).
The stage inter-correlations should form a quasi-simplex.
The C-score should systematically correlate with the preferences of the six stages as explained in the parallelism hypothesis.
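Some of these checks can be automated. The following is a minimal sketch in Python (using NumPy); the variable names `c_scores` and `stage_prefs` are placeholders for your own data, and the random arrays here only stand in for real questionnaire results. It tests the score range (#1) and the expected stage ordering (#4):

```python
import numpy as np

# Placeholder data -- substitute your real scored data here.
# c_scores: one C-score per participant.
# stage_prefs: one row per participant, one column per stage (1-6).
rng = np.random.default_rng(0)
c_scores = rng.uniform(5, 60, size=200)
stage_prefs = np.sort(rng.normal(0, 1, (200, 6)), axis=1)

# Check #1: every C-score must lie between 0 and 100.
assert ((c_scores >= 0) & (c_scores <= 100)).all(), "C-score out of range"

# Check #4: mean stage preferences should increase from stage 1 to stage 6;
# slight inversions between stages 1/2 and 5/6 are acceptable, so only the
# differences between stages 2-3, 3-4, and 4-5 are enforced strictly.
means = stage_prefs.mean(axis=0)
diffs = np.diff(means)            # differences between adjacent stage means
core_ok = (diffs[1:4] > 0).all()  # stages 2->3, 3->4, 4->5 must increase
print("Mean stage preferences:", np.round(means, 2))
print("Core order increasing (stages 2-5):", core_ok)
```

If either check fails on your data, revisit the scoring algorithm before interpreting any results.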
Points #4 to #6 refer to the criteria for validation and certification of the MJT. If your data deviate from any of these criteria, take this as a warning. Search thoroughly for possible causes of these deviations. Here are some of the major sources of error I have seen:
Mistakes in the process of transferring the raw data from the questionnaires into the computer (Were the data sorted by hand? Did the assistant confuse stages with item numbers? Were some questions left out or keyed in twice? etc.).
Mistakes in the computer program which scores the data (Are some variables confused? Are some variables left out or misplaced? etc.).
Idiosyncrasies of the design of your study. The validation criteria (see #4 to #6 above) require that there is substantial variance in your sample. Remember: if there is no variance, there can be no correlation!
New (Oct. 29, 2012): By "substantial" we mean that the inter-quartile range should be greater than 20 C-points. The inter-quartile range is defined as the difference between the upper and the lower quartile: Q75 - Q25. If the MJT version you use has already been shown to be valid (see the validation studies) and if you have done similar studies with inconspicuous results, you should be worried only if your data clearly contradict the validation criteria (#4 to #6 above).
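The inter-quartile range criterion is easy to compute directly. A short sketch, again with a small made-up sample of C-scores in place of real data:

```python
import numpy as np

# Hypothetical C-scores for illustration -- replace with your own sample.
c_scores = np.array([8.3, 12.1, 15.7, 22.4, 27.9,
                     31.0, 35.6, 41.2, 48.5, 55.0])

# Inter-quartile range: difference between upper and lower quartile.
q25, q75 = np.percentile(c_scores, [25, 75])
iqr = q75 - q25
print(f"Q25={q25:.1f}, Q75={q75:.1f}, IQR={iqr:.1f}")

if iqr <= 20:
    print("Warning: inter-quartile range is 20 C-points or less; "
          "there may be too little variance for the validation checks.")
```

With too little variance in the sample, the correlational criteria (#5 and #6) cannot be tested meaningfully, so this check should come before them.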
Please consult other sources for more details (e.g., "Introduction into the MJT...")