Moral Competence Test (MCT)
1977 - 2020
Moralische Kompetenz Test (MKT)
The first article on the MCT (MUT/MJT) was published in 1978: Lind (1978)
Prof. Lawrence Kohlberg, Ph.D., Center for Moral Education, Harvard University:
"The methodology of Lind and his colleagues [...] I believe this to be a highly promising approach." (in: Lind et al., 2010, p. xvii).
Prof. Peter H. Rossi, Ph.D., University of Massachusetts, Amherst, Social and Demographic Research Institute:
"I was delighted to [...] learn of your EQ method [he refers to my method of "Experimental Questionnaires" on which the MCT is based, GL]. Your method has a lot more theory behind it than we have put behind the idea of the Factorial Survey and, with your permission, I would borrow some of your ideas." (personal communication).
Prof. Michael Gross, Ph.D., Department of International Relations, University of Haifa, Israel:
"The [MCT] produces two sets of scores in an effort to distinquish between the affective and cognitive aspects of moral judgment, that is, between the moral preferences which one has and the ability to use them consistently. In this way the MCT offers a significant improvement over the single score interview technique which conflates these two elements." (p. 248) more
Prof. Dr. Manfred Schmitt, University of Trier, Department of Psychology:
"The advantages of an experimental questionnaire [...] make the MCT attractive and, in my opinion, superior to the DIT." (p. 12; my translation, GL) more
Prof. Dr. Horst Heidrink, University Hagen, Department of Psychology:
"The large size of the Vst [validity coefficient] in both studies can be interpreted as a clear support for the MCT, and also for the validity of Kohlberg's theory." (p. 91; my translation) more
For these and other references using and supporting the MCT, see more
Moral competence is the ability to resolve problems and conflicts on the basis of one's moral principles through deliberation and discussion, instead of through violence and deceit, or through submitting to others.
Operational definition (MCT):
In order to solve problems and conflicts through thinking and discussion, one must be able to judge arguments by their moral quality instead of by their opinion-agreement or other non-moral criteria. This is what the MCT makes visible.
Dear MCT user,
thank you for your interest in the Moral Competence Test (MCT), formerly known as Moral Judgment Test (MJT).
The MCT breaks new ground in measurement methodology (see chapter 4 of Lind 2019):
More advice on research design, on instructing the participants (especially when re-tested), on reporting MCT findings, and on possible errors in interpreting MCT findings is listed and explained below.
If you do studies with the MCT, I would very much appreciate it if you could let me have your raw data for my MCT data base after you have used them, along with your reports in English or German.
(Last revision: March 1, 2020)
Glossaries with important terminology
Before you use the MCT
Before you use the MCT, you should have some basic understanding of moral psychology and should acquaint yourself with the theory behind the MCT (Lind 2019). Otherwise you risk misinterpreting your findings.
If you have specific questions, you can consult the section "Frequently Asked Questions" below for a quick answer.
Before you start planning your research or self-evaluation study, you may be interested to read the following advice, which is based on over 40 years of research and evaluation in the field of moral psychology and education, especially with the MCT:
Corrections of the MCT
1976 - The Moral Competence Test (MCT / MJT) was first published and used in 1976.
1977 - Some minor revisions of the arguments have been made on the basis of an empirical study with high school graduates ("Abiturienten-Befragung"). This 1977 version of the MCT is the standard version. This revision improved the validity of the test. Since then, a few linguistic changes have been made which do not affect the validity of the test (see below).
2001 - The response scale text has been changed from "Completely acceptable" and "Completely unacceptable" to "I strongly accept" and "I strongly reject." This is to make sure that "acceptance" means an action of the participant, and not a property of an argument.
2006 - Dr. Klaus Zierer developed a grade-school version of the MCT (3rd to 4th grade). This version consists of a new story (only one story). It is not directly comparable with the standard MCT.
2008 - Christine Naegele worked on a text simplification of the MCT (German master version), yet did not complete it. Meanwhile we have found that the standard version also works well with 8- to 11-year-olds if small changes are made:
2009 - Change from “The doctor complied with the wish of the woman.” to “The doctor decided to give her an overdose of morphine,” because we cannot know whether the doctor “complied” or whether he did it for another reason.
2012 - Some wording of the English version of the MCT has been improved without touching words with a moral connotation. This revision was proposed and carried out by Vitaliy Troyakov, M.A., Doctoral Student in Clinical Psychology, Massachusetts School of Professional Psychology, with approval by Dr. Lind.
2013 - German version: Replacement of “dass” (that) by “weil” (because) in the workers story (suggestion by Konstanze Schillinger, 10 years old). It makes reading easier.
2014 - In the English version, the word "ignore" was missing in item #15, as George Reeves Stevens has rightly pointed out to me. Thanks, George! The downloadable English version has been corrected accordingly. (Amazingly, hundreds administered it, and thousands filled it out, without noticing the omission.)
2014 - Change of name from Moral Judgment Test (MJT) to Moral Competence Test (MCT). In German from Moralisches Urteil-Test (MUT) to Moralische Kompetenz-Test (MKT).
Misuse of the MCT
Kohlberg's Stage scores (and the derived Moral Maturity Scores, MMS, WAS) are a confounded measure of moral orientations and moral competence. They indicate the highest stage which a participant uses consistently in the Moral Judgment Interview (Colby, Kohlberg et al., 1987).
The MCT provides a pure measure of moral competence. ... more
If a dilemma is omitted from, or replaced in, the MCT, or if new dilemmas are added, the MCT must be validated and certified as valid again. Otherwise the new test's validity is unknown, and the new test must no longer be called "MCT"; it must be given a new name. The reader must be informed if parts of the original MCT are used, and that the resulting scores cannot be compared with the standard MCT's C-score.
If changes of an argument of the MCT seem necessary, please inform the author. The revised test must be newly submitted to a validation study and be re-certified.
If the C-index is not reported in a research report, the reader should be informed about this omission.
Using the MCT for high stakes purposes is considered as a serious abuse of the MCT. The MCT has not been constructed and validated for these purposes. Moreover, the use of the MCT for selecting or grading people provokes attempts to falsify and fake the MCT, which diminishes the test's value for research and program evaluation.
Frequently Asked Questions
What does the MCT measure?
The MCT measures two aspects of human behavior:
The MCT is the only test which lets us assess both aspects simultaneously without conflating them into a single, mixed score, as is done in other scoring systems, like those of Lawrence Kohlberg and James Rest.
What is the standard version of the MCT, and what are the standard administration and instruction of the test?
The standard version of the MCT consists of two dilemmas (Doctor's Dilemma [mercy killing] and Workers' Dilemma [breaking into a firm]), constructed in its present form in 1977 and since then only slightly modified for stylistic reasons. The standard version and the certified foreign language versions have been rigorously validated and used in many studies around the world involving several thousand participants.
The standard administration is this:
The MCT rests on Lind's Dual-Aspect Theory of moral judgment behavior (see Lind, 2002; 2019), which borrows one of its two central psychological concepts -- the concept of cognition and affect being two inseparable, but distinguishable aspects (rather than two separable components or substances) -- from Spinoza, Piaget, and Kohlberg (though Kohlberg's writing seems to fluctuate between a one-component (= one substance) point of view on the one hand and a multiple-component point of view on the other). The other psychological concept, the concept of moral judgment competence, is taken directly from Kohlberg (1964), who defines it as "the capacity to make decisions and judgments which are moral (i.e., based on internal principles) and to act in accordance with such judgments" (p. 425). Interestingly, Darwin had already talked about "moral competencies" (see above). Yet only Kohlberg attempted to measure it, crossing the border between the cognitive and the affective domain, a border erected by many psychological theorists (e.g., Bloom et al., 1956; Rest & Narvaez, 1995).
The methodology of the MCT, the concept of the Experimental Questionnaire (Lind, 1980; 2006; 2019), has a cognitive science background, rooted in N. Anderson's concept of cognitive algebra, G. A. Kelly's Personal Construct Theory, W. S. Torgerson's concept of response-stimulus scaling, L. Guttman's measurement as structural theory, Egon Brunswik's diacritical method (Lind 2017), and L. Kohlberg's postulate of moral competencies or structure as a manifest pattern of behavior (Kohlberg 1984, p. 407).
For more references, see here.
No, the MCT is a competence test. Psychologists basically distinguish two kinds of psychological dispositions which they measure: competencies (or abilities, or cognitive structures) on one hand, and attitudes (inclinations, motivations, values) on the other. The most distinctive feature of these two kinds of psychological measures is whether or not the test's scores can be simulated "upward." Clearly, participants should not be able to fake competence scores upward, but attitude measures they can.
No, not at all, although some textbooks say so.
The MCT is fundamentally different from the DIT. The MCT is a moral competence test (see above), though it also allows the researcher to assess simultaneously participants' moral orientations (attitudes, preferences). In contrast to most, if not all other tests of moral development, the MCT contains a moral task, namely the task for the participants to apply their moral orientations consistently regardless of the opinion-agreement of the arguments to be rated. The design of the test is experimental, three-factorial, with pro and contra arguments balanced.
In contrast, the Defining-Issues-Test (DIT) by James Rest measures only the preference for post-conventional moral reasoning: "The P score of the DIT provides a percent score that indicates the amount of post-conventional thinking (in contrast to other kinds of thinking) preferred by the participant." (Narvaez, 1998, p. 15). The DIT contains no moral task. The DIT's P-score does not let one assess the preference for low-stage moral orientations.
It is not difficult to simulate DIT scores (P-score) upward (Emler et al., 1983) because the DIT is not a competence test.
A comparison of the two scoring techniques (P-score versus C-score), though only for the DIT, was made by Rest et al. (1997). Because the DIT does not contain a moral task and is not designed as a multi-factorial, N=1 experiment like the MCT, using the C-score with the DIT is not meaningful.
Rest, J.R., Thoma, S.J., & Edwards, L. (1997). Designing and validating a measure of moral judgment: Stage preferences and stage consistency approaches. Journal of Educational Psychology, 89(1), 5-28.
In sum, both tests differ fundamentally with regard to their methodology. The MCT is based on psychological theory and uses a multivariate experimental design in order to make moral competence visible in the respondents' pattern of ratings of carefully designed arguments. For moral psychology, consistency of responses is a sign of the participant's moral competence, not a property of the test.
No. Kohlbergian Stage scores are a mixture of cognitive-structural and affective-content aspects of moral behavior, whereas the MCT produces separate, pure scores for each of the two aspects of moral behavior (cf. Lind 2019). Kohlberg's scoring system produces one combined score for both aspects: a person gets assigned the highest of six "Stages" only (a) if he or she prefers the moral orientations typical for Stage 6 over all other Stages, and (b) if he or she does so with a certain consistency. Theoretically, the Stage scoring system produces only six values, assuming that a person's moral judgment is always on only one Stage, and not spread over several Stages. Practically, Kohlberg and his associates do not adhere to their theory, but calculate fractional Stages and so-called Moral Maturity Scores, ranging from 100 to 600 (or sometimes 500).
Only partly. The content and the experimental design of the MCT are based on research and theory about the nature and development of moral behavior (see Lind 2019). In principle, this idea -- basing a test on a well-researched theory and using an experimental design -- can be profitably used in all fields of psychological measurement. Yet often such a theoretical basis is missing, so that no experimental design is possible. A moral attitude test cannot be turned into a moral competence test merely by a special kind of scoring if no moral task is included in it. When C is calculated for a moral preference test like the DIT, the C means only some kind of cross-situational consistency of moral preferences, but not competence (Lind 1996).
The original German version of the MCT (formerly called "Moralisches Urteil Test", MUT) and validated foreign language versions can be obtained from the author. Contact:
In your request, please explain briefly your institutional affiliation and the purpose of the use of the MCT.
The MCT can be used freely by members of public institutions of education and research if it is not used commercially. All others must obtain written permission from the author(s).
No! This would be an abuse of the MCT. The MCT has been designed to answer important research questions like "What fosters moral judgment competence?" "How relevant is moral judgment competence for other kinds of behavior like cheating, helping, learning or decision-making?" And it has been designed for evaluating programs of moral and character education. (see Lind, 2002; 2019) ... more.
The scores for the two aspects of moral behavior -- moral competence and moral orientation -- are computed from individual response patterns. However, the C-scores of individual persons cannot be easily interpreted. Besides the target trait (moral competence), test-taking behavior can be determined by many other factors which can hardly be identified and singled out: fatigue, attitude toward the test and the test administrator, associations created by the particular dilemma, time pressure, priming by achievement tests given before, etc. Most of these factors can cause the C-score (the indicator of moral judgment competence) to decrease more or less. So we could err considerably if we took the C-score of an individual as his or her "true score" and judged him or her accordingly.
Therefore, individual MCT scores should be used very carefully, or not be used at all. Never should participants' names be made public, nor should individual scores be reported to the participants.
Yet, when the C-score is averaged across several people (N > 10), these factors usually cancel each other out. This is especially the case when the MCT is used to measure the effect of an educational program, like the effect of a KMDD-intervention or of a school year. In this case, the focus is on the difference between average pretest and average posttest C-scores. Unless there is a systematic bias in the research design, most disturbing factors are similar for the pretest and the posttest measurement, so that the difference in C-scores gives us a reliable measure of effect size, even when the scores at both times are depressed.
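As an illustration, the group-level comparison described above can be sketched as follows. The C-scores here are invented numbers that serve only to show the computation; "absolute effect size" simply means the posttest-pretest difference in C points:

```python
# Hypothetical pretest and posttest C-scores of one class (N = 11).
# The values are invented for illustration only.

def mean(xs):
    return sum(xs) / len(xs)

pretest = [12.0, 18.5, 9.0, 22.0, 15.5, 11.0, 20.0, 14.0, 17.5, 10.5, 13.0]
posttest = [19.0, 24.5, 15.0, 28.0, 21.5, 16.0, 27.0, 20.0, 23.5, 17.5, 19.0]

# Compare group averages, not individual scores: individual-level
# disturbances largely cancel out in the means.
effect = mean(posttest) - mean(pretest)
print(f"absolute effect size: {effect:.1f} C points")  # 6.2 C points
```

Even if both measurements are depressed by, say, test fatigue, the difference of the group means remains an interpretable effect estimate as long as the disturbance affects both occasions similarly.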
My advice: Do not even look at an individual's C-score. Inadvertently, you would make a false judgment about the person who filled out the MCT.
Only if none of the technical explanations for low C-scores cited above applies may your results indicate severe problems with the tested curriculum:
The construction of new dilemmas is possible but very difficult. Please note that the standard MCT is applicable in most cases, even though it may lack "face validity" in a particular context.
If you want to construct a new dilemma for the MCT, please read the guidelines.
After construction you can get your new dilemma certified (see certification procedure) in order to label it "certified MCT-extended." New dilemmas without a certificate should not carry the label "MCT" or "MCT-extended."
The criteria for validating new dilemmas for the MCT are as rigorous as for the standard MCT, to ensure that the new dilemma measures moral judgment competence. In order to get a new dilemma certified, the raw data of the validation study must be sent to the author.
The MCT is too difficult for my participants. I want to construct a new test with an easier topic ...
... Results are not very promising; I honestly think that the test was too difficult for them. I'm also not entirely satisfied with the translation. I've read that Kohlberg had a story about a little girl, so it might be better to develop a test with a subject more interesting to them and closer to their age.
Yes, the MCT is a difficult test because it is a competence test: for most people it is a very big challenge to deal with counter-arguments, especially where democratic culture is not yet highly developed. If people are very religious, the C-scores are generally very low. See the research by Iuliana Lupu (2009) on Romanian students, by Soudabeh Saeidi (2011) on Iranian students, and by Abdul Wahab Liaquat (2012) on Pakistani students. In these countries religion plays a big role and moral competence is low: http://www.uni-konstanz.de/ag-moral/mut/mjt-references.htm . Could this be the case in your country, too?
To protect privacy, we use a special code instead of the names of the participants. The code consists of the house number (last two digits, e.g., 05), the day of birth (e.g., 24, when the birthday is Oct. 24), the first two letters of the mother's first name, and the first two letters of the father's first name or, if the father is not known, of the grandfather's first name.
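A minimal sketch of this coding scheme. The function and field names are my own, not part of the MCT materials; adapt them to your questionnaire:

```python
# Build an anonymous participant code from: last two digits of the
# house number, day of birth, and the first two letters of each
# parent's first name. All names here are illustrative.

def participant_code(house_number, birth_day, mother_first, father_first):
    # If the father is unknown, pass the grandfather's first name instead.
    return (f"{house_number % 100:02d}"   # house number, last two digits
            f"{birth_day:02d}"            # day of birth, zero-padded
            f"{mother_first[:2].upper()}" # first two letters, mother
            f"{father_first[:2].upper()}")# first two letters, father

print(participant_code(205, 24, "Maria", "Josef"))  # 0524MAJO
```

Because the same participant produces the same code at pretest and posttest, the code allows matching repeated measurements without storing names.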
With repeated measurement, usually the problem is not a learning effect (i.e., artificial elevation of scores due to test knowledge) but fatigue and frustration that lowers the scores. When used for evaluating educational or therapeutic interventions in a pretest-posttest design, some subjects may respond with test-taking fatigue or frustration because of the fact that the test is administered twice within a rather short period of time (a few weeks or months apart). Such reactions often lead to lower C-scores and an underestimation of intervention or therapy effects.
According to our experience, this problem can be solved through proper instruction ... more
If you use the standard MCT without any modifications in the ordering of the items, a scoring service is available on request for a fee. For this the raw data must be submitted for scoring in this form:
In an early guideline, the stage code for the third and fourth pro argument in the doctor dilemma was false; it must be corrected as indicated in the table below:
MCT-data which have been scored in Konstanz are not affected.
Missing data: What if a participant has not filled out all 26 questions of the standard MCT?
If you use an online-version of the MCT which checks automatically for missing data and reminds the participant to complete his or her answers, missing data cannot occur.
Otherwise, missing data can be a problem for the scoring of the MCT. In my experience, missing data usually do not occur on purpose but are caused by distractions and fatigue. Therefore you should make sure in your instruction of the participants that they do not forget to answer all questions. Also, you should allow sufficient time for answering the MCT. In some cases missing data can be caused by the wording of the MCT if the participants are very young or have little reading proficiency (you are allowed to explain difficult words to the participant). Note that the wording of the arguments must not be changed. A change would require the modified test to be validated and certified again. However, the wording of the story can be carefully modified to enhance readability.
If the questions about the decision of the protagonist are omitted, the C-score can still be calculated. However, omission of these two questions is a problem if you want to calculate scores that involve "opinion agreement."
If only one or two responses to the 24 arguments are missing, we replace the missing data with the individual mean score calculated on the basis of the other 22 or 23 responses of that participant. This seems to be the most neutral way to replace missing data. (Do not forget to document the number of cases with missing data in your research report.) To make sure that this replacement has no biasing effect, you should run your most central analyses both with and without the modified data and compare the findings.
As a matter of convention, we discard from analysis all participants (cases) who have more than 2 missing responses. (Do not forget to mention this in your research report.) Their C-scores cannot be validly interpreted. In some instances, it may be interesting to analyze this phenomenon. If it cannot be explained as a technical problem, it may indicate a psychological process which deserves attention.
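The two conventions above can be sketched in a few lines. This is only an illustration of the stated rules (mean replacement for up to two gaps, discarding cases with more); the function name and the use of `None` for a missing rating are my own choices, not part of the official scoring:

```python
# Mean-replacement convention for MCT argument ratings.
# A rating is one of the 24 argument responses; None marks a gap.

def impute(ratings, max_missing=2):
    answered = [r for r in ratings if r is not None]
    n_missing = len(ratings) - len(answered)
    if n_missing > max_missing:
        return None  # more than 2 gaps: discard this case from analysis
    # Replace each gap with the participant's own mean over answered items
    mean = sum(answered) / len(answered)
    return [mean if r is None else r for r in ratings]

case = [2, -1, None, 3] + [0] * 20       # one gap among 24 ratings
print(impute(case)[2])                    # the individual mean of the rest
print(impute([None] * 3 + [1] * 21))      # None: case is discarded
```

As the text advises, document how many cases were imputed or discarded, and rerun your central analyses without the imputed cases to check for bias.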
Yes, the MCT is highly valid because it has been subjected to more rigorous validity analyses than most, if not all, other tests of moral development.
The MCT has been submitted to a more rigorous validation process than most psychological measures. The criteria chosen for checking its validity are so demanding that even minor defects of the test would have been detected. These criteria have also proven to be very effective in securing the validity of new dilemmas and the cross-cultural validity of more than thirty foreign language versions of the MCT (see Lind, 2016; certification procedure).
Moreover, it should be noted that the MCT has not been submitted to "item-selection" in order to increase the likelihood of confirming any of the predictions to be tested with the MCT. For example, no items have been omitted or included in order to maximize correlation with age. Thus the MCT is not biased for or against a specific assumption.
Note that validity is not just an attribute of a test but of the whole measurement procedure including its interpretation: "Validity is an integrated evaluative judgment of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of inferences and actions based on test scores or other modes of assessment" (Messick, 1989, p. 13, emphasis added). Hence, the MCT can claim validity only if one administers it according to the standard procedure described above, and if the user has sufficient psychological knowledge about the Dual-Aspect-Theory of moral behavior and development (see above) to be able to interpret MCT scores adequately.
Over the past 40 years the MCT has shown to be very useful for testing theoretical assumptions about moral behavior and development and about the effect size of certain educational programs (Lind 2002).
Messick, S. (1989). Validity. In R.L. Linn, ed., Educational measurement (3rd ed.), pp. 13-103. New York: Macmillan.
Yes, the MCT is highly reliable, not only in the conventional way but also in more meaningful ways:
- The MCT is reliable in the sense that neither its administration nor its scoring involve a "human factor" as is the case in open interviews.
- The MCT is reliable in the sense that the test instruction and the test stimuli do not change at all.
- The MCT is reliable in the sense that it is independent of the sample studied. Its scores do not change from sample to sample, as is the case when sample statistics are used to calculate individual test scores, as in Guttman scales, Rasch scales, z-transformed scores, and scores based on standard deviations in a sample.
- The concept of internal consistency does not apply because the MCT regards consistency information of the response pattern as a sign of a person's moral judgment competence but not as an attribute of the test. That is, inconsistency is not considered as "measurement error" or "unreliability" but as a sign of the participant's "manifest pattern of behavior" (Kohlberg 1984, p. 407; see theoretical background).
- The concept of stability does not apply because the MCT is an instrument to measure developmental change and change due to educational interventions. Such instruments must not be unalterable but sensitive to real changes.
- Hence, the MCT has not been submitted to "item analysis" to maximize internal consistency or stability, which would have inevitably lowered the validity and the usefulness of the test.
The MCT has been used with participants of age 8 and upward, provided the participant has average reading and comprehension capabilities. For younger children, or for children and adolescents with educational disadvantages, the MCT can and should be modified. This is especially necessary when the participants are not completely proficient in the language of the test, or lack sufficient education.
These modifications can be made without diminishing the validity of the MCT:
- Use larger print
Fake news on the MCT
Comment: The MCT is a behavioral test of moral competence, not a "self-report measure," as the authors falsely state. It measures how able the participants are to rate arguments pro and contra some decision in regard to the arguments' moral quality instead of their opinion agreement (Lind 2016). A self-report measure is a test which requires the participants to describe their own moral competence. It is a pity that scientists do not understand the difference. Moreover, the MCT does not measure "moral judgment" but moral competence.
Comment: The MCT is objective, but it is not based on Kohlberg's Stage theory (see Lind 2019). Some core postulates of Kohlberg's Stage model of moral development have been empirically refuted: Stages do not form structural wholes, but all forms of moral reasoning can be found in people at all Stages (Keller 1992). Moreover, moral competence has been shown to regress when it cannot be used for a longer time (Lind 2002; Hemmerling 2014).
Comment: All measurement instruments are "response methods": the participants respond to the questions of the researcher.
Comment: The MCT measures moral competence through its C-score or C-index, not through calculating a score for moral preferences.
Comment: Actually, of all measures which claim to measure moral competence, the MCT produces the lowest scores, because it is the only one which poses a difficult task to the respondent.
Comment: The MCT does not claim to measure stages, so it cannot lack validity in that respect. It is not clear which scores the author has calculated, hence we cannot know what the r = 0.10 means. Moreover, the correlation between subtests is not considered as an index of validity in methodological literature, and such correlations depend strongly on the variance of the scores in a given sample, and on the moral judgment competence of the participants. None of these factors has been considered by this author.
Comment: (a) The MCT has not been designed for individual diagnosis or selection purposes but for research and program evaluation. (b) Since the MCT's inception in 1977, hundreds of studies have been done which show that the MCT is a valid measure of moral judgment competence. (c) However, one must not, as the authors seem to do, confuse moral competence with moral preferences. (d) Indeed, moral preferences or orientations do not correlate with age because they hardly differ among people. (e) Moral competence does not correlate consistently with age, because it is not a function of biological maturation but of high-quality education (Lind 2002; 2016).
Comment: This is wrong. The MJT (now called MCT) measures moral competence, whereas the DIT measures the preference for principled moral thinking. A particular feature of the MCT is that it contains a difficult task, namely the task to rate arguments not by their opinion agreement but by their moral quality. Rest rejected presenting counter-arguments to the participants because he thought of them as "artificial" (Rest, 1979: Development in judging moral issues. Minneapolis, MN: University of Minnesota Press). He writes: "The artificiality of the [con] statement interfered with its usefulness in studying modes of reasoning. For the most part, information from these statements was useless and had to be eliminated from the analysis." (p. 89) See also above.
Comment: Actually, these fears are not supported by empirical evidence. The MCT has been applied with children as young as 8 years of age. For participants with reading problems, the test administrator may give help in understanding certain words. Moreover, since the MCT is not a speed test and participants can take as much time as they need, reading problems do not seem to affect the test scores.
Comment: (a) The "cognitive dimension" of the MCT does NOT result from weighting the acceptability of six statements (which six should that be?], nor does the affective dimension of the MCT result from calculating "modal preferences". Actually, the cognitive aspect of moral judgment behavior, i.e., moral judgment competence, is calculated through an intra-individual analysis of variance components; and the affective aspect is indexed by the average preference of the six moral orientations as defined by Kohlberg. (b) The MCT has not been designed to measure Kohlbergian Stages because its underlying dual-aspect theory is not compatible with the stage theory.
Comment: No other researcher has ever reported similar "findings," though the MCT has been in use since 1977 in hundreds of studies with thousands of subjects. In most studies not a single unscorable protocol has been found; where unscorable protocols occurred, they were very few. Online administrations are always complete because the program reminds the participant if he or she skips an item. These authors also report "stage-skipping" and "stage-regression." Because the MCT does not measure Stages at all, it is unclear which test these authors have used.
Comments: (a) A return rate of 50% is unusually high for a survey study. Only in our first survey with the MCT was the return rate higher (70%). Typically, return rates of survey studies are much lower. (b) The MCT seems to keep return rates high. Many respondents tell us that answering the MCT is much more interesting than answering many of the other scales which we included in our test batteries.
Comment: Actually, since 1977 the MCT has proven so valid and fruitful for research that it has needed no changes, only minor corrections. In contrast, the Defining Issues Test by Rest et al. underwent a major revision of the test content and several major revisions of the scoring system (from P-score to P-2 score, N-score, and U-score), which makes it hard to compare DIT findings across generations of research.
Comment: This critique reflects a lack of familiarity with the research literature; see the references compiled on this web-site. In detail: (a) Several renowned moral experts were involved in constructing the MCT by stage-rating its items. (b) The MCT is the only true measure of moral competence; how could it be compared with other such tests? (c) The MCT was used in a longitudinal study of university students in five different countries; no other test has been used in a similar way. In this and other studies, stimulating life experiences (such as opportunities for responsibility-taking and guided reflection) were assessed in many life areas outside the syllabus (Lind, 2000; 2019; Schillinger, 2006; Lupu, 2009; Saeidi, 2011). No other studies have assessed the learning environment so comprehensively; in DIT studies, mostly characteristics of the learner were assessed, and only a few characteristics of his or her environment. (d) Many moral education programs have been evaluated with the MCT, including pretests, posttests, follow-up studies, and control groups, and, of course, detailed reports have been published (see, e.g., Lind, 2002 and more here). (e) In contrast to most, if not all, other tests of moral development, the MCT is itself an experimental test of behavior. Moreover, several studies link moral judgment competence as measured with the MCT to the ability to behave morally in other settings (see here). Finally, two experiments have shown that the MCT's C-score cannot be faked upward the way the DIT's P-score can (see Emler et al., 1983).
Comment: Nunner-Winkler does not understand the MCT. (a) The MCT was constructed to measure moral competence, not moral attitudes. Many of the biases which the author counts as possible threats to validity apply only to attitude tests; the measurement of competencies can be biased by different threats (see above). (b) Yet these biases can be detected: there are three very rigorous criteria for checking the validity of MCT data, which allow us to detect the most severe biases.
Comment: Contrary to the authors' expectation, all studies show that people's moral competence is very low, with C-scores mostly between 0 and 30 (of 100 possible points). Moral competence is different from a preference for certain types of moral reasoning; therefore, a high preference for postconventional moral reasoning does not prove high moral competence.
How it began
In 1975, I started to think about ways to assess the competence aspect of moral and democratic behavior. Trained as an experimentally minded psychologist and educational researcher, I had learned that morality and democracy belong to the "affective" domain of human behavior, not to the "cognitive" or competence domain, which was exclusively occupied by such important things as mathematics and languages. So, at that time, it seemed pretty revolutionary to speak of competencies in connection with morality and democracy, or even to think about ways to measure them, had it not been for Piaget and Kohlberg, who did not trust this superficial separation of affect and cognition.
Initially, I conceptualized the Moral Competence Test* as a more economical alternative to Kohlberg's laborious interview technique, but it turned out to produce unique information about moral thinking and behavior which we could not obtain before. I developed the MCT on the basis of ideas taken from philosophers (e.g., Habermas, Apel), psychologists (Piaget's notion of affective-cognitive parallelism, Kohlberg's definition of moral judgment competence, G. A. Kelly's personal construct theory, H. H. Kelley's idea of subjective variance analysis), and cognitive test theorists (Torgerson's concept of response scaling, N. Anderson's cognitive algebra, Guttman's facet analysis). The idea was not only to measure human attitudes and behaviors but to assess the cognitive structures that underlie them. We cannot open up the brain and look inside it, but we can study these cognitive structures by observing a person's reaction pattern to carefully designed stimulus patterns. As a multi-factorially designed N=1 experiment, the MCT does exactly this. The MCT, we could say, is a low-tech, low-budget predecessor of brain scanning. Meanwhile we can say that the efforts invested in its construction and validation have paid off well.
For two years, we submitted the initial versions of the MCT to an intensive validation process, including a) an expert rating of the arguments (by Roland Wakenhut, Thomas Krämer-Badoni, Gertrud Nunner-Winkler, Rainer Döbert, Tino Bargel, Barbara Dippelhofer-Stiem, and others), b) think-aloud responses to a pre-version of the MCT by about ten students to check the test's affective quality (yes, all respondents showed emotional reactions), and c) several rather large empirical validation studies which let us check the construct validity of the MCT: preference hierarchy, affective-cognitive parallelism, quasi-simplex structure of the stage inter-correlations, and non-fakeability of the C-score. This very extensive and rigorous validation process was made possible because the MCT was used in the international longitudinal study on university education (project "Hochschulsozialisation"), generously financed by the Deutsche Forschungsgemeinschaft as part of Sonderforschungsbereich 23.
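Two of these validation criteria lend themselves to simple formal checks, sketched below. The function names, data layouts, and thresholds are my own illustrative assumptions, not the official certification procedure: the preference-hierarchy criterion expects mean acceptability to rise from Stage 1 to Stage 6, and the quasi-simplex criterion expects stage inter-correlations to decline as the distance between stages grows.

```python
def preference_hierarchy_ok(stage_means):
    """True if mean acceptability rises (weakly) from Stage 1 to Stage 6.

    `stage_means` is a list of six group-mean acceptability ratings,
    ordered Stage 1..6 (an assumed layout for this sketch)."""
    return all(a <= b for a, b in zip(stage_means, stage_means[1:]))

def quasi_simplex_ok(corr):
    """True if, within each row of a 6x6 stage inter-correlation matrix,
    correlations decline (weakly) as the stage distance grows."""
    n = len(corr)
    return all(
        corr[i][j] >= corr[i][k]
        for i in range(n)
        for j in range(n)
        for k in range(n)
        if abs(i - j) < abs(i - k)
    )
```

For example, a matrix with entries falling off geometrically with stage distance passes the quasi-simplex check, while raising one distant correlation above a nearer one makes it fail.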
In the meantime, the MCT has been administered to several hundred thousand subjects, and 25 new language versions of the MCT have been constructed, rigorously validated, and certified. Work on three more versions is in progress. For guidelines please see the link below. Other tests have been developed borrowing its basic design idea: the Moral Judgment Questionnaire by Roland Wakenhut for studies of military personnel, the "Moralisches Urteil-Praeferenz-Test" (MUP) by Ralf Briechle for studying high school students, the UKT by Hinder, a moral vignette questionnaire by Juan LaLlave, and the MCT-extended version by Patricia Bataglia.
The MCT is a rich source of scientific discovery. Some examples: At the outset, the MCT allowed us to study empirically Piaget's hypothesis of affective-cognitive parallelism, Kohlberg's postulate of quasi-simplex structure, and Rest's notion of a stage preference hierarchy. The findings were so clearly positive that we now use these three concepts as validation criteria for new test versions. Only recently, we (re-)discovered the phenomenon of moral segmentation with this instrument. For two decades, we had had no reason to doubt that the two MCT dilemmas elicit very much the same kind and degree of moral judgment competence (we found, as expected, only slight differences, namely that the C-score was somewhat higher, and Stage 6 somewhat more preferred, in the mercy-killing dilemma than in the workers dilemma). In the late nineties, studies by Cristina Moreno, Susana Patino, Patricia Bataglia, and others in Latin American countries found that this segmentation is related to a special kind of religiosity in these countries. Several studies (by Bernd Kietzig, Iuliana Lupu, Soudabeh Saeidi) are under way to test and further explore this hypothesis. In 1995, a study by Herberich (1996) confirmed Norm Sprinthall's theory that opportunities for responsibility-taking and guided reflection are important ingredients of an effective learning environment: students with such opportunities showed higher moral competence. In some fields of study, like medicine, where such opportunities are scarce, we found longitudinal regression of the C-score, corroborated by Klaus Helkama's longitudinal interview study in Finland and Slovackova's MCT study in the Czech Republic. Marcia Schillinger has explored these findings in more depth in a comparative study of the learning environments of medical, business, and psychology education in Germany and Brazil.
More and more, the MCT shows its strength also in program evaluation. It has played a key role in evaluating and continuously improving the method of dilemma discussion over the past 20 years. The fact that we now see effect sizes of r = 0.70 and higher in our intervention studies owes much to the MCT. Our findings in Konstanz have been confirmed in a carefully randomized study by Sanguan Lerkiatbundit and his colleagues in Thailand. In Greece, Katerina Mouratidou also uses the MCT in teaching evaluation, with good results. If we did not have an instrument to measure the competence aspect of moral behavior, we would not be able to detect the effects of educational programs aimed at fostering moral competencies. Kohlberg's interview does this to some degree, too, but it is often too laborious for this purpose. Most if not all other instruments deal only with moral preferences. Recently, we created an open-source Internet version of the MCT for use in educational program evaluation (http://www.uni-konstanz.de/itse/). This helps us to monitor the effectiveness of our teaching with regard to moral-democratic competencies and, of course, other teaching aims as well, with high-quality data at very low cost.
I hope that we can protect the MCT against abuse, misuse, or mindless use, so that the MCT will not lose its value for research and program evaluation. The MCT was constructed to improve our knowledge about moral behavior and development and to improve our methods and programs for moral and democratic education. It was never intended, and was not constructed, to serve as an instrument for differential diagnostics and personnel selection.
If the MCT were used for the selection and grading of people, it would, like many other tests, long since have been rendered useless for research and program evaluation: books would be published and workshops set up on how to cheat the MCT.
So please do not publish the MCT in books, articles, or on the Internet (reproducing it in research reports is OK). Please refer anybody interested in using the MCT to me. I will not hesitate to give the MCT away for free if it will be used in scientific research or program evaluation by public institutions.
To the validation & certification procedure:
To the Scoring Guide, downloadable publications, and various language versions of the Moral Competence Test (MCT):
I want to thank all of you who have used the MCT and contributed to the understanding of the C-index. I welcome any responses to, and comments on, the MCT.
The MCT can be used for free in public institutions for research and teaching. For use of the MCT by private institutions and in commercial projects (program evaluation and the like), written permission by the author is required. For more information on the MCT see http://www.uni-konstanz.de/ag-moral/
The author kindly requests a file with the raw data from all MCT studies for his archive. Please deliver them in the text format specified here.
See the list of certified MCT versions.