Constructing Multiple Choice Items

Multiple-choice items have the appeal of making scoring more efficient and feedback more timely. They are also versatile, in that they can measure all three facets of student academic characteristics: knowledge, skill, and affect.

Multiple-choice items can vary in difficulty and target every cognitive domain, from lowest to highest. Carefully written items can also be diagnostic, revealing how students might be misunderstanding a concept.

Generally speaking, however, it is far easier to write poor multiple-choice items than effective ones. The time saved in the grading process only comes after the significant time investment involved in collaboratively writing effective multiple-choice questions.

Further, students have the opportunity to select a correct answer that is presented to them but that they might never have produced on their own. Multiple-choice items also cannot assess a student's ability to creatively synthesize content related to academic learning targets.

Many teachers in our district have expressed concerns about this type of test item, citing the difficulty of creating "good" multiple-choice questions. Our goal here is to share some conventional wisdom about the effective construction of multiple-choice items.

Vocabulary / Terms

A multiple-choice question consists of these parts:

1.) The Stem: the setup of the question, consisting of a question, paragraph, passage, or perhaps an incomplete phrase, followed by;

2.) The Options: consisting of the answer and the distractors
  • Answer - the correct, "keyed" answer
  • Distractors - alternatives to the answer that are appealing to students who do not fully understand the content

The distractors should compete well with the answer. We can evaluate this during our item analysis, when we look at how frequently students select each option. This analysis also enables us to improve our assessments over time by editing certain questions and eliminating others.
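As a minimal sketch of what that frequency check might look like, the snippet below tallies option selections for a single item; the response list is hypothetical sample data, not from an actual assessment:

```python
from collections import Counter

# Hypothetical responses for one item: the option each student selected.
# Here 'A' is the keyed answer.
responses = ["A", "C", "A", "B", "A", "C", "C", "D", "A", "A"]

counts = Counter(responses)
total = len(responses)

# A distractor that almost no one selects is not competing with the
# answer and is a candidate for revision or elimination.
for option in sorted(counts):
    print(f"{option}: {counts[option] / total:.0%}")
```

Run over a whole class, this kind of tally quickly shows which distractors are pulling their weight and which are being ignored.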

Creating the Stem

1.) Identify the one indicator that the question is meant to assess.

2.) The stem should pose a singular problem, but that problem may contain multiple steps to arrive at the answer.

3.) Typically, a direct question is superior to an incomplete statement as it tends to be less confusing (especially in lower grade-levels).

4.) Item difficulty can be increased by using a "best answer" approach, as opposed to a "correct answer" approach.

5.) Avoid being wordy and ensure a proper level of vocabulary relative to students' reading ability.

6.) Avoid negatively phrased stems, as they often introduce needless confusion and thus unwanted bias;
  • Example: Which of the following is not the least likely method for avoiding inflation?

7.) Include the large majority of the question's information in the stem, not in the options.

8.) Avoid giving information in the stem that contains the answer to other questions on the assessment.

Coming Up with Distractors

1.) Aside from the answer, come up with three solid distractors. Having additional distractors often leads to them being poorly written.

2.) The distractors should be similar to the answer in terms of length and grammatical structure.

3.) The distractors should be plausible, such that students who do not completely understand the topic will be attracted to them.

4.) The distractors should be "diagnostic": if a majority of students select the same wrong answer (say, 'C'), it is most likely due to a particular misunderstanding of the concept being measured.

5.) In general, avoid "all of the above" options, as they evaluate test-taking strategies more than content understanding.

6.) Avoid using common phrases (perhaps from a textbook) that can serve as clues to the correct answer.


Resources Worth Investigating

Haladyna, T. M. (1999). Developing and validating multiple-choice test items (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.

Popham, W. J. (2002). Classroom assessment: What teachers need to know (3rd ed.). Boston: Allyn & Bacon.
