Section 6.5: Designing Quizzes/Exams/Tests

Educational experts underrate them. Instructional designers disregard them. Course authors overlook them. Learners fear them. We may cloak them as games or puzzles. We may put off writing them until there is not time enough to do them well. Whether we call them tests, assessments, quizzes, drills, examinations, competence monitors, or demonstrations of mastery, they, nonetheless, remain essential for gauging a learner’s progress. And they represent an opportunity for clever designers to engage learners and provide objective feedback (Horton, p. 215).

Horton provides the following good and bad reasons for administering a test:

Reasons for administering a test

Good reasons:
  • Let learners gauge progress toward their goals.
  • Emphasize what is important and thereby motivate learners to focus on it.
  • Let learners apply what they have been learning – and thereby learn it more deeply.
  • Monitor the success of parts of the e-learning so that the instructor and designers can improve it.
  • Certify that learners have mastered certain knowledge or skills as part of a legal or licensing requirement.
  • Diagnose learners’ skills and knowledge so they can skip unnecessary learning.

Bad reasons:
  • Fulfill the stereotype that all e-learning courses have tests and all tests are unpleasant.
  • Reinforce the instructor’s power over learners: pay attention or else.
  • Torture learners. Training is supposed to be painful, and tests can ensure that it is.
  • Artificially bolster learners’ self-esteem by giving them easy tests with gushingly positive feedback.
  • Use a testing tool you paid a lot of money for.
  • You can’t think of any other way to add interactivity.

Has a student ever complained to you, or on the IDEA form, that a test was unfair or did not cover the material presented in class? It is important to monitor test results to find potential problem areas where the majority of the class is struggling with a particular concept or question. When many students miss the same question, it can be a sign that they do not understand the material or that they do not have enough time to answer (Horton); a simple item analysis, sketched after the list below, can help surface such questions. Horton provides the following recommendations to help prevent common complaints (p. 272):

  1. Make sure questions are within the scope of stated objectives or unit of learning.
  2. Make sure that any skills or knowledge the questions depend on are mentioned in the prerequisites.
  3. Avoid culturally biased questions that rely on knowledge one culture might possess but another might not, as well as complex, tricky language that is especially difficult for second-language readers.
  4. Avoid unnecessary jargon, metaphors, and slang.
  5. Make sure time limits allow all students to finish the test and do not penalize second-language learners or those with vision or reading problems.
  6. Test your tests:
    1. Which objective does this question test?
    2. Where in the lesson, lecture, or material was the learner taught this objective?
    3. Can someone with subject-matter knowledge but minimal reading skills answer the question?
  7. Develop assessment questions based on Bloom’s Taxonomy (Armstrong, P. (2010). Bloom’s Taxonomy. Vanderbilt University Center for Teaching):
    1. Design test questions to evaluate your students’ ability to think at any of the six levels of abstraction described by Bloom; often the same content can be assessed at different levels of cognition.
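
The monitoring advice above can be put into practice with a simple item analysis. The sketch below is a minimal, illustrative example rather than a method from Horton: it assumes each question has already been scored as 1 (correct) or 0 (incorrect), and the 50% flagging threshold and all names are placeholders you would adjust for your own course.

```python
# Minimal item-analysis sketch: flag questions most of the class missed.
# Assumption: `responses` maps each student to a list of per-question
# scores, where 1 = correct and 0 = incorrect. The student names and
# the 50% threshold are illustrative only.
responses = {
    "student_a": [1, 0, 1, 1],
    "student_b": [1, 0, 0, 1],
    "student_c": [1, 1, 0, 1],
}

FLAG_THRESHOLD = 0.5  # flag questions fewer than half the class answered correctly

num_students = len(responses)
num_questions = len(next(iter(responses.values())))

for q in range(num_questions):
    correct = sum(scores[q] for scores in responses.values())
    difficulty = correct / num_students  # share of the class who got it right
    status = "REVIEW" if difficulty < FLAG_THRESHOLD else "ok"
    print(f"Q{q + 1}: {difficulty:.0%} correct ({status})")
```

A flagged question then becomes a candidate for the checks in the list above: which objective does it test, where was that objective taught, and is its wording fair to second-language readers?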

Palloff and Pratt (2010) assert “Bloom’s Taxonomy lays out levels of outcomes in terms of increasing complexity, which build on one another and to which activities and assessments can be mapped” (p. 18).

[Figure: Bloom’s Taxonomy]