Section 6.5: Designing Quizzes/Exams/Tests
Educational experts underrate them. Instructional designers disregard them. Course authors overlook them. Learners fear them. We may cloak them as games or puzzles. We may put off writing them until there is not time enough to do them well. Whether we call them tests, assessments, quizzes, drills, examinations, competence monitors, or demonstrations of mastery, they, nonetheless, remain essential for gauging a learner’s progress. And they represent an opportunity for clever designers to engage learners and provide objective feedback (Horton, p. 215).
Horton provides the following good and bad reasons for administering a test:
| Good Reasons | Bad Reasons |
| --- | --- |
Has a student ever complained to you, or on the IDEA form, that tests were unfair or did not cover the material presented in class? It is important to monitor test results to identify problem areas where the majority of the class is struggling with a particular concept or question. When many students miss a question, it could be a sign that they do not understand the material or that they did not have enough time to answer it (Horton). Horton provides the following recommendations to help prevent common complaints (p. 272):
- Make sure questions are within the scope of the stated objectives or unit of learning.
- Make sure that any skills or knowledge a question depends on are mentioned in the prerequisites.
- Avoid culturally biased questions that rely on knowledge one culture might possess but another might not, as well as complex or tricky language that is especially difficult for second-language readers.
- Avoid unnecessary jargon, metaphors, and slang.
- Make sure time limits allow all students to finish the test and do not penalize second-language learners or those with vision or reading problems.
- Test your tests. For each question, ask:
  - Which objective does this question test?
  - Where in the lesson, lecture, or material was the learner taught this objective?
  - Can someone with subject-matter knowledge but minimal reading skills answer the question?
- Develop assessment questions based on Bloom’s Taxonomy (Armstrong, 2010, Bloom’s Taxonomy, Vanderbilt University Center for Teaching):
  - Test questions should be designed to evaluate your students’ ability to think at any of the six levels of abstraction described by Bloom, and often the same content can be assessed at different levels of cognition. Link to sample test questions utilizing the six levels of learning.
Palloff and Pratt (2010) assert “Bloom’s Taxonomy lays out levels of outcomes in terms of increasing complexity, which build on one another and to which activities and assessments can be mapped” (p. 18).