Randomized Multiple Choice Examination

Ian McLeod, Ying Zhang, Hao Yu and Valery Didinchuk
University of Western Ontario
Email: aim@uwo.ca

Key Words: Cheating; Perl; Sample survey design; Student evaluation; Teaching large classes

Abstract

The use of multiple-choice exams in which the order of the questions, as well as the order of the possible answers, is randomized independently for every student is discussed. This design greatly reduces the opportunity for cheating and has no serious drawbacks. Carefully tested and documented open-source software for creating and marking these exams is provided under the GNU General Public License. Other software, such as that often supplied with textbooks, does allow randomization in various ways; what is new here is that every exam is randomized and marked by our software. We expect this approach to be widely applicable, not only to most forms of multiple-choice examination but also in other applications, such as addressing the problem of form fatigue in sample surveys.

1. Introduction

The multiple-choice test is widely used in all school and university subjects and at all educational levels for measuring a variety of teaching objectives. A high-quality multiple-choice test requires careful planning, construction, and editing of the test's items and questions, including the possible response choices. Previous empirical studies have examined how the arrangement of items affects test performance (Brenner, 1964; Sax and Cromack, 1966; Marso, 1970; Tuck, 1978) and how test anxiety and stress relate to test performance (Allison, 1984).

In an effort to discourage cheating, instructors often prepare several versions of these exams. Students still have opportunities to cheat, however, especially if they know the system or pattern by which the versions are distributed across the seating in the room. With the advent of economical digital photocopiers, it is now very easy to produce multiple-choice examinations in which the order of the questions, as well as the order of the answers, is scrambled for every examination. The digital photocopier can print, collate, and staple all the exams. Each exam is given a unique Exam Code from 000 to 999; if more than 1,000 exams are required, some duplicates may be used. To mark these exams, a key file is kept that records, for each Exam Code, the random permutations used. The student is required to enter the Exam Code on the Scantron sheet along with their responses. A grade report is subsequently produced that indicates the correct answer and the student's response for each question. A response analysis is also produced that indicates the relative difficulty and suitability of each question.
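To make the design concrete, the following Perl sketch generates one key-file record per Exam Code. It is an illustration only, not the distributed software; the number of exams, the number of questions and choices, and the key-file layout are assumptions made for the example.

    use strict;
    use warnings;
    use List::Util qw(shuffle);

    my $n_exams     = 5;     # assumed run size; up to 1,000 distinct codes
    my $n_questions = 10;    # assumed number of questions
    my $n_choices   = 5;     # assumed choices per question (A-E)

    for my $i (0 .. $n_exams - 1) {
        my $exam_code = sprintf '%03d', $i;    # Exam Code 000-999

        # Question order and, independently for each printed question,
        # the order of its answer choices are randomly permuted.
        my @q_order  = shuffle(1 .. $n_questions);
        my @a_orders = map { [ shuffle(0 .. $n_choices - 1) ] } 1 .. $n_questions;

        # One key-file record: the Exam Code, the question permutation,
        # and the choice permutation used at each printed position.
        print join(' ', $exam_code,
                        join(',', @q_order),
                        map { join(',', @$_) } @a_orders), "\n";
    }

Each record suffices to reconstruct the layout of the corresponding exam, so the key file is all that marking requires beyond the answer key and the Scantron data.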

2. The Software

The software is written in Perl, which is freely available for most platforms from the Comprehensive Perl Archive Network or from ActiveState:

http://www.perl.com/CPAN-local/

http://www.activestate.com/
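
To indicate how the key file is used in marking, here is a minimal sketch, again an illustration rather than the distributed software. Given the permutations recorded for a student's Exam Code, each bubbled response is mapped back to the original question and choice before being compared with the answer key.

    use strict;
    use warnings;

    # Score one exam (illustrative; names and layout are assumptions).
    #   $q_order  - original question number printed at each position (1-based)
    #   $a_orders - per printed position, printed choice -> original choice (0-based)
    #   $key      - correct choices, in the original question order (0-based)
    #   $response - the student's choices, in printed order (0-based, undef = blank)
    sub score_exam {
        my ($q_order, $a_orders, $key, $response) = @_;
        my $score = 0;
        for my $pos (0 .. $#$response) {
            next unless defined $response->[$pos];    # skip blank responses
            my $orig_q      = $q_order->[$pos] - 1;   # question as authored
            my $orig_choice = $a_orders->[$pos][ $response->[$pos] ];
            $score++ if $orig_choice == $key->[$orig_q];
        }
        return $score;
    }

The same mapping supports the response analysis: tallying results by the original question number rather than by printed position accumulates counts in the authored order, so difficulty statistics refer to the same question across all randomized exams.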

3. Experimental Validation

4. Concluding Remarks

References

Allison, D. E. (1984), Test anxiety, stress, and intelligence-test performance. Measurement and Evaluation in Guidance 16, 211-217.

Brenner, M. H. (1964), Test difficulty, reliability, and discrimination as functions of item difficulty order. Journal of Applied Psychology 48, 98-100.

Haladyna, T. M. (1999), Developing and Validating Multiple-Choice Test Items. Lawrence Erlbaum Associates.

Hopkins, K. D. (1998), Educational and Psychological Measurement and Evaluation. Allyn and Bacon.

Marso, R. N. (1970), Test item arrangement, testing and performance. Journal of Educational Measurement 7, 113-118.

Sax, G. and T. R. Cromack (1966), The effects of various forms of item arrangements on test performance. Journal of Educational Measurement 3, 309-311.

Tuck, J. P. (1978), Examinee's control of item difficulty sequence. Psychological Reports 42, 1109-1110.