Computerized Testing: A Vision and Initial Experiences

In a large (200+ student) class, running exams is a logistical nightmare. Such exams require offering conflict exams and figuring out how to address the full range of learning goals in Bloom's taxonomy in a manner that can be efficiently graded to give students quick feedback. Typically, these hassles lead instructors to give a few large, multiple-choice-heavy exams, which can be suboptimal for student learning.

In this paper, we pursue a different vision, enabled by making a computer a central part of the testing process. We envision a computerized testing center, proctored 60-80 hours/week. When a course assigns a (computerized) exam, the professor specifies a range of days for the exam and each student reserves a time at their convenience. When the student arrives, they are ushered to a machine that has been booted into the specified exam configuration (many different exams run in the testing center concurrently). The student logs in and is guided through their exam. Each exam consists of a random selection of parameterized problems meeting coverage and difficulty criteria, so each exam is different. The networking on the machine is configured to prevent unauthorized communication, and the system displays and controls the remaining time for the exam.

We see two main advantages to this approach. First, we centralize all of the hassles of running exams, so course staff no longer have to manage the scheduling, staffing, and paper shuffling. As such, we drastically lower the effort of running exams, making more frequent, lower-stakes testing and second-chance testing practical.

Second, we greatly broaden the kinds of questions that can be machine graded. Most large classes rely at least partially on scantrons for automation, but many of the questions we want to ask aren't multiple choice. With a computer involved, you can ask (and auto-grade) any question that can be objectively scored: you can ask students to design circuits, do graphical problems like drawing acceleration vectors, write code, write equations, draw force diagrams, align genetic sequences, etc. Furthermore, as modern engineering is practiced in a heavily computer-supported environment, we can have students use industry-standard software to solve design and analysis problems. This is particularly compelling in programming classes, where students can compile, test, and debug their solutions before submitting them for grading.

In this paper, we describe our experiences with a prototype computerized testing lab, which we used to run all of the exams for a 200-student computer organization class. We discuss the mechanics of operating the testing lab, the work required of the instructor to enable this approach (e.g., generating a diversity of equivalent-difficulty problems), and the student response, which has been strongly positive: 75% prefer computerized testing, 12% prefer traditional written exams, and 13% had no preference.
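To make the exam-generation model above concrete (a random selection of parameterized problems meeting coverage and difficulty criteria), the sketch below shows one way such an assembler could work. It is a minimal hypothetical illustration, not the system described in the paper; all names (POOL, build_exam, make_addition_problem) are invented for this example.

```python
import random

# One generator per parameterized problem; a generator maps a random
# source to a concrete instance, so two draws of the "same" problem
# differ in their parameters.
def make_addition_problem(rng):
    a, b = rng.randint(100, 999), rng.randint(100, 999)
    return {"prompt": f"Compute {a} + {b}.", "answer": a + b}

# Pool entries: (topic, difficulty, generator).
POOL = [
    ("arithmetic", 1, make_addition_problem),
    # ... one entry per parameterized problem in the course pool
]

def build_exam(seed, topics, target_difficulty, tolerance=1, max_tries=1000):
    """Draw one problem per required topic (coverage), retrying until the
    summed difficulty lands within `tolerance` of the target."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        picks = [rng.choice([p for p in POOL if p[0] == t]) for t in topics]
        if abs(sum(d for _, d, _ in picks) - target_difficulty) <= tolerance:
            return [gen(rng) for _, _, gen in picks]
    raise RuntimeError("no draw met the coverage/difficulty criteria")

# Each student's exam is seeded individually, so every exam differs.
exam = build_exam(seed=20150614, topics=["arithmetic"], target_difficulty=1)
print(exam[0]["prompt"])
```

Seeding per student keeps exam generation reproducible, which matters for regrades and for auditing that different students received exams of comparable difficulty.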
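Similarly, the claim that any objectively scorable question can be auto-graded is easiest to see for programming questions. The following sketch (again hypothetical, not the paper's grader; grade_submission and student_popcount are invented names) scores a submitted function against instructor-written test cases:

```python
def grade_submission(student_fn, test_cases):
    """Return the fraction of instructor test cases the submission passes."""
    passed = 0
    for args, expected in test_cases:
        try:
            if student_fn(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crash on a case simply earns no credit for it
    return passed / len(test_cases)

# Example: a "count the set bits" question from a computer organization
# course; this function stands in for the student's uploaded solution.
def student_popcount(x):
    return bin(x).count("1")

tests = [((0,), 0), ((7,), 3), ((0b1010,), 2)]
print(grade_submission(student_popcount, tests))  # 1.0
```

Because the student sits at a real machine, they can compile and run their own tests before submitting, so the compile-test-debug loop the abstract mentions happens inside the exam itself.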
Zilles, C., Deloatch, R. T., Bailey, J., Khattar, B. B., Fagen, W., Heeren, C., Mussulman, D., & West, M. (2015, June). Computerized Testing: A Vision and Initial Experiences. Paper presented at the 2015 ASEE Annual Conference & Exposition, Seattle, Washington. 10.18260/p.23726