June 15, 2019
October 19, 2019
Computers in Education
When exams are run asynchronously (i.e., students take them at different times), a student can potentially gain an advantage by receiving information about the exam from someone who took it earlier. Generating random exams from pools of problems mitigates this advantage, but can introduce unfairness if the problems in a given pool are not of identical difficulty. In this paper, we present an algorithm that takes a collection of problem pools and historical data on student performance on those problems, and produces exams with reduced variance of difficulty (relative to naive random selection) while maintaining sufficient variation between exams to ensure security. Specifically, for a synthetic example exam, we can roughly halve the standard deviation of generated assessment difficulty with negligible effect on cheating cost functions (e.g., entropy).
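The paper's algorithm itself is not reproduced here, but the core idea — draw one problem per pool at random, subject to the constraint that total exam difficulty stays near a target, so that many distinct exams remain possible — can be sketched as follows. All function names, pool contents, and difficulty values below are illustrative, not taken from the paper; the exhaustive enumeration is only practical for small pools.

```python
import itertools
import random

def generate_exam(pools, target, tolerance, rng=None):
    """Pick one problem (by index) from each pool so that the summed
    difficulty is within `tolerance` of `target`, sampling uniformly
    among all qualifying combinations.

    `pools` is a list of lists of per-problem difficulty estimates
    (e.g., derived from historical student performance).
    """
    rng = rng or random.Random()
    # Enumerate every combination of one problem per pool.
    # (A real system would need something more scalable; this is
    # only an illustration of the constrained-selection idea.)
    candidates = []
    for combo in itertools.product(*(range(len(p)) for p in pools)):
        total = sum(pools[i][j] for i, j in enumerate(combo))
        if abs(total - target) <= tolerance:
            candidates.append(combo)
    if not candidates:
        raise ValueError("no combination within tolerance of target")
    # Uniform choice among the surviving combinations preserves
    # exam-to-exam variation (security) while bounding difficulty.
    return list(rng.choice(candidates))

# Hypothetical pools: three pools of three problems each, with
# difficulty estimated as, say, the mean fraction of points lost.
pools = [[0.2, 0.5, 0.8], [0.1, 0.4, 0.7], [0.3, 0.5, 0.9]]
exam = generate_exam(pools, target=1.2, tolerance=0.2,
                     rng=random.Random(0))
total = sum(pools[i][j] for i, j in enumerate(exam))
assert abs(total - 1.2) <= 0.2
```

Under naive selection, exam difficulty here could range from 0.6 to 2.4; the tolerance band confines it to [1.0, 1.4] while still leaving multiple valid exams to choose from.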
Sud, P., West, M., & Zilles, C. (2019, June). Reducing Difficulty Variance in Randomized Assessments. Paper presented at the 2019 ASEE Annual Conference & Exposition, Tampa, Florida. doi: 10.18260/1-2--33228
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2019 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015