Conference Location: Baltimore, Maryland
Publication Date: June 25, 2023
Conference Dates: June 25–28, 2023
Division: Educational Research and Methods Division (ERM)
Page Count: 13
DOI: 10.18260/1-2--43019
Permanent URL: https://peer.asee.org/43019
Download Count: 206
Chinedu Emeka is a PhD Candidate in Computer Science at the University of Illinois at Urbana-Champaign. His research interests include Computer Science Education and improving assessments for CS and other STEM students. Mr. Emeka also has a passion for teaching CS, and he has received two awards for his teaching.
David Smith is a PhD candidate at the University of Illinois at Urbana-Champaign in the area of Computers and Education. He has experience teaching multiple computer science courses and has played a central role in creating curricula that are used for teaching and testing hundreds of introductory CS students at the University of Illinois. Prior to joining the University of Illinois, he completed his B.S. in Computer Science at Western Washington University.
Craig Zilles is an Associate Professor in the Computer Science department at the University of Illinois at Urbana-Champaign. His research focuses on computer science education and computer architecture. His research has been recognized by two best paper awards.
Matthew West is a Professor in the Department of Mechanical Science and Engineering at the University of Illinois at Urbana-Champaign.
Dr. Geoffrey L. Herman is the Severns Teaching Associate Professor with the Department of Computer Science at the University of Illinois at Urbana-Champaign.
Timothy Bretl is a Severns Faculty Scholar at the University of Illinois at Urbana-Champaign, where he is both Professor and Associate Head for Undergraduate Programs in the Department of Aerospace Engineering. He holds an affiliate appointment in the Coordinated Science Laboratory, where he leads a research group that works on a diverse set of projects in robotics and education (http://bretl.csl.illinois.edu/). He has received every award for undergraduate teaching that is granted by his department, college, and campus.
In this research paper, we examine grading policies for second-chance testing, in which students are given the opportunity to take a second version of a test for some amount of grade replacement. As a pedagogical strategy, second-chance testing bears some similarities to mastery learning, but it is less expensive to implement and avoids some of mastery learning's pitfalls. Second-chance testing gives students both an opportunity and an incentive to remediate deficiencies. Previous work has shown that second-chance testing is associated with improved performance, but it remains unclear which grading policies best support this testing strategy.
We conducted a quasi-experimental study to compare two second-chance testing grading policies and determine how they influenced students across multiple dimensions. Under the first policy, students could gain back a modest percentage of the points they lost on their first attempt, but they had insurance (i.e., their grades could not go down even if they scored worse on the retake). Under the second policy, students could gain back the vast majority of the points they lost on their first attempt, but they had no insurance (i.e., their grades could go down if they scored worse on the retake).
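To make the contrast between the two policies concrete, the sketch below shows one way such score-combination rules can be computed. The formula and the gain-back fractions (0.33 and 0.90) are hypothetical illustrations of a "modest" and a "vast majority" gain-back; the abstract does not specify the exact parameters used in the studied courses.

def second_chance_score(first, second, gain_back, insurance):
    """Combine two exam scores (0-100) under a second-chance grading policy.

    gain_back:  fraction of the score difference between attempts that
                counts toward the final grade (hypothetical values below).
    insurance:  if True, the final score never falls below the first attempt.
    """
    final = first + gain_back * (second - first)
    if insurance:
        final = max(final, first)  # the retake can only help
    return max(0.0, min(100.0, final))

# Policy 1: modest gain-back with insurance (hypothetical 33%).
# Policy 2: large gain-back without insurance (hypothetical 90%).
print(second_chance_score(70, 95, 0.33, True))   # 78.25
print(second_chance_score(70, 95, 0.90, False))  # 92.5
print(second_chance_score(70, 50, 0.33, True))   # 70.0 (insurance protects)
print(second_chance_score(70, 50, 0.90, False))  # 52.0 (grade goes down)

Under this formulation, the insurance policy caps the downside at the first-attempt score, while the no-insurance policy lets nearly the full retake result, good or bad, flow into the final grade.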
We varied the grading policies used in two similar sophomore-level engineering courses. We collected assessment data and administered a survey that queried students (N = 513) about their overall sentiment, studying habits, preparation, and anxiety under the two grading policies. We also interviewed seven instructors who use second-chance testing in their courses to collect data on why they chose specific policies. Finally, we conducted structured interviews with some students (N = 11) to capture more nuance about students' decision-making processes under the different grading policies.
Surprisingly, we found that students' preferences between these two policies were almost perfectly split. Students who preferred the policy with insurance reported that it better addressed their test anxiety. Students who preferred the no-insurance policy with larger comeback potential indicated that it better encouraged them to study to remediate deficiencies before the second exam.
We discuss implications for practice and conclude with recommendations on strategies for deploying second-chance testing in various contexts based on course demographics and instructors’ goals.
Emeka, C. A., Smith, D. H., Zilles, C., West, M., Herman, G. L., & Bretl, T. (2023, June). Determining the Best Policies for Second-Chance Tests for STEM Students. Paper presented at the 2023 ASEE Annual Conference & Exposition, Baltimore, Maryland. 10.18260/1-2--43019
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2023 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference.