Portland, Oregon
June 23, 2024
June 26, 2024
Educational Research and Methods Division (ERM) Technical Session 8
10.18260/1-2--48121
https://peer.asee.org/48121
I am a Lecturer in the Computer Science department at Smith College. I received my Ph.D. from the George Washington University under the direction of Professor Rahul Simha. I currently teach a variety of undergraduate courses and have taught graduate courses in the past.
My research currently focuses on STEM education, especially on identifying misconceptions, creating scalable and informative assessments, and using active learning techniques such as learning-by-teaching and peer learning.
In addition, I work on Human-Computer Interaction and how it might allow us to interact with virtual worlds and robots.
I enjoy collaborating with colleagues in other fields, where I get to combine CS with Biology or Physics and explore their data.
Topics of interest include:
Flipped Classroom techniques to teach programming
The benefits of games and puzzles in learning
Construction of fair, scalable assessments
Multimodal teaching with an emphasis on getting students to articulate their understanding
3D-Shape reconstruction and analysis
The use of embedded systems and machine learning to automate biology laboratory tasks
Booming enrollment in computer science has raised the need for efficiently gradable assessments, among them Multiple-Choice Question (MCQ) assessments. MCQs have well-known drawbacks: students can guess randomly, instructors gain limited insight into conceptual misunderstandings, and students become frustrated at being unable to explain their reasoning for partial credit. Recent years have seen several MCQ refinements that feature a two-tier structure, eliciting a justification in addition to a correct-choice answer. These justifications can themselves be structured as choices to enable rapid grading. This paper introduces a further refinement to the two-tier MCQ that captures more information about a student's understanding by also eliciting explanations for why the wrong answers are wrong. In addition, this paper introduces a simple new metric, the Justification effect (J-effect), that is easy to extract and apply in order to detect students who need help and to identify questions with design issues. This approach lets instructors provide automated yet rich feedback to students and pinpoint issues with test implementations, all while remaining easy to design, implement, answer, and grade. The study explores the testing and implementation of three trials over two semesters.
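The two-tier structure described in the abstract — a content choice, a selectable justification, and (in this paper's refinement) selectable reasons why the wrong answers are wrong — lends itself to automated grading. The following is a minimal sketch of what such a data model and grader might look like; the class names, field names, scoring scheme, and sample question are all hypothetical illustrations, not the instrument or metric used in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class TwoTierQuestion:
    """Hypothetical data model for a two-tier MCQ with the paper's
    extra tier: choice-form reasons why each wrong answer is wrong."""
    stem: str
    choices: dict                 # choice label -> answer text
    correct: str                  # label of the correct answer
    justifications: dict          # justification label -> justification text
    correct_justification: str    # label of the correct justification
    # For each wrong answer, candidate reasons it is wrong (also choices).
    wrong_reasons: dict = field(default_factory=dict)
    correct_wrong_reasons: dict = field(default_factory=dict)

def grade(question, answer, justification, wrong_answer_reasons=None):
    """Grade each tier separately so the instructor can see whether a
    student picked the right answer for the right reason (a sketch,
    not the paper's J-effect computation, which is not defined here)."""
    score = {
        "answer": answer == question.correct,
        "justification": justification == question.correct_justification,
    }
    if wrong_answer_reasons is not None:
        # Count how many wrong answers the student correctly explained away.
        score["wrong_reasons"] = sum(
            1 for choice, reason in wrong_answer_reasons.items()
            if question.correct_wrong_reasons.get(choice) == reason
        )
    return score

# Usage with a toy question (entirely invented for illustration):
q = TwoTierQuestion(
    stem="What does print(2 ** 3) output?",
    choices={"A": "6", "B": "8", "C": "9"},
    correct="B",
    justifications={"J1": "** is exponentiation", "J2": "** is multiplication"},
    correct_justification="J1",
    wrong_reasons={
        "A": {"R1": "6 would be 2 * 3, not 2 ** 3", "R2": "6 is a typo"},
        "C": {"R1": "9 would be 3 ** 2", "R2": "9 is 2 + 7"},
    },
    correct_wrong_reasons={"A": "R1", "C": "R1"},
)
result = grade(q, "B", "J1", {"A": "R1", "C": "R2"})
# result == {"answer": True, "justification": True, "wrong_reasons": 1}
```

Because every tier is a choice, the whole response remains machine-gradable while still recording why the student accepted the right answer and rejected the wrong ones — the property the abstract highlights.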
Frank Bolton, P., Lehr, L. R., Simha, R., & Lawson, M. (2024, June). The Justification Effect on Two-Tier Multiple-Choice Exams. Paper presented at the 2024 ASEE Annual Conference & Exposition, Portland, Oregon. 10.18260/1-2--48121
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2024 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015