Location: Seattle, Washington
Publication Date: June 14, 2015
Conference Dates: June 14–17, 2015
ISBN: 978-0-692-50180-1
ISSN: 2153-5965
Conference Session: Software Engineering Constituent Committee Division Technical Session 1
Division: Software Engineering Constituent Committee
Page Count: 20
Page Numbers: 26.230.1 - 26.230.20
DOI: 10.18260/p.23569
Permanent URL: https://peer.asee.org/23569
Raymond S. Pettit, Abilene Christian University
Raymond S. Pettit teaches courses in programming, artificial intelligence, object-oriented design, algorithms, theory of computation, and related subjects in ACU's School of Information Technology and Computing. Prior to joining the ACU faculty, he spent twenty years in software development, research, and training at the Air Force Research Lab and NASA's Langley Research Center, as well as in private industry. His current research focuses on how automated assessment tools interact with student learning in university programming courses.
John Homer, Abilene Christian University
Dr. John Homer is an associate professor in the School of Information Technology and Computing at Abilene Christian University. His current research focuses on risk assessment, computational theory, and automated assessment tools.
Kayla Holcomb, Abilene Christian University
Ms. Kayla Holcomb is an undergraduate research assistant majoring in computer science in the School of Information Technology and Computing at Abilene Christian University. She is currently assisting with research in computer science education and digital assessment tools.
Nevan Simone, Abilene Christian University
Mr. Nevan Simone is an undergraduate research assistant majoring in computer science in the School of Information Technology and Computing at Abilene Christian University. He is currently assisting with research in computer science education and digital assessment tools.
Susan Mengel, Texas Tech University
Dr. Susan Mengel is an associate professor in the Computer Science Department of the Edward E. Whitacre Jr. College of Engineering at Texas Tech University. Her research interests include computer science education, computer security, and information retrieval.
Have Automated Assessment Tools in Programming Courses Actually Proven to be Helpful?

In recent years, we have seen an increase in the use of Automated Assessment Tools (AATs) in early programming courses. Many instructors have questions as to whether these tools actually help increase learning, and much effort is expended up front to create a high-quality tool and good individual assignments for the tool to run. Compared with how heavily these tools are used, little data on their usefulness has been published. This paper examines the few papers of that type. Other published data measures students' perceptions of using AATs, as well as instructors' perceptions of using them.

Given the amount of instruction now taking place online, whether through traditional university online offerings, MOOCs, or non-university-affiliated websites, there are not many papers showing an increase in learning. As the number of students per class increases, so does the grading load. For assignments that take a human grader a constant amount of time per student, manual assessment does not scale well unless the number of graders also increases at the same rate (while maintaining grading quality and consistency).

A second, and nobler, force encouraging the use of AATs is instructors' desire to give students access to both additional practice and immediate feedback. Most often, this practice takes the form of homework assignments, which may be mandatory or optional. While it is difficult for a human grader to give immediate feedback, an AAT can do so in a trivial amount of time. An automated tool also grades all students consistently, without bias toward or against specific students. Having one tool grade every assignment likewise avoids the normalization problems that always arise when many human graders must grade a large number of assignments.

Some researchers have attempted to perform more rigorous studies using control and experimental groups, measuring the difference in learning between the two groups. While this seems like the best approach to gauging the effectiveness of AATs, these experiments are difficult to perform. It is hard to change only one variable in experiments of this type: the number of homework assignments likely changes, or the amount of time the instructor spends at different points in the semester likely changes.

Other researchers have attempted to gauge the effectiveness of AATs by surveying students about their perceptions of the tools' usefulness, or by measuring instructors' perceptions of the use of AATs.

In this paper, we examine and present results from these three types of published data about the usefulness of AATs.
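As a concrete illustration of the immediate, uniform feedback loop the abstract describes, the sketch below shows a minimal auto-grader in Python. It is a hypothetical example, not the tool evaluated in any of the surveyed studies; the function and test names (grade_submission, student_abs) are invented for illustration.

    def grade_submission(student_fn, test_cases):
        """Run one student's function against the shared test cases and
        return a score plus per-test feedback, immediately and in the
        same way for every student."""
        feedback = []
        passed = 0
        for args, expected in test_cases:
            try:
                result = student_fn(*args)
            except Exception as exc:  # report crashes instead of hiding them
                feedback.append(f"{args}: raised {type(exc).__name__}")
                continue
            if result == expected:
                passed += 1
                feedback.append(f"{args}: ok")
            else:
                feedback.append(f"{args}: expected {expected}, got {result}")
        return passed / len(test_cases), feedback

    # Example: grading a simple absolute-value exercise.
    tests = [((5,), 5), ((-3,), 3), ((0,), 0)]

    def student_abs(x):  # hypothetical student submission; fails for negatives
        return x

    score, notes = grade_submission(student_abs, tests)
    print(f"score: {score:.0%}")  # -> score: 67%
    for note in notes:
        print(" ", note)

Production AATs add sandboxing, time limits, and richer feedback, but the core loop of running a fixed test suite and reporting results instantly is what lets a single tool stand in for an ever-growing pool of human graders.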
Pettit, R. S., & Homer, J. D., & McMurry, K. M., & Simone, N., & Mengel, S. A. (2015, June), Are Automated Assessment Tools Helpful in Programming Courses? Paper presented at 2015 ASEE Annual Conference & Exposition, Seattle, Washington. 10.18260/p.23569
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2015 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015