Salt Lake City, Utah
June 20, 2004
June 23, 2004
Pages 9.358.1 - 9.358.14
Session # 2620
CS1 and CS2 Programming Exams for Assessing Learning and Teaching
G. Stockman, P. Albee, L. Dillon, J. Oleszkiewicz
Michigan State University
In the Computer Science and Engineering Department at Michigan State University (CSE/MSU), we use timed programming exams in our introductory programming courses to assess both individual student programming skills and course instruction. Administration and design of these exams presented challenging problems. In this paper, we describe these problems and how we solved them in our programming exam system. Additionally, we describe the exams themselves and the particular outcomes under assessment. These courses at CSE/MSU use C++ for programming; however, the issues and methods discussed apply to any programming language.
Working on programming projects is perhaps the most common way for students to learn the skills necessary for programming. The use of individual programming projects in teaching is grounded in modern pedagogical theories, such as problem-based and active learning [1, 2]. Programming projects may be graded to help assess student progress in learning and the effectiveness of instruction, and also to motivate students to carry out the projects and to provide them constructive feedback. However, using programming projects in assessment is problematic. Some students spend an inordinate amount of time on programming projects or receive too much help in doing the work. Moreover, inappropriate copying of code developed by others is also common.
Written exams often provide the primary means for assessment in large introductory programming courses. Unfortunately, it is difficult to determine how well questions on written exams correlate with programming skills. Exams in large introductory programming courses are often multiple choice or short-answer. Such questions typically test knowledge of specific programming features, rather than the ability to devise a solution and realize that solution in code. Moreover, feedback from students indicates that they feel their performance on such exams is not a good measure of their programming skills. They find multiple choice questions to be "tricky" and complain of difficulty expressing themselves in short answers. In fact, communication skills may be a more prominent factor than programming skills in determining how well students perform on written exams. For these reasons, CSE/MSU has started using controlled programming examinations in the introductory programming courses for the purpose of assessing the programming skills of individual students and the adequacy of instruction in programming. We use programming exams to augment more traditional assessment techniques, including individual and small group programming projects and written examinations. There are two ancillary benefits of using programming exams in assessment. First, feedback from our
Proceedings of the 2004 American Society for Engineering Education Annual Conference & Exposition Copyright © 2004, American Society for Engineering Education
Albee, P., Dillon, L., Oleszkiewicz, J., & Stockman, G. (2004, June). CS1 and CS2 Programming Exams for Assessing Learning and Teaching. Paper presented at the 2004 Annual Conference, Salt Lake City, Utah. https://peer.asee.org/13021