Work in Progress: Developing Single Point Rubrics for Formative Assessment

Conference: 2016 ASEE Annual Conference & Exposition
Location: New Orleans, Louisiana
Publication Date: June 26, 2016
Start Date: June 26, 2016
End Date: June 29, 2016
ISBN: 978-0-692-68565-5
ISSN: 2153-5965
Conference Session: Works in Progress: Assessment and Research Tools
Tagged Division: Educational Research and Methods
Page Count: 24
DOI: 10.18260/p.27221
Permanent URL: https://peer.asee.org/27221
Download Count: 4255

Paper Authors

John K. Estell, Ohio Northern University

John K. Estell is a Professor of Computer Engineering and Computer Science at Ohio Northern University. He received his M.S. and Ph.D. degrees in computer science from the University of Illinois at Urbana-Champaign, and his B.S.C.S.E. degree in computer science and engineering from The University of Toledo. His areas of research include simplifying the outcomes assessment process through use of performance vectors and evaluation heuristics, first-year engineering instruction, and the pedagogical aspects of writing computer games. John has held a variety of leadership positions, including currently serving as an ABET Commissioner and as Vice President of The Pledge of the Computing Professional; within ASEE, he previously served as Chair of the Computers in Education Division. He is a past recipient of Best Paper awards from the Computers in Education, First-Year Programs, and Design in Engineering Education Divisions, and has also been recognized for his contributions to the ABET Symposium. Dr. Estell is a Senior Member of IEEE, and a member of ACM, ASEE, Tau Beta Pi, Eta Kappa Nu, Phi Kappa Phi, and Upsilon Pi Epsilon.

Heather Marie Sapp, Ohio Northern University

David Reeping, Ohio Northern University (ORCID: orcid.org/0000-0002-0803-7532)

David Reeping is an undergraduate research assistant with a major in Engineering Education and a minor in Mathematics. He is a Choose Ohio First scholar, inducted during the 2012-2013 school year, and the recipient of the Remsburg Creativity Award (2013) and The DeBow Freed Award (2014) for outstanding leadership as an undergraduate student (sophomore). He is a member of the mathematics, education, and engineering honor societies: Kappa Mu Epsilon, Kappa Delta Pi, and Tau Beta Pi, respectively. He has extensive experience in K-12 curriculum development and develops material for the Technology Student Association's annual TEAMS competition. His research interests include the analysis and refinement of the first-year engineering experience, authentic projects and assessments, and P-12 engineering.


Abstract

This Work in Progress abstract describes initial efforts at a small, private Midwestern university to develop rubrics for assessing student work on a term project in a second-semester introductory programming course. For many years, the course has used the project theme of developing K-12 educational software; however, there was no client to satisfy. Consequently, the instructor could only provide feedback on the technical aspects of the implementation, and most of this feedback was summative. With the recent establishment of both an engineering education program and STEM outreach activities, the opportunity arose to provide the programming students with a meaningful client-driven design experience. The clients are education majors designing lesson plans for STEM outreach targeting fourth through sixth graders. The programming students are formed into teams, each working with a particular client to develop an application in support of that client's lesson plan. With a client actively working with each team, formative assessment techniques can now be employed.

For the first year of this updated version of the term project, traditional analytic rubrics, in which each dimension contains a descriptor for each level of the rating scale, were used. Post-activity assessment indicated a communications disconnect between the clients and the teams, which prompted a critical evaluation of the project's structure to determine ways to improve formative feedback. Upon reflection, it was determined that developing and using single-point rubrics would provide a formative assessment practice better suited to each student's learning experience. The single-point rubric provides only one descriptor for each dimension: the criterion for proficient performance. The remaining performance levels for the dimension are left blank on the form. This makes rubric construction easier, as only the performance expectations need to be stated; the laundry list of failures is no longer included. Moreover, student interpretation of the rubrics is streamlined, since each rubric states precisely what is considered proficient. Finally, the format makes formative assessment easier: the reviewer is encouraged to write, in the blank spaces provided under the Mastery, Developing, and Lacking rating scale columns for each dimension, constructive feedback in areas needing work, positive feedback where the standards have been exceeded, or both. The rubrics were developed by surveying the available literature for examples of assessment in application design, teaming, and entrepreneurship; these examples, along with existing assessment tools, were then synthesized by the investigators into appropriate criteria for the single-point rubrics. Initial feedback on their use has been positive: the client reviewers were observed to be more engaged in the process, and the students received both clear standards for their initial efforts and formative feedback to guide their subsequent efforts.
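
For illustration, one dimension of a single-point rubric laid out as described above might look like the following sketch. The dimension name and proficiency criterion are hypothetical examples supplied here for clarity; they are not taken from the paper's rubrics.

Dimension: User interface design (hypothetical example)
  Mastery:    [left blank -- reviewer notes where the standard was exceeded]
  Proficient: The interface is age-appropriate for the target grade level
              and directly supports the client's lesson plan.
  Developing: [left blank -- reviewer notes areas needing some work]
  Lacking:    [left blank -- reviewer notes areas needing substantial work]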

Current research efforts involve the refinement and validation of this first set of single-point rubrics. Additionally, it was observed that external reviewers were unfamiliar with the single-point concept; accordingly, training materials need to be developed that both explain how to apply single-point rubrics and assist in norming their use for greater inter-rater reliability. The long-term goals of this research are to make the developed single-point rubrics and associated materials available to the wider engineering education community, and to provide a process that others can use to develop their own single-point rubrics.

Estell, J. K., & Sapp, H. M., & Reeping, D. (2016, June), Work in Progress: Developing Single Point Rubrics for Formative Assessment Paper presented at 2016 ASEE Annual Conference & Exposition, New Orleans, Louisiana. 10.18260/p.27221

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2016 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference.