Session 2149
Assessment Rubrics for TAC-ABET Interpersonal Skills
Elaine M. Cooney and Kenneth Reid
Purdue School of Engineering and Technology
Indiana University Purdue University Indianapolis
Introduction
Measuring non-technical skills (sometimes called “soft skills”), such as the ability to function on teams (ABET Technology Criteria 2000, Criterion 1.e.) or the ability to communicate effectively (Criterion 1.g.), can be a challenge for faculty trained in engineering technology who are not necessarily experts in communication or leadership. These skills have traditionally been measured by engineering technology faculty the same way they are evaluated in the workplace: “I know it when I see it.” While this method may produce a letter grade (“That presentation was pretty good – I’ll give it a B”), it does not truly assess the student, the presentation, or the degree program. Meaningful assessment of the student or of the presentation should include constructive feedback, and assessment of the degree program should include qualitative measurement of the necessary characteristics of a good presentation. Good assessment practice also recommends that data be “triangulated,” or measured in more than one way.
Gloria Rogers1 has recommended a variety of assessment techniques for a comprehensive assessment plan. Every assessment option has advantages and disadvantages, so the “ideal” method for measuring any one objective is the one that best balances program needs, validity, and affordability (in time, effort, and money). She goes on to say that it is “crucial to use [a] multi-method/multi-source approach to maximize validity and reduce bias of any one approach.” Of the many assessment methods Rogers recommends, the two used in this project are behavioral observations and performance appraisals. The crux of the matter is to take these behavioral observations or performance appraisals and turn them into hard data that can be recorded and tracked.
Rubrics can be used to translate observations into objective data. A rubric is a scaled set of criteria that defines a range of what acceptable performance looks like. “The criteria provide descriptions of each level of performance in terms of what students are able to do and values are assigned to these levels.”2 According to Bresciani, rubrics can be used in assessment to evaluate the effectiveness of entire programs or of individual student assignments, presentations, or papers.3
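To make this concrete, a rubric can be represented as a small data structure that maps each criterion to its scaled performance levels, so that an evaluator’s observation becomes a recorded value. The Python sketch below is a minimal illustration only; the criterion names, level descriptors, and four-point scale are hypothetical and are not the rubrics developed in this paper.

# A minimal sketch (not from the paper) of a rubric as data: each criterion
# has ordered performance levels, and each level carries a descriptor.
# Criterion names, descriptors, and the four-level scale are hypothetical.
ORAL_PRESENTATION_RUBRIC = {
    "organization": {
        1: "No discernible structure; audience cannot follow the argument",
        2: "Some structure, but transitions between ideas are unclear",
        3: "Logical order with a clear introduction and conclusion",
        4: "Compelling structure that reinforces the main message",
    },
    "delivery": {
        1: "Reads from notes; no eye contact with the audience",
        2: "Occasional eye contact; frequent filler words",
        3: "Consistent eye contact; mostly fluent delivery",
        4: "Engaging, confident delivery adapted to the audience",
    },
}

def record_score(rubric, criterion, level):
    """Translate one observed performance level into a recorded data point."""
    if level not in rubric[criterion]:
        raise ValueError(f"{level} is not a defined level for '{criterion}'")
    return {"criterion": criterion, "level": level,
            "descriptor": rubric[criterion][level]}

# Example: an evaluator observes one presentation and records two criteria.
print(record_score(ORAL_PRESENTATION_RUBRIC, "organization", 3))
print(record_score(ORAL_PRESENTATION_RUBRIC, "delivery", 2))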
This paper presents four rubrics developed to assess student assignments and behavior: written report, oral presentation, design project, and teamwork. These rubrics are not intended to be used to grade student work (although some instructors may choose to use them to help generate grades); they are instead meant to help track how students as a cohort are meeting the program objectives. The rubrics have been tested by several evaluators on both associate and baccalaureate level student work.
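For program-level tracking rather than grading, the recorded scores can be aggregated across a cohort, for example by reporting the share of students who reach an acceptable level on each criterion. The fragment below is a hypothetical illustration of such an aggregation under an assumed threshold of 3 on a 4-point scale; it is not a procedure prescribed by this paper.

from collections import defaultdict

def cohort_summary(records, acceptable_level=3):
    """Fraction of students at or above an acceptable level, per criterion.

    `records` holds one dict per student artifact (report, presentation,
    design project, ...), mapping criterion name -> observed level.
    The threshold of 3 on a 4-point scale is an illustrative assumption.
    """
    counts = defaultdict(lambda: [0, 0])  # criterion -> [met, total]
    for student_scores in records:
        for criterion, level in student_scores.items():
            counts[criterion][1] += 1
            if level >= acceptable_level:
                counts[criterion][0] += 1
    return {c: met / total for c, (met, total) in counts.items()}

# Example: three students evaluated against the same rubric criteria.
print(cohort_summary([
    {"organization": 3, "delivery": 2},
    {"organization": 4, "delivery": 3},
    {"organization": 2, "delivery": 3},
]))
# -> roughly {'organization': 0.67, 'delivery': 0.67}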