New Orleans, Louisiana
June 26, 2016
August 28, 2016
Well-written Student Outcomes (SOs) are a vital part of a successful improvement process. However, SOs are relatively broad statements of what students are expected to know. Performance Indicators (PIs) define more specific actions that can be measured directly, and they are useful tools for assessing the degree to which students achieve subsets of each SO. During a recent reaccreditation by ETAC/ABET, several engineering technology programs demonstrated successful use of PIs in their outcomes assessment and improvement processes.
Rubrics have been developed as tools to provide direct measurement of student performance on each of the SOs. The rubrics were designed to be used primarily in upper-level courses that were well aligned with the student outcomes. Instructors selected student work representative of a particular SO in their course. The selected work depended on the type of course and typically included items such as oral presentations, written lab reports, or problem solutions from exams, quizzes, or homework assignments. It was most effective to complete rubric scoring while grading the student work or as soon as possible afterward.
The precise wording of each PI was central to the successful use of the rubrics. Each rubric was limited to one page with three to five concise PIs that captured the vital aspects of the SO. Proper selection of the verbs in each PI was especially important in defining the expectations of students. Each PI was evaluated on a four-point scale: 1 – Not acceptable, 2 – Below standards, 3 – Meets standards, 4 – Exemplary. This simplified scale helped to maintain consistency among instructors, and it forced a decision between acceptable (meets standards) and unacceptable (below standards) performance. Each performance level contained a brief but thorough description of the expectations, clarifying the differences between the levels. The intent was to provide enough detail to distinguish between levels while allowing flexibility for evaluating student work across different projects and courses.
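As a minimal sketch of the structure described above, a one-page rubric can be represented as a small set of PIs, each pairing a concise, verb-driven statement with brief descriptions of the four performance levels. The PI wording and level descriptions below are illustrative assumptions, not taken from the paper's actual rubrics.

```python
# Illustrative rubric structure: 3-5 PIs per SO, each scored on a 1-4 scale.
# PI statements and level descriptions here are hypothetical examples.

LEVELS = {1: "Not acceptable", 2: "Below standards",
          3: "Meets standards", 4: "Exemplary"}

rubric = {
    "PI-1: Organizes the written report logically": {
        1: "No discernible structure",
        2: "Sections present but incomplete or out of order",
        3: "Clear, complete structure with minor lapses",
        4: "Clear structure that strengthens the argument",
    },
    "PI-2: Supports conclusions with collected data": {
        1: "Conclusions unsupported by data",
        2: "Data cited but weakly linked to conclusions",
        3: "Conclusions follow from the data",
        4: "Conclusions rigorously justified and qualified",
    },
}

def valid_scores(scores: dict) -> bool:
    """Return True if every PI score falls on the 1-4 scale."""
    return all(s in LEVELS for s in scores.values())
```

Keeping the level descriptions brief, as the paper recommends, is what lets the same rubric be reused across different assignments and courses.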
The total number of students, as well as the percentage of students, scoring 4, 3, 2, and 1 was used to evaluate the aggregate performance of the group. Data from students who did not pass a course were excluded; since those students needed to retake the course, their assessment data were collected when they passed. An initial benchmark was to have 70% of students scoring 3 or 4, indicating that at least 70% of the students met or exceeded acceptable standards. If fewer than 70% of students scored 3 or 4, overall student performance was below the benchmark, indicating potential for improvement on that particular PI. After baseline data were obtained from an initial evaluation, the 70% benchmark could be adjusted, if appropriate. As the assessment process evolved, different SOs had different benchmarks to reflect the level of difficulty of the specific assessment tool.
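The aggregation step above can be sketched in a few lines: count and percentage of students at each score, then compare the share scoring 3 or 4 against the benchmark (70% initially, adjustable per SO). This is an illustrative sketch, not the authors' actual tooling; the function and variable names are assumptions.

```python
# Sketch of the aggregate evaluation: counts and percentages per score
# level, plus a check of the share scoring 3 or 4 against a benchmark.
from collections import Counter

def aggregate(scores, benchmark=0.70):
    """scores: one rubric score (1-4) per student who passed the course.

    Returns (counts per level, percentage per level, benchmark met?).
    """
    counts = Counter(scores)
    n = len(scores)
    pct = {level: 100.0 * counts[level] / n for level in (1, 2, 3, 4)}
    meets = (counts[3] + counts[4]) / n  # fraction at "meets" or better
    return counts, pct, meets >= benchmark

# Example: 14 of 20 students (70%) score 3 or 4, so the benchmark is met.
counts, pct, ok = aggregate([4] * 5 + [3] * 9 + [2] * 4 + [1] * 2)
```

Because only pass/fail at the benchmark matters for flagging a PI, the same function can be reused with a different `benchmark` value as the process evolves.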
Jones, D. K., & Abdallah, M. (2016, June). Successful Use of Performance Indicators to Assess Student Outcomes. Paper presented at the 2016 ASEE Annual Conference & Exposition, New Orleans, Louisiana. https://doi.org/10.18260/p.25961
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2016 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference.