
Integrated FCAR Model with Traditional Rubric-Based Model to Enhance Automation of Student Outcomes Evaluation Process


Conference: 2016 ASEE Annual Conference & Exposition
Location: New Orleans, Louisiana
Publication Date: June 26, 2016
Start Date: June 26, 2016
End Date: June 29, 2016
ISBN: 978-0-692-68565-5
ISSN: 2153-5965
Conference Session: Assessment & Accreditation in ECE
Tagged Division: Electrical and Computer
Tagged Topic: Diversity
Page Count: 18
DOI: 10.18260/p.27316
Permanent URL: https://peer.asee.org/27316
Download Count: 674


Paper Authors


Fong K. Mak, Gannon University


FONG MAK, P.E., received his B.S.E.E. degree from West Virginia University in 1983 and his M.S.E.E. and Ph.D. degrees in Electrical Engineering from the University of Illinois in 1986 and 1990, respectively. He joined Gannon University in 1990. He served as Chair of Electrical and Computer Engineering at Gannon University from 2001 to 2014 and as Program Director for the professional-track Gannon/GE Transportation Embedded System Graduate Program from 2001 to 2014. He is now a professor in the department.


Ramakrishnan Sundaram, Gannon University


Dr. Sundaram is a Professor in the Electrical and Computer Engineering Department at Gannon University. His areas of research include computational architectures for signal and image processing as well as novel methods to improve engineering education pedagogy.



Abstract

The Electrical and Computer Engineering (ECE) department at Gannon University has been through two successful ABET accreditations, in 2005 and 2011, using the Faculty Course Assessment Report (FCAR) model. In the 2005 accreditation cycle, the essential FCAR methodology was used; in the 2011 cycle, the concept of key assignments, with a well-defined process for generating justifiable objective evidence, was used to augment and further improve the adopted FCAR assessment model. In both cycles, student outcomes (SOs) were assessed directly, with supporting evidence, against the well-defined performance vector termed EAMU, where E stands for Excellence, A for Average, M for Minima, and U for Unsatisfactory. However, in neither cycle were refined performance indicators (PIs) defined for each SO. In the assessment model for the current cycle, a set of PIs is defined for each SO. We quickly realized, however, that if, for example, three PIs are defined for each SO, the evaluation effort becomes at least three times as time-consuming.
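For illustration only (this is not code or data from the paper), the sketch below shows one way key-assignment scores could be binned into an EAMU performance vector; the percentage thresholds and the Python layout are assumptions made here.

```python
# Minimal sketch: bin key-assignment scores into an EAMU performance vector.
# The thresholds (90/75/60) are illustrative assumptions, not values from the paper.

def eamu_vector(scores, thresholds=(90, 75, 60)):
    """Count how many scores fall into the E, A, M, and U bins."""
    excellent, average, minimal = thresholds
    vector = {"E": 0, "A": 0, "M": 0, "U": 0}
    for score in scores:
        if score >= excellent:
            vector["E"] += 1
        elif score >= average:
            vector["A"] += 1
        elif score >= minimal:
            vector["M"] += 1
        else:
            vector["U"] += 1
    return vector

# Example: key-assignment scores from one course section
print(eamu_vector([95, 88, 72, 64, 51]))  # {'E': 1, 'A': 1, 'M': 2, 'U': 1}
```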

To further improve the assessment model, the traditional rubric-based assessment model is augmented by classifying the courses in the curriculum into three levels: introductory, reinforced, and mastery. The traditional rubric-based model customarily includes only mastery-level courses in the program outcomes assessment. The drawbacks of looking only at mastery-level courses are: (1) a lack of the information needed at the lower levels to identify the root cause of a deficiency when the symptom appears in higher-level courses; and (2) the lack of a mechanism to compute a clear indicator, such as a Student Outcome (SO) performance index based on the Performance Indicators (PIs) of that SO, to facilitate automation of the evaluation process.

In this paper, a novel approach is presented to demonstrate how a traditional rubric-based approach can be integrated with the FCAR assessment approach to allow computation of the SO performance index from roll-up data. The performance index is calculated as a weighted average of the relevant PIs across the three levels of courses. Analytic results showing how the SO performance index measures up against the heuristic rules used previously are discussed. Finally, results showing how the SO performance index can be used to address the overall attainment of the SO expectation are presented.
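As a rough sketch of the roll-up computation described above, and assuming hypothetical level weights and data layout (neither is taken from the paper), an SO performance index could be computed as a weighted average of PI scores across introductory, reinforced, and mastery courses:

```python
# Minimal sketch of an SO performance index as a weighted average of PI scores
# across course levels. LEVEL_WEIGHTS and the data layout are illustrative
# assumptions, not the authors' actual weights or implementation.

LEVEL_WEIGHTS = {"introductory": 0.2, "reinforced": 0.3, "mastery": 0.5}

def so_performance_index(pi_scores):
    """pi_scores: {level: [PI scores in 0..1 from courses at that level]}."""
    weighted_sum = 0.0
    weight_total = 0.0
    for level, scores in pi_scores.items():
        if not scores:
            continue  # skip levels with no assessed courses this cycle
        level_avg = sum(scores) / len(scores)
        weighted_sum += LEVEL_WEIGHTS[level] * level_avg
        weight_total += LEVEL_WEIGHTS[level]
    return weighted_sum / weight_total if weight_total else 0.0

# Example: PI scores for one SO gathered from FCAR roll-up data
example = {
    "introductory": [0.82, 0.78],
    "reinforced": [0.74],
    "mastery": [0.80, 0.76, 0.71],
}
print(round(so_performance_index(example), 3))  # 0.76
```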

Mak, F. K., & Sundaram, R. (2016, June), Integrated FCAR Model with Traditional Rubric-Based Model to Enhance Automation of Student Outcomes Evaluation Process Paper presented at 2016 ASEE Annual Conference & Exposition, New Orleans, Louisiana. 10.18260/p.27316

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2016 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015