Accreditation Reciprocity: Interchangeability Challenges Between Broadly Defined and Narrowly Defined Student Assessment Methods

Conference

2012 ASEE Annual Conference & Exposition

Location

San Antonio, Texas

Publication Date

June 10, 2012

Start Date

June 10, 2012

End Date

June 13, 2012

ISSN

2153-5965

Conference Session

TAC/ABET-related Outcome-based Assessment Methods and Models

Tagged Division

Engineering Technology

Page Count

12

Page Numbers

25.126.1 - 25.126.12

Permanent URL

https://peer.asee.org/20886


Paper Authors

biography

Kristine Paradis Bastian Indiana University-Purdue University, Indianapolis

Kristine P. Bastian is a graduate student earning an M.S. in Technology in the Department of Engineering and Technology at Indiana University-Purdue University, Indianapolis (IUPUI). Bastian holds a B.A. degree with high honors in industrial/organizational psychology (Purdue School of Science), an honors minor in leadership (Purdue Organizational Leadership and Supervision), a minor in interior design technology (Purdue Design Technology), and a human resource management certificate (Purdue Organizational Leadership and Supervision). Bastian owns an architecture/interior design company in Indiana and has 25 years of managerial experience in project management, product marketing, engineering prototype management, and purchasing management. Her interests are in change management and process improvement, and she is currently working toward Green Belt certification in Six Sigma. This is Bastian's first year as an ASEE student member, conference presenter, and attendee.

biography

Eugenia Fernandez Indiana University-Purdue University, Indianapolis

Eugenia Fernandez is an Associate Professor of computer and information technology and Chair of the Department of Computer, Information, and Leadership Technology at IUPUI. She is a member of the Indiana University Faculty Colloquium on Excellence in Teaching, a Fellow of the Mack Center at Indiana University for inquiry on teaching and learning, and an Editor of the Journal of Scholarship of Teaching and Learning. Her research focuses on the scholarship of teaching and learning, with emphasis on student learning with technology.

biography

Elaine M. Cooney Indiana University-Purdue University, Indianapolis

Elaine Cooney is Chair of the Department of Engineering Technology and professor of electrical and computer engineering technology at IUPUI. She is an ABET Senior IDEAL Scholar. Cooney is the former Director of assessment for the Purdue School of Engineering and Technology at IUPUI. Her areas of scholarship include engineering technology education assessment, analog circuits and signals, and RFID. Cooney is the author of RFID+ The Complete Review of Radio Frequency Identification. Currently, she is researching best practices in teaching and assessing critical thinking and problem solving.

Abstract

As most accrediting bodies have moved to outcomes-based assessment, many universities across the nation use various formats and processes to evaluate student work in demonstrating essential learning outcomes, i.e., the knowledge, skills, and abilities deemed vital to students' academic and social maturation. Technical knowledge, quantitative skills, core communication skills, and critical thinking abilities are just a few examples of the items faculty assess. Programs accredited by ABET-TAC often must collect data and assess students' work for regional accreditation purposes as well. A large Midwestern urban university has identified six broadly defined critical areas for campus-wide assessment purposes, termed Principles of Undergraduate Learning (PULs). ABET-TAC requires 11 more narrowly defined evaluative criteria (ABET-TAC a-k) for Engineering Technology.

There is overlap, and there are some gaps, between the two sets of evaluative criteria. Both require substantial time and effort for faculty to track and mindfully evaluate students' work. The challenge, therefore, is to maximize time and cost efficiencies and collect data that can be used interchangeably for the two accreditation programs when one measures broadly defined student learning outcomes (PULs) for regional accreditation and the other measures more narrowly defined student learning outcomes (ABET-TAC a-k) for the School of Engineering and Technology. Our research examines the relationship among three general PULs used across the university's campus and three ABET-TAC criteria for one Engineering Technology program. Specifically, we examine the ABET-TAC criteria for the Electrical & Computer Engineering Technology program.

The research consists of comparing and contrasting the following PULs: comprehend, interpret, and analyze ideas and facts; critical thinking; and intellectual depth, breadth, and adaptiveness. The three ABET-TAC criteria include: the ability to conduct, analyze, and interpret experiments and to apply experimental results to improve processes; the ability to identify, analyze, and solve broadly defined engineering technology problems; and the ability to write technical reports and present data and results coherently in oral and graphic formats. Finally, the research addresses the comparative results for the aforementioned educational outcomes for lower-level undergraduate courses (100-200 levels combined) and upper-level undergraduate courses (300-400 levels combined), respectively.

Collectively, 570 student educational outcome records from the PUL data set and the ABET data set will be examined for the Spring 2010, Fall 2010, and Spring 2011 semesters; all personal identifiers have been removed from the PUL and ABET-TAC data sets in accordance with Institutional Review Board (IRB) requirements. A strong positive correlation between the two sets of criteria is expected, using an ANOVA to examine data set similarities across lower-level and upper-level courses for the three semesters. The significance of this study is to provide supporting evidence demonstrating how faculty time and efficiency would benefit from a more interchangeable student learning outcome assessment tool for gathering similar data for accreditation support.
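The kind of analysis the abstract describes can be sketched in a few lines: correlate the broadly defined (PUL) rubric scores with the narrowly defined (ABET-TAC) rubric scores, then use a one-way ANOVA to compare lower-level and upper-level courses. This is a minimal illustrative sketch only; the column names, score scale, and data below are invented and do not come from the study's actual 570-record data set.

```python
# Hypothetical sketch of the abstract's analysis: correlate PUL and
# ABET-TAC rubric scores, then run a one-way ANOVA across course levels.
# All column names and values are invented for illustration.
import pandas as pd
from scipy import stats

# Synthetic stand-in for the de-identified student outcome records.
records = pd.DataFrame({
    "course_level": ["lower"] * 4 + ["upper"] * 4,
    "pul_score":  [2.0, 2.5, 3.0, 3.5, 3.0, 3.5, 4.0, 4.0],
    "abet_score": [2.1, 2.4, 2.9, 3.6, 3.1, 3.4, 3.9, 4.1],
})

# Strength of association between the two rubric sets.
r, p_corr = stats.pearsonr(records["pul_score"], records["abet_score"])

# One-way ANOVA: do mean ABET-TAC scores differ by course level?
lower = records.loc[records["course_level"] == "lower", "abet_score"]
upper = records.loc[records["course_level"] == "upper", "abet_score"]
f_stat, p_anova = stats.f_oneway(lower, upper)

print(f"Pearson r = {r:.3f} (p = {p_corr:.4f})")
print(f"ANOVA F = {f_stat:.2f} (p = {p_anova:.4f})")
```

In practice the study's records would replace the synthetic frame, and the same two calls would be repeated per semester and per criterion pair (e.g., a PUL matched against an ABET-TAC a-k item).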

Bastian, K. P., & Fernandez, E., & Cooney, E. M. (2012, June), Accreditation Reciprocity: Interchangeability Challenges Between Broadly Defined and Narrowly Defined Student Assessment Methods Paper presented at 2012 ASEE Annual Conference & Exposition, San Antonio, Texas. https://peer.asee.org/20886

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2012 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015