2007 ASEE Annual Conference & Exposition, Honolulu, Hawaii, June 24-27, 2007
ISSN: 2153-5965
Division: Chemical Engineering
Pages: 12.1481.1 - 12.1481.14 (14 pages)
DOI: 10.18260/1-2--2564
URL: https://peer.asee.org/2564
Ron Terry is a Professor of Chemical Engineering at Brigham Young University and an Associate in BYU's Office of Planning and Assessment. His scholarship centers on pedagogy, student learning, and engineering ethics, and he has presented and published numerous articles in engineering education. He is one of BYU's co-investigators for the NSF-funded National Center for Engineering and Technology Education.
Vincent Wilding is a Professor of Chemical Engineering at Brigham Young University. His research interests include thermophysical properties, phase equilibria, and environmental engineering. He received his B.S. degree in Chemical Engineering from Brigham Young University in 1981 and his Ph.D. in Chemical Engineering from Rice University in 1985.
Randy S. Lewis is Professor of Chemical Engineering at Brigham Young University and an Adjunct Professor of Chemical Engineering at Oklahoma State University. He received his BS and PhD degrees in Chemical Engineering from Brigham Young University and Massachusetts Institute of Technology, respectively. His research interests include biomaterials development and the utilization of renewable resources for the production of chemicals.
Danny Olsen is the Director of Institutional Assessment and Analysis at Brigham Young University and has worked in institutional research and assessment at BYU since 1986. Previously, he worked in various computing and analytical capacities in the manufacturing, banking, and defense industries. Dr. Olsen completed a Ph.D. in Instructional Science, with emphasis in research, measurement, and evaluation, and a Master's degree in Information Management, both at BYU.
The Use of Direct and Indirect Evidence to Assess University, Program, and Course Level Objectives and Student Competencies
Abstract
The Chemical Engineering Department at Brigham Young University (BYU) has partnered with BYU’s Institutional Assessment and Analysis unit to implement a number of assessment tools. These tools involve both direct and indirect evidence measures to assess university, program, and course level objectives and student competencies. Direct measurement tools include a mandatory-pass senior competency exam, instructor end-of-course proficiency evaluations, composite assessment of communication skills across several courses, and the California Critical Thinking Skills Test. Indirect tools include student end-of-course proficiency surveys, in-course minute paper surveys, the National Survey of Student Engagement, and university-conducted surveys of seniors, alumni, and employers.
This paper discusses a suite of direct and indirect assessment tools and their use to facilitate a comprehensive evaluation of student learning and of the learning environment necessary for a continuously improving educational process.
Literature Review
Assessment has the dual purpose of providing evidence that learning objectives are being met and providing feedback to guide the improvement of educational activities. A good assessment program uses a variety of tools to provide breadth and depth of analysis, yet remains efficient so that the time and effort devoted to assessment are used wisely.
In 2003 the Middle States Commission on Higher Education published a valuable guide on assessment entitled Student Learning Assessment: Options and Resources.1 This guide discusses a variety of direct and indirect assessment tools and their strengths and limitations, and provides insight into the development of assessment programs.
Direct assessment measures are those that provide direct evidence that a learning objective has been met. Such evidence demonstrates the degree to which a student has mastered a particular subject, acquired a specific skill, or developed a certain characteristic. These measures are most commonly applied at the course or program level, but they can also be applied at the institution level. Examinations are by far the most common direct assessment tools. Portfolios of sample work, such as writing samples and evaluations of oral presentations, are also valuable. Direct assessments are a necessary part of an assessment program, but by themselves they do not provide a complete analysis: they can show what was learned, but not how or why the learning took place. Indirect measures are better suited to this task and are indispensable for providing insight into the learning environment so that the learning process can be improved.
Indirect measures typically focus on predictors that are correlated to learning, but do not measure learning itself. The most common indirect assessment tools are surveys which solicit input from