June 22, 2008
June 25, 2008
Educational Research and Methods
13.207.1 - 13.207.9
Apples and Oranges? A Proposed Research Design to Examine the Correspondence Between Two Measures of Engineering Learning
In 2004, ABET commissioned Engineering Change, a study of the impact of Engineering Criteria 2000 (EC2000) on the preparation of undergraduates for careers in engineering. One legacy of that study is a database of EC2000-specific self-reported student learning outcomes at 40 institutions, including precollege characteristics and engineering program outcomes for more than 4,300 graduates of the class of 2004. A second dataset, the Multiple-Institution Database for Investigating Engineering Longitudinal Development (MIDFIELD), compiles institutional data from nine universities spanning 1987-2005, including demographic records, academic transcript records, and Fundamentals of Engineering (FE) examination scores. In this paper, we propose a design that combines data from the two databases to assess the correspondence between the self-reported student learning outcome measures in the Engineering Change study and the MIDFIELD dataset's information on program-level performance on the FE examination, the only objective test of students' engineering knowledge.
Throughout its history, U.S. higher education has been mindful of questions about educational quality and institutional accountability. Formal accreditation mechanisms emerged in the early 20th century. Although the public has periodically engaged in these discussions, those who fund higher education – state and federal government, business and industry, and philanthropic foundations – have wielded the greatest influence.1 Financial accountability is one dimension of these concerns, but the evaluation and assessment of educational effectiveness have emerged over the past two decades as an important corollary.
The current period of emphasis on accountability in the U.S. began in the 1980s and is roughly contemporaneous with expressions of heightened concern about the quality of engineering education programs and practices. The pressure for greater accountability, and the ensuing national conversations about the appropriate metrics for judging and ensuring educational quality, shaped the policy context for these discussions and the deliberations of accreditors. The Council for Higher Education Accreditation (CHEA), which recognizes individual accrediting agencies, now endorses assessment of student learning outcomes as one dimension of accreditation. Its endorsement, however, followed changes in the accreditation criteria of many regional and professional agencies that had already reduced their emphasis on quantitative measures of available resources and mandated that judgments of educational effectiveness be based on measurable outcomes.2 Today, the higher education community generally accepts the need for assessment data to inform decision-making and acknowledges the need for rigorous methods that can provide this information to programs, colleges and universities, accreditation agencies, and state and federal governments.
This paper proposes a research design for a study of the correspondence between two publicly available assessment tools: the Fundamentals of Engineering (FE) examination and the student
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2008 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015