
Measuring Fidelity of Implementation in a Large-Scale Research Study (RTP)


Conference: 2019 ASEE Annual Conference & Exposition

Location: Tampa, Florida

Publication Date: June 15, 2019

Start Date: June 15, 2019

End Date: June 19, 2019

Conference Session: Best Practices in Research & Assessment Tools for Pre-College Engineering Education

Tagged Division: Pre-College Engineering Education

Page Count: 25

DOI: 10.18260/1-2--33089

Permanent URL: https://peer.asee.org/33089

Download Count: 470


Paper Authors


Cathy P. Lachapelle, Museum of Science, Boston


Cathy Lachapelle leads the EiE team responsible for assessment and evaluation of our curricula. This includes the design and field-testing of assessment instruments and research on how children use EiE materials. Cathy is particularly interested in how collaborative interaction and scaffolded experiences with disciplinary practices help children learn science, math, and engineering. Her work on other STEM education research projects includes the national Women's Experiences in College Engineering (WECE) study. Cathy received her S.B. in cognitive science from the Massachusetts Institute of Technology and her Ph.D. in educational psychology from Stanford University.


Christine M. Cunningham, Museum of Science, Boston (ORCID: orcid.org/0000-0003-1922-7101)


Dr. Christine Cunningham is an educational researcher who works to make engineering and science more relevant, accessible, and understandable, especially for underserved and underrepresented populations. She is currently a Professor of Education and Engineering at Penn State University, where she focuses on developing research-based, field-tested curricula and professional development, and on related research. For sixteen years, she worked as a vice president at the Museum of Science, where she was the Founding Director of Engineering is Elementary, a groundbreaking program that integrates engineering concepts into preschool, elementary, and middle school curricula and teacher professional development. Her recent book, Engineering in Elementary STEM Education, describes what she learned. Cunningham previously served as director of engineering education research at the Tufts University Center for Engineering Educational Outreach, where her work focused on integrating engineering with science, technology, and math in professional development for K-12 teachers. She also directed the Women’s Experiences in College Engineering (WECE) project, the first national, longitudinal, large-scale study of the factors that support young women pursuing engineering degrees. At Cornell University, where she began her career, she created environmental science curricula and professional development. Cunningham has received a number of awards; in 2017 her work was recognized with the prestigious Harold W. McGraw Jr. Prize in Education. Cunningham holds joint B.A. and M.A. degrees in biology from Yale University and a Ph.D. in science education from Cornell University.



Abstract

[Study] is a randomized controlled study of an educational intervention, [Treatment], for students in grades 3-5, in which schools were randomized to teach either the [Treatment] or the [Comparison] curriculum. [Treatment] is a project-based learning curriculum designed according to social constructivist learning principles; [Comparison] is designed as direct (didactic) instruction with hands-on activities.

Key to measuring the true effect of an intervention like [Treatment] is measurement of teachers' fidelity of implementation (FOI), which improves and helps assure internal validity [1], [2]. The social work and health fields have undertaken considerable work to define implementation fidelity [3], [4], and science education researchers have built upon this work to study the implementation of science curricula, e.g., [5], [6]. In seeking to understand FOI, we rely on a framework defined by Carroll et al. [3], with some modification based on the work of O’Donnell [1]. In this framework, the causal effect of an intervention on outcomes depends primarily upon FOI [3]. FOI includes specifics of what content was addressed, the frequency and duration of lessons, and student participation, which together constitute the "structure" of implementation [4]. The relationship between the intervention as intended and adherence may be modified by other factors, including the complexity of the intervention, facilitation strategies (teacher guide, PD workshops, incentives, etc.), and participant response (teacher and student attitudes and judgments of value). Carroll et al. [3] categorize quality of delivery by the teacher, also called an "intervention process" [6] or "the way in which services are delivered" [4], as a potential moderator; however, we follow the majority of studies in considering it an element of FOI.

Mowbray et al. [4] advocated that researchers identify and develop valid and reliable measures for “fidelity criteria” of an intervention. The identification of critical components is the first step in developing fidelity criteria. Critical components must include both structural components (specifying elements of adherence) and process components (specifying quality of delivery and program differentiation). Program differentiation—the implementation of critical components unique to the intervention—is particularly important because it affects whether evaluation of the outcomes will find an effect of the intervention beyond that of the comparison [1].
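As an illustrative aside (not taken from the paper), such fidelity criteria can be thought of as a small data structure that keeps structural components separate from process components; the Python sketch below uses hypothetical component names.

# Illustrative only: a hypothetical organization of fidelity criteria for an
# intervention, separating structural components (adherence: what was delivered)
# from process components (how it was delivered). Component names are invented.
FIDELITY_CRITERIA = {
    "structural": [
        "lesson_portions_completed",   # content addressed
        "lesson_duration_minutes",     # frequency and duration of lessons
        "student_participation",
    ],
    "process": [
        "quality_of_delivery",
        "program_differentiation",     # critical components unique to the intervention
    ],
}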

In this paper, we describe our instruments for measuring fidelity and present evidence for the reliability and validity of their use in [Study]. The most important of these are the Engineering Logs, in which teachers were prompted to indicate which portions of each [Treatment] or [Comparison] lesson they completed, the duration and date of each lesson, and how they taught each portion of the lesson. The last of these indicates whether teachers were using a pedagogy more in line with [Treatment] or with [Comparison], and so serves as a measure of program differentiation. Our measure of the quality of teaching implementation, StudentJournalQuality, was coded from a random subset of six student engineering journals from each class. We provide both qualitative and quantitative analyses demonstrating the suitability of these measures.
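To make the described data flow concrete, the sketch below illustrates, in Python and with entirely hypothetical field names (e.g., portions_completed, pedagogy_score), how per-class structural adherence, instructional time, and a program-differentiation indicator could be aggregated from log entries, and how a random subset of six student journals per class could be drawn for quality coding. It is a sketch of the general approach under stated assumptions, not the study's actual instruments or scoring.

# A minimal sketch (not the authors' actual pipeline) of per-class fidelity
# summaries of the kind described above. All field names are illustrative.
import random
from statistics import mean

def summarize_logs(log_entries):
    """Aggregate one class's log entries into simple fidelity measures.

    Each entry is assumed to look like:
        {"lesson": "L1", "portions_completed": 3, "portions_total": 4,
         "minutes": 45, "pedagogy_score": 0.7}   # 0 = didactic, 1 = project-based
    """
    adherence = mean(e["portions_completed"] / e["portions_total"] for e in log_entries)
    total_minutes = sum(e["minutes"] for e in log_entries)
    differentiation = mean(e["pedagogy_score"] for e in log_entries)
    return {"adherence": adherence,
            "total_minutes": total_minutes,
            "differentiation": differentiation}

def sample_journals(journal_ids, n=6, seed=0):
    """Draw a random subset of student journals to be coded for quality."""
    rng = random.Random(seed)
    return rng.sample(journal_ids, min(n, len(journal_ids)))

# Example usage with made-up data for a single class:
logs = [
    {"lesson": "L1", "portions_completed": 4, "portions_total": 4, "minutes": 50, "pedagogy_score": 0.8},
    {"lesson": "L2", "portions_completed": 3, "portions_total": 4, "minutes": 40, "pedagogy_score": 0.6},
]
print(summarize_logs(logs))
print(sample_journals([f"student_{i}" for i in range(24)]))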

Lachapelle, C. P., & Cunningham, C. M. (2019, June). Measuring Fidelity of Implementation in a Large-Scale Research Study (RTP). Paper presented at the 2019 ASEE Annual Conference & Exposition, Tampa, Florida. 10.18260/1-2--33089

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2019 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference.