June 23, 2013
June 26, 2013
Educational Research and Methods
23.889.1 - 23.889.7
Measuring Computing Self-Efficacy

Abstract

Computing is a field of study that is growing in importance every day. Unfortunately, studies have shown a high dropout rate within computing-related majors. The following project discusses the creation of an instrument intended ultimately to investigate whether computing skill self-efficacy is tied to these retention findings.

An instrument was developed throughout the 2012 spring semester that tested the self-efficacy of first-year engineering and computing students enrolled in a fused course. The instrument asked students a series of questions pertaining to many different computing tasks. Questions utilized a 100-point range on a Likert scale with 10-unit intervals, 0 being “cannot do at all” and 100 being “highly certain can do.” The preliminary results suggested the new instrument held promise in its ability to accurately assess computing self-efficacy, yet the sample size was too small to validate the instrument.

The instrument was subsequently given to a larger group of students (N = 271). The sample consisted of students who were attending the Ultimate Intel Experience Internship. The instrument was disseminated using the online tool, Survey Monkey, on the first day of the internship. Demographic information was also collected, specifically previous computing experience through the question: “Have you ever taken a course to learn how to program?”

The instrument was validated using both content and criterion-related validity. Content validity came by way of two resources. First, we researched past studies in the field of computing and computing-related self-efficacy. Second, we conducted fifteen-minute in-person interviews with computing professors at a large southwest university to make sure our questions were relevant and related to the study. Criterion-related validity was assessed using our measure of previous programming coursework.

The statistical software package, SPSS, was used to analyze the data.
Exploratory factor analysis revealed two factors. The first factor contained all seven items, which we subsequently named “computing.” The second factor contained the only two items pertaining to hardware. The relative factor loadings allowed us to disregard the hardware factor in an effort to focus on the one overall computing factor. A basic correlation check of the new computing factor z-scores against our first broad question, “How confident do you feel solving a computing task?”, showed that our seven items correlated highly with the general focus of computing (r = 0.711). A check of reliability revealed excellent internal consistency (α = 0.929). Finally, we performed a t-test on each individual item to see if there was a significant difference for students who had taken a computing course. All items were significant at p ≤ .05 or better, meaning that students with prior computing experience rated their self-efficacy significantly higher than the other students, as suspected. The instrument was successfully validated using content and criterion-related validity.
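The reliability and group-comparison checks described above were run in SPSS. As an illustration only, a minimal Python sketch of the same two computations — Cronbach's alpha for internal consistency and a pooled-variance independent-samples t statistic — might look like the following. The response data here are hypothetical (0–100 ratings in 10-unit steps), not the study's data:

```python
import math
import statistics

# Hypothetical ratings: one row per student, one column per item.
# Groups mimic "has taken a programming course" vs. "has not."
experienced = [[70, 80, 70, 90, 80, 70, 80],
               [80, 90, 80, 90, 90, 80, 90],
               [60, 70, 60, 80, 70, 60, 70],
               [90, 90, 80, 100, 90, 80, 90]]
novice = [[30, 40, 30, 50, 40, 30, 40],
          [20, 30, 20, 40, 30, 20, 30],
          [40, 50, 40, 60, 50, 40, 50]]

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])
    item_vars = sum(statistics.variance([r[i] for r in rows]) for i in range(k))
    total_var = statistics.variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_vars / total_var)

def t_statistic(a, b):
    """Independent-samples t with pooled variance (p-value lookup omitted)."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * statistics.variance(a)
              + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return ((statistics.mean(a) - statistics.mean(b))
            / math.sqrt(pooled * (1 / na + 1 / nb)))

alpha = cronbach_alpha(experienced + novice)
t = t_statistic([statistics.mean(r) for r in experienced],
                [statistics.mean(r) for r in novice])
```

A positive t here indicates the experienced group rated its self-efficacy higher; in practice the statistic would be compared against a t distribution with n₁ + n₂ − 2 degrees of freedom to obtain the p-values reported above.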
Kolar, H., Carberry, A. R., & Amresh, A. (2013, June). Measuring Computing Self-Efficacy. Paper presented at the 2013 ASEE Annual Conference & Exposition, Atlanta, Georgia. doi:10.18260/1-2--22274
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2013 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015