Virtual Online
June 22, 2020
June 26, 2021
Engineering Design Graphics
An accredited undergraduate architectural design technology program adopted an American Design Drafting Association certification exam to help assess student learning in architectural graphics, a key component of architectural design technology. The exam is administered in a junior-level architectural design technology course. All students enrolled in the course must pass the exam in order to earn credit for the course, and all who have earned credit for the course have passed the exam. Nearly all who do not pass the exam on the first attempt retake it before the end of the semester in which the course is offered and the exam is administered. On rare occasions, a student takes an incomplete for the course, retakes the exam, and clears the incomplete the following semester. Passing the exam, however, is not the goal of this course requirement; the purpose of administering the exam is to ascertain knowledge and skills. So, rather than simply examining the data and attempting to improve students' knowledge and skills in the areas that appeared to need more attention based on individual student and class performance, a more deliberate approach was undertaken. Very recently, the program asked the entity that administers the exam whether other certification exam data were available against which the program could make comparisons. Because such data were available, the program undertook an initiative on behalf of its students to compare the students' performance, and thus the performance of the program, to that of a comparator population: all those who have sat for the exam. Exam data were collected, compiled, and analyzed with the aid of descriptive statistics.
Because the assumptions were met and the data were available, the annual program exam session averages and the annual historical exam averages for the comparator population were used to calculate annual z-scores for the 20 competencies that comprise the major parts of the exam. The purpose of the z-scores was to determine whether there was a significant difference between the annual session averages and the averages produced by the comparator population sitting for the exam. While all the students in the program pass the exam, their performance across select competencies was not consistent, and their knowledge and skills also varied when compared to those exhibited by the comparator population. The data will now be offered to the program's advisory committee for its consideration and recommendations, and will also be examined by the faculty who teach prerequisite courses. The focus of these reviews will be on student learning and the knowledge and skills students possess upon completion of the program.
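The comparison described above can be sketched as a one-sample z-test per competency: the session average is compared against the comparator population's historical mean and standard deviation. A minimal illustration follows; the competency values and class size are hypothetical placeholders, not data from the paper, and the paper does not specify its exact formula or significance threshold.

```python
from math import sqrt

def z_score(session_mean, pop_mean, pop_sd, n):
    """z statistic for one competency: does the program's session average
    differ from the comparator population's historical average?

    session_mean -- average score of the program's exam session
    pop_mean     -- historical average for the comparator population
    pop_sd       -- historical standard deviation for that population
    n            -- number of students in the session
    """
    return (session_mean - pop_mean) / (pop_sd / sqrt(n))

# Hypothetical numbers for a single competency (illustrative only):
z = z_score(session_mean=82.0, pop_mean=78.5, pop_sd=9.0, n=16)

# A common two-tailed criterion at alpha = 0.05 flags |z| > 1.96
# as a significant difference from the comparator population.
significant = abs(z) > 1.96
```

In practice this calculation would be repeated for each of the 20 competencies and each annual session, yielding the per-competency comparisons the program reviewed.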
Chin, R. A., & Agarwala, R. (2020, June). Using the Results of Certification Exam Data: A More Deliberate Approach to Improving Student Learning. Paper presented at 2020 ASEE Virtual Annual Conference Content Access, Virtual Online. https://doi.org/10.18260/1-2--35475
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2020 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference.