June 24, 2017
June 28, 2017
Computers in Education
Gamification has emerged as a pedagogical tool in recent years, and numerous studies report positive outcomes from games applied in educational environments. However, researchers rarely discuss the gamification development process, and little work has been done to analyze gamification products in terms of usability, game elements, and related criteria. In addition, in most previous studies, students are the end users and become involved only in the final test, providing data on motivation, engagement, or learning outcomes. Under these circumstances, the following questions remain unaddressed: How can the effectiveness of a gamification product in education be evaluated? What do students learn when they create and critique gamification products? This paper proposes a peer-based critique process built on peer-developed gamification products. We expect such a process to provide valuable feedback from an end-user perspective and that its outcomes will help answer the questions above. The present study extends a previous research cycle in which end users (students) developed gamification products to help students learn challenging concepts in industrial engineering courses. We selected four final gamification products for further evaluation: “Avengers”, “Bake-off-453”, “Gulf games”, and “DungeoNIOSH”. These games are intended to teach the concepts of “Discrete probability distributions”, “Gulf of evaluation vs. Gulf of execution”, “Interaction effects”, and “NIOSH Lifting equation”. “Discrete probability distributions” and “Interaction effects” are basic concepts in statistics, while the other two relate to the human factors/ergonomics domain. In this study, two student teams conducted a critique of these gamification products as their capstone project.
The peer-based critique consists of three main steps after matching each team with the game product that interests it: first, critiquing the gamification product from a game perspective, using metrics such as the types of game elements included, interactivity, motivation, and engagement level; second, evaluating the gamification product from an educational perspective, focusing on learning effectiveness; and finally, critiquing the gamification product against usability guidelines and principles. Student teams were instructed to specify each criterion and cover all three aspects. At the end of this paper, two case studies are presented, showing the final critique criteria developed by the student teams. Most importantly, we collect valuable insights from end users: what they can learn from the critiquing process and what lessons we can learn from their feedback. These insights provide meaningful information to help evaluate gamification products designed to enhance engineering concept learning.
Li, J., Kim, E., Schultis, A. M., Kapfer, A. J., Lin, J., Yake, P. A., Erjavec, D. M., Dabat, B., & Rothrock, L. (2017, June). Peer-based Gamification Products Critiquing: Two Case Studies in Engineering Education. Paper presented at the 2017 ASEE Annual Conference & Exposition, Columbus, Ohio. https://doi.org/10.18260/1-2--28735
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2017 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015