2021 Illinois-Indiana Regional Conference (Virtual)
April 16–17, 2021
Diversity, Inclusion, and Access
DOI: 10.18260/1-2--38276
https://peer.asee.org/38276
Lawrence Angrave is an award-winning Fellow and Teaching Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign (UIUC). His interests include (but are not limited to) joyful teaching, empirically sound educational research, campus and online courses, computer science, engaging underrepresented students, improving accessibility, and creating novel methods that encourage new learning opportunities and foster vibrant learning communities.
Real-time captioning is often an effective communication access tool for the deaf and hard of hearing (DHH), converting inaccessible speech into accessible text. Consequently, captioning is prevalent in many settings, especially in education. However, traditional captioning delivery mechanisms (e.g., streaming captions on a computer display) require DHH students to split their attention between, for example, the lecturer, slides, and captions. This naturally causes delayed information processing and increased cognitive load for Deaf students (Kushalnagar, 2014). In fact, research has indicated that Deaf students relying on real-time captions perform significantly worse in the classroom than their hearing peers (Marschark, 2006).
Numerous efforts to address this challenge have been made over the past two decades. Of particular interest is the use of augmented-reality (AR) headsets to deliver captions directly into the user’s line of sight rather than on a separate display set to the side (e.g., Jain, 2018). Along these lines, AR headsets that project American Sign Language (ASL) interpreters onto the lens have also been explored with promising results (e.g., Miller, 2017), including commercialization (SignGlasses, www.signglasses.com). However, many of these systems are designed for use in very controlled environments (e.g., the classroom) with the captioning or interpreting service paid for by an institutional accommodations office. This overlooks the critical fact that, at least in post-secondary settings, a significant portion of a student’s educational experience takes place outside of the classroom via interactions with instructors and peers (e.g., office hours, study groups). Transplanting traditional captioning approaches to these spontaneous scenarios is difficult: setting up a dedicated captioning display is impractical, and keeping a human captioner or ASL interpreter constantly on call is financially unscalable for both the institution and the student.
To build upon previous work in this area and address the aforementioned challenges, we are introducing ScribeAR (scribear.illinois.edu), a lightweight platform for delivering real-time captions. Designed as a platform-agnostic web app, ScribeAR is compatible with a variety of devices, whether they be traditional computer displays or AR headsets. Furthermore, ScribeAR is designed to integrate various transcription services, both human and automated. Our objective is to provide a flexible platform that can adapt to students’ captioning needs as they engage in various aspects of their education. For instance, a student might utilize high-quality human captioning during a lecture (where tolerance for transcription errors is low). After class, the student might then switch to less-accurate automated captioning to discuss lecture details with a fellow student (here the tolerance for errors is higher due to opportunities for clarification inherent in a back-and-forth dialogue). With ScribeAR, this switch occurs seamlessly without a change in the captioning device or software, thereby lowering the practicality barrier.
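The pluggable-backend design described above could be sketched as follows. This is an illustrative TypeScript sketch only, not the actual ScribeAR codebase: the `TranscriptionSource` interface, `StubSource`, and `CaptionRouter` names are assumptions for illustration of how caption sources might be swapped without touching the display layer.

```typescript
// Hypothetical sketch (not ScribeAR's real API): a pluggable
// transcription-backend interface behind a single caption display.

type CaptionListener = (text: string, isFinal: boolean) => void;

// A captioning backend: could wrap a human stenographer feed or an
// automatic speech recognition service.
interface TranscriptionSource {
  readonly name: string;
  subscribe(listener: CaptionListener): void;
}

// Minimal in-memory source standing in for a real backend.
class StubSource implements TranscriptionSource {
  private listeners: CaptionListener[] = [];
  constructor(readonly name: string) {}
  subscribe(listener: CaptionListener): void {
    this.listeners.push(listener);
  }
  // A real backend would call this as transcription results arrive.
  push(text: string, isFinal: boolean): void {
    for (const l of this.listeners) l(text, isFinal);
  }
}

// Routes captions from the currently active source to one display
// callback, so swapping backends never changes the display layer.
class CaptionRouter {
  private active: TranscriptionSource | null = null;
  constructor(private render: (line: string) => void) {}

  use(source: TranscriptionSource): void {
    this.active = source;
    source.subscribe((text, isFinal) => {
      // Ignore stale events from sources we have switched away from.
      if (source === this.active && isFinal) {
        this.render(`[${source.name}] ${text}`);
      }
    });
  }
}

// Example: switch from human captioning (lecture) to automated
// captioning (after-class discussion) with no display change.
const lines: string[] = [];
const router = new CaptionRouter((line) => lines.push(line));
const human = new StubSource("human");
const auto = new StubSource("auto");
router.use(human);
human.push("Welcome to lecture.", true);
router.use(auto);
auto.push("Let's review the slides.", true);
```

The design choice worth noting is that the display callback is registered once; switching sources only changes which backend's events are forwarded, mirroring the paper's claim that the captioning device and software stay fixed while the transcription service changes.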
We discuss the ScribeAR architecture and implementation in comparison to similar systems. We also describe the advantages of our new approach in the context of enhancing STEM and engineering education via AR-based tools. Lastly, we give an overview of our planned methodologies for evaluating ScribeAR in authentic educational settings at the University of Illinois and other institutions. This includes exploring possibilities to extend ScribeAR to other student populations, such as those with learning disabilities, who may also benefit from captioning.
References:
Kushalnagar, R. & Kushalnagar, P. Collaborative Gaze Cues and Replay for Deaf and Hard of Hearing Students. In Computers Helping People with Special Needs (eds. Miesenberger, K., Fels, D., Archambault, D., Peňáz, P. & Zagler, W.) 415–422 (Springer International Publishing, 2014).
Marschark, M. et al. Benefits of Sign Language Interpreting and Text Alternatives for Deaf Students’ Classroom Learning. The Journal of Deaf Studies and Deaf Education 11, 421–437 (2006).
Jain, D., Chinh, B., Findlater, L., Kushalnagar, R. & Froehlich, J. Exploring Augmented Reality Approaches to Real-Time Captioning: A Preliminary Autoethnographic Study. In Proceedings of the 2018 ACM Conference Companion Publication on Designing Interactive Systems 7–11 (Association for Computing Machinery, 2018).
Miller, A. et al. The Use of Smart Glasses for Lecture Comprehension by Deaf and Hard of Hearing Students. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems 1909–1915 (Association for Computing Machinery, 2017).
Angrave, L., Lualdi, C. P., Jawad, M., & Javid, T. (2021, April). ScribeAR: A New Take on Augmented-Reality Captioning for Inclusive Education Access. Paper presented at the 2021 Illinois-Indiana Regional Conference, Virtual. DOI: 10.18260/1-2--38276
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2021 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015