Conference Location: Seattle, Washington
Publication Date: June 14, 2015
Conference Start Date: June 14, 2015
Conference End Date: June 17, 2015
ISBN: 978-0-692-50180-1
ISSN: 2153-5965
Tagged Division: Computers in Education
Tagged Topic: Diversity
Page Count: 10
Page Numbers: 26.386.1 - 26.386.10
DOI: 10.18260/p.23725
Permanent URL: https://peer.asee.org/23725
Shuang Wei is a Ph.D. student in the Department of Computer Graphics Technology at Purdue University. She received her Master of Science degree in the same department and a bachelor's degree in digital media from HIT University (China). Her research focuses on multimedia education, information visualization, and human-computer interaction.
Dr. Yingjie Chen is an assistant professor in the Department of Computer Graphics Technology at Purdue University. He received his Ph.D. in the areas of human-computer interaction, information visualization, and visual analytics from the School of Interactive Arts and Technology at Simon Fraser University (SFU) in Canada. He earned a Bachelor of Engineering degree from Tsinghua University in China and a Master of Science degree in Information Technology from SFU. His research covers the interdisciplinary domains of information visualization, visual analytics, digital media, and human-computer interaction. He seeks to design, model, and construct new forms of interaction in visualization and system design, by which the system can minimize its influence on design and analysis and become a true, free extension of the human brain and hand.
Associate Professor of Second Language Studies; Associate Professor of Linguistics; Director, Oral English Proficiency Program; Co-Editor, Language Testing
COMPUTER VISION AIDED LIP MOVEMENT CORRECTION TO IMPROVE ENGLISH PRONUNCIATION

This paper explored the possibility of improving English pronunciation for non-English speakers by correcting their mouth and lip movement through visual feedback. Compared to writing and reading, speaking is more difficult for non-English speakers, since they usually have fewer opportunities to speak English in their own country. When learning English pronunciation, the mouth-lip movement (e.g., opening size and duration) is important for some words and can affect the final pronunciation. We developed a visual pronunciation-training prototype that provides visual feedback for people to correct their mouth movement while speaking English words. Using computer vision technology, the system records the mouth movement of a standard pronunciation and extracts it into moving contours. When a user practices speaking an English word with this application, his/her mouth movement is recorded, and the standard mouth-movement contours are overlaid on top of the user's mouth video. The user can then correct his/her own mouth movement by comparing it with the standard movement.

We conducted a user evaluation to test whether this visual feedback approach can actually improve English pronunciation. We recruited 22 international students with pronunciation problems. A pretest-posttest control-group within-subjects design was used to evaluate the method. There were two groups: a control group, which used standard pronunciation videos for pronunciation training, and an experimental group, which used the prototype system. Subjects in each group took a pretest and a posttest on the same test items. The effect of the treatment can be measured by comparing the results of the pre- and posttests, and the difference between normal video training and visual pronunciation training can be quantitatively measured by comparing the results of the control and experimental groups.

The evaluation results show that the computer-vision-aided lip movement correction method is effective in improving English pronunciation for a small portion of English words, but the improvement could not be generalized to all English words.
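The abstract does not specify the implementation, but the kind of lip-contour extraction and overlay it describes can be sketched with off-the-shelf tools. The sketch below assumes OpenCV (cv2), dlib, and dlib's standard 68-point facial landmark model file "shape_predictor_68_face_landmarks.dat"; the landmark indices, drawing choices, and function names are illustrative and not the authors' actual system.

    import cv2
    import dlib
    import numpy as np

    OUTER_LIP = list(range(48, 60))  # outer-lip landmark indices in dlib's 68-point model

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def lip_contour(frame):
        """Return the outer-lip contour of the first detected face as an Nx2 array, or None."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray)
        if not faces:
            return None
        landmarks = predictor(gray, faces[0])
        return np.array([(landmarks.part(i).x, landmarks.part(i).y) for i in OUTER_LIP],
                        dtype=np.int32)

    def overlay_reference(user_frame, reference_contour):
        """Draw a reference (standard-pronunciation) lip contour on top of the user's frame."""
        out = user_frame.copy()
        if reference_contour is not None:
            cv2.polylines(out, [reference_contour], isClosed=True,
                          color=(0, 255, 0), thickness=2)
        return out

In such a pipeline, contours extracted frame by frame from the standard-pronunciation video would be stored once and then drawn over the user's live video at the matching frame index, so the learner can see where his/her mouth shape diverges from the reference.

The pre/post comparison described in the abstract can likewise be sketched; the paper does not state which statistical tests were used, so the paired and independent t-tests below (via SciPy) are an assumed analysis, not the authors' reported one.

    from scipy import stats
    import numpy as np

    def compare_groups(pre_control, post_control, pre_exp, post_exp):
        """Within-group pre/post comparison and between-group comparison of gain scores."""
        pre_c, post_c = np.asarray(pre_control), np.asarray(post_control)
        pre_e, post_e = np.asarray(pre_exp), np.asarray(post_exp)
        within_control = stats.ttest_rel(post_c, pre_c)            # did the video group improve?
        within_experimental = stats.ttest_rel(post_e, pre_e)       # did the prototype group improve?
        between = stats.ttest_ind(post_e - pre_e, post_c - pre_c)  # did the prototype help more than video?
        return within_control, within_experimental, between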
Wei, S., Chen, Y., McGraw, T., & Ginther, A. (2015, June). Computer-Vision-Aided Lip Movement Correction to Improve English Pronunciation. Paper presented at the 2015 ASEE Annual Conference & Exposition, Seattle, Washington. 10.18260/p.23725
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2015 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015