Salt Lake City, Utah
June 23, 2018
July 27, 2018
Computing and Information Technology
10.18260/1-2--29696
https://peer.asee.org/29696
Software engineering education researcher and senior at Robert Morris University, interested in machine learning and artificial intelligence, specifically as applied to image recognition.
Sushil Acharya, D.Eng. (Asian Institute of Technology) is the Assistant Provost for Research and Graduate Studies. A Professor of Software Engineering, Dr. Acharya joined Robert Morris University in Spring 2005 after serving 15 years in the software industry. His teaching involvement and research interests are in the areas of software engineering education, software verification and validation, software security, data mining, neural networks, and enterprise resource planning. He is also interested in learning-objectives-based education material design and development. Dr. Acharya is a co-author of "Discrete Mathematics Applications for Information Systems Professionals" and "Case Studies in Software Verification & Validation". He is a member of the Nepal Engineering Association and is also a member of ASEE and ACM. Dr. Acharya was the Principal Investigator of the 2007 HP grant for Higher Education at RMU, through which he incorporated tablet-PC-based learning exercises in his classes. In 2013, Dr. Acharya received a National Science Foundation (NSF) grant for developing course materials through an industry-academia partnership in the area of software verification and validation.
Facial expression recognition is a crucial topic in psychology: a person's facial expression accounts for 55 percent of the effect of a spoken message, making it the single biggest indicator in interpersonal communication. Traditionally, psychologists trained human observers to identify changes in facial muscles and used the Facial Action Coding System to map muscle movements to emotions. Though this system helped ensure objectivity and had descriptive power, its major drawback was the difficulty of effectively training human observers. With the advent of faster computers and digital imaging, machine learning researchers became interested in automating facial expression recognition. Most researchers continued adopting the same Facial Action Coding System, used in psychology, to train their statistical models. Though advances have been made in automating face detection and locating facial landmarks, facial expression recognition results have stagnated. This stagnation is attributed to a lack of training data and the difficulty of training a model to recognize subtle changes in facial muscles. In this paper the authors describe how software engineering best practices assisted in developing and implementing a methodology that leverages larger face detection and facial landmarking datasets, as well as their improved accuracy over the Facial Action Coding System. This methodology is potentially more descriptive of faces in unconstrained environments. The authors present the software artifacts, the methodology, and the findings of a comparative study.
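To give a flavor of the landmark-based approach the abstract describes, the sketch below computes simple scale-invariant geometric features from 2D facial landmark coordinates, the kind of features a statistical model could be trained on in place of Facial Action Coding System annotations. This is a minimal illustration, not the authors' implementation: the landmark names, coordinates, and chosen feature ratios are all hypothetical.

```python
import math

def landmark_features(landmarks):
    """Compute simple geometric features from 2D facial landmarks.

    `landmarks` is a dict of named (x, y) points; the names are
    illustrative, not taken from any specific landmarking scheme.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Normalize by face width so features are invariant to image scale.
    face_width = dist(landmarks["jaw_left"], landmarks["jaw_right"])
    return {
        "mouth_width": dist(landmarks["mouth_left"], landmarks["mouth_right"]) / face_width,
        "mouth_open": dist(landmarks["lip_top"], landmarks["lip_bottom"]) / face_width,
        "brow_raise": dist(landmarks["brow_left"], landmarks["eye_left"]) / face_width,
    }

# Toy example: hypothetical landmark coordinates for a neutral face.
neutral = {
    "jaw_left": (0.0, 50.0), "jaw_right": (100.0, 50.0),
    "mouth_left": (35.0, 70.0), "mouth_right": (65.0, 70.0),
    "lip_top": (50.0, 68.0), "lip_bottom": (50.0, 74.0),
    "brow_left": (30.0, 20.0), "eye_left": (30.0, 30.0),
}
features = landmark_features(neutral)
```

In practice the landmark coordinates would come from an automatic facial landmarking model, and the resulting feature vectors would be fed to a trained classifier that maps them to expression labels.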
Josey, J. D., & Acharya, S. (2018, June). A Methodology for Automated Facial Expression Recognition Using Facial Landmarks. Paper presented at the 2018 ASEE Annual Conference & Exposition, Salt Lake City, Utah. doi:10.18260/1-2--29696
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2018 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015