Emotion Recognition of a Speaker Using Facial Expression Intensity of Thermal Image and Utterance Time

Authors
Yuuki Oka, Yasunari Yoshitomi, Taro Asada, Masayoshi Tabuse
Corresponding Author
Yasunari Yoshitomi
Available Online 1 December 2016.
DOI
https://doi.org/10.2991/jrnal.2016.3.3.3
Keywords
Emotion recognition, Mouth and jaw area, Thermal image, Utterance judgment.
Abstract
Herein, we propose a method for recognizing human emotions that uses the standardized mean value of facial expression intensity obtained from a thermal image together with the standardized mean value of utterance time. In this study, the emotions of one subject were recognized with 76.5% accuracy while the subject spoke 23 kinds of utterances, intentionally displaying the five emotions of “anger,” “happiness,” “neutrality,” “sadness,” and “surprise.”
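The abstract does not detail how the two features are computed or classified, but a minimal sketch of the standardization step it names, z-score scaling of per-utterance facial expression intensity (from thermal images) and utterance time, might look as follows; the sample values and variable names are hypothetical, not taken from the paper.

import numpy as np

def standardize(values):
    # Z-score standardization: subtract the mean and divide by the
    # standard deviation so both features share a common scale.
    values = np.asarray(values, dtype=float)
    return (values - values.mean()) / values.std()

# Hypothetical per-utterance measurements (illustrative only):
# mean facial expression intensity from thermal images, and utterance time in seconds.
expression_intensity = [0.42, 0.55, 0.31, 0.48, 0.60]
utterance_time = [1.8, 2.4, 1.2, 2.0, 2.9]

# One standardized 2-D feature vector per utterance, which could then be
# fed to any classifier for the five emotion categories.
features = np.column_stack([
    standardize(expression_intensity),
    standardize(utterance_time),
])
print(features)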

Copyright
© 2016, the Authors. Published by ALife Robotics Corp. Ltd.
Open Access
This is an open access article distributed under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).
