Authors
Yasunari Yoshitomi, Taro Asada, Ryota Kato, Masayoshi Tabuse
Corresponding Author
Yasunari Yoshitomi
Available Online 1 June 2015.
DOI
https://doi.org/10.2991/jrnal.2015.2.1.2
Keywords
Facial expression recognition, Area of mouth and jaw, Thermal image, Utterance judgment
Abstract
To develop a robot that understands human feelings, we propose a method
for recognizing facial expressions. A video was analyzed by thermal image
processing, and the feature parameter of facial expression was extracted
from the area of the mouth and jaw using a two-dimensional discrete cosine transform.
The facial expression intensity, defined as the norm of the difference
vector between the feature vector of the neutral facial expression and
that of the observed one, was measured. The feature vector composed of the facial
expression intensity and the time at utterance was used to recognize the facial
expression.
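The sketch below illustrates, in Python, the kind of computation the abstract describes: a two-dimensional discrete cosine transform applied to the mouth-and-jaw region of a thermal image, and a facial expression intensity computed as the norm of the difference between the observed feature vector and the neutral-expression feature vector. It is not the authors' implementation; the number of retained DCT coefficients and the function names are illustrative assumptions.

    # Sketch only: 2D-DCT features and facial expression intensity,
    # following the description in the abstract. Parameters are assumed.
    import numpy as np
    from scipy.fft import dctn

    def dct_feature_vector(region: np.ndarray, n_coeffs: int = 10) -> np.ndarray:
        """Apply a two-dimensional DCT to the mouth-and-jaw region and keep
        the low-frequency coefficients as the feature vector (assumed size)."""
        coeffs = dctn(region.astype(float), norm="ortho")
        return coeffs[:n_coeffs, :n_coeffs].ravel()

    def expression_intensity(observed: np.ndarray, neutral: np.ndarray) -> float:
        """Facial expression intensity: norm of the difference between the
        observed feature vector and the neutral-expression feature vector."""
        return float(np.linalg.norm(
            dct_feature_vector(observed) - dct_feature_vector(neutral)))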
Copyright
© 2013, the Authors. Published by ALife Robotics Corp. Ltd.
Open Access
This is an open access article distributed under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).