TY - GEN
T1 - Cross-pose facial expression recognition
AU - Guney, Fatma
AU - Arar, Nuri Murat
AU - Fischer, Mika
AU - Ekenel, Hazim Kemal
PY - 2013
Y1 - 2013
AB - In real-world facial expression recognition (FER) applications, it is not practical for a user to enroll his/her facial expressions under different pose angles. Therefore, a desirable property of a FER system is to allow the user to enroll his/her facial expressions under a single pose, for example the frontal pose, and to recognize them under different pose angles. In this paper, we address this problem and present a method to recognize six prototypic facial expressions of an individual across different pose angles. We use Partial Least Squares (PLS) to map the expressions from different poses into a common subspace in which the covariance between them is maximized. We show that PLS can be used effectively for facial expression recognition across poses by training on coupled expressions of the same identity from two different poses. Training in this way lets the learned bases model the differences between expressions of different poses while excluding the effect of identity. We have evaluated the proposed approach on the BU3DFE database and shown that it is possible to successfully recognize the expressions of an individual from arbitrary viewpoints given only his/her expressions from a single pose, for example the frontal pose as the most practical case. Overall, we achieved an average recognition rate of 87.6% when using frontal images as the gallery and 86.6% when considering all pose pairs.
UR - http://www.scopus.com/inward/record.url?scp=84881511491&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84881511491&partnerID=8YFLogxK
U2 - 10.1109/FG.2013.6553814
DO - 10.1109/FG.2013.6553814
M3 - Conference contribution
AN - SCOPUS:84881511491
SN - 9781467355452
T3 - 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, FG 2013
BT - 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, FG 2013
T2 - 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, FG 2013
Y2 - 22 April 2013 through 26 April 2013
ER -