TY - GEN
T1 - Visual speech recognition using PCA networks and LSTMs in a tandem GMM-HMM system
AU - Zimmermann, Marina
AU - Mehdipour Ghazi, Mostafa
AU - Ekenel, Hazım Kemal
AU - Thiran, Jean-Philippe
N1 - Publisher Copyright:
© Springer International Publishing AG 2017.
PY - 2017
Y1 - 2017
N2 - Automatic visual speech recognition is an interesting problem in pattern recognition, especially when audio data is noisy or not readily available. It is also a very challenging task, mainly because the visual articulations carry less information than the audible utterance. In this work, principal component analysis is applied to image patches extracted from the video data to learn the weights of a two-stage convolutional network, and block histograms are then extracted as unsupervised learning features. These features are used to train a recurrent neural network with long short-term memory cells to obtain spatiotemporal features. Finally, the obtained features are used in a tandem GMM-HMM system for speech recognition. Our results show that the proposed method outperforms the baseline techniques on the OuluVS2 audiovisual database for frontal-view phrase recognition, reaching a sentence correctness of 79% in cross-validation and 73% in testing, compared to a baseline of 74% in cross-validation.
UR - http://www.scopus.com/inward/record.url?scp=85016096334&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85016096334&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-54427-4_20
DO - 10.1007/978-3-319-54427-4_20
M3 - Conference contribution
AN - SCOPUS:85016096334
SN - 9783319544267
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 264
EP - 276
BT - Computer Vision - ACCV 2016 Workshops, ACCV 2016 International Workshops, Revised Selected Papers
A2 - Ma, Kai-Kuang
A2 - Lu, Jiwen
A2 - Chen, Chu-Song
PB - Springer Verlag
T2 - 13th Asian Conference on Computer Vision, ACCV 2016
Y2 - 20 November 2016 through 24 November 2016
ER -