TY - JOUR
T1 - Visual speech primes open-set recognition of spoken words
AU - Buchwald, Adam B.
AU - Winters, Stephen J.
AU - Pisoni, David B.
N1 - Funding Information:
Correspondence should be addressed to Adam B. Buchwald, Department of Speech-Language Pathology and Audiology, New York University, 665 Broadway, Suite 910, New York, NY 10012, USA. E-mail: [email protected] This work was supported by NIH DC00012. The authors would like to thank Melissa Troyer for her assistance with this study, and Susannah Levi, Tessa Bent, Manuel Carreiras, and several anonymous reviewers for helpful comments on earlier drafts of this paper.
PY - 2009
Y1 - 2009
N2 - Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins, 2004). In this paper, we used a cross-modality repetition priming paradigm with visual speech lexical primes and auditory lexical targets to explore the nature of this priming effect. First, we report that participants identified spoken words mixed with noise more accurately when the words were preceded by a visual speech prime of the same word compared with a control condition. Second, analyses of the responses indicated that both correct and incorrect responses were constrained by the visual speech information in the prime. These complementary results suggest that the visual speech primes have an effect on lexical access by increasing the likelihood that words with certain phonetic properties are selected. Third, we found that the cross-modality repetition priming effect was maintained even when visual and auditory signals came from different speakers, and thus different instances of the same lexical item. We discuss implications of these results for current theories of speech perception.
KW - Audiovisual priming
KW - Lexical access
KW - Visual speech
KW - Word recognition
UR - http://www.scopus.com/inward/record.url?scp=70549097068&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=70549097068&partnerID=8YFLogxK
U2 - 10.1080/01690960802536357
DO - 10.1080/01690960802536357
M3 - Article
AN - SCOPUS:70549097068
SN - 0169-0965
VL - 24
SP - 580
EP - 610
JO - Language and Cognitive Processes
JF - Language and Cognitive Processes
IS - 4
ER -