TY - JOUR
T1 - Improving Joint Speech-Text Representations Without Alignment
AU - Peyser, Cal
AU - Meng, Zhong
AU - Hu, Ke
AU - Prabhavalkar, Rohit
AU - Rosenberg, Andrew
AU - Sainath, Tara N.
AU - Picheny, Michael
AU - Cho, Kyunghyun
N1 - Publisher Copyright:
© 2023 International Speech Communication Association. All rights reserved.
PY - 2023
Y1 - 2023
AB - The last year has seen astonishing progress in text-prompted image generation premised on the idea of a cross-modal representation space in which the text and image domains are represented jointly. In ASR, this idea has found application as joint speech-text encoders that can scale to the capacities of very large-parameter models by being trained on both unpaired speech and text. While these methods show promise, they have required special treatment of the sequence-length mismatch inherent between speech and text, either by up-sampling heuristics or an explicit alignment model. In this work, we offer evidence that joint speech-text encoders naturally achieve consistent representations across modalities by disregarding sequence length, and argue that consistency losses could forgive length differences and simply assume the best alignment. We show that such a loss improves downstream WER in both large-parameter monolingual and multilingual systems.
UR - http://www.scopus.com/inward/record.url?scp=85171575936&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85171575936&partnerID=8YFLogxK
U2 - 10.21437/Interspeech.2023-403
DO - 10.21437/Interspeech.2023-403
M3 - Conference article
AN - SCOPUS:85171575936
SN - 2308-457X
VL - 2023-August
SP - 1354
EP - 1358
JO - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
JF - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
T2 - 24th Annual Conference of the International Speech Communication Association, Interspeech 2023
Y2 - 20 August 2023 through 24 August 2023
ER -