DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder

Xiaodong Gu, Kyunghyun Cho, Jung Woo Ha, Sunghun Kim

Research output: Contribution to conference › Paper

Abstract

Variational autoencoders (VAEs) have shown promise in data-driven conversation modeling. However, most VAE conversation models match the approximate posterior distribution over the latent variables to a simple prior such as the standard normal distribution, thereby restricting the generated responses to a relatively simple (e.g., unimodal) scope. In this paper, we propose DialogWAE, a conditional Wasserstein autoencoder (WAE) specially designed for dialogue modeling. Unlike VAEs that impose a simple distribution over the latent variables, DialogWAE models the distribution of data by training a GAN within the latent variable space. Specifically, our model samples from the prior and posterior distributions over the latent variables by transforming context-dependent random noise using neural networks, and minimizes the Wasserstein distance between the two distributions. We further develop a Gaussian mixture prior network to enrich the latent space. Experiments on two popular datasets show that DialogWAE outperforms the state-of-the-art approaches in generating more coherent, informative and diverse responses.
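The abstract describes sampling latent variables by pushing context-dependent Gaussian noise through a neural network, with a Gaussian mixture prior network enriching the latent space. A minimal NumPy sketch of that sampling step, assuming illustrative dimensions and random stand-in weights (not the paper's trained parameters or exact architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
CTX, NOISE, LATENT, K = 8, 4, 6, 3   # context / noise / latent dims; K mixture components

# Hypothetical weights standing in for trained parameters.
W_mu  = rng.standard_normal((CTX, K * NOISE)) * 0.1   # per-component noise means
W_sig = rng.standard_normal((CTX, K * NOISE)) * 0.1   # per-component noise log-stds
W_pi  = rng.standard_normal((CTX, K)) * 0.1           # mixture component logits
G1    = rng.standard_normal((NOISE, 16)) * 0.1        # generator hidden layer
G2    = rng.standard_normal((16, LATENT)) * 0.1       # generator output layer

def sample_prior(context):
    """Sample z from a Gaussian-mixture, context-dependent noise
    distribution pushed through a feed-forward generator network."""
    logits = context @ W_pi
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                                  # softmax over components
    k = rng.choice(K, p=probs)                            # pick a mixture component
    mu  = (context @ W_mu).reshape(K, NOISE)[k]           # that component's noise mean
    sig = np.exp((context @ W_sig).reshape(K, NOISE)[k])  # and noise std
    eps = mu + sig * rng.standard_normal(NOISE)           # context-dependent noise
    return np.tanh(eps @ G1) @ G2                         # generator: noise -> z

c = rng.standard_normal(CTX)   # stand-in for an utterance-encoder context vector
z = sample_prior(c)            # latent code of shape (LATENT,)
```

In the paper's setup, a recognition network produces posterior samples the same way (conditioned additionally on the response), and a critic estimates the Wasserstein distance between the two sample streams; only the prior-side sampling is sketched here.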

Original language: English (US)
State: Published - 2019
Event: 7th International Conference on Learning Representations, ICLR 2019 - New Orleans, United States
Duration: May 6, 2019 – May 9, 2019

Conference

Conference: 7th International Conference on Learning Representations, ICLR 2019
Country: United States
City: New Orleans
Period: 5/6/19 – 5/9/19

ASJC Scopus subject areas

  • Education
  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics


  • Cite this

Gu, X., Cho, K., Ha, J. W., & Kim, S. (2019). DialogWAE: Multimodal response generation with conditional Wasserstein auto-encoder. Paper presented at 7th International Conference on Learning Representations, ICLR 2019, New Orleans, United States.