Enabling multimodal human-robot interaction for the Karlsruhe humanoid robot

Rainer Stiefelhagen, Hazim Kemal Ekenel, Christian Fügen, Petra Gieselmann, Hartwig Holzapfel, Florian Kraft, Kai Nickel, Michael Voit, Alex Waibel

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we present our work on building technologies for natural multimodal human-robot interaction. We present our systems for spontaneous speech recognition, multimodal dialogue processing, and visual perception of a user, which includes localization, tracking, and identification of the user, recognition of pointing gestures, and recognition of a person's head orientation. Each component is described in the paper and experimental results are presented. We also present several experiments on multimodal human-robot interaction, such as interaction using speech and gestures, the automatic determination of the addressee during human-human-robot interaction, as well as interactive learning of dialogue strategies. The work and the components presented here constitute the core building blocks for audiovisual perception of humans and multimodal human-robot interaction used for the humanoid robot developed within the German research project (Sonderforschungsbereich) on humanoid cooperative robots.

Original language: English (US)
Pages (from-to): 840-851
Number of pages: 12
Journal: IEEE Transactions on Robotics
Volume: 23
Issue number: 5
DOIs
State: Published - Oct 1 2007

Keywords

  • Audiovisual perception
  • Human-centered robotics
  • Human-robot interaction
  • Multimodal interaction

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering
