That Sounds Right: Auditory Self-Supervision for Dynamic Robot Manipulation

Abitha Thankaraj, Lerrel Pinto

Research output: Contribution to journal › Conference article › peer-review

Abstract

Learning to produce contact-rich, dynamic behaviors from raw sensory data has been a longstanding challenge in robotics. Prominent approaches primarily focus on using visual and tactile sensing. However, pure vision often fails to capture high-frequency interaction, while current tactile sensors can be too delicate for large-scale data collection. In this work, we propose a data-centric approach to dynamic manipulation that uses an often-ignored source of information: sound. We first collect a dataset of 25k interaction-sound pairs across five dynamic tasks using contact microphones. Then, given this data, we leverage self-supervised learning to accelerate behavior prediction from sound. Our experiments indicate that this self-supervised 'pretraining' is crucial to achieving high performance, yielding a 34.5% lower MSE than plain supervised learning and a 54.3% lower MSE than visual training. Importantly, we find that when asked to generate desired sound profiles, online rollouts of our models on a UR10 robot produce dynamic behavior that achieves an average 11.5% improvement over supervised learning on audio similarity metrics. Videos and audio data are best experienced on our project website: https://aurl-anon.github.io/.
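The abstract does not spell out the training pipeline, so the sketch below is only a rough illustration of the kind of approach it describes: self-supervised pretraining of an audio encoder on spectrograms, followed by an MSE-trained head that predicts behavior parameters. The encoder architecture, the contrastive objective, the tensor shapes, and the action dimensionality are all assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (not the paper's implementation): pretrain an audio encoder
# with a simple contrastive objective on spectrograms, then fit a small head
# that regresses behavior parameters with an MSE loss, as the abstract reports.
# All module names, shapes, and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioEncoder(nn.Module):
    """Maps a (batch, 1, mel_bins, frames) spectrogram to a fixed-size embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x):
        return self.proj(self.conv(x).flatten(1))

def info_nce(z1, z2, temperature=0.1):
    """Contrastive loss between two augmented views of the same audio clip."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# --- Stage 1: self-supervised pretraining on interaction audio ---
encoder = AudioEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for view_a, view_b in []:  # replace [] with a DataLoader of augmented spectrogram pairs
    loss = info_nce(encoder(view_a), encoder(view_b))
    opt.zero_grad(); loss.backward(); opt.step()

# --- Stage 2: supervised behavior prediction from sound, trained with MSE ---
action_dim = 4  # illustrative: dimensionality of the robot's behavior parameters
head = nn.Linear(128, action_dim)
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
for spec, action in []:  # replace [] with a DataLoader of (spectrogram, behavior) pairs
    pred = head(encoder(spec))
    loss = F.mse_loss(pred, action)
    opt.zero_grad(); loss.backward(); opt.step()
```

Under this reading, the "34.5% lower MSE" comparison would correspond to running Stage 2 with and without the Stage 1 pretraining step.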

Original language: English (US)
Journal: Proceedings of Machine Learning Research
Volume: 229
State: Published - 2023
Event: 7th Conference on Robot Learning, CoRL 2023 - Atlanta, United States
Duration: Nov 6, 2023 - Nov 9, 2023

Keywords

  • Audio
  • Dynamic manipulation
  • Self-supervised learning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability

