Mapping Word to World in ASL: Evidence from a Human Simulation Paradigm

Allison Fitch, Sudha Arunachalam, Amy M. Lieberman

Research output: Contribution to journal › Article › peer-review

Abstract

Across languages, children map words to meanings with great efficiency, despite a seemingly unconstrained space of potential mappings. The literature on how children do this is largely limited to spoken language, leaving a gap in our understanding of sign language acquisition: several of the hypothesized mechanisms children rely on are visual (e.g., visual attention to the referent), and sign languages are perceived in the visual modality. Here, we used the Human Simulation Paradigm in American Sign Language (ASL) to identify potential cues to word learning. Sign-naïve adult participants viewed video clips of parent–child interactions in ASL and, at a designated point, guessed which ASL sign the parent had produced. Across two studies, we demonstrate that referential clarity in ASL interactions is characterized by access to information about word class and, for verbs, referent presence, similar to spoken language. Unlike in spoken language, iconicity is a cue to word meaning in ASL, although it is not always a fruitful one. We also present evidence that verbs are highlighted well in the ASL input relative to spoken English. The results shed light on both similarities and differences in the information learners may have access to when acquiring signed versus spoken languages.

Original language: English (US)
Article number: e13061
Journal: Cognitive Science
Volume: 45
Issue number: 12
DOIs
State: Published - Dec 2021

Keywords

  • American Sign Language
  • Human Simulation Paradigm
  • Iconicity
  • Word learning

ASJC Scopus subject areas

  • Experimental and Cognitive Psychology
  • Cognitive Neuroscience
  • Artificial Intelligence

