A comparison of informal and formal acceptability judgments using a random sample from Linguistic Inquiry 2001-2010

Jon Sprouse, Carson T. Schütze, Diogo Almeida

Research output: Contribution to journal › Article › peer-review

Abstract

The goal of the present study is to provide a direct comparison of the results of informal judgment collection methods with the results of formal judgment collection methods, as a first step in understanding the relative merits of each family of methods. Although previous studies have compared small samples of informal and formal results, this article presents the first large-scale comparison based on a random sample of phenomena from a leading theoretical journal (Linguistic Inquiry). We tested 296 data points from the approximately 1743 English data points that were published in Linguistic Inquiry between 2001 and 2010. We tested this sample with 936 naïve participants using three formal judgment tasks (magnitude estimation, 7-point Likert scale, and two-alternative forced-choice) and report five statistical analyses. The results suggest a convergence rate of 95% between informal and formal methods, with a margin of error of 5.3-5.8%. We discuss the implications of this convergence rate for the ongoing conversation about judgment collection methods, and lay out a set of questions for future research into syntactic methodology.

Original language: English (US)
Pages (from-to): 219-248
Number of pages: 30
Journal: Lingua
Volume: 134
DOIs
State: Published - 2013

Keywords

  • Acceptability judgments
  • Experimental syntax
  • Grammaticality judgments
  • Methodology

ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language
