TY - JOUR
T1 - A comparison of informal and formal acceptability judgments using a random sample from Linguistic Inquiry 2001-2010
AU - Sprouse, Jon
AU - Schütze, Carson T.
AU - Almeida, Diogo
N1 - Funding Information:
We would like to thank audiences at the following universities for helpful comments on earlier stages of this project: Harvard University, Johns Hopkins University, Michigan State University, Pomona College, Princeton University, University of Connecticut, University of Michigan, and the attendees of TEAL 7 at Hiroshima University. We would also like to thank Colin Phillips and one anonymous reviewer for helpful comments on an earlier draft. This work was supported in part by NSF grant BCS-0843896 to JS.
PY - 2013
Y1 - 2013
N2 - The goal of the present study is to provide a direct comparison of the results of informal judgment collection methods with the results of formal judgment collection methods, as a first step in understanding the relative merits of each family of methods. Although previous studies have compared small samples of informal and formal results, this article presents the first large-scale comparison based on a random sample of phenomena from a leading theoretical journal (Linguistic Inquiry). We tested 296 data points from the approximately 1743 English data points that were published in Linguistic Inquiry between 2001 and 2010. We tested this sample with 936 naïve participants using three formal judgment tasks (magnitude estimation, 7-point Likert scale, and two-alternative forced-choice) and report five statistical analyses. The results suggest a convergence rate of 95% between informal and formal methods, with a margin of error of 5.3-5.8%. We discuss the implications of this convergence rate for the ongoing conversation about judgment collection methods, and lay out a set of questions for future research into syntactic methodology.
KW - Acceptability judgments
KW - Experimental syntax
KW - Grammaticality judgments
KW - Methodology
UR - http://www.scopus.com/inward/record.url?scp=84884354332&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84884354332&partnerID=8YFLogxK
DO - 10.1016/j.lingua.2013.07.002
M3 - Article
AN - SCOPUS:84884354332
SN - 0024-3841
VL - 134
SP - 219
EP - 248
JO - Lingua
JF - Lingua
ER -