TY - JOUR
T1 - Inverse Scaling: When Bigger Isn’t Better
AU - McKenzie, Ian R.
AU - Lyzhov, Alexander
AU - Pieler, Michael
AU - Parrish, Alicia
AU - Mueller, Aaron
AU - Prabhu, Ameya
AU - McLean, Euan
AU - Kirtland, Aaron
AU - Ross, Alexis
AU - Liu, Alisa
AU - Gritsevskiy, Andrew
AU - Wurgaft, Daniel
AU - Kauffman, Derik
AU - Recchia, Gabriel
AU - Liu, Jiacheng
AU - Cavanagh, Joe
AU - Weiss, Max
AU - Huang, Sicong
AU - Droid, The Floating
AU - Tseng, Tom
AU - Korbak, Tomasz
AU - Shen, Xudong
AU - Zhang, Yuhui
AU - Zhou, Zhengping
AU - Kim, Najoung
AU - Bowman, Samuel R.
AU - Perez, Ethan
N1 - Publisher Copyright:
© 2023, Transactions on Machine Learning Research. All rights reserved.
PY - 2023/10/1
Y1 - 2023/10/1
AB - Work on scaling laws has found that large language models (LMs) show predictable improvements to overall loss with increased scale (model size, training data, and compute). Here, we present evidence for the claim that LMs may show inverse scaling, or worse task performance with increased scale, e.g., due to flaws in the training objective and data. We present empirical evidence of inverse scaling on 11 datasets collected by running a public contest, the Inverse Scaling Prize, with a substantial prize pool. Through analysis of the datasets, along with other examples found in the literature, we identify four potential causes of inverse scaling: (i) preference to repeat memorized sequences over following in-context instructions, (ii) imitation of undesirable patterns in the training data, (iii) tasks containing an easy distractor task which LMs could focus on, rather than the harder real task, and (iv) correct but misleading few-shot demonstrations of the task. We release the winning datasets at inversescaling.com/data to allow for further investigation of inverse scaling. Our tasks have helped drive the discovery of U-shaped and inverted-U scaling trends, where an initial trend reverses, suggesting that scaling trends are less reliable at predicting the behavior of larger-scale models than previously understood. Overall, our results suggest that there are tasks for which increased model scale alone may not lead to progress, and that more careful thought needs to go into the data and objectives for training language models.
UR - http://www.scopus.com/inward/record.url?scp=86000080442&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=86000080442&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:86000080442
SN - 2835-8856
VL - 2023
JO - Transactions on Machine Learning Research
JF - Transactions on Machine Learning Research
ER -