TY - JOUR
T1 - Human-Guided Complexity-Controlled Abstractions
AU - Peng, Andi
AU - Tucker, Mycal
AU - Kenny, Eoin M.
AU - Zaslavsky, Noga
AU - Agrawal, Pulkit
AU - Shah, Julie A.
N1 - Publisher Copyright:
© 2023 Neural information processing systems foundation. All rights reserved.
PY - 2023
Y1 - 2023
AB - Neural networks often learn task-specific latent representations that fail to generalize to novel settings or tasks. Conversely, humans learn discrete representations (i.e., concepts or words) at a variety of abstraction levels (e.g., “bird” vs. “sparrow”) and deploy the appropriate abstraction based on task. Inspired by this, we train neural models to generate a spectrum of discrete representations and control the complexity of the representations (roughly, how many bits are allocated for encoding inputs) by tuning the entropy of the distribution over representations. In finetuning experiments, using only a small number of labeled examples for a new task, we show that (1) tuning the representation to a task-appropriate complexity level supports the highest finetuning performance, and (2) in a human-participant study, users were able to identify the appropriate complexity level for a downstream task using visualizations of discrete representations. Our results indicate a promising direction for rapid model finetuning by leveraging human insight.
UR - http://www.scopus.com/inward/record.url?scp=85191178230&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85191178230&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85191178230
SN - 1049-5258
VL - 36
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
T2 - 37th Conference on Neural Information Processing Systems, NeurIPS 2023
Y2 - 10 December 2023 through 16 December 2023
ER -