Abstract
A key challenge in designing convolutional network models is sizing them appropriately. Many factors are involved in these decisions, including the number of layers, feature maps, kernel sizes, etc. Complicating this further is the fact that each of these influences not only the numbers and dimensions of the activation units, but also the total number of parameters. In this paper we focus on assessing the independent contributions of three of these linked variables: the numbers of layers, feature maps, and parameters. To accomplish this, we employ a recursive convolutional network whose weights are tied between layers; this allows us to vary each of the three factors in a controlled setting. We find that while increasing the numbers of layers and parameters each has a clear benefit, the number of feature maps (and hence the dimensionality of the representation) appears ancillary, with most of its benefit coming through the introduction of more weights. Our results (i) empirically confirm the notion that adding layers alone increases computational power, within the context of convolutional layers, and (ii) suggest that the precise sizing of convolutional feature map dimensions is itself of little concern; more attention should be paid to the number of parameters in these layers instead.
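To illustrate the weight-tying idea described above, here is a minimal sketch (not the authors' implementation) of a recursive convolutional block in PyTorch: the same convolution is applied repeatedly, so depth can be increased without changing the parameter count, while the channel width sets both the feature-map dimensionality and the number of weights. Channel count, kernel size, and depth below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecursiveConvBlock(nn.Module):
    """Convolutional block whose weights are shared (tied) across layers."""
    def __init__(self, channels=64, kernel_size=3, num_layers=4):
        super().__init__()
        # One conv layer reused at every depth step: the parameter count
        # is fixed by `channels` and `kernel_size`, independent of depth.
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2)
        self.num_layers = num_layers

    def forward(self, x):
        for _ in range(self.num_layers):
            x = F.relu(self.conv(x))  # same weights applied at each layer
        return x

# Usage: varying num_layers changes depth only; varying channels changes
# both the representation dimensionality and the number of parameters.
net = RecursiveConvBlock(channels=64, num_layers=8)
out = net(torch.randn(1, 64, 32, 32))
```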
| Original language | English (US) |
| --- | --- |
| State | Published - Jan 1 2014 |
| Event | 2nd International Conference on Learning Representations, ICLR 2014 - Banff, Canada; Duration: Apr 14 2014 → Apr 16 2014 |
Conference

| Conference | 2nd International Conference on Learning Representations, ICLR 2014 |
| --- | --- |
| Country/Territory | Canada |
| City | Banff |
| Period | 4/14/14 → 4/16/14 |
ASJC Scopus subject areas
- Computer Science Applications
- Linguistics and Language
- Language and Linguistics
- Education