Abstract
We analyze architectural features of Deep Neural Networks (DNNs) using the so-called Neural Tangent Kernel (NTK), which describes the training and generalization of DNNs in the infinite-width setting. In this setting, we show that for fully-connected DNNs, as the depth grows, two regimes appear: freeze (or order), where the (scaled) NTK converges to a constant, and chaos, where it converges to a Kronecker delta. Extreme freeze slows down training, while extreme chaos hinders generalization. With the scaled ReLU nonlinearity, the network ends up in the frozen regime. In contrast, Layer Normalization brings the network into the chaotic regime; we observe a similar effect for Batch Normalization (BN) applied after the last nonlinearity. We uncover the same freeze and chaos modes in Deep Deconvolutional Networks (DC-NNs), and our analysis explains the appearance of so-called checkerboard patterns and border artifacts. Moving the network into the chaotic regime prevents checkerboard patterns; we propose a graph-based parametrization which eliminates border artifacts; finally, we introduce a new layer-dependent learning rate to improve the convergence of DC-NNs. We illustrate our findings on DCGANs: the frozen regime leads to a collapse of the generator to a checkerboard mode, which can be avoided by tuning the nonlinearity to reach the chaotic regime. As a result, we are able to obtain good-quality samples for DCGANs without BN.
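To make the freeze statement concrete, the sketch below iterates the standard infinite-width kernel recursion for a bias-free fully-connected network with the scaled ReLU sqrt(2)*max(0, x) and prints the off-diagonal value of the depth-scaled NTK for two unit-norm inputs. This is an illustrative reconstruction under assumed conventions (NTK parametrization, no bias, unit-norm inputs), not the paper's code; the function names are ours.

```python
import numpy as np

def scaled_relu_duals(rho):
    """Dual maps of the scaled ReLU sqrt(2)*max(0, x) for a pair of standard
    Gaussians with correlation rho (Cho-Saul arc-cosine kernel, degree 1)."""
    rho = float(np.clip(rho, -1.0, 1.0))
    k = (np.sqrt(1.0 - rho**2) + rho * (np.pi - np.arccos(rho))) / np.pi  # E[sig(u) sig(v)]
    k_dot = (np.pi - np.arccos(rho)) / np.pi                              # E[sig'(u) sig'(v)]
    return k, k_dot

def scaled_ntk_correlation(rho0, n_layers):
    """Off-diagonal value of Theta^(L) / L for two unit-norm inputs with
    inner product rho0, via the infinite-width kernel recursion."""
    sigma, theta = rho0, rho0            # layer-1 activation kernel and NTK
    for _ in range(n_layers - 1):
        k, k_dot = scaled_relu_duals(sigma)
        theta = k + theta * k_dot        # Theta^(l+1) = Sigma^(l+1) + Theta^(l) * Sigma_dot^(l+1)
        sigma = k                        # Sigma^(l+1); stays equal to 1 on the diagonal
    return theta / n_layers              # diagonal of Theta^(L) equals L in this setup

if __name__ == "__main__":
    for L in (2, 5, 20, 100, 1000):
        print(L, round(scaled_ntk_correlation(rho0=0.2, n_layers=L), 4))
```

As the depth grows, the printed value drifts toward 1, i.e. the scaled NTK becomes essentially the same for every pair of inputs: the frozen behavior described in the abstract. In a chaotic regime the off-diagonal value would instead decay toward 0, giving the Kronecker-delta limit.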
| Original language | English (US) |
|---|---|
| Pages (from-to) | 257-270 |
| Number of pages | 14 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 190 |
| State | Published - 2022 |
| Event | 3rd Annual Conference on Mathematical and Scientific Machine Learning, MSML 2022, Beijing, China |
| Duration | Aug 15 2022 → Aug 17 2022 |
Keywords
- Chaos
- Checkerboard patterns
- Freeze
- GANs
- NTK
- Order
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability