Large Batch Training of Convolutional Networks


0. TLDR

Generally speaking, large mini-batches improve training speed but worsen accuracy / generalization. The idea is to use higher learning rates with larger mini-batches, combined with a warmup period for the learning rate at the beginning of training. This seems to preserve accuracy while speeding up training.

1. Background

Typically, stochastic gradient descent (SGD) and its variants are used to train deep learning models, and these methods make updates to the model parameters based on a mini-batch of data. Smaller mini-batches can result in noisy gradient estimates, which can help avoid local minima, but also slow down convergence. Larger mini-batches can provide a more accurate gradient estimate and allow for higher computational efficiency due to parallelism, but they often lead to poorer generalization performance.
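As a reminder (standard background rather than something specific to this paper), a mini-batch SGD step with mini-batch $B_t$ of size $n$ and learning rate $\eta$ updates the weights as:

$$
w_{t+1} = w_t - \eta \cdot \frac{1}{n} \sum_{x \in B_t} \nabla_w \, \ell(x, w_t)
$$

Averaging over a larger $n$ lowers the variance of the gradient estimate, which is the noise/accuracy trade-off described above.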

2. Problem

The authors focus on the problem of maintaining model performance while increasing the mini-batch size. They aim to leverage the computational benefits of large mini-batches without compromising the final model accuracy.

3. Methodology

The authors propose a new learning rate scaling rule for large mini-batch training. The rule is straightforward: when the mini-batch size is multiplied by $k$, the learning rate should also be multiplied by $k$. This is in contrast to the conventional wisdom that the learning rate should be independent of the mini-batch size.
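A minimal sketch of the scaling rule, assuming a reference setting of `base_lr` at `base_batch_size` (names are illustrative, not from the paper):

```python
def scaled_lr(base_lr: float, base_batch_size: int, batch_size: int) -> float:
    """Linear scaling rule: if the mini-batch is k times larger than the
    reference batch, use a k times larger learning rate."""
    k = batch_size / base_batch_size
    return base_lr * k

# Example: 0.1 at batch 256 scales to 3.2 at batch 8192.
print(scaled_lr(base_lr=0.1, base_batch_size=256, batch_size=8192))  # 3.2
```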

However, simply applying this scaling rule at the beginning of training can result in instability or divergence. To mitigate this, the authors propose a warmup strategy where the learning rate is initially small, then increased to its 'scaled' value over a number of epochs.

In mathematical terms, the proposed learning rate schedule is given by:

$$
\eta(\text{epoch}) =
\begin{cases}
\eta_{\text{base}} \cdot \dfrac{\text{epoch}}{5} & \text{if epoch} \le 5 \\[6pt]
\eta_{\text{base}} \cdot \left(1 - \dfrac{\text{epoch}}{\text{total epochs}}\right) & \text{if epoch} > 5
\end{cases}
$$

where $\eta_{\text{base}}$ is the base learning rate.
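A minimal sketch of this schedule as a Python function (the name `learning_rate` and the 5-epoch warmup default are illustrative, following the formula above rather than the paper's exact implementation):

```python
def learning_rate(epoch: int, base_lr: float, total_epochs: int,
                  warmup_epochs: int = 5) -> float:
    """Piecewise schedule from the formula above: linear warmup up to
    base_lr over the first `warmup_epochs` epochs, then linear decay."""
    if epoch <= warmup_epochs:
        return base_lr * epoch / warmup_epochs
    return base_lr * (1.0 - epoch / total_epochs)
```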

4. Experiments and Results

The authors conducted experiments on ImageNet with a variety of CNN architectures, including AlexNet, VGG, and ResNet. They found that their proposed learning rate scaling rule and warmup strategy allowed them to increase the mini-batch size up to 32,000 without compromising model accuracy.

Moreover, they were able to achieve a training speedup nearly proportional to the increase in mini-batch size. For example, using a mini-batch size of 8192, training AlexNet and ResNet-50 on ImageNet was 6.3x and 5.3x faster, respectively, compared to using a mini-batch size of 256.

5. Implications

The paper has significant implications for the training of deep learning models, particularly in scenarios where computational resources are abundant but time is a constraint. By allowing for successful training with large mini-batches, the proposed methods can significantly speed up the training process.

Furthermore, the paper challenges conventional wisdom on the relationship between the learning rate and the mini-batch size, which could stimulate further research into the optimization dynamics of deep learning models.

However, it's worth noting that the proposed methods may not be applicable or beneficial in all scenarios. For example, hardware memory can cap the feasible mini-batch size, and very large batches can still hurt generalization despite warmup.

Plotting the proposed learning rate schedule, with the number of epochs on the x-axis and the learning rate on the y-axis, makes the warmup easy to see.

The learning rate starts small and increases linearly over the first 5 epochs; after the 5th epoch, it gradually decreases for the remaining epochs. This initial ramp is the 'warmup' strategy proposed by the authors.

In the context of this plot, $\eta_{\text{base}}$ is set to 0.1, and the total number of epochs is 90, which aligns with typical settings for training deep learning models on ImageNet.
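Assuming the `learning_rate` helper sketched above, such a plot can be reproduced in a few lines with matplotlib:

```python
import matplotlib.pyplot as plt

epochs = list(range(1, 91))   # 90 training epochs
lrs = [learning_rate(e, base_lr=0.1, total_epochs=90) for e in epochs]

plt.plot(epochs, lrs)
plt.xlabel("epoch")
plt.ylabel("learning rate")
plt.title("Linear warmup for 5 epochs, then linear decay")
plt.show()
```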

This learning rate schedule is one of the key contributions of the paper, and it's a strategy that has since been widely adopted in the training of deep learning models, particularly when using large mini-batches.

Overall, while large-batch training might not always be feasible or beneficial due to memory limitations or the risk of poor generalization, it presents a valuable tool for situations where time efficiency is critical and computational resources are abundant. Furthermore, the insights from this paper about the interplay between batch size and learning rate have broadened our understanding of the optimization dynamics in deep learning.



Tags: optimization, 2018