Learning Rate Scheduler in Keras

Moklesur Rahman
7 min read · Jan 23, 2023

The learning rate is one of the most important hyperparameters for training deep learning models, but choosing a good value can be difficult. Rather than simply using a fixed learning rate, it is common to use a learning rate scheduler.

(Image credit: pyimagesearch.com)

When training a deep learning model with Keras or TensorFlow, optimizers such as Adam keep the base learning rate fixed throughout training by default. Decreasing the learning rate over time, whether per step, per batch, or per epoch, can boost the model's performance. This practice is known as "learning rate scheduling" or "learning rate annealing", and Keras provides several built-in schedulers for annealing the learning rate during training. The following subsections discuss and implement different scheduling approaches, using a common deep learning model on the MNIST dataset.
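For instance, Keras exposes schedule objects under keras.optimizers.schedules that can be passed to an optimizer in place of a fixed learning rate. Below is a minimal sketch using ExponentialDecay; the specific decay values are illustrative assumptions, not taken from the article:

from tensorflow import keras

# Exponential decay: lr = 1e-3 * 0.9 ** (step / 1000)
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,  # illustrative starting rate
    decay_steps=1000,            # measured in optimizer update steps
    decay_rate=0.9)
optimizer = keras.optimizers.Adam(learning_rate=lr_schedule)

With this setup, the optimizer recomputes the learning rate at every update step, so no extra callback is needed during training.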

Importing the MNIST dataset:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets
import numpy as np
from tensorflow.keras.utils import to_categorical

# Load MNIST and add a channel dimension for convolutional layers
(X_train, Y_train), (X_test, Y_test) = datasets.mnist.load_data()
X_train, X_test = X_train.reshape(-1, 28, 28, 1), X_test.reshape(-1, 28, 28, 1)
# Scale pixel values to [0, 1] and one-hot encode the labels
X_train, X_test = X_train / 255.0, X_test / 255.0
Y_train, Y_test = to_categorical(Y_train, 10), to_categorical(Y_test, 10)
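
With the data prepared, a schedule can also be applied through a callback. The sketch below trains a small classifier with keras.callbacks.LearningRateScheduler, which calls a user-defined function with the epoch index and current learning rate and applies the returned value; the architecture, decay factor, and epoch counts here are illustrative assumptions:

# Build a small classifier for MNIST (architecture is illustrative)
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28, 1)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Step decay: halve the learning rate every 5 epochs
def step_decay(epoch, lr):
    if epoch > 0 and epoch % 5 == 0:
        return lr * 0.5
    return lr

lr_callback = keras.callbacks.LearningRateScheduler(step_decay, verbose=1)
model.fit(X_train, Y_train,
          epochs=15, batch_size=128,
          validation_data=(X_test, Y_test),
          callbacks=[lr_callback])

Because the callback receives the current learning rate each epoch, the same pattern works for any epoch-based schedule: simply change what the function returns.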
