dlpy.lr_scheduler.MultiStepLR
class dlpy.lr_scheduler.MultiStepLR(learning_rate, gamma, steps)
Bases: dlpy.lr_scheduler._LRScheduler
Multiple-step learning rate scheduler. The initial learning rate is decayed by gamma each time the epoch count reaches one of the specified steps.

Example:
# reduce the learning rate by a factor of 0.1 at the 20th, 50th, and 80th epochs
lr_scheduler = MultiStepLR(learning_rate=0.0001, gamma=0.1, steps=[20, 50, 80])
solver = MomentumSolver(lr_scheduler=lr_scheduler, clip_grad_max=100, clip_grad_min=-100)

Parameters:
- learning_rate : double, optional
Specifies the initial learning rate.
- gamma : double, optional
Specifies the multiplicative factor (gamma) by which the learning rate is decayed at each step.
- steps : list-of-ints, optional
Specifies a list of epoch counts. When the current epoch matches one of the specified steps, the learning rate is multiplied by the value of the gamma parameter. For example, if you specify {5, 9, 13}, then the learning rate is multiplied by gamma after the fifth, ninth, and thirteenth epochs (see the sketch after this parameter list).
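The decay is multiplicative: each time a step boundary is passed, the current rate is scaled by gamma. The following is a minimal sketch in plain Python (not dlpy code) that prints the effective rate per epoch for the {5, 9, 13} example above; whether dlpy applies the decay at or strictly after the listed epoch is an internal detail, so the boundary condition here is illustrative only.

# minimal sketch, not part of dlpy: effective learning rate per epoch
learning_rate = 0.0001   # initial learning rate
gamma = 0.1              # decay factor
steps = [5, 9, 13]       # epochs after which the rate is decayed

for epoch in range(1, 16):
    # number of step boundaries this epoch has already passed
    n_decays = sum(1 for s in steps if epoch > s)
    effective_lr = learning_rate * gamma ** n_decays
    print(f"epoch {epoch:2d}: lr = {effective_lr:.7f}")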
__init__(learning_rate, gamma, steps)
Initialize self. See help(type(self)) for accurate signature.
Methods
- __init__(learning_rate, gamma, steps)
  Initialize self.
- clear()
- get(k[, d])
- items()
- keys()
- pop(k[, d])
  If the key is not found, d is returned if given, otherwise KeyError is raised.
- popitem()
  Remove and return some (key, value) pair as a 2-tuple; raise KeyError if the dictionary is empty.
- setdefault(k[, d])
- update([E, ]**F)
  If E is present and has a .keys() method, does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, does: for (k, v) in E: D[k] = v. In either case, this is followed by: for k, v in F.items(): D[k] = v.
- values()
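Apart from __init__, these are standard dictionary methods, which indicates the scheduler stores its settings as a mapping. Below is a minimal sketch of inspecting a configured scheduler through that dict-style interface; the exact keys it exposes are an internal detail of dlpy and are not guaranteed here.

from dlpy.lr_scheduler import MultiStepLR

# construct the scheduler exactly as in the example above
lr_scheduler = MultiStepLR(learning_rate=0.0001, gamma=0.1, steps=[20, 50, 80])

# the dict-style methods listed above allow the configured settings to be
# inspected like a mapping; the key names are internal to dlpy
for key, value in lr_scheduler.items():
    print(key, value)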