dlpy.lr_scheduler.StepLR

class dlpy.lr_scheduler.StepLR(learning_rate=0.001, gamma=0.1, step_size=10)

Step learning rate scheduler. The learning rate is reduced by a factor (gamma) at fixed intervals (step_size).

Example:

# reduce learning rate every 2 epochs
lr_scheduler = StepLR(learning_rate=0.0001, gamma=0.1, step_size=2)
solver = MomentumSolver(lr_scheduler=lr_scheduler, clip_grad_max=100, clip_grad_min=-100)
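Under the STEP policy, the effective learning rate after e epochs is learning_rate * gamma ** (e // step_size). A minimal sketch of the resulting schedule (plain Python; the step_lr helper is illustrative only, not part of DLPy):

# Effective learning rate under the STEP policy (illustrative helper):
# lr(e) = learning_rate * gamma ** (e // step_size)
def step_lr(epoch, learning_rate=0.0001, gamma=0.1, step_size=2):
    return learning_rate * gamma ** (epoch // step_size)

# epochs 0-1 train at 1e-4, epochs 2-3 at 1e-5, epochs 4-5 at 1e-6, ...
for epoch in range(6):
    print(epoch, step_lr(epoch))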

Parameters
learning_rate : double, optional

Specifies the initial learning rate.

gamma : double, optional

Specifies the multiplicative factor (gamma) by which the learning rate is reduced at each step.

step_size : int, optional

Specifies the number of epochs between learning rate reductions when the learning rate policy is set to STEP.

Returns
StepLR
__init__(learning_rate=0.001, gamma=0.1, step_size=10)

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__([learning_rate, gamma, step_size])

Initialize self.

clear()

get(k[,d])

items()

keys()

pop(k[,d])

Remove specified key and return the corresponding value. If key is not found, d is returned if given, otherwise KeyError is raised.

popitem()

Remove and return some (key, value) pair as a 2-tuple; but raise KeyError if D is empty.

setdefault(k[,d])

update([E, ]**F)

If E is present and has a .keys() method, does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, does: for (k, v) in E: D[k] = v. In either case, this is followed by: for k, v in F.items(): D[k] = v.

values()
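Judging from the methods listed above, StepLR exposes the standard mapping interface, so a configured scheduler can be inspected like a dictionary. A minimal sketch (the exact key names stored by DLPy are not guaranteed, so iterating with items() avoids assuming them):

from dlpy.lr_scheduler import StepLR

lr_scheduler = StepLR(learning_rate=0.0001, gamma=0.1, step_size=2)

# Inspect the stored option/value pairs without assuming key names.
for key, value in lr_scheduler.items():
    print(key, '=', value)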