dlpy.lr_scheduler.ReduceLROnPlateau

class dlpy.lr_scheduler.ReduceLROnPlateau(conn, learning_rate, gamma=0.1, cool_down_iters=10, patience=10)

Reduce-learning-rate-on-plateau learning rate scheduler. Reduces the learning rate when the loss has stopped improving for a given number of epochs (patience). Example:

from dlpy.lr_scheduler import ReduceLROnPlateau
from dlpy.model import MomentumSolver

# sess is an existing swat.CAS connection
lr_scheduler = ReduceLROnPlateau(conn=sess, cool_down_iters=2, gamma=0.1,
                                 learning_rate=0.01, patience=3)
solver = MomentumSolver(lr_scheduler=lr_scheduler, clip_grad_max=100, clip_grad_min=-100)
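The solver is then passed to an Optimizer, which drives Model.fit. A minimal sketch of that hand-off, assuming an existing CAS session sess, a pre-built model table, and a loaded training table (the table names here are hypothetical):

from dlpy.model import Model, Optimizer

optimizer = Optimizer(algorithm=solver, mini_batch_size=64, max_epochs=50)
model = Model.from_table(sess, model_table='my_model_table')  # hypothetical model table
model.fit(data='train_data', optimizer=optimizer)             # hypothetical training table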

Parameters
conn : CAS

Specifies the CAS connection object.

learning_rate : double, optional

Specifies the initial learning rate.

gamma : double, optional

Specifies the factor by which the learning rate is multiplied each time it is reduced.

cool_down_iters : int, optional

Specifies the number of iterations to wait before resuming normal operation after the learning rate has been reduced.

patience : int, optional

Specifies the number of epochs with no improvement after which the learning rate will be reduced (see the sketch after the Returns section for how patience, gamma, and cool_down_iters interact).

Returns
ReduceLROnPlateau
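To illustrate how learning_rate, gamma, patience, and cool_down_iters interact, here is a plain-Python sketch of the reduce-on-plateau rule. This is an illustrative approximation, not the CAS-side implementation; details such as the improvement threshold may differ.

def reduce_on_plateau(losses, learning_rate, gamma=0.1,
                      cool_down_iters=10, patience=10):
    """Yield the learning rate used at each epoch, given a loss history."""
    lr = learning_rate
    best = float('inf')
    bad_epochs = 0    # consecutive epochs without improvement
    cool_down = 0     # epochs remaining in the cool-down window
    for loss in losses:
        yield lr
        if loss < best:
            best = loss
            bad_epochs = 0
        elif cool_down > 0:
            cool_down -= 1             # stalls are ignored while cooling down
        else:
            bad_epochs += 1
            if bad_epochs > patience:  # plateau detected
                lr *= gamma            # multiply the rate by gamma
                bad_epochs = 0
                cool_down = cool_down_iters

# With patience=3, the rate drops by a factor of gamma=0.1 once the loss
# has failed to improve for more than three consecutive epochs:
losses = [0.9, 0.8, 0.79, 0.79, 0.79, 0.79, 0.79, 0.79]
print(list(reduce_on_plateau(losses, learning_rate=0.01, gamma=0.1,
                             cool_down_iters=2, patience=3)))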
__init__(conn, learning_rate, gamma=0.1, cool_down_iters=10, patience=10)

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__(conn, learning_rate[, gamma, …])

Initialize self.

clear()

Remove all items from the dictionary.

get(k[, d])

Return the value for k if k is in the dictionary, else d (which defaults to None).

items()

Return a view of the dictionary's (key, value) pairs.

keys()

Return a view of the dictionary's keys.

pop(k[, d])

Remove k and return its value. If k is not found, d is returned if given, otherwise KeyError is raised.

popitem()

Remove and return a (key, value) pair as a 2-tuple; raise KeyError if the dictionary is empty.

setdefault(k[, d])

Return the value for k if k is in the dictionary; otherwise insert k with value d and return d.

update([E, ]**F)

If E is present and has a .keys() method, does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, does: for (k, v) in E: D[k] = v. In either case, this is followed by: for k, v in F.items(): D[k] = v.

values()

Return a view of the dictionary's values.
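Because ReduceLROnPlateau inherits the mapping interface listed above, a configured scheduler can be inspected like an ordinary dictionary. A small sketch (the exact keys stored are an internal detail of DLPy and may vary by release):

lr_scheduler = ReduceLROnPlateau(conn=sess, learning_rate=0.01, gamma=0.1,
                                 cool_down_iters=2, patience=3)
for key, value in lr_scheduler.items():  # standard dict iteration
    print(key, value)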