lsst.scarlet.lite gee10cc3b42+772f6ae910
Public Member Functions

    __init__(self, np.ndarray x, Callable|float step, Callable|None grad=None, Callable|None prox=None, float b1=0.9, float b2=0.999, float eps=1e-8, float p=0.25, np.ndarray|None m0=None, np.ndarray|None v0=None, np.ndarray|None vhat0=None, str scheme="amsgrad", float prox_e_rel=1e-6)
    update(self, int it, np.ndarray input_grad, *args)

Public Member Functions inherited from lsst.scarlet.lite.parameters.Parameter

    float step(self)
    tuple[int, ...] shape(self)
    npt.DTypeLike dtype(self)
    Parameter copy(self)
    resize(self, Box old_box, Box new_box)

Public Attributes

    b1
    b2
    eps
    p
    phi_psi
    e_rel
    x

Public Attributes inherited from lsst.scarlet.lite.parameters.Parameter

    x
    helpers
    grad
    prox

Additional Inherited Members

    _step
Operator updated using the Proximal ADAM algorithm.

Supports multiple variants of adaptive quasi-Newton gradient descent:

* Adam (Kingma & Ba 2015)
* NAdam (Dozat 2016)
* AMSGrad (Reddi, Kale & Kumar 2018)
* PAdam (Chen & Gu 2018)
* AdamX (Phuong & Phong 2019)
* RAdam (Liu et al. 2019)

See the respective papers for details of each algorithm.
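As a rough illustration of what a proximal-ADAM update computes, here is a minimal NumPy sketch of one AMSGrad-style step followed by a proximal projection. This is not the lsst.scarlet.lite implementation; the step size, the power `p`, and the nonnegativity prox used below are all illustrative assumptions.

```python
import numpy as np

def adaprox_step(x, grad, m, v, vhat, it, step=0.1, b1=0.9, b2=0.999,
                 eps=1e-8, p=0.5, prox=None):
    """One AMSGrad-style proximal update (illustrative sketch only).

    m and v are running first/second moment estimates, vhat is the AMSGrad
    running maximum of v, and prox is an optional proximal operator.
    """
    m = b1 * m + (1 - b1) * grad              # first moment (momentum)
    v = b2 * v + (1 - b2) * grad**2           # second moment
    vhat = np.maximum(vhat, v)                # AMSGrad: non-decreasing v
    m_hat = m / (1 - b1**it)                  # Adam-style bias corrections
    v_hat = vhat / (1 - b2**it)
    x = x - step * m_hat / (v_hat**p + eps)   # PAdam generalizes the power p
    if prox is not None:
        x = prox(x)                           # proximal projection
    return x, m, v, vhat

# Minimize 0.5 * ||x - target||^2 subject to x >= 0 (nonnegativity prox).
target = np.array([1.0, -2.0])
x = np.zeros(2)
m, v, vhat = np.zeros(2), np.zeros(2), np.zeros(2)
for it in range(1, 501):                      # it starts at 1 for bias terms
    x, m, v, vhat = adaprox_step(x, x - target, m, v, vhat, it,
                                 prox=lambda z: np.maximum(z, 0.0))
# x should approach [1, 0], the nonnegative projection of the target
```

The prox is applied after every adaptive gradient step, so constraints (here nonnegativity) hold at every iterate, not just at convergence.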
lsst.scarlet.lite.parameters.AdaproxParameter.__init__(
    self,
    np.ndarray x,
    Callable | float step,
    Callable | None grad = None,
    Callable | None prox = None,
    float b1 = 0.9,
    float b2 = 0.999,
    float eps = 1e-8,
    float p = 0.25,
    np.ndarray | None m0 = None,
    np.ndarray | None v0 = None,
    np.ndarray | None vhat0 = None,
    str scheme = "amsgrad",
    float prox_e_rel = 1e-6,
)
Reimplemented from lsst.scarlet.lite.parameters.Parameter.
lsst.scarlet.lite.parameters.AdaproxParameter.update(
    self,
    int it,
    np.ndarray input_grad,
    *args,
)
Update the parameter and meta-parameters using the proximal gradient method (PGM). See `~Parameter` for more.
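For contrast with the adaptive variants above, a single plain proximal-gradient (PGM) step can be sketched in a few lines. This is an illustrative NumPy sketch, not the library's `update` method; the soft-thresholding prox and the numbers are assumptions chosen for the example.

```python
import numpy as np

def pgm_step(x, grad, step, prox=lambda z: z):
    # plain PGM: a gradient step followed by the proximal operator
    return prox(x - step * grad)

# soft-thresholding prox for an L1 penalty (illustrative choice)
def soft(z, thresh=0.05):
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

x = np.array([0.8, -0.02])
grad = x - np.array([1.0, 0.0])   # gradient of 0.5 * ||x - [1, 0]||^2
x = pgm_step(x, grad, step=0.5, prox=soft)
# result is approximately [0.85, 0.0]: small components are zeroed
```

The Adam-family schemes replace the fixed `step * grad` term with a momentum-averaged, variance-rescaled direction, but keep the same trailing proximal projection.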
Reimplemented from lsst.scarlet.lite.parameters.Parameter.