NNFS
Neural network library from scratch
NNFS::Adam Class Reference

Adam optimizer - Adaptive Moment Estimation, one of the most popular and efficient gradient-based optimization algorithms.
#include <Adam.hpp>
Public Member Functions

  Adam (double lr=1e-3, double decay=.0, double epsilon=1e-7, double beta_1=.9, double beta_2=.999)
      Construct a new Adam object.

  void update_params (std::shared_ptr< Dense > &layer)
      Update the parameters of the layer.
Public Member Functions inherited from NNFS::Optimizer

  Optimizer (double lr, double decay)
      Construct a new Optimizer object.

  virtual ~Optimizer ()=default
      Basic destructor.

  virtual void update_params (std::shared_ptr< Dense > &layer)=0
      Update the parameters of the layer.

  void pre_update_params ()
      Pre-update parameters (e.g. learning rate decay).

  void post_update_params ()
      Post-update parameters (e.g. increase iteration count).

  double & current_lr ()
      Get the current learning rate.

  int & iterations ()
      Get the current iteration count.
Additional Inherited Members

Protected Attributes inherited from NNFS::Optimizer

  const double _lr
  double _current_lr
  int _iterations
  double _decay
Detailed Description

Adam optimizer - Adaptive Moment Estimation, one of the most popular and efficient gradient-based optimization algorithms.

This class implements the Adam optimizer, which maintains exponentially decaying estimates of the first and second moments of each parameter's gradient and uses them to scale every update step adaptively.
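For reference, the update rule of standard Adam (Kingma & Ba, 2014) is sketched below in conventional notation; this class is expected to follow the same scheme using the beta_1, beta_2 and epsilon values passed to the constructor, although implementation details (such as exactly where epsilon is added) may differ slightly.

```latex
% Standard Adam update for a parameter \theta with gradient g_t at step t
m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t            % first-moment (mean) estimate
v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2          % second-moment (uncentered variance) estimate
\hat{m}_t = \frac{m_t}{1-\beta_1^{\,t}}, \qquad
\hat{v}_t = \frac{v_t}{1-\beta_2^{\,t}}              % bias correction for zero initialization
\theta_t = \theta_{t-1} - \mathrm{lr}\,\frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}   % parameter step
```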
Constructor & Destructor Documentation

Adam (double lr=1e-3, double decay=.0, double epsilon=1e-7, double beta_1=.9, double beta_2=.999)  [inline]
Construct a new Adam object.
Parameters:
  lr       Learning rate (default: 1e-3)
  decay    Learning rate decay (default: 0.0)
  epsilon  Epsilon value to avoid division by zero (default: 1e-7)
  beta_1   Exponential decay rate for the first moment estimates (default: 0.9)
  beta_2   Exponential decay rate for the second moment estimates (default: 0.999)
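A minimal usage sketch based only on the members listed on this page: it assumes a std::shared_ptr<NNFS::Dense> layer obtained elsewhere and that the layer already holds gradients from a backward pass (how Dense is constructed and where it stores its gradients is not documented here).

```cpp
#include <memory>
#include <Adam.hpp>

// One optimization step for a single layer (sketch; the Dense layer is assumed
// to come from the rest of the library and to already contain gradients
// computed by a preceding backward pass).
void train_step(std::shared_ptr<NNFS::Dense> &layer)
{
    // Defaults match the constructor: lr=1e-3, decay=0.0, epsilon=1e-7,
    // beta_1=0.9, beta_2=0.999. Here a small learning-rate decay is used.
    NNFS::Adam optimizer(1e-3, 1e-4);

    optimizer.pre_update_params();   // apply learning-rate decay
    optimizer.update_params(layer);  // Adam update of the layer's parameters
    optimizer.post_update_params();  // advance the iteration counter

    double lr_now = optimizer.current_lr(); // decayed learning rate after this step
    int steps = optimizer.iterations();     // number of steps taken so far
    (void)lr_now;
    (void)steps;
}
```

In a real training loop the optimizer would be constructed once and the pre_update_params / update_params / post_update_params cycle repeated for every layer on every step.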
Member Function Documentation

void update_params (std::shared_ptr< Dense > &layer)  [inline, virtual]
Update the parameters of the layer.
Parameters:
  [in,out]  layer  Layer to update
Implements NNFS::Optimizer.
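To make the bookkeeping behind this method concrete, here is a self-contained, illustrative Adam step on a flat parameter array. It is not the library's internal implementation (the Dense layer's weight and gradient accessors are not documented on this page); it only shows the per-parameter computation an Adam update_params is expected to perform.

```cpp
#include <cmath>
#include <vector>

// Illustrative Adam step on a flat parameter vector (not NNFS library code).
// m and v hold the running first/second moment estimates for each parameter,
// and t is the 1-based iteration count used for bias correction.
void adam_step(std::vector<double> &params, const std::vector<double> &grads,
               std::vector<double> &m, std::vector<double> &v, int t,
               double lr = 1e-3, double beta_1 = 0.9, double beta_2 = 0.999,
               double epsilon = 1e-7)
{
    for (std::size_t i = 0; i < params.size(); ++i)
    {
        // Exponentially decaying averages of the gradient and squared gradient
        m[i] = beta_1 * m[i] + (1.0 - beta_1) * grads[i];
        v[i] = beta_2 * v[i] + (1.0 - beta_2) * grads[i] * grads[i];

        // Bias correction compensates for the zero initialization of m and v
        double m_hat = m[i] / (1.0 - std::pow(beta_1, t));
        double v_hat = v[i] / (1.0 - std::pow(beta_2, t));

        // Parameter step, scaled per-parameter by the second-moment estimate
        params[i] -= lr * m_hat / (std::sqrt(v_hat) + epsilon);
    }
}
```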