qualia2.nn package¶
Subpackages¶
- qualia2.nn.modules package
- Submodules
- qualia2.nn.modules.activation module
- qualia2.nn.modules.conv module
- qualia2.nn.modules.dropout module
- qualia2.nn.modules.linear module
- qualia2.nn.modules.module module
- qualia2.nn.modules.normalize module
- qualia2.nn.modules.pool module
- qualia2.nn.modules.recurrent module
- qualia2.nn.modules.sparse module
- Module contents
Submodules¶
qualia2.nn.init module¶
qualia2.nn.init.calculate_gain(nonlinearity, param=None)[source]¶
Return the recommended gain value for the given nonlinearity function. The values are as follows:
- Linear / Identity: \(1\)
- Conv{1,2,3}D: \(1\)
- Sigmoid: \(1\)
- Tanh: \(\frac{5}{3}\)
- ReLU: \(\sqrt{2}\)
- Leaky ReLU: \(\sqrt{\frac{2}{1 + \text{negative\_slope}^2}}\)
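For instance, matching the table above (and assuming, as the Leaky ReLU row suggests, that param supplies the negative slope, which this page does not state explicitly):
>>> nn.init.calculate_gain('relu')                   # sqrt(2) ≈ 1.414
>>> nn.init.calculate_gain('leaky_relu', param=0.2)  # sqrt(2 / (1 + 0.2**2)) ≈ 1.387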
qualia2.nn.init.kaiming_normal_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu')[source]¶
Fills the input Tensor with values according to the method described in “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification” - He, K. et al. (2015), using a normal distribution. The resulting tensor will have values sampled from \(\mathcal{N}(0, \text{std})\) where
\[\text{std} = \sqrt{\frac{2}{(1 + a^2) \times \text{fan\_in}}}\]
Also known as He initialization.
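As a worked example: with nonlinearity='relu' (so \(a = 0\)) and \(\text{fan\_in} = 100\), the formula gives \(\text{std} = \sqrt{2/100} \approx 0.141\).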
Args:
- tensor (Tensor): an n-dimensional Tensor
- a (float): the negative slope of the rectifier used after this layer (0 for ReLU by default)
- mode (str): either ‘fan_in’ (default) or ‘fan_out’. Choosing ‘fan_in’ preserves the magnitude of the variance of the weights in the forward pass. Choosing ‘fan_out’ preserves the magnitudes in the backward pass.
- nonlinearity (str): the name of the non-linear function; recommended for use only with ‘relu’ or ‘leaky_relu’ (default).
Examples:
>>> w = qualia2.empty(3, 5)
>>> nn.init.kaiming_normal_(w, mode='fan_out', nonlinearity='relu')
qualia2.nn.init.kaiming_uniform_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu')[source]¶
Fills the input Tensor with values according to the method described in “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification” - He, K. et al. (2015), using a uniform distribution. The resulting tensor will have values sampled from \(\mathcal{U}(-\text{bound}, \text{bound})\) where
\[\text{bound} = \sqrt{\frac{6}{(1 + a^2) \times \text{fan\_in}}}\]
Also known as He initialization.
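As a worked example: with nonlinearity='relu' (so \(a = 0\)) and \(\text{fan\_in} = 100\), \(\text{bound} = \sqrt{6/100} \approx 0.245\), so values are drawn uniformly from \((-0.245, 0.245)\).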
Args:
- tensor (Tensor): an n-dimensional Tensor
- a (float): the negative slope of the rectifier used after this layer (0 for ReLU by default)
- mode (str): either ‘fan_in’ (default) or ‘fan_out’. Choosing ‘fan_in’ preserves the magnitude of the variance of the weights in the forward pass. Choosing ‘fan_out’ preserves the magnitudes in the backward pass.
- nonlinearity (str): the name of the non-linear function; recommended for use only with ‘relu’ or ‘leaky_relu’ (default).
Examples:
>>> w = qualia2.empty(3, 5)
>>> nn.init.kaiming_uniform_(w, mode='fan_in', nonlinearity='relu')
qualia2.nn.init.xavier_normal_(tensor, gain=1)[source]¶
Fills the input Tensor with values according to the method described in “Understanding the difficulty of training deep feedforward neural networks” - Glorot, X. & Bengio, Y. (2010), using a normal distribution. The resulting tensor will have values sampled from \(\mathcal{N}(0, \text{std})\) where
\[\text{std} = \text{gain} \times \sqrt{\frac{2}{\text{fan\_in} + \text{fan\_out}}}\]
Also known as Glorot initialization.
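As a worked example: with \(\text{gain} = 1\), \(\text{fan\_in} = 120\), and \(\text{fan\_out} = 80\), \(\text{std} = \sqrt{2/200} = 0.1\).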
Args:
- tensor (Tensor): an n-dimensional Tensor
- gain (float): an optional scaling factor
Examples:
>>> w = qualia2.empty(3, 5)
>>> nn.init.xavier_normal_(w)
qualia2.nn.init.xavier_uniform_(tensor, gain=1)[source]¶
Fills the input Tensor with values according to the method described in “Understanding the difficulty of training deep feedforward neural networks” - Glorot, X. & Bengio, Y. (2010), using a uniform distribution. The resulting tensor will have values sampled from \(\mathcal{U}(-a, a)\) where
\[a = \text{gain} \times \sqrt{\frac{6}{\text{fan\_in} + \text{fan\_out}}}\]
Also known as Glorot initialization.
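As a worked example: with \(\text{gain} = 1\), \(\text{fan\_in} = 120\), and \(\text{fan\_out} = 80\), \(a = \sqrt{6/200} \approx 0.173\), so values are drawn uniformly from \((-0.173, 0.173)\).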
Args:
- tensor (Tensor): an n-dimensional Tensor
- gain (float): an optional scaling factor
Examples:
>>> w = qualia2.empty(3, 5)
>>> nn.init.xavier_uniform_(w, gain=nn.init.calculate_gain('relu'))
qualia2.nn.optim module¶
class qualia2.nn.optim.AdaGrad(parameters, lr=0.001, eps=1e-08, weight_decay=0)[source]¶
Bases: qualia2.nn.optim.Optimizer
Implements the Adagrad algorithm.
Args:
- parameters (iterable): iterable of parameters to optimize
- lr (float): learning rate. Default: 1e-03
- eps (float): for numerical stability. Default: 1e-08
- weight_decay (float): weight decay (L2 penalty). Default: 0
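For reference, the canonical Adagrad update (Duchi et al., 2011) accumulates squared gradients and scales each step by their root; this page does not spell out the exact form used here, so details may differ:
\[G_t = G_{t-1} + g_t^2, \qquad \theta_t = \theta_{t-1} - \frac{\text{lr}}{\sqrt{G_t} + \epsilon}\, g_t\]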
class qualia2.nn.optim.AdaMax(parameters, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)[source]¶
Bases: qualia2.nn.optim.Optimizer
Implements the AdaMax algorithm.
Args:
- parameters (iterable): iterable of parameters to optimize
- lr (float): learning rate. Default: 1e-03
- betas (tuple of float): coefficients used for computing running averages of the gradient and its square. Default: (0.9, 0.999)
- eps (float): for numerical stability. Default: 1e-08
- weight_decay (float): weight decay (L2 penalty). Default: 0
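For reference, the canonical AdaMax update from the Adam paper (Kingma & Ba, 2015) replaces the second-moment average with an infinity-norm maximum; the exact form used here may differ:
\[m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t, \qquad u_t = \max(\beta_2 u_{t-1}, |g_t|), \qquad \theta_t = \theta_{t-1} - \frac{\text{lr}}{1 - \beta_1^t} \cdot \frac{m_t}{u_t}\]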
class qualia2.nn.optim.Adadelta(parameters, lr=1.0, decay_rate=0.9, eps=1e-08, weight_decay=0)[source]¶
Bases: qualia2.nn.optim.Optimizer
Implements the Adadelta algorithm.
Args:
- parameters (iterable): iterable of parameters to optimize
- lr (float): coefficient that scales delta before it is applied to the parameters. Default: 1.0
- decay_rate (float): coefficient used for computing a running average of squared gradients. Default: 0.9
- eps (float): for numerical stability. Default: 1e-08
- weight_decay (float): weight decay (L2 penalty). Default: 0
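For reference, the canonical Adadelta update (Zeiler, 2012), with \(\rho\) as the decay_rate; the exact form used here may differ:
\[E[g^2]_t = \rho E[g^2]_{t-1} + (1 - \rho)\, g_t^2, \qquad \Delta\theta_t = -\frac{\sqrt{E[\Delta\theta^2]_{t-1} + \epsilon}}{\sqrt{E[g^2]_t + \epsilon}}\, g_t\]
\[E[\Delta\theta^2]_t = \rho E[\Delta\theta^2]_{t-1} + (1 - \rho)\, \Delta\theta_t^2, \qquad \theta_t = \theta_{t-1} + \text{lr} \cdot \Delta\theta_t\]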
class qualia2.nn.optim.Adam(parameters, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)[source]¶
Bases: qualia2.nn.optim.Optimizer
Implements the Adam algorithm.
Args:
- parameters (iterable): iterable of parameters to optimize
- lr (float): learning rate. Default: 1e-03
- betas (tuple of float): coefficients used for computing running averages of the gradient and its square. Default: (0.9, 0.999)
- eps (float): for numerical stability. Default: 1e-08
- weight_decay (float): weight decay (L2 penalty). Default: 0
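For reference, the canonical Adam update (Kingma & Ba, 2015), with bias-corrected first and second moment estimates; the exact form used here may differ:
\[m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t, \qquad v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2\]
\[\hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1 - \beta_2^t}, \qquad \theta_t = \theta_{t-1} - \text{lr} \cdot \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}\]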
class qualia2.nn.optim.Nadam(parameters, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)[source]¶
Bases: qualia2.nn.optim.Optimizer
Implements the Nesterov-accelerated adaptive moment estimation (Nadam) algorithm.
Args:
- parameters (iterable): iterable of parameters to optimize
- lr (float): learning rate. Default: 1e-03
- betas (tuple of float): coefficients used for computing running averages of the gradient and its square. Default: (0.9, 0.999)
- eps (float): for numerical stability. Default: 1e-08
- weight_decay (float): weight decay (L2 penalty). Default: 0
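For reference, one common simplified form of the Nadam step (Dozat, 2016) applies Nesterov-style lookahead to Adam's bias-corrected first moment; the momentum schedule actually used here may differ:
\[\theta_t = \theta_{t-1} - \frac{\text{lr}}{\sqrt{\hat{v}_t} + \epsilon}\left(\beta_1 \hat{m}_t + \frac{(1 - \beta_1)\, g_t}{1 - \beta_1^t}\right)\]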
class qualia2.nn.optim.NovoGrad(parameters, lr=0.001, betas=(0.95, 0.98), eps=1e-08, weight_decay=0)[source]¶
Bases: qualia2.nn.optim.Optimizer
Implements the NovoGrad algorithm.
Args:
- parameters (iterable): iterable of parameters to optimize
- lr (float): learning rate. Default: 1e-03
- betas (tuple of float): coefficients used for computing running averages of the gradient and its square. Default: (0.95, 0.98)
- eps (float): for numerical stability. Default: 1e-08
- weight_decay (float): weight decay (L2 penalty). Default: 0
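For reference, NovoGrad (Ginsburg et al., 2019) keeps a per-layer second moment of the gradient norm rather than a per-weight second moment; sketched below in its published form, with \(d\) as the weight_decay coefficient, which may differ in detail from this implementation:
\[v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, \lVert g_t \rVert^2, \qquad m_t = \beta_1 m_{t-1} + \frac{g_t}{\sqrt{v_t} + \epsilon} + d\,\theta_{t-1}, \qquad \theta_t = \theta_{t-1} - \text{lr} \cdot m_t\]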
class qualia2.nn.optim.Optimizer(parameters)[source]¶
Bases: object
Base class for all optimizers.
Args:
- parameters (generator): parameters to optimize
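As a usage sketch, optimizers are constructed from a model's parameters and driven once per batch. The loop below assumes a PyTorch-style interface with zero_grad() and step() methods, plus a model.params accessor, a loader, and a criterion, none of which are confirmed by this page:
>>> optim = nn.optim.Adam(model.params, lr=1e-3)  # hypothetical model.params accessor
>>> for x, t in loader:                           # hypothetical data loader
...     optim.zero_grad()                         # assumed method: clear accumulated gradients
...     loss = criterion(model(x), t)             # hypothetical loss function
...     loss.backward()                           # backpropagate through the graph
...     optim.step()                              # assumed method: apply the update rule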
class qualia2.nn.optim.RAdam(parameters, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)[source]¶
Bases: qualia2.nn.optim.Optimizer
Implements the Rectified Adam (RAdam) algorithm.
Args:
- parameters (iterable): iterable of parameters to optimize
- lr (float): learning rate. Default: 1e-03
- betas (tuple of float): coefficients used for computing running averages of the gradient and its square. Default: (0.9, 0.999)
- eps (float): for numerical stability. Default: 1e-08
- weight_decay (float): weight decay (L2 penalty). Default: 0
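For reference, RAdam (Liu et al., 2019) rectifies the variance of Adam's adaptive learning rate during early steps: while the approximated variance is intractable (\(\rho_t \le 4\)) it falls back to an unadapted momentum update, and otherwise it scales the Adam step by the published rectification term below; the exact form used here may differ:
\[\rho_\infty = \frac{2}{1 - \beta_2} - 1, \qquad \rho_t = \rho_\infty - \frac{2 t \beta_2^t}{1 - \beta_2^t}, \qquad r_t = \sqrt{\frac{(\rho_t - 4)(\rho_t - 2)\,\rho_\infty}{(\rho_\infty - 4)(\rho_\infty - 2)\,\rho_t}}\]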
class qualia2.nn.optim.RMSProp(parameters, lr=0.001, alpha=0.99, eps=1e-08, weight_decay=0)[source]¶
Bases: qualia2.nn.optim.Optimizer
Implements the RMSProp algorithm.
Args:
- parameters (iterable): iterable of parameters to optimize
- lr (float): learning rate. Default: 1e-03
- alpha (float): smoothing constant. Default: 0.99
- eps (float): for numerical stability. Default: 1e-08
- weight_decay (float): weight decay (L2 penalty). Default: 0
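For reference, the canonical RMSProp update (Tieleman & Hinton, 2012), with \(\alpha\) as the smoothing constant; the exact form used here may differ:
\[E[g^2]_t = \alpha E[g^2]_{t-1} + (1 - \alpha)\, g_t^2, \qquad \theta_t = \theta_{t-1} - \frac{\text{lr}}{\sqrt{E[g^2]_t} + \epsilon}\, g_t\]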
class qualia2.nn.optim.SGD(parameters, lr=0.001, momentum=0, weight_decay=0)[source]¶
Bases: qualia2.nn.optim.Optimizer
Implements stochastic gradient descent (optionally with momentum).
Args:
- parameters (iterable): iterable of parameters to optimize
- lr (float): learning rate
- momentum (float): momentum factor. Default: 0
- weight_decay (float): weight decay (L2 penalty). Default: 0
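For reference, one common formulation of SGD with momentum, where \(\mu\) is the momentum factor; implementations vary in where the learning rate enters, so the exact form used here may differ:
\[v_t = \mu v_{t-1} + g_t, \qquad \theta_t = \theta_{t-1} - \text{lr} \cdot v_t\]
With \(\mu = 0\) this reduces to plain SGD, \(\theta_t = \theta_{t-1} - \text{lr} \cdot g_t\).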