symjax.nn.initializers

This module provides the basic weight initializers used in deep learning. Each initializer takes as input the shape of the desired weight tensor (vector, matrix, …) and returns a numpy array.
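For example (a minimal sketch of the calling convention described above):

>>> from symjax.nn import initializers
>>> W = initializers.constant((2, 3), value=1.0)
>>> W.shape
(2, 3)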

constant(shape, value) Fill the weights with the constant value.
uniform(shape[, scale]) Sample uniform weights U(-scale, scale).
normal(shape[, scale]) Sample Gaussian weights N(0, scale).
orthogonal(shape[, scale]) Orthogonal initialization, adapted from Lasagne.
glorot_uniform(shape) Reference: Glorot & Bengio, AISTATS 2010
glorot_normal(shape) Reference: Glorot & Bengio, AISTATS 2010
he_uniform(shape) Reference: He et al., http://arxiv.org/abs/1502.01852
he_normal(shape) Reference: He et al., http://arxiv.org/abs/1502.01852
lecun_uniform(shape[, name]) Reference: LeCun 98, Efficient Backprop http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf
get_fans(shape) Utility returning the fan_in and fan_out of a tensor shape.
variance_scaling(shape, mode[, gain, …]) Variance Scaling initialization.

Detailed Descriptions

symjax.nn.initializers.constant(shape, value)[source]

Fill a tensor of the given shape with the constant value.
symjax.nn.initializers.uniform(shape, scale=0.05)[source]

Sample uniform weights U(-scale, scale).

Parameters:
  • shape (tuple) – shape of the weight tensor to generate.
  • scale (float (default=0.05)) – half-width of the sampling interval U(-scale, scale).
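A quick sanity check of the sampling range (a sketch based on the U(-scale, scale) behavior documented above):

>>> import numpy as np
>>> from symjax.nn import initializers
>>> W = initializers.uniform((100, 100), scale=0.1)
>>> bool(np.all(np.abs(W) <= 0.1))
True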
symjax.nn.initializers.normal(shape, scale=0.05)[source]

Sample Gaussian weights N(0, scale).

Parameters:
  • shape (tuple) – shape of the weight tensor to generate.
  • scale (float (default=0.05)) – scale of the Gaussian N(0, scale).
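A similar sketch for the Gaussian case; that scale is the standard deviation of the distribution is an assumption here:

>>> import numpy as np
>>> from symjax.nn import initializers
>>> W = initializers.normal((1000, 1000), scale=0.05)
>>> # empirical std should be close to scale if scale is the std of N(0, scale)
>>> bool(np.isclose(W.std(), 0.05, rtol=0.05))
True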
symjax.nn.initializers.orthogonal(shape, scale=1)[source]

Orthogonal initialization, adapted from Lasagne. Reference: Saxe et al., http://arxiv.org/abs/1312.6120
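A sketch checking that a square sample is (approximately) orthogonal, assuming scale=1 simply returns an orthonormal basis as in the Saxe et al. scheme:

>>> import numpy as np
>>> from symjax.nn import initializers
>>> W = initializers.orthogonal((4, 4), scale=1)
>>> bool(np.allclose(W @ W.T, np.eye(4), atol=1e-5))
True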

symjax.nn.initializers.glorot_uniform(shape)[source]

Reference: Glorot & Bengio, AISTATS 2010

symjax.nn.initializers.glorot_normal(shape)[source]

Reference: Glorot & Bengio, AISTATS 2010
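In the standard Glorot formulation the weight variance is 2 / (fan_in + fan_out); the uniform variant samples from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)). A sketch assuming this formulation:

>>> import numpy as np
>>> from symjax.nn import initializers
>>> fan_in, fan_out = 500, 300
>>> W = initializers.glorot_uniform((fan_in, fan_out))
>>> # empirical variance should approach 2 / (fan_in + fan_out)
>>> bool(np.isclose(W.var(), 2 / (fan_in + fan_out), rtol=0.1))
True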

symjax.nn.initializers.he_uniform(shape)[source]

Reference: He et al., http://arxiv.org/abs/1502.01852

symjax.nn.initializers.he_normal(shape)[source]

Reference: He et al., http://arxiv.org/abs/1502.01852
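The He initializers set the weight variance to 2 / fan_in, which compensates for the variance halving of ReLU units; a sketch assuming this standard formulation:

>>> import numpy as np
>>> from symjax.nn import initializers
>>> W = initializers.he_normal((500, 300))
>>> # empirical variance should approach 2 / fan_in
>>> bool(np.isclose(W.var(), 2 / 500, rtol=0.1))
True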

symjax.nn.initializers.lecun_uniform(shape, name=None)[source]

Reference: LeCun 98, Efficient Backprop http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf
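LeCun initialization sets the weight variance to 1 / fan_in so that each unit receives inputs of roughly unit variance; a sketch assuming this standard formulation:

>>> import numpy as np
>>> from symjax.nn import initializers
>>> W = initializers.lecun_uniform((500, 300))
>>> # empirical variance should approach 1 / fan_in
>>> bool(np.isclose(W.var(), 1 / 500, rtol=0.1))
True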

symjax.nn.initializers.get_fans(shape)[source]

Utility returning the fan_in and fan_out of a tensor shape.

The concepts of fan_in and fan_out are used to build weight initializers. They are, respectively, the number of units the current weight tensor takes as input (from the previous layer) and the number of outputs it produces. From these two numbers, the variance of the random weights can be chosen so that the layer's feature maps neither vanish nor explode in amplitude.

Parameters:
  • shape (tuple) – the shape of the tensor. For a densely connected layer this is (previous layer width, current layer width); for a 2D convolution it is (n_filters, input_channels) + spatial shape.
Returns:
  • fan_in (int)
  • fan_out (int)
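For example, following the shape conventions above (the convolutional result, where the fans presumably include the receptive field size, is an assumption):

>>> from symjax.nn import initializers
>>> initializers.get_fans((128, 64))   # dense: (previous width, current width)
(128, 64)
>>> # 2D convolution: (n_filters, input_channels) + spatial shape;
>>> # fans are presumably scaled by the receptive field size (here 3 * 3)
>>> initializers.get_fans((32, 16, 3, 3))
(144, 288)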
symjax.nn.initializers.variance_scaling(shape, mode, gain=1, distribution=normal)[source]

Variance Scaling initialization. A generalization of initializers such as Glorot and He: the weight variance is derived from the fan statistics selected by mode, multiplied by gain, and samples are drawn from the given distribution.
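A hypothetical usage sketch; the accepted mode values are an assumption here (a "fan_in" mode reproducing the He scheme when gain=2 is typical of variance-scaling initializers):

>>> from symjax.nn import initializers
>>> # "fan_in" is an assumed mode name; check the source for the exact options
>>> W = initializers.variance_scaling((500, 300), mode="fan_in", gain=2,
...                                   distribution=initializers.normal)
>>> W.shape
(500, 300)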