symjax.nn.layers

Dense

Dense(input, units[, W, b, trainable_W, …]) Fully-connected/Dense layer

Renormalization

BatchNormalization(input, axis, deterministic) batch-normalization layer

Data Augmentation

RandomCrop(input, crop_shape, deterministic) random crop selection from the input
RandomFlip(input, p, axis, deterministic[, seed]) random axis flip on the input
Dropout(input, p, deterministic[, seed]) binary mask onto the input

Convolution

Conv1D(input, n_filters, filter_length[, W, …]) 1-D (time) convolution
Conv2D(input, n_filters, filter_shape[, …]) 2-D (spatial) convolution

Pooling

Pool1D(input, pool_shape[, pool_type, strides]) 1-D (time) pooling
Pool2D(input, pool_shape[, pool_type, strides]) 2-D (spatial) pooling

Recurrent

RNN(sequence, init_h, units[, W, H, b, …]) vanilla recurrent layer
GRU(sequence, init_h, units[, Wh, Uh, bh, …]) gated recurrent unit (GRU) layer
LSTM(sequence, init_h, units[, Wf, Uf, bf, …]) long short-term memory (LSTM) layer

Detailed Description

class symjax.nn.layers.BatchNormalization(input, axis, deterministic, const=0.001, beta_1=0.99, beta_2=0.99, W=<function ones>, b=<function zeros>, trainable_W=True, trainable_b=True)[source]

batch-normalization layer

Parameters:
  • input_or_shape (shape or Tensor) – the layer input tensor or shape
  • axis (list or tuple of ints) – the axes to normalize over. If using BN on a dense layer, axis should be [0] to normalize over the samples. If the layer is a convolutional layer with data format NCHW, axis should be [0, 2, 3] to normalize over the samples and spatial dimensions (commonly done)
  • deterministic (bool or Tensor) – controls the state of the layer
  • const (float32 (optional)) – the constant used in the standard deviation renormalization
  • beta_1 (float32 (optional)) – the parameter for the exponential moving average of the mean
  • beta_2 (float32 (optional)) – the parameter for the exponential moving average of the std
Returns:

output

Return type:

the layer output with attributes given by the layer options
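
A minimal usage sketch (the symjax.tensor.Placeholder input and the NCHW layout are illustrative assumptions, not part of this layer's documentation):

    import symjax.tensor as T
    from symjax.nn import layers

    # batch of NCHW images; normalize over samples and spatial dimensions
    images = T.Placeholder((32, 3, 64, 64), 'float32')
    bn = layers.BatchNormalization(images, axis=[0, 2, 3], deterministic=False)
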
class symjax.nn.layers.RandomCrop(input, crop_shape, deterministic, padding=0, seed=None)[source]

random crop selection from the input

Random layer that selects a window of the input based on the given parameters, with the option to first apply padding. This layer is commonly used as a data-augmentation technique and is positioned at the beginning of the network topology. Note that all the involved operations are GPU compatible and allow for backpropagation.

Parameters:
  • input_or_shape (shape or Tensor) – the input of the layer or the shape of the layer input
  • crop_shape (shape) – the shape of the cropped part of the input. It must have one entry fewer than the input shape, since the first (batch) dimension is not cropped
  • deterministic (bool or Tensor) – if the layer is in deterministic mode or not
  • padding (shape) – the amount of padding to apply on each dimension (except the first one); each dimension should be given a pair specifying the before and after padding amounts
  • seed (seed (optional)) – to control reproducibility
Returns:

output

Return type:

the output tensor which contains the internal variables
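
A minimal sketch of a random crop placed at the start of a network (the input shape and padding values are illustrative assumptions):

    import symjax.tensor as T
    from symjax.nn import layers

    # pad each spatial dimension of NCHW images by 4 on both sides,
    # then select random 3x32x32 windows
    images = T.Placeholder((64, 3, 32, 32), 'float32')
    crop = layers.RandomCrop(
        images,
        crop_shape=(3, 32, 32),
        deterministic=False,
        padding=[(0, 0), (4, 4), (4, 4)],
        seed=0,
    )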

class symjax.nn.layers.RandomFlip(input, p, axis, deterministic, seed=None)[source]

random axis flip on the input

Random layer that randomly flips the given axis of the input. Note that all the involved operations are GPU compatible and allow for backpropagation.

Parameters:
  • input_or_shape (shape or Tensor) – the input of the layer or the shape of the layer input
  • p (float (0<=p<=1)) – the probability to flip the given axis
  • axis (int) – the axis to randomly flip
  • deterministic (bool or Tensor) – if the layer is in deterministic mode or not
  • seed (seed (optional)) – to control reproducibility
Returns:

output

Return type:

the output tensor which contains the internal variables
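
A minimal sketch (flipping the last axis of an NCHW batch, i.e. horizontal flips; the layout is an assumption):

    import symjax.tensor as T
    from symjax.nn import layers

    # flip the width axis with probability 0.5
    images = T.Placeholder((64, 3, 32, 32), 'float32')
    flip = layers.RandomFlip(images, p=0.5, axis=3, deterministic=False, seed=0)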

class symjax.nn.layers.Dropout(input, p, deterministic, seed=None)[source]

binary mask onto the input

Parameters:
  • input_or_shape (shape or Tensor) – the layer input or shape
  • p (float (0<=p<=1)) – the probability to drop the value
  • deterministic (bool or Tensor) – the state of the layer
  • seed (seed) – the RNG seed
Returns:

output

Return type:

the layer output
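
A minimal sketch (the placeholder shape is illustrative; at test time the layer would be built with deterministic=True, or with a Tensor controlling that state):

    import symjax.tensor as T
    from symjax.nn import layers

    # drop each activation with probability 0.2 during training
    h = T.Placeholder((64, 128), 'float32')
    h_dropped = layers.Dropout(h, p=0.2, deterministic=False, seed=0)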

class symjax.nn.layers.Conv1D(input, n_filters, filter_length, W=<function glorot_uniform>, b=<built-in function zeros>, stride=1, padding='VALID', trainable_W=True, trainable_b=True, inplace_W=False, inplace_b=False, W_preprocessor=None, b_preprocessor=None, input_dilations=None, filter_dilations=None)[source]

1-D (time) convolution

convolve the input with a bank of 1-D filters and apply a bias shift

Parameters:
  • input (shape or Tensor) – the layer input tensor or shape
  • n_filters (int) – the number of filters (output channels)
  • filter_length (int) – the length (in time steps) of each filter
  • W (default initializers.glorot_uniform) – the filter weights initialization
  • b (default numpy.zeros) – the bias initialization
  • stride (default 1) – the convolution stride
  • padding (default "VALID") – the padding policy
  • trainable_W (default True) – whether W is trainable
  • trainable_b (default True) – whether b is trainable
  • inplace_W (default False)
  • inplace_b (default False)
  • W_preprocessor (default None)
  • b_preprocessor (default None)
  • input_dilations (default None) – dilation factors applied to the input
  • filter_dilations (default None) – dilation factors applied to the filters
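
A minimal sketch of a 1-D convolution (the (batch, channels, time) input layout is an assumption):

    import symjax.tensor as T
    from symjax.nn import layers

    # batch of 8 signals with 16 channels and 100 time steps
    signal = T.Placeholder((8, 16, 100), 'float32')
    conv = layers.Conv1D(signal, n_filters=32, filter_length=5)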

class symjax.nn.layers.Conv2D(input, n_filters, filter_shape, padding='VALID', strides=1, W=<function glorot_uniform>, b=<built-in function zeros>, trainable_W=True, trainable_b=True, inplace_W=False, inplace_b=False, input_dilations=None, filter_dilations=None, W_preprocessor=None, b_preprocessor=None)[source]

2-D (spatial) convolution
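
A similar sketch for the 2-D case (the NCHW input layout is an assumption):

    import symjax.tensor as T
    from symjax.nn import layers

    images = T.Placeholder((8, 3, 32, 32), 'float32')
    conv = layers.Conv2D(images, n_filters=64, filter_shape=(3, 3))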

class symjax.nn.layers.Pool1D(input, pool_shape, pool_type='MAX', strides=None)[source]

1-D (time) pooling

class symjax.nn.layers.Pool2D(input, pool_shape, pool_type='MAX', strides=None)[source]

2-D (spatial) pooling
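
A minimal pooling sketch (the NCHW layout and the per-spatial-axis pool_shape convention are assumptions):

    import symjax.tensor as T
    from symjax.nn import layers

    # 2x2 max pooling over the spatial dimensions
    images = T.Placeholder((8, 3, 32, 32), 'float32')
    pooled = layers.Pool2D(images, pool_shape=(2, 2), pool_type='MAX')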

class symjax.nn.layers.RNN(sequence, init_h, units, W=<function glorot_uniform>, H=<function orthogonal>, b=<function zeros>, trainable_W=True, trainable_H=True, trainable_b=True, activation=<function sigmoid>, only_last=False)[source]

vanilla recurrent layer

class symjax.nn.layers.GRU(sequence, init_h, units, Wh=<function glorot_uniform>, Uh=<function orthogonal>, bh=<function zeros>, Wz=<function glorot_uniform>, Uz=<function orthogonal>, bz=<function zeros>, Wr=<function glorot_uniform>, Ur=<function orthogonal>, br=<function zeros>, trainable_Wh=True, trainable_Uh=True, trainable_bh=True, trainable_Wz=True, trainable_Uz=True, trainable_bz=True, trainable_Wr=True, trainable_Ur=True, trainable_br=True, activation=<function sigmoid>, phi=<function _one_to_one_unop.<locals>.<lambda>>, only_last=False, gate='minimal')[source]

gated recurrent unit (GRU) layer

class symjax.nn.layers.LSTM(sequence, init_h, units, Wf=<function glorot_uniform>, Uf=<function orthogonal>, bf=<function zeros>, Wi=<function glorot_uniform>, Ui=<function orthogonal>, bi=<function zeros>, Wo=<function glorot_uniform>, Uo=<function orthogonal>, bo=<function zeros>, Wc=<function glorot_uniform>, Uc=<function orthogonal>, bc=<function zeros>, trainable_Wf=True, trainable_Uf=True, trainable_bf=True, trainable_Wi=True, trainable_Ui=True, trainable_bi=True, trainable_Wo=True, trainable_Uo=True, trainable_bo=True, trainable_Wc=True, trainable_Uc=True, trainable_bc=True, activation_g=<function sigmoid>, activation_c=<function _one_to_one_unop.<locals>.<lambda>>, activation_h=<function _one_to_one_unop.<locals>.<lambda>>, only_last=False, gate='minimal')[source]

long short-term memory (LSTM) layer
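
A minimal recurrent sketch (the (batch, time, features) sequence layout and the zero initial hidden state are assumptions):

    import symjax.tensor as T
    from symjax.nn import layers

    # batch of 16 sequences, 20 time steps, 32 features, 64 hidden units
    sequence = T.Placeholder((16, 20, 32), 'float32')
    init_h = T.zeros((16, 64))
    rnn = layers.RNN(sequence, init_h, units=64, only_last=True)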