symjax.nn.layers.LSTM

class symjax.nn.layers.LSTM(sequence, init_h, units, Wf=<function glorot_uniform>, Uf=<function orthogonal>, bf=<function zeros>, Wi=<function glorot_uniform>, Ui=<function orthogonal>, bi=<function zeros>, Wo=<function glorot_uniform>, Uo=<function orthogonal>, bo=<function zeros>, Wc=<function glorot_uniform>, Uc=<function orthogonal>, bc=<function zeros>, trainable_Wf=True, trainable_Uf=True, trainable_bf=True, trainable_Wi=True, trainable_Ui=True, trainable_bi=True, trainable_Wo=True, trainable_Uo=True, trainable_bo=True, trainable_Wc=True, trainable_Uc=True, trainable_bc=True, activation_g=<function sigmoid>, activation_c=<function _one_to_one_unop.<locals>.<lambda>>, activation_h=<function _one_to_one_unop.<locals>.<lambda>>, only_last=False, gate='minimal')

    __init__(sequence, init_h, units[, Wf, Uf, …])
        Initialize self, with the same signature as the class. See help(type(self)) for an accurate signature.
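The parameter names map onto the standard LSTM recurrence: Wf/Uf/bf parameterize the forget gate, Wi/Ui/bi the input gate, Wo/Uo/bo the output gate, and Wc/Uc/bc the cell candidate, with activation_g (sigmoid by default) applied to the gates. The scalar sketch below is the textbook single-step LSTM for illustration only; it is not symjax code, and it assumes tanh for the cell and hidden activations (the actual defaults are the activation_c and activation_h arguments above).

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, p):
    """One textbook LSTM step on scalars, mirroring the W*/U*/b* naming.

    p is a dict of the twelve scalar parameters Wf..bc; x is the input,
    h the previous hidden state, c the previous cell state.
    """
    f = sigmoid(p["Wf"] * x + p["Uf"] * h + p["bf"])          # forget gate
    i = sigmoid(p["Wi"] * x + p["Ui"] * h + p["bi"])          # input gate
    o = sigmoid(p["Wo"] * x + p["Uo"] * h + p["bo"])          # output gate
    c_tilde = math.tanh(p["Wc"] * x + p["Uc"] * h + p["bc"])  # cell candidate
    c_new = f * c + i * c_tilde                               # cell update
    h_new = o * math.tanh(c_new)                              # hidden update
    return h_new, c_new
```

In the layer itself these scalars are matrices (shaped by units) and the step is scanned over sequence; the sketch only shows the gating arithmetic.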
Methods

__init__(sequence, init_h, units[, Wf, Uf, …])
    Initialize self.
add_update
add_variable
argmax([axis, out])
    Returns the indices of the maximum values along an axis.
argmin([axis, out])
    Returns the indices of the minimum values along an axis.
astype(new_dtype)
    Elementwise cast.
cast(new_dtype)
    Elementwise cast.
clone(givens)
conj()
    Return the complex conjugate, element-wise.
conjugate()
    Return the complex conjugate, element-wise.
create_tensor
create_variable
dot(b, *[, precision])
    Dot product of two arrays.
expand_dims(axis)
    Expand the shape of an array.
flatten()
    Reshape the input into a vector.
forward()
gate(carry, x, Wf, Uf, bf, Wi, Ui, bi, Wo, …)
imag()
    Return the imaginary part of the complex argument.
init_input
matmul(b, *[, precision])
    Matrix product of two arrays.
max([axis, out, keepdims, initial, where])
    Return the maximum of an array or maximum along an axis.
mean([axis, dtype, out, keepdims])
    Compute the arithmetic mean along the specified axis.
min([axis, out, keepdims, initial, where])
    Return the minimum of an array or minimum along an axis.
prod([axis, dtype, out, keepdims, initial, …])
    Return the product of array elements over a given axis.
real()
    Return the real part of the complex argument.
repeat(repeats[, axis, total_repeat_length])
    Repeat elements of an array.
reshape(newshape[, order])
    Gives a new shape to an array without changing its data.
round([decimals, out])
    Round an array to the given number of decimals.
squeeze(axis=None)
    Remove single-dimensional entries from the shape of an array.
std([axis, dtype, out, ddof, keepdims])
    Compute the standard deviation along the specified axis.
sum([axis, dtype, out, keepdims, initial, where])
    Sum of array elements over a given axis.
transpose([axes])
    Reverse or permute the axes of an array; returns the modified array.
var([axis, dtype, out, ddof, keepdims])
    Compute the variance along the specified axis.
variables([trainable])

Attributes
dtype
fn_name
name
ndim
scope
shape
updates
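The only_last flag controls whether the layer yields the hidden state at every time step or only the final one. A minimal sketch of that semantics, using a toy step function in place of the LSTM cell (this is plain Python for illustration, not symjax code):

```python
def scan(step, sequence, init_h, only_last=False):
    """Apply a recurrent step over a sequence, mirroring the only_last flag."""
    h = init_h
    outputs = []
    for x in sequence:
        h = step(h, x)       # one recurrent update
        outputs.append(h)
    # only_last=True keeps just the final state, as in the LSTM layer
    return outputs[-1] if only_last else outputs

# toy step: exponential moving average standing in for the LSTM cell
step = lambda h, x: 0.5 * h + 0.5 * x
scan(step, [1, 1, 1], 0.0, only_last=True)   # → 0.875
scan(step, [1, 1], 0.0)                      # → [0.5, 0.75]
```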