symjax.tensor.linalg

cond(x[, p]) Compute the condition number of a matrix.
det(a) Compute the determinant of an array.
eig(a) Compute the eigenvalues and right eigenvectors of a square array.
eigh(a[, b, lower, eigvals_only, …]) Solve a standard or generalized eigenvalue problem for a complex Hermitian or real symmetric matrix.
eigvals(a) Compute the eigenvalues of a general matrix.
eigvalsh(a[, UPLO]) Compute the eigenvalues of a complex Hermitian or real symmetric matrix.
inv(a[, overwrite_a, check_finite]) Compute the inverse of a matrix.
lstsq(a, b[, rcond, numpy_resid]) Return the least-squares solution to a linear matrix equation.
matrix_power(a, n) Raise a square matrix to the (integer) power n.
matrix_rank(M[, tol]) Return matrix rank of array using SVD method.
multi_dot(arrays, *[, precision]) Compute the dot product of two or more arrays in a single function call, while automatically selecting the fastest evaluation order.
norm(x[, ord, axis, keepdims]) Tensor/Matrix/Vector norm.
pinv(a[, rcond]) Compute the (Moore-Penrose) pseudo-inverse of a matrix.
qr(a[, mode]) Compute the qr factorization of a matrix.
slogdet(a) Compute the sign and (natural) logarithm of the determinant of an array.
solve(a, b) Solve a linear matrix equation, or system of linear scalar equations.
svd(a[, full_matrices, compute_uv]) Singular Value Decomposition.
tensorinv(a[, ind]) Compute the ‘inverse’ of an N-dimensional array.
tensorsolve(a, b[, axes]) Solve the tensor equation a x = b for x.
cholesky(a[, lower, overwrite_a, check_finite]) Compute the Cholesky decomposition of a matrix.
block_diag(*arrs) Create a block diagonal matrix from provided arrays.
cho_solve(c_and_lower, b[, overwrite_b, …]) Solve the linear equations A x = b, given the Cholesky factorization of A.
expm(A, *[, upper_triangular, max_squarings]) Compute the matrix exponential using Pade approximation.
expm_frechet(A, E, *[, method, compute_expm]) Frechet derivative of the matrix exponential of A in the direction E.
lu(a[, permute_l, overwrite_a, check_finite]) Compute pivoted LU decomposition of a matrix.
lu_factor(a[, overwrite_a, check_finite]) Compute pivoted LU decomposition of a matrix.
lu_solve(lu_and_piv, b[, trans, …]) Solve an equation system, a x = b, given the LU factorization of a
solve_triangular(a, b[, trans, lower, …]) Solve the equation a x = b for x, assuming a is a triangular matrix.
tril(m[, k]) Make a copy of a matrix with elements above the kth diagonal zeroed.
triu(m[, k]) Make a copy of a matrix with elements below the kth diagonal zeroed.
singular_vectors_power_iteration(weight[, …])
eigenvector_power_iteration(weight[, axis, …])
gram_schmidt(V[, normalize]) Gram-Schmidt orthogonalization.
modified_gram_schmidt(V) Modified Gram-Schmidt orthogonalization.
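
As a rough illustration of the orthogonalization these helpers perform (a NumPy sketch, assuming column-wise processing; not necessarily symjax's exact implementation):

def modified_gram_schmidt(V):
    # Sketch only: orthonormalize the columns of V, subtracting each new
    # direction from all remaining columns as soon as it is formed
    # (more stable in floating point than classical Gram-Schmidt).
    V = np.array(V, dtype=float)
    for i in range(V.shape[1]):
        V[:, i] /= np.linalg.norm(V[:, i])
        for j in range(i + 1, V.shape[1]):
            V[:, j] -= (V[:, i] @ V[:, j]) * V[:, i]
    return V

>>> V = np.random.randn(5, 3)
>>> Q = modified_gram_schmidt(V)
>>> np.allclose(Q.T @ Q, np.eye(3))
True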

Detailed Description

symjax.tensor.linalg.cond(x, p=None)[source]

Compute the condition number of a matrix.

LAX-backend implementation of cond(). Original docstring below.

This function is capable of returning the condition number using one of seven different norms, depending on the value of p (see Parameters below).
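
Since symjax ops are lazy, a typical call first builds a graph and then compiles it. The following is a minimal sketch assuming the usual symjax Placeholder/function workflow (the workflow itself is an assumption here, not documented on this page):

import numpy as np
import symjax
import symjax.tensor as T

x = T.Placeholder((3, 3), 'float32')   # symbolic input matrix (assumed API)
c = T.linalg.cond(x)                   # lazy condition-number node
f = symjax.function(x, outputs=c)      # compile the graph to a callable
print(f(np.eye(3, dtype='float32')))   # ~1.0 for the identity matrix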

Parameters:
  • x ((…, M, N) array_like) – The matrix whose condition number is sought.
  • p ({None, 1, -1, 2, -2, inf, -inf, 'fro'}, optional) – Order of the norm used in the condition number computation. inf means numpy’s inf object, and the Frobenius norm is the root-of-sum-of-squares norm. The default is None, which uses the 2-norm.
Returns:

c – The condition number of the matrix. May be infinite.

Return type:

{float, inf}

See also

numpy.linalg.norm()

Notes

The condition number of x is defined as the norm of x times the norm of the inverse of x [1]; the norm can be the usual L2-norm (root-of-sum-of-squares) or one of a number of other matrix norms.

References

[1]G. Strang, Linear Algebra and Its Applications, Orlando, FL, Academic Press, Inc., 1980, pg. 285.

Examples

>>> from numpy import linalg as LA
>>> a = np.array([[1, 0, -1], [0, 1, 0], [1, 0, 1]])
>>> a
array([[ 1,  0, -1],
       [ 0,  1,  0],
       [ 1,  0,  1]])
>>> LA.cond(a)
1.4142135623730951
>>> LA.cond(a, 'fro')
3.1622776601683795
>>> LA.cond(a, np.inf)
2.0
>>> LA.cond(a, -np.inf)
1.0
>>> LA.cond(a, 1)
2.0
>>> LA.cond(a, -1)
1.0
>>> LA.cond(a, 2)
1.4142135623730951
>>> LA.cond(a, -2)
0.70710678118654746 # may vary
>>> min(LA.svd(a, compute_uv=False))*min(LA.svd(LA.inv(a), compute_uv=False))
0.70710678118654746 # may vary
symjax.tensor.linalg.eig(a)[source]

Compute the eigenvalues and right eigenvectors of a square array.

LAX-backend implementation of eig(). Original docstring below.

Parameters: a ((…, M, M) array) – Matrices for which the eigenvalues and right eigenvectors will be computed.
Returns:
  • w ((…, M) array) – The eigenvalues, each repeated according to its multiplicity. The eigenvalues are not necessarily ordered. The resulting array will be of complex type, unless the imaginary part is zero in which case it will be cast to a real type. When a is real the resulting eigenvalues will be real (0 imaginary part) or occur in conjugate pairs
  • v ((…, M, M) array) – The normalized (unit “length”) eigenvectors, such that the column v[:,i] is the eigenvector corresponding to the eigenvalue w[i].
Raises: LinAlgError – If the eigenvalue computation does not converge.

See also

eigvals()
eigenvalues of a non-symmetric array.
eigh()
eigenvalues and eigenvectors of a real symmetric or complex Hermitian (conjugate symmetric) array.
eigvalsh()
eigenvalues of a real symmetric or complex Hermitian (conjugate symmetric) array.
scipy.linalg.eig()
Similar function in SciPy that also solves the generalized eigenvalue problem.
scipy.linalg.schur()
Best choice for unitary and other non-Hermitian normal matrices.

Notes

New in version 1.8.0.

Broadcasting rules apply, see the numpy.linalg documentation for details.

This is implemented using the _geev LAPACK routines which compute the eigenvalues and eigenvectors of general square arrays.

The number w is an eigenvalue of a if there exists a vector v such that a @ v = w * v. Thus, the arrays a, w, and v satisfy the equations a @ v[:,i] = w[i] * v[:,i] for \(i \in \{0,...,M-1\}\).

The array v of eigenvectors may not be of maximum rank, that is, some of the columns may be linearly dependent, although round-off error may obscure that fact. If the eigenvalues are all different, then theoretically the eigenvectors are linearly independent and a can be diagonalized by a similarity transformation using v, i.e, inv(v) @ a @ v is diagonal.

For non-Hermitian normal matrices the SciPy function scipy.linalg.schur is preferred because the matrix v is guaranteed to be unitary, which is not the case when using eig. The Schur factorization produces an upper triangular matrix rather than a diagonal matrix, but for normal matrices only the diagonal of the upper triangular matrix is needed, the rest is roundoff error.

Finally, it is emphasized that v consists of the right (as in right-hand side) eigenvectors of a. A vector y satisfying y.T @ a = z * y.T for some number z is called a left eigenvector of a, and, in general, the left and right eigenvectors of a matrix are not necessarily the (perhaps conjugate) transposes of each other.

References

G. Strang, Linear Algebra and Its Applications, 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, Various pp.

Examples

>>> from numpy import linalg as LA

(Almost) trivial example with real e-values and e-vectors.

>>> w, v = LA.eig(np.diag((1, 2, 3)))
>>> w; v
array([1., 2., 3.])
array([[1., 0., 0.],
       [0., 1., 0.],
       [0., 0., 1.]])

Real matrix possessing complex e-values and e-vectors; note that the e-values are complex conjugates of each other.

>>> w, v = LA.eig(np.array([[1, -1], [1, 1]]))
>>> w; v
array([1.+1.j, 1.-1.j])
array([[0.70710678+0.j        , 0.70710678-0.j        ],
       [0.        -0.70710678j, 0.        +0.70710678j]])

Complex-valued matrix with real e-values (but complex-valued e-vectors); note that a.conj().T == a, i.e., a is Hermitian.

>>> a = np.array([[1, 1j], [-1j, 1]])
>>> w, v = LA.eig(a)
>>> w; v
array([2.+0.j, 0.+0.j])
array([[ 0.        +0.70710678j,  0.70710678+0.j        ], # may vary
       [ 0.70710678+0.j        , -0.        +0.70710678j]])

Be careful about round-off error!

>>> a = np.array([[1 + 1e-9, 0], [0, 1 - 1e-9]])
>>> # Theor. e-values are 1 +/- 1e-9
>>> w, v = LA.eig(a)
>>> w; v
array([1., 1.])
array([[1., 0.],
       [0., 1.]])
symjax.tensor.linalg.eigh(a, b=None, lower=True, eigvals_only=False, overwrite_a=False, overwrite_b=False, turbo=True, eigvals=None, type=1, check_finite=True)[source]

Solve a standard or generalized eigenvalue problem for a complex Hermitian or real symmetric matrix.

LAX-backend implementation of eigh(). Original docstring below.

Find eigenvalues array w and optionally eigenvectors array v of array a, where b is positive definite such that for every eigenvalue λ (i-th entry of w) and its eigenvector vi (i-th column of v) satisfies:

              a @ vi = λ * b @ vi
vi.conj().T @ a @ vi = λ
vi.conj().T @ b @ vi = 1

In the standard problem, b is assumed to be the identity matrix.

Parameters:
  • a ((M, M) array_like) – A complex Hermitian or real symmetric matrix whose eigenvalues and eigenvectors will be computed.
  • b ((M, M) array_like, optional) – A complex Hermitian or real symmetric positive definite matrix. If omitted, the identity matrix is assumed.
  • lower (bool, optional) – Whether the pertinent array data is taken from the lower or upper triangle of a and, if applicable, b. (Default: lower)
  • eigvals_only (bool, optional) – Whether to calculate only eigenvalues and no eigenvectors. (Default: both are calculated)
  • type (int, optional) – For the generalized problems, this keyword specifies the problem type to be solved for w and v (only takes 1, 2, 3 as possible inputs): 1 => a @ v = w @ b @ v; 2 => a @ b @ v = w @ v; 3 => b @ a @ v = w @ v. This keyword is ignored for standard problems.
  • overwrite_a (bool, optional) – Whether to overwrite data in a (may improve performance). Default is False.
  • overwrite_b (bool, optional) – Whether to overwrite data in b (may improve performance). Default is False.
  • check_finite (bool, optional) – Whether to check that the input matrices contain only finite numbers. Disabling may give a performance gain, but may result in problems (crashes, non-termination) if the inputs do contain infinities or NaNs.
  • turbo (bool, optional) – Deprecated since v1.5.0, use ``driver=gvd`` keyword instead. Use divide and conquer algorithm (faster but expensive in memory, only for generalized eigenvalue problem and if full set of eigenvalues are requested.). Has no significant effect if eigenvectors are not requested.
  • eigvals (tuple (lo, hi), optional) – Deprecated since v1.5.0, use ``subset_by_index`` keyword instead. Indexes of the smallest and largest (in ascending order) eigenvalues and corresponding eigenvectors to be returned: 0 <= lo <= hi <= M-1. If omitted, all eigenvalues and eigenvectors are returned.
Returns:

  • w ((N,) ndarray) – The N (1<=N<=M) selected eigenvalues, in ascending order, each repeated according to its multiplicity.
  • v ((M, N) ndarray) – (if eigvals_only == False)

Raises:

LinAlgError – If eigenvalue computation does not converge, an error occurred, or b matrix is not definite positive. Note that if input matrices are not symmetric or Hermitian, no error will be reported but results will be wrong.

See also

eigvalsh()
eigenvalues of symmetric or Hermitian arrays
eig()
eigenvalues and right eigenvectors for non-symmetric arrays
eigh_tridiagonal()
eigenvalues and right eigenvectors for symmetric/Hermitian tridiagonal matrices

Notes

This function does not check the input array for being Hermitian/symmetric in order to allow for representing arrays with only their upper/lower triangular parts. Also, note that even though not taken into account, the finiteness check applies to the whole array and is unaffected by the lower keyword.

This function uses LAPACK drivers for computations in all possible keyword combinations, prefixed with sy if arrays are real and he if complex, e.g., a float array with the “evr” driver is solved via “syevr”, a complex array with the “gvx” driver is solved via “hegvx”, etc.

As a brief summary, the slowest and the most robust driver is the classical <sy/he>ev which uses symmetric QR. <sy/he>evr is seen as the optimal choice for the most general cases. However, there are certain occasions that <sy/he>evd computes faster at the expense of more memory usage. <sy/he>evx, while still being faster than <sy/he>ev, often performs worse than the rest except when very few eigenvalues are requested for large arrays, though there is still no performance guarantee.

For the generalized problem, normalization with respect to the given type argument:

type 1 and 3 :      v.conj().T @ a @ v = w
type 2       : inv(v).conj().T @ a @ inv(v) = w

type 1 or 2  :      v.conj().T @ b @ v  = I
type 3       : v.conj().T @ inv(b) @ v  = I
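
As a quick check of the type 1 conventions above, using the SciPy function this entry is based on (a sketch; the random matrices are illustrative):

>>> import numpy as np
>>> from scipy.linalg import eigh
>>> rng = np.random.default_rng(0)
>>> A = rng.normal(size=(4, 4)); A = A + A.T                   # symmetric a
>>> B = rng.normal(size=(4, 4)); B = B @ B.T + 4 * np.eye(4)   # positive definite b
>>> w, v = eigh(A, B)                                          # generalized problem, type=1
>>> np.allclose(A @ v, (B @ v) * w)                            # a @ vi = w[i] * b @ vi
True
>>> np.allclose(v.T @ B @ v, np.eye(4))                        # v.conj().T @ b @ v = I
True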

Examples

>>> from scipy.linalg import eigh
>>> A = np.array([[6, 3, 1, 5], [3, 0, 5, 1], [1, 5, 6, 2], [5, 1, 2, 2]])
>>> w, v = eigh(A)
>>> np.allclose(A @ v - v @ np.diag(w), np.zeros((4, 4)))
True

Request only the eigenvalues

>>> w = eigh(A, eigvals_only=True)

Request eigenvalues that are less than 10.

>>> A = np.array([[34, -4, -10, -7, 2],
...               [-4, 7, 2, 12, 0],
...               [-10, 2, 44, 2, -19],
...               [-7, 12, 2, 79, -34],
...               [2, 0, -19, -34, 29]])
>>> eigh(A, eigvals_only=True, subset_by_value=[-np.inf, 10])
array([6.69199443e-07, 9.11938152e+00])

Request the second smallest eigenvalue and its eigenvector

>>> w, v = eigh(A, subset_by_index=[1, 1])
>>> w
array([9.11938152])
>>> v.shape  # only a single column is returned
(5, 1)
symjax.tensor.linalg.eigvals(a)[source]

Compute the eigenvalues of a general matrix.

LAX-backend implementation of eigvals(). Original docstring below.

Main difference between eigvals and eig: the eigenvectors aren’t returned.

Parameters: a ((…, M, M) array_like) – A complex- or real-valued matrix whose eigenvalues will be computed.
Returns: w – The eigenvalues, each repeated according to its multiplicity. They are not necessarily ordered, nor are they necessarily real for real matrices.
Return type: (…, M) ndarray
Raises: LinAlgError – If the eigenvalue computation does not converge.

See also

eig()
eigenvalues and right eigenvectors of general arrays
eigvalsh()
eigenvalues of real symmetric or complex Hermitian (conjugate symmetric) arrays.
eigh()
eigenvalues and eigenvectors of real symmetric or complex Hermitian (conjugate symmetric) arrays.
scipy.linalg.eigvals()
Similar function in SciPy.

Notes

New in version 1.8.0.

Broadcasting rules apply, see the numpy.linalg documentation for details.

This is implemented using the _geev LAPACK routines which compute the eigenvalues and eigenvectors of general square arrays.

Examples

Illustration, using the fact that the eigenvalues of a diagonal matrix are its diagonal elements, that multiplying a matrix on the left by an orthogonal matrix, Q, and on the right by Q.T (the transpose of Q), preserves the eigenvalues of the “middle” matrix. In other words, if Q is orthogonal, then Q * A * Q.T has the same eigenvalues as A:

>>> from numpy import linalg as LA
>>> x = np.random.random()
>>> Q = np.array([[np.cos(x), -np.sin(x)], [np.sin(x), np.cos(x)]])
>>> LA.norm(Q[0, :]), LA.norm(Q[1, :]), np.dot(Q[0, :],Q[1, :])
(1.0, 1.0, 0.0)

Now multiply a diagonal matrix by Q on one side and by Q.T on the other:

>>> D = np.diag((-1,1))
>>> LA.eigvals(D)
array([-1.,  1.])
>>> A = np.dot(Q, D)
>>> A = np.dot(A, Q.T)
>>> LA.eigvals(A)
array([ 1., -1.]) # random
symjax.tensor.linalg.eigvalsh(a, UPLO='L')[source]

Compute the eigenvalues of a complex Hermitian or real symmetric matrix.

LAX-backend implementation of eigvalsh(). Original docstring below.

Main difference from eigh: the eigenvectors are not computed.

Parameters:
  • a ((…, M, M) array_like) – A complex- or real-valued matrix whose eigenvalues are to be computed.
  • UPLO ({'L', 'U'}, optional) – Specifies whether the calculation is done with the lower triangular part of a (‘L’, default) or the upper triangular part (‘U’). Irrespective of this value only the real parts of the diagonal will be considered in the computation to preserve the notion of a Hermitian matrix. It therefore follows that the imaginary part of the diagonal will always be treated as zero.
Returns:

w – The eigenvalues in ascending order, each repeated according to its multiplicity.

Return type:

(…, M) ndarray

Raises:

LinAlgError – If the eigenvalue computation does not converge.

See also

eigh()
eigenvalues and eigenvectors of real symmetric or complex Hermitian (conjugate symmetric) arrays.
eigvals()
eigenvalues of general real or complex arrays.
eig()
eigenvalues and right eigenvectors of general real or complex arrays.
scipy.linalg.eigvalsh()
Similar function in SciPy.

Notes

New in version 1.8.0.

Broadcasting rules apply, see the numpy.linalg documentation for details.

The eigenvalues are computed using LAPACK routines _syevd, _heevd.

Examples

>>> from numpy import linalg as LA
>>> a = np.array([[1, -2j], [2j, 5]])
>>> LA.eigvalsh(a)
array([ 0.17157288,  5.82842712]) # may vary
>>> # demonstrate the treatment of the imaginary part of the diagonal
>>> a = np.array([[5+2j, 9-2j], [0+2j, 2-1j]])
>>> a
array([[5.+2.j, 9.-2.j],
       [0.+2.j, 2.-1.j]])
>>> # with UPLO='L' this is numerically equivalent to using LA.eigvals()
>>> # with:
>>> b = np.array([[5.+0.j, 0.-2.j], [0.+2.j, 2.-0.j]])
>>> b
array([[5.+0.j, 0.-2.j],
       [0.+2.j, 2.+0.j]])
>>> wa = LA.eigvalsh(a)
>>> wb = LA.eigvals(b)
>>> wa; wb
array([1., 6.])
array([6.+0.j, 1.+0.j])
symjax.tensor.linalg.inv(a, overwrite_a=False, check_finite=True)[source]

Compute the inverse of a matrix.

LAX-backend implementation of inv(). Original docstring below.

Parameters:
  • a (array_like) – Square matrix to be inverted.
  • overwrite_a (bool, optional) – Discard data in a (may improve performance). Default is False.
  • check_finite (bool, optional) – Whether to check that the input matrix contains only finite numbers. Disabling may give a performance gain, but may result in problems (crashes, non-termination) if the inputs do contain infinities or NaNs.
Returns:

ainv – Inverse of the matrix a.

Return type:

ndarray

Raises:
  • LinAlgError – If a is singular.
  • ValueError – If a is not square, or not 2D.

Examples

>>> from scipy import linalg
>>> a = np.array([[1., 2.], [3., 4.]])
>>> linalg.inv(a)
array([[-2. ,  1. ],
       [ 1.5, -0.5]])
>>> np.dot(a, linalg.inv(a))
array([[ 1.,  0.],
       [ 0.,  1.]])
symjax.tensor.linalg.lstsq(a, b, rcond=None, *, numpy_resid=False)[source]

Return the least-squares solution to a linear matrix equation.

LAX-backend implementation of lstsq(). It has two important differences:

  1. In numpy.linalg.lstsq, the default rcond is -1, and warns that in the future the default will be None. Here, the default rcond is None.
  2. In np.linalg.lstsq the returned residuals are empty for low-rank or over-determined solutions. Here, the residuals are returned in all cases, to make the function compatible with jit. The non-jit compatible numpy behavior can be recovered by passing numpy_resid=True.

The lstsq function does not currently have a custom JVP rule, so the gradient is poorly behaved for some inputs, particularly for low-rank a.

Original docstring below.

Computes the vector x that approximately solves the equation a @ x = b. The equation may be under-, well-, or over-determined (i.e., the number of linearly independent rows of a can be less than, equal to, or greater than its number of linearly independent columns). If a is square and of full rank, then x (but for round-off error) is the “exact” solution of the equation. Else, x minimizes the Euclidean 2-norm \(|| b - a x ||\).

Parameters:
  • a ((M, N) array_like) – “Coefficient” matrix.
  • b ({(M,), (M, K)} array_like) – Ordinate or “dependent variable” values. If b is two-dimensional, the least-squares solution is calculated for each of the K columns of b.
  • rcond (float, optional) – Cut-off ratio for small singular values of a. For the purposes of rank determination, singular values are treated as zero if they are smaller than rcond times the largest singular value of a.
Returns:

  • x ({(N,), (N, K)} ndarray) – Least-squares solution. If b is two-dimensional, the solutions are in the K columns of x.
  • residuals ({(1,), (K,), (0,)} ndarray) – Sums of residuals; squared Euclidean 2-norm for each column in b - a*x. If the rank of a is < N or M <= N, this is an empty array. If b is 1-dimensional, this is a (1,) shape array. Otherwise the shape is (K,).
  • rank (int) – Rank of matrix a.
  • s ((min(M, N),) ndarray) – Singular values of a.

Raises:

LinAlgError – If computation does not converge.

See also

scipy.linalg.lstsq()
Similar function in SciPy.

Notes

If b is a matrix, then all array results are returned as matrices.

Examples

Fit a line, y = mx + c, through some noisy data-points:

>>> x = np.array([0, 1, 2, 3])
>>> y = np.array([-1, 0.2, 0.9, 2.1])

By examining the coefficients, we see that the line should have a gradient of roughly 1 and cut the y-axis at, more or less, -1.

We can rewrite the line equation as y = Ap, where A = [[x 1]] and p = [[m], [c]]. Now use lstsq to solve for p:

>>> A = np.vstack([x, np.ones(len(x))]).T
>>> A
array([[ 0.,  1.],
       [ 1.,  1.],
       [ 2.,  1.],
       [ 3.,  1.]])
>>> m, c = np.linalg.lstsq(A, y, rcond=None)[0]
>>> m, c
(1.0, -0.95) # may vary

Plot the data along with the fitted line:

>>> import matplotlib.pyplot as plt
>>> _ = plt.plot(x, y, 'o', label='Original data', markersize=10)
>>> _ = plt.plot(x, m*x + c, 'r', label='Fitted line')
>>> _ = plt.legend()
>>> plt.show()
symjax.tensor.linalg.matrix_power(a, n)[source]

Raise a square matrix to the (integer) power n.

LAX-backend implementation of matrix_power(). Original docstring below.

For positive integers n, the power is computed by repeated matrix squarings and matrix multiplications. If n == 0, the identity matrix of the same shape as M is returned. If n < 0, the inverse is computed and then raised to the abs(n).

Note

Stacks of object matrices are not currently supported.
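
The repeated-squaring scheme described above can be sketched in a few lines of NumPy (an illustration, not the library's exact code path):

def power_by_squaring(a, n):
    # Exponentiation by squaring for integer n >= 0.
    result = np.eye(a.shape[0], dtype=a.dtype)
    while n > 0:
        if n & 1:              # fold in the current power of two
            result = result @ a
        a = a @ a              # square
        n >>= 1
    return result

>>> i = np.array([[0, 1], [-1, 0]])
>>> np.array_equal(power_by_squaring(i, 3), np.linalg.matrix_power(i, 3))
True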

Parameters:
  • a ((…, M, M) array_like) – Matrix to be “powered”.
  • n (int) – The exponent can be any integer or long integer, positive, negative, or zero.
Returns:

a**n – The return value is the same shape and type as M; if the exponent is positive or zero then the type of the elements is the same as those of M. If the exponent is negative the elements are floating-point.

Return type:

(…, M, M) ndarray or matrix object

Raises:

LinAlgError – For matrices that are not square or that (for negative powers) cannot be inverted numerically.

Examples

>>> from numpy.linalg import matrix_power
>>> i = np.array([[0, 1], [-1, 0]]) # matrix equiv. of the imaginary unit
>>> matrix_power(i, 3) # should = -i
array([[ 0, -1],
       [ 1,  0]])
>>> matrix_power(i, 0)
array([[1, 0],
       [0, 1]])
>>> matrix_power(i, -3) # should = 1/(-i) = i, but w/ f.p. elements
array([[ 0.,  1.],
       [-1.,  0.]])

Somewhat more sophisticated example

>>> q = np.zeros((4, 4))
>>> q[0:2, 0:2] = -i
>>> q[2:4, 2:4] = i
>>> q # one of the three quaternion units not equal to 1
array([[ 0., -1.,  0.,  0.],
       [ 1.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  1.],
       [ 0.,  0., -1.,  0.]])
>>> matrix_power(q, 2) # = -np.eye(4)
array([[-1.,  0.,  0.,  0.],
       [ 0., -1.,  0.,  0.],
       [ 0.,  0., -1.,  0.],
       [ 0.,  0.,  0., -1.]])
symjax.tensor.linalg.matrix_rank(M, tol=None)[source]

Return matrix rank of array using SVD method

LAX-backend implementation of matrix_rank(). Original docstring below.

Rank of the array is the number of singular values of the array that are greater than tol.

Changed in version 1.14: Can now operate on stacks of matrices

Parameters:
  • M ({(M,), (…, M, N)} array_like) – Input vector or stack of matrices.
  • tol ((…) array_like, float, optional) – Threshold below which SVD values are considered zero. If tol is None, and S is an array with singular values for M, and eps is the epsilon value for datatype of S, then tol is set to S.max() * max(M.shape) * eps.
Returns:

rank – Rank of M.

Return type:

(…) array_like

Notes

The default threshold to detect rank deficiency is a test on the magnitude of the singular values of M. By default, we identify singular values less than S.max() * max(M.shape) * eps as indicating rank deficiency (with the symbols defined above). This is the algorithm MATLAB uses [1]. It also appears in Numerical recipes in the discussion of SVD solutions for linear least squares [2].

This default threshold is designed to detect rank deficiency accounting for the numerical errors of the SVD computation. Imagine that there is a column in M that is an exact (in floating point) linear combination of other columns in M. Computing the SVD on M will not produce a singular value exactly equal to 0 in general: any difference of the smallest SVD value from 0 will be caused by numerical imprecision in the calculation of the SVD. Our threshold for small SVD values takes this numerical imprecision into account, and the default threshold will detect such numerical rank deficiency. The threshold may declare a matrix M rank deficient even if the linear combination of some columns of M is not exactly equal to another column of M but only numerically very close to another column of M.

We chose our default threshold because it is in wide use. Other thresholds are possible. For example, elsewhere in the 2007 edition of Numerical recipes there is an alternative threshold of S.max() * np.finfo(M.dtype).eps / 2. * np.sqrt(m + n + 1.). The authors describe this threshold as being based on “expected roundoff error” (p 71).

The thresholds above deal with floating point roundoff error in the calculation of the SVD. However, you may have more information about the sources of error in M that would make you consider other tolerance values to detect effective rank deficiency. The most useful measure of the tolerance depends on the operations you intend to use on your matrix. For example, if your data come from uncertain measurements with uncertainties greater than floating point epsilon, choosing a tolerance near that uncertainty may be preferable. The tolerance may be absolute if the uncertainties are absolute rather than relative.
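
The default threshold described above is easy to reproduce by hand (a sketch using plain NumPy):

>>> M = np.eye(4); M[-1, -1] = 0.                         # rank-deficient matrix
>>> S = np.linalg.svd(M, compute_uv=False)                # singular values
>>> tol = S.max() * max(M.shape) * np.finfo(S.dtype).eps  # default threshold
>>> int(np.sum(S > tol))                                  # rank computed by hand
3
>>> np.linalg.matrix_rank(M)
3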

References

[1]MATLAB reference documentation, “Rank” https://www.mathworks.com/help/techdoc/ref/rank.html
[2]W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, “Numerical Recipes (3rd edition)”, Cambridge University Press, 2007, page 795.

Examples

>>> from numpy.linalg import matrix_rank
>>> matrix_rank(np.eye(4)) # Full rank matrix
4
>>> I=np.eye(4); I[-1,-1] = 0. # rank deficient matrix
>>> matrix_rank(I)
3
>>> matrix_rank(np.ones((4,))) # 1 dimension - rank 1 unless all 0
1
>>> matrix_rank(np.zeros((4,)))
0
symjax.tensor.linalg.multi_dot(arrays, *, precision=None)[source]

Compute the dot product of two or more arrays in a single function call, while automatically selecting the fastest evaluation order.

LAX-backend implementation of multi_dot(). Original docstring below.

multi_dot chains numpy.dot and uses optimal parenthesization of the matrices [1] [2]. Depending on the shapes of the matrices, this can speed up the multiplication a lot.

If the first argument is 1-D it is treated as a row vector. If the last argument is 1-D it is treated as a column vector. The other arguments must be 2-D.

Think of multi_dot as:

def multi_dot(arrays):
    return functools.reduce(np.dot, arrays)

Parameters: arrays (sequence of array_like) – If the first argument is 1-D it is treated as a row vector. If the last argument is 1-D it is treated as a column vector. The other arguments must be 2-D.
Returns: output – Returns the dot product of the supplied arrays.
Return type: ndarray

See also

dot()
dot multiplication with two arguments.

References

[1]Cormen, “Introduction to Algorithms”, Chapter 15.2, p. 370-378
[2]https://en.wikipedia.org/wiki/Matrix_chain_multiplication

Examples

multi_dot allows you to write:

>>> from numpy.linalg import multi_dot
>>> # Prepare some data
>>> A = np.random.random((10000, 100))
>>> B = np.random.random((100, 1000))
>>> C = np.random.random((1000, 5))
>>> D = np.random.random((5, 333))
>>> # the actual dot multiplication
>>> _ = multi_dot([A, B, C, D])

instead of:

>>> _ = np.dot(np.dot(np.dot(A, B), C), D)
>>> # or
>>> _ = A.dot(B).dot(C).dot(D)

Notes

The cost for a matrix multiplication can be calculated with the following function:

def cost(A, B):
    return A.shape[0] * A.shape[1] * B.shape[1]

Assume we have three matrices \(A_{10\times 100}, B_{100\times 5}, C_{5\times 50}\).

The costs for the two different parenthesizations are as follows:

cost((AB)C) = 10*100*5 + 10*5*50   = 5000 + 2500   = 7500
cost(A(BC)) = 10*100*50 + 100*5*50 = 50000 + 25000 = 75000
symjax.tensor.linalg.norm(x, ord=2, axis=None, keepdims=False)[source]

Tensor/Matrix/Vector norm.

For matrices and vectors, this function is able to return one of eight different matrix norms, or one of an infinite number of vector norms (described below), depending on the value of the ord parameter.

For higher-dimensional tensors, only \(0 < ord < \infty\) is supported.

Parameters:
  • x (array_like) – Input array. If axis is None, x must be 1-D or 2-D, unless ord is None. If both axis and ord are None, the 2-norm of x.ravel will be returned.
  • ord ({non-zero int, inf, -inf, 'fro', 'nuc'}, optional) – Order of the norm (see table under Notes). inf means numpy’s inf object. The default is 2.
  • axis ({None, int, 2-tuple of ints}, optional) – If axis is an integer, it specifies the axis of x along which to compute the vector norms. If axis is a 2-tuple, it specifies the axes that hold 2-D matrices, and the matrix norms of these matrices are computed. If axis is None then either a vector norm (when x is 1-D) or a matrix norm (when x is 2-D) is returned. The default is None.
  • keepdims (bool, optional) – If this is set to True, the axes which are normed over are left in the result as dimensions with size one. With this option the result will broadcast correctly against the original x.
Returns:

n – Norm of the matrix or vector(s).

Return type:

float or ndarray

See also

scipy.linalg.norm()
Similar function in SciPy.

Notes

For values of ord < 1, the result is, strictly speaking, not a mathematical ‘norm’, but it may still be useful for various numerical purposes. The following norms can be calculated:

ord    norm for matrices             norm for vectors
=====  ============================  ==========================
None   Frobenius norm                2-norm
'fro'  Frobenius norm                --
'nuc'  nuclear norm                  --
inf    max(sum(abs(x), axis=1))      max(abs(x))
-inf   min(sum(abs(x), axis=1))      min(abs(x))
0      --                            sum(x != 0)
1      max(sum(abs(x), axis=0))      as below
-1     min(sum(abs(x), axis=0))      as below
2      2-norm (largest sing. value)  as below
-2     smallest singular value       as below
other  --                            sum(abs(x)**ord)**(1./ord)

The Frobenius norm is given by [1]:

\(||A||_F = [\sum_{i,j} abs(a_{i,j})^2]^{1/2}\)

The nuclear norm is the sum of the singular values. Both the Frobenius and nuclear norm orders are only defined for matrices and raise a ValueError when x.ndim != 2.
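
Both identities are easy to verify directly (a short NumPy sketch):

>>> b = (np.arange(9) - 4).reshape(3, 3).astype(float)
>>> s = np.linalg.svd(b, compute_uv=False)
>>> np.isclose(np.linalg.norm(b, 'fro'), np.sqrt(np.sum(np.abs(b)**2)))
True
>>> np.isclose(np.linalg.norm(b, 'nuc'), np.sum(s))
True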

References

[1]G. H. Golub and C. F. Van Loan, Matrix Computations, Baltimore, MD, Johns Hopkins University Press, 1985, pg. 15

Examples

>>> from numpy import linalg as LA
>>> a = np.arange(9) - 4
>>> a
array([-4, -3, -2, ...,  2,  3,  4])
>>> b = a.reshape((3, 3))
>>> b
array([[-4, -3, -2],
       [-1,  0,  1],
       [ 2,  3,  4]])
>>> LA.norm(a)
7.745966692414834
>>> LA.norm(b)
7.745966692414834
>>> LA.norm(b, 'fro')
7.745966692414834
>>> LA.norm(a, np.inf)
4.0
>>> LA.norm(b, np.inf)
9.0
>>> LA.norm(a, -np.inf)
0.0
>>> LA.norm(b, -np.inf)
2.0
>>> LA.norm(a, 1)
20.0
>>> LA.norm(b, 1)
7.0
>>> LA.norm(a, -1)
-4.6566128774142013e-010
>>> LA.norm(b, -1)
6.0
>>> LA.norm(a, 2)
7.745966692414834
>>> LA.norm(b, 2)
7.3484692283495345
>>> LA.norm(a, -2)
0.0
>>> LA.norm(b, -2)
1.8570331885190563e-016 # may vary
>>> LA.norm(a, 3)
5.8480354764257312 # may vary
>>> LA.norm(a, -3)
0.0

Using the `axis` argument to compute vector norms:

>>> c = np.array([[ 1, 2, 3],
...               [-1, 1, 4]])
>>> LA.norm(c, axis=0)
array([ 1.41421356,  2.23606798,  5.        ])
>>> LA.norm(c, axis=1)
array([ 3.74165739,  4.24264069])
>>> LA.norm(c, ord=1, axis=1)
array([ 6.,  6.])

Using the `axis` argument to compute matrix norms:

>>> m = np.arange(8).reshape(2,2,2)
>>> LA.norm(m, axis=(1,2))
array([  3.74165739,  11.22497216])
>>> LA.norm(m[0, :, :]), LA.norm(m[1, :, :])
(3.7416573867739413, 11.224972160321824)
symjax.tensor.linalg.pinv(a, rcond=None)[source]

Compute the (Moore-Penrose) pseudo-inverse of a matrix.

LAX-backend implementation of pinv(). It differs only in default value of rcond. In numpy.linalg.pinv, the default rcond is 1e-15. Here the default is 10. * max(num_rows, num_cols) * jnp.finfo(dtype).eps.

Original docstring below.

Calculate the generalized inverse of a matrix using its singular-value decomposition (SVD) and including all large singular values.

Changed in version 1.14: Can now operate on stacks of matrices

Parameters:
  • a ((…, M, N) array_like) – Matrix or stack of matrices to be pseudo-inverted.
  • rcond ((…) array_like of float) – Cutoff for small singular values. Singular values less than or equal to rcond * largest_singular_value are set to zero. Broadcasts against the stack of matrices.
Returns:

B – The pseudo-inverse of a. If a is a matrix instance, then so is B.

Return type:

(…, N, M) ndarray

Raises:

LinAlgError – If the SVD computation does not converge.

See also

scipy.linalg.pinv()
Similar function in SciPy.
scipy.linalg.pinv2()
Similar function in SciPy (SVD-based).
scipy.linalg.pinvh()
Compute the (Moore-Penrose) pseudo-inverse of a Hermitian matrix.

Notes

The pseudo-inverse of a matrix A, denoted \(A^+\), is defined as: “the matrix that ‘solves’ [the least-squares problem] \(Ax = b\),” i.e., if \(\bar{x}\) is said solution, then \(A^+\) is that matrix such that \(\bar{x} = A^+b\).

It can be shown that if \(Q_1 \Sigma Q_2^T = A\) is the singular value decomposition of A, then \(A^+ = Q_2 \Sigma^+ Q_1^T\), where \(Q_{1,2}\) are orthogonal matrices, \(\Sigma\) is a diagonal matrix consisting of A’s so-called singular values, (followed, typically, by zeros), and then \(\Sigma^+\) is simply the diagonal matrix consisting of the reciprocals of A’s singular values (again, followed by zeros) [1].
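
That construction can be checked directly against the library routine (a sketch; the cutoff mirrors numpy's default rcond of 1e-15 relative to the largest singular value):

>>> a = np.random.randn(9, 6)
>>> u, s, vh = np.linalg.svd(a, full_matrices=False)
>>> s_inv = np.where(s > 1e-15 * s.max(), 1.0 / s, 0.0)  # reciprocals of large singular values
>>> B = vh.T.conj() @ np.diag(s_inv) @ u.T.conj()
>>> np.allclose(B, np.linalg.pinv(a))
True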

References

[1]G. Strang, Linear Algebra and Its Applications, 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, pp. 139-142.

Examples

The following example checks that a * a+ * a == a and a+ * a * a+ == a+:

>>> a = np.random.randn(9, 6)
>>> B = np.linalg.pinv(a)
>>> np.allclose(a, np.dot(a, np.dot(B, a)))
True
>>> np.allclose(B, np.dot(B, np.dot(a, B)))
True
symjax.tensor.linalg.qr(a, mode='reduced')[source]

Compute the qr factorization of a matrix.

LAX-backend implementation of qr(). Original docstring below.

Factor the matrix a as qr, where q is orthonormal and r is upper-triangular.

Parameters:
  • a (array_like, shape (M, N)) – Matrix to be factored.
  • mode ({'reduced', 'complete', 'r', 'raw'}, optional) – If K = min(M, N), then 'reduced' returns q, r with dimensions (M, K), (K, N) (default); 'complete' returns q, r with dimensions (M, M), (M, N); 'r' returns r only with dimensions (K, N); 'raw' returns h, tau with dimensions (N, M), (K,).
Returns:

  • q (ndarray of float or complex, optional) – A matrix with orthonormal columns. When mode = ‘complete’ the result is an orthogonal/unitary matrix depending on whether or not a is real/complex. The determinant may be either +/- 1 in that case.
  • r (ndarray of float or complex, optional) – The upper-triangular matrix.
  • (h, tau) (ndarrays of np.double or np.cdouble, optional) – The array h contains the Householder reflectors that generate q along with r. The tau array contains scaling factors for the reflectors. In the deprecated ‘economic’ mode only h is returned.

Raises:

LinAlgError – If factoring fails.

See also

scipy.linalg.qr()
Similar function in SciPy.
scipy.linalg.rq()
Compute RQ decomposition of a matrix.

Notes

This is an interface to the LAPACK routines dgeqrf, zgeqrf, dorgqr, and zungqr.

For more information on the qr factorization, see for example: https://en.wikipedia.org/wiki/QR_factorization

Subclasses of ndarray are preserved except for the ‘raw’ mode. So if a is of type matrix, all the return values will be matrices too.

New ‘reduced’, ‘complete’, and ‘raw’ options for mode were added in NumPy 1.8.0 and the old option ‘full’ was made an alias of ‘reduced’. In addition the options ‘full’ and ‘economic’ were deprecated. Because ‘full’ was the previous default and ‘reduced’ is the new default, backward compatibility can be maintained by letting mode default. The ‘raw’ option was added so that LAPACK routines that can multiply arrays by q using the Householder reflectors can be used. Note that in this case the returned arrays are of type np.double or np.cdouble and the h array is transposed to be FORTRAN compatible. No routines using the ‘raw’ return are currently exposed by numpy, but some are available in lapack_lite and just await the necessary work.

Examples

>>> a = np.random.randn(9, 6)
>>> q, r = np.linalg.qr(a)
>>> np.allclose(a, np.dot(q, r))  # a does equal qr
True
>>> r2 = np.linalg.qr(a, mode='r')
>>> np.allclose(r, r2)  # mode='r' returns the same r as mode='full'
True

Example illustrating a common use of qr: solving of least squares problems

What are the least-squares-best m and y0 in y = y0 + mx for the following data: {(0,1), (1,0), (1,2), (2,1)}. (Graph the points and you’ll see that it should be y0 = 0, m = 1.) The answer is provided by solving the over-determined matrix equation Ax = b, where:

A = array([[0, 1], [1, 1], [1, 1], [2, 1]])
x = array([[y0], [m]])
b = array([[1], [0], [2], [1]])

If A = qr such that q is orthonormal (which is always possible via Gram-Schmidt), then x = inv(r) * (q.T) * b. (In numpy practice, however, we simply use lstsq.)

>>> A = np.array([[0, 1], [1, 1], [1, 1], [2, 1]])
>>> A
array([[0, 1],
       [1, 1],
       [1, 1],
       [2, 1]])
>>> b = np.array([1, 0, 2, 1])
>>> q, r = np.linalg.qr(A)
>>> p = np.dot(q.T, b)
>>> np.dot(np.linalg.inv(r), p)
array([  1.1e-16,   1.0e+00])
symjax.tensor.linalg.slogdet(a)[source]

Compute the sign and (natural) logarithm of the determinant of an array.

LAX-backend implementation of slogdet(). Original docstring below.

If an array has a very small or very large determinant, then a call to det may overflow or underflow. This routine is more robust against such issues, because it computes the logarithm of the determinant rather than the determinant itself.

Parameters: a ((…, M, M) array_like) – Input array, has to be a square 2-D array.
Returns:
  • sign ((…) array_like) – A number representing the sign of the determinant. For a real matrix, this is 1, 0, or -1. For a complex matrix, this is a complex number with absolute value 1 (i.e., it is on the unit circle), or else 0.
  • logdet ((…) array_like) – The natural log of the absolute value of the determinant.

If the determinant is zero, then sign will be 0 and logdet will be -Inf. In all cases, the determinant is equal to sign * np.exp(logdet).

See also

det()

Notes

New in version 1.8.0.

Broadcasting rules apply, see the numpy.linalg documentation for details.

New in version 1.6.0.

The determinant is computed via LU factorization using the LAPACK routine z/dgetrf.

Examples

The determinant of a 2-D array [[a, b], [c, d]] is ad - bc:

>>> a = np.array([[1, 2], [3, 4]])
>>> (sign, logdet) = np.linalg.slogdet(a)
>>> (sign, logdet)
(-1, 0.69314718055994529) # may vary
>>> sign * np.exp(logdet)
-2.0

Computing log-determinants for a stack of matrices:

>>> a = np.array([ [[1, 2], [3, 4]], [[1, 2], [2, 1]], [[1, 3], [3, 1]] ])
>>> a.shape
(3, 2, 2)
>>> sign, logdet = np.linalg.slogdet(a)
>>> (sign, logdet)
(array([-1., -1., -1.]), array([ 0.69314718,  1.09861229,  2.07944154]))
>>> sign * np.exp(logdet)
array([-2., -3., -8.])

This routine succeeds where ordinary det does not:

>>> np.linalg.det(np.eye(500) * 0.1)
0.0
>>> np.linalg.slogdet(np.eye(500) * 0.1)
(1, -1151.2925464970228)
symjax.tensor.linalg.solve(a, b)[source]

Solve a linear matrix equation, or system of linear scalar equations.

LAX-backend implementation of solve(). Original docstring below.

Computes the “exact” solution, x, of the well-determined, i.e., full rank, linear matrix equation ax = b.

Parameters:
  • a ((…, M, M) array_like) – Coefficient matrix.
  • b ({(…, M,), (…, M, K)} array_like) – Ordinate or “dependent variable” values.
Returns: x – Solution to the system a x = b. Returned shape is identical to b.
Return type: {(…, M,), (…, M, K)} ndarray
Raises: LinAlgError – If a is singular or not square.

See also

scipy.linalg.solve()
Similar function in SciPy.

Notes

New in version 1.8.0.

Broadcasting rules apply, see the numpy.linalg documentation for details.

The solutions are computed using LAPACK routine _gesv.

a must be square and of full-rank, i.e., all rows (or, equivalently, columns) must be linearly independent; if either is not true, use lstsq for the least-squares best “solution” of the system/equation.

References

[1]G. Strang, Linear Algebra and Its Applications, 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, pg. 22.

Examples

Solve the system of equations 3 * x0 + x1 = 9 and x0 + 2 * x1 = 8:

>>> a = np.array([[3,1], [1,2]])
>>> b = np.array([9,8])
>>> x = np.linalg.solve(a, b)
>>> x
array([2.,  3.])

Check that the solution is correct:

>>> np.allclose(np.dot(a, x), b)
True
symjax.tensor.linalg.svd(a, full_matrices=True, compute_uv=True)[source]

Singular Value Decomposition.

LAX-backend implementation of svd(). Original docstring below.

When a is a 2D array, it is factorized as u @ np.diag(s) @ vh = (u * s) @ vh, where u and vh are 2D unitary arrays and s is a 1D array of a’s singular values. When a is higher-dimensional, SVD is applied in stacked mode as explained below.

Parameters:
  • a ((…, M, N) array_like) – A real or complex array with a.ndim >= 2.
  • full_matrices (bool, optional) – If True (default), u and vh have the shapes (..., M, M) and (..., N, N), respectively. Otherwise, the shapes are (..., M, K) and (..., K, N), respectively, where K = min(M, N).
  • compute_uv (bool, optional) – Whether or not to compute u and vh in addition to s. True by default.
Returns:

  • u ({ (…, M, M), (…, M, K) } array) – Unitary array(s). The first a.ndim - 2 dimensions have the same size as those of the input a. The size of the last two dimensions depends on the value of full_matrices. Only returned when compute_uv is True.
  • s ((…, K) array) – Vector(s) with the singular values, within each vector sorted in descending order. The first a.ndim - 2 dimensions have the same size as those of the input a.
  • vh ({ (…, N, N), (…, K, N) } array) – Unitary array(s). The first a.ndim - 2 dimensions have the same size as those of the input a. The size of the last two dimensions depends on the value of full_matrices. Only returned when compute_uv is True.

Raises:

LinAlgError – If SVD computation does not converge.

See also

scipy.linalg.svd()
Similar function in SciPy.
scipy.linalg.svdvals()
Compute singular values of a matrix.

Notes

Changed in version 1.8.0: Broadcasting rules apply, see the numpy.linalg documentation for details.

The decomposition is performed using LAPACK routine _gesdd.

SVD is usually described for the factorization of a 2D matrix \(A\). The higher-dimensional case will be discussed below. In the 2D case, SVD is written as \(A = U S V^H\), where \(A = a\), \(U= u\), \(S= \mathtt{np.diag}(s)\) and \(V^H = vh\). The 1D array s contains the singular values of a and u and vh are unitary. The rows of vh are the eigenvectors of \(A^H A\) and the columns of u are the eigenvectors of \(A A^H\). In both cases the corresponding (possibly non-zero) eigenvalues are given by s**2.

If a has more than two dimensions, then broadcasting rules apply, as explained in routines.linalg-broadcasting. This means that SVD is working in “stacked” mode: it iterates over all indices of the first a.ndim - 2 dimensions and for each combination SVD is applied to the last two indices. The matrix a can be reconstructed from the decomposition with either (u * s[..., None, :]) @ vh or u @ (s[..., None] * vh). (The @ operator can be replaced by the function np.matmul for python versions below 3.5.)

If a is a matrix object (as opposed to an ndarray), then so are all the return values.

Examples

>>> a = np.random.randn(9, 6) + 1j*np.random.randn(9, 6)
>>> b = np.random.randn(2, 7, 8, 3) + 1j*np.random.randn(2, 7, 8, 3)

Reconstruction based on full SVD, 2D case:

>>> u, s, vh = np.linalg.svd(a, full_matrices=True)
>>> u.shape, s.shape, vh.shape
((9, 9), (6,), (6, 6))
>>> np.allclose(a, np.dot(u[:, :6] * s, vh))
True
>>> smat = np.zeros((9, 6), dtype=complex)
>>> smat[:6, :6] = np.diag(s)
>>> np.allclose(a, np.dot(u, np.dot(smat, vh)))
True

Reconstruction based on reduced SVD, 2D case:

>>> u, s, vh = np.linalg.svd(a, full_matrices=False)
>>> u.shape, s.shape, vh.shape
((9, 6), (6,), (6, 6))
>>> np.allclose(a, np.dot(u * s, vh))
True
>>> smat = np.diag(s)
>>> np.allclose(a, np.dot(u, np.dot(smat, vh)))
True

Reconstruction based on full SVD, 4D case:

>>> u, s, vh = np.linalg.svd(b, full_matrices=True)
>>> u.shape, s.shape, vh.shape
((2, 7, 8, 8), (2, 7, 3), (2, 7, 3, 3))
>>> np.allclose(b, np.matmul(u[..., :3] * s[..., None, :], vh))
True
>>> np.allclose(b, np.matmul(u[..., :3], s[..., None] * vh))
True

Reconstruction based on reduced SVD, 4D case:

>>> u, s, vh = np.linalg.svd(b, full_matrices=False)
>>> u.shape, s.shape, vh.shape
((2, 7, 8, 3), (2, 7, 3), (2, 7, 3, 3))
>>> np.allclose(b, np.matmul(u * s[..., None, :], vh))
True
>>> np.allclose(b, np.matmul(u, s[..., None] * vh))
True
symjax.tensor.linalg.tensorinv(a, ind=2)[source]

Compute the ‘inverse’ of an N-dimensional array.

LAX-backend implementation of tensorinv(). Original docstring below.

The result is an inverse for a relative to the tensordot operation tensordot(a, b, ind), i. e., up to floating-point accuracy, tensordot(tensorinv(a), a, ind) is the “identity” tensor for the tensordot operation.

Parameters:
  • a (array_like) – Tensor to ‘invert’. Its shape must be ‘square’, i. e., prod(a.shape[:ind]) == prod(a.shape[ind:]).
  • ind (int, optional) – Number of first indices that are involved in the inverse sum. Must be a positive integer, default is 2.
Returns:

b – a’s tensordot inverse, shape a.shape[ind:] + a.shape[:ind].

Return type:

ndarray

Raises:

LinAlgError – If a is singular or not ‘square’ (in the above sense).

See also

numpy.tensordot(), tensorsolve()

Examples

>>> a = np.eye(4*6)
>>> a.shape = (4, 6, 8, 3)
>>> ainv = np.linalg.tensorinv(a, ind=2)
>>> ainv.shape
(8, 3, 4, 6)
>>> b = np.random.randn(4, 6)
>>> np.allclose(np.tensordot(ainv, b), np.linalg.tensorsolve(a, b))
True
>>> a = np.eye(4*6)
>>> a.shape = (24, 8, 3)
>>> ainv = np.linalg.tensorinv(a, ind=1)
>>> ainv.shape
(8, 3, 24)
>>> b = np.random.randn(24)
>>> np.allclose(np.tensordot(ainv, b, 1), np.linalg.tensorsolve(a, b))
True
symjax.tensor.linalg.tensorsolve(a, b, axes=None)[source]

Solve the tensor equation a x = b for x.

LAX-backend implementation of tensorsolve(). Original docstring below.

It is assumed that all indices of x are summed over in the product, together with the rightmost indices of a, as is done in, for example, tensordot(a, x, axes=b.ndim).

Parameters:
  • a (array_like) – Coefficient tensor, of shape b.shape + Q. Q, a tuple, equals the shape of that sub-tensor of a consisting of the appropriate number of its rightmost indices, and must be such that prod(Q) == prod(b.shape) (in which sense a is said to be ‘square’).
  • b (array_like) – Right-hand tensor, which can be of any shape.
  • axes (tuple of ints, optional) – Axes in a to reorder to the right, before inversion. If None (default), no reordering is done.
Returns:

x

Return type:

ndarray, shape Q

Raises:

LinAlgError – If a is singular or not ‘square’ (in the above sense).

See also

numpy.tensordot(), tensorinv(), numpy.einsum()

Examples

>>> a = np.eye(2*3*4)
>>> a.shape = (2*3, 4, 2, 3, 4)
>>> b = np.random.randn(2*3, 4)
>>> x = np.linalg.tensorsolve(a, b)
>>> x.shape
(2, 3, 4)
>>> np.allclose(np.tensordot(a, x, axes=3), b)
True
symjax.tensor.linalg.cholesky(a, lower=False, overwrite_a=False, check_finite=True)[source]

Compute the Cholesky decomposition of a matrix.

LAX-backend implementation of cholesky(). Original docstring below.

Returns the Cholesky decomposition, \(A = L L^*\) or \(A = U^* U\) of a Hermitian positive-definite matrix A.

Parameters:
  • a ((M, M) array_like) – Matrix to be decomposed
  • lower (bool, optional) – Whether to compute the upper- or lower-triangular Cholesky factorization. Default is upper-triangular.
  • overwrite_a (bool, optional) – Whether to overwrite data in a (may improve performance).
  • check_finite (bool, optional) – Whether to check that the input matrix contains only finite numbers. Disabling may give a performance gain, but may result in problems (crashes, non-termination) if the inputs do contain infinities or NaNs.
Returns:

c – Upper- or lower-triangular Cholesky factor of a.

Return type:

(M, M) ndarray

Raises:

LinAlgError – If decomposition fails.

Examples

>>> from scipy.linalg import cholesky
>>> a = np.array([[1,-2j],[2j,5]])
>>> L = cholesky(a, lower=True)
>>> L
array([[ 1.+0.j,  0.+0.j],
       [ 0.+2.j,  1.+0.j]])
>>> L @ L.T.conj()
array([[ 1.+0.j,  0.-2.j],
       [ 0.+2.j,  5.+0.j]])
symjax.tensor.linalg.block_diag(*arrs)[source]

Create a block diagonal matrix from provided arrays.

LAX-backend implementation of block_diag(). Original docstring below.

Given the inputs A, B and C, the output will have these arrays arranged on the diagonal:

[[A, 0, 0],
 [0, B, 0],
 [0, 0, C]]

Parameters: *arrs (up to 2-D array_like) – Input arrays. A 1-D array or array_like sequence with length n is treated as a 2-D array with shape (1, n).
Returns: D – Array with A, B, C, … on the diagonal. D has the same dtype as A.
Return type: ndarray

Notes

If all the input arrays are square, the output is known as a block diagonal matrix.

Empty sequences (i.e., array-likes of zero size) will not be ignored. Notably, both [] and [[]] are treated as matrices with shape (1, 0).

Examples

>>> from scipy.linalg import block_diag
>>> A = [[1, 0],
...      [0, 1]]
>>> B = [[3, 4, 5],
...      [6, 7, 8]]
>>> C = [[7]]
>>> P = np.zeros((2, 0), dtype='int32')
>>> block_diag(A, B, C)
array([[1, 0, 0, 0, 0, 0],
       [0, 1, 0, 0, 0, 0],
       [0, 0, 3, 4, 5, 0],
       [0, 0, 6, 7, 8, 0],
       [0, 0, 0, 0, 0, 7]])
>>> block_diag(A, P, B, C)
array([[1, 0, 0, 0, 0, 0],
       [0, 1, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0],
       [0, 0, 3, 4, 5, 0],
       [0, 0, 6, 7, 8, 0],
       [0, 0, 0, 0, 0, 7]])
>>> block_diag(1.0, [2, 3], [[4, 5], [6, 7]])
array([[ 1.,  0.,  0.,  0.,  0.],
       [ 0.,  2.,  3.,  0.,  0.],
       [ 0.,  0.,  0.,  4.,  5.],
       [ 0.,  0.,  0.,  6.,  7.]])
symjax.tensor.linalg.cho_solve(c_and_lower, b, overwrite_b=False, check_finite=True)[source]

Solve the linear equations A x = b, given the Cholesky factorization of A.

LAX-backend implementation of cho_solve(). Original docstring below.

Parameters:
  • c_and_lower ((c, lower), tuple of (array, bool)) – Cholesky factorization of a, as given by cho_factor.
  • b (array) – Right-hand side.
  • overwrite_b (bool, optional) – Whether to overwrite data in b (may improve performance).
  • check_finite (bool, optional) – Whether to check that the input matrices contain only finite numbers. Disabling may give a performance gain, but may result in problems (crashes, non-termination) if the inputs do contain infinities or NaNs.
Returns: x – The solution to the system A x = b.
Return type: array

See also

cho_factor()
Cholesky factorization of a matrix

Examples

>>> from scipy.linalg import cho_factor, cho_solve
>>> A = np.array([[9, 3, 1, 5], [3, 7, 5, 1], [1, 5, 9, 2], [5, 1, 2, 6]])
>>> c, low = cho_factor(A)
>>> x = cho_solve((c, low), [1, 1, 1, 1])
>>> np.allclose(A @ x - [1, 1, 1, 1], np.zeros(4))
True
symjax.tensor.linalg.eigh(a, b=None, lower=True, eigvals_only=False, overwrite_a=False, overwrite_b=False, turbo=True, eigvals=None, type=1, check_finite=True)[source]
Solve a standard or generalized eigenvalue problem for a complex
Hermitian or real symmetric matrix.

LAX-backend implementation of eigh(). Original docstring below.

Find eigenvalues array w and optionally eigenvectors array v of array a, where b is positive definite such that for every eigenvalue λ (i-th entry of w) and its eigenvector vi (i-th column of v) satisfies:

              a @ vi = λ * b @ vi
vi.conj().T @ a @ vi = λ
vi.conj().T @ b @ vi = 1

In the standard problem, b is assumed to be the identity matrix.

Parameters:
  • a ((M, M) array_like) – A complex Hermitian or real symmetric matrix whose eigenvalues and eigenvectors will be computed.
  • b ((M, M) array_like, optional) – A complex Hermitian or real symmetric definite positive matrix in. If omitted, identity matrix is assumed.
  • lower (bool, optional) – Whether the pertinent array data is taken from the lower or upper triangle of a and, if applicable, b. (Default: lower)
  • eigvals_only (bool, optional) – Whether to calculate only eigenvalues and no eigenvectors. (Default: both are calculated)
  • type (int, optional) – For the generalized problems, this keyword specifies the problem type to be solved for w and v (only takes 1, 2, 3 as possible inputs):
  • overwrite_a (bool, optional) – Whether to overwrite data in a (may improve performance). Default is False.
  • overwrite_b (bool, optional) – Whether to overwrite data in b (may improve performance). Default is False.
  • check_finite (bool, optional) – Whether to check that the input matrices contain only finite numbers. Disabling may give a performance gain, but may result in problems (crashes, non-termination) if the inputs do contain infinities or NaNs.
  • turbo (bool, optional) – Deprecated since v1.5.0, use ``driver=gvd`` keyword instead. Use divide and conquer algorithm (faster but expensive in memory, only for generalized eigenvalue problem and if full set of eigenvalues are requested.). Has no significant effect if eigenvectors are not requested.
  • eigvals (tuple (lo, hi), optional) – Deprecated since v1.5.0, use ``subset_by_index`` keyword instead. Indexes of the smallest and largest (in ascending order) eigenvalues and corresponding eigenvectors to be returned: 0 <= lo <= hi <= M-1. If omitted, all eigenvalues and eigenvectors are returned.
Returns:

  • w ((N,) ndarray) – The N (1<=N<=M) selected eigenvalues, in ascending order, each repeated according to its multiplicity.
  • v ((M, N) ndarray) – (if eigvals_only == False)

Raises:

LinAlgError – If eigenvalue computation does not converge, an error occurred, or b matrix is not definite positive. Note that if input matrices are not symmetric or Hermitian, no error will be reported but results will be wrong.

See also

eigvalsh()
eigenvalues of symmetric or Hermitian arrays
eig()
eigenvalues and right eigenvectors for non-symmetric arrays
eigh_tridiagonal()
eigenvalues and right eiegenvectors for symmetric/Hermitian tridiagonal matrices

Notes

This function does not check the input array for being hermitian/symmetric in order to allow for representing arrays with only their upper/lower triangular parts. Also, note that even though not taken into account, finiteness check applies to the whole array and unaffected by “lower” keyword.

This function uses LAPACK drivers for computations in all possible keyword combinations, prefixed with sy if arrays are real and he if complex, e.g., a float array with “evr” driver is solved via “syevr”, complex arrays with “gvx” driver problem is solved via “hegvx” etc.

As a brief summary, the slowest and the most robust driver is the classical <sy/he>ev which uses symmetric QR. <sy/he>evr is seen as the optimal choice for the most general cases. However, there are certain occassions that <sy/he>evd computes faster at the expense of more memory usage. <sy/he>evx, while still being faster than <sy/he>ev, often performs worse than the rest except when very few eigenvalues are requested for large arrays though there is still no performance guarantee.

For the generalized problem, normalization with respoect to the given type argument:

type 1 and 3 :      v.conj().T @ a @ v = w
type 2       : inv(v).conj().T @ a @ inv(v) = w

type 1 or 2  :      v.conj().T @ b @ v  = I
type 3       : v.conj().T @ inv(b) @ v  = I

Examples

>>> from scipy.linalg import eigh
>>> A = np.array([[6, 3, 1, 5], [3, 0, 5, 1], [1, 5, 6, 2], [5, 1, 2, 2]])
>>> w, v = eigh(A)
>>> np.allclose(A @ v - v @ np.diag(w), np.zeros((4, 4)))
True

Request only the eigenvalues

>>> w = eigh(A, eigvals_only=True)

Request eigenvalues that are less than 10.

>>> A = np.array([[34, -4, -10, -7, 2],
...               [-4, 7, 2, 12, 0],
...               [-10, 2, 44, 2, -19],
...               [-7, 12, 2, 79, -34],
...               [2, 0, -19, -34, 29]])
>>> eigh(A, eigvals_only=True, subset_by_value=[-np.inf, 10])
array([6.69199443e-07, 9.11938152e+00])

Request the second smallest eigenvalue and its eigenvector

>>> w, v = eigh(A, subset_by_index=[1, 1])
>>> w
array([9.11938152])
>>> v.shape  # only a single column is returned
(5, 1)
symjax.tensor.linalg.expm(A, *, upper_triangular=False, max_squarings=16)[source]

Compute the matrix exponential using Pade approximation.

LAX-backend implementation of expm().

In addition to the original NumPy argument(s) listed below, this also supports the optional boolean argument upper_triangular, to specify whether the A matrix is upper triangular, and the optional argument max_squarings, to specify the maximum number of squarings allowed in the scaling-and-squaring approximation method. Returns nan if the actual number of squarings required exceeds max_squarings.

The number of required squarings is max(0, ceil(log2(norm(A)) - c)), where norm() denotes the L1 norm, and c = 2.42 for float64/complex128 or c = 1.97 for float32/complex64.
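
As an added illustration (not from the original docstring), the squarings count for a concrete float64 matrix can be computed from this formula directly:

>>> import numpy as np
>>> A = np.full((2, 2), 10.0)
>>> l1_norm = np.abs(A).sum(axis=0).max()  # L1 norm = max column sum
>>> max(0, int(np.ceil(np.log2(l1_norm) - 2.42)))  # c = 2.42 for float64
2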

Original docstring below.

Parameters:
  • A ((N, N) array_like or sparse matrix) – Matrix to be exponentiated.
Returns:

expm – Matrix exponential of A.

Return type:

(N, N) ndarray

References

[1]Awad H. Al-Mohy and Nicholas J. Higham (2009) “A New Scaling and Squaring Algorithm for the Matrix Exponential.” SIAM Journal on Matrix Analysis and Applications. 31 (3). pp. 970-989. ISSN 1095-7162

Examples

>>> from scipy.linalg import expm, sinm, cosm

Matrix version of the formula exp(0) = 1:

>>> expm(np.zeros((2,2)))
array([[ 1.,  0.],
       [ 0.,  1.]])

Euler’s identity (exp(i*theta) = cos(theta) + i*sin(theta)) applied to a matrix:

>>> a = np.array([[1.0, 2.0], [-1.0, 3.0]])
>>> expm(1j*a)
array([[ 0.42645930+1.89217551j, -2.13721484-0.97811252j],
       [ 1.06860742+0.48905626j, -1.71075555+0.91406299j]])
>>> cosm(a) + 1j*sinm(a)
array([[ 0.42645930+1.89217551j, -2.13721484-0.97811252j],
       [ 1.06860742+0.48905626j, -1.71075555+0.91406299j]])
symjax.tensor.linalg.inv(a, overwrite_a=False, check_finite=True)[source]

Compute the inverse of a matrix.

LAX-backend implementation of inv(). Original docstring below.

Parameters:
  • a (array_like) – Square matrix to be inverted.
  • overwrite_a (bool, optional) – Discard data in a (may improve performance). Default is False.
  • check_finite (bool, optional) – Whether to check that the input matrix contains only finite numbers. Disabling may give a performance gain, but may result in problems (crashes, non-termination) if the inputs do contain infinities or NaNs.
Returns:

ainv – Inverse of the matrix a.

Return type:

ndarray

Raises:
  • LinAlgError – If a is singular.
  • ValueError – If a is not square, or not 2D.

Examples

>>> from scipy import linalg
>>> a = np.array([[1., 2.], [3., 4.]])
>>> linalg.inv(a)
array([[-2. ,  1. ],
       [ 1.5, -0.5]])
>>> np.dot(a, linalg.inv(a))
array([[ 1.,  0.],
       [ 0.,  1.]])
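
As a practical aside (an addition, not part of the original docstring): when the goal is to solve a linear system, solve(a, b) is generally preferred over forming the inverse explicitly, for both accuracy and speed:

>>> b = np.array([1., 2.])
>>> np.allclose(linalg.solve(a, b), linalg.inv(a) @ b)
True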
symjax.tensor.linalg.lu(a, permute_l=False, overwrite_a=False, check_finite=True)[source]

Compute pivoted LU decomposition of a matrix.

LAX-backend implementation of lu(). Original docstring below.

The decomposition is:

A = P L U

where P is a permutation matrix, L lower triangular with unit diagonal elements, and U upper triangular.

Parameters:
  • a ((M, N) array_like) – Array to decompose
  • permute_l (bool, optional) – Perform the multiplication P*L (Default: do not permute)
  • overwrite_a (bool, optional) – Whether to overwrite data in a (may improve performance)
  • check_finite (bool, optional) – Whether to check that the input matrix contains only finite numbers. Disabling may give a performance gain, but may result in problems (crashes, non-termination) if the inputs do contain infinities or NaNs.
Returns:

(If permute_l == False)

  • p ((M, M) ndarray) – Permutation matrix
  • l ((M, K) ndarray) – Lower triangular or trapezoidal matrix with unit diagonal. K = min(M, N)
  • u ((K, N) ndarray) – Upper triangular or trapezoidal matrix

(If permute_l == True)

  • pl ((M, K) ndarray) – Permuted L matrix. K = min(M, N)
  • u ((K, N) ndarray) – Upper triangular or trapezoidal matrix

Notes

This is an LU factorization routine written for SciPy.

Examples

>>> from scipy.linalg import lu
>>> A = np.array([[2, 5, 8, 7], [5, 2, 2, 8], [7, 5, 6, 6], [5, 4, 4, 8]])
>>> p, l, u = lu(A)
>>> np.allclose(A - p @ l @ u, np.zeros((4, 4)))
True
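
The permute_l=True form returns the permuted factor P @ L directly, so no separate permutation matrix is needed (an added illustration consistent with the Returns description above):

>>> pl, u = lu(A, permute_l=True)
>>> np.allclose(A - pl @ u, np.zeros((4, 4)))
True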
symjax.tensor.linalg.lu_factor(a, overwrite_a=False, check_finite=True)[source]

Compute pivoted LU decomposition of a matrix.

LAX-backend implementation of lu_factor(). Original docstring below.

The decomposition is:

A = P L U

where P is a permutation matrix, L lower triangular with unit diagonal elements, and U upper triangular.

Parameters:
  • a ((M, M) array_like) – Matrix to decompose
  • overwrite_a (bool, optional) – Whether to overwrite data in A (may increase performance)
  • check_finite (bool, optional) – Whether to check that the input matrix contains only finite numbers. Disabling may give a performance gain, but may result in problems (crashes, non-termination) if the inputs do contain infinities or NaNs.
Returns:

  • lu ((M, M) ndarray) – Matrix containing U in its upper triangle, and L in its lower triangle. The unit diagonal elements of L are not stored.
  • piv ((M,) ndarray) – Pivot indices representing the permutation matrix P: row i of the matrix was interchanged with row piv[i].

See also

lu_solve()
solve an equation system using the LU factorization of a matrix

Notes

This is a wrapper to the *GETRF routines from LAPACK.

Examples

>>> from scipy.linalg import lu_factor
>>> A = np.array([[2, 5, 8, 7], [5, 2, 2, 8], [7, 5, 6, 6], [5, 4, 4, 8]])
>>> lu, piv = lu_factor(A)
>>> piv
array([2, 2, 3, 3], dtype=int32)

Convert LAPACK’s piv array to NumPy index and test the permutation

>>> piv_py = [2, 0, 3, 1]
>>> L, U = np.tril(lu, k=-1) + np.eye(4), np.triu(lu)
>>> np.allclose(A[piv_py] - L @ U, np.zeros((4, 4)))
True
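
The manual conversion above can be automated. A small helper (hypothetical, not part of this API) that applies the row swaps encoded in LAPACK’s piv to recover the NumPy-style permutation:

>>> def piv_to_perm(piv):  # hypothetical helper: apply successive row swaps
...     perm = np.arange(len(piv))
...     for i, p in enumerate(piv):
...         perm[i], perm[p] = perm[p], perm[i]
...     return perm
>>> piv_to_perm(piv).tolist()
[2, 0, 3, 1]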
symjax.tensor.linalg.lu_solve(lu_and_piv, b, trans=0, overwrite_b=False, check_finite=True)[source]

Solve an equation system, a x = b, given the LU factorization of a

LAX-backend implementation of lu_solve(). Original docstring below.

Parameters:
  • b (array) – Right-hand side
  • trans ({0, 1, 2}, optional) – Type of system to solve: 0 solves a x = b, 1 solves a^T x = b, and 2 solves a^H x = b.
  • overwrite_b (bool, optional) – Whether to overwrite data in b (may increase performance)
  • check_finite (bool, optional) – Whether to check that the input matrices contain only finite numbers. Disabling may give a performance gain, but may result in problems (crashes, non-termination) if the inputs do contain infinities or NaNs.
Returns:

x – Solution to the system

Return type:

array

See also

lu_factor()
LU factorize a matrix

Examples

>>> from scipy.linalg import lu_factor, lu_solve
>>> A = np.array([[2, 5, 8, 7], [5, 2, 2, 8], [7, 5, 6, 6], [5, 4, 4, 8]])
>>> b = np.array([1, 1, 1, 1])
>>> lu, piv = lu_factor(A)
>>> x = lu_solve((lu, piv), b)
>>> np.allclose(A @ x - b, np.zeros((4,)))
True
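
The same factorization can be reused for the transposed system (an added illustration):

>>> x_t = lu_solve((lu, piv), b, trans=1)  # solves A.T x = b
>>> np.allclose(A.T @ x_t - b, np.zeros((4,)))
True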
symjax.tensor.linalg.solve_triangular(a, b, trans=0, lower=False, unit_diagonal=False, overwrite_b=False, debug=None, check_finite=True)[source]

Solve the equation a x = b for x, assuming a is a triangular matrix.

LAX-backend implementation of solve_triangular(). Original docstring below.

Parameters:
  • a ((M, M) array_like) – A triangular matrix
  • b ((M,) or (M, N) array_like) – Right-hand side matrix in a x = b
  • lower (bool, optional) – Use only data contained in the lower triangle of a. Default is to use upper triangle.
  • trans ({0, 1, 2, 'N', 'T', 'C'}, optional) – Type of system to solve: 0 or 'N' solves a x = b, 1 or 'T' solves a^T x = b, and 2 or 'C' solves a^H x = b.
  • unit_diagonal (bool, optional) – If True, diagonal elements of a are assumed to be 1 and will not be referenced.
  • overwrite_b (bool, optional) – Allow overwriting data in b (may enhance performance)
  • check_finite (bool, optional) – Whether to check that the input matrices contain only finite numbers. Disabling may give a performance gain, but may result in problems (crashes, non-termination) if the inputs do contain infinities or NaNs.
Returns:

x – Solution to the system a x = b. Shape of return matches b.

Return type:

(M,) or (M, N) ndarray

Raises:

LinAlgError – If a is singular

Notes

New in version 0.9.0.

Examples

Solve the lower triangular system a x = b, where:

     [3  0  0  0]       [4]
a =  [2  1  0  0]   b = [2]
     [1  0  1  0]       [4]
     [1  1  1  1]       [2]
>>> from scipy.linalg import solve_triangular
>>> a = np.array([[3, 0, 0, 0], [2, 1, 0, 0], [1, 0, 1, 0], [1, 1, 1, 1]])
>>> b = np.array([4, 2, 4, 2])
>>> x = solve_triangular(a, b, lower=True)
>>> x
array([ 1.33333333, -0.66666667,  2.66666667, -1.33333333])
>>> a.dot(x)  # Check the result
array([ 4.,  2.,  4.,  2.])
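
The same triangle can also be used to solve the transposed system (an added illustration):

>>> x_t = solve_triangular(a, b, lower=True, trans='T')  # solves a.T x = b
>>> np.allclose(a.T @ x_t, b)
True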
symjax.tensor.linalg.tril(m, k=0)[source]

Make a copy of a matrix with elements above the kth diagonal zeroed.

LAX-backend implementation of tril(). Original docstring below.

Parameters:
  • m (array_like) – Matrix whose elements to return
  • k (int, optional) – Diagonal above which to zero elements. k == 0 is the main diagonal, k < 0 subdiagonal and k > 0 superdiagonal.
Returns:

tril – Matrix with elements above the kth diagonal zeroed; same shape and type as m.

Return type:

ndarray

Examples

>>> from scipy.linalg import tril
>>> tril([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], -1)
array([[ 0,  0,  0],
       [ 4,  0,  0],
       [ 7,  8,  0],
       [10, 11, 12]])
symjax.tensor.linalg.triu(m, k=0)[source]

Make a copy of a matrix with elements below the kth diagonal zeroed.

LAX-backend implementation of triu(). Original docstring below.

Parameters:
  • m (array_like) – Matrix whose elements to return
  • k (int, optional) – Diagonal below which to zero elements. k == 0 is the main diagonal, k < 0 subdiagonal and k > 0 superdiagonal.
Returns:

triu – Matrix with elements below the kth diagonal zeroed; same shape and type as m.

Return type:

ndarray

Examples

>>> from scipy.linalg import triu
>>> triu([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], -1)
array([[ 1,  2,  3],
       [ 4,  5,  6],
       [ 0,  8,  9],
       [ 0,  0, 12]])
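
As an added illustration, tril and triu with complementary offsets split a matrix without overlap, so summing the two parts reconstructs it:

>>> import numpy as np
>>> from scipy.linalg import tril, triu
>>> m = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
>>> np.allclose(tril(m) + triu(m, 1), m)
True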
symjax.tensor.linalg.singular_vectors_power_iteration(weight, axis=0, n_iters=1)[source]
symjax.tensor.linalg.eigenvector_power_iteration(weight, axis=0, n_iters=1)[source]
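
No docstrings are provided for these two helpers. Judging by their names and signatures, they implement standard power iteration to estimate leading singular vectors or eigenvectors. A minimal NumPy sketch of the underlying technique (an assumption about what the helpers compute, not their actual implementation):

>>> import numpy as np
>>> def power_iteration(W, n_iters=50):  # hypothetical reference implementation
...     v = np.ones(W.shape[1]) / np.sqrt(W.shape[1])  # non-degenerate start
...     for _ in range(n_iters):
...         v = W @ v
...         v = v / np.linalg.norm(v)  # renormalize each step
...     return v
>>> W = np.array([[2., 0.], [0., 1.]])
>>> np.allclose(np.abs(power_iteration(W)), [1., 0.])  # leading eigenvector
True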