To compute the transpose of a matrix $A$, run transpose(A) or $A^{T}$.
To compute the inverse of a matrix $A$, run inverse(A) or $A^{-1}$.
To compute the adjoint matrix of a matrix $A$, run adjoint(A) or $A^{\star}$.
To compute the rank of a matrix $A$, run rank(A); to compute its determinant, run det(A).
To compute the conjugate matrix of a matrix $A$, run conjugate(A) or $A^{\ast}$.
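A minimal session illustrating several of these commands might look as follows (the ring Q[] and the backslash-prefixed command form follow the SVD example in this section and are otherwise assumptions):
\begin{verbatim}
SPACE = Q[];
A = [[1, 2], [3, 4]];
T = \transpose(A);
B = \inverse(A);
d = \det(A);
r = \rank(A);
\print(T, B, d, r);
\end{verbatim}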
To compute the SVD decomposition of a matrix, execute the command SVD(A). The result is three matrices $[U, D, V]$, where $U$ and $V$ are unitary and $D$ is diagonal: $A = UDV$.
\begin{verbatim}
SPACE = R64[];
A = [[2,3,4], [1,3,3], [2,4,3]];
B = \SVD(A);
\print(B);
\end{verbatim}
To compute the generalized inverse Moore-Penrose matrix, run genInverse(A) or $A^{+}$.
To compute the echelon form of the matrix $A$, you should run toEchelonForm(A).
To calculate the kernel of matrix $A$, you should run kernel(A).
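For example, a session computing both the echelon form and the kernel of a rank-deficient matrix might look as follows (the ring choice and the session form are assumptions, by analogy with the SVD example):
\begin{verbatim}
SPACE = Q[];
A = [[1, 2, 3], [2, 4, 6], [1, 0, 1]];
E = \toEchelonForm(A);
K = \kernel(A);
\print(E, K);
\end{verbatim}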
To compute the characteristic polynomial of a matrix $A$ with entries in $R[x_1,\ldots,x_m]$, you should set the ring $R[x_1,\ldots,x_m][t]$ or $R[t,x_1,\ldots,x_m]$ with some new variable $t$ and run charPolynom(A).
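For instance, for a matrix with entries in $Q[x]$ one could set a ring with an additional variable $t$; the session below is a sketch under that assumption:
\begin{verbatim}
SPACE = Q[t, x];
A = [[x, 1], [0, x]];
p = \charPolynom(A);
\print(p);
\end{verbatim}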
To compute the LSU decomposition of the matrix $A$, run LSU(A).
The result is a vector of three matrices $[L,D,U]$, where $L$ is a lower triangular matrix, $U$ is an upper triangular matrix, and $D$ is a permutation matrix multiplied by the inverse of a diagonal matrix. If the elements of the matrix $A$ belong to a commutative domain $R$, then the elements of the matrices $L$, $D^{-1}$, $U$ belong to the same domain $R$.
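A sketched LSU call over the integers (the matrix is arbitrary; the session form follows the SVD example and is otherwise an assumption):
\begin{verbatim}
SPACE = Z[];
A = [[2, 3, 4], [1, 3, 3], [2, 4, 3]];
B = \LSU(A);
\print(B);
\end{verbatim}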
\section{Cholesky Decomposition}
This decomposition is computed with a command whose argument is the original matrix: cholesky(A) or cholesky(A, 0). The matrix must be symmetric and positive definite; only in this case is the decomposition computed correctly.
The result is two lower triangular matrices $[L, S]$, with $A = LL^{T}$ and $SL = I$.
For large dense matrices, starting from size $100\times 100$, you can use a fast algorithm that multiplies blocks by the Winograd-Strassen algorithm: cholesky(A, 1).
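For example, with a small symmetric positive definite matrix, a session might look as follows (the ring choice and the session form are assumptions, by analogy with the SVD example):
\begin{verbatim}
SPACE = R64[];
A = [[4, 2], [2, 3]];
B = \cholesky(A, 0);
\print(B);
\end{verbatim}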
To compute the LSU decomposition of the matrix $A$ together with a decomposition of the pseudo-inverse matrix $A^{\times}=(1/det^2)WSM$, run LSUWMdet(A).
The result is a vector of five matrices and the determinant of the largest non-degenerate corner block: $[L, S, U, W, M, det]$. Here $L$ and $U$ are the lower and upper triangular matrices, $S$ is a truncated weighted permutation matrix, and $DM$ and $WD$ are lower and upper triangular matrices. Moreover, $A = LSU$ and $A^{\times} = (1/det^2) WSM$. If the elements of the matrix $A$ are taken from a commutative domain, then all the matrices except $S$ also belong to this domain.
To compute the Bruhat decomposition of the matrix $A$, run BruhatDecomposition(A).
The result is a vector of three matrices $[V,D,U]$, where $V$ and $U$ are upper triangular matrices and $D$ is a permutation matrix multiplied by the inverse of a diagonal matrix. If the elements of the matrix $A$ belong to a commutative domain $R$, then the elements of the matrices $V$, $D^{-1}$, $U$ belong to the same domain $R$.
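A sketched Bruhat decomposition call over the integers (the matrix is arbitrary; the session form is an assumption):
\begin{verbatim}
SPACE = Z[];
A = [[0, 1, 2], [1, 3, 3], [2, 4, 3]];
B = \BruhatDecomposition(A);
\print(B);
\end{verbatim}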
Other functions:
LSUWMdet — the result is a vector of six matrices $[L, S, U, W, M, [[det]]]$, where $A = LSU$, pseudoInverse(A) $= (1/det^2)WSM$, and $det$ is the nonzero corner minor of maximal size.
pseudoInverse — pseudo-inverse of a matrix. Unlike the Moore-Penrose matrix, it satisfies only two of the four identities; however, it is faster to compute.
SVD — SVD decomposition of a matrix over the real numbers. The result is a vector of three matrices $[U, D, V^{T}]$, where $U$ and $V^{T}$ are orthogonal matrices and $D$ is a diagonal matrix.
QR — QR decomposition of a matrix over the real numbers. The result is a vector of two matrices $[Q, R]$, where $Q$ is an orthogonal matrix and $R$ is an upper triangular matrix.
sylvester(p1, p2, kind), kind = 0 or 1 — the Sylvester matrix constructed from the coefficients of the polynomials $p1$ and $p2$. The ring $Z[x, y, z, u]$ is considered as the ring $Z[u][x, y, z]$ (a ring in one variable $u$ with coefficients in $Z[x, y, z]$). If kind = 0, then the size of the matrix is $(n1 + n2)$; if kind = 1, then the size of the matrix is $2\max(n1, n2)$, where $n1$ and $n2$ are the degrees of $p1$ and $p2$.
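For two univariate polynomials of degrees 2 and 1, a call might look as follows (the session form is an assumption; with kind = 0 the matrix has size $3 = 2 + 1$):
\begin{verbatim}
SPACE = Z[x];
p1 = x^2 - 1;
p2 = x + 2;
S = \sylvester(p1, p2, 0);
\print(S);
\end{verbatim}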
\section{Linear Programming}
Let the objective function $\sum_{j = 1}^n c_j x_j$ and the conditions $$\sum_{j = 1}^n a_{ij}x_j\leqslant b_i,\text{ where }i = 1,2,\ldots,m,$$ $$x_j\geqslant 0,\text{ where }j = 1,2,\ldots,n,$$ be given.
We define the $m\times n$ matrix $A = (a_{ij})$, the $m$-dimensional vector $b = (b_i)$, the $n$-dimensional vector $c = (c_j)$, and the $n$-dimensional vector $x = (x_j)$.
Then the objective function can be written as $c^Tx,$ and the conditions can be written as $$Ax \leqslant b,$$ $$ x \geqslant 0.$$
To solve linear programming problems, you can use one of the following two commands: SimplexMax or SimplexMin. The result is a vector.
Depending on the type of problem, you have the following options.
1. To solve the problem $$c^Tx \rightarrow \max$$ under the conditions $$Ax \leqslant b,$$ $$ x \geqslant 0,$$ we use the command SimplexMax(A, b, c).
If the objective function needs to be minimized, i.e. $$c^Tx \rightarrow \min,$$ then we use the command SimplexMin(A, b, c).
Example.
We need to maximize $$3x_1 + x_2 + 2x_3$$ under the conditions $$ x_1 + x_2 + 3x_3 \leqslant 30,\quad 2x_1 + 2x_2 + 5x_3 \leqslant 24,\quad 4x_1 + x_2 + 2x_3 \leqslant 36,\quad x_1, x_2, x_3 \geqslant 0. $$
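In matrix form, $A = [[1,1,3],[2,2,5],[4,1,2]]$, $b = [30,24,36]$, and $c = [3,1,2]$, so the call might look as follows (the session form is an assumption, by analogy with the SVD example):
\begin{verbatim}
SPACE = R64[];
A = [[1, 1, 3], [2, 2, 5], [4, 1, 2]];
b = [30, 24, 36];
c = [3, 1, 2];
x = \SimplexMax(A, b, c);
\print(x);
\end{verbatim}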
2. To solve the problem $$c^Tx \rightarrow \max$$ under the conditions $$A_1 x\leqslant b_1,$$ $$A_2 x= b_2,$$ $$ x \geqslant 0,$$ we use the command SimplexMax($A_1$, $A_2$, $b_1$, $b_2$, $c$).
If the objective function needs to be minimized, i.e. $$c^Tx \rightarrow \min,$$ then we use the command SimplexMin($A_1$, $A_2$, $b_1$, $b_2$, $c$).
Example.
We need to maximize $$7x_1 + x_3 - 4x_4$$
under the conditions $$ x_1 - x_2 + 2x_3 - x_4 \leqslant 6,\quad 2x_1 + x_2 - x_3 = -1,\quad x_1, x_2, x_3, x_4 \geqslant 0. $$
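Here the inequality block is $A_1 = [[1,-1,2,-1]]$ with $b_1 = [6]$, the equality block is $A_2 = [[2,1,-1,0]]$ (the coefficient of $x_4$ is zero) with $b_2 = [-1]$, and $c = [7,0,1,-4]$; a sketched call (the session form is an assumption):
\begin{verbatim}
SPACE = R64[];
A1 = [[1, -1, 2, -1]];
A2 = [[2, 1, -1, 0]];
b1 = [6];
b2 = [-1];
c = [7, 0, 1, -4];
x = \SimplexMax(A1, A2, b1, b2, c);
\print(x);
\end{verbatim}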
3. To solve the problem $$c^Tx \rightarrow \max$$ under the conditions $$A_1 x\leqslant b_1,$$ $$A_2 x= b_2,$$ $$A_3 x\geqslant b_3,$$ we use the command SimplexMax($A_1$, $A_2$, $A_3$, $b_1$, $b_2$, $b_3$, $c$).
If the objective function needs to be minimized, i.e. $$c^Tx \rightarrow \min,$$ then we use the command SimplexMin($A_1$, $A_2$, $A_3$, $b_1$, $b_2$, $b_3$, $c$).
Example.
We need to maximize $$x_1 + x_2$$ under the conditions $$ 4x_1 - x_2 \leqslant 8,\quad 2x_1 + x_2 \leqslant 10,\quad -5x_1 + 2x_2 \geqslant -2,\quad x_1, x_2 \geqslant 0. $$
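Since this example has no equality constraints, one convenient encoding uses the mixed-signs form described in item 4 below (the session form is an assumption):
\begin{verbatim}
SPACE = R64[];
A = [[4, -1], [2, 1], [-5, 2]];
signs = [-1, -1, 1];
b = [8, 10, -2];
c = [1, 1];
x = \SimplexMax(A, signs, b, c);
\print(x);
\end{verbatim}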
4. To solve the problem $$c^Tx \rightarrow \max$$ under mixed conditions defined by the matrix $A$ and the vector $b$, you can use the command SimplexMax(A, signs, b, c), where the array of integers $signs$ determines the comparison signs:
-1 means "less than or equal to",
0 means "equal to",
1 means "greater than or equal to".
The array $signs$ must contain the same number of elements as the vector $b$. If the objective function needs to be minimized, i.e. $$c^Tx \rightarrow \min,$$ then we use the command SimplexMin(A, signs, b, c).
Example.
We need to minimize $$-2x_1-4x_2-2x_3$$ under the conditions
$$ -2x_1 + x_2 + x_3 \leqslant 4,\quad - x_1 + x_2 + 3x_3 \leqslant 6,\quad x_1 - 3x_2 + x_3 \leqslant 2,\quad x_1, x_2, x_3 \geqslant 0. $$
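All three constraints here are of the "less than or equal to" type, so signs = [-1, -1, -1]; a sketched call (the session form is an assumption, by analogy with the SVD example):
\begin{verbatim}
SPACE = R64[];
A = [[-2, 1, 1], [-1, 1, 3], [1, -3, 1]];
signs = [-1, -1, -1];
b = [4, 6, 2];
c = [-2, -4, -2];
x = \SimplexMin(A, signs, b, c);
\print(x);
\end{verbatim}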