Math::MatrixReal - Matrix of Reals
Implements the data type "matrix of real numbers" (and
consequently also "vector of real numbers").
my $a = Math::MatrixReal->new_random(5, 5);
my $b = $a->new_random(10, 30, { symmetric=>1, bounded_by=>[-1,1] });
my $c = $b * $a ** 3;
my $d = $b->new_from_rows( [ [5, 3, 4], [3, 4, 5], [2, 4, 1] ] );
print $a;
my $row = ($a * $b)->row(3);
my $col = (5*$c)->col(2);
my $transpose = ~$c;
my $transpose = $c->transpose;
my $inverse = $a->inverse;
my $inverse = 1/$a;
my $inverse = $a ** -1;
my $determinant = $a->det;
- $matrix->display_precision($integer)
Sets the default precision when matrices are printed or
stringified.
$matrix->display_precision(0) will
show only the integer part of all the entries of
$matrix, and
$matrix->display_precision() will
return to the default scientific display notation. This method does not
affect the precision of the calculations.
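Example (a minimal sketch; the matrix values are arbitrary):
my $matrix = Math::MatrixReal->new_diag( [ 1.5, 2.25, 3.125 ] );
$matrix->display_precision(0);   # show only the integer part of each entry
print $matrix;
$matrix->display_precision();    # restore the default scientific notation
print $matrix;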
- use Math::MatrixReal;
Makes the methods and overloaded operators of this module
available to your program.
- $new_matrix = new
Math::MatrixReal($rows,$columns);
The matrix object constructor method. A new matrix of size
$rows by $columns will
be created, with the value 0.0 for all
elements.
Note that this method is implicitly called by many of the
other methods in this module.
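Example (a minimal sketch; the dimensions are arbitrary):
my $matrix = Math::MatrixReal->new(2,3);   # a 2 by 3 matrix filled with 0.0
print $matrix;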
- $new_matrix =
$some_matrix->new($rows,$columns);
Another way of calling the matrix object constructor
method.
Matrix $some_matrix is not changed by
this in any way.
- $new_matrix =
$matrix->new_from_cols( [
$column_vector|$array_ref|$string, ... ] )
Creates a new matrix given a reference to an array of any of
the following:
- column vectors ( n by 1 Math::MatrixReal matrices )
- references to arrays
- strings properly formatted to create a column with Math::MatrixReal's
new_from_string command
You may mix and match these as you wish. However, all must be of
the same dimension--no padding happens automatically. Example:
my $matrix = Math::MatrixReal->new_from_cols( [ [1,2], [3,4] ] );
print $matrix;
will print
[ 1.000000000000E+00 3.000000000000E+00 ]
[ 2.000000000000E+00 4.000000000000E+00 ]
- $new_matrix = $matrix->new_from_rows( [ $row_vector|$array_ref|$string, ... ] )
Creates a new matrix given a reference to an array of any of
the following:
- row vectors ( 1 by n Math::MatrixReal matrices )
- references to arrays
- strings properly formatted to create a row with Math::MatrixReal's
new_from_string command
You may mix and match these as you wish. However, all must be of
the same dimension--no padding happens automatically. Example:
my $matrix = Math::MatrixReal->new_from_rows( [ [1,2], [3,4] ] );
print $matrix;
will print
[ 1.000000000000E+00 2.000000000000E+00 ]
[ 3.000000000000E+00 4.000000000000E+00 ]
- $new_matrix =
Math::MatrixReal->new_random($rows, $cols,
%options );
This method allows you to create a random matrix with various
properties controlled by the %options hash,
which is optional. The default values of the
%options hash are { integer => 0, symmetric
=> 0, tridiagonal => 0, diagonal => 0, bounded_by => [0,10]
}.
Example:
$matrix = Math::MatrixReal->new_random(4, { diagonal => 1, integer => 1 } );
print $matrix;
will print a 4x4 random diagonal matrix with integer entries
between zero and ten, something like
[ 5.000000000000E+00 0.000000000000E+00 0.000000000000E+00 0.000000000000E+00 ]
[ 0.000000000000E+00 2.000000000000E+00 0.000000000000E+00 0.000000000000E+00 ]
[ 0.000000000000E+00 0.000000000000E+00 1.000000000000E+00 0.000000000000E+00 ]
[ 0.000000000000E+00 0.000000000000E+00 0.000000000000E+00 8.000000000000E+00 ]
- $new_matrix = Math::MatrixReal->new_diag(
$array_ref );
This method allows you to create a diagonal matrix by only
specifying the diagonal elements. Example:
$matrix = Math::MatrixReal->new_diag( [ 1,2,3,4 ] );
print $matrix;
will print
[ 1.000000000000E+00 0.000000000000E+00 0.000000000000E+00 0.000000000000E+00 ]
[ 0.000000000000E+00 2.000000000000E+00 0.000000000000E+00 0.000000000000E+00 ]
[ 0.000000000000E+00 0.000000000000E+00 3.000000000000E+00 0.000000000000E+00 ]
[ 0.000000000000E+00 0.000000000000E+00 0.000000000000E+00 4.000000000000E+00 ]
- $new_matrix = Math::MatrixReal->new_tridiag(
$lower, $diag,
$upper );
This method allows you to create a tridiagonal matrix by only
specifying the lower diagonal, diagonal and upper diagonal,
respectively.
$matrix = Math::MatrixReal->new_tridiag( [ 6, 4, 2 ], [1,2,3,4], [1, 8, 9] );
print $matrix;
will print
[ 1.000000000000E+00 1.000000000000E+00 0.000000000000E+00 0.000000000000E+00 ]
[ 6.000000000000E+00 2.000000000000E+00 8.000000000000E+00 0.000000000000E+00 ]
[ 0.000000000000E+00 4.000000000000E+00 3.000000000000E+00 9.000000000000E+00 ]
[ 0.000000000000E+00 0.000000000000E+00 2.000000000000E+00 4.000000000000E+00 ]
- $new_matrix =
Math::MatrixReal->new_from_string($string);
This method allows you to read in a matrix from a string (for
instance, from the keyboard, from a file or from your code).
The syntax is simple: each row must start with "[ " and end with " ]\n"
("\n" being the newline character and " " a space or tab) and contain
one or more numbers, all separated from each other by spaces or tabs.
Additional spaces or tabs can be added at will, but no
comments.
Examples:
$string = "[ 1 2 3 ]\n[ 2 2 -1 ]\n[ 1 1 1 ]\n";
$matrix = Math::MatrixReal->new_from_string($string);
print "$matrix";
By the way, this prints
[ 1.000000000000E+00 2.000000000000E+00 3.000000000000E+00 ]
[ 2.000000000000E+00 2.000000000000E+00 -1.000000000000E+00 ]
[ 1.000000000000E+00 1.000000000000E+00 1.000000000000E+00 ]
But you can also do this in a much more comfortable way using
the shell-like "here-document" syntax:
$matrix = Math::MatrixReal->new_from_string(<<'MATRIX');
[ 1 0 0 0 0 0 1 ]
[ 0 1 0 0 0 0 0 ]
[ 0 0 1 0 0 0 0 ]
[ 0 0 0 1 0 0 0 ]
[ 0 0 0 0 1 0 0 ]
[ 0 0 0 0 0 1 0 ]
[ 1 0 0 0 0 0 -1 ]
MATRIX
You can even use variables in the matrix:
$c1 = 2 / 3;
$c2 = -2 / 5;
$c3 = 26 / 9;
$matrix = Math::MatrixReal->new_from_string(<<"MATRIX");
[ 3 2 0 ]
[ 0 3 2 ]
[ $c1 $c2 $c3 ]
MATRIX
(Remember that you may use spaces and tabs to format the
matrix to your taste)
Note that this method uses exactly the same representation for
a matrix as the "stringify" operator '""': this means
that you can convert any matrix into a string with
"$string = "$matrix";" and
read it back in later (for instance from a file!).
Note however that you may suffer a precision loss in this
process because only 13 digits are supported in the mantissa when
printed!!
If the string you supply (or someone else supplies) does not
obey the syntax mentioned above, an exception is raised, which can be
caught by "eval" as follows:
print "Please enter your matrix (in one line): ";
$string = <STDIN>;
$string =~ s/\\n/\n/g;
eval { $matrix = Math::MatrixReal->new_from_string($string); };
if ($@)
{
print "$@";
# ...
# (error handling)
}
else
{
# continue...
}
or as follows:
eval { $matrix = Math::MatrixReal->new_from_string(<<"MATRIX"); };
[ 3 2 0 ]
[ 0 3 2 ]
[ $c1 $c2 $c3 ]
MATRIX
if ($@)
# ...
Actually, the method shown above for reading a matrix from the
keyboard is a little awkward, since you have to enter a lot of
"\n"'s for the newlines.
A better way is shown in this piece of code:
while (1)
{
print "\nPlease enter your matrix ";
print "(multiple lines, <ctrl-D> = done):\n";
eval { $new_matrix =
Math::MatrixReal->new_from_string(join('',<STDIN>)); };
if ($@)
{
$@ =~ s/\s+at\b.*?$//;
print "${@}Please try again.\n";
}
else { last; }
}
Possible error messages of the
"new_from_string()" method are:
Math::MatrixReal::new_from_string(): syntax error in input string
Math::MatrixReal::new_from_string(): empty input string
If the input string has rows with varying numbers of columns,
the following warning will be printed to STDERR:
Math::MatrixReal::new_from_string(): missing elements will be set to zero!
If everything is okay, the method returns an object reference
to the (newly allocated) matrix containing the elements you
specified.
- $new_matrix =
$some_matrix->shadow();
Returns an object reference to a NEW but EMPTY
matrix (filled with zero's) of the SAME SIZE as matrix
"$some_matrix".
Matrix "$some_matrix" is not
changed by this in any way.
- $matrix1->copy($matrix2);
Copies the contents of matrix
"$matrix2" to an ALREADY
EXISTING matrix "$matrix1" (which
must have the same size as matrix
"$matrix2"!).
Matrix "$matrix2" is not
changed by this in any way.
- $twin_matrix =
$some_matrix->clone();
Returns an object reference to a NEW matrix of the
SAME SIZE as matrix
"$some_matrix". The contents of matrix
"$some_matrix" have ALREADY BEEN
COPIED to the new matrix
"$twin_matrix". This is the method
that the operator "=" is overloaded to when you type
"$a = $b", when
$a and $b are
matrices.
Matrix "$some_matrix" is not
changed by this in any way.
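The following sketch (with arbitrarily chosen values) contrasts
"shadow()", "clone()" and "copy()":
my $a = Math::MatrixReal->new_diag( [ 1, 2, 3 ] );
my $empty = $a->shadow();            # same size as $a, all elements zero
my $twin  = $a->clone();             # same size and same contents as $a
my $b = Math::MatrixReal->new(3,3);  # must already exist with the same size
$b->copy($a);                        # now $b holds the same values as $a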
- $matrix = Math::MatrixReal->reshape($rows,
$cols, $array_ref);
Returns a matrix with the specified dimensions
($rows x $cols) whose
elements are taken from the array reference
$array_ref. The elements of the matrix are
accessed in column-major order (like Fortran arrays are stored).
$matrix = Math::MatrixReal->reshape(4, 3, [1..12]);
Creates the following matrix:
[ 1 5 9 ]
[ 2 6 10 ]
[ 3 7 11 ]
[ 4 8 12 ]
- $value =
$matrix->element($row,$column);
Returns the value of a specific element of the matrix
"$matrix", located in row
"$row" and column
"$column".
NOTE: Unlike Perl arrays, matrices are indexed with one-based
indices. Thus, the first element of the matrix is located in the
first row, first column:
$elem = $matrix->element(1, 1); # first element of the matrix.
- $matrix->assign($row,$column,$value);
Explicitly assigns a value
"$value" to a single element of the
matrix "$matrix", located in row
"$row" and column
"$column", thereby replacing the value
previously stored there.
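Example (a minimal sketch; the values are arbitrary):
my $matrix = Math::MatrixReal->new_from_rows( [ [1,2], [3,4] ] );
my $value = $matrix->element(2,1);   # 3 (row 2, column 1, one-based)
$matrix->assign(2,1,7);              # replace that element with 7
print $matrix->element(2,1), "\n";   # now prints 7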
- $row_vector =
$matrix->row($row);
This is a projection method which returns an object reference
to a NEW matrix (which in fact is a (row) vector since it has
only one row) to which row number
"$row" of matrix
"$matrix" has already been copied.
Matrix "$matrix" is not
changed by this in any way.
- $column_vector =
$matrix->column($column);
This is a projection method which returns an object reference
to a NEW matrix (which in fact is a (column) vector since it has
only one column) to which column number
"$column" of matrix
"$matrix" has already been copied.
Matrix "$matrix" is not
changed by this in any way.
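Example (a minimal sketch; the values are arbitrary):
my $matrix = Math::MatrixReal->new_from_rows( [ [1,2,3], [4,5,6] ] );
my $row_vector    = $matrix->row(2);      # a 1 by 3 matrix: [ 4 5 6 ]
my $column_vector = $matrix->column(3);   # a 2 by 1 matrix holding 3 and 6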
- @all_elements =
$matrix->as_list;
Get the contents of a Math::MatrixReal object as a Perl
list.
Example:
my $matrix = Math::MatrixReal->new_from_rows([ [1, 2], [3, 4] ]);
my @list = $matrix->as_list; # 1, 2, 3, 4
This method is suitable for use with OpenGL. For example, suppose
you need to rotate a model 90 degrees clockwise around the X-axis.
That could be achieved via:
use Math::Trig;
use OpenGL;
...;
my $axis = [1, 0, 0];
my $angle = 90;
...
my ($x, $y, $z) = @$axis;
my $f = $angle;
my $cos_f = cos(deg2rad($f));
my $sin_f = sin(deg2rad($f));
my $rotation = Math::MatrixReal->new_from_rows([
[$cos_f+(1-$cos_f)*$x**2, (1-$cos_f)*$x*$y-$sin_f*$z, (1-$cos_f)*$x*$z+$sin_f*$y, 0 ],
[(1-$cos_f)*$x*$y+$sin_f*$z, $cos_f+(1-$cos_f)*$y**2 , (1-$cos_f)*$y*$z-$sin_f*$x, 0 ],
[(1-$cos_f)*$z*$x-$sin_f*$y, (1-$cos_f)*$z*$y+$sin_f*$x, $cos_f+(1-$cos_f)*$z**2 ,0 ],
[0, 0, 0, 1 ],
]);
...;
my $model_initial = Math::MatrixReal->new_diag( [1, 1, 1, 1] ); # identity matrix
my $model = $model_initial * $rotation;
$model = ~$model; # OpenGL operates on transposed matrices
my $model_oga = OpenGL::Array->new_list(GL_FLOAT, $model->as_list);
$shader->SetMatrix(model => $model_oga); # instance of OpenGL::Shader
See OpenGL, OpenGL::Shader, OpenGL::Array, rotation matrix
<https://en.wikipedia.org/wiki/Rotation_matrix>.
- $new_matrix =
$matrix->each( \&function );
Creates a new matrix by evaluating a code reference on each
element of the given matrix. The function is passed the element, the row
index and the column index, in that order. The value the function
returns ( or the value of the last executed statement ) is the value
given to the corresponding element in
$new_matrix.
Example:
# add 1 to every element in the matrix
$matrix = $matrix->each ( sub { (shift) + 1 } );
Example:
my $cofactor = $matrix->each( sub { my(undef,$i,$j) = @_;
($i+$j) % 2 == 0 ? $matrix->minor($i,$j)->det()
: -1*$matrix->minor($i,$j)->det();
} );
This code needs some explanation. For each element of
$matrix, it throws away the actual value and
stores the row and column indexes in $i and
$j. Then it sets element [$i,$j] in
$cofactor to the determinant of
"$matrix->minor($i,$j)" if it is an
"even" element, or
"-1*$matrix->minor($i,$j)" if it is
an "odd" element.
- $new_matrix =
$matrix->each_diag( \&function );
Creates a new matrix by evaluating a code reference on each
diagonal element of the given matrix. The function is passed the
element, the row index and the column index, in that order. The value
the function returns ( or the value of the last executed statement ) is
the value given to the corresponding element in
$new_matrix.
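Example (a minimal sketch; it assumes that elements off the main diagonal
are carried over according to the module's own rules):
my $matrix = Math::MatrixReal->new_from_rows( [ [1,2], [3,4] ] );
# square each diagonal element, i.e. elements (1,1) and (2,2)
my $new_matrix = $matrix->each_diag( sub { my($e,$i,$j) = @_; $e ** 2 } );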
- $matrix->swap_col(
$col1, $col2 );
This method takes two one-based column numbers and swaps the
elements of the two columns.
"$matrix->swap_col(2,3)" would
replace column 2 in $matrix with column 3, and
replace column 3 with column 2.
- $matrix->swap_row(
$row1, $row2 );
This method takes two one-based row numbers and swaps the
elements of the two rows.
"$matrix->swap_row(2,3)" would
replace row 2 in $matrix with row 3, and replace
row 3 with row 2.
- $matrix->assign_row(
$row_number ,
$new_row_vector );
This method takes a one-based row number and assigns
$new_row_vector to row $row_number of
$matrix, returning the resulting
matrix. "$matrix->assign_row(5,
$x)" would replace row 5 in $matrix
with the row vector $x.
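Example (a minimal sketch; the values are arbitrary):
my $matrix = Math::MatrixReal->new_from_rows( [ [1,2], [3,4], [5,6] ] );
$matrix->swap_row(1,3);                                 # rows 1 and 3 change places
$matrix->swap_col(1,2);                                 # columns 1 and 2 change places
my $x = Math::MatrixReal->new_from_rows( [ [9,9] ] );   # a 1 by 2 row vector
$matrix = $matrix->assign_row(2, $x);                   # row 2 becomes [ 9 9 ]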
- $matrix->maximum(); and
$matrix->minimum();
These two methods work analogously: one computes the maximum
element or elements of a matrix, the other the minimum. They behave
like the Octave/MATLAB max/min functions.
When computing the maximum or minimum from a vector (vertical
or horizontal), only one element is returned. When computing the maximum
or minimum from a matrix, the maximum/minimum element for each column is
returned in an array reference.
When called in list context, the function returns a pair,
where the first element is the maximum/minimum element (or elements) and
the second is the position of that value in the vector (first
occurrence), or the row where it occurs, for matrices.
Consider the matrix and vector below for the following
examples:
[ 1 9 4 ]
$A = [ 3 5 2 ] $B = [ 8 7 9 5 3 ]
[ 8 7 6 ]
When used in scalar context:
$max = $A->maximum(); # $max = [ 8, 9, 6 ]
$min = $B->minimum(); # $min = 3
When used in list context:
($min, $pos) = $A->minimum(); # $min = [ 1 5 2 ]
# $pos = [ 1 2 2 ]
($max, $pos) = $B->maximum(); # $max = 9
# $pos = 3
- "$det = $matrix->det();"
Returns the determinant of the matrix, without going through
the rigamarole of computing an LR decomposition. This method should be
much faster than LR decomposition if the matrix is diagonal or
triangular. Otherwise, it is just a wrapper for
"$matrix->decompose_LR->det_LR".
If the determinant is zero, there is no inverse and vice-versa. Only
quadratic matrices have determinants.
- "$inverse = $matrix->inverse();"
Returns the inverse of a matrix, without going through the
rigamarole of computing an LR decomposition. If no inverse exists, undef
is returned and an error is printed via
"carp()". This is nothing but a
wrapper for
"$matrix->decompose_LR->invert_LR".
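Example (a minimal sketch using a small diagonal matrix):
my $a = Math::MatrixReal->new_from_rows( [ [2,0], [0,4] ] );
my $det = $a->det;                           # 8 for this diagonal matrix
my $inverse = $a->inverse;                   # undef (plus a warning) if no inverse exists
print $a * $inverse if defined $inverse;     # should print the identity matrix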
- "($rows,$columns) = $matrix->dim();"
Returns a list of two items, representing the number of rows
and columns the given matrix "$matrix"
contains.
- "$norm_one = $matrix->norm_one();"
Returns the "one"-norm of the given matrix
"$matrix".
The "one"-norm is defined as follows:
For each column, the sum of the absolute values of the
elements in the different rows of that column is calculated. Finally,
the maximum of these sums is returned.
Note that the "one"-norm and the
"maximum"-norm are mathematically equivalent, although for the
same matrix they usually yield a different value.
Therefore, you should only compare values that have been
calculated using the same norm!
Throughout this package, the "one"-norm is
(arbitrarily) used for all comparisons, for the sake of uniformity and
comparability, except for the iterative methods
"solve_GSM()", "solve_SSM()" and
"solve_RM()" which use either norm depending on the
matrix itself.
- "$norm_max = $matrix->norm_max();"
Returns the "maximum"-norm of the given matrix
$matrix.
The "maximum"-norm is defined as follows:
For each row, the sum of the absolute values of the elements
in the different columns of that row is calculated. Finally, the maximum
of these sums is returned.
Note that the "maximum"-norm and the
"one"-norm are mathematically equivalent, although for the
same matrix they usually yield a different value.
Therefore, you should only compare values that have been
calculated using the same norm!
Throughout this package, the "one"-norm is
(arbitrarily) used for all comparisons, for the sake of uniformity and
comparability, except for the iterative methods
"solve_GSM()", "solve_SSM()" and
"solve_RM()" which use either norm depending on the
matrix itself.
- "$norm_sum = $matrix->norm_sum();"
This is a very simple norm which is defined as the sum of the
absolute values of every element.
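Example contrasting the three norms (a minimal sketch; the values are arbitrary):
my $matrix = Math::MatrixReal->new_from_rows( [ [ 1, -2 ],
                                                [ 3,  4 ] ] );
my $norm_one = $matrix->norm_one();   # largest column sum of absolute values: 6
my $norm_max = $matrix->norm_max();   # largest row sum of absolute values: 7
my $norm_sum = $matrix->norm_sum();   # sum of all absolute values: 10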
- $p_norm = $matrix->norm_p($n);
This function returns the "p-norm" of a vector. The
argument $n must be a number greater than or
equal to 1 or the string "Inf". The p-norm is defined as
(sum(|x_i|^p))^(1/p). In words, it raises the absolute value of each element
to the p-th power, adds them up, and then takes the p-th root of that number. If the string
"Inf" is passed, the "infinity-norm" is computed,
which is really the limit of the p-norm as p goes to infinity. It is
defined as the maximum element of the vector. Also, note that the
familiar Euclidean distance between two vectors is just a special case
of a p-norm, when p is equal to 2.
Example:
$a = Math::MatrixReal->new_from_cols([[1,2,3]]);
$p1 = $a->norm_p(1);
$p2 = $a->norm_p(2);
$p3 = $a->norm_p(3);
$pinf = $a->norm_p("Inf");
print "(1,2,3,Inf) norm:\n$p1\n$p2\n$p3\n$pinf\n";
$i1 = $a->new_from_rows([[1,0]]);
$i2 = $a->new_from_rows([[0,1]]);
# this should be sqrt(2) since it is the same as the
# hypotenuse of a 1 by 1 right triangle
$dist = ($i1-$i2)->norm_p(2);
print "Distance is $dist, which should be " . sqrt(2) . "\n";
Output:
(1,2,3,Inf) norm:
6
3.74165738677394139
3.30192724889462668
3
Distance is 1.41421356237309505, which should be 1.41421356237309505
- "$frob_norm = $matrix->norm_frobenius();"
This norm is similar to that of a p-norm where p is 2, except
it acts on a matrix, not a vector. Each element of the matrix is
squared, this is added up, and then a square root is taken.
- "$matrix->spectral_radius();"
Returns the maximum value of the absolute value of all
eigenvalues. Currently this computes all eigenvalues, then sifts
through them to find the largest in absolute value. Needless to say,
this is very inefficient, and in the future an algorithm that computes
only the largest eigenvalue may be implemented.
- "$matrix1->transpose($matrix2);"
Calculates the transposed matrix of matrix
$matrix2 and stores the result in matrix
"$matrix1" (which must already exist
and have the same size as matrix
"$matrix2"!).
This operation can also be carried out "in-place",
i.e., input and output matrix may be identical.
Transposition is a symmetry operation: imagine you rotate the
matrix along the axis of its main diagonal (going through elements
(1,1), (2,2), (3,3) and so on) by 180 degrees.
Another way of looking at it is to say that rows and columns
are swapped. In fact the contents of element
"(i,j)" are swapped with those of
element "(j,i)".
Note that (especially for vectors) it makes a big difference
if you have a row vector, like this:
[ -1 0 1 ]
or a column vector, like this:
[ -1 ]
[ 0 ]
[ 1 ]
the one vector being the transposed of the other!
This is especially true for the matrix product of two
vectors:
[ -1 ]
[ -1 0 1 ] * [ 0 ] = [ 2 ] , whereas
[ 1 ]
* [ -1 0 1 ]
[ -1 ] [ 1 0 -1 ]
[ 0 ] * [ -1 0 1 ] = [ -1 ] [ 1 0 -1 ] = [ 0 0 0 ]
[ 1 ] [ 0 ] [ 0 0 0 ] [ -1 0 1 ]
[ 1 ] [ -1 0 1 ]
So be careful about what you really mean!
Hint: throughout this module, whenever a vector is explicitly
required for input, a COLUMN vector is expected!
- "$trace = $matrix->trace();"
This returns the trace of the matrix, which is defined as the
sum of the diagonal elements. The matrix must be quadratic.
- "$minor = $matrix->minor($row,$col);"
Returns the minor matrix corresponding to
$row and $col.
$matrix must be quadratic. If
$matrix is n rows by n cols, the minor of
$row and $col will be an
(n-1) by (n-1) matrix. The minor is defined as crossing out the row and
the col specified and returning the remaining rows and columns as a
matrix. This method is used by
"cofactor()".
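Example (a minimal sketch; the values are arbitrary):
my $matrix = Math::MatrixReal->new_from_rows( [ [1,2,3], [4,5,6], [7,8,9] ] );
my $trace = $matrix->trace();      # 1 + 5 + 9 = 15
my $minor = $matrix->minor(1,2);   # cross out row 1 and column 2
print $minor;                      # a 2 by 2 matrix holding 4, 6, 7, 9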
- "$cofactor = $matrix->cofactor();"
The cofactor matrix is constructed as follows:
For each element, cross out the row and column that it sits
in. Now, take the determinant of the matrix that is left in the other
rows and columns. Multiply the determinant by (-1)^(i+j), where i is the
row index, and j is the column index. Replace the given element with
this value.
The cofactor matrix can be used to find the inverse of the
matrix. One formula for the inverse of a matrix is the cofactor matrix
transposed divided by the original determinant of the matrix.
The following two inverses should be exactly the same:
my $inverse1 = $matrix->inverse;
my $inverse2 = ~($matrix->cofactor)->each( sub { (shift)/$matrix->det() } );
Caveat: Although the cofactor matrix provides a simple algorithm to
compute the inverse of a matrix, and can be used with pencil and paper
for small matrices, it is comically slower than the native
"inverse()" function. Here is a small
benchmark:
# $matrix1 is 15x15
$det = $matrix1->det;
timethese( 10,
{'inverse' => sub { $matrix1->inverse(); },
'cofactor' => sub { (~$matrix1->cofactor)->each ( sub { (shift)/$det; } ) }
} );
Benchmark: timing 10 iterations of LR, cofactor, inverse...
inverse: 1 wallclock secs ( 0.56 usr + 0.00 sys = 0.56 CPU) @ 17.86/s (n=10)
cofactor: 36 wallclock secs (36.62 usr + 0.01 sys = 36.63 CPU) @ 0.27/s (n=10)
- "$adjoint = $matrix->adjoint();"
The adjoint is just the transpose of the cofactor matrix. This
method is just an alias for "
~($matrix->cofactor)".
- "$part_of_matrix =
$matrix->submatrix(x1,y1,x2,Y2);"
Submatrix permit to select only part of existing matrix in
order to produce a new one. This method take four arguments to define a
selection area:
- - firstly: Coordinate of top left corner to select (x1,y1)
- - secondly: Coordinate of bottom right corner to select (x2,y2)
Example:
my $matrix = Math::MatrixReal->new_from_string(<<'MATRIX');
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 1 0 1 ]
[ 0 0 0 0 0 1 0 ]
[ 0 0 0 0 1 0 1 ]
MATRIX
my $submatrix = $matrix->submatrix(5,5,7,7);
$submatrix->display_precision(0);
print $submatrix;
Output:
[ 1 0 1 ]
[ 0 1 0 ]
[ 1 0 1 ]
- "$matrix1->add($matrix2,$matrix3);"
Calculates the sum of matrix
"$matrix2" and matrix
"$matrix3" and stores the result in
matrix "$matrix1" (which must already
exist and have the same size as matrix
"$matrix2" and matrix
"$matrix3"!).
This operation can also be carried out "in-place",
i.e., the output and one (or both) of the input matrices may be
identical.
- "$matrix1->subtract($matrix2,$matrix3);"
Calculates the difference of matrix
"$matrix2" minus matrix
"$matrix3" and stores the result in
matrix "$matrix1" (which must already
exist and have the same size as matrix
"$matrix2" and matrix
"$matrix3"!).
This operation can also be carried out "in-place",
i.e., the output and one (or both) of the input matrices may be
identical.
Note that this operation is the same as
"$matrix1->add($matrix2,-$matrix3);",
although the latter is a little less efficient.
- "$matrix1->multiply_scalar($matrix2,$scalar);"
Calculates the product of matrix
"$matrix2" and the number
"$scalar" (i.e., multiplies each
element of matrix "$matrix2" with the
factor "$scalar") and stores the
result in matrix "$matrix1" (which
must already exist and have the same size as matrix
"$matrix2"!).
This operation can also be carried out "in-place",
i.e., input and output matrix may be identical.
- "$product_matrix =
$matrix1->multiply($matrix2);"
Calculates the product of matrix
"$matrix1" and matrix
"$matrix2" and returns an object
reference to a new matrix
"$product_matrix" in which the result
of this operation has been stored.
Note that the dimensions of the two matrices
"$matrix1" and
"$matrix2" (i.e., their numbers of
rows and columns) must harmonize in the following way (example):
[ 2 2 ]
[ 2 2 ]
[ 2 2 ]
[ 1 1 1 ] [ * * ]
[ 1 1 1 ] [ * * ]
[ 1 1 1 ] [ * * ]
[ 1 1 1 ] [ * * ]
I.e., the number of columns of matrix
"$matrix1" has to be the same as the
number of rows of matrix
"$matrix2".
The number of rows and columns of the resulting matrix
"$product_matrix" is determined by the
number of rows of matrix "$matrix1"
and the number of columns of matrix
"$matrix2", respectively.
- "$matrix1->negate($matrix2);"
Calculates the negative of matrix
"$matrix2" (i.e., multiplies all
elements with "-1") and stores the result in matrix
"$matrix1" (which must already exist
and have the same size as matrix
"$matrix2"!).
This operation can also be carried out "in-place",
i.e., input and output matrix may be identical.
- "$matrix_to_power =
$matrix1->exponent($integer);"
Raises the matrix to the $integer
power. Obviously, $integer must be an integer.
If it is zero, the identity matrix is returned. If a negative integer is
given, the inverse will be computed (if it exists) and then raised to
the absolute value of $integer. The matrix must
be quadratic.
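Example (a minimal sketch using a small diagonal matrix):
my $matrix = Math::MatrixReal->new_diag( [ 2, 4 ] );
my $cubed    = $matrix ** 3;            # same as $matrix->exponent(3)
my $identity = $matrix->exponent(0);    # identity matrix of the same size
my $inverse  = $matrix ** -1;           # the inverse, if it exists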
- "$matrix->is_quadratic();"
Returns a boolean value indicating if the given matrix is
quadratic (also known as "square" or "n by n"). A
matrix is quadratic if it has the same number of rows as it does
columns.
- "$matrix->is_square();"
This is an alias for
"is_quadratic()".
- "$matrix->is_symmetric();"
Returns a boolean value indicating if the given matrix is
symmetric. By definition, a matrix is symmetric if and only if
(M[i,j]=M[j,i]). This is
equivalent to "($matrix == ~$matrix)"
but without memory allocation. Only quadratic matrices can be
symmetric.
Notes: A symmetric matrix always has real
eigenvalues/eigenvectors. A matrix plus its transpose is always
symmetric.
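Example illustrating the note above (a minimal sketch; the values are arbitrary):
my $a = Math::MatrixReal->new_from_rows( [ [1,2], [3,4] ] );
my $symmetric = $a + ~$a;      # a matrix plus its transpose is always symmetric
print "symmetric\n" if $symmetric->is_symmetric();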
- "$matrix->is_skew_symmetric();"
Returns a boolean value indicating if the given matrix is skew
symmetric. By definition, a matrix is skew symmetric if and only if
(M[i,j]=-M[j,i]). This is
equivalent to "($matrix ==
-(~$matrix))" but without memory allocation. Only quadratic
matrices can be skew symmetric.
- "$matrix->is_diagonal();"
Returns a boolean value indicating if the given matrix is
diagonal, i.e. all of the nonzero elements are on the main diagonal.
Only quadratic matrices can be diagonal.
- "$matrix->is_tridiagonal();"
Returns a boolean value indicating if the given matrix is
tridiagonal, i.e. all of the nonzero elements are on the main diagonal
or the diagonals above and below the main diagonal. Only quadratic
matrices can be tridiagonal.
- "$matrix->is_upper_triangular();"
Returns a boolean value indicating if the given matrix is
upper triangular, i.e. all of the nonzero elements not on the main
diagonal are above it. Only quadratic matrices can be upper triangular.
Note: diagonal matrices are both upper and lower triangular.
- "$matrix->is_lower_triangular();"
Returns a boolean value indicating if the given matrix is
lower triangular, i.e. all of the nonzero elements not on the main
diagonal are below it. Only quadratic matrices can be lower triangular.
Note: diagonal matrices are both upper and lower triangular.
- "$matrix->is_orthogonal();"
Returns a boolean value indicating if the given matrix is
orthogonal. An orthogonal matrix has the property that its transpose
equals the inverse of the matrix. Instead of computing each and
comparing them, this method multiplies the matrix by its transpose, and
returns true if this turns out to be the identity matrix, false
otherwise. Only quadratic matrices can be orthogonal.
- "$matrix->is_binary();"
Returns a boolean value indicating if the given matrix is
binary. A matrix is binary if it contains only zeroes or ones.
- "$matrix->is_gramian();"
Returns a boolean value indicating if the given matrix is
Gramian. A matrix $A is Gramian if and only if
there exists a square matrix $B such that
"$A = ~$B*$B". This is equivalent to
checking if $A is symmetric and has all
nonnegative eigenvalues, which is what Math::MatrixReal uses to check
for this property.
- "$matrix->is_LR();"
Returns a boolean value indicating if the matrix is an LR
decomposition matrix.
- "$matrix->is_positive();"
Returns a boolean value indicating if the matrix contains only
positive entries. Note that a zero entry is not positive and will cause
"is_positive()" to return false.
- "$matrix->is_negative();"
Returns a boolean value indicating if the matrix contains only
negative entries. Note that a zero entry is not negative and will cause
"is_negative()" to return false.
- "$matrix->is_periodic($k);"
Returns a boolean value indicating if the matrix is periodic
with period $k. This is true if
"$matrix ** ($k+1) == $matrix". When
"$k == 1", this reduces down to the
"is_idempotent()" function.
- "$matrix->is_idempotent();"
Returns a boolean value indicating if the matrix is
idempotent, which is defined as the square of the matrix being equal to
the original matrix, i.e "$matrix ** 2 ==
$matrix".
- "$matrix->is_row_vector();"
Returns a boolean value indicating if the matrix is a row
vector. A row vector is a matrix which is 1xn. Note that the 1x1 matrix
is both a row and column vector.
- "$matrix->is_col_vector();"
Returns a boolean value indicating if the matrix is a col
vector. A col vector is a matrix which is nx1. Note that the 1x1 matrix
is both a row and column vector.
- "($l, $V) =
$matrix->sym_diagonalize();"
This method performs the diagonalization of the quadratic
symmetric matrix M stored in
$matrix. On output, l is a column vector
containing all the eigenvalues of M and V is an orthogonal
matrix whose columns are the corresponding normalized eigenvectors. The
primary property of an eigenvalue l and an eigenvector x
is of course that: M * x = l * x.
The method uses a Householder reduction to tridiagonal form
followed by a QL algorithm with implicit shifts on this tridiagonal. (The
tridiagonal matrix is kept internally in a compact form in this routine
to save memory.) In fact, this routine wraps the householder()
and tri_diagonalize() methods described below when their
intermediate results are not desired. The overall algorithmic complexity
of this technique is O(N^3). According to several books, the coefficient
hidden by the 'O' is one of the best possible for general (symmetric)
matrices.
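Example (a minimal sketch using a small symmetric matrix; it only verifies
the defining property M * x = l * x for the first eigenpair):
my $M = Math::MatrixReal->new_from_rows( [ [ 2, 1 ],
                                           [ 1, 2 ] ] );
my ($l, $V) = $M->sym_diagonalize();
my $x      = $V->column(1);        # first eigenvector
my $lambda = $l->element(1,1);     # corresponding eigenvalue
print abs( $M * $x - $lambda * $x ), "\n";   # should be (close to) zero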
- "($T, $Q) = $matrix->householder();"
This method performs the Householder algorithm which reduces
the n by n real symmetric matrix M contained
in $matrix to tridiagonal form. On output,
T is a symmetric tridiagonal matrix (only diagonal and
off-diagonal elements are non-zero) and Q is an orthogonal
matrix performing the transformation between M and T
("$M == $Q * $T * ~$Q").
- "($l, $V) =
$T->tri_diagonalize([$Q]);"
This method diagonalizes the symmetric tridiagonal matrix
T. On output, $l and
$V are similar to the output values described
for sym_diagonalize().
The optional argument $Q corresponds
to an orthogonal transformation matrix Q that should be used
additionally during V (eigenvectors) computation. It should be
supplied if the desired eigenvectors correspond to a more general
symmetric matrix M previously reduced by the householder()
method, not a mere tridiagonal. If T is really a tridiagonal
matrix, Q can be omitted (it will be internally created in fact
as an identity matrix). The method uses a QL algorithm (with implicit
shifts).
- "$l = $matrix->sym_eigenvalues();"
This method computes the eigenvalues of the quadratic
symmetric matrix M stored in
$matrix. On output, l is a column vector
containing all the eigenvalues of M. Eigenvectors are not
computed (unlike
"sym_diagonalize()") and this method
is more efficient (even though it uses a similar algorithm with two
phases). However, understand that the algorithmic complexity of this
technique is still also O(N^3). But the coefficient hidden by the 'O' is
better by a factor of..., well, see your benchmark, it's wiser.
This routine wraps the householder_tridiagonal() and
tri_eigenvalues() methods described below when the intermediate
tridiagonal matrix is not needed.
- "$T =
$matrix->householder_tridiagonal();"
This method performs the Householder algorithm which reduces
the n by n real symmetric matrix M contained
in $matrix to tridiagonal form. On output,
T is the obtained symmetric tridiagonal matrix (only diagonal and
off-diagonal elements are non-zero). The operation is similar to the
householder() method, but potentially a little more efficient as
the transformation matrix is not computed.
- $l =
$T->tri_eigenvalues();
This method computes the eigenvalues of the symmetric
tridiagonal matrix T. On output, $l is a
vector containing the eigenvalues (similar to
"sym_eigenvalues()"). This method is
much more efficient than tri_diagonalize() when eigenvectors are
not needed.
- $matrix->zero();
Assigns a zero to every element of the matrix
"$matrix", i.e., erases all values
previously stored there, thereby effectively transforming the matrix
into a "zero"-matrix or "null"-matrix, the neutral
element of the addition operation in a Ring.
(For instance the (quadratic) matrices with "n" rows
and columns and matrix addition and multiplication form a Ring. Most
prominent characteristic of a Ring is that multiplication is not
commutative, i.e., in general, ""matrix1 *
matrix2"" is not the same as
""matrix2 * matrix1""!)
- $matrix->one();
Assigns one's to the elements on the main diagonal (elements
(1,1), (2,2), (3,3) and so on) of matrix
"$matrix" and zero's to all others,
thereby erasing all values previously stored there and transforming the
matrix into a "one"-matrix, the neutral element of the
multiplication operation in a Ring.
(If the matrix is quadratic (which this method doesn't
require, though), then multiplying this matrix with itself yields this
same matrix again, and multiplying it with some other matrix leaves that
other matrix unchanged!)
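Example (a minimal sketch; note that both methods modify the matrix in place):
my $matrix = Math::MatrixReal->new_random(3, 3);
$matrix->one();    # now the 3 by 3 identity ("one"-) matrix
$matrix->zero();   # now the 3 by 3 zero matrix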
- "$latex_string = $matrix->as_latex( align=>
"c", format => "%s", name => ""
);"
This function returns the matrix as a LaTeX string. It takes a
hash as an argument which is used to control the style of the output.
The hash element "align" may be
"c","l" or "r", corresponding to center,
left and right, respectively. The
"format" element is a format string
that is given to "sprintf" to control
the style of number format, such as floating point or scientific
notation. The "name" element can be
used so that a LaTeX string of "$name = " is prepended to the
string.
Example:
my $a = Math::MatrixReal->new_from_cols([[ 1.234, 5.678, 9.1011],[1,2,3]] );
print $a->as_latex( ( format => "%.2f", align => "l",name => "A" ) );
Output:
$A = $ $
\left( \begin{array}{ll}
1.23&1.00 \\
5.68&2.00 \\
9.10&3.00
\end{array} \right)
$
- "$yacas_string = $matrix->as_yacas( format =>
"%s", name => "", semi => 0 );"
This function returns the matrix as a string that can be read
by Yacas. It takes a hash as an argument which controls the style of
the output. The "format" element is a
format string that is given to
"sprintf" to control the style of
number format, such as floating point or scientific notation. The
"name" element can be used so that
"$name = " is prepended to the string. The "semi"
element can be set to 1 so that a semicolon is appended (so Yacas does
not print out the matrix).
Example:
$a = Math::MatrixReal->new_from_cols([[ 1.234, 5.678, 9.1011],[1,2,3]] );
print $a->as_yacas( ( format => "%.2f", align => "l",name => "A" ) );
Output:
A := {{1.23,1.00},{5.68,2.00},{9.10,3.00}}
- "$matlab_string = $matrix->as_matlab( format
=> "%s", name => "", semi => 0
);"
This function returns the matrix as a string that can be read
by Matlab. It takes a hash as an argument which controls the style of
the output. The "format" element is a
format string that is given to
"sprintf" to control the style of
number format, such as floating point or scientific notation. The
"name" element can be used so that
"$name = " is prepended to the string. The "semi"
element can be set to 1 so that a semicolon is appended (so Matlab does
not print out the matrix).
Example:
my $a = Math::MatrixReal->new_from_rows([[ 1.234, 5.678, 9.1011],[1,2,3]] );
print $a->as_matlab( ( format => "%.3f", name => "A",semi => 1 ) );
Output:
A = [ 1.234 5.678 9.101;
1.000 2.000 3.000];
- "$scilab_string = $matrix->as_scilab( format
=> "%s", name => "", semi => 0
);"
This function is just an alias for
"as_matlab()", since both Scilab and
Matlab have the same matrix format.
- "$minimum =
Math::MatrixReal::min($number1,$number2);"
"$minimum =
Math::MatrixReal::min($matrix);"
"<$minimum = $matrix-"min;>>
Returns the minimum of the two numbers
""number1"" and
""number2"" if called with
two arguments, or returns the value of the smallest element of a matrix
if called with one argument or as an object method.
- "$maximum =
Math::MatrixReal::max($number1,$number2);"
"$maximum =
Math::MatrixReal::max($number1,$number2);"
"$maximum =
Math::MatrixReal::max($matrix);"
"<$maximum = $matrix-"max;>>
Returns the maximum of the two numbers
""number1"" and
""number2"" if called with
two arguments, or returns the value of the largest element of a matrix
if called with one arguemnt or as on object method.
- "$minimal_cost_matrix =
$cost_matrix->kleene();"
Copies the matrix
"$cost_matrix" (which has to be
quadratic!) to a new matrix of the same size (i.e., "clones"
the input matrix) and applies Kleene's algorithm to it.
See Math::Kleene(3) for more details about this
algorithm!
The method returns an object reference to the new matrix.
Matrix "$cost_matrix" is not
changed by this method in any way.
- "($norm_matrix,$norm_vector) =
$matrix->normalize($vector);"
This method is used to improve the numerical stability when
solving linear equation systems.
Suppose you have a matrix "A" and a vector
"b" and you want to find out a vector "x" so that
"A * x = b", i.e., the vector
"x" which solves the equation system represented by the matrix
"A" and the vector "b".
Applying this method to the pair (A,b) yields a pair (A',b')
where each row has been divided by (the absolute value of) the greatest
coefficient appearing in that row. So this coefficient becomes equal to
"1" (or "-1") in the new pair (A',b') (all others
become smaller than one and greater than minus one).
Note that this operation does not change the equation system
itself because the same division is carried out on either side of the
equation sign!
The method requires a quadratic (!) matrix
"$matrix" and a vector
"$vector" for input (the vector must
be a column vector with the same number of rows as the input matrix) and
returns a list of two items which are object references to a new matrix
and a new vector, in this order.
The output matrix and vector are clones of the input matrix
and vector to which the operation explained above has been applied.
The input matrix and vector are not changed by this in any
way.
Example of how this method can affect the result of the
methods to solve equation systems (explained immediately below following
this method):
Consider the following little program:
#!perl -w
use Math::MatrixReal qw(new_from_string);
$A = Math::MatrixReal->new_from_string(<<"MATRIX");
[ 1 2 3 ]
[ 5 7 11 ]
[ 23 19 13 ]
MATRIX
$b = Math::MatrixReal->new_from_string(<<"MATRIX");
[ 0 ]
[ 1 ]
[ 29 ]
MATRIX
$LR = $A->decompose_LR();
if (($dim,$x,$B) = $LR->solve_LR($b))
{
$test = $A * $x;
print "x = \n$x";
print "A * x = \n$test";
}
($A_,$b_) = $A->normalize($b);
$LR = $A_->decompose_LR();
if (($dim,$x,$B) = $LR->solve_LR($b_))
{
$test = $A * $x;
print "x = \n$x";
print "A * x = \n$test";
}
This will print:
x =
[ 1.000000000000E+00 ]
[ 1.000000000000E+00 ]
[ -1.000000000000E+00 ]
A * x =
[ 4.440892098501E-16 ]
[ 1.000000000000E+00 ]
[ 2.900000000000E+01 ]
x =
[ 1.000000000000E+00 ]
[ 1.000000000000E+00 ]
[ -1.000000000000E+00 ]
A * x =
[ 0.000000000000E+00 ]
[ 1.000000000000E+00 ]
[ 2.900000000000E+01 ]
You can see that in the second example (where
"normalize()" has been used), the result is
"better", i.e., more accurate!
- "$LR_matrix =
$matrix->decompose_LR();"
This method is needed to solve linear equation systems.
Suppose you have a matrix "A" and a vector
"b" and you want to find out a vector "x" so that
"A * x = b", i.e., the vector
"x" which solves the equation system represented by the matrix
"A" and the vector "b".
You might also have a matrix "A" and a whole bunch
of different vectors "b1".."bk" for which you need
to find vectors "x1".."xk" so that
"A * xi = bi", for
"i=1..k".
Using Gaussian transformations (multiplying a row or column
with a factor, swapping two rows or two columns and adding a multiple of
one row or column to another), it is possible to decompose any matrix
"A" into two triangular matrices, called "L" and
"R" (for "Left" and "Right").
"L" has one's on the main diagonal (the elements
(1,1), (2,2), (3,3) and so on), non-zero values to the left and below of
the main diagonal and all zero's in the upper right half of the
matrix.
"R" has non-zero values on the main diagonal as well
as to the right and above of the main diagonal and all zero's in the
lower left half of the matrix, as follows:
[ 1 0 0 0 0 ] [ x x x x x ]
[ x 1 0 0 0 ] [ 0 x x x x ]
L = [ x x 1 0 0 ] R = [ 0 0 x x x ]
[ x x x 1 0 ] [ 0 0 0 x x ]
[ x x x x 1 ] [ 0 0 0 0 x ]
Note that ""L *
R"" is equivalent to matrix "A" in the sense
that "L * R * x = b <==> A * x =
b" for all vectors "x", leaving out of account
permutations of the rows and columns (these are taken care of
"magically" by this module!) and numerical errors.
Trick:
Because we know that "L" has one's on its main
diagonal, we can store both matrices together in the same array without
information loss! I.e.,
[ R R R R R ]
[ L R R R R ]
LR = [ L L R R R ]
[ L L L R R ]
[ L L L L R ]
Beware, though, that "LR" and
""L * R"" are not the
same!!!
Note also that for the same reason, you cannot apply the
method "normalize()" to an "LR" decomposition
matrix. Trying to do so will yield meaningless rubbish!
(You need to apply "normalize()" to each pair
(Ai,bi) BEFORE decomposing the matrix "Ai'"!)
Now what does all this help us in solving linear equation
systems?
It helps us because a triangular matrix is the next best thing
that can happen to us besides a diagonal matrix (a matrix that has
non-zero values only on its main diagonal - in which case the solution
is trivial, simply divide
""b[i]"" by
""A[i,i]"" to get
""x[i]""!).
To find the solution to our problem
""A * x = b"", we divide
this problem in parts: instead of solving "A * x =
b" directly, we first decompose "A" into
"L" and "R" and then solve
""L * y = b"" and finally
""R * x = y"" (motto: divide
and rule!).
From the illustration above it is clear that solving
""L * y = b"" and
""R * x = y"" is
straightforward: we immediately know that "y[1] =
b[1]". We then deduce swiftly that
y[2] = b[2] - L[2,1] * y[1]
(and we know
""y[1]"" by now!), that
y[3] = b[3] - L[3,1] * y[1] - L[3,2] * y[2]
and so on.
Having effortlessly calculated the vector "y", we
now proceed to calculate the vector "x" in a similar fashion:
we see immediately that "x[n] = y[n] /
R[n,n]". It follows that
x[n-1] = ( y[n-1] - R[n-1,n] * x[n] ) / R[n-1,n-1]
and
x[n-2] = ( y[n-2] - R[n-2,n-1] * x[n-1] - R[n-2,n] * x[n] )
/ R[n-2,n-2]
and so on.
You can see that - especially when you have many vectors
"b1".."bk" for which you are searching solutions to
"A * xi = bi" - this scheme is much
more efficient than a straightforward, "brute force"
approach.
This method requires a quadratic matrix as its input
matrix.
If you don't have that many equations, fill up with zero's
(i.e., do nothing to fill the superfluous rows if it's a
"fresh" matrix, i.e., a matrix that has been created with
"new()" or "shadow()").
The method returns an object reference to a new matrix
containing the matrices "L" and "R".
The input matrix is not changed by this method in any way.
Note that you can "copy()" or
"clone()" the result of this method without losing its
"magical" properties (for instance concerning the hidden
permutations of its rows and columns).
However, as soon as you are applying any method that alters
the contents of the matrix, its "magical" properties are
stripped off, and the matrix immediately reverts to an
"ordinary" matrix (with the values it just happens to contain
at that moment, be they meaningful as an ordinary matrix or not!).
- "($dimension,$x_vector,$base_matrix) =
$LR_matrix""->""solve_LR($b_vector);"
Use this method to actually solve an equation system.
Matrix "$LR_matrix" must be
a (quadratic) matrix returned by the method
"decompose_LR()", the LR decomposition matrix of the
matrix "A" of your equation system "A *
x = b".
The input vector "$b_vector"
is the vector "b" in your equation system
"A * x = b", which must be a column
vector and have the same number of rows as the input matrix
"$LR_matrix".
The method returns a list of three items if a solution exists
or an empty list otherwise (!).
Therefore, you should always use this method like this:
if ( ($dim,$x_vec,$base) = $LR->solve_LR($b_vec) )
{
# do something with the solution...
}
else
{
# do something with the fact that there is no solution...
}
The three items returned are: the dimension
"$dimension" of the solution space
(which is zero if only one solution exists, one if the solution is a
straight line, two if the solution is a plane, and so on), the solution
vector "$x_vector" (which is the
vector "x" of your equation system "A *
x = b") and a matrix
"$base_matrix" representing a base of
the solution space (a set of vectors which put up the solution space
like the spokes of an umbrella).
Only the first "$dimension"
columns of this base matrix actually contain entries, the remaining
columns are all zero.
Now what is all this stuff with that "base" good
for?
The output vector "x" is ALWAYS a solution of
your equation system "A * x = b".
But also any vector
"$vector"
$vector = $x_vector->clone();
$machine_infinity = 1E+99; # or something like that
for ( $i = 1; $i <= $dimension; $i++ )
{
$vector += rand($machine_infinity) * $base_matrix->column($i);
}
is a solution to your problem "A * x =
b", i.e., if "$A_matrix"
contains your matrix "A", then
print abs( $A_matrix * $vector - $b_vector ), "\n";
should print a number around 1E-16 or so!
By the way, note that you can actually calculate those vectors
"$vector" a little more efficient as
follows:
$rand_vector = $x_vector->shadow();
$machine_infinity = 1E+99; # or something like that
for ( $i = 1; $i <= $dimension; $i++ )
{
$rand_vector->assign($i,1, rand($machine_infinity) );
}
$vector = $x_vector + ( $base_matrix * $rand_vector );
Note that the input matrix and vector are not changed by this
method in any way.
- "$inverse_matrix =
$LR_matrix->invert_LR();"
Use this method to calculate the inverse of a given matrix
"$LR_matrix", which must be a
(quadratic) matrix returned by the method
"decompose_LR()".
The method returns an object reference to a new matrix of the
same size as the input matrix containing the inverse of the matrix that
you initially fed into "decompose_LR()" IF THE
INVERSE EXISTS, or an empty list otherwise.
Therefore, you should always use this method in the following
way:
if ( $inverse_matrix = $LR->invert_LR() )
{
# do something with the inverse matrix...
}
else
{
# do something with the fact that there is no inverse matrix...
}
Note that by definition (disregarding numerical errors), the
product of the initial matrix and its inverse (or vice-versa) is always
a matrix containing one's on the main diagonal (elements (1,1), (2,2),
(3,3) and so on) and zero's elsewhere.
The input matrix is not changed by this method in any way.
- "$condition =
$matrix->condition($inverse_matrix);"
In fact this method is just a shortcut for
abs($matrix) * abs($inverse_matrix)
Both input matrices must be quadratic and have the same size,
and the result is meaningful only if one of them is the inverse of the
other (for instance, as returned by the method
"invert_LR()").
The number returned is a measure of the "condition"
of the given matrix "$matrix", i.e., a
measure of the numerical stability of the matrix.
This number is always positive, and the smaller its value, the
better the condition of the matrix (the better the stability of all
subsequent computations carried out using this matrix).
Numerical stability means for example that if
abs( $vec_correct - $vec_with_error ) < $epsilon
holds, there must be a
"$delta" which doesn't depend on the
vector "$vec_correct" (nor
"$vec_with_error", by the way) so
that
abs( $matrix * $vec_correct - $matrix * $vec_with_error ) < $delta
also holds.
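Example (a minimal sketch; the matrix values are arbitrary):
my $matrix = Math::MatrixReal->new_from_rows( [ [ 4, 2 ],
                                                [ 1, 3 ] ] );
my $LR = $matrix->decompose_LR();
if ( my $inverse = $LR->invert_LR() )
{
    my $condition = $matrix->condition($inverse);   # abs($matrix) * abs($inverse)
    print "condition = $condition\n";
}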
- "$determinant =
$LR_matrix->det_LR();"
Calculates the determinant of a matrix, whose LR decomposition
matrix "$LR_matrix" must be given
(which must be a (quadratic) matrix returned by the method
"decompose_LR()").
In fact the determinant is a by-product of the LR
decomposition: It is (in principle, that is, except for the sign) simply
the product of the elements on the main diagonal (elements (1,1), (2,2),
(3,3) and so on) of the LR decomposition matrix.
(The sign is taken care of "magically" by this
module)
- "$order = $LR_matrix->order_LR();"
Calculates the order (called "Rang" in German) of a
matrix, whose LR decomposition matrix
"$LR_matrix" must be given (which must
be a (quadratic) matrix returned by the method
"decompose_LR()").
This number is a measure of the number of linearly independent
row and column vectors (= number of linearly independent equations in the
case of a matrix representing an equation system) of the matrix that was
initially fed into "decompose_LR()".
If "n" is the number of rows and columns of the
(quadratic!) matrix, then "n - order" is the dimension of the
solution space of the associated equation system.
- "$rank = $LR_matrix->rank_LR();"
This is an alias for the
"order_LR()" function. The
"order" is usually called the "rank" in the United
States.
- "$scalar_product =
$vector1->scalar_product($vector2);"
Returns the scalar product of vector
"$vector1" and vector
"$vector2".
Both vectors must be column vectors (i.e., a matrix having
several rows but only one column).
This is a (more efficient!) shortcut for
$temp = ~$vector1 * $vector2;
$scalar_product = $temp->element(1,1);
or the sum "i=1..n" of the
products "vector1[i] *
vector2[i]".
Provided neither of the two input vectors is the null vector,
the two vectors are orthogonal, i.e., have an angle of 90 degrees
between them, exactly when their scalar product is zero, and
vice-versa.
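Example (a minimal sketch; the values are arbitrary):
my $v1 = Math::MatrixReal->new_from_cols( [ [ 1, 0, 2 ] ] );
my $v2 = Math::MatrixReal->new_from_cols( [ [ 3, 4, 5 ] ] );
my $scalar_product = $v1->scalar_product($v2);   # 1*3 + 0*4 + 2*5 = 13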
- "$vector_product =
$vector1->vector_product($vector2);"
Returns the vector product of vector
"$vector1" and vector
"$vector2".
Both vectors must be column vectors (i.e., a matrix having
several rows but only one column).
Currently, the vector product is only defined for 3 dimensions
(i.e., vectors with 3 rows); all other vectors trigger an error
message.
In 3 dimensions, the vector product of two vectors
"x" and "y" is defined as
| x[1] y[1] e[1] |
determinant | x[2] y[2] e[2] |
| x[3] y[3] e[3] |
where the ""x[i]""
and ""y[i]"" are the
components of the two vectors "x" and "y",
respectively, and the
""e[i]"" are unity vectors
(i.e., vectors with a length equal to one) with a one in row
"i" and zero's elsewhere (this means that you have numbers and
vectors as elements in this matrix!).
This determinant evaluates to the rather simple formula
z[1] = x[2] * y[3] - x[3] * y[2]
z[2] = x[3] * y[1] - x[1] * y[3]
z[3] = x[1] * y[2] - x[2] * y[1]
A characteristic property of the vector product is that the
resulting vector is orthogonal to both of the input vectors (if neither
of them is the null vector; otherwise this is trivial), i.e., the scalar
product of each of the input vectors with the resulting vector is always
zero.
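Example (a minimal sketch using two unit vectors):
my $x = Math::MatrixReal->new_from_cols( [ [ 1, 0, 0 ] ] );
my $y = Math::MatrixReal->new_from_cols( [ [ 0, 1, 0 ] ] );
my $z = $x->vector_product($y);        # the unit vector along the third axis
print $z->scalar_product($x), "\n";    # 0: orthogonal to the first input
print $z->scalar_product($y), "\n";    # 0: orthogonal to the second input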
- "$length = $vector->length();"
This is actually a shortcut for
$length = sqrt( $vector->scalar_product($vector) );
and returns the length of a given column or row vector
"$vector".
Note that the "length" calculated by this method is
in fact the "two"-norm (also know as the Euclidean norm) of a
vector "$vector"!
The general definition for norms of vectors is the
following:
sub vector_norm
{
croak "Usage: \$norm = \$vector->vector_norm(\$n);"
if (@_ != 2);
my($vector,$n) = @_;
my($rows,$cols) = ($vector->[1],$vector->[2]);
my($k,$comp,$sum);
croak "Math::MatrixReal::vector_norm(): vector is not a column vector"
unless ($cols == 1);
croak "Math::MatrixReal::vector_norm(): norm index must be > 0"
unless ($n > 0);
croak "Math::MatrixReal::vector_norm(): norm index must be integer"
unless ($n == int($n));
$sum = 0;
for ( $k = 0; $k < $rows; $k++ )
{
$comp = abs( $vector->[0][$k][0] );
$sum += $comp ** $n;
}
return( $sum ** (1 / $n) );
}
Note that the case "n = 1" is the
"one"-norm for matrices applied to a vector, the case "n
= 2" is the euclidian norm or length of a vector, and if
"n" goes to infinity, you have the "infinity"- or
"maximum"-norm for matrices applied to a vector!
- "$xn_vector =
$matrix->""solve_GSM($x0_vector,$b_vector,$epsilon);"
- "$xn_vector =
$matrix->""solve_SSM($x0_vector,$b_vector,$epsilon);"
- "$xn_vector =
$matrix->""solve_RM($x0_vector,$b_vector,$weight,$epsilon);"
In some cases it might not be practical or desirable to solve
an equation system ""A * x =
b"" using an analytical algorithm like the
"decompose_LR()" and "solve_LR()"
method pair.
In fact in some cases, due to the numerical properties (the
"condition") of the matrix "A", the numerical error
of the obtained result can be greater than by using an approximative
(iterative) algorithm like one of the three implemented here.
All three methods, GSM ("Global Step Method" or
"Gesamtschrittverfahren"), SSM ("Single Step Method"
or "Einzelschrittverfahren") and RM ("Relaxation
Method" or "Relaxationsverfahren"), are fix-point
iterations, that is, they can be described by an iteration function
"x(t+1) = Phi( x(t) )"
which has the property:
Phi(x) = x <==> A * x = b
We can define "Phi(x)" as
follows:
Phi(x) := ( En - A ) * x + b
where "En" is a matrix of the same size as
"A" ("n" rows and columns) with one's on its main
diagonal and zero's elsewhere.
This function has the required property.
Proof:
A * x = b
<==> -( A * x ) = -b
<==> -( A * x ) + x = -b + x
<==> -( A * x ) + x + b = x
<==> x - ( A * x ) + b = x
<==> ( En - A ) * x + b = x
This last step is true because
x[i] - ( a[i,1] x[1] + ... + a[i,i] x[i] + ... + a[i,n] x[n] ) + b[i]
is the same as
( -a[i,1] x[1] + ... + (1 - a[i,i]) x[i] + ... + -a[i,n] x[n] ) + b[i]
qed
Note that actually solving the equation system
""A * x = b"" means to
calculate
a[i,1] x[1] + ... + a[i,i] x[i] + ... + a[i,n] x[n] = b[i]
<==> a[i,i] x[i] =
b[i]
- ( a[i,1] x[1] + ... + a[i,i] x[i] + ... + a[i,n] x[n] )
+ a[i,i] x[i]
<==> x[i] =
( b[i]
- ( a[i,1] x[1] + ... + a[i,i] x[i] + ... + a[i,n] x[n] )
+ a[i,i] x[i]
) / a[i,i]
<==> x[i] =
( b[i] -
( a[i,1] x[1] + ... + a[i,i-1] x[i-1] +
a[i,i+1] x[i+1] + ... + a[i,n] x[n] )
) / a[i,i]
There is one major restriction, though: a fix-point iteration
is guaranteed to converge only if the first derivative of the iteration
function has an absolute value less than one in an area around the point
"x(*)" for which
""Phi( x(*) ) = x(*)"" is to
be true, and if the start vector
"x(0)" lies within that area!
This is best verified graphically, which unfortunately is
impossible to do in this textual documentation!
See literature on Numerical Analysis for details!
In our case, this restriction translates to the following
three conditions:
There must exist a norm so that the norm of the matrix of the
iteration function, "( En - A )", has
a value less than one, the matrix "A" may not have any zero
value on its main diagonal and the initial vector
"x(0)" must be "good
enough", i.e., "close enough" to the solution
"x(*)".
(Remember school math: the first derivative of a straight line
given by ""y = a * x + b""
is "a"!)
The three methods expect a (square!) matrix "$matrix" as their first argument, a start vector "$x0_vector", a vector "$b_vector" (which is the vector "b" in your equation system "A * x = b"), in the case of the "Relaxation Method" ("RM") a real number "$weight" (best chosen between zero and two), and finally an error limit (real number) "$epsilon".
(Note that the weight
"$weight" used by the "Relaxation
Method" ("RM") is NOT checked to lie within any
reasonable range!)
The three methods first test the first two of the three conditions listed above and return an empty list if these conditions are not fulfilled.
Therefore, you should always test their return value using
some code like:
if ( $xn_vector = $A_matrix->solve_GSM($x0_vector,$b_vector,1E-12) )
{
    # do something with the solution...
}
else
{
    # do something with the fact that there is no solution...
}
Otherwise, they iterate until "abs(
Phi(x) - x ) < epsilon".
(Beware that theoretically, infinite loops might result if the
starting vector is too far "off" the solution! In practice,
this shouldn't be a problem. Anyway, you can always press <ctrl-C>
if you think that the iteration takes too long!)
The difference between the three methods is the following:
In the "Global Step Method" ("GSM"), the new vector "x(t+1)" (called "y" here) is calculated from the vector "x(t)" (called "x" here) according to the formula:
y[i] =
( b[i]
- ( a[i,1] x[1] + ... + a[i,i-1] x[i-1] +
a[i,i+1] x[i+1] + ... + a[i,n] x[n] )
) / a[i,i]
In the "Single Step Method" ("SSM"), the
components of the vector
""x(t+1)"" which have
already been calculated are used to calculate the remaining components,
i.e.
y[i] =
( b[i]
- ( a[i,1] y[1] + ... + a[i,i-1] y[i-1] + # note the "y[]"!
a[i,i+1] x[i+1] + ... + a[i,n] x[n] ) # note the "x[]"!
) / a[i,i]
In the "Relaxation method" ("RM"), the
components of the vector
""x(t+1)"" are calculated by
"mixing" old and new value (like cold and hot water), and the
weight "$weight" determines the
"aperture" of both the "hot water tap" as well as of
the "cold water tap", according to the formula:
y[i] =
( b[i]
- ( a[i,1] y[1] + ... + a[i,i-1] y[i-1] + # note the "y[]"!
a[i,i+1] x[i+1] + ... + a[i,n] x[n] ) # note the "x[]"!
) / a[i,i]
y[i] = weight * y[i] + (1 - weight) * x[i]
Note that the weight
"$weight" should be greater than zero
and less than two (!).
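For illustration only, here is a minimal plain-Perl sketch of one such update step, operating on ordinary nested arrays rather than Math::MatrixReal objects (the helper name "rm_step" and the data layout are hypothetical); with a weight of one it reduces to the Single Step Method:
# One relaxation step: $A is a reference to an array of rows,
# $b and $x are array references, $weight is the relaxation weight.
sub rm_step {
    my ($A, $b, $x, $weight) = @_;
    my @y = @$x;                                            # start from the old vector
    for my $i ( 0 .. $#$b ) {
        my $s = 0;
        $s += $A->[$i][$_] * $y[$_]   for 0 .. $i - 1;      # components already updated
        $s += $A->[$i][$_] * $x->[$_] for $i + 1 .. $#$b;   # components still old
        my $new = ( $b->[$i] - $s ) / $A->[$i][$i];
        $y[$i] = $weight * $new + (1 - $weight) * $x->[$i]; # mix old and new value
    }
    return \@y;
}
# Example: one step on a 2x2 system with weight 1.0 (i.e., an SSM step):
# my $y = rm_step( [ [ 4, 1 ], [ 2, 5 ] ], [ 1, 2 ], [ 0, 0 ], 1.0 );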
The three methods generally differ in efficiency; experiment to find out which one works best for your particular system!
Remember that in most cases, it is probably advantageous to "normalize()" your equation system before solving it!
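Purely as a sketch (not part of the original documentation), the three solvers can be compared on a small test system using the calling conventions described above; the 2x2 matrix (kept close to the identity so that the norm condition can hold), the start vector and the weight 1.2 are arbitrary choices made for illustration:
use Math::MatrixReal;
my $A  = Math::MatrixReal->new_from_rows( [ [ 1, 0.25 ], [ 0.25, 1 ] ] );
my $b  = Math::MatrixReal->new_from_cols( [ [ 1, 2 ] ] );
my $x0 = Math::MatrixReal->new(2, 1);                      # start vector of zeros
foreach my $call ( [ 'solve_GSM', $x0, $b, 1E-12 ],
                   [ 'solve_SSM', $x0, $b, 1E-12 ],
                   [ 'solve_RM',  $x0, $b, 1.2, 1E-12 ] )
{
    my ($method, @args) = @$call;
    if ( my $xn = $A->$method(@args) )
    {
        print "$method found a solution:\n", $xn;
    }
    else
    {
        print "$method: convergence conditions not fulfilled\n";
    }
}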
- Unary operators:
"-", "~", "abs", "test", "!", '""'
- Binary operators:
"."
Binary (arithmetic) operators:
"+", "-", "*", "**",
"+=", "-=", "*=", "/=", "**="
- Binary (relational) operators:
"==", "!=", "<", "<=", ">", ">="
"eq", "ne", "lt", "le", "gt", "ge"
Note that the latter ("eq", "ne", ...) are just synonyms of the former ("==", "!=", ...), defined for convenience only.
- '.'
- Concatenation
Returns the two matrices concatenated side by side.
Example:
$c = $a . $b;
For example, if
$a=[ 1 2 ] $b=[ 5 6 ]
[ 3 4 ] [ 7 8 ]
then
$c=[ 1 2 5 6 ]
[ 3 4 7 8 ]
Note that only matrices with the same number of rows may be
concatenated.
- '-'
- Unary minus
Returns the negative of the given matrix, i.e., the matrix with all elements multiplied by the factor "-1".
Example:
$matrix = -$matrix;
- '~'
- Transposition
Returns the transpose of the given matrix.
Examples:
$temp = ~$vector * $vector;
$length = sqrt( $temp->element(1,1) );
if (~$matrix == $matrix) { # matrix is symmetric ... }
- abs
- Norm
Returns the "one"-Norm of the given matrix.
Example:
$error = abs( $A * $x - $b );
- test
- Boolean test
Tests whether there is at least one non-zero element in the matrix.
Example:
if ($xn_vector) { # result of iteration is not zero ... }
- '!'
- Negated boolean test
Tests whether the matrix contains only zeros.
Examples:
if (! $b_vector) { # homogeneous equation system ... }
else { # inhomogeneous equation system ... }
unless ($x_vector) { # $x_vector is the null-vector! }
- '""""'
- "Stringify" operator
Converts the given matrix into a string.
Uses scientific representation to keep precision loss to a
minimum in case you want to read this string back in again later with
"new_from_string()".
By default, a 13-digit mantissa and a 20-character field for each element are used so that lines will wrap nicely on an 80-column screen.
Examples:
$matrix = Math::MatrixReal->new_from_string(<<"MATRIX");
[ 1 0 ]
[ 0 -1 ]
MATRIX
print "$matrix";
[ 1.000000000000E+00 0.000000000000E+00 ]
[ 0.000000000000E+00 -1.000000000000E+00 ]
$string = "$matrix";
$test = Math::MatrixReal->new_from_string($string);
if ($test == $matrix) { print ":-)\n"; } else { print ":-(\n"; }
- '+'
- Addition
Returns the sum of the two given matrices.
Examples:
$matrix_S = $matrix_A + $matrix_B;
$matrix_A += $matrix_B;
- '-'
- Subtraction
Returns the difference of the two given matrices.
Examples:
$matrix_D = $matrix_A - $matrix_B;
$matrix_A -= $matrix_B;
Note that this is the same as:
$matrix_S = $matrix_A + -$matrix_B;
$matrix_A += -$matrix_B;
(The latter are less efficient, though)
- '*'
- Multiplication
Returns the matrix product of the two given matrices or the
product of the given matrix and scalar factor.
Examples:
$matrix_P = $matrix_A * $matrix_B;
$matrix_A *= $matrix_B;
$vector_b = $matrix_A * $vector_x;
$matrix_B = -1 * $matrix_A;
$matrix_B = $matrix_A * -1;
$matrix_A *= -1;
- '/'
- Division
$a / $b is currently a shortcut for $a * $b ** -1, which works for square matrices. One can also use 1/$a.
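Examples (purely illustrative, following the shortcut just described):
$matrix_Q = $matrix_A / $matrix_B;   # same as $matrix_A * $matrix_B ** -1
$inverse  = 1 / $matrix_A;           # same as $matrix_A ** -1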
- '**'
- Exponentiation
Returns the matrix raised to an integer power. If 0 is passed, the identity matrix is returned. If a negative integer is passed, it computes the inverse (if it exists) and then raises that inverse to the absolute value of the integer. The matrix must be square.
Examples:
$matrix2 = $matrix ** 2;
$matrix **= 2;
$inv2 = $matrix ** -2;
$ident = $matrix ** 0;
- '=='
- Equality
Tests two matrices for equality.
Example:
if ( $A * $x == $b ) { print "EUREKA!\n"; }
Note that in most cases, due to numerical errors (caused by the finite precision of computer arithmetic), it is a bad idea to compare two matrices or vectors this way.
It is better to take the norm of the difference of the two matrices you want to compare and compare that norm with a small number, like this:
if ( abs( $A * $x - $b ) < 1E-12 ) { print "BINGO!\n"; }
- '!='
- Inequality
Tests two matrices for inequality.
Example:
while ($x0_vector != $xn_vector) { # proceed with iteration ... }
(Stops when the iteration becomes stationary)
Note that, just like with the '==' operator, it is usually a bad idea to compare matrices or vectors this way. Compare the norm of the difference of the two matrices with a small number instead.
- '<'
- Less than
Examples:
if ( $matrix1 < $matrix2 ) { # ... }
if ( $vector < $epsilon ) { # ... }
if ( 1E-12 < $vector ) { # ... }
if ( $A * $x - $b < 1E-12 ) { # ... }
These are just shortcuts for saying:
if ( abs($matrix1) < abs($matrix2) ) { # ... }
if ( abs($vector) < abs($epsilon) ) { # ... }
if ( abs(1E-12) < abs($vector) ) { # ... }
if ( abs( $A * $x - $b ) < abs(1E-12) ) { # ... }
Uses the "one"-norm for matrices and Perl's built-in
"abs()" for scalars.
- '<='
- Less than or equal
As with the '<' operator, this is just a shortcut for the
same expression with "abs()" around all arguments.
Example:
if ( $A * $x - $b <= 1E-12 ) { # ... }
which in fact is the same as:
if ( abs( $A * $x - $b ) <= abs(1E-12) ) { # ... }
Uses the "one"-norm for matrices and Perl's built-in
"abs()" for scalars.
- '>'
- Greater than
As with the '<' and '<=' operators, this
if ( $xn - $x0 > 1E-12 ) { # ... }
is just a shortcut for:
if ( abs( $xn - $x0 ) > abs(1E-12) ) { # ... }
Uses the "one"-norm for matrices and Perl's built-in
"abs()" for scalars.
- '>='
- Greater than or equal
As with the '<', '<=' and '>' operators, the following
if ( $LR >= $A ) { # ... }
is simply a shortcut for:
if ( abs($LR) >= abs($A) ) { # ... }
Uses the "one"-norm for matrices and Perl's built-in
"abs()" for scalars.
Math::VectorReal, Math::PARI, Math::MatrixBool, Math::Vec, DFA::Kleene,
Math::Kleene, Set::IntegerRange, Set::IntegerFast.
This man page documents Math::MatrixReal version 2.13
The latest code can be found at
https://github.com/leto/math--matrixreal .
Steffen Beyer <sb@engelschall.com>, Rodolphe Ortalo
<ortalo@laas.fr>, Jonathan "Duke" Leto
<jonathan@leto.net>.
Currently maintained by Jonathan "Duke" Leto, send all
bugs/patches to Github Issues:
https://github.com/leto/math--matrixreal/issues
Many thanks to Prof. Pahlings for stoking the fire of my enthusiasm for Algebra
and Linear Algebra at the university (RWTH Aachen, Germany), and to Prof.
Esser and his assistant, Mr. Jarausch, for their fascinating lectures in
Numerical Analysis!
Copyright (c) 1996-2016 by various authors including the original developer
Steffen Beyer, Rodolphe Ortalo, the current maintainer Jonathan
"Duke" Leto and all the wonderful people in the AUTHORS file. All
rights reserved.
This package is free software; you can redistribute it and/or modify it under the same terms as Perl itself.