2 Extending Gauss Functionality
2.1 The need for extended functionality
GAP has a lot of functionality for row echelon forms of matrices. These can
be called by SemiEchelonForm and similar commands. All of these work for the
GAP matrix type over fields. However, these algorithms are not capable of
computing a reduced row echelon form (RREF) of a matrix; there is no way to
"Gauss upwards". While this is not necessary for things like Rank or Kernel
computations, it was one of a number of missing features important for the
development of the GAP package homalg by M. Barakat [Bar09].

Parallel to this development I worked on SCO [Gör08b], a package for
creating simplicial sets and computing the cohomology of orbifolds, based on
the paper "Simplicial Cohomology of Orbifolds" by I. Moerdijk and D. A.
Pronk [MP99]. Very early on it became clear that the cohomology matrices
(with entries in ℤ or finite quotients of ℤ) would grow exponentially in
size with the cohomology degree. At one point in time, for example, a
50651 × 1133693 matrix had to be handled.

It should be quite clear that there was a need for a sparse matrix data type
and corresponding Gaussian algorithms. After an unfruitful search for a
computer algebra system capable of this task, the Gauss package was born: it
provides not only the missing RREF algorithms, but also a new data type,
enabling GAP to handle sparse matrices of almost arbitrary size.
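
To give a flavour of what such a data type involves, here is an illustrative
Python sketch (not the actual representation used by Gauss, whose internal
format may differ): a sparse row stored as a sorted list of (column, value)
pairs, with a row addition that only touches nonzero entries.

```python
# One sparse row = sorted list of (column, value) pairs of nonzero entries.
# Illustrative sketch only; not the Gauss package's internal format.

def add_sparse_rows(r1, r2, p):
    """Add two sparse rows over GF(p), dropping entries that cancel."""
    acc = {col: val % p for col, val in r1}
    for col, val in r2:
        acc[col] = (acc.get(col, 0) + val) % p
    # keep only the entries that are still nonzero, sorted by column
    return sorted((c, v) for c, v in acc.items() if v)

# Example over GF(5): (x0 + 3*x2) + (4*x0 + 2*x1) = 2*x1 + 3*x2
row = add_sparse_rows([(0, 1), (2, 3)], [(0, 4), (1, 2)], 5)
```

The point of such a representation is that a zero entry costs no memory at
all, which is what makes matrices of the size mentioned above tractable.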

I am proud to tell you that, thanks to optimizing the algorithms for
matrices over GF(2), it was possible to compute the GF(2)-Rank of the matrix
mentioned above in less than 20 minutes with a memory usage of about 3 GB.
2.2 The applications of the Gauss package algorithms

Please refer to [ht09] to find out more about the homalg project and its
related packages. Most of the motivation for the algorithms in the Gauss
package can be found there. If you are interested in this project, you might
also want to check out my GaussForHomalg [Gör08a] package, which, just as
RingsForHomalg [BGKLH08] does for external rings, serves as the connection
between homalg and Gauss. By allowing homalg to delegate computational tasks
to Gauss, this small package extends homalg's capabilities to dense and
sparse matrices over fields and rings of the form ℤ / ⟨ p^n ⟩.

For those unfamiliar with the homalg project let me explain a couple of
points. As outlined in [BR08] by D. Robertz and M. Barakat, homological
computations can be reduced to three basic tasks:

- Computing a row basis of a module (BasisOfRowModule).

- Reducing a module with a basis (DecideZeroRows).

- Computing the relations between module elements
  (SyzygiesGeneratorsOfRows).

In addition to these tasks only relatively easy tools for matrix
manipulation are needed, ranging from addition and multiplication to finding
the zero rows in a matrix. However, to reduce the need for communication it
might be helpful to supply homalg with some more advanced procedures.

While the above tasks can be quite difficult when, for example, working in
noncommutative polynomial rings, in the Gauss case they can all be done as
long as you can compute a reduced row echelon form. This is clear for
BasisOfRowModule, as the rows of the RREF of the matrix are already a basis
of the module. EchelonMat (4.2-1) is used to compute RREFs, based on the GAP
internal method SemiEchelonMat for row echelon forms.
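
To make the difference concrete: an RREF eliminates above each pivot as well
as below. The following Python sketch of an RREF over GF(p) (illustrative
only; the Gauss package works on GAP's own matrix types) shows the extra
upward elimination step that a semi-echelon form omits.

```python
def rref_mod_p(mat, p):
    """Reduced row echelon form over GF(p), p prime (illustrative sketch)."""
    m = [[x % p for x in row] for row in mat]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols):
        # find a pivot entry in this column, at or below pivot_row
        pr = next((r for r in range(pivot_row, rows) if m[r][col]), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        inv = pow(m[pivot_row][col], -1, p)   # modular inverse (Python 3.8+)
        m[pivot_row] = [(x * inv) % p for x in m[pivot_row]]
        for r in range(rows):
            # r < pivot_row is the "Gaussing upwards" a semi-echelon form skips
            if r != pivot_row and m[r][col]:
                f = m[r][col]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return m
```

The nonzero rows of the result form a basis of the row space, which is why
an RREF already settles the BasisOfRowModule task.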

Let's look at the second point, the basic function DecideZeroRows: when you
face the task of reducing a module A with a given basis B, you can compute
the RREF of the following block matrix:

┌────┬───┐
│ Id │ A │
├────┼───┤
│ 0  │ B │
└────┴───┘

By computing the RREF (notice how important "Gaussing upwards" is here) A is
reduced with B. However, the left side of the matrix just serves the single
purpose of tricking the Gaussian algorithms into doing what we want.
Therefore, it was a logical step to implement ReduceMat (4.2-3), which does
the same thing but without the unnecessary columns.
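
The block-matrix trick can be sketched in Python (illustrative only, not the
Gauss code; a small RREF helper over GF(p) is included to keep the sketch
self-contained). The identity block guarantees that each row of A keeps a
pivot of its own, so after the RREF the right-hand parts of the top rows are
exactly A reduced modulo the row space of B.

```python
def rref_mod_p(mat, p):
    """Reduced row echelon form over GF(p), p prime (helper sketch)."""
    m = [[x % p for x in row] for row in mat]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols):
        pr = next((r for r in range(pivot_row, rows) if m[r][col]), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        inv = pow(m[pivot_row][col], -1, p)
        m[pivot_row] = [(x * inv) % p for x in m[pivot_row]]
        for r in range(rows):
            if r != pivot_row and m[r][col]:
                f = m[r][col]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return m

def decide_zero_rows(A, B, p):
    """Reduce the rows of A modulo the row space of B over GF(p),
    via the RREF of the block matrix [[Id, A], [0, B]]."""
    k = len(A)
    block = [[1 if j == i else 0 for j in range(k)] + list(A[i])
             for i in range(k)]
    block += [[0] * k + list(row) for row in B]
    r = rref_mod_p(block, p)
    # row i of A kept its pivot in column i, so its reduced form sits on
    # the right-hand side of row i
    return [r[i][k:] for i in range(k)]
```

Note how the B rows sit below the A rows: only the upward elimination of the
RREF lets them reduce A at all, which is the point made in the text.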

Note: when, much later, it became clear that it was important to compute the
transformation matrices of the reduction, ReduceMatTransformation (4.2-4)
was born, similar to EchelonMatTransformation (4.2-2). This corresponds to
the homalg procedure DecideZeroRowsEffectively.

The third procedure, SyzygiesGeneratorsOfRows, is concerned with the
relations between the rows of a matrix, each row representing a module
element. Over a field these relations are exactly the kernel of the matrix.
One can easily see that this can be achieved by taking the matrix

┌───┬────┐
│ A │ Id │
└───┴────┘

and computing its row echelon form. Then the row relations are generated by
the rows to the right of the zero rows of the REF. There are two problems
with this approach: the computation diagonalizes the kernel, which might not
be wanted, and, much worse, it does not work at all for rings with zero
divisors. For example, the 1 × 1 matrix [2 + 8ℤ] has a row relation [4 + 8ℤ]
which would not have been found by this method.

Approaching this problem led to the method EchelonMatTransformation (4.2-2),
which additionally computes the transformation matrix T, such that RREF = T
⋅ M. Similar to SemiEchelonMatTransformation, T is split up into the rows
needed to create the basis vectors of the RREF, and the relations that led
to zero rows. Focusing on computations over fields, it was an easy step to
write KernelMat (4.2-5), which terminates after the REF and returns the
kernel generators.
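
Over a field, the whole computation fits in a few lines of Python (an
illustrative sketch, not the package code): augment M with an identity block
on the right, allow pivots only in the columns of M, and collect the
right-hand parts of the rows whose left-hand part became zero.

```python
def kernel_mat(M, p):
    """Generators of the row relations (left kernel) of M over GF(p)."""
    rows, cols = len(M), len(M[0])
    # augmented matrix [M | Id]; the right block records the transformation T
    m = [[x % p for x in M[i]] + [1 if j == i else 0 for j in range(rows)]
         for i in range(rows)]
    pivot_row = 0
    for col in range(cols):            # pivots are allowed only in the M part
        pr = next((r for r in range(pivot_row, rows) if m[r][col]), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        inv = pow(m[pivot_row][col], -1, p)
        m[pivot_row] = [(x * inv) % p for x in m[pivot_row]]
        for r in range(rows):
            if r != pivot_row and m[r][col]:
                f = m[r][col]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
    # a zero left part means the corresponding T row is a relation on M's rows
    return [row[cols:] for row in m if not any(row[:cols])]
```

For instance, over GF(5) the matrix [[1,2],[2,4]] has the single relation
3 ⋅ row1 + 1 ⋅ row2 = 0.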

The syzygy computation over ℤ / ⟨ p^n ⟩ was solved by carefully keeping
track of basis vectors whose head entry is a zero divisor. If, for v =
(0,...,0,h,*,...,*), h ≠ 0, there exists g ≠ 0 such that g ⋅ h = 0, the
vector g ⋅ v is regarded as an additional row vector which has to be reduced
and can be reduced with. After some more work this allowed for the
implementation of KernelMat (4.2-5) for matrices over ℤ / ⟨ p^n ⟩.
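
For illustration, over ℤ / ⟨ p^n ⟩ the factor g can be taken as the
generator of the annihilator of the head entry h. A short Python sketch (the
helper name is made up for this example, not taken from the package):

```python
from math import gcd

def head_annihilator(h, q):
    """Smallest g > 0 with g * h ≡ 0 (mod q), for q = p^n and h ≢ 0 (mod q)."""
    return q // gcd(h, q)

# The 1 x 1 example from the text over ℤ/8ℤ: the head h = 2 is a zero
# divisor, and g = 4 yields the extra row g * v = (8) ≡ (0), recovering
# the row relation [4 + 8ℤ] that the field method misses.
g = head_annihilator(2, 8)
```

Whenever g is a unit (i.e. h is not a zero divisor), the extra row is not
needed, so the field case is untouched by this bookkeeping.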

This concludes the explanation of the so-called basic tasks Gauss has to
handle when called by homalg to do matrix calculations. Here is a tabular
overview of the current capabilities of Gauss (p is a prime, n ∈ ℕ):

┌───────────────────┬───────┬─────────────┬────────┬────────┬─────────────┐
│ Matrix Type:      │ Dense │ Dense       │ Sparse │ Sparse │ Sparse      │
├───────────────────┼───────┼─────────────┼────────┼────────┼─────────────┤
│ Base Ring:        │ Field │ ℤ / ⟨ p^n ⟩ │ Field  │ GF(2)  │ ℤ / ⟨ p^n ⟩ │
├───────────────────┼───────┼─────────────┼────────┼────────┼─────────────┤
│ RankMat           │ GAP   │ n.a.        │ +      │ ++     │ n.a.        │
├───────────────────┼───────┼─────────────┼────────┼────────┼─────────────┤
│ EchelonMat        │ +     │ -           │ +      │ ++     │ +           │
├───────────────────┼───────┼─────────────┼────────┼────────┼─────────────┤
│ EchelonMatTransf. │ +     │ -           │ +      │ ++     │ +           │
├───────────────────┼───────┼─────────────┼────────┼────────┼─────────────┤
│ ReduceMat         │ +     │ -           │ +      │ ++     │ +           │
├───────────────────┼───────┼─────────────┼────────┼────────┼─────────────┤
│ ReduceMatTransf.  │ +     │ -           │ +      │ ++     │ +           │
├───────────────────┼───────┼─────────────┼────────┼────────┼─────────────┤
│ KernelMat         │ +     │ -           │ +      │ ++     │ +           │
└───────────────────┴───────┴─────────────┴────────┴────────┴─────────────┘

As you can see, the development of Hermite algorithms was not continued for
dense matrices. There are two reasons for that: GAP already has very good
algorithms for ℤ, and for small matrices the disadvantage of computing over
ℤ, potentially leading to coefficient explosion, is marginal.