<?xml version="1.0" encoding="UTF-8"?>

<!DOCTYPE Book SYSTEM "gapdoc.dtd"
[ <!ENTITY see '<Alt Only="LaTeX">$\to$</Alt><Alt Not="LaTeX">--&gt;</Alt>'>
<!ENTITY C "<Package>C</Package>">
<!ENTITY Gauss "<Package>Gauss</Package>">
<!ENTITY GaussForHomalg "<Package>GaussForHomalg</Package>">
<!ENTITY homalg "<Package>homalg</Package>">
<!ENTITY RingsForHomalg "<Package>RingsForHomalg</Package>">
<!ENTITY SCO "<Package>SCO</Package>">
<!ENTITY GAPDoc "<Package>GAPDoc</Package>">
]>

<Book Name="Gauss">

<TitlePage>
<Title>The &Gauss; Package Manual</Title>
<Subtitle>Extended Gauss Functionality for &GAP;</Subtitle>
<Version>
Version <#Include SYSTEM "../VERSION">
</Version>
<Author>Simon Goertzen<Alt Only="LaTeX"><Br/></Alt>
<Email>[email protected]</Email>
<Homepage>http://wwwb.math.rwth-aachen.de/goertzen/</Homepage>
<Address>
Lehrstuhl B für Mathematik<Br/>
Templergraben 64<Br/>
52062 Aachen<Br/>
(Germany)
</Address>
</Author>
<Date>March 2013</Date>
<Abstract>This document explains the primary uses of the &Gauss; package.
Included is a documented list of the most important methods
and functions needed to work with sparse matrices and the
algorithms provided by the &Gauss; package.
</Abstract>
<Copyright>&copyright; 2007-2013 by Simon Goertzen<P/>
This package may be distributed under the terms and conditions of
the GNU General Public License Version 2.
</Copyright>
<Acknowledgements>The &Gauss; package would not have been possible without the helpful contributions of
<List>
<Item>Max Neunhöffer, University of St Andrews, and</Item>
<Item>Mohamed Barakat, Lehrstuhl B für Mathematik, RWTH Aachen.</Item>
</List>
Many thanks to these two and the Lehrstuhl B für Mathematik in general.
It should be noted that the &GAP; algorithms for
<C>SemiEchelonForm</C> and other methods formed an important and
informative basis for the development of the extended Gaussian
algorithms. This manual was created with the help of the &GAPDoc;
package by F. Lübeck and M. Neunhöffer <Cite Key="GAPDoc"/>.
</Acknowledgements>
</TitlePage>

<TableOfContents/>

<Body>

<Chapter Label="chap:intro"><Heading>Introduction</Heading>

<Section Label="sec:overview">
<Heading>Overview of this manual</Heading>

Chapter <Ref Chap="chap:intro"/> is concerned with the technical details of
installing and running this package. Chapter <Ref Chap="chap:EGF"/>
answers the question of why and how the &GAP; functionality concerning a
sparse matrix type and Gaussian algorithms was extended. The following
chapters are concerned with the workings of the sparse matrix type
(<Ref Chap="chap:SM"/>) and the sparse Gaussian algorithms (<Ref
Chap="chap:Gauss"/>). Included is a documented list of the most
important methods and functions needed to work with sparse matrices
and the algorithms provided by the &Gauss; package. Anyone interested
in the source code should check out the files in the
<F>gap/pkg/Gauss/gap/</F> folder (&see; Appendix <Ref Label="FileOverview"/>).

</Section>

<#Include SYSTEM "install.xml"/>

</Chapter>

<Chapter Label="chap:EGF"><Heading>Extending Gauss Functionality</Heading>

<Section Label="sec:need"><Heading>The need for extended functionality</Heading>

&GAP; has a lot of functionality for row echelon forms of
matrices. These can be called by <C>SemiEchelonForm</C> and
similar commands. All of these work for the &GAP; matrix type over
fields. However, these algorithms are not capable of computing a
reduced row echelon form (RREF) of a matrix; there is no way to
"Gauss upwards". While this is not necessary for things like rank
or kernel computations, it was one of a number of missing features
important for the development of the &GAP; package &homalg; by
M. Barakat <Cite Key="homalg-package"/>.<P/><P/>
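To make the distinction concrete, here is a small Python sketch (illustrative only; the package itself is written in &GAP;, and the function name <C>rref</C> is ad hoc). A plain row echelon form only eliminates below each pivot; the reduced form also "Gausses upwards", clearing the entries above it:

```python
from fractions import Fraction

def rref(mat):
    # Gauss-Jordan elimination over the rationals: after normalizing each
    # pivot row, eliminate in ALL other rows, above as well as below the
    # pivot.  Dropping the "above" half would yield a mere REF.
    m = [[Fraction(x) for x in row] for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]   # swap the pivot row up
        piv = m[r][c]
        m[r] = [x / piv for x in m[r]]    # normalize the pivot to 1
        for i in range(rows):
            if i != r and m[i][c] != 0:   # "Gauss upwards": rows above, too
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

M = [[2, 4, 6],
     [1, 3, 5],
     [0, 1, 2]]
assert rref(M) == [[1, 0, -1], [0, 1, 2], [0, 0, 0]]
```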

Parallel to this development I worked on &SCO; <Cite Key="SCO"/>,
a package for creating simplicial sets and computing the
cohomology of orbifolds, based on the paper "Simplicial Cohomology
of Orbifolds" by I. Moerdijk and D. A. Pronk <Cite
Key="MP_SCO"/>. Very early on it became clear that the cohomology
matrices (with entries in &ZZ; or finite quotients of &ZZ;) would
grow exponentially in size with the cohomology degree. At one
point, for example, a <M>50651 \times 1133693</M> matrix had to be
handled.<P/><P/>

It should be quite clear that there was a need for a sparse matrix
data type and corresponding Gaussian algorithms. After an
unfruitful search for a computer algebra system capable of this
task, the &Gauss; package was born: it provides not only the
missing RREF algorithms, but also a new data type that
enables &GAP; to handle sparse matrices of almost arbitrary
size.<P/><P/>

I am proud to report that, thanks to optimizing the algorithms
for matrices over GF(2), it was possible to compute the GF(2)-rank
of the matrix mentioned above in less than 20 minutes with a
memory usage of about 3 GB.

</Section>

<Section Label="sec:app"><Heading>The applications of the &Gauss; package algorithms</Heading>

Please refer to <Cite Key="homalg-project"/> to find out more about the
&homalg; project and its related packages. Most of the motivation
for the algorithms in the &Gauss; package can be found there. If
you are interested in this project, you might also want to check
out my &GaussForHomalg; <Cite Key="GaussForHomalg"/> package,
which, just as &RingsForHomalg; <Cite Key="RingsForHomalg"/> does
for external rings, serves as the connection between &homalg;
and &Gauss;. By allowing &homalg; to delegate computational tasks
to &Gauss;, this small package extends &homalg;'s capabilities to
dense and sparse matrices over fields and rings of the form
<M>&ZZ; / \langle p^n \rangle</M>.<P/>

For those unfamiliar with the &homalg; project let me explain a
couple of points. As outlined in <Cite Key="BR"/> by D. Robertz
and M. Barakat, homological computations can be reduced to three
basic tasks:<P/>

<List>
<Item>Computing a row basis of a module (<C>BasisOfRowModule</C>).</Item>
<Item>Reducing a module with a basis (<C>DecideZeroRows</C>).</Item>
<Item>Computing the relations between module elements (<C>SyzygiesGeneratorsOfRows</C>).</Item>
</List>

In addition to these tasks only relatively easy tools for matrix
manipulation are needed, ranging from addition and multiplication
to finding the zero rows in a matrix. However, to reduce the need for
communication it might be helpful to supply &homalg; with some
more advanced procedures.<P/><P/>

While the above tasks can be quite difficult when, for example,
working in noncommutative polynomial rings, in the &Gauss; case
they can all be done as long as you can compute a reduced row
echelon form. This is clear for <C>BasisOfRowModule</C>, as the
rows of the RREF of the matrix are already a basis of the
module. <Ref Meth="EchelonMat"/> is used to compute RREFs, based
on the &GAP; internal method <C>SemiEchelonMat</C> for row echelon
forms.<P/><P/>

Let's look at the second point, the basic function
<C>DecideZeroRows</C>: when you face the task of reducing a module
<M>A</M> with a given basis <M>B</M>, you can compute the RREF of
the following block matrix:
<Table Align="|c|c|">
<HorLine/>
<Row>
<Item><Alt Not="LaTeX">Id</Alt>
<Alt Only="LaTeX"><![CDATA[
$\begin{array}{ccc}
1&\\
&\ddots&\\
&&1\\
\end{array}$
]]></Alt></Item>
<Item>A</Item>
</Row>
<HorLine/>
<Row>
<Item>0</Item>
<Item>B</Item>
</Row>
<HorLine/>
</Table>
By computing the RREF (notice how important "Gaussing upwards" is
here), <M>A</M> is reduced with <M>B</M>. However, the left side of
the matrix just serves the single purpose of tricking the Gaussian
algorithms into doing what we want. Therefore, it was a logical
step to implement <Ref Meth="ReduceMat"/>, which does the same
thing but without needing the unnecessary columns.<P/>
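The block-matrix trick can be checked in a few lines of Python (an illustrative sketch over the rationals, not the package's &GAP; code; the names <C>rref</C> and <C>reduce_with_basis</C> are ad hoc):

```python
from fractions import Fraction

def rref(mat):
    # Gauss-Jordan: reduced row echelon form over the rationals.
    m = [[Fraction(x) for x in row] for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        piv = m[r][c]
        m[r] = [x / piv for x in m[r]]
        for i in range(rows):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

def reduce_with_basis(A, B):
    # RREF of the block matrix [[Id, A], [0, B]]: the identity block keeps
    # the A-rows in place, while "Gaussing upwards" reduces them modulo the
    # row space of B.  The reduced A is the right-hand block of the first
    # len(A) rows.
    m = len(A)
    block = [[Fraction(int(j == i)) for j in range(m)] + list(row)
             for i, row in enumerate(A)]
    block += [[Fraction(0)] * m + list(row) for row in B]
    return [row[m:] for row in rref(block)[:m]]

A = [[3, 6, 1],
     [1, 0, 0]]
B = [[1, 2, 0]]          # a single basis row
assert reduce_with_basis(A, B) == [[0, 0, 1], [0, -2, 0]]
```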

Note: When, much later, it became clear that it was important to compute
the transformation matrices of the reduction, <Ref
Meth="ReduceMatTransformation"/> was born, similar to <Ref
Meth="EchelonMatTransformation"/>. This corresponds to the
&homalg; procedure <C>DecideZeroRowsEffectively</C>.<P/><P/>

The third procedure, <C>SyzygiesGeneratorsOfRows</C>, is concerned with the
relations between the rows of a matrix, each row representing a module
element. Over a field these relations are exactly the kernel of
the matrix. One can easily see that they can be computed by taking
the matrix
<Table Align="|c|c|">
<HorLine/>
<Row>
<Item>A</Item>
<Item><Alt Not="LaTeX">Id</Alt>
<Alt Only="LaTeX"><![CDATA[
$\begin{array}{ccc}
1&\\
&\ddots&\\
&&1\\
\end{array}$
]]></Alt></Item>
</Row>
<HorLine/>
</Table>
and computing its row echelon form. Then the row relations are
generated by the rows to the right of the zero rows of the
REF. There are two problems with this approach: the computation
diagonalizes the kernel, which might not be wanted, and, much
worse, it does not work at all for rings with zero divisors. For
example, the <M>1 \times 1</M> matrix <M>[2 + 8&ZZ;]</M> has the row
relation <M>[4 + 8&ZZ;]</M>, which would not have been found by
this method.<P/>

Approaching this problem led to the method <Ref
Meth="EchelonMatTransformation"/>, which additionally computes the
transformation matrix <M>T</M> such that RREF <M>= T \cdot M</M>.
Similar to <C>SemiEchelonMatTransformation</C>, <M>T</M> is split
up into the rows needed to create the basis vectors of the RREF,
and the relations that led to zero rows. Focusing on
computations over fields, it was an easy step to write <Ref
Meth="KernelMat"/>, which terminates after the REF and returns the
kernel generators.<P/>
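Over a field the idea can be sketched as follows in Python (illustrative only; <C>kernel_mat</C> is an ad-hoc name, not the package's implementation). The matrix is augmented with an identity block that records the transformation <M>T</M>; rows whose original part is eliminated to zero contribute their <M>T</M>-part as kernel generators:

```python
from fractions import Fraction

def kernel_mat(mat):
    # Row-reduce [mat | Id], but only choose pivots inside the columns of
    # mat.  The Id-block then records the transformation T; rows whose
    # mat-part became zero yield relations v with v * mat = 0, i.e.
    # generators of the (left) kernel.
    rows, cols = len(mat), len(mat[0])
    m = [[Fraction(x) for x in row] + [Fraction(int(j == i)) for j in range(rows)]
         for i, row in enumerate(mat)]
    r = 0
    for c in range(cols):                 # pivot only in the original columns
        pivot = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        piv = m[r][c]
        m[r] = [x / piv for x in m[r]]
        for i in range(rows):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return [row[cols:] for row in m[r:]]  # T-parts of the zero rows

M = [[1, 2],
     [2, 4],
     [0, 1]]
assert kernel_mat(M) == [[-2, 1, 0]]      # -2*(1,2) + 1*(2,4) = (0,0)
```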

The syzygy computation over <M>&ZZ; / \langle p^n \rangle</M> was solved by
carefully keeping track of basis vectors whose head is a zero
divisor. If, for <M> v = (0,\ldots,0,h,*,\ldots,*), h \neq 0,</M>
there exists <M>g \neq 0</M> such that <M>g \cdot h = 0</M>, the
vector <M>g \cdot v</M> is regarded as an additional row vector,
which both has to be reduced and can be used to reduce other
rows. After some more work this allowed for the implementation of <Ref
Meth="KernelMat"/> for matrices over <M>&ZZ; / \langle p^n \rangle</M>.<P/>
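A brute-force check in Python of what <Ref Meth="KernelMat"/> must find over such rings (illustrative only; the actual algorithm uses the head-annihilator bookkeeping described above, not enumeration):

```python
from itertools import product

def left_kernel_bruteforce(mat, n):
    # Enumerate all row vectors v over Z/n and keep those with v * mat = 0
    # (mod n).  Feasible only for tiny examples, but it shows which
    # relations a Hermite-style elimination has to produce.
    rows, cols = len(mat), len(mat[0])
    return [list(v) for v in product(range(n), repeat=rows)
            if all(sum(v[i] * mat[i][j] for i in range(rows)) % n == 0
                   for j in range(cols))]

# The 1x1 example from the text: over Z/8, the matrix [2] has the
# row relation [4], because 4 * 2 = 8 = 0 (mod 8).
assert left_kernel_bruteforce([[2]], 8) == [[0], [4]]
```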

This concludes the explanation of the so-called basic tasks
&Gauss; has to handle when called by &homalg; to do matrix
calculations. Here is a tabular overview of the current
capabilities of &Gauss; (<M>p</M> is a prime, <M>n \in &NN;</M>):<P/>

<Table Align="|c||c|c|c|c|c|">
<HorLine/>
<Row>
<Item>Matrix Type:</Item>
<Item>Dense</Item>
<Item>Dense</Item>
<Item>Sparse</Item>
<Item>Sparse</Item>
<Item>Sparse</Item>
</Row>
<HorLine/>
<Row>
<Item>Base Ring:</Item>
<Item>Field</Item>
<Item><M>&ZZ; / \langle p^n \rangle</M></Item>
<Item>Field</Item>
<Item>GF(2)</Item>
<Item><M>&ZZ; / \langle p^n \rangle</M></Item>
</Row>
<HorLine/>
<HorLine/>
<Row>
<Item>RankMat</Item>
<Item>&GAP;</Item>
<Item>n.a.</Item>
<Item>+</Item>
<Item>++</Item>
<Item>n.a.</Item>
</Row>
<HorLine/>
<Row>
<Item>EchelonMat</Item>
<Item>+</Item>
<Item>-</Item>
<Item>+</Item>
<Item>++</Item>
<Item>+</Item>
</Row>
<HorLine/>
<Row>
<Item>EchelonMatTransf.</Item>
<Item>+</Item>
<Item>-</Item>
<Item>+</Item>
<Item>++</Item>
<Item>+</Item>
</Row>
<HorLine/>
<Row>
<Item>ReduceMat</Item>
<Item>+</Item>
<Item>-</Item>
<Item>+</Item>
<Item>++</Item>
<Item>+</Item>
</Row>
<HorLine/>
<Row>
<Item>ReduceMatTransf.</Item>
<Item>+</Item>
<Item>-</Item>
<Item>+</Item>
<Item>++</Item>
<Item>+</Item>
</Row>
<HorLine/>
<Row>
<Item>KernelMat</Item>
<Item>+</Item>
<Item>-</Item>
<Item>+</Item>
<Item>++</Item>
<Item>+</Item>
</Row>
<HorLine/>
</Table>

As you can see, the development of Hermite algorithms was not
continued for dense matrices. There are two reasons for this:
&GAP; already has very good algorithms for &ZZ;, and for small
matrices the disadvantage of computing over &ZZ;, potentially
leading to coefficient explosion, is marginal.

</Section>

</Chapter>
<Chapter Label="chap:SM"><Heading>The Sparse Matrix Data Type</Heading>

<Section Label="sec:workings"><Heading>The inner workings of &Gauss;
sparse matrices</Heading>

When doing any kind of computation there is a constant conflict
between memory load and speed. On the one hand, memory usage is
bounded by the total available memory; on the other hand,
computation time should not exceed certain
proportions. Memory usage and CPU time are generally
inversely proportional, because the computer needs more time to
perform operations on a compactified data structure. The
idea of sparse matrices mirrors exactly the need for less memory
load; it is therefore natural that sparse algorithms take more
time than dense ones. However, if the matrix is sufficiently large
and sparse at the same time, sparse algorithms can easily be
faster than dense ones while maintaining minimal memory load.<P/>

It should be noted that, although matrices that appear naturally
in homological algebra are almost always sparse, they do not have
to stay sparse under (R)REF algorithms, especially when the
computation is concerned with transformation matrices. Therefore,
in a perfect world there would be ways implemented not only to
find out which data structure to use, but also at what point to
convert from one to the other. This was, however, not the aim of
the &Gauss; package and is just one of many points in which this
package could be optimized or extended.

Take a look at this matrix <M>M</M>:

<Table Align="|ccccc|">
<HorLine/>
<Row>
<Item>0</Item><Item>0</Item><Item>2</Item><Item>9</Item><Item>0</Item>
</Row>
<Row>
<Item>0</Item><Item>5</Item><Item>0</Item><Item>0</Item><Item>0</Item>
</Row>
<Row>
<Item>0</Item><Item>0</Item><Item>0</Item><Item>1</Item><Item>0</Item>
</Row>
<HorLine/>
</Table>

The matrix <M>M</M> carries the same information as the following table,
if and only if you know how many rows and columns the matrix
has. There is also the matter of the base ring, but this is not
important for now:

<Table Align="|cc|">
<HorLine/>
<Row><Item>(i,j)</Item><Item>Entry</Item></Row>
<HorLine/>
<Row><Item>(1,3)</Item><Item>2</Item></Row>
<Row><Item>(1,4)</Item><Item>9</Item></Row>
<Row><Item>(2,2)</Item><Item>5</Item></Row>
<Row><Item>(3,4)</Item><Item>1</Item></Row>
<HorLine/>
</Table>

This table relates each index tuple to its nonzero entry; all
other matrix entries are defined to be zero. This only works for
known dimensions of the matrix, as otherwise trailing zero rows and
columns could get lost (notice how the table gives no hint about
the existence of a 5th column). To convert the above table into a
sparse data structure, one could list the table entries in this
way:<P/>

<Table Align="c">
<Row><Item><M>[ [ 1, 3, 2 ], [ 1, 4, 9 ], [ 2, 2, 5 ], [ 3, 4, 1 ] ]</M></Item></Row>
</Table>

However, this data structure would not be very efficient. Whenever
you are interested in a row <M>i</M> of <M>M</M> (this happens all the time
when performing Gaussian elimination), the whole list would have
to be searched for 3-tuples of the form <M>[ i, *, *
]</M>. This is why I tried to manage the row index by putting the
tuples into the corresponding list entry:<Br/>

<Table Align = "l">
<Row><Item><M>[ [ [ 3, 2 ], [ 4, 9 ] ],</M></Item></Row>
<Row><Item><M>[ [ 2, 5 ] ],</M></Item></Row>
<Row><Item><M>[ [ 4, 1 ] ] ]</M></Item></Row>
</Table>


As you can see, this looks fairly complicated. However, the same
information can be stored in this form, which would become the
final data structure for &Gauss; sparse matrices:

<Table Align = "clcl">
<Row><Item>indices :=</Item><Item>[ [ 3, 4 ],</Item><Item>entries :=</Item><Item>[ [ 2, 9 ],</Item></Row>
<Row><Item></Item><Item> [ 2 ],</Item><Item></Item><Item> [ 5 ],</Item></Row>
<Row><Item></Item><Item> [ 4 ] ]</Item><Item></Item><Item> [ 1 ] ]</Item></Row>
</Table>

Although now the number of rows is equal to the Length of both
`indices' and `entries', it is still stored in the sparse
matrix. Here is the full data structure (&see;
<Ref Func="SparseMatrix" Label="constructor using gap matrices"/>):

<Listing Type="from SparseMatrix.gi">
DeclareRepresentation( "IsSparseMatrixRep",
        IsSparseMatrix, [ "nrows", "ncols", "indices", "entries", "ring" ] );
</Listing>

As you can see, the matrix stores its ring to be on the safe side.
This is especially important for zero matrices, as there is no way
to determine the base ring from the sparse matrix structure. For
further information on sparse matrix construction and converting,
refer to <Ref Func="SparseMatrix" Label="constructor using gap matrices"/>.
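For illustration, the conversion from a dense matrix to the `indices'/`entries' pair can be sketched in Python (an ad-hoc sketch, not the package's <C>SparseMatrix</C> constructor; note the 1-based column indices, matching &GAP; conventions):

```python
def to_sparse(mat):
    # Per row, collect the 1-based positions of the nonzero entries and,
    # in parallel, the entries themselves; zeros are simply not stored.
    indices, entries = [], []
    for row in mat:
        idx = [j + 1 for j, x in enumerate(row) if x != 0]
        indices.append(idx)
        entries.append([row[j - 1] for j in idx])
    return indices, entries

# the example matrix M from this section
M = [[0, 0, 2, 9, 0],
     [0, 5, 0, 0, 0],
     [0, 0, 0, 1, 0]]
assert to_sparse(M) == ([[3, 4], [2], [4]], [[2, 9], [5], [1]])
```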

<Subsection Label="sub:gf2"><Heading>A special case: GF(2)</Heading>

<Listing Type="from SparseMatrix.gi">
DeclareRepresentation( "IsSparseMatrixGF2Rep",
        IsSparseMatrix, [ "nrows", "ncols", "indices", "ring" ] );
</Listing>

Because the nonzero entries of a matrix over GF(2) are all "1",
the entries of <M>M</M> are not stored at all. It is of course crucial
that all operations and algorithms make 100% sure that all
appearing zero entries are deleted from the `indices' as well as
the `entries' list as they arise.
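In this representation, adding two sparse rows over GF(2) amounts to the symmetric difference of their index lists; indices present in both rows cancel to zero and are dropped, exactly as required above. A Python sketch (ad-hoc name, not package code):

```python
def add_rows_gf2(row_a, row_b):
    # Rows are sorted lists of 1-based nonzero column indices.  Over GF(2),
    # 1 + 1 = 0, so indices present in both rows cancel and must disappear:
    # the sum is the symmetric difference of the two index sets.
    return sorted(set(row_a) ^ set(row_b))

assert add_rows_gf2([1, 3, 4], [3, 5]) == [1, 4, 5]
```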

</Subsection>

</Section>

<Section Label="sec:mfSM"><Heading>Methods and functions for sparse matrices</Heading>
<#Include Label="SparseMatrix">
<#Include Label="ConvertSparseMatrixToMatrix">
<#Include Label="CopyMat">
<#Include Label="GetEntry">
<#Include Label="SetEntry">
<#Include Label="AddToEntry">
<#Include Label="SparseZeroMatrix">
<#Include Label="SparseIdentityMatrix">
<#Include Label="TransposedSparseMat">
<#Include Label="CertainRows">
<#Include Label="CertainColumns">
<#Include Label="UnionOfRows">
<#Include Label="UnionOfColumns">
<#Include Label="SparseDiagMat">
<#Include Label="Nrows">
<#Include Label="Ncols">
<#Include Label="IndicesOfSparseMatrix">
<#Include Label="EntriesOfSparseMatrix">
<#Include Label="RingOfDefinition">
</Section>

</Chapter>

<Chapter Label="chap:Gauss"><Heading>Gaussian Algorithms</Heading>

<Section Label="sec:list"><Heading>A list of the available algorithms</Heading>

As described earlier, the main functions of &Gauss; are <Ref
Meth="EchelonMat"/> and <Ref Meth="EchelonMatTransformation"/>,
<Ref Meth="ReduceMat"/> and <Ref Meth="ReduceMatTransformation"/>,
<Ref Meth="KernelMat"/> and, additionally, <Ref Meth="Rank"/>.

These are all documented in the next section, but of course they rely on
specific algorithms depending on the base ring of the matrix. These
are not fully documented, but it should be very easy to find out how
they work based on the documentation of the main functions.

<Table Align="lll">
<Row><Item>EchelonMat</Item></Row>
<Row><Item></Item><Item>Field:</Item><Item><C>EchelonMatDestructive</C></Item></Row>
<Row><Item></Item><Item>Ring:</Item><Item><C>HermiteMatDestructive</C></Item></Row>
<Row><Item>EchelonMatTransformation</Item></Row>
<Row><Item></Item><Item>Field:</Item><Item><C>EchelonMatTransformationDestructive</C></Item></Row>
<Row><Item></Item><Item>Ring:</Item><Item><C>HermiteMatTransformationDestructive</C></Item></Row>
<Row><Item>ReduceMat</Item></Row>
<Row><Item></Item><Item>Field:</Item><Item><C>ReduceMatWithEchelonMat</C></Item></Row>
<Row><Item></Item><Item>Ring:</Item><Item><C>ReduceMatWithHermiteMat</C></Item></Row>
<Row><Item>ReduceMatTransformation</Item></Row>
<Row><Item></Item><Item>Field:</Item><Item><C>ReduceMatWithEchelonMatTransformation</C></Item></Row>
<Row><Item></Item><Item>Ring:</Item><Item><C>ReduceMatWithHermiteMatTransformation</C></Item></Row>
<Row><Item>KernelMat</Item></Row>
<Row><Item></Item><Item>Field:</Item><Item><C>KernelEchelonMatDestructive</C></Item></Row>
<Row><Item></Item><Item>Ring:</Item><Item><C>KernelHermiteMatDestructive</C></Item></Row>
<Row><Item>Rank</Item></Row>
<Row><Item></Item><Item>Field (dense):</Item><Item><C>Rank</C> (&GAP; method)</Item></Row>
<Row><Item></Item><Item>Field (sparse):</Item><Item><C>RankDestructive</C></Item></Row>
<Row><Item></Item><Item>GF(2) (sparse):</Item><Item><C>RankOfIndicesListList</C></Item></Row>
<Row><Item></Item><Item>Ring:</Item><Item>n.a.</Item></Row>
</Table>

</Section>

<Section Label="sec:mfGauss"><Heading>Methods and Functions for &Gauss;ian algorithms</Heading>
<#Include Label="EchelonMat">
<#Include Label="EchelonMatTransformation">
<#Include Label="ReduceMat">
<#Include Label="ReduceMatTransformation">
<#Include Label="KernelMat">
<#Include Label="Rank">
</Section>


</Chapter>

</Body>

<Appendix Label="FileOverview">
<Heading>An Overview of the &Gauss; package source code</Heading>
<Table Align="l|l">
<Caption><E>The &Gauss; package files.</E></Caption>
<Row><Item>Filename</Item><Item>Content</Item></Row>
<HorLine/>
<Row><Item>SparseMatrix.gi</Item><Item>Definitions and methods for
the sparse matrix type</Item></Row>
<Row><Item>SparseMatrixGF2.gi</Item><Item>Special case GF(2): no
matrix entries needed</Item></Row>
<Row><Item>GaussDense.gi</Item><Item>Gaussian elimination for &GAP;
matrices over fields</Item></Row>
<Row><Item>Sparse.gi</Item><Item>Documentation and forking depending
on the base ring</Item></Row>
<Row><Item>GaussSparse.gi</Item><Item>Gaussian elimination for sparse
matrices over fields</Item></Row>
<Row><Item>HermiteSparse.gi</Item><Item>Hermite elimination for sparse
matrices over <M>&ZZ; / \langle p^n \rangle</M></Item></Row>
</Table>
</Appendix>

<Bibliography Databases="GaussBib.xml"/>

<TheIndex/>

</Book>