
4.2 The Isomorphism Theorems For Modules
If N is a submodule of the R-module M (notation N ≤ M), then in particular N is an additive subgroup of M, and we may form the quotient group M/N in the usual way. In fact M/N becomes an R-module if we define r(x + N) = rx + N. (This makes sense because if x belongs to the submodule N, so does rx.) Since scalar multiplication in the quotient module M/N is carried out via scalar multiplication in the original module M, we can check the module axioms without difficulty. The canonical map π : M → M/N is a module homomorphism with kernel N. Just as with groups and rings, we can establish the basic isomorphism theorems for modules.

4.2.1 Factor Theorem For Modules
Any module homomorphism f : M → M′ whose kernel contains N can be factored through M/N. In other words, there is a unique module homomorphism f̄ : M/N → M′ such that f̄(x + N) = f(x). Furthermore, (i) f̄ is an epimorphism if and only if f is an epimorphism; (ii) f̄ is a monomorphism if and only if ker f = N; (iii) f̄ is an isomorphism if and only if f is an epimorphism and ker f = N.
Proof. Exactly as in (2.3.1), with appropriate notational changes. (In Figure 2.3.1, replace R by M, S by M′ and I by N.) ♣

4.2.2 First Isomorphism Theorem For Modules
If f : M → M′ is a module homomorphism with kernel N, then the image of f is isomorphic to M/N.
Proof. Apply the factor theorem, and note that f is an epimorphism onto its image. ♣

4.2.3 Second Isomorphism Theorem For Modules
Let S and T be submodules of M, and let S + T = {x + y : x ∈ S, y ∈ T}. Then S + T and S ∩ T are submodules of M and

(S + T)/T ≅ S/(S ∩ T).

Proof. The module axioms for S + T and S ∩ T can be checked in routine fashion. Define a map f : S → M/T by f(x) = x + T. Then f is a module homomorphism whose kernel is S ∩ T and whose image is {x + T : x ∈ S} = (S + T)/T. The first isomorphism theorem for modules gives the desired result. ♣

4.2.4 Third Isomorphism Theorem For Modules
If N ≤ L ≤ M, then

M/L ≅ (M/N)/(L/N).

Proof. Define f : M/N → M/L by f(x + N) = x + L. As in (2.3.4), the kernel of f is {x + N : x ∈ L} = L/N, and the image of f is {x + L : x ∈ M} = M/L. The result follows from the first isomorphism theorem for modules. ♣


4.2.5 Correspondence Theorem For Modules
Let N be a submodule of the R-module M. The map S → S/N sets up a one-to-one correspondence between the set of all submodules of M containing N and the set of all submodules of M/N. The inverse of the map is T → π⁻¹(T), where π is the canonical map: M → M/N.


Proof. The correspondence theorem for groups yields a one-to-one correspondence between additive subgroups of M containing N and additive subgroups of M/N. We must check that submodules correspond to submodules, and it is sufficient to show that if S1/N ≤ S2/N, then S1 ≤ S2 (the converse is immediate). If x ∈ S1, then x + N ∈ S1/N ⊆ S2/N, so x + N = y + N for some y ∈ S2. Thus x − y ∈ N ⊆ S2, and since y ∈ S2 we must have x ∈ S2 as well. Therefore S1 ≤ S2. ♣


We now look at modules that have a particularly simple structure, and can be used
as building blocks for more complicated modules.


4.2.6 Definitions and Comments
An R-module M is cyclic if it is generated by a single element x. In other words,

M = Rx = {rx : r ∈ R}.

Thus every element of M is a scalar multiple of x. (If x = 0, then M = {0}, which is called the zero module and is often written simply as 0.) A cyclic vector space over a field is a one-dimensional space, assuming that x ≠ 0.
The annihilator of an element y in the R-module M is Iy = {r ∈ R : ry = 0}, a left ideal of R. If R is commutative, and M is cyclic with generator x, then M ≅ R/Ix. To see this, apply the first isomorphism theorem for modules to the map r → rx of R onto M. The annihilator of the module M is I0 = {r ∈ R : ry = 0 for every y ∈ M}. Note that I0 is a two-sided ideal, because if r ∈ I0 and s ∈ R, then for every y ∈ M we have (rs)y = r(sy) = 0, and also (sr)y = s(ry) = s0 = 0. When R is commutative, annihilating the generator of a cyclic module is equivalent to annihilating the entire module.
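For concreteness, here is a small computational illustration with R = Z and M = Z/nZ (our own example, not from the text): the annihilator of y is the ideal generated by n/gcd(n, y).

```python
from math import gcd

# Annihilators in the cyclic Z-module Z/nZ (a sketch; names are ours).
def annihilator_generator(n, y):
    """I_y = {r in Z : r*y = 0 mod n} is the ideal generated by n // gcd(n, y)."""
    return n // gcd(n, y % n)

n = 12
print(annihilator_generator(n, 1))   # 12: the generator 1 has I_1 = 12Z = I_0
print(annihilator_generator(n, 8))   # 3:  indeed 3*8 = 24 = 0 mod 12
```

The annihilator of the generator 1 equals the annihilator of the whole module, as the preceding paragraph (and (4.2.7)(a)) predicts.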

4.2.7 Lemma
(a) If x generates a cyclic module M over the commutative ring R, then Ix = I0, so that M ≅ R/I0. (In this situation, I0 is frequently referred to as the order ideal of M.)
(b) Two cyclic R-modules over a commutative ring are isomorphic if and only if they
have the same annihilator.
Proof. (a) If rx = 0 and y ∈ M, then y = sx for some s ∈ R, so ry = r(sx) = s(rx) = s0 = 0. Conversely, if r annihilates M, then in particular, rx = 0.
(b) The "if" part follows from (a), so assume that g : Rx → Ry is an isomorphism of cyclic R-modules. Since g is an isomorphism, g(x) must be a generator of Ry, so we may as well take g(x) = y. Then g(rx) = rg(x) = ry, so rx corresponds to ry under the isomorphism. Therefore r belongs to the annihilator of Rx if and only if r belongs to the annihilator of Ry. ♣

Problems For Section 4.2
1. Show that every submodule of the quotient module M/N can be expressed as (L+N )/N
for some submodule L of M .
2. In Problem 1, must L contain N ?
3. In the matrix ring Mn(R), let M be the submodule generated by E11, the matrix with 1 in row 1, column 1, and 0's elsewhere. Thus M = {AE11 : A ∈ Mn(R)}. Show that M consists of all matrices whose entries are zero except perhaps in column 1.
4. Continuing Problem 3, show that the annihilator of E11 consists of all matrices whose first column is zero, but the annihilator of M is {0}.
5. If I is an ideal of the ring R, show that R/I is a cyclic R-module.
6. Let M be an R-module, and let I be an ideal of R. We wish to make M into an
R/I-module via (r + I)m = rm, r ∈ R, m ∈ M . When will this be legal?
7. Assuming legality in Problem 6, let M1 be the resulting R/I-module, and note that as
sets, M1 = M . Let N be a subset of M and consider the following two statements:
(a) N is an R-submodule of M ;
(b) N is an R/I-submodule of M1 .
Can one of these statements be true and the other false?


4.3 Direct Sums and Free Modules
4.3.1 Direct Products
In Section 1.5, we studied direct products of groups, and the basic idea seems to carry over to modules. Suppose that we have an R-module Mi for each i in some index set I (possibly infinite). The members of the direct product of the Mi, denoted by ∏i∈I Mi, are all families (ai, i ∈ I), where ai ∈ Mi. (A family is just a function on I whose value at the element i is ai.) Addition is described by (ai) + (bi) = (ai + bi) and scalar multiplication by r(ai) = (rai).

There is nothing wrong with this definition, but the resulting mathematical object has some properties that are undesirable. If (ei) is the family with 1 in position i and zeros elsewhere, then (thinking about vector spaces) it would be useful to express an arbitrary element of the direct product in terms of the ei. But if the index set I is infinite, we will need concepts of limit and convergence, and this will take us out of algebra and into analysis. Another approach is to modify the definition of direct product.


4.3.2 Definitions
The external direct sum of the modules Mi, i ∈ I, denoted by ⊕i∈I Mi, consists of all families (ai, i ∈ I) with ai ∈ Mi, such that ai = 0 for all but finitely many i. Addition and scalar multiplication are defined exactly as for the direct product, so that the external direct sum coincides with the direct product when the index set I is finite.
The R-module M is the internal direct sum of the submodules Mi if each x ∈ M can be expressed uniquely as xi1 + · · · + xin where 0 ≠ xik ∈ Mik, k = 1, . . . , n. (The positive integer n and the elements xik depend on x. In any expression of this type, the indices ik are assumed distinct.)
Just as with groups, the internal and external direct sums are isomorphic. To see this
without a lot of formalism, let the element xik ∈ Mik correspond to the family that has
xik in position ik and zeros elsewhere. We will follow standard practice and refer to the
“direct sum” without the qualifying adjective. Again as with groups, the next result may
help us to recognize when a module can be expressed as a direct sum.


4.3.3 Proposition
The module M is the direct sum of submodules Mi if and only if both of the following conditions are satisfied:

(1) M = Σi Mi, that is, each x ∈ M is a finite sum of the form xi1 + · · · + xin, where xik ∈ Mik;

(2) For each i, Mi ∩ Σj≠i Mj = 0.

(Note that in condition (1), we do not assume that the representation is unique. Observe also that another way of expressing (2) is that if xi1 + · · · + xin = 0, with xik ∈ Mik, then xik = 0 for all k.)

Proof. The necessity of the conditions follows from the definition of external direct sum, so assume that (1) and (2) hold. If x ∈ M then by (1), x is a finite sum of elements from various Mi's. For convenience in notation, say x = x1 + x2 + x3 + x4 with xi ∈ Mi, i = 1, 2, 3, 4. If the representation is not unique, say x = y1 + y2 + y4 + y5 + y6 with yi ∈ Mi, i = 1, 2, 4, 5, 6. Then x3 is a sum of terms from modules other than M3, so by (2), x3 = 0. Similarly, y5 = y6 = 0 and we have x1 + x2 + x4 = y1 + y2 + y4. But then x1 − y1 is a sum of terms from modules other than M1, so by (2), x1 = y1. Similarly x2 = y2, x4 = y4, and the result follows. ♣

A basic property of the direct sum M = ⊕i∈I Mi is that homomorphisms fi : Mi → N can be "lifted" to M. In other words, there is a unique homomorphism f : M → N such that for each i, f = fi on Mi. Explicitly,

f(xi1 + · · · + xir) = fi1(xi1) + · · · + fir(xir).

[No other choice is possible for f, and since each fi is a homomorphism, so is f.]
We know that every vector space has a basis, but not every module is so fortunate; see Section 4.1, Problem 5. We now examine modules that have this feature.

4.3.4 Definitions and Comments
Let S be a subset of the R-module M. We say that S is linearly independent over R if λ1x1 + · · · + λkxk = 0 implies that all λi = 0 (λi ∈ R, xi ∈ S, k = 1, 2, . . .). We say that S is a spanning (or generating) set for M over R, or that S spans (generates) M over R, if each x ∈ M can be written as a finite linear combination of elements of S with coefficients in R. We will usually omit "over R" if the underlying ring R is clearly identified. A basis is a linearly independent spanning set, and a module that has a basis is said to be free.
Suppose that M is a free module with basis (bi, i ∈ I), and we look at the submodule Mi spanned by the basis element bi. (In general, the submodule spanned (or generated) by a subset T of M consists of all finite linear combinations of elements of T with coefficients in R. Thus the submodule spanned by bi is the set of all rbi, r ∈ R.) If R is regarded as a module over itself, then the map r → rbi is an R-module isomorphism of R and Mi, because {bi} is a linearly independent set. Since the bi span M, it follows that M is the sum of the submodules Mi, and by linear independence of the bi, the sum is direct. Thus we have an illuminating interpretation of a free module:

A free module is a direct sum of isomorphic copies of the underlying ring R.

Conversely, a direct sum of copies of R is a free R-module. If ei has 1 as its ith component and zeros elsewhere, the ei form a basis.
This characterization allows us to recognize several examples of free modules.

1. For any positive integer n, Rⁿ is a free R-module.

2. The matrix ring Mmn(R) is a free R-module with basis Eij, i = 1, . . . , m, j = 1, . . . , n.

3. The polynomial ring R[X] is a free R-module with basis 1, X, X², . . . .

We will adopt the standard convention that the zero module is free with the empty set as basis.
Any two bases for a vector space over a field have the same cardinality. This property does not hold for arbitrary free modules, but the following result covers quite a few cases.

4.3.5 Theorem
Any two bases for a free module M over a commutative ring R have the same cardinality.

Proof. If I is a maximal ideal of R, then k = R/I is a field, and V = M/IM is a vector space over k. [By IM we mean the set of all finite sums Σ aixi with ai ∈ I and xi ∈ M; thus IM is a submodule of M. If r + I ∈ k and x + IM ∈ M/IM, we take (r + I)(x + IM) to be rx + IM. The scalar multiplication is well-defined because r ∈ I or x ∈ IM implies that rx ∈ IM. We can express this in a slightly different manner by saying that I annihilates M/IM. The requirements for a vector space can be checked routinely.]
Now if (xi) is a basis for M, let x̄i = xi + IM. Since the xi span M, the x̄i span M/IM. If Σ āix̄i = 0, where āi = ai + I, ai ∈ R, then Σ aixi ∈ IM. Thus Σ aixi = Σ bjxj with bj ∈ I. Since the xi form a basis, we must have ai = bj for some j. Consequently ai ∈ I, so that āi = 0 in k. We conclude that the x̄i form a basis for V over k, and since the dimension of V over k depends only on M, R and I, and not on a particular basis for M, the result follows. ♣

4.3.6 Some Key Properties of Free Modules
Suppose that M is a free module with basis (xi), and we wish to construct a module homomorphism f from M to an arbitrary module N. Just as with vector spaces, we can specify f(xi) = yi ∈ N arbitrarily on basis elements, and extend by linearity. Thus if x = Σ aixi ∈ M, we have f(x) = Σ aiyi. (The idea should be familiar; for example, a linear transformation on Euclidean 3-space is determined by what it does to the three standard basis vectors.) Now let's turn this process around:

If N is an arbitrary module, we can express N as a homomorphic image of a
free module.

All we need is a set (yi , i ∈ I) of generators for N . (If all else fails, we can take the yi to
be all the elements of N .) We then construct a free module with basis (xi , i ∈ I). (To do
this, take the direct sum of copies of R, as many copies as there are elements of I.) Then
map xi to yi for each i.
Note that by the first isomorphism theorem, every module is a quotient of a free module.

Problems For Section 4.3
1. Show that in Proposition 4.3.3, (2) can be replaced by the weaker condition that for each i, Mi ∩ Σj<i Mj = 0. (Assume a fixed total ordering on the index set.)
2. Let A be a finite abelian group. Is it possible for A to be a free Z-module?
3. Let r and s be elements in the ideal I of the commutative ring R. Show that r and s
are linearly dependent over R.
4. In Problem 3, regard I as an R-module. Can I be free?
5. Give an example of an infinite abelian group that is a free Z-module, and an example of an infinite abelian group that is not free.
6. Show that a module M is free if and only if M has a subset S such that any function
f from S to a module N can be extended uniquely to a module homomorphism from
M to N .

7. Let M be a free module, expressed as the direct sum of α copies of the underlying ring R, where α and |R| are infinite cardinals. Find the cardinality of M.
8. In Problem 7, assume that all bases B have the same cardinality, e.g., R is commutative. Find the cardinality of B.


4.4 Homomorphisms and Matrices
Suppose that M is a free R-module with a finite basis of n elements v1, . . . , vn, sometimes called a free module of rank n. We know from Section 4.3 that M is isomorphic to the direct sum of n copies of R. Thus we can regard M as Rⁿ, the set of all n-tuples with components in R. Addition and scalar multiplication are performed componentwise, as in (4.1.3), Example 3. Note also that the direct sum coincides with the direct product, since we are summing only finitely many modules.
Let N be a free R-module of rank m, with basis w1, . . . , wm, and suppose that f is a module homomorphism from M to N. Just as in the familiar case of a linear transformation on a finite-dimensional vector space, we are going to represent f by a matrix. For each j, f(vj) is a linear combination of the basis elements wi, so that

f(vj) = Σ_{i=1}^{m} aij wi,  j = 1, . . . , n    (1)

where the aij belong to R.
It is natural to associate the m × n matrix A with the homomorphism f, and it appears that we have an isomorphism of some sort, but an isomorphism of what? If f and g are homomorphisms of M into N, then f and g can be added (and subtracted): (f + g)(x) = f(x) + g(x). If f is represented by the matrix A and g by B, then f + g corresponds to A + B. This gives us an abelian group isomorphism of HomR(M, N), the set of all R-module homomorphisms from M to N, and Mmn(R), the set of all m × n matrices with entries in R. In addition, Mmn(R) is an R-module, so it is tempting to say "obviously, we have an R-module isomorphism". But we must be very careful here. If f ∈ HomR(M, N) and s ∈ R, we can define sf in the natural way: (sf)(x) = sf(x). However, if we carry out the "routine" check that sf ∈ HomR(M, N), there is one step that causes alarm bells to go off:

(sf)(rx) = sf(rx) = srf(x),  but  r((sf)(x)) = rsf(x),

and the two expressions can disagree if R is not commutative. Thus HomR(M, N) need not be an R-module. Let us summarize what we have so far.

4.4.1 The Correspondence Between Homomorphisms and Matrices
Associate with each f ∈ HomR(M, N) a matrix A as in (1) above. This yields an abelian group isomorphism, and also an R-module isomorphism if R is commutative.
Now let m = n, so that the dimensions are equal and the matrices are square, and take vi = wi for all i. A homomorphism from M to itself is called an endomorphism of M, and we use the notation EndR(M) for HomR(M, M). Since EndR(M) is a ring under composition of functions, and Mn(R) is a ring under matrix multiplication, it is plausible to conjecture that we have a ring isomorphism. If f corresponds to A and g to B, then we can apply g to both sides of (1) to obtain

g(f(vj)) = Σ_{i=1}^{n} aij Σ_{k=1}^{n} bki vk = Σ_{k=1}^{n} (Σ_{i=1}^{n} aij bki) vk.    (2)

If R is commutative, then aij bki = bki aij, and the matrix corresponding to gf = g ∘ f is BA, as we had hoped. In the noncommutative case, we will not be left empty-handed if we define the opposite ring Rᵒ, which has exactly the same elements as R and the same addition structure. However, multiplication is done backwards, i.e., ab in Rᵒ is ba in R. It is convenient to attach a superscript o to the elements of Rᵒ, so that

aᵒbᵒ = ba (more precisely, aᵒbᵒ = (ba)ᵒ).

Thus in (2) we have aij bki = bkiᵒ aijᵒ. To summarize,

The endomorphism ring EndR(M) is isomorphic to the ring of n × n matrices with coefficients in the opposite ring Rᵒ. If R is commutative, then EndR(M) is ring-isomorphic to Mn(R).
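As a quick sanity check of the correspondence, the following sketch works over R = Z, which is commutative, so composition corresponds to the ordinary product BA; all helper names here are our own, not from the text.

```python
def apply(A, x):
    """Apply the homomorphism with matrix A (m x n) to x in Z^n; by (1),
    f(v_j) = sum_i a_ij w_i, which is the usual matrix-vector product."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def matmul(B, A):
    """Ordinary matrix product BA over Z."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

A = [[1, 2], [0, 3], [4, 0]]    # f : Z^2 -> Z^3
B = [[1, 0, 1], [2, 1, 0]]      # g : Z^3 -> Z^2
x = [5, -7]
assert apply(B, apply(A, x)) == apply(matmul(B, A), x)   # g(f(x)) = (BA)x
```

Over a noncommutative R the corresponding assertion would fail in general, which is exactly why the opposite ring enters.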

4.4.2 Preparation For The Smith Normal Form
We now set up some basic machinery to be used in connection with the Smith normal form and its applications. Assume that M is a free Z-module of rank n, with basis x1, . . . , xn, and that K is a submodule of M with finitely many generators u1, . . . , um. (We say that K is finitely generated.) We change to a new basis y1, . . . , yn via Y = PX, where X [resp. Y] is a column vector with components xi [resp. yi]. Since X and Y are bases, the n × n matrix P must be invertible, and we need to be very clear on what this means. If the determinant of P is nonzero, we can construct P⁻¹, for example by the "adjoint divided by determinant" formula given in Cramer's rule. But the underlying ring is Z, not Q, so we require that the coefficients of P⁻¹ be integers. (For a more transparent equivalent condition, see Problem 1.) Similarly, we are going to change generators of K via V = QU, where Q is an invertible m × m matrix and U is a column vector with components ui.
The generators of K are linear combinations of basis elements, so we have an equation of the form U = AX, where A is an m × n matrix called the relations matrix. Thus

V = QU = QAX = QAP⁻¹Y,

so the new relations matrix is

B = QAP⁻¹.

Thus B is obtained from A by pre- and postmultiplying by invertible matrices, and we say that A and B are equivalent. We will see that two matrices are equivalent iff they have the same Smith normal form. The point we wish to emphasize now is that if we know the matrix P, we can compute the new basis Y, and if we know the matrix Q, we can compute the new system of generators V. In our applications, P and Q will be constructed by elementary row and column operations.

Problems For Section 4.4
1. Show that a square matrix P over the integers has an inverse with integer entries if and only if P is unimodular, that is, the determinant of P is ±1.
2. Let V be the direct sum of the R-modules V1 , . . . , Vn , and let W be the direct sum of
R-modules W1 , . . . , Wm . Indicate how a module homomorphism from V to W can be
represented by a matrix. (The entries of the matrix need not be elements of R.)
3. Continuing Problem 2, show that if Vⁿ is the direct sum of n copies of the R-module V, then we have a ring isomorphism

EndR(Vⁿ) ≅ Mn(EndR(V)).

4. Show that if R is regarded as an R-module, then EndR(R) is isomorphic to the opposite ring Rᵒ.
5. Let R be a ring, and let f ∈ EndR (R). Show that for some r ∈ R we have f (x) = xr
for all x ∈ R.
6. Let M be a free R-module of rank n. Show that EndR (M ) ∼ Mn (Ro ), a ring isomor-
=
phism.
7. Continuing Problem 6, if R is commutative, show that the ring isomorphism is in fact
an R-algebra isomorphism.


4.5 Smith Normal Form
We are going to describe a procedure that is very similar to reduction of a matrix to echelon form. The result is that every matrix over a principal ideal domain is equivalent to a matrix in Smith normal form. Explicitly, the Smith matrix has nonzero entries only on the main diagonal. The main diagonal entries are, from the top, a1, . . . , ar (possibly followed by zeros), where the ai are nonzero and ai divides ai+1 for all i.
We will try to convey the basic ideas via a numerical example. This will allow us to give informal but convincing proofs of some major theorems. A formal development is given in Jacobson, Basic Algebra I, Chapter 3. All our computations will be in the ring of integers, but we will indicate how the results can be extended to an arbitrary principal ideal domain. Let's start with the following matrix:

    [  0   0  22   0 ]
    [ -2   2  -6  -4 ]
    [  2   2   6   8 ]

As in (4.4.2), we assume a free Z-module M with basis x1, x2, x3, x4, and a submodule K generated by u1, u2, u3, where u1 = 22x3, u2 = −2x1 + 2x2 − 6x3 − 4x4, u3 = 2x1 + 2x2 + 6x3 + 8x4. The first step is to bring the smallest positive integer to the 1-1 position. Thus interchange rows 1 and 3 to obtain

    [  2   2   6   8 ]
    [ -2   2  -6  -4 ]
    [  0   0  22   0 ]

Since all entries in column 1, and similarly in row 1, are divisible by 2, we can pivot about the 1-1 position, in other words, use the 1-1 entry to produce zeros. Thus add row 1 to row 2 to get

    [  2   2   6   8 ]
    [  0   4   0   4 ]
    [  0   0  22   0 ]

Add −1 times column 1 to column 2, then add −3 times column 1 to column 3, and add −4 times column 1 to column 4. The result is

    [  2   0   0   0 ]
    [  0   4   0   4 ]
    [  0   0  22   0 ]

Now we have "peeled off" the first row and column, and we bring the smallest positive integer to the 2-2 position. It's already there, so no action is required. Furthermore, the 2-2 element is a multiple of the 1-1 element, so again no action is required. Pivoting about the 2-2 position, we add −1 times column 2 to column 4, and we have

    [  2   0   0   0 ]
    [  0   4   0   0 ]
    [  0   0  22   0 ]

Now we have peeled off the first two rows and columns, and we bring the smallest positive integer to the 3-3 position; again it's already there. But 22 is not a multiple of 4, so we have more work to do. Add row 3 to row 2 to get

    [  2   0   0   0 ]
    [  0   4  22   0 ]
    [  0   0  22   0 ]

Again we pivot about the 2-2 position; 4 does not divide 22, but if we add −5 times column 2 to column 3, we have

    [  2   0   0   0 ]
    [  0   4   2   0 ]
    [  0   0  22   0 ]

Interchange columns 2 and 3 to get

    [  2   0   0   0 ]
    [  0   2   4   0 ]
    [  0  22   0   0 ]

Add −11 times row 2 to row 3 to obtain

    [  2   0   0   0 ]
    [  0   2   4   0 ]
    [  0   0 -44   0 ]

Finally, add −2 times column 2 to column 3, and then (as a convenience to get rid of the minus sign) multiply row (or column) 3 by −1; the result is

    [  2   0   0   0 ]
    [  0   2   0   0 ]
    [  0   0  44   0 ]

which is the Smith normal form of the original matrix. Although we had to backtrack to produce a new pivot element in the 2-2 position, the new element is smaller than the old one (since it is a remainder after division by the original number). Thus we cannot go into an infinite loop, and the algorithm will indeed terminate in a finite number of steps. In view of (4.4.2), we have the following interpretation.
We have a new basis y1, y2, y3, y4 for M, and new generators v1, v2, v3 for K, where v1 = 2y1, v2 = 2y2, and v3 = 44y3. In fact since the vj's are nonzero multiples of the corresponding yj's, they are linearly independent, and consequently form a basis of K. The new basis and set of generators can be expressed in terms of the original sets; see Problems 1-3 for the technique.
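The whole reduction is easy to mechanize over Z. Here is a compact sketch of our own (not the text's code), following the pivoting strategy just described: bring a smallest nonzero entry to the pivot position, reduce, and repeat; a final gcd/lcm pass is one standard way to enforce the divisibility condition.

```python
from math import gcd

def invariant_factors(A):
    """Diagonalize an integer matrix by row/column operations and return
    the invariant factors (a sketch; helper names are ours)."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    r = min(m, n)
    for t in range(r):
        while True:
            entries = [(abs(A[i][j]), i, j)
                       for i in range(t, m) for j in range(t, n) if A[i][j]]
            if not entries:
                break                        # remaining block is zero
            _, i, j = min(entries)
            A[t], A[i] = A[i], A[t]          # smallest entry to the pivot row
            for row in A:                    # ... and to the pivot column
                row[t], row[j] = row[j], row[t]
            if A[t][t] < 0:
                A[t] = [-v for v in A[t]]
            p = A[t][t]
            for i in range(t + 1, m):        # clear column t (row operations)
                q = A[i][t] // p
                A[i] = [a - q * b for a, b in zip(A[i], A[t])]
            for j in range(t + 1, n):        # clear row t (column operations)
                q = A[t][j] // p
                for i in range(m):
                    A[i][j] -= q * A[i][t]
            if all(A[i][t] == 0 for i in range(t + 1, m)) and \
               all(A[t][j] == 0 for j in range(t + 1, n)):
                break   # otherwise a smaller remainder becomes the next pivot
    d = [abs(A[i][i]) for i in range(r)]
    for _ in range(r):                       # enforce a_i | a_{i+1}
        for i in range(r - 1):
            a, b = d[i], d[i + 1]
            if a and b % a:
                g = gcd(a, b)
                d[i], d[i + 1] = g, a * b // g
    return [v for v in d if v]

A = [[0, 0, 22, 0], [-2, 2, -6, -4], [2, 2, 6, 8]]
print(invariant_factors(A))   # [2, 2, 44], as computed by hand above
```

Backtracking is bounded for the reason given above: every repeat of the inner loop replaces the pivot by a strictly smaller positive remainder.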
The above discussion indicates that the Euclidean algorithm guarantees that the Smith normal form can be computed in finitely many steps. Therefore the Smith procedure can be carried out in any Euclidean domain. In fact we can generalize to a principal ideal domain. Suppose that at a particular stage of the computation, the element a occupies the 1-1 position of the Smith matrix S, and the element b is in row 1, column 2. To use a as a pivot to eliminate b, let d be the greatest common divisor of a and b, and let r and s be elements of R such that ar + bs = d (see (2.7.2)). We postmultiply the Smith matrix by a matrix T of the following form (to aid in the visualization, we give a concrete 5 × 5 example):

    [ r   b/d   0   0   0 ]
    [ s  -a/d   0   0   0 ]
    [ 0    0    1   0   0 ]
    [ 0    0    0   1   0 ]
    [ 0    0    0   0   1 ]
The 2 × 2 matrix in the upper left hand corner has determinant −1, and is therefore invertible over R. The element in the 1-1 position of ST is ar + bs = d, and the element in the 1-2 position is ab/d − ba/d = 0, as desired. We have replaced the pivot element a by a divisor d, and this will decrease the number of prime factors, guaranteeing the finite termination of the algorithm. Similarly, if b were in the 2-1 position, we would premultiply S by the transpose of T; thus in the upper left hand corner we would have

    [  r    s  ]
    [ b/d -a/d ]
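For integers, r and s come from the extended Euclidean algorithm, and the corner block can be checked directly; the following is a small sketch of our own.

```python
def ext_gcd(a, b):
    """Return (d, r, s) with a*r + b*s = d = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    d, x, y = ext_gcd(b, a % b)
    return (d, y, x - (a // b) * y)

a, b = 4, 22
d, r, s = ext_gcd(a, b)                 # 4*(-5) + 22*1 = 2
T = [[r, b // d], [s, -a // d]]         # corner block; determinant is -1
row = (a * T[0][0] + b * T[1][0], a * T[0][1] + b * T[1][1])
print(row)                              # (2, 0): pivot a replaced by d, b eliminated
```

The printed row is (d, 0), matching the computation of the 1-1 and 1-2 positions of ST.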

Problems For Section 4.5
1. Let A be the matrix

    [ -2   3   0 ]
    [ -3   3   0 ]
    [-12  12   6 ]

over the integers. Find the Smith normal form of A. (It is convenient to begin by adding column 2 to column 1.)
2. Continuing Problem 1, find the matrices P and Q, and verify that QAP⁻¹ is the Smith normal form.
3. Continuing Problem 2, if the original basis for M is {x1, x2, x3} and the original set of generators of K is {u1, u2, u3}, find the new basis and set of generators.
It is intuitively reasonable, but a bit messy to prove, that if a matrix A over a PID is multiplied by an invertible matrix, then the greatest common divisor of all the i × i minors of A is unchanged. Accept this fact in doing Problems 4 and 5.
4. The nonzero components ai of the Smith normal form S of A are called the invariant factors of A. Show that the invariant factors of A are unique (up to associates).
5. Show that two m × n matrices are equivalent if and only if they have the same invariant factors, i.e. (by Problem 4), if and only if they have the same Smith normal form.
6. Recall that when a matrix over a field is reduced to row-echelon form (only row operations are involved), a pivot column is followed by non-pivot columns whose entries are zero in all rows below the pivot element. When a similar computation is carried out over the integers, or more generally over a Euclidean domain, the resulting matrix is said to be in Hermite normal form. We indicate the procedure in a typical example. Let

    A = [  6   4  13   5 ]
        [  9   6   0   7 ]
        [ 12   8  -1  12 ]
Carry out the following sequence of steps:
1. Add −1 times row 1 to row 2
2. Interchange rows 1 and 2
3. Add −2 times row 1 to row 2, and then add −4 times row 1 to row 3
4. Add −1 times row 2 to row 3
5. Interchange rows 2 and 3
6. Add −3 times row 2 to row 3
7. Interchange rows 2 and 3
8. Add −4 times row 2 to row 3
9. Add 5 times row 2 to row 1 (this corresponds to choosing 0, 1, . . . , m − 1 as a complete system of residues mod m)
10. Add 2 times row 3 to row 1, and then add row 3 to row 2

We now have reduced A to Hermite normal form.
7. Continuing Problem 6, consider the simultaneous equations

6x + 4y + 13z ≡ 5,  9x + 6y ≡ 7,  12x + 8y − z ≡ 12  (mod m).

For which values of m ≥ 2 will the equations be consistent?

4.6 Fundamental Structure Theorems
The Smith normal form yields a wealth of information about modules over a principal ideal domain. In particular, we will be able to see exactly what finitely generated abelian groups must look like.
Before we proceed, we must mention a result that we will use now but not prove until later (see (7.5.5), Example 1, and (7.5.9)). If M is a finitely generated module over a PID R, then every submodule of M is finitely generated. [R is a Noetherian ring, hence M is a Noetherian R-module.] To avoid gaps in the current presentation, we can restrict our attention to finitely generated submodules.


4.6.1 Simultaneous Basis Theorem
Let M be a free module of finite rank n ≥ 1 over the PID R, and let K be a submodule of M. Then there is a basis {y1, . . . , yn} for M and nonzero elements a1, . . . , ar ∈ R such that r ≤ n, ai divides ai+1 for all i, and {a1y1, . . . , aryr} is a basis for K.

Proof. This is a corollary of the construction of the Smith normal form, as explained in Section 4.5. ♣

4.6.2 Corollary
Let M be a free module of finite rank n over the PID R. Then every submodule of M is free of rank at most n.

Proof. By (4.6.1), the submodule K has a basis with r ≤ n elements. ♣

In (4.6.2), the hypothesis that M has finite rank can be dropped, as the following sketch suggests. We can well-order the generators uα of K, and assume as a transfinite induction hypothesis that for all β < α, the submodule Kβ spanned by all the generators up to uβ is free of rank at most that of M, and that if γ < β, then the basis of Kγ is contained in the basis of Kβ. The union of the bases Sβ of the Kβ is a basis Sα for Kα. Furthermore, the inductive step preserves the bound on rank. This is because |Sβ| ≤ rank M for all β < α, and |Sα| is the smallest cardinal bounded below by all |Sβ|, β < α. Thus |Sα| ≤ rank M.


4.6.3 Fundamental Decomposition Theorem
Let M be a finitely generated module over the PID R. Then there are ideals I1 = ⟨a1⟩, I2 = ⟨a2⟩, . . . , In = ⟨an⟩ of R such that I1 ⊇ I2 ⊇ · · · ⊇ In (equivalently, a1 | a2 | · · · | an) and

M ≅ R/I1 ⊕ R/I2 ⊕ · · · ⊕ R/In.

Thus M is a direct sum of cyclic modules.

Proof. By (4.3.6), M is the image of a free module Rⁿ under a homomorphism f. If K is the kernel of f, then by (4.6.1) we have a basis y1, . . . , yn for Rⁿ and a corresponding basis a1y1, . . . , aryr for K. We set ai = 0 for r < i ≤ n. Then

M ≅ Rⁿ/K ≅ (Ry1 ⊕ · · · ⊕ Ryn)/(Ra1y1 ⊕ · · · ⊕ Ranyn) ≅ ⊕_{i=1}^{n} Ryi/Raiyi.

(To justify the last step, apply the first isomorphism theorem to the map

r1y1 + · · · + rnyn → (r1y1 + Ra1y1, . . . , rnyn + Ranyn).)

But

Ryi/Raiyi ≅ R/Rai,

as can be seen via an application of the first isomorphism theorem to the map r → ryi + Raiyi. Thus if Ii = Rai, i = 1, . . . , n, we have

M ≅ ⊕_{i=1}^{n} R/Ii

and the result follows. ♣
Remark. It is plausible, and can be proved formally, that the uniqueness of invariant factors in the Smith normal form implies the uniqueness of the decomposition (4.6.3). Intuitively, the decomposition is completely specified by the sequence a1, . . . , an, as the proof of (4.6.3) indicates.

4.6.4 Finite Abelian Groups
Suppose that G is a finite abelian group of order 1350; what can we say about G? In the decomposition theorem (4.6.3), the components of G are of the form Z/Zai, that is, cyclic groups of order ai. We must have ai | ai+1 for all i, and since the order of a direct sum is the product of the orders of the components, we have a1 · · · ar = 1350.
The first step in the analysis is to find the prime factorization of 1350, which is (2)(3³)(5²). One possible choice of the ai is a1 = 3, a2 = 3, a3 = 150. It is convenient to display the prime factors of the ai, which are called elementary divisors, as follows:

a1 =   3 = 2⁰ 3¹ 5⁰
a2 =   3 = 2⁰ 3¹ 5⁰
a3 = 150 = 2¹ 3¹ 5²

Since a1a2a3 = 2¹ 3³ 5², the sum of the exponents of 2 must be 1, the sum of the exponents of 3 must be 3, and the sum of the exponents of 5 must be 2. A particular distribution of exponents of a prime p corresponds to a partition of the sum of the exponents. For example, if the exponents of p were 0, 1, 1 and 2, this would correspond to a partition of 4 as 1 + 1 + 2. In the above example, the partitions are 1 = 1, 3 = 1 + 1 + 1, 2 = 2.

We can count the number of abelian groups of order 1350 (up to isomorphism) by counting partitions. There is only one partition of 1, there are two partitions of 2 (2 and 1 + 1) and three partitions of 3 (3, 1 + 2 and 1 + 1 + 1). [This pattern does not continue; there are five partitions of 4, namely 4, 1 + 3, 1 + 1 + 2, 1 + 1 + 1 + 1, 2 + 2, and seven partitions of 5, namely 5, 1 + 4, 1 + 1 + 3, 1 + 1 + 1 + 2, 1 + 1 + 1 + 1 + 1, 1 + 2 + 2, 2 + 3.] We specify a group by choosing a partition of 1, a partition of 3 and a partition of 2, and the number of possible choices is (1)(3)(2) = 6. Each choice of a sequence of partitions produces a different sequence of invariant factors. Here is the entire list; the above example appears as entry (5).

(1) a1 = 2¹ 3³ 5² = 1350, G ≅ Z1350
(2) a1 = 2⁰ 3⁰ 5¹ = 5, a2 = 2¹ 3³ 5¹ = 270, G ≅ Z5 ⊕ Z270
(3) a1 = 2⁰ 3¹ 5⁰ = 3, a2 = 2¹ 3² 5² = 450, G ≅ Z3 ⊕ Z450
(4) a1 = 2⁰ 3¹ 5¹ = 15, a2 = 2¹ 3² 5¹ = 90, G ≅ Z15 ⊕ Z90
(5) a1 = 2⁰ 3¹ 5⁰ = 3, a2 = 2⁰ 3¹ 5⁰ = 3, a3 = 2¹ 3¹ 5² = 150, G ≅ Z3 ⊕ Z3 ⊕ Z150
(6) a1 = 2⁰ 3¹ 5⁰ = 3, a2 = 2⁰ 3¹ 5¹ = 15, a3 = 2¹ 3¹ 5¹ = 30, G ≅ Z3 ⊕ Z15 ⊕ Z30

In entry (6) for example, the maximum number of summands in a partition is 3 (= 1 + 1 + 1), and this reveals that there will be three invariant factors. The partition 2 = 1 + 1 has only two summands, and it is "pushed to the right" so that 5¹ appears in a2 and a3 but not a1. (Remember that we must have a1 | a2 | a3.) Also, we can continue to decompose some of the components in the direct sum representation of G. (If m and n are relatively prime, then Zmn ≅ Zm ⊕ Zn by the Chinese remainder theorem.) However, this does not change the conclusion that there are only 6 mutually nonisomorphic abelian groups of order 1350.
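The count (1)(3)(2) = 6 is easy to mechanize: one partition per prime exponent. Here is a short sketch with our own helper functions that reproduces it.

```python
def num_partitions(k):
    """Number of partitions of k, by the standard dynamic program."""
    p = [1] + [0] * k
    for part in range(1, k + 1):
        for s in range(part, k + 1):
            p[s] += p[s - part]
    return p[k]

def num_abelian_groups(n):
    """Multiply partition counts of the prime exponents of n."""
    count, d = 1, 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        if e:
            count *= num_partitions(e)
        d += 1
    if n > 1:
        count *= num_partitions(1)
    return count

print(num_abelian_groups(1350))   # 6 = p(1) * p(3) * p(2), as in the text
```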
Before examining infinite abelian groups, let's come back to the fundamental decomposition theorem.

4.6.5 Definitions and Comments
If x belongs to the R-module M, where R is any integral domain, then x is a torsion element if rx = 0 for some nonzero r ∈ R. The torsion submodule T of M is the set of torsion elements. (T is indeed a submodule; if rx = 0 and sy = 0, then rs(x + y) = 0.) M is a torsion module if T is all of M, and M is torsion-free if T consists of 0 alone, in other words, rx = 0 implies that either r = 0 or x = 0. A free module must be torsion-free, by definition of linear independence. Now assume that R is a PID, and decompose M as in (4.6.3), where a1, . . . , ar are nonzero and ar+1 = · · · = an = 0. Each module R/⟨ai⟩, 1 ≤ i ≤ r, is torsion (it is annihilated by ai), and the R/⟨ai⟩, r + 1 ≤ i ≤ n, are copies of R. Thus ⊕_{i=r+1}^{n} R/⟨ai⟩ is free. We conclude that

(*) every finitely generated module over a PID is the direct sum of its torsion submodule and a free module

and

(**) every finitely generated torsion-free module over a PID is free.

In particular, a finitely generated abelian group is the direct sum of a number (possibly zero) of finite cyclic groups and a free abelian group (possibly {0}).

4.6.6 Abelian Groups Specified by Generators and Relations
Suppose that we have a free abelian group F with basis x1, x2, x3, and we impose the following constraints on the xi:

2x1 + 2x2 + 8x3 = 0,  −2x1 + 2x2 + 4x3 = 0.    (1)

What we are doing is forming a "submodule of relations" K with generators

u1 = 2x1 + 2x2 + 8x3  and  u2 = −2x1 + 2x2 + 4x3    (2)

and we are identifying every element in K with zero. This process yields the abelian group G = F/K, which is generated by x1 + K, x2 + K and x3 + K. The matrix associated with (2) is

    [  2   2   8 ]
    [ -2   2   4 ]

and a brief computation gives the Smith normal form

    [  2   0   0 ]
    [  0   4   0 ]

Thus we have a new basis y1, y2, y3 for F and new generators 2y1, 4y2 for K. The quotient group F/K is generated by y1 + K, y2 + K and y3 + K, with 2(y1 + K) = 4(y2 + K) = 0 + K. In view of (4.6.3) and (4.6.5), we must have

F/K ≅ Z2 ⊕ Z4 ⊕ Z.
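The same conclusion can be read off mechanically from the relations matrix, using the invariant_factors sketch given after Section 4.5 (our own code, not the text's):

```python
# Relations matrix of (2); invariant_factors is the sketch from Section 4.5.
K = [[2, 2, 8],
     [-2, 2, 4]]
print(invariant_factors(K))   # [2, 4]: torsion factors Z_2 and Z_4, and
# 3 generators minus 2 relations leave one free factor Z.
```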

Canonical forms of a square matrix A can be developed by reducing the matrix xI − A to Smith normal form. In this case, R is the polynomial ring F[X] where F is a field. But the analysis is quite lengthy, and I prefer an approach in which the Jordan canonical form is introduced at the very beginning, and then used to prove some basic results in the theory of linear operators; see Ash, A Primer of Abstract Mathematics, MAA 1998.

Problems For Section 4.6
1. Classify all abelian groups of order 441.
2. Classify all abelian groups of order 40.
3. Identify the abelian group given by generators x1, x2, x3 and relations

x1 + 5x2 + 3x3 = 0,  2x1 − x2 + 7x3 = 0,  3x1 + 4x2 + 2x3 = 0.

4. In (4.6.6), suppose we cancel a factor of 2 in Equation (1). This changes the matrix associated with (2) to

    [  1   1   4 ]
    [ -1   1   2 ]

whose Smith normal form differs from that given in the text. What's wrong?
5. Let M, N and P be abelian groups. If M ⊕ N ≅ M ⊕ P, show by example that N need not be isomorphic to P.
6. In Problem 5, show that M ⊕ N ≅ M ⊕ P does imply N ≅ P if M, N and P are finitely generated.


4.7 Exact Sequences and Diagram Chasing
4.7.1 Definitions and Comments
Suppose that the R-module M is the direct sum of the submodules A and B. Let f be the inclusion or injection map of A into M (simply the identity function on A), and let g be the natural projection of M on B, given by g(a + b) = b, a ∈ A, b ∈ B. The image of f, namely A, coincides with the kernel of g, and we say that the sequence

    A --f--> M --g--> B    (1)

is exact at M. A longer (possibly infinite) sequence of homomorphisms is said to be exact if it is exact at each junction, that is, everywhere except at the left and right endpoints, if they exist.
There is a natural exact sequence associated with any module homomorphism g : M → N, namely

    0 --> A --f--> M --g--> B --> 0    (2)

In the diagram, A is the kernel of g, f is the injection map, and B is the image of g. A five term exact sequence with zero modules at the ends, as in (2), is called a short exact sequence. Notice that exactness at A is equivalent to ker f = 0, i.e., injectivity of f. Exactness at B is equivalent to im g = B, i.e., surjectivity of g. Notice also that by the first isomorphism theorem, we may replace B by M/A and g by the canonical map of M onto M/A, while preserving exactness.
Now let's come back to (1), where M is the direct sum of A and B, and attach zero modules to produce the short exact sequence (2). If we define h as the injection of B into M and e as the projection of M on A, we have (see (3) below) g ∘ h = 1 and e ∘ f = 1, where 1 stands for the identity map.

    0 --> A --f--> M --g--> B --> 0, with e : M → A and h : B → M in the reverse directions    (3)

The short exact sequence (2) is said to split on the right if there is a homomorphism h : B → M such that g ∘ h = 1, and split on the left if there is a homomorphism e : M → A such that e ∘ f = 1. These conditions turn out to be equivalent, and both are equivalent to the statement that M is essentially the direct sum of A and B. "Essentially" means that not only is M isomorphic to A ⊕ B, but f can be identified with the injection of A into the direct sum, and g with the projection of the direct sum on B. We will see how to make this statement precise, but first we must turn to diagram chasing, which is a technique for proving assertions about commutative diagrams by sliding from one vertex to another. The best way to get accustomed to the method is to do examples. We will work one out in great detail in the text, and there will be more practice in the exercises, with solutions provided.
We will use the shorthand gf for g ∘ f and fm for f(m).

4.7.2 The Five Lemma
Consider the following commutative diagram with exact rows.

    D --e--> A --f--> M --g--> B --h--> C
    |s       |t       |u       |v       |w
    D′--e′-> A′--f′-> M′--g′-> B′--h′-> C′

If s, t, v and w are isomorphisms, so is u. (In fact, the hypotheses on s and w can be
weakened to s surjective and w injective.)
Proof. The two parts of the proof are of interest in themselves, and are frequently called the "four lemma", since they apply to diagrams with four rather than five modules in each row.
(i) If t and v are surjective and w is injective, then u is surjective.
(ii) If s is surjective and t and v are injective, then u is injective.
[The pattern suggests a "duality" between injective and surjective maps. This idea will be explored in Chapter 10; see (10.1.4).] The five lemma follows from (i) and (ii). To prove (i), let m′ ∈ M′. Then g′m′ ∈ B′, and since v is surjective, we can write g′m′ = vb for some b ∈ B. By commutativity of the square on the right, h′vb = whb. But h′vb = h′g′m′ = 0 by exactness of the bottom row at B′, and we then have whb = 0. Thus hb ∈ ker w, and since w is injective, we have hb = 0, so that b ∈ ker h = im g by exactness of the top row at B. So we can write b = gm for some m ∈ M. Now g′m′ = vb (see above) = vgm = g′um by commutativity of the square MBB′M′. Therefore m′ − um ∈ ker g′ = im f′ by exactness of the bottom row at M′. Let m′ − um = f′a′ for some a′ ∈ A′. Since t is surjective, a′ = ta for some a ∈ A, and by commutativity of the square AMM′A′, f′ta = ufa, so m′ − um = ufa, so m′ = u(m + fa). Consequently, m′ belongs to the image of u, proving that u is surjective.
To prove (ii), suppose m ∈ ker u. By commutativity, g′um = vgm, so vgm = 0. Since v is injective, gm = 0. Thus m ∈ ker g = im f by exactness, say m = fa. Then

0 = um = ufa = f′ta by commutativity. Thus ta ∈ ker f′ = im e′ by exactness. If ta = e′d′, then since s is surjective, we can write d′ = sd, so ta = e′sd. By commutativity, e′sd = ted, so ta = ted. By injectivity of t, a = ed. Therefore m = fa = fed = 0 by exactness. We conclude that u is injective. ♣

4.7.3 Corollary: The Short Five Lemma
Consider the following commutative diagram with exact rows. (Throughout this section, all maps in commutative diagrams and exact sequences are assumed to be R-module homomorphisms.)

    0 --> A --f--> M --g--> B --> 0
          |t       |u       |v
    0 --> A′--f′-> M′--g′-> B′--> 0

If t and v are isomorphisms, so is u.
Proof. Apply the five lemma with C = D = C′ = D′ = 0, and s and w the identity maps. ♣
We can now deal with splitting of short exact sequences.

4.7.4 Proposition
Let

    0 --> A --f--> M --g--> B --> 0

be a short exact sequence. The following conditions are equivalent, and define a split exact sequence.
(i) The sequence splits on the right.
(ii) The sequence splits on the left.
(iii) There is an isomorphism u of M and A ⊕ B such that the following diagram is commutative.

    0 --> A --f--> M -----g--> B --> 0
          ‖        |u          ‖
    0 --> A --i--> A ⊕ B --π-> B --> 0

Thus M is isomorphic to the direct sum of A and B, and in addition, f can be identified with the injection i of A into A ⊕ B, and g with the projection π of the direct sum onto B. (The double vertical bars indicate the identity map.)

Proof. It follows from our earlier discussion of diagram (3) that (iii) implies (i) and (ii). To show that (i) implies (iii), let h be a homomorphism of B into M such that gh = 1. We claim that

M = ker g ⊕ h(B).

First, suppose that m ∈ M. Write m = (m − hgm) + hgm; then hgm ∈ h(B) and g(m − hgm) = gm − ghgm = gm − 1gm = gm − gm = 0. Second, suppose m ∈ ker g ∩ h(B), with m = hb. Then 0 = gm = ghb = 1b = b, so m = hb = h0 = 0, proving the claim. Now since ker g = im f by exactness, we may express any m ∈ M in the form m = fa + hb. We take um = a + b, which makes sense because both f and h are injective and f(A) ∩ h(B) = 0. This forces the diagram of (iii) to be commutative, and u is therefore an isomorphism by the short five lemma. Finally, we show that (ii) implies (iii). Let e be a homomorphism of M into A such that ef = 1. In this case, we claim that

M = f(A) ⊕ ker e.

If m ∈ M then m = fem + (m − fem) and fem ∈ f(A), e(m − fem) = em − efem = em − em = 0. If m ∈ f(A) ∩ ker e, then, with m = fa, we have 0 = em = efa = a, so m = 0, and the claim is verified. Now if m ∈ M we have m = fa + m′ with a ∈ A and m′ ∈ ker e. We take u(m) = a + g(m′) = a + gm since gf = 0. (The definition of u is unambiguous because f is injective and f(A) ∩ ker e = 0.) The choice of u forces the diagram to be commutative, and again u is an isomorphism by the short five lemma. ♣

4.7.5 Corollary
If the sequence

    0 --> A --f--> M --g--> B --> 0

is split exact with splitting maps e and h as in (3), then the "backwards" sequence

    0 <-- A <--e-- M <--h-- B <-- 0

is also split exact, with splitting maps g and f.
Proof. Simply note that gh = 1 and ef = 1. ♣
A device that I use to remember which way the splitting maps go (i.e., it's ef = 1, not fe = 1) is that the map that is applied first points inward toward the "center" M.

Problems For Section 4.7
Consider the following commutative diagram with exact rows:

    0 --> A --f--> B --g--> C
                   |v       |w
    0 --> A′--f′-> B′--g′-> C′

Our objective in Problems 1-3 is to find a homomorphism u : A → A′ such that the square ABB′A′, hence the entire diagram, is commutative.

1. Show that if u exists, it is unique.
2. If a ∈ A, show that vfa ∈ im f′.
3. If vfa = f′a′, define ua appropriately.

Now consider another commutative diagram with exact rows:

    A --f--> B --g--> C --> 0
    |w       |v
    A′--f′-> B′--g′-> C′--> 0

In Problems 4 and 5 we are to define u : C → C′ so that the diagram will commute.

4. If c ∈ C, then since g is surjective, c = gb for some b ∈ B. Write down the only possible definition of uc.
5. In Problem 4, b is not unique. Show that your definition of u does not depend on the particular b.
Problems 6-11 refer to the diagram of the short five lemma (4.7.3). Application of the four lemma is very efficient, but a direct attack is also good practice.

6. If t and v are injective, so is u.
7. If t and v are surjective, so is u.
8. If t is surjective and u is injective, then v is injective.
9. If u is surjective, so is v.

By Problems 8 and 9, if t and u are isomorphisms, so is v.

10. If u is injective, so is t.
11. If u is surjective and v is injective, then t is surjective.

Note that by Problems 10 and 11, if u and v are isomorphisms, so is t.

12. If you have not done so earlier, do Problem 8 directly, without appealing to the four lemma.
13. If you have not done so earlier, do Problem 11 directly, without appealing to the four lemma.
Chapter 5

Some Basic Techniques of Group Theory

5.1 Groups Acting on Sets
In this chapter we are going to analyze and classify groups, and, if possible, break down complicated groups into simpler components. To motivate the topic of this section, let's look at the following result.


5.1.1 Cayley's Theorem
Every group is isomorphic to a group of permutations.

Proof. The idea is that each element g in the group G corresponds to a permutation of the set G itself. If x ∈ G, then the permutation associated with g carries x into gx. If gx = gy, then premultiplying by g⁻¹ gives x = y. Furthermore, given any h ∈ G, we can solve gx = h for x. Thus the map x → gx is indeed a permutation of G. The map from g to its associated permutation is injective, because if gx = hx for all x ∈ G, then (take x = 1) g = h. In fact the map is a homomorphism, since the permutation associated with hg is multiplication by hg, which is multiplication by g followed by multiplication by h, h ∘ g for short. Thus we have an embedding of G into the group of all permutations of the set G. ♣
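The proof is easy to act out for a small concrete group; the sketch below (our own example) embeds Z4 under addition mod 4 into the permutations of its four elements and checks that multiplication goes over to composition.

```python
from itertools import product

n = 4                                     # the group Z_4 under addition mod n
def perm_of(g):
    """The permutation x -> g + x associated with g, as a tuple of images."""
    return tuple((g + x) % n for x in range(n))

# Injectivity: distinct g give distinct permutations.
assert len({perm_of(g) for g in range(n)}) == n
# Homomorphism: the permutation of h+g is "apply g, then h" (h o g).
for g, h in product(range(n), repeat=2):
    composed = tuple(perm_of(h)[perm_of(g)[x]] for x in range(n))
    assert composed == perm_of((h + g) % n)
```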

In Cayley's theorem, a group acts on itself in the sense that each g yields a permutation of G. We can generalize to the notion of a group acting on an arbitrary set.

5.1.2 Definitions and Comments
The group G acts on the set X if for each g ∈ G there is a mapping x → gx of X into itself, such that

(1) h(gx) = (hg)x for every g, h ∈ G
(2) 1x = x for every x ∈ X.

As in (5.1.1), x → gx defines a permutation of X. The main point is that the action of g is a permutation because it has an inverse, namely the action of g⁻¹. (Explicitly, the inverse of x → gx is y → g⁻¹y.) Again as in (5.1.1), the map from g to its associated permutation Φ(g) is a homomorphism of G into the group SX of permutations of X. But we do not necessarily have an embedding. If gx = hx for all x, then in (5.1.1) we were able to set x = 1, the identity element of G, but this resource is not available in general.
We have just seen that a group action induces a homomorphism from G to SX, and there is a converse assertion. If Φ is a homomorphism of G to SX, then there is a corresponding action, defined by gx = Φ(g)x, x ∈ X. Condition (1) holds because Φ is a homomorphism, and (2) holds because Φ(1) must be the identity of SX. The kernel of Φ is known as the kernel of the action; it is the set of all g ∈ G such that gx = x for all x, in other words, the set of g's that fix everything in X.

5.1.3 Examples
1. (The regular action) Every group acts on itself by multiplication on the left, as in (5.1.1). In this case, the homomorphism Φ is injective, and we say that the action is faithful.
[Similarly, we can define an action on the right by (xg)h = x(gh), x1 = x, and then G acts on itself by right multiplication. The problem is that Φ(gh) = Φ(h) ∘ Φ(g), an antihomomorphism. The damage can be repaired by writing function values as xf rather than f(x), or by defining the action of g to be multiplication on the right by g⁻¹. We will avoid the difficulty by restricting to actions on the left.]
2. (The trivial action) We take gx = x for all g ∈ G, x ∈ X. This action is highly unfaithful.
3. (Conjugation on elements) We use the notation g • x for the action of g on x, and we set g • x = gxg⁻¹, called the conjugate of x by g, for g and x in the group G. Since hgxg⁻¹h⁻¹ = (hg)x(hg)⁻¹ and 1x1⁻¹ = x, we have a legal action of G on itself. The kernel is

{g : gxg⁻¹ = x for all x}, that is, {g : gx = xg for all x}.

Thus the kernel is the set of elements that commute with everything in the group. This set is called the center of G, written Z(G).
4. (Conjugation on subgroups) If H is a subgroup of G, we take g • H = gHg⁻¹. Note that gHg⁻¹ is a subgroup of G, called the conjugate subgroup of H by g, since gh1g⁻¹gh2g⁻¹ = g(h1h2)g⁻¹ and (ghg⁻¹)⁻¹ = gh⁻¹g⁻¹. As in Example 3, we have a legal action of G on the set of subgroups of G.
5. (Conjugation on subsets) This is a variation of the previous example. In this case we let G act by conjugation on the collection of all subsets of G, not just subgroups. The verification that the action is legal is easier, because gHg⁻¹ is certainly a subset of G.
6. (Multiplication on left cosets) Let G act on the set of left cosets of a fixed subgroup H by g • (xH) = (gx)H. By definition of set multiplication, we have a legitimate action.
7. (Multiplication on subsets) Let G act on all subsets of G by g • S = gS = {gx : x ∈ S}. Again the action is legal by definition of set multiplication.
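Example 6 can be made concrete; the sketch below (our own, with permutations of {0, 1, 2} standing for S3) lets S3 act on the left cosets of a two-element subgroup and computes the kernel of the action, anticipating Problems 1 and 4.

```python
from itertools import permutations

def compose(p, q):                        # (p q)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(3))

G = list(permutations(range(3)))          # S_3
H = [(0, 1, 2), (1, 0, 2)]                # identity and one transposition
cosets = {tuple(sorted(compose(g, h) for h in H)) for g in G}
print(len(cosets))                        # 3 = [G : H] left cosets

def in_kernel(g):                         # g fixes every coset xH -> (gx)H
    return all(tuple(sorted(compose(g, x) for x in c)) == c for c in cosets)

print([g for g in G if in_kernel(g)])     # only the identity: the kernel
# (the core of H) is trivial here, although it always sits inside H.
```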


Problems For Section 5.1
1. Let G act on left cosets of H by multiplication, as in Example 6. Show that the kernel of the action is a subgroup of H.
2. Suppose that H is a proper subgroup of G of index n, and that G is a simple group, that is, G has no normal subgroups except G itself and {1}. Show that G can be embedded in Sn.
3. Suppose that G is an infinite simple group. Show that for every proper subgroup H of G, the index [G : H] is infinite.
4. Let G act on left cosets of H by multiplication. Show that the kernel of the action is

N = ∩_{x∈G} xHx⁻¹.

5. Continuing Problem 4, if K is any normal subgroup of G contained in H, show that K ≤ N. Thus N is the largest normal subgroup of G contained in H; N is called the core of H in G.
6. Here is some extra practice with left cosets of various subgroups. Let H and K be subgroups of G, and consider the map f which assigns to the coset g(H ∩ K) the pair of cosets (gH, gK). Show that f is well-defined and injective, and therefore

[G : H ∩ K] ≤ [G : H][G : K].

Thus (Poincaré) the intersection of finitely many subgroups of finite index also has finite index.
7. If [G : H] and [G : K] are finite and relatively prime, show that the inequality in the preceding problem is actually an equality.
8. Let H be a subgroup of G of finite index n, and let G act on left cosets xH by multiplication. Let N be the kernel of the action, so that N ≤ H by Problem 1. Show that [G : N] divides n!.
9. Let H be a subgroup of G of finite index n > 1. If |G| does not divide n!, show that G is not simple.
is not simple.





5.2 The Orbit-Stabilizer Theorem
5.2.1 Definitions and Comments
Suppose that the group G acts on the set X. If we start with the element x ∈ X and successively apply group elements in all possible ways, we get

B(x) = {gx : g ∈ G}

which is called the orbit of x under the action of G. The action is transitive (we also say that G acts transitively on X) if there is only one orbit, in other words, for any x, y ∈ X, there exists g ∈ G such that gx = y. Note that the orbits partition X, because they are the equivalence classes of the equivalence relation given by y ∼ x iff y = gx for some g ∈ G.
The stabilizer of an element x ∈ X is

G(x) = {g ∈ G : gx = x},

the set of elements that leave x fixed. A direct verification shows that G(x) is a subgroup. This is a useful observation because any set that appears as a stabilizer in a group action is guaranteed to be a subgroup; we need not bother to check each time.
Before proceeding to the main theorem, let's return to the examples considered in (5.1.3).

5.2.2 Examples
1. The regular action of G on G is transitive, and the stabilizer of x is the subgroup {1}.
2. The trivial action is not transitive (except in trivial cases); in fact, B(x) = {x} for every x. The stabilizer of x is the entire group G.
3. Conjugation on elements is not transitive (see Problem 1). The orbit of x is the set of conjugates gxg⁻¹ of x, that is,

B(x) = {gxg⁻¹ : g ∈ G},

which is known as the conjugacy class of x. The stabilizer of x is

G(x) = {g : gxg⁻¹ = x} = {g : gx = xg},

the set of group elements that commute with x. This set is called the centralizer of x, written CG(x). Similarly, the centralizer CG(S) of an arbitrary subset S ⊆ G is defined as the set of elements of G that commute with everything in S. (Here, we do need to check that CG(S) is a subgroup, and this follows because CG(S) = ∩_{x∈S} CG(x).)
4. Conjugation on subgroups is not transitive. The orbit of H is {gHg⁻¹ : g ∈ G}, the collection of conjugate subgroups of H. The stabilizer of H is

{g : gHg⁻¹ = H},

which is called the normalizer of H, written NG(H). If K is a subgroup of G containing H, we have

H ⊴ K iff gHg⁻¹ = H for every g ∈ K,

and this holds iff K is a subgroup of NG(H). Thus NG(H) is the largest subgroup of G in which H is normal.
5. Conjugation on subsets is not transitive, and the orbit of the subset S is {gSg⁻¹ : g ∈ G}. The stabilizer of S is the normalizer NG(S) = {g : gSg⁻¹ = S}.
6. Multiplication on left cosets is transitive; a solution of g(xH) = yH for g is g = yx⁻¹. The stabilizer of xH is

{g : gxH = xH} = {g : x⁻¹gx ∈ H} = {g : g ∈ xHx⁻¹} = xHx⁻¹,

the conjugate of H by x. Taking x = 1, we see that the stabilizer of H is H itself.
7. Multiplication on subsets is not transitive. The stabilizer of S is {g : gS = S}, the set of elements of G that permute the elements of S.

5.2.3 The Orbit-Stabilizer Theorem
Suppose that a group G acts on a set X. Let B(x) be the orbit of x ∈ X, and let G(x) be the stabilizer of x. Then the size of the orbit is the index of the stabilizer, that is,

|B(x)| = [G : G(x)].

Thus if G is finite, then |B(x)| = |G|/|G(x)|; in particular, the orbit size divides the order of the group.
Proof. If y belongs to the orbit of x, say y = gx, we take f(y) = gH, where H = G(x) is the stabilizer of x. To check that f is a well-defined map of B(x) to the set of left cosets of H, let y = g1x = g2x. Then g2⁻¹g1x = x, so g2⁻¹g1 ∈ H, i.e., g1H = g2H. Since g is an arbitrary element of G, f is surjective. If g1H = g2H, then g2⁻¹g1 ∈ H, so that g2⁻¹g1x = x, and consequently g1x = g2x. Thus if y1 = g1x, y2 = g2x, and f(y1) = f(y2), then y1 = y2, proving f injective. ♣
Referring to (5.2.2), Example 3, we see that B(x) is an orbit of size 1 iff x commutes
with every g ∈ G, i.e., x ∈ Z(G), the center of G. Thus if G is finite and we select one
element xi from each conjugacy class of size greater than 1, we get the class equation

|G| = |Z(G)| + ∑_i [G : CG(xi)].
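
As a sanity check (again our own sketch, with the same tuple representation and helper
names as in the previous fragment), the following verifies both the orbit-stabilizer
theorem and the class equation for S3 acting on itself by conjugation:

    from itertools import permutations

    def compose(p, q):                    # (p ∘ q)(i) = p[q[i]]
        return tuple(p[q[i]] for i in range(len(p)))

    def inverse(p):
        inv = [0] * len(p)
        for i, image in enumerate(p):
            inv[image] = i
        return tuple(inv)

    def conj(g, x):                       # the action: g.x = g x g^{-1}
        return compose(compose(g, x), inverse(g))

    G = list(permutations(range(3)))      # S3

    for x in G:
        orbit = {conj(g, x) for g in G}               # conjugacy class B(x)
        stab = [g for g in G if conj(g, x) == x]      # centralizer G(x)
        assert len(orbit) == len(G) // len(stab)      # |B(x)| = [G : G(x)]

    # Class equation: |G| = |Z(G)| + sum over classes of size > 1
    center = [x for x in G if all(conj(g, x) == x for g in G)]
    classes = {frozenset(conj(g, x) for g in G) for x in G}
    assert len(G) == len(center) + sum(len(c) for c in classes if len(c) > 1)
    print(len(center), sorted(len(c) for c in classes))   # -> 1 [1, 2, 3]

For S3 the class sizes are 1 (the identity), 2 (the 3-cycles) and 3 (the transpositions),
so the class equation reads 6 = 1 + 2 + 3.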

We know that a group G acts on left cosets of a subgroup K by multiplication. To
prepare for the next result, we look at the action of a subgroup H of G on left cosets of K.
Since K is itself a left coset of K, it has an orbit {hK : h ∈ H}, and the union of the
sets hK is the set product HK. The stabilizer of K under this action is not K itself, as
it was in Example 6; it is {h ∈ H : hK = K}. But hK = K (= 1K) if and only if h ∈ K,
so the stabilizer is H ∩ K.

5.2.4 Proposition
If H and K are subgroups of the finite group G, then

|HK| = |H||K| / |H ∩ K|.

Proof. The cosets in the orbit of K are disjoint, and each has |K| members. Since, as
remarked above, the union of the cosets is HK, there must be exactly |HK|/|K| cosets
in the orbit. Since the index of the stabilizer H ∩ K in H is |H|/|H ∩ K|, the result
follows from the orbit-stabilizer theorem. ♣
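
A quick computational check (our illustration; the names are ours): in S3, taking H
and K to be two distinct subgroups of order 2, the formula predicts |HK| = 4, which
incidentally shows that HK need not be a subgroup, since 4 does not divide 6.

    from itertools import permutations

    def compose(p, q):                    # (p ∘ q)(i) = p[q[i]]
        return tuple(p[q[i]] for i in range(len(p)))

    H = [(0, 1, 2), (1, 0, 2)]            # generated by the transposition (0 1)
    K = [(0, 1, 2), (0, 2, 1)]            # generated by the transposition (1 2)

    HK = {compose(h, k) for h in H for k in K}
    H_cap_K = [h for h in H if h in K]    # just the identity here

    assert len(HK) == len(H) * len(K) // len(H_cap_K)   # 4 = 2*2/1
    print(len(HK))                        # -> 4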


Problems For Section 5.2
1. Let σ be the permutation (1, 2, 3, 4, 5) and π the permutation (1, 2)(3, 4). Then πσπ⁻¹,
the conjugate of σ by π, can be obtained by applying π to the symbols of σ to get
(2, 1, 4, 3, 5). Reversing the process, if we are given τ = (1, 2)(3, 4) and we specify that
µτµ⁻¹ = (1, 3)(2, 5), we can take µ to be the permutation sending 1, 2, 3, 4, 5 to
1, 3, 2, 5, 4, respectively. This suggests that two permutations
are conjugate if and only if they have the same cycle structure. Explain why this works.
2. Show that if S is any subset of G, then the centralizer of S is a normal subgroup of
the normalizer of S. (Let the normalizer NG(S) act on S by conjugation on elements.)
3. Let G(x) be the stabilizer of x under a group action. Show that stabilizers of elements
in the orbit of x are conjugate subgroups. Explicitly, for every g ∈ G and x ∈ X we
have

G(gx) = gG(x)g⁻¹.

4. Let G act on the set X. Show that for a given x ∈ X, Ψ(gG(x)) = gx is a well-defined
injective mapping of the set of left cosets of G(x) into X, and is bijective if the action
is transitive.
5. Continuing Problem 4, let G act transitively on X, and choose any x ∈ X. Show that
the action of G on X is essentially the same as the action of G on the left cosets of
the stabilizer subgroup G(x). This is the meaning of the assertion that "any transitive
G-set is isomorphic to a space of left cosets”. Give an appropriate formal statement
expressing this idea.
6. Suppose that G is a finite group, and for every x, y ∈ G such that x ≠ 1 and y ≠ 1, x
and y are conjugate. Show that the order of G must be 1 or 2.
7. First note that if r is a positive rational number and k a fixed positive integer, there
are only finitely many positive integer solutions of the equation

1/x1 + · · · + 1/xk = r.

Outline of proof: If xk is the smallest xi, the left side is at most k/xk, so 1 ≤ xk ≤ k/r
and there are only finitely many choices for xk. Repeat this argument for the equation

1/x1 + · · · + 1/x_{k-1} = r − 1/xk.

Now set r = 1 and let N(k) be an upper bound on all the xi's in all possible solutions.
If G is a finite group with exactly k conjugacy classes, show that the order of G is at
most N(k).

5.3 Application To Combinatorics
The theory of group actions can be used to solve a class of combinatorial problems. To set
up a typical problem, consider the regular hexagon of Figure 5.3.1, and recall the dihedral
group D12 , the group of symmetries of the hexagon (Section 1.2).

[Figure 5.3.1: a regular hexagon with vertices numbered 1 through 6 counterclockwise;
vertex 1 at the right, 2 upper right, 3 upper left, 4 at the left, 5 lower left, 6 lower right.]

If R is rotation by 60 degrees and F is reflection about the horizontal line joining
vertices 1 and 4, the 12 members of the group may be listed as follows.
I = identity, R = (1, 2, 3, 4, 5, 6), R² = (1, 3, 5)(2, 4, 6),
R³ = (1, 4)(2, 5)(3, 6), R⁴ = (1, 5, 3)(2, 6, 4), R⁵ = (1, 6, 5, 4, 3, 2),
F = (2, 6)(3, 5), RF = (1, 2)(3, 6)(4, 5), R²F = (1, 3)(4, 6),
R³F = (1, 4)(2, 3)(5, 6), R⁴F = (1, 5)(2, 4), R⁵F = (1, 6)(2, 5)(3, 4).
(As before, RF means F followed by R.)
Suppose that we color the vertices of the hexagon, and we have n colors available (we
are not required to use every color). How many distinct colorings are there? Since we
may choose the color of any vertex in n ways, a logical answer is n⁶. But this answer
does not describe the physical situation accurately. To see what is happening, suppose
we have two colors, yellow (Y ) and blue (B). Then the coloring
         1 2 3 4 5 6
    C1 = B B Y Y Y B

is mapped by RF to

         1 2 3 4 5 6
    C2 = B B B Y Y Y
(For example, vertex 3 goes to where vertex 6 was previously, delivering the color yellow
to vertex 6.) According to our counting scheme, C2 is not the same as C1. But imagine
that we have two rigid necklaces in the form of a hexagon, one colored by C1 and the
other by C2. If both necklaces were lying on a table, it would be difficult to argue that
they are essentially different, since one can be converted to a copy of the other simply by
flipping it over and then rotating it.
Let's try to make an appropriate mathematical model. Any group of permutations of
a set X acts on X in the natural way: g • x = g(x). In particular, the dihedral group G
acts on the vertices of the hexagon, and therefore on the set S of colorings of the vertices.
The above discussion suggests that colorings in the same orbit should be regarded as
equivalent, so the number of essentially di¬erent colorings is the number of orbits. The
following result will help us do the counting.

5.3.1 Orbit-Counting Theorem
Let the finite group G act on the finite set X, and let f(g) be the number of elements of
X fixed by g, that is, the size of the set {x ∈ X : g(x) = x}. Then the number of orbits is

(1/|G|) ∑_{g∈G} f(g),

the average number of points left fixed by elements of G.

Proof. We use a standard combinatorial technique called "counting two ways". Let T be
the set of all ordered pairs (g, x) such that g ∈ G, x ∈ X, and gx = x. For any x ∈ X,
the number of g's such that (g, x) ∈ T is the size of the stabilizer subgroup G(x), hence

|T| = ∑_{x∈X} |G(x)|.   (1)


Now for any g ∈ G, the number of x's such that (g, x) ∈ T is f(g), the number of fixed
points of g. Thus

|T| = ∑_{g∈G} f(g).   (2)


Divide (1) and (2) by the order of G to get

∑_{x∈X} |G(x)|/|G| = (1/|G|) ∑_{g∈G} f(g).   (3)

But by the orbit-stabilizer theorem (5.2.3), |G|/|G(x)| is |B(x)|, the size of the orbit
of x, so each term on the left side of (3) is 1/|B(x)|. If, for example, an orbit has 5
members, then 1/5 will appear 5 times in the sum, for a total contribution of 1. Thus
the left side of (3) is the total number of orbits. ♣
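
Computationally, the theorem reduces orbit counting to averaging fixed-point counts.
Here is a minimal Python sketch (our own; count_orbits and the toy triangle action are
illustrative assumptions, not from the text), checked on the 3 rotations of a triangle
acting on two-colorings of its vertices:

    from itertools import product

    def count_orbits(group, X, act):
        # number of orbits = (1/|G|) * sum over g of the fixed points of g (5.3.1)
        total = sum(sum(1 for x in X if act(g, x) == x) for g in group)
        assert total % len(group) == 0    # the average is always an integer
        return total // len(group)

    # Sanity check: rotations of a triangle acting on 2-colorings of its vertices.
    rotations = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]       # permutations i -> g[i]
    colorings = list(product("BY", repeat=3))

    def act(g, c):
        # a coloring is fixed by g iff it is constant on the cycles of g,
        # so the left/right action convention does not affect the count
        return tuple(c[g[i]] for i in range(3))

    print(count_orbits(rotations, colorings, act))      # -> (8 + 2 + 2)/3 = 4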

We can now proceed to the next step in the analysis.


5.3.2 Counting the Number of Colorings Fixed by a Given Permutation
Let π = R² = (1, 3, 5)(2, 4, 6). Since π(1) = 3 and π(3) = 5, vertices 1, 3 and 5 have the
same color. Similarly, vertices 2, 4 and 6 must have the same color. If there are n colors
available, we can choose the color of each cycle in n ways, and the total number of choices
is n². If π = F = (2, 6)(3, 5), then as before we choose 1 color out of n for each cycle, but
in this case we still have to color the vertices 1 and 4. Here is a general statement that
covers both situations.

If π has c cycles, counting cycles of length 1, then the number of colorings fixed by π
is n^c.

To emphasize the need to consider cycles of length 1, we can write F as (2,6)(3,5)(1)(4).
From the cycle decompositions given at the beginning of the section, we have one per-
mutation (the identity) with 6 cycles, three with 4 cycles, four with 3 cycles, two with 2
cycles, and two with 1 cycle. Thus the number of distinct colorings is

(1/12)(n⁶ + 3n⁴ + 4n³ + 2n² + 2n).
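
As a check on this count (our own sketch, not part of the text), the following fragment
builds the 12 elements of D12 as permutation tuples, counts the cycles of each, and
evaluates the formula for small n:

    def compose(p, q):                    # (p ∘ q)(i) = p[q[i]]
        return tuple(p[q[i]] for i in range(len(p)))

    def cycle_count(p):
        # number of cycles of p, cycles of length 1 included
        seen, count = set(), 0
        for i in range(len(p)):
            if i not in seen:
                count += 1
                while i not in seen:
                    seen.add(i)
                    i = p[i]
        return count

    R = (1, 2, 3, 4, 5, 0)                # rotation by 60 degrees, vertices 0..5
    F = (0, 5, 4, 3, 2, 1)                # reflection fixing vertices 0 and 3
                                          # (the text's vertices 1 and 4, relabeled)
    D12, g = [], (0, 1, 2, 3, 4, 5)
    for _ in range(6):
        D12 += [g, compose(g, F)]         # the elements R^k and R^k F
        g = compose(R, g)

    def distinct_colorings(n):
        return sum(n ** cycle_count(g) for g in D12) // 12

    print([distinct_colorings(n) for n in (1, 2, 3)])   # -> [1, 13, 92]

For n = 2 this gives 13, consistent with (2⁶ + 3·2⁴ + 4·2³ + 2·2² + 2·2)/12 = 156/12.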