Exactness at E0 no longer holds because the map E0 → E1 need not be injective.

We will use the notation P• → M for a projective resolution, and M → E• for an injective resolution.

S4.2 Proposition

Every module M has a free (hence projective) resolution.


Proof. By (4.3.6), M is a homomorphic image of a free module F0 . Let K0 be the kernel

of the map from F0 onto M . In turn, there is a homomorphism with kernel K1 from a

free module F1 onto K0 , and we have the following diagram:

0 → K1 → F1 → K0 → F0 → M → 0

Composing the maps F1 → K0 and K0 → F0, we get

0 → K1 → F1 → F0 → M → 0

which is exact. But now we can find a free module F2 and a homomorphism with kernel K2 mapping F2 onto K1. The above process can be iterated to produce the desired free resolution. ∎

Specifying a module by generators and relations (see (4.6.6) for abelian groups) involves finding an appropriate F0 and K0, as in the first step of the above iterative process. Thus a projective resolution may be regarded as a generalization of a specification by generators and relations.

Injective resolutions can be handled by dualizing the proof of (S4.2).

S4.3 Proposition

Every module M has an injective resolution.

Proof. By (10.7.4), M can be embedded in an injective module E0. Let C0 be the cokernel of M → E0, and map E0 canonically onto C0. Embed C0 in an injective module E1, and let C1 be the cokernel of the embedding map. We have the following diagram:

0 → M → E0 → C0 → E1 → C1 → 0

Composing E0 → C0 and C0 → E1, we have

0 → M → E0 → E1 → C1 → 0

which is exact. Iterate to produce the desired injective resolution. ∎

S5. Derived Functors

S5.1 Left Derived Functors

Suppose that F is a right exact functor from modules to modules. (In general, the domain and codomain of F can be abelian categories, but the example we have in mind is M ⊗R –.) Given a short exact sequence 0 → A → B → C → 0, we form deleted projective resolutions PA• → A, PB• → B, PC• → C. It is shown in texts on homological algebra that it is possible to define chain maps to produce a short exact sequence of complexes as shown below.

0 →  A  →  B  →  C  → 0
     ↑     ↑     ↑
0 → PA• → PB• → PC• → 0

The functor F will preserve exactness in the diagram, except at the top row, where we only have FA → FB → FC → 0 exact. But remember that we are using deleted resolutions, so that the first row is suppressed. The left derived functors of F are defined by taking the homology of the complex F(P), that is,

(Ln F)(A) = Hn[F(PA•)].

The word “left” is used because the Ln F are computed using left resolutions. It can

be shown that up to natural equivalence, the derived functors are independent of the

particular projective resolutions chosen. By (S3.2), we have the following long exact

sequence:

··· → (Ln F)(A) → (Ln F)(B) → (Ln F)(C) → (Ln−1 F)(A) → ···

where the map (Ln F)(C) → (Ln−1 F)(A) is the connecting homomorphism ∂.

S5.2 Right Derived Functors

Suppose now that F is a left exact functor from modules to modules, e.g., HomR(M, –). We can dualize the discussion in (S5.1) by reversing the vertical arrows in the commutative diagram of complexes, and replacing projective resolutions such as PA• by injective resolutions EA•. The right derived functors of F are defined by taking the homology of F(E). In other words,

(Rn F)(A) = H^n[F(EA•)]

where the superscript n indicates that we are using right resolutions and the indices are

increasing as we move away from the starting point. By (S3.2), we have the following

long exact sequence:

··· → (Rn F)(A) → (Rn F)(B) → (Rn F)(C) → (Rn+1 F)(A) → ···

where the map (Rn F)(C) → (Rn+1 F)(A) is the connecting homomorphism ∂.

S5.3 Lemma

(L0 F)(A) ≅ F(A) ≅ (R0 F)(A)

Proof. This is a good illustration of the advantage of deleted resolutions. If P• → A, we have the following diagram:

F(P1) → F(P0) → 0
          ↓
        F(A)
          ↓
          0

The kernel of F(P0) → 0 is F(P0), so the 0th homology module (L0 F)(A) is F(P0) mod the image of F(P1) → F(P0) [= the kernel of F(P0) → F(A)]. By the first isomorphism theorem and the right exactness of F, (L0 F)(A) ≅ F(A). To establish the other isomorphism, we switch to injective resolutions and reverse arrows:

0 → F(E0) → F(E1)
      ↑
    F(A)
      ↑
      0

The kernel of F(E0) → F(E1) is isomorphic to F(A) by left exactness of F, and the image of 0 → F(E0) is 0. Thus (R0 F)(A) ≅ F(A). ∎

S5.4 Lemma

If A is projective, then (Ln F)(A) = 0 for every n > 0; if A is injective, then (Rn F)(A) = 0 for every n > 0.

Proof. If A is projective [resp. injective], then 0 → A → A → 0 is a projective [resp. injective] resolution of A. Switching to a deleted resolution, we have 0 → A → 0 in each case, and the result follows. ∎

S5.5 Definitions and Comments

If F is the right exact functor M ⊗R –, the left derived functor Ln F is called Tor_n^R(M, –). If F is the left exact functor HomR(M, –), the right derived functor Rn F is called Ext_R^n(M, –). It can be shown that the Ext functors can also be computed using projective resolutions and the contravariant hom functor. Specifically,

Ext_R^n(M, N) = [R^n HomR(–, N)](M).

A switch from injective to projective resolutions is a simplification, because projective resolutions are easier to find in practice.

The next three results sharpen Lemma S5.4. [The ring R is assumed fixed, and we write ⊗R simply as ⊗. Similarly, we drop the R in Tor^R and Ext_R. When discussing Tor, we assume R commutative.]

S5.6 Proposition

If M is an R-module, the following conditions are equivalent.

(i) M is flat;
(ii) Tor_n(M, N) = 0 for all n ≥ 1 and all modules N;
(iii) Tor_1(M, N) = 0 for all modules N.


Proof. (i) implies (ii): Let P• → N be a projective resolution of N. Since M ⊗ – is an exact functor (see (10.8.1)), the sequence

··· → M ⊗ P1 → M ⊗ P0 → M ⊗ N → 0

is exact. Switching to a deleted resolution, we have exactness up to M ⊗ P1 but not at M ⊗ P0. Since the homology modules derived from an exact sequence are 0, the result follows.

(ii) implies (iii): Take n = 1.

(iii) implies (i): If 0 → A → B → C → 0 is a short exact sequence, then by (S5.1), we have the following long exact sequence:

··· → Tor_1(M, C) → Tor_0(M, A) → Tor_0(M, B) → Tor_0(M, C) → 0.

By hypothesis, Tor_1(M, C) = 0, so by (S5.3),

0 → M ⊗ A → M ⊗ B → M ⊗ C → 0

is exact, and therefore M is flat. ∎

S5.7 Proposition

If M is an R-module, the following conditions are equivalent.

(i) M is projective;
(ii) Ext^n(M, N) = 0 for all n ≥ 1 and all modules N;
(iii) Ext^1(M, N) = 0 for all modules N.

Proof. (i) implies (ii): By (S5.4) and (S5.5), Ext^n(M, N) = [Ext^n(–, N)](M) = 0 for n ≥ 1.

(ii) implies (iii): Take n = 1.

(iii) implies (i): Let 0 → A → B → M → 0 be a short exact sequence. If N is any module, then using projective resolutions and the contravariant hom functor to construct Ext, as in (S5.5), we get the following long exact sequence:

0 → Ext^0(M, N) → Ext^0(B, N) → Ext^0(A, N) → Ext^1(M, N) → ···

By (iii) and (S5.3),

0 → Hom(M, N) → Hom(B, N) → Hom(A, N) → 0

is exact. Take N = A and let g be the map from A to B. Then the map g* from Hom(B, A) to Hom(A, A) is surjective. But 1_A ∈ Hom(A, A), so there is a homomorphism f: B → A such that g*(f) = f ∘ g = 1_A. Therefore the sequence 0 → A → B → M → 0 splits, so by (10.5.3), M is projective. ∎

S5.8 Corollary

If N is an R-module, the following conditions are equivalent.

(a) N is injective;
(b) Ext^n(M, N) = 0 for all n ≥ 1 and all modules M;
(c) Ext^1(M, N) = 0 for all modules M.

Proof. Simply saying "duality" may be unconvincing, so let's give some details. For (a) implies (b), we have [Ext^n(M, –)](N) = 0. For (c) implies (a), note that the exact sequence 0 → N → A → B → 0 induces the exact sequence

0 → Ext^0(M, N) → Ext^0(M, A) → Ext^0(M, B) → 0.

Replace Ext^0(M, N) by Hom(M, N), and similarly for the other terms. Then take M = B and proceed exactly as in (S5.7). ∎

S6. Some Properties of Ext and Tor

We will compute Ext_R^n(A, B) and Tor_n^R(A, B) in several interesting cases.

S6.1 Example

We will calculate Ext_Z^n(Zm, B) for an arbitrary abelian group B. To ease the notational burden slightly, we will omit the subscript Z in Ext and Hom, and use = (most of the time) when we really mean ≅. We have the following projective resolution of Zm:

      m
0 → Z → Z → Zm → 0

where the m over the arrow indicates multiplication by m. Switching to a deleted resolution and applying the contravariant hom functor, we get

              m
0 → Hom(Z, B) → Hom(Z, B) → 0
        ↑
  Hom(Zm, B)

But by (9.4.1), we have

HomR(R, B) ≅ B    (1)

and the above diagram becomes

      m
0 → B → B → 0    (2)

By (S5.3), Ext^0(Zm, B) = Hom(Zm, B). Now a homomorphism f from Zm to B is determined by f(1), and f(m) = mf(1) = 0. If B(m) is the set of all elements of B that are annihilated by m, that is, B(m) = {x ∈ B : mx = 0}, then the map of B(m) to Hom(Zm, B) given by x → f where f(1) = x, is an isomorphism. Thus

Ext^0(Zm, B) = B(m).

It follows from (2) that

Ext^n(Zm, B) = 0, n ≥ 2

and

Ext^1(Zm, B) = ker(B → 0)/im(B → B) = B/mB.
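Both answers can be checked concretely when B = Zn: the annihilator B(m) and the quotient B/mB are each cyclic of order gcd(m, n). A brute-force Python sketch (the helper names are ours, chosen for illustration):

```python
from math import gcd

def annihilator(n, m):
    """B(m) = {x in Z_n : m*x = 0}, i.e. Ext^0(Z_m, Z_n)."""
    return sorted(x for x in range(n) if (m * x) % n == 0)

def quotient_size(n, m):
    """The order of Z_n / m*Z_n, i.e. of Ext^1(Z_m, Z_n)."""
    image = {(m * x) % n for x in range(n)}   # the subgroup m*Z_n
    return n // len(image)

# Both groups have order gcd(m, n).
for m in range(1, 20):
    for n in range(1, 20):
        assert len(annihilator(n, m)) == gcd(m, n)
        assert quotient_size(n, m) == gcd(m, n)
```

For instance, annihilator(12, 8) returns [0, 3, 6, 9], the subgroup Z12(8) of Z12 annihilated by 8.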

The computation for n ≥ 2 is a special case of a more general result.

S6.2 Proposition

Ext_Z^n(A, B) = 0 for all n ≥ 2 and all abelian groups A and B.

Proof. If B is embedded in an injective module E, we have the exact sequence

0 → B → E → E/B → 0.

This is an injective resolution of B since E/B is divisible, hence injective; see (10.6.5) and (10.6.6). Applying the functor Hom(A, –) = HomZ(A, –) and switching to a deleted resolution, we get the sequence

0 → Hom(A, E) → Hom(A, E/B) → 0
        ↑
  Hom(A, B)

whose homology is 0 for all n ≥ 2. ∎

S6.3 Lemma

Ext_Z^0(Z, B) = HomZ(Z, B) = B and Ext_Z^1(Z, B) = 0.

Proof. The first equality follows from (S5.3) and the second from (1) of (S6.1). Since Z is projective, the last statement follows from (S5.7). ∎

S6.4 Example

We will compute Tor_n^Z(Zm, B) for an arbitrary abelian group B. As before, we drop the superscript Z and write = for ≅. We use the same projective resolution of Zm as in (S6.1), and apply the functor – ⊗ B. Since R ⊗R B ≅ B by (8.7.6), we reach diagram (2) as before. Thus

Tor_n(Zm, B) = 0, n ≥ 2;
Tor_1(Zm, B) = ker(m: B → B) = {x ∈ B : mx = 0} = B(m);
Tor_0(Zm, B) = Zm ⊗ B = B/mB.

[To verify the last equality, use the universal mapping property of the tensor product to produce a map of Zm ⊗ B to B/mB such that n ⊗ x → n(x + mB). The inverse of this map is x + mB → 1 ⊗ x.]

The result for n ≥ 2 generalizes as in (S6.2):

S6.5 Proposition

Tor_n^Z(A, B) = 0 for all n ≥ 2 and all abelian groups A and B.

Proof. B is the homomorphic image of a free module F. If K is the kernel of the homomorphism, then the exact sequence 0 → K → F → B → 0 is a free resolution of B. [K is a submodule of a free module over a PID, hence is free.] Switching to a deleted resolution and applying the tensor functor, we get a four term sequence as in (S6.2), and the homology must be 0 for n ≥ 2. ∎

S6.6 Lemma

Tor_1(Z, B) = Tor_1(A, Z) = 0; Tor_0(Z, B) = Z ⊗ B = B.

Proof. The first two equalities follow from (S5.6) since Z is flat. The other two equalities follow from (S5.3) and (8.7.6). ∎

S6.7 Finitely generated abelian groups

We will show how to compute Ext^n(A, B) and Tor_n(A, B) for arbitrary finitely generated abelian groups A and B. By (4.6.3), A and B can be expressed as a finite direct sum of cyclic groups. Now Tor commutes with direct sums:

Tor_n(A, ⊕_{j=1}^r Bj) = ⊕_{j=1}^r Tor_n(A, Bj).

[The point is that if Pj• is a projective resolution of Bj, then the direct sum of the Pj• is a projective resolution of ⊕j Bj, by (10.5.4). Since the tensor functor is additive on direct sums, by (8.8.6(b)), the Tor functor will be additive as well. Similar results hold when the direct sum is in the first coordinate, and when Tor is replaced by Ext. (We use (10.6.3) and Problems 5 and 6 of Section 10.9, and note that the direct product and the direct sum are isomorphic when there are only finitely many factors.)]

Thus to complete the computation, we need to know Ext(A, B) and Tor(A, B) when A = Z or Zm and B = Z or Zn. We have already done most of the work. By (S6.2) and (S6.5), Ext^n and Tor_n are identically 0 for n ≥ 2. By (S6.1),

Ext^0(Zm, Z) = Z(m) = {x ∈ Z : mx = 0} = 0;
Ext^0(Zm, Zn) = Zn(m) = {x ∈ Zn : mx = 0} = Zd

where d is the greatest common divisor of m and n. [For example, Z12(8) = {0, 3, 6, 9} ≅ Z4. The point is that the product of two integers is their greatest common divisor times their least common multiple.] By (S6.3),

Ext^0(Z, Z) = Hom(Z, Z) = Z;   Ext^0(Z, Zn) = Zn.

By (S6.1),

Ext^1(Zm, Z) = Z/mZ = Zm;
Ext^1(Zm, Zn) = Zn/mZn = Zd

as above. By (S5.7),

Ext^1(Z, Z) = Ext^1(Z, Zn) = 0.

By (S6.4),

Tor_1(Zm, Z) = Tor_1(Z, Zm) = Z(m) = 0;
Tor_1(Zm, Zn) = Zn(m) = Zd.

By (8.7.6) and (S6.4),

Tor_0(Z, Z) = Z;
Tor_0(Zm, Z) = Z/mZ = Zm;
Tor_0(Zm, Zn) = Zn/mZn = Zd.
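Since every entry in these tables is Z, Zm, Zd, or 0, the Ext^1 and Tor_1 computation for finitely generated groups reduces to bookkeeping with gcds. A Python sketch of that bookkeeping (the encoding of a group as a list of cyclic orders, with 0 standing for a Z summand, is our own convention):

```python
from math import gcd

def tor1(A, B):
    """Cyclic decomposition of Tor_1(A, B); Z summands contribute 0."""
    return sorted(gcd(m, n) for m in A for n in B
                  if m and n and gcd(m, n) > 1)

def ext1(A, B):
    """Cyclic decomposition of Ext^1(A, B): a Z_m summand of A contributes
    Z_m against a Z summand of B, and Z_gcd(m,n) against a Z_n summand;
    Z summands of A contribute nothing."""
    out = []
    for m in A:
        if m == 0:
            continue                         # Ext^1(Z, -) = 0
        for n in B:
            d = m if n == 0 else gcd(m, n)   # Z/mZ = Z_m, Z_n/mZ_n = Z_d
            if d > 1:
                out.append(d)
    return sorted(out)

# A = Z + Z_12, B = Z + Z_8:
assert tor1([0, 12], [0, 8]) == [4]          # Tor_1 = Z_4
assert ext1([0, 12], [0, 8]) == [4, 12]      # Ext^1 = Z_4 + Z_12
```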

Notice that Tor_1(A, B) is a torsion group for all finitely generated abelian groups A and B. This is a partial explanation of the term "Tor". The Ext functor arises in the study of group extensions.

S7. Base Change in the Tensor Product

Let M be an A-module, and suppose that we have a ring homomorphism from A to B (all rings are assumed commutative). Then B ⊗A M becomes a B-module (hence an A-module) via b(b′ ⊗ m) = bb′ ⊗ m. This is an example of base change, as discussed in (10.8.8). We examine some frequently occurring cases. First, consider B = A/I, where I is an ideal of A.

S7.1 Proposition

(A/I) ⊗A M ≅ M/IM.

Proof. Apply the (right exact) tensor functor to the exact sequence of A-modules

0 → I → A → A/I → 0

to get the exact sequence

I ⊗A M → A ⊗A M → (A/I) ⊗A M → 0.

Recall from (8.7.6) that A ⊗A M is isomorphic to M via a ⊗ m → am. By the first isomorphism theorem, (A/I) ⊗A M is isomorphic to M mod the image of the map from I ⊗A M to M. This image is the collection of all finite sums Σ ai mi with ai ∈ I and mi ∈ M, which is IM. ∎
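In the special case A = Z and I = (n), the proposition says (Z/nZ) ⊗Z M ≅ M/nM, which can be tested by brute force when M is finite. A Python sketch (the representation of M by its list of cyclic orders is our own illustrative device):

```python
from math import gcd
from itertools import product

def order_of_M_mod_nM(orders, n):
    """|M / nM| for M = Z_{a1} x ... x Z_{ak}; this equals |(Z/nZ) tensor M|."""
    elems = list(product(*[range(a) for a in orders]))
    nM = {tuple((n * x) % a for x, a in zip(e, orders)) for e in elems}
    return len(elems) // len(nM)

# M/nM should be Z_gcd(n,a1) x ... x Z_gcd(n,ak):
assert order_of_M_mod_nM([4, 6], 10) == gcd(10, 4) * gcd(10, 6)
assert order_of_M_mod_nM([5], 3) == 1    # 3 is a unit mod 5, so 3M = M
```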

Now consider B = S^{-1}A, where S is a multiplicative subset of A.

S7.2 Proposition

(S^{-1}A) ⊗A M ≅ S^{-1}M.

Proof. The map from S^{-1}A × M to S^{-1}M given by (a/s, m) → am/s is A-bilinear, so by the universal mapping property of the tensor product, there is a linear map α: S^{-1}A ⊗A M → S^{-1}M such that α((a/s) ⊗ m) = am/s. The inverse map β is given by β(m/s) = (1/s) ⊗ m. To show that β is well-defined, suppose that m/s = m′/s′. Then for some t ∈ S we have ts′m = tsm′. Thus

(1/s) ⊗ m = (ts′/tss′) ⊗ m = (1/tss′) ⊗ ts′m = (1/tss′) ⊗ tsm′ = (1/s′) ⊗ m′.

Now α followed by β takes (a/s) ⊗ m to am/s and then to (1/s) ⊗ am = (a/s) ⊗ m. On the other hand, β followed by α takes m/s to (1/s) ⊗ m and then to m/s. Consequently, α and β are inverses of each other and yield the desired isomorphism of S^{-1}A ⊗A M and S^{-1}M. ∎

Finally, we look at B = A[X].

S7.3 Proposition

A[X] ⊗A M ≅ M[X]

where the elements of M[X] are of the form a0 m0 + a1 X m1 + a2 X^2 m2 + ··· + an X^n mn, ai ∈ A, mi ∈ M, n = 0, 1, . . . .

Proof. This is very similar to (S7.2). In this case, the map α from A[X] ⊗A M to M[X] takes f(X) ⊗ m to f(X)m, and the map β from M[X] to A[X] ⊗A M takes X^i m to X^i ⊗ m. Here, there is no need to show that β is well-defined. ∎

Solutions Chapters 1-5

Section 1.1

1. Under multiplication, the positive integers form a monoid but not a group, and the

positive even integers form a semigroup but not a monoid.

2. With |a| denoting the order of a, we have |0| = 1, |1| = 6, |2| = 3, |3| = 2, |4| = 3,

and |5| = 6.

3. There is a subgroup of order 6/d for each divisor d of 6. We have Z6 itself (d = 1),

{0}(d = 6), {0, 2, 4}(d = 2), and {0, 3}(d = 3).

4. S forms a group under addition. The inverse operation is subtraction, and the zero

matrix is the additive identity.

5. S* does not form a group under multiplication, since a nonzero matrix whose determinant is 0 does not have a multiplicative inverse.

6. If d is the smallest positive integer in H, then H consists of all multiples of d. For if x ∈ H we have x = qd + r where 0 ≤ r < d. But then r = x − qd ∈ H, so r must be 0.

7. Consider the rationals with addition mod 1, in other words identify rational numbers that differ by an integer. Thus, for example, 1/3 = 4/3 = 7/3, etc. The group is infinite, but every element generates a finite subgroup. For example, the subgroup generated by 1/3 is {1/3, 2/3, 0}.

8. (ab)^{mn} = (a^m)^n (b^n)^m = 1, so the order of ab divides mn. Thus |ab| = m1 n1 where m1 divides m and n1 divides n. Consequently,

a^{m1 n1} b^{m1 n1} = 1.    (1)

If m = m1 m2, raise both sides of (1) to the power m2 to get b^{m n1} = 1. The order of b, namely n, must divide m n1, and since m and n are relatively prime, n must divide n1. But n1 divides n, hence n = n1. Similarly, if n = n1 n2 we raise both sides of (1) to the power n2 and conclude as above that m = m1. But then |ab| = m1 n1 = mn, as asserted.

If c belongs to both ⟨a⟩ and ⟨b⟩ then since c is a power of a and also a power of b, we have c^m = c^n = 1. But then the order of c divides both m and n, and since m and n are relatively prime, c has order 1, i.e., c = 1.
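The conclusion |ab| = mn (for commuting a and b of relatively prime orders) is easy to test in a cyclic group, where any two elements commute. A small Python check in additive notation (the helper name is ours):

```python
def add_order(x, n):
    """Order of x in the additive group Z_n."""
    k, y = 1, x % n
    while y != 0:
        y = (y + x) % n
        k += 1
    return k

# In Z_35, a = 7 has order 5 and b = 5 has order 7; the orders are
# relatively prime, so a + b = 12 should have order 5 * 7 = 35.
assert add_order(7, 35) == 5
assert add_order(5, 35) == 7
assert add_order(12, 35) == 35
```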


9. Let |a| = m, |b| = n. If [m, n] is the least common multiple, and (m, n) the greatest common divisor, of m and n, then [m, n] = mn/(m, n). Examine the prime factorizations of m and n:

m = (p1^{t1} ··· pi^{ti})(p_{i+1}^{t_{i+1}} ··· pj^{tj}) = r r′
n = (p1^{u1} ··· pi^{ui})(p_{i+1}^{u_{i+1}} ··· pj^{uj}) = s′ s

where tk ≤ uk for 1 ≤ k ≤ i, and tk ≥ uk for i + 1 ≤ k ≤ j.

Now a^r has order m/r and b^s has order n/s, with m/r (= r′) and n/s (= s′) relatively prime. By Problem 8, a^r b^s has order mn/rs = mn/(m, n) = [m, n]. Thus given elements of orders m and n, we can construct another element whose order is the least common multiple of m and n. Since the least common multiple of m, n and q is [[m, n], q], we can inductively find an element whose order is the least common multiple of the orders of all elements of G.
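The splitting m = r r′, n = s′ s can be carried out mechanically from the factorizations. A Python sketch (function names ours): r collects the primes of m whose exponent in n is at least as large, s the primes of n whose exponent in m is strictly larger, so that m/r and n/s are relatively prime with product [m, n].

```python
from math import gcd

def factorize(n):
    """Prime factorization of n as a dict {p: exponent}."""
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def order_lcm_pair(m, n):
    """Return (m/r, n/s), the orders of a^r and b^s in the solution above."""
    fm, fn = factorize(m), factorize(n)
    r = 1
    for p, t in fm.items():
        if fn.get(p, 0) >= t:      # n's exponent wins (or ties): p goes to r
            r *= p ** t
    s = 1
    for p, u in fn.items():
        if fm.get(p, 0) > u:       # m's exponent strictly wins: p goes to s
            s *= p ** u
    return m // r, n // s

for m, n in [(12, 18), (8, 12), (10, 21)]:
    a, b = order_lcm_pair(m, n)
    assert gcd(a, b) == 1 and a * b == m * n // gcd(m, n)   # product is lcm
```

For example, order_lcm_pair(12, 18) gives (4, 9): elements of orders 4 and 9 whose product has order 36 = [12, 18].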

10. Choose an element a that belongs to H but not K, and an element b that belongs to K but not H, where H and K are subgroups whose union is G. Then ab must belong to either H or K, say ab = h ∈ H. But then b = a^{-1}h ∈ H, a contradiction. If ab = k ∈ K, then a = kb^{-1} ∈ K, again a contradiction. To prove the last statement, note that if H ∪ K is a subgroup, the first result with G replaced by H ∪ K implies that H = H ∪ K or K = H ∪ K, in other words, K ⊆ H or H ⊆ K.

11. a^{km} = 1 if and only if km is a multiple of n, and the smallest such multiple occurs when km is the least common multiple of n and k. Thus the order of a^k is [n, k]/k. Examination of the prime factorizations of n and k shows that [n, k]/k = n/(n, k).

12. We have x ∈ Ai iff x is a multiple of pi, and there are exactly n/pi multiples of pi between 1 and n. Similarly, x belongs to Ai ∩ Aj iff x is divisible by pi pj, and there are exactly n/(pi pj) multiples of pi pj between 1 and n. The same technique works for all other terms.

13. The set of positive integers in {1, 2, . . . , n} that are not relatively prime to n is ∪_{i=1}^r Ai, so φ(n) = n − |∪_{i=1}^r Ai|. By the principle of inclusion and exclusion from basic combinatorics,

|∪_{i=1}^r Ai| = Σ_i |Ai| − Σ_{i<j} |Ai ∩ Aj| + Σ_{i<j<k} |Ai ∩ Aj ∩ Ak| − ··· + (−1)^{r−1} |A1 ∩ A2 ∩ ··· ∩ Ar|.

By Problem 12,

φ(n) = n [1 − Σ_i 1/pi + Σ_{i<j} 1/(pi pj) − Σ_{i<j<k} 1/(pi pj pk) + ··· + (−1)^r 1/(p1 p2 ··· pr)].

Thus φ(n) = n (1 − 1/p1)(1 − 1/p2) ··· (1 − 1/pr).
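The product formula is easy to confirm against a direct count of the integers relatively prime to n. A Python sketch in exact integer arithmetic (helper names ours):

```python
from math import gcd

def phi_direct(n):
    """Count the integers in {1, ..., n} relatively prime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def phi_product(n):
    """n * (1 - 1/p1) * ... * (1 - 1/pr) over the distinct primes p | n,
    computed exactly: divide by each p before multiplying by p - 1."""
    m, p, primes = n, 2, set()
    while p * p <= m:                 # collect the distinct prime divisors
        while m % p == 0:
            primes.add(p)
            m //= p
        p += 1
    if m > 1:
        primes.add(m)
    result = n
    for p in primes:
        result = result // p * (p - 1)   # p divides the running value
    return result

for n in range(1, 200):
    assert phi_direct(n) == phi_product(n)
```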

14. Let G be cyclic of prime order p. Since the only positive divisors of p are 1 and p, the

only subgroups of G are G and {1}.

15. No. Any non-identity element of G generates a cyclic subgroup H. If H ⊂ G, we are finished. If H = G, then G is isomorphic to the integers, and therefore has many nontrivial proper subgroups. (See (1.1.4) and Problem 6 above.)


Section 1.2

1. The cycle decomposition is (1, 4)(2, 6, 5); there is one cycle of even length, so the

permutation is odd.

2. The elements are I, R = (A, B, C, D), R^2 = (A, C)(B, D), R^3 = (A, D, C, B), F = (B, D), RF = (A, B)(C, D), R^2 F = (A, C), R^3 F = (A, D)(B, C).

3. Such a permutation can be written as (1, a1, a2, a3, a4) where (a1, a2, a3, a4) is a permutation of {2, 3, 4, 5}. Thus the number of permutations is 4! = 24.

4. Select two symbols from 5, then two symbols from the remaining 3, and divide by 2

since, for example, (1, 4)(3, 5) is the same as (3, 5)(1, 4). The number of permutations

is 10(3)/2 = 15.

5. For example, (1, 2, 3)(1, 2) = (1, 3) but (1, 2)(1, 2, 3) = (2, 3).

6. We have V = {I, (1, 2)(3, 4), (1, 3)(2, 4), (1, 4)(2, 3)}. Thus V = {I, a, b, c} where

the product of any two distinct elements from {a, b, c} (in either order) is the third

element, and the square of each element is I. It follows that V is an abelian group.

7. This follows because the inverse of the cycle (a1 , a2 , . . . , ak ) is (ak , . . . , a2 , a1 ).

8. Pick 3 symbols out of 4 to be moved, then pick one of two possible orientations, e.g.,

(1, 2, 3) or (1, 3, 2). The number of 3-cycles in S4 is therefore 4(2) = 8.

9. If π is a 3-cycle, then π^3 = I, so π^4 = π. But π^4 = (π^2)^2, and π^2 ∈ H by hypothesis, so (π^2)^2 ∈ H because H is a group. Thus π ∈ H.

10. There are 5 inversions: 21, 41, 51, 43 and 53. Thus we have an odd number of inversions, and the permutation π = (1, 2, 4)(3, 5) is also odd.

11. This follows because a transposition of two adjacent symbols in the second row changes

the number of inversions by exactly 1. Therefore such a transposition changes the

parity of the number of inversions. Thus the parity of π coincides with the parity of the

number of inversions. In the given example, it takes 5 transpositions of adjacent digits

to bring 24513 into natural order 12345. It also takes 5 transpositions to create π:

π = (1, 5)(1, 4)(1, 2)(3, 5)(3, 4)
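The agreement between the two parity computations in Problems 10 and 11 (inversion count versus cycle decomposition) can be tested directly. A Python sketch with 0-indexed maps (names ours; a k-cycle contributes k − 1 transpositions):

```python
def inversions(perm):
    """Number of pairs i < j with perm[i] > perm[j]."""
    return sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
               if perm[i] > perm[j])

def parity_from_cycles(perm):
    """Parity (0 = even, 1 = odd) from the cycle decomposition."""
    seen, transpositions = set(), 0
    for i in range(len(perm)):
        if i in seen:
            continue
        k, j = 0, i
        while j not in seen:           # trace the cycle through i
            seen.add(j)
            j = perm[j]
            k += 1
        transpositions += k - 1        # a k-cycle = k - 1 transpositions
    return transpositions % 2

# pi = (1, 2, 4)(3, 5) in cycle notation, i.e. second row 24513:
pi = [1, 3, 4, 0, 2]                   # 0-indexed: 0->1, 1->3, 3->0, 2->4, 4->2
assert inversions(pi) == 5             # the 5 inversions found above
assert parity_from_cycles(pi) == 1     # odd, as claimed
```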

Section 1.3

1. If Ha = Hb then a = 1a = hb for some h ∈ H, so ab^{-1} = h ∈ H. Conversely, if ab^{-1} = h ∈ H then Ha = Hhb = Hb.

2. Reflexivity: aa^{-1} = 1 ∈ H.
Symmetry: If ab^{-1} ∈ H then (ab^{-1})^{-1} = ba^{-1} ∈ H.
Transitivity: If ab^{-1} ∈ H and bc^{-1} ∈ H then (ab^{-1})(bc^{-1}) = ac^{-1} ∈ H.

3. ab^{-1} ∈ H iff (ab^{-1})^{-1} = ba^{-1} ∈ H iff b ∈ Ha.

4. Ha^{-1} = Hb^{-1} iff a^{-1}(b^{-1})^{-1} = a^{-1}b ∈ H iff aH = bH.

5. Since a1 belongs to both aH and a1 H, we have a1 H = aH because the left cosets

partition G.


6. There are only two left cosets of H in G; one is H itself, and the other is, say, aH.

Similarly, there are only two right cosets, H and Hb. Since the left cosets partition G,

as do the right cosets, aH must coincide with Hb, so that every left coset is a right coset.

7. The permutations on the list are e, (1, 2, 3), (1, 3, 2), (1, 2), (1, 3), and (2, 3), which

are in fact the 6 distinct permutations of {1, 2, 3}.

8. The left cosets of H are H = {e, b}, aH = {a, ab}, and a^2 H = {a^2, a^2 b}. The right cosets of H are H = {e, b}, Ha = {a, ba} = {a, a^2 b}, and Ha^2 = {a^2, ba^2} = {a^2, ab}.

9. The computation of Problem 8 shows that the left cosets of H do not coincide with the right cosets. Explicitly, aH and a^2 H are not right cosets (and similarly, Ha and Ha^2 are not left cosets).

10. f(n) = f(1 + 1 + ··· + 1) = f(1) + f(1) + ··· + f(1) = r + r + ··· + r = rn.

11. In Problem 10, the image f (Z) must coincide with Z. But f (Z) consists of all multiples

of r, and the only way f (Z) can equal Z is for r to be ±1.

12. The automorphism group of Z is {I, −I} where (−I)^2 = I. Thus the automorphisms of Z form a cyclic group of order 2. (There is only one such group, up to isomorphism.)

13. Reflexivity: x = 1x1. Symmetry: If x = hyk, then y = h^{-1}xk^{-1}. Transitivity: if x = h1 y k1 and y = h2 z k2, then x = h1 h2 z k2 k1.

14. HxK is the union over all k ∈ K of the right cosets H(xk), and also the union over

all h ∈ H of the left cosets (hx)K.

Section 1.4

1. Define f: Z → Zn by f(x) = the residue class of x mod n. Then f is an epimorphism with kernel nZ, and the result follows from the first isomorphism theorem.

2. Define f: Zn → Z_{n/m} by f(x) = x mod n/m. Then f is an epimorphism with kernel Zm, and the result follows from the first isomorphism theorem. (In the concrete example with n = 12, m = 4, we have f(0) = 0, f(1) = 1, f(2) = 2, f(3) = 0, f(4) = 1, f(5) = 2, f(6) = 0, etc.)

3. f(xy) = axya^{-1} = axa^{-1}aya^{-1} = f(x)f(y), so f is a homomorphism. If b ∈ G, we can solve axa^{-1} = b for x, namely x = a^{-1}ba, so f is surjective. If axa^{-1} = 1 then ax = a, so x = 1 and f is injective. Thus f is an automorphism.

4. Note that f_{ab}(x) = abx(ab)^{-1} = a(bxb^{-1})a^{-1} = f_a(f_b(x)), and y = f_a(x) iff x = f_{a^{-1}}(y), so that (f_a)^{-1} = f_{a^{-1}}.

5. Define Ψ: G → Inn G, the group of inner automorphisms of G, by Ψ(a) = f_a. Then Ψ(ab) = f_{ab} = f_a ∘ f_b = Ψ(a)Ψ(b), so Ψ is a homomorphism (see the solution to Problem 4). Since a is arbitrary, Ψ is surjective. Now a belongs to ker Ψ iff f_a is the identity function, i.e., axa^{-1} = x for all x ∈ G, in other words, a commutes with every x in G. Thus ker Ψ = Z(G), and the result follows from the first isomorphism theorem.

6. If f is an automorphism of Zn, then since 1 generates Zn, f is completely determined by m = f(1), and since 1 has order n in Zn, m must have order n as well. But then m is a unit mod n (see (1.1.5)), and f(r) = f(1 + 1 + ··· + 1) = f(1) + f(1) + ··· + f(1) = rf(1) = rm. Conversely, any unit m mod n determines an automorphism θ(m) = multiplication by m. The correspondence between m and θ(m) is a group isomorphism because θ(m1 m2) = θ(m1) ∘ θ(m2).

7. The first assertion follows from the observation that HN is the subgroup generated by H ∪ N (see (1.3.6)). For the second assertion, note that if K is a subgroup of G contained in both H and N, then K is contained in H ∩ N.

8. If g(x) = y, then g ∘ f_a ∘ g^{-1} maps y to g(axa^{-1}) = g(a)y[g(a)]^{-1}.

9. If G is abelian, then f_a(x) = axa^{-1} = aa^{-1}x = x.

Section 1.5

1. C2 × C2 has 4 elements 1 = (1, 1), α = (a, 1), β = (1, a) and γ = (a, a), and the product of any two distinct elements from {α, β, γ} is the third. Since each of α, β, γ has order 2 (and 1 has order 1), there is no element of order 4 and C2 × C2 is not cyclic.

2. The four group is V = {I, a, b, c} where the product of any two distinct elements from {a, b, c} is the third. Therefore, the correspondence 1 → I, α → a, β → b, γ → c is an isomorphism of C2 × C2 and V.

3. Let C2 = {1, a} with a^2 = 1, and C3 = {1, b, b^2} with b^3 = 1. Then (a, b) generates C2 × C3, since the successive powers of this element are (a, b), (1, b^2), (a, 1), (1, b), (a, b^2), and (1, 1). Therefore C2 × C3 is cyclic of order 6, i.e., isomorphic to C6.
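The power computation in Problem 3 is mechanical and easy to script in additive notation, where Z2 × Z3 stands in for C2 × C3 and (1, 1) for (a, b). A Python sketch (helper name ours):

```python
def powers(g, orders):
    """Successive multiples of g in Z_{orders[0]} x Z_{orders[1]} x ...,
    up to and including the first return to the identity."""
    seen, x = [], tuple(0 for _ in orders)
    while True:
        x = tuple((xi + gi) % ni for xi, gi, ni in zip(x, g, orders))
        seen.append(x)
        if all(xi == 0 for xi in x):
            return seen

assert len(powers((1, 1), (2, 3))) == 6   # (a, b) generates C2 x C3 = C6
assert len(powers((1, 1), (2, 4))) == 4   # but not C2 x C4: order lcm(2,4) = 4
```

The second assertion illustrates Problem 5: when the orders are not relatively prime, (a, b) only reaches lcm(n, m) < nm elements.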

4. Proceed as in Problem 3. If a has order n in Cn and b has order m in Cm, then (a, b) has order nm in Cn × Cm, so that Cn × Cm is cyclic of order nm.

5. Suppose that (a, b) is a generator of the cyclic group Cn × Cm. Then a must generate Cn and b must generate Cm (recall that Cn × {1} can be identified with Cn). But (a, b)^k = 1 iff a^k = b^k = 1, and it follows that the order of (a, b) is the least common multiple of the orders of a and b, i.e., the least common multiple of n and m. Since n and m are not relatively prime, the least common multiple is strictly smaller than nm, so that (a, b) cannot possibly generate Cn × Cm, a contradiction.

6. By (1.3.3), G and H are both cyclic. Since p and q are distinct primes, they are relatively prime, and by Problem 4, G × H is cyclic.

7. Define f: H × K → K × H by f(h, k) = (k, h). It follows from the definition of direct product that f is an isomorphism.

8. Define f1: G × H × K → G × (H × K) by f1(g, h, k) = (g, (h, k)), and define f2: G × H × K → (G × H) × K by f2(g, h, k) = ((g, h), k). It follows from the definition of direct product that f1 and f2 are isomorphisms.

Section 2.1

1. Never. If f is a polynomial whose degree is at least 1, then f cannot have an inverse. For if f(X)g(X) = 1, then the leading coefficient of g would have to be 0, which is impossible.


2. If f (X)g(X) = 1, then (see Problem 1) f and g are polynomials of degree 0, in other

words, elements of R. Thus the units of R[X] are simply the nonzero elements of R.

3. (a) No element of the form a1 X + a2 X^2 + ··· can have an inverse.
(b) For example, 1 − X is a unit because (1 − X)(1 + X + X^2 + X^3 + ···) = 1.

4. Since Z[i] is a subset of the field C of complex numbers, there can be no zero divisors in Z[i]. If w is a nonzero Gaussian integer, then w has an inverse in C, but the inverse need not belong to Z[i]. For example, (1 + i)^{-1} = 1/2 − (1/2)i.

5. If z = a + bi with a and b integers, then |z|^2 = a^2 + b^2, so that if z is not zero, we must have |z| ≥ 1. Thus if zw = 1, so that |z||w| = 1, we have |z| = 1, and the only possibilities are a = 0, b = ±1 or a = ±1, b = 0. Consequently, the units of Z[i] are 1, −1, i and −i.

6. All identities follow directly from the definition of multiplication of quaternions. Alternatively, (b) can be deduced from (a) by interchanging x1 and x2, y1 and y2, z1 and z2, and w1 and w2. Then the second identity of (c) can be deduced by noting that the quaternion on the right side of the equals sign in (a) is the conjugate of the quaternion on the right side of the equals sign in (b).

7. Multiply identities (a) and (b), and use (c). (This is not how Euler discovered the

identity; quaternions were not invented until much later.)

8. The verification that End G is an abelian group under addition uses the fact that G is an abelian group. The additive identity is the zero function, and the additive inverse of f is given by (−f)(a) = −f(a). Multiplication is associative because composition of functions is associative. To establish the distributive laws, note that the value of (f + g)h at the element a ∈ G is f(h(a)) + g(h(a)), so that (f + g)h = fh + gh. Furthermore, the value of f(g + h) at a is f(g(a) + h(a)) = f(g(a)) + f(h(a)) since f is an endomorphism. Therefore f(g + h) = fg + fh. The multiplicative identity is the identity function, given by E(a) = a for all a.

9. An endomorphism that has an inverse must be an isomorphism of G with itself. Thus

the units of the ring End G are the automorphisms of G.

10. Use Euler's identity with x1 = 1, y1 = 2, z1 = 2, w1 = 5 (34 = 1^2 + 2^2 + 2^2 + 5^2) and x2 = 1, y2 = 1, z2 = 4, w2 = 6 (54 = 1^2 + 1^2 + 4^2 + 6^2). The result is 1836 = (34)(54) = 41^2 + 9^2 + 5^2 + 7^2. The decomposition is not unique; another possibility is x1 = 0, y1 = 0, z1 = 3, w1 = 5, x2 = 0, y2 = 1, z2 = 2, w2 = 7.
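Euler's four-square identity is exactly the multiplicativity of the quaternion norm, which is easy to verify in code. A Python sketch (components in the order a + bi + cj + dk; function names ours):

```python
def quat_mult(p, q):
    """Hamilton product of quaternions p = (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def norm(p):
    """Squared norm: the sum of four squares attached to p."""
    return sum(x * x for x in p)

# |pq|^2 = |p|^2 |q|^2, so a product of sums of four squares is again one.
p, q = (1, 2, 2, 5), (1, 1, 4, 6)
assert norm(p) == 34 and norm(q) == 54
assert norm(quat_mult(p, q)) == 34 * 54 == 1836
assert 41**2 + 9**2 + 5**2 + 7**2 == 1836   # the decomposition found above
```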

11. In all four cases, sums and products of matrices of the given type are also of that type. But in (b), there is no matrix of the given form that can serve as the identity. Thus the sets (a), (c) and (d) are rings, but (b) is not.

Section 2.2

1. By Section 1.1, Problem 6, the additive subgroups of Z are of the form (n) = all

multiples of n. But if x ∈ (n) and r ∈ Z then rx ∈ (n), so each (n) is an ideal as well.

2. If the n by n matrix A is 0 except perhaps in column k, and B is any n by n matrix,

then BA is 0 except perhaps in column k. Similarly, if A is 0 off row k, then so is AB.


3. (a) This follows from the definition of matrix multiplication.
(b) In (a) we have a_{jr} = 0 for r ≠ k, and the result follows.
(c) By (b), the ith term of the sum is a matrix with c_{ik} in the ik position, and 0's elsewhere. The sum therefore coincides with C.

4. The statement about left ideals follows from the formula of Problem 3(c). The result for right ideals is proved in a similar fashion. Explicitly, AE_{ij} has column i of A as its jth column, with 0's elsewhere. If A ∈ R_k then AE_{ij} has a_{ki} in the kj position, with 0's elsewhere, so if a_{ki} ≠ 0 we have AE_{ij} a_{ki}^{-1} = E_{kj}. Thus if C ∈ R_k then

Σ_{j=1}^{n} AE_{ij} a_{ki}^{-1} c_{kj} = C.

5. If I is a two-sided ideal and A ∈ I with a_{rs} ≠ 0, then by considering products of the form a_{rs}^{-1} E_{pq} A E_{kl} (which have the effect of selecting an entry of A and sliding it from one row or column to another), we can show that every matrix E_{ij} belongs to I. Since every matrix is a linear combination of the E_{ij}, it follows that I = Mn(R).

6. A polynomial with no constant term is of the form a_1 X + a_2 X^2 + · · · + a_n X^n = Xg(X). Conversely, a polynomial expressible as Xg(X) has no constant term. Thus we may take f = X.

7. Let a be a nonzero element of R. Then the principal ideal (a) is not {0}, so (a) = R.

Thus 1 ∈ (a), so there is an element b ∈ R such that ab = 1.

8. Since an ideal I is a finite set in this case, it must have a finite set of generators x_1, . . . , x_k. Let d be the greatest common divisor of the x_i. Every element of I is of the form a_1 x_1 + · · · + a_k x_k, and hence is a multiple of d. Thus I ⊆ (d). But d ∈ I, because there are integers a_i such that Σ_i a_i x_i = d. Consequently, (d) ⊆ I. [Technically, arithmetic is modulo n, but we get around this difficulty by noting that if ab = c as integers, then ab ≡ c modulo n.]

Section 2.3

1. Use the same maps as before, and apply the first isomorphism theorem for rings.

2. If I_n is the set of multiples of n > 1 in the ring of integers, then I_n is an ideal but not a subring (since 1 ∉ I_n). Z is a subring of the rational numbers Q but not an ideal, since a rational number times an integer need not be an integer.

3. In parts (2) and (3) of the Chinese remainder theorem, take R = Z and Ii = the set of

multiples of mi .

4. Apply part (4) of the Chinese remainder theorem with R = Z and Ii = the set of

multiples of mi .
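Problems 3 and 4 invoke the Chinese remainder theorem for R = Z. For concreteness, here is a hedged Python sketch (the function name is an assumption of this sketch) that solves a system x ≡ a_i (mod m_i) for pairwise relatively prime moduli:

```python
from math import gcd

def crt(residues, moduli):
    """Solve x = a_i (mod m_i) for pairwise coprime moduli; return x mod prod(m_i)."""
    x, m = 0, 1
    for a, mi in zip(residues, moduli):
        assert gcd(m, mi) == 1, "moduli must be pairwise coprime"
        # adjust x by a multiple of m so the new congruence also holds;
        # pow(m, -1, mi) is the modular inverse (Python 3.8+)
        t = ((a - x) * pow(m, -1, mi)) % mi
        x += m * t
        m *= mi
    return x % m

x = crt([2, 3, 2], [3, 5, 7])
print(x)  # 23: x = 2 (mod 3), 3 (mod 5), 2 (mod 7)
```

The loop is exactly the inductive step in part (2) of the theorem: each congruence is satisfied without disturbing the ones already handled.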

5. To prove the first statement, define f : R → R_2 by f(r_1, r_2) = r_2. Then f is a ring homomorphism with kernel R_1' and image R_2. By the first isomorphism theorem for rings, R/R_1' ≅ R_2. A symmetrical argument proves the second statement. In practice, we tend to forget about the primes and write R/R_1 ≅ R_2 and R/R_2 ≅ R_1. There is also a tendency to identify a ring with its isomorphic copy, and write R/R_1 = R_2 and R/R_2 = R_1. This should not cause any difficulty if you add, mentally at least, "up to isomorphism".

6. The product is always a subset of the intersection, by definition. First consider the case of two ideals. Then 1 = a_1 + a_2 for some a_1 ∈ I_1, a_2 ∈ I_2. If b ∈ I_1 ∩ I_2, then b = b·1 = ba_1 + ba_2 ∈ I_1 I_2. The case of more than two ideals is handled by induction. Note that R = (I_1 + I_n)(I_2 + I_n) · · · (I_{n-1} + I_n) ⊆ (I_1 · · · I_{n-1}) + I_n. Therefore (I_1 · · · I_{n-1}) + I_n = R. By the n = 2 case and the induction hypothesis, I_1 · · · I_{n-1} I_n = (I_1 · · · I_{n-1}) ∩ I_n = I_1 ∩ I_2 ∩ · · · ∩ I_n.

7. Let a + ∩_i I_i map to (1 + I_1, 0 + I_2, c_3 + I_3, . . . , c_n + I_n), where the c_j are arbitrary. Then 1 − a ∈ I_1 and a ∈ I_2, so 1 = (1 − a) + a ∈ I_1 + I_2. Thus I_1 + I_2 = R, and similarly I_i + I_j = R for all i ≠ j.

Section 2.4

1. If n = rs with r, s > 1 then r ∉ ⟨n⟩, s ∉ ⟨n⟩, but rs ∈ ⟨n⟩, so that ⟨n⟩ is not prime. But Z/⟨n⟩ is isomorphic to Z_n, the ring of integers modulo n (see Section 2.3, Problem 1). If n is prime, then Z_n is a field, in particular an integral domain, hence ⟨n⟩ is a prime ideal by (2.4.5).

2. By Problem 1, I is of the form ⟨p⟩ where p is prime. Since Z/⟨p⟩ is isomorphic to Z_p, which is a field, ⟨p⟩ is maximal by (2.4.3).

3. The epimorphism a_0 + a_1 X + a_2 X^2 + · · · → a_0 of F[[X]] onto F has kernel ⟨X⟩, and the result follows from (2.4.7).

4. The ideal I = ⟨2, X⟩ is not proper; since 2 ∈ I and 1/2 ∈ F ⊆ F[[X]], we have 1 ∈ I and therefore I = F[[X]]. The key point is that F is a field, whereas Z is not.

5. Suppose that f(X) = a_0 + a_1 X + · · · belongs to I but not to ⟨X⟩. Then a_0 cannot be 0, so by ordinary long division we can find g(X) ∈ F[[X]] such that f(X)g(X) = 1. But then 1 ∈ I, contradicting the assumption that I is proper.

6. Let f(X) = a_n X^n + a_{n+1} X^{n+1} + · · ·, a_n ≠ 0, be an element of the ideal I, with n as small as possible. Then f(X) ∈ (X^n), and if g(X) is any element of I, we have g(X) ∈ (X^m) for some m ≥ n. Thus I ⊆ (X^n). Conversely, if f(X) = X^n g(X) ∈ I, with g(X) = a_n + a_{n+1} X + · · ·, a_n ≠ 0, then as in Problem 5, g(X) is a unit, and therefore X^n ∈ I. Thus (X^n) ⊆ I, so that I = (X^n), as claimed.

7. f^{-1}(P) is an additive subgroup by (1.3.15), part (ii). If a ∈ f^{-1}(P) and r ∈ R, then f(ra) = f(r)f(a) ∈ P, so ra ∈ f^{-1}(P). Thus f^{-1}(P) is an ideal. If ab ∈ f^{-1}(P), then f(a)f(b) ∈ P, so either a or b must belong to f^{-1}(P). If f^{-1}(P) = R, then f maps everything in R, including 1, into P, which is impossible since P is proper; thus f^{-1}(P) is proper. (Another method: As a proper ideal, P is the kernel of some ring homomorphism π. Consequently, f^{-1}(P) is the kernel of π ∘ f, which is also a ring homomorphism. Therefore f^{-1}(P) is a proper ideal.) Consequently, f^{-1}(P) is prime.

8. Let S be a field, and R an integral domain contained in S, and assume that R is not a field. For example, let R = Z, S = Q. Take f to be the inclusion map. Then {0} is a maximal ideal of S, but f^{-1}({0}) = {0} is a prime but not maximal ideal of R.


9. If P = I ∩ J with P ⊂ I and P ⊂ J, choose a ∈ I \ P and b ∈ J \ P. Then ab ∈ I ∩ J = P, contradicting the assumption that P is prime.

Section 2.5

1. Any number that divides a and b divides b and r_1, and conversely, any number that divides b and r_1 divides a and b. Iterating this argument, we find that gcd(a, b) = gcd(b, r_1) = gcd(r_1, r_2) = · · · = gcd(r_{j-1}, r_j) = r_j.

2. This follows by successive substitution. We start with r_j = r_{j-2} − r_{j-1} q_j, continue with r_{j-1} = r_{j-3} − r_{j-2} q_{j-1}, r_{j-2} = r_{j-4} − r_{j-3} q_{j-2}, and proceed up the ladder until we have expressed d as a linear combination of a and b. There is an easier way, as Problem 3 shows.

3. The first equation of the three describes the steps of the algorithm. We wish to prove that a x_i + b y_i = r_i, that is,

a(x_{i-2} − q_i x_{i-1}) + b(y_{i-2} − q_i y_{i-1}) = r_i.   (1)

But this follows by induction: if a x_{i-2} + b y_{i-2} = r_{i-2} and a x_{i-1} + b y_{i-1} = r_{i-1}, then the left side of (1) is r_{i-2} − q_i r_{i-1}, which is r_i by definition of the Euclidean algorithm.

4. We have the following table:

     i   q_{i+1}   r_i   x_i   y_i
    -1      —      123    1     0
     0      2       54    0     1
     1      3       15    1    -2
     2      1        9   -3     7
     3      1        6    4    -9
     4      2        3   -7    16

For example, to go from i = 1 to i = 2 we have x_2 = x_0 − q_2 x_1 = 0 − 3(1) = −3, y_2 = y_0 − q_2 y_1 = 1 − 3(−2) = 7, and r_2 = r_0 − q_2 r_1 = 54 − 3(15) = 9; also, q_3 = ⌊15/9⌋ = 1. We have a x_2 + b y_2 = 123(−3) + 54(7) = 9 = r_2, as expected. The process terminates with 123(−7) + 54(16) = 3 = d.
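The table above can be regenerated with a short Python version of the extended Euclidean algorithm; this is a sketch whose variable names follow the x_i, y_i of Problem 3:

```python
def ext_gcd(a, b):
    # maintain the invariants a*x + b*y = r and a*x2 + b*y2 = r2
    x, y, r = 1, 0, a       # row i-1 of the table
    x2, y2, r2 = 0, 1, b    # row i
    while r2 != 0:
        q = r // r2
        x, x2 = x2, x - q*x2
        y, y2 = y2, y - q*y2
        r, r2 = r2, r - q*r2
    return r, x, y  # gcd and coefficients: a*x + b*y = gcd

d, x, y = ext_gcd(123, 54)
print(d, x, y)  # 3 -7 16, i.e. 123*(-7) + 54*16 = 3
```

Each pass of the loop produces one new row of the table, exactly as in the worked example.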

5. If p is composite, say p = rs with 1 < r < p, 1 < s < p, then rs is 0 in Zp but r

and s are nonzero, so Zp is not a ¬eld. If p is prime and a is not zero in Zp then the

greatest common divisor of a and p is 1, and consequently there are integers x and

y such that ax + py = 1. In Zp this becomes ax = 1, so that every nonzero element

in Zp has an inverse in Zp , proving that Zp is a ¬eld.
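The argument of Problem 5 is constructive: the inverse of a in Z_p comes from the coefficients in ax + py = 1. A quick Python check, using the built-in modular inverse (available as pow(a, -1, p) in Python 3.8+):

```python
p = 13  # any prime works here
for a in range(1, p):
    inv = pow(a, -1, p)       # the x with a*x = 1 (mod p), from ax + py = 1
    assert (a * inv) % p == 1
print("every nonzero element of Z_%d is invertible" % p)
```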

6. Since f (X) and g(X) are multiples of d(X), so are all linear combinations a(X)f (X)+

b(X)g(X), and consequently I ⊆ J. By Problem 2, there are polynomials a(X) and

b(X) such that a(X)f (X) + b(X)g(X) = d(X), so that d(X) belongs to I. Since I is

an ideal, every multiple of d(X) belongs to I, and therefore J ⊆ I.

7. Take f(X) = Σ_{i=0}^n b_i P_i(X).

8. If g(X) is another polynomial such that g(a_i) = f(a_i) for all i, then f and g agree at n + 1 points, so that f(X) − g(X) has more than n roots in F. By (2.5.3), f(X) − g(X) must be the zero polynomial.
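Problems 7 and 8 together say that interpolation through n + 1 points has a unique solution of degree at most n. A hedged Python sketch over Q, using exact Fraction arithmetic (the function, whose name is an assumption here, evaluates the Lagrange form Σ b_i P_i(X) at a point rather than expanding coefficients):

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the unique interpolating polynomial through (a_i, b_i) at x."""
    x = Fraction(x)
    total = Fraction(0)
    for i, (ai, bi) in enumerate(points):
        term = Fraction(bi)
        for j, (aj, _) in enumerate(points):
            if j != i:
                term *= (x - aj) / (ai - aj)  # P_i vanishes at the other a_j
        total += term
    return total

# interpolate f with f(0)=1, f(1)=2, f(2)=5 (values taken from f(X) = X^2 + 1)
pts = [(0, 1), (1, 2), (2, 5)]
print(lagrange_eval(pts, 3))  # 10 = 3^2 + 1
```

By Problem 8, any polynomial of degree ≤ 2 through these three points must coincide with X^2 + 1.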

9. If F has only finitely many elements a_1, . . . , a_n, take f(X) = (X − a_1) · · · (X − a_n).

10. Let F be the complex numbers C. Then every polynomial of degree n has exactly n roots, counting multiplicity. Thus if f(a) = 0 at more than n points a, in particular if f vanishes at every point of C, then f = 0. More generally, F can be any infinite field (use (2.5.3)).

Section 2.6

1. If r ≠ 0 then I contains a unit, so that 1 ∈ I and I = R.

2. If b ∉ ⟨p_1⟩ then b + ⟨p_1⟩ ≠ ⟨p_1⟩, so b + ⟨p_1⟩ has an inverse in R/⟨p_1⟩, say c + ⟨p_1⟩. Thus (b + ⟨p_1⟩)(c + ⟨p_1⟩) = 1 + ⟨p_1⟩, hence (bc − 1) + ⟨p_1⟩ = ⟨p_1⟩, so bc − 1 ∈ ⟨p_1⟩.

3. If bc − dp_1 = 1 then bc p_2 · · · p_n − d p_1 · · · p_n = p_2 · · · p_n, and since b and p_1 · · · p_n belong to I, so does p_2 · · · p_n, contradicting the minimality of n. (If n = 1, then 1 ∈ I, so I = R.)

4. If a, b ∈ R and x, y ∈ J then (ax + by)p1 = xp1 a + yp1 b. Since x, y ∈ J we have xp1 ∈ I

and yp1 ∈ I, so that (ax + by)p1 ∈ I, hence ax + by ∈ J.

5. If x ∈ J then xp_1 ∈ I, so Jp_1 ⊆ I. Now I ⊆ ⟨p_1⟩ by Problem 3, so if a ∈ I then a = xp_1 for some x ∈ R. But then x ∈ J by definition of J, so a = xp_1 ∈ Jp_1.

6. Since J contains a product of fewer than n primes, J is principal by the induction hypothesis. If J = ⟨d⟩ then by Problem 5, I = Jp_1. But then I = ⟨dp_1⟩, and the result follows. (If n = 1, then p_1 ∈ I, hence 1 ∈ J, so J = R and I = Jp_1 = ⟨p_1⟩.)

7. Assume that P ⊆ Q. Then p = aq for some a ∈ R, so aq ∈ P. Since P is prime, either a or q belongs to P. In the second case, Q ⊆ P and we are finished. Thus assume a ∈ P, so that a = bp for some b ∈ R. Then p = aq = bpq, and since R is an integral domain and p ≠ 0, we have bq = 1, so q is a unit and Q = R, a contradiction of the assumption that Q is prime.

8. Let x be a nonzero element of P, with x = u p_1 · · · p_n, u a unit and the p_i prime. Then p_1 · · · p_n = u^{-1} x ∈ P, and since P is prime, some p_i belongs to P. Thus P contains the nonzero principal prime ideal ⟨p_i⟩.

Section 2.7

1. If m is a generator of the indicated ideal then m belongs to all ⟨a_i⟩, so each a_i divides m. If each a_i divides b then b is in every ⟨a_i⟩, so b ∈ ∩_{i=1}^n ⟨a_i⟩ = ⟨m⟩, so m divides b. Thus m is a least common multiple of A. Now suppose that m is an lcm of A, and let ∩_{i=1}^n ⟨a_i⟩ = ⟨c⟩. Then c belongs to every ⟨a_i⟩, so each a_i divides c. Since m = lcm(A), m divides c, so ⟨c⟩ is a subset of ⟨m⟩. But again since m = lcm(A), each a_i divides m, so m ∈ ∩_{i=1}^n ⟨a_i⟩ = ⟨c⟩. Therefore ⟨m⟩ ⊆ ⟨c⟩, hence ⟨m⟩ = ⟨c⟩, and m is a generator of ∩_{i=1}^n ⟨a_i⟩.

2. Let a = 11 + 3i, b = 8 − i. Then a/b = (11 + 3i)(8 + i)/65 = 85/65 + i 35/65. Thus we may take x_0 = y_0 = 1, and the first quotient is q_1 = 1 + i. The first remainder is r_1 = a − bq_1 = (11 + 3i) − (8 − i)(1 + i) = 2 − 4i. The next step in the Euclidean algorithm is (8 − i)/(2 − 4i) = (8 − i)(2 + 4i)/20 = 1 + (3i/2). Thus the second quotient is q_2 = 1 + i (q_2 = 1 + 2i would be equally good). The second remainder is r_2 = (8 − i) − (2 − 4i)(1 + i) = 2 + i. The next step is (2 − 4i)/(2 + i) = (2 − 4i)(2 − i)/5 = −2i, so q_3 = −2i, r_3 = 0. The gcd is the last divisor, namely 2 + i.
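The Euclidean algorithm of Problem 2 mechanizes directly. Here is a hedged Python sketch using complex numbers with integer parts, rounding each quotient to the nearest Gaussian integer; the quotient it picks at a step may differ from the one chosen above, but the resulting gcd agrees up to a unit ±1, ±i:

```python
def gauss_gcd(a, b):
    # Euclidean algorithm in Z[i]; round a/b to the nearest Gaussian integer
    while b != 0:
        q = a / b
        q = complex(round(q.real), round(q.imag))
        a, b = b, a - q * b
    return a

g = gauss_gcd(complex(11, 3), complex(8, -1))
print(g)                      # an associate of 2 + i
print(g.real**2 + g.imag**2)  # norm 5, matching gcd = 2 + i
```

Rounding to the nearest Gaussian integer guarantees the remainder has strictly smaller norm, which is what makes Z[i] a Euclidean domain.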

3. We have Ψ(1) ≤ Ψ(1 · a) = Ψ(a) for every nonzero a. If a is a unit with ab = 1, then Ψ(a) ≤ Ψ(ab) = Ψ(1), so Ψ(a) = Ψ(1). Conversely, suppose that a ≠ 0 and Ψ(a) = Ψ(1). Divide 1 by a to get 1 = aq + r, where r = 0 or Ψ(r) < Ψ(a) = Ψ(1). But if r ≠ 0 then Ψ(r) must be greater than or equal to Ψ(1), so we must have r = 0. Therefore 1 = aq, and a is a unit.

4. Ψ((a_1 + b_1√d)(a_2 + b_2√d)) = Ψ(a_1 a_2 + b_1 b_2 d + (a_1 b_2 + a_2 b_1)√d)

= |(a_1 a_2 + b_1 b_2 d + (a_1 b_2 + a_2 b_1)√d)(a_1 a_2 + b_1 b_2 d − (a_1 b_2 + a_2 b_1)√d)|;

Ψ(a_1 + b_1√d) Ψ(a_2 + b_2√d) = |(a_1 + b_1√d)(a_1 − b_1√d)| |(a_2 + b_2√d)(a_2 − b_2√d)|,

and it follows that Ψ(αβ) = Ψ(α)Ψ(β). Now Ψ(α) ≥ 1 for all nonzero α, for if Ψ(α) = |a^2 − db^2| = 0, then a^2 = db^2. But if b ≠ 0 then d = (a/b)^2, contradicting the assumption that d is not a perfect square. Thus b = 0, so a is 0 as well, and α = 0, a contradiction. Thus Ψ(αβ) = Ψ(α)Ψ(β) ≥ Ψ(α).

√ √

5. Either d or d ’ 1 is √ divides d(d ’ 1) = d2 ’ d = (d + d)(d ’ d). But 2

even, so 2 √ √ √

does not divide d + d or √ ’ d. For example, if 2(a + b d) = d + d) for integers

d √ √

a, b then 2a ’ d = (1 ’ 2b) d, which is impossible since d is irrational. (If d = r/s

then r2 = ds2 , which cannot happen if d is not a perfect square.)

6. Define Ψ as in Problem 4 (and Example (2.7.5)). Suppose 2 = αβ where α and β are nonunits in Z[√d]. Then 4 = Ψ(2) = Ψ(α)Ψ(β), with Ψ(α), Ψ(β) > 1 by Problems 3 and 4. But then Ψ(α) = Ψ(β) = 2. If α = a + b√d then |a^2 − db^2| = 2, so a^2 − db^2 is either 2 or −2. Therefore if b ≠ 0 (so that b^2 ≥ 1), then since d ≤ −3 we have

a^2 − db^2 ≥ 0 + 3(1) = 3,

a contradiction. Thus b = 0, so α = a, and 2 = Ψ(a) = a^2, an impossibility for a ∈ Z.

7. This follows from Problems 5 and 6, along with (2.6.4).

8. Just as with ordinary integers, the product of two Gaussian integers is their greatest common divisor times their least common multiple. Thus by Problem 2, the lcm is (11 + 3i)(8 − i)/(2 + i) = 39 − 13i.

9. If α = βγ, then Ψ(α) = Ψ(β)Ψ(γ). By hypothesis, either Ψ(β) or Ψ(γ) is 1 (= Ψ(1)). By Problem 3, either β or γ is a unit.

Section 2.8

1. If D is a field, then the quotient field F, which can be viewed as the smallest field containing D, is D itself. Strictly speaking, F is isomorphic to D; the embedding map f(a) = a/1 is surjective, hence an isomorphism. To see this, note that if a/b ∈ F, then a/b = ab^{-1}/1 = f(ab^{-1}).

2. The quotient field consists of all rational functions f(X)/g(X), where f(X) and g(X) are polynomials in F[X] and g(X) is not the zero polynomial. To see this, note that the collection of rational functions is in fact a field, and any field containing F[X] must contain all such rational functions.

3. (a/b) + ((c/d) + (e/f)) and ((a/b) + (c/d)) + (e/f) both compute to be (adf + bcf + bde)/(bdf).

4. (a/b)((c/d) + (e/f)) = (a/b)((cf + de)/(df)) = (acf + ade)/(bdf), and

(a/b)(c/d) + (a/b)(e/f) = (ac/bd) + (ae/bf) = (acbf + bdae)/(b^2 df) = (acf + dae)/(bdf) = (acf + ade)/(bdf).
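The identities in Problems 3 and 4 can be spot-checked with exact rational arithmetic; a minimal sketch:

```python
from fractions import Fraction as F
from itertools import product

# check associativity of + and distributivity of * over + for sample fractions
vals = [F(1, 2), F(-2, 3), F(5, 7)]
for a, c, e in product(vals, repeat=3):
    assert a + (c + e) == (a + c) + e      # Problem 3
    assert a * (c + e) == a * c + a * e    # Problem 4
print("checked", len(vals) ** 3, "triples")
```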

5. If g is any extension of h and a/b ∈ F, there is only one possible choice for g(a/b), namely h(a)/h(b). (Since b ≠ 0 and h is a monomorphism, h(b) ≠ 0.) If we define g this way, then g(a) = g(a/1) = h(a)/h(1) = h(a), so g is in fact an extension of h. Furthermore, if a/b = c/d then since h is a monomorphism, h(a)/h(b) = h(c)/h(d). Therefore g is well-defined. Again since h is a monomorphism, it follows that g((a/b) + (c/d)) = g(a/b) + g(c/d) and g((a/b)(c/d)) = g(a/b)g(c/d). Since g is an extension of h, we have g(1) = 1, so g is a homomorphism. Finally, if g(a/b) = 0, then h(a) = 0, so a = 0 by injectivity of h. Thus g is a monomorphism.

6. The problem is that h is not injective. As before, if g is to be an extension of h, we

must have g(a/b) = h(a)/h(b). But if b is a multiple of p, then h(b) is zero, so no

such g can exist.

7. We must have g(a/b) = g(a/1)g((b/1)^{-1}) = g(a)g(b)^{-1}.

8. If a/b = c/d, then for some s ∈ S we have s(ad − bc) = 0. So g(s)[g(a)g(d) − g(b)g(c)] = 0. Since g(s) is a unit, we may multiply by its inverse to get g(a)g(d) = g(b)g(c), hence g(a)g(b)^{-1} = g(c)g(d)^{-1}, proving that g is well-defined. To show that g is a homomorphism, we compute

g((a/b) + (c/d)) = g((ad + bc)/bd) = g(ad + bc)g(bd)^{-1} = [g(a)g(d) + g(b)g(c)]g(b)^{-1}g(d)^{-1} = g(a/b) + g(c/d).

Similarly, we have g((a/b)(c/d)) = g(a/b)g(c/d) and g(1) = 1.

Section 2.9

1. We have a_n (u/v)^n + a_{n-1} (u/v)^{n-1} + · · · + a_1 (u/v) + a_0 = 0; multiply by v^n to get

a_n u^n + a_{n-1} u^{n-1} v + · · · + a_1 u v^{n-1} + a_0 v^n = 0.

Therefore

a_n u^n = −a_{n-1} u^{n-1} v − · · · − a_1 u v^{n-1} − a_0 v^n.

Since v divides the right side of this equation, it must divide the left side as well, and since u and v are relatively prime, v must divide a_n. Similarly,

a_0 v^n = −a_n u^n − a_{n-1} u^{n-1} v − · · · − a_1 u v^{n-1},

so u divides a_0.
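Problem 1 is the rational root theorem: any root u/v in lowest terms of a_n X^n + · · · + a_0 has u | a_0 and v | a_n. A small Python sketch (function name and example polynomial are my own) that enumerates the finitely many candidates:

```python
from fractions import Fraction

def rational_roots(coeffs):
    """coeffs = [a_0, a_1, ..., a_n] with integer entries, a_0 and a_n nonzero."""
    a0, an = abs(coeffs[0]), abs(coeffs[-1])
    divisors = lambda m: [d for d in range(1, m + 1) if m % d == 0]
    roots = set()
    for u in divisors(a0):
        for v in divisors(an):
            for r in (Fraction(u, v), Fraction(-u, v)):
                if sum(c * r**i for i, c in enumerate(coeffs)) == 0:
                    roots.add(r)
    return roots

# 2X^3 - 3X^2 - 3X + 2 has roots -1, 1/2, 2
print(rational_roots([2, -3, -3, 2]))
```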

2. X^n − p satisfies Eisenstein's criterion, and since the polynomial is primitive, it is irreducible over Z.

3. f_3(X) = X^3 + 2X + 1, which is irreducible over Z_3. For if f_3(X) were reducible over Z_3, it would have a linear factor (since it is a cubic), necessarily X − 1 or X + 1 (= X − 2). But then 1 or 2 would be a root of f_3, a contradiction since f_3(1) = 1 and f_3(2) = 1 (mod 3).

4. By Eisenstein, X^4 + 3 is irreducible over Z. The substitution X = Y + 1 yields Y^4 + 4Y^3 + 6Y^2 + 4Y + 4, which is therefore irreducible in Z[Y]. Thus X^4 + 4X^3 + 6X^2 + 4X + 4 is irreducible in Z[X], i.e., irreducible over Z.

5. Note that ⟨n, X⟩ is a proper ideal since it cannot contain 1. If ⟨n, X⟩ = ⟨f⟩ then n ∈ ⟨f⟩, so n is a multiple of f. Thus f is a constant (≠ ±1, since the ideal is proper), in which case X ∉ ⟨f⟩, a contradiction.

6. Since 1 ∉ ⟨X, Y⟩, ⟨X, Y⟩ is a proper ideal. Suppose ⟨X, Y⟩ = ⟨f⟩. Then Y is a multiple of f, so f is a polynomial in Y alone (in fact f = cY). But then X ∉ ⟨f⟩, a contradiction.

7. If p = X + i, then p is irreducible since X + i is of degree 1. Furthermore, p divides X^2 + 1 but p^2 does not. Take the ring R to be C[X, Y] = (C[X])[Y] and apply Eisenstein's criterion.

8. Write f(X, Y) as Y^3 + (X^3 + 1) and take p = X + 1. Since X^3 + 1 = (X + 1)(X^2 − X + 1) and X + 1 does not divide X^2 − X + 1, the result follows as in Problem 7.

Section 3.1

1. F(S) consists of all quotients of finite linear combinations (with coefficients in F) of finite products of elements of S. To prove this, note first that the set A of all such quotients is a field. Then observe that any field containing F and S must contain A, in particular, A ⊆ F(S). But F(S) is the smallest subfield containing F and S, so F(S) ⊆ A.

2. The composite consists of all quotients of finite sums of products of the form x_{i_1} x_{i_2} · · · x_{i_n}, n = 1, 2, . . . , where i_1, i_2, . . . , i_n ∈ I and x_{i_j} ∈ K_{i_j}. As in Problem 1, the set A of all such quotients is a field, and any field that contains all the K_i must contain A.

3. By (3.1.9), [F[α] : F] = [F[α] : F[β]][F[β] : F], and since the degree of any extension is at least 1, the result follows.

4. Let min(−1 + √2, Q) = a_0 + a_1 X + X^2 (a polynomial of degree 1 cannot work because −1 + √2 ∉ Q). Then a_0 + a_1(−1 + √2) + (−1 + √2)^2 = 0. Since (−1 + √2)^2 = 3 − 2√2, we have a_0 − a_1 + 3 = 0 and a_1 − 2 = 0, so a_0 = −1, a_1 = 2. Therefore min(−1 + √2, Q) = X^2 + 2X − 1.

5. Let β = b_0 + b_1 α + · · · + b_{n-1} α^{n-1}. Then for some a_0, . . . , a_n ∈ F we have a_0 + a_1 β + · · · + a_n β^n = 0. Substituting the expression for β in terms of α into this equation, reducing to a polynomial in α of degree at most n − 1 (as in the proof of (3.1.7)), and setting the coefficients of the α^i, i = 0, 1, . . . , n − 1, equal to zero (remember that the α^i form a basis for F[α] over F), we get n linear equations in the n + 1 unknowns a_i, i = 0, . . . , n. We know that a solution exists because β is algebraic over F. By brute force (try a_i = 1, a_j = 0, j > i for i = 1, 2, . . . , n), we will eventually arrive at the minimal polynomial.

6. Define φ : F(X) → E by φ(f(X)/g(X)) = f(α)/g(α). Note that φ is well-defined, since if g is a nonzero polynomial, then g(α) ≠ 0 (because α is transcendental over F). By (3.1.2), φ is a monomorphism. Since φ(F(X)) = F(α), it follows that F(X) and F(α) are isomorphic.

7. The kernel of φ is I, and as in (3.1.3), F[X]/I is a field. The image of φ is F[α], and by the first isomorphism theorem for rings, F[α] is isomorphic to F[X]/I. Therefore F[α] is a field, and consequently F[α] = F(α).

8. If f = gh, then (g + I)(h + I) = 0 in F [X]/I, so F [X]/I is not a ¬eld. By (2.4.3), I

is not maximal.

9. The minimal polynomial of α over F belongs to F[X] ⊆ E[X], and has α as a root. Thus min(α, E) divides min(α, F).

10. The result is true for n = 1; see (3.1.7). Let E = F[α_1, . . . , α_{n-1}], so that [F[α_1, . . . , α_n] : F] = [F[α_1, . . . , α_n] : E][E : F] = [E[α_n] : E][E : F]. But [E[α_n] : E] is the degree of the minimal polynomial of α_n over E, which is at most the degree of the minimal polynomial of α_n over F, by Problem 9. An application of the induction hypothesis completes the proof.

Section 3.2

1. f(X) = (X − 2)^2, so we may take the splitting field K to be Q itself.

2. f(X) = (X − 1)^2 + 3, with roots 1 ± i√3, so K = Q(i√3). Now i√3 ∉ Q since (i√3)^2 = −3 < 0, so [K : Q] ≥ 2. But i√3 is a root of X^2 + 3, so [K : Q] ≤ 2. Therefore [K : Q] = 2.

3. Let α be the positive 4th root of 2. The roots of f(X) are α, iα, −α and −iα. Thus K = Q(α, i). Now f(X) is irreducible by Eisenstein, so [Q(α) : Q] = 4. Since i ∉ Q(α) and i is a root of X^2 + 1 ∈ Q(α)[X], we have [K : Q(α)] = 2. By (3.1.9), [K : Q] = 2 × 4 = 8.

4. The argument of (3.2.1) may be reproduced, with the polynomial f replaced by the family C of polynomials, and the roots α_1, . . . , α_k of f by the collection of all roots of the polynomials in the family C.

5. Take f = f_1 · · · f_r. Since α is a root of f iff α is a root of some f_i, the result follows.

6. If the degree is less than 4, it must be 2 (since √m ∉ Q). In this case, √n = a + b√m, so n = a^2 + b^2 m + 2ab√m. Since m is square-free, we must have a = 0 or b = 0, and the latter is impossible because n is square-free. Thus √n = b√m, so n = b^2 m, a contradiction of the hypothesis that m and n are distinct and square-free.

Section 3.3

1. If α_1, . . . , α_n form a basis for E over F, then E is generated over F by the α_i. Each α_i is algebraic over F because F(α_i) ⊆ E, and (3.1.10) applies.

2. There are only countably many polynomials with rational coefficients, and each such polynomial has only finitely many roots. Since an algebraic number must be a root of one of these polynomials, the set of algebraic numbers is countable. Since the complex field is uncountably infinite, there are uncountably many transcendental numbers.

3. The complex field C is algebraically closed, and C is an extension of the rational field Q. But C is not algebraic over Q, by Problem 2.

4. The algebraic numbers A form a field by (3.3.4), and A is algebraic over Q by definition. But it follows from Section 2.9, Problem 2, that A contains subfields of arbitrarily high degree (in fact subfields of every degree) over Q, so that A/Q is not finite.

5. This can be verified by transfinite induction. A splitting field is always an algebraic extension (see (3.2.2)), and the field F_{<f} is algebraic over F by the induction hypothesis. The result follows from (3.3.5).

6. By definition of algebraic number, A is an algebraic extension of Q. If α is algebraic over A, then as in (3.3.5), α is algebraic over Q, so α ∈ A. Thus A has no proper algebraic extensions, so by (3.3.1), A is algebraically closed.

7. Since E is an extension of F we have |F| ≤ |E|. Suppose that α ∈ E and the minimal polynomial f of α has roots α_1, . . . , α_n, with α = α_i. Then the map α → (f, i) is injective, since f and i determine α. It follows that |E| ≤ |F[X]| · ℵ_0 = |F[X]|. But for each n, the set of polynomials of degree n over F has cardinality |F|^{n+1} = |F|, so |F[X]| = |F| · ℵ_0 = |F|. Thus |E| = |F|.

8. Let C be an algebraic closure of F, and let A be the set of roots in C of all polynomials in S. Then F(A), the field generated over F by the elements of A, is a splitting field for S over F; see Section 3.2, Problem 4.

9. If F is a finite field with elements a_1, . . . , a_n, the polynomial f(X) = 1 + Π_{i=1}^n (X − a_i) has no root in F, so F cannot be algebraically closed.

Section 3.4

1. Let f(X) = (X − 1)^p over F_p.

2. α is a root of X^p − α^p = (X − α)^p, so m(X) divides (X − α)^p.
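Problems 1 and 2 rest on the fact that (X − α)^p = X^p − α^p in characteristic p, because the interior binomial coefficients C(p, k), 0 < k < p, are divisible by p. A quick check:

```python
from math import comb

for p in (2, 3, 5, 7, 11):
    # interior binomial coefficients vanish mod p, so (X - 1)^p = X^p - 1 over F_p
    assert all(comb(p, k) % p == 0 for k in range(1, p))
print("C(p, k) = 0 (mod p) for 0 < k < p, p prime")
```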

3. By Problem 2, m(X) = (X − α)^r for some r. We are assuming that α is separable over F(α^p), so m(X) must be simply X − α. But then α ∈ F(α^p).

4. The "if" part follows from the proof of (3.4.5), so assume that F is perfect and let b ∈ F. Let f(X) = X^p − b and adjoin a root α of f. Then α^p = b, so F(α^p) = F(b) = F. By hypothesis, α is separable over F = F(α^p), so by Problem 3, α ∈ F. But then b is the pth power of an element of F.

5. If α_1, . . . , α_n is a basis for E over F, then by the binomial expansion mod p, K = F(α_1^p, . . . , α_n^p). Now since E/F is algebraic, the elements of F(α_1^p) can be expressed as polynomials in α_1^p with coefficients in F. Continuing, α_2^p is algebraic over F, hence over F(α_1^p), so each element of F(α_1^p, α_2^p) can be written as a polynomial in α_2^p with coefficients in F(α_1^p). Such an element has the form

Σ_s Σ_r b_{rs} α_1^{pr} α_2^{ps}

with the b_{rs} ∈ F. An induction argument completes the proof.

6. Extend the y_i to a basis y_1, . . . , y_n for E over F. By Problem 5, every element of E (= F(E^p)) has the form y = a_1 y_1^p + · · · + a_n y_n^p with the a_i ∈ F. Thus {y_1^p, . . . , y_n^p} spans E over F. It follows that this set contains a basis, hence (since there are exactly n vectors in the set) the set is a basis for E over F. The result follows.

7. Assume the extension is separable, and let α ∈ E. Then α is separable over F, hence over F(α^p), so by Problem 3, α ∈ F(E^p). Thus E = F(E^p). Conversely, suppose that E = F(E^p) and the element α ∈ E has an inseparable minimal polynomial m(X). By (3.4.3), m(X) is of the form b_0 + b_1 X^p + · · · + b_{r-1} X^{(r-1)p} + X^{rp}. Since m(α) = 0, the elements 1, α^p, . . . , α^{rp} are linearly dependent over F. But by minimality of m(X), 1, α, . . . , α^{rp-1} are linearly independent over F, hence 1, α, . . . , α^r are linearly independent over F. (Note that rp − 1 ≥ 2r − 1 ≥ r.) By Problem 6, 1, α^p, . . . , α^{rp} are linearly independent over F, which is a contradiction. Thus E/F is separable.

8. We may assume that F has prime characteristic p. By Problem 7, E = K(E^p) and K = F(K^p). Thus E = F(K^p, E^p) = F(E^p) since K ≤ E. Again by Problem 7, E/F is separable.

9. If g can be factored, so can f, and therefore g is irreducible. If f(X) = g(X^{p^m}) with m maximal, then g ∉ F[X^p]. By (3.4.3) part (2), g is separable.

10. Suppose that the roots of g in a splitting field are c_1, . . . , c_r. Then f(X) = g(X^{p^m}) = (X^{p^m} − c_1) · · · (X^{p^m} − c_r). By separability of g, the c_j must be distinct, and since f(α) = 0, we have α^{p^m} = c_j for all j. This is impossible unless r = 1, in which case f(X) = X^{p^m} − c_1. But f ∈ F[X], so α^{p^m} = c_1 ∈ F.

11. If α^{p^n} = c ∈ F, then α is a root of X^{p^n} − c = (X − α)^{p^n}, so min(α, F) is a power of X − α, and therefore has only one distinct root α. The converse follows from Problem 10 with f = min(α, F).

Section 3.5

1. Take F = Q, K = Q(2^{1/3}) (see (3.5.3)), and let E be any extension of K that is normal over F, for example, E = C.

2. The polynomial f(X) = X^2 − a is irreducible, else it would factor as (X − b)(X − c) with b + c = 0, bc = −a, i.e., (X − b)(X + b) with b^2 = a, contradicting the hypothesis. Thus E is obtained from Q by adjoining a root of f. The other root of f is −√a, so that E is a splitting field of f over Q. By (3.5.7), E/Q is normal.

3. Take F = Q, K = Q(√2), E = Q(2^{1/4}). Then K/F is normal by Problem 2, and E/K is normal by a similar argument. But E/F is not normal, since the two complex roots of X^4 − 2 do not belong to E. The same argument works with 2 replaced by any positive integer that is not a perfect square.


4. There are at most n embeddings of E in C extending σ. The proof is the same, except that now g has at most r distinct roots in C, so there are at most r possible choices of β. The induction hypothesis yields at most n/r extensions from F(α) to E, and the result follows.

5. Since the rationals have characteristic zero, the extension is separable. Since E is the splitting field of (X^2 − 2)(X^2 − 3) over Q, the extension is normal, hence Galois.

6. Since √3 ∉ Q(√2), the extension has degree 4. By (3.5.9), there are exactly four Q-automorphisms in the Galois group. By (3.5.1), each such Q-automorphism must permute the roots of X^2 − 2 and must also permute the roots of X^2 − 3. There are only four possible ways this can be done. Since a Q-automorphism is completely specified by its action on √2 and √3, the Galois group may be described as follows:

(1) √2 → √2, √3 → √3;
(2) √2 → √2, √3 → −√3;
(3) √2 → −√2, √3 → √3;
(4) √2 → −√2, √3 → −√3.

Since the product (composition) of any two of the automorphisms (2), (3), (4) is the third, the Galois group is isomorphic to the four group (Section 1.2, Problem 6).
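The composition claim in Problem 6 can be verified by coding each automorphism as the pair of signs it applies to √2 and √3; composition is then componentwise multiplication of signs. A sketch:

```python
from itertools import product

# automorphism = (sign on sqrt(2), sign on sqrt(3)); identity is (1, 1)
G = list(product([1, -1], repeat=2))
compose = lambda s, t: (s[0] * t[0], s[1] * t[1])

non_identity = [g for g in G if g != (1, 1)]
for s, t in product(non_identity, repeat=2):
    if s != t:
        # the product of two distinct non-identity elements is the third
        assert compose(s, t) in non_identity and compose(s, t) not in (s, t)
    else:
        assert compose(s, s) == (1, 1)  # every element has order at most 2
print("Gal(Q(sqrt2, sqrt3)/Q) is the Klein four group")
```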

7. Yes, up to isomorphism. If f is the polynomial given in (3.5.11), any normal closure is a splitting field for f over F, and the result follows from (3.2.5).

8. If f is irreducible over F and has a root in E_1 ∩ E_2, then f splits over both E_1 and E_2, hence all roots of f lie in E_1 ∩ E_2. Thus f splits over E_1 ∩ E_2, and the result follows.

Section 4.1

1. If x ∈ R, take r(x + I) to be rx + I to produce a left R-module, and (x + I)r = xr + I for a right R-module. Since I is an ideal, the scalar multiplication is well-defined, and the requirements for a module can be verified using the basic properties of quotient rings.

2. If A is an algebra over F, the map x → x1 of F into A is a homomorphism, and since F is a field, it is a monomorphism (see (3.1.2)). Thus A contains a copy of F. Conversely, if F is a subring of A, then A is a vector space over F, and the compatibility conditions are automatic since A is commutative.

3. Let R = Z and let M be the additive group of integers mod m, where m is composite,

say m = ab with a, b > 1. Take x = a (mod m) and r = b.

4. Any set containing 0 is linearly dependent, so assume a/b and c/d are nonzero rationals. Since (a/b)/(c/d) is rational, the result follows.

5. In view of Problem 4, the only hope is that a single nonzero rational number a/b spans

M over Z. But this cannot happen, since an integer multiple of a/b must be a fraction

whose denominator is a divisor of b.

6. If a ∈ A ⊆ C and x ∈ B ∩ C, then ax ∈ (AB) ∩ C. Conversely, let c = ab ∈ (AB) ∩ C. Then b = a^{-1}c ∈ C since A ⊆ C. Thus ab ∈ A(B ∩ C).


7. If f(X) = a_0 + a_1 X + · · · + a_n X^n and v ∈ V, take

f(X)v = f(T)v = a_0 Iv + a_1 Tv + · · · + a_n T^n v

where I is the identity transformation and T^i is the composition of T with itself i times.

Section 4.2

1. Let W be a submodule of M/N . By the correspondence theorem, W = L/N for some

submodule L of M with L ≥ N . Since L = L + N , we have W = (L + N )/N .

2. No. If S is any submodule of M , then S + N is a submodule of M containing N ,

so S + N corresponds to W = (S + N )/N . We know that W can also be written as

(L + N )/N where L ≥ N . (For example, L = S + N .) By the correspondence theorem,

S + N = L + N , and there is no contradiction.

3. If A ∈ Mn (R), then AE11 retains column 1 of A, with all other columns zero.
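The column-selection behavior of the matrix units (here and in Section 2.2, Problem 2) is easy to check numerically; a sketch with a hand-rolled matrix multiply to stay dependency-free:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

E11 = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# A * E11 keeps column 1 of A and zeros out the other columns
print(matmul(A, E11))  # [[1, 0, 0], [4, 0, 0], [7, 0, 0]]
```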

4. To identify the annihilator of E_{11}, observe that by Problem 3, AE_{11} = 0 iff column 1 of A is zero. For the annihilator of M, note that E_{j1} ∈ M for every j, and AE_{j1} has column j of A as column 1, with zeros elsewhere. (See Section 2.2, Problem 4.) Thus if A annihilates everything in M, then column j of A is zero for every j, so that A is the zero matrix.

5. R/I is an R-module by Problem 1 of Section 4.1. If r ∈ R then r + I = r(1 + I), so R/I is cyclic with generator 1 + I.

6. We must show that scalar multiplication is well-defined, that is, if r ∈ I, then rm = 0 for all m ∈ M. Thus I must annihilate M, in other words, IM = 0, where the submodule IM is the set of all finite sums Σ r_j m_j, r_j ∈ R, m_j ∈ M.

7. No, since (r + I)m coincides with rm.

Section 4.3

1. Essentially the same proof as in (4.3.3) works. If z_1 + · · · + z_n = 0, with z_i ∈ M_i, then z_n is a sum of terms from previous modules, and is therefore 0. Inductively, every z_i is 0. (In the terminology of (4.3.3), z_i is x_i − y_i.)

2. Only when A = {0}. If A has n elements, then by Lagrange's theorem, nx = 0 for every x ∈ A, so there are no linearly independent sets (except the empty set).

3. This follows because (−s)r + rs = 0.

4. If I is not a principal ideal, then I can never be free. For if I has a basis consisting of a single element, then I is principal, a contradiction. But by Problem 3, there cannot be a basis with more than one element. If I = ⟨a⟩ is principal, then I is free if and only if a is not a zero-divisor.

5. Z, or any direct sum of copies of Z, is a free Z-module. The additive group of rational numbers is not a free Z-module, by Problem 5 of Section 4.1.


6. The "only if" part was done in (4.3.6), so assume that M has the given property. Construct a free module M' = ⊕_{i∈S} R_i where R_i = R for all i. Then the map f : S → M' with f(i) = e_i (where e_i has 1 in its ith component and zeros elsewhere) extends to a homomorphism (also called f) from M to M'. Let g : M' → M be the module homomorphism determined by g(e_i) = i. Then g ∘ f is the identity on S, hence on M, by the uniqueness assumption. Similarly, f ∘ g = 1.

7. An element of M is specified by choosing a finite subset F of the index set, and then selecting an element bi ∈ R for each i ∈ F . The first choice can be made in α ways, and the second in |R|^|F| = |R| ways. Thus |M | = α|R| = max(α, |R|).

8. We may take B to be the set of “vectors” ei with 1 in position i and zeros elsewhere. Thus there is a basis element for each copy of R, so |B| = α.

Section 4.4

1. To prove that the condition is necessary, take the determinant of the equation P P^{-1} = I. Sufficiency follows from Cramer's rule.
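The sufficiency direction can be sketched concretely: over R = Z, if det(P) is a unit (±1), then P^{-1} = adj(P)/det(P) has integer entries by Cramer's rule. The 2 × 2 matrix below is a hypothetical example.

```python
def det2(P):
    return P[0][0] * P[1][1] - P[0][1] * P[1][0]

def adj2(P):
    """Adjugate (classical adjoint) of a 2x2 matrix."""
    return [[P[1][1], -P[0][1]], [-P[1][0], P[0][0]]]

P = [[2, 3], [1, 2]]
d = det2(P)
assert d in (1, -1)                                  # det is a unit of Z
Q = [[x // d for x in row] for row in adj2(P)]       # adj(P)/det(P), still integral

# Verify P Q = I over Z.
I = [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert I == [[1, 0], [0, 1]]
print(Q)
```

The necessity direction is the determinant of P P^{-1} = I: det(P) det(P^{-1}) = 1 forces det(P) to be a unit.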

2. A homomorphism f : V → W is determined by its action on elements of the form (0, . . . , 0, xj , 0, . . . , 0). Thus we must examine homomorphisms from Vj to W1 ⊕ · · · ⊕ Wm . Because of the direct sum, such mappings are assembled from homomorphisms from Vj to Wi , i = 1, . . . , m. Thus f may be identified with an m × n matrix whose ij element is a homomorphism from Vj to Wi . Formally, we have an abelian group isomorphism HomR (V, W ) ≅ [HomR (Vj , Wi )].

3. In Problem 2, replace V and W by V^n and take all Wi and Vj to be V . This gives an abelian group isomorphism of the desired form. Now if f corresponds to [fij ] where fij : Vj → Vi , and g corresponds to [gij ], then the composition g ∘ f is assembled from homomorphisms gik ∘ fkj : Vj → Vk → Vi . Thus composition of homomorphisms corresponds to multiplication of matrices, and we have a ring isomorphism.
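In the simplest case V = Z (a hypothetical choice, so that End(V ) = Z and the "matrices of homomorphisms" are ordinary integer matrices), the correspondence says that composing endomorphisms of V^2 is matrix multiplication, with (g ∘ f)_{ij} = Σ_k g_{ik} f_{kj}:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apply(A, v):
    """Apply the endomorphism represented by A to a vector v."""
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

F = [[1, 2], [3, 4]]     # matrix of f in End(V^2)
G = [[0, 1], [1, 1]]     # matrix of g in End(V^2)
v = [5, -7]

# g(f(v)) agrees with the product matrix G F applied to v.
assert apply(G, apply(F, v)) == apply(matmul(G, F), v)
print("composition corresponds to matrix multiplication")
```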

4. In (4.4.1), take n = m = 1 and M = R.

5. Since f (x) = f (x1) = xf (1), we may take r = f (1).

6. This follows from Problems 3 and 4, with V = R.

7. If the endomorphism f is represented by the matrix A and g by B, then for any c ∈ R, we have c(g ∘ f ) = (cg) ∘ f = g ∘ (cf ), so EndR (M ) is an R-algebra. Furthermore, cf is represented by cA, so the ring isomorphism is also an R-module homomorphism, hence an R-algebra isomorphism.

Section 4.5

1. Add column 2 to column 1, then add −3 times column 1 to column 2, then add −4 times row 2 to row 3. The Smith normal form is

®