Step 3. Using the definition of the indefinite integral, we antidifferentiate:

e^(4x) y = ∫ 10 e^(4x) dx + c.

Step 4. We multiply both sides by e^(−4x):

y = e^(−4x) [ ∫ 10 e^(4x) dx + c ]

y = e^(−4x) · (10/4) e^(4x) + c e^(−4x)

y(x) = 10/4 + c e^(−4x).    (8)

4.3. LINEAR FIRST ORDER DIFFERENTIAL EQUATIONS 167

Step 5. We impose the condition y(0) = 200 to solve for c:

y(0) = 200 = 5/2 + c,  c = 200 − 5/2.

Step 6. We replace c by its value in solution (8):

y(x) = 5/2 + (200 − 5/2) e^(−4x).
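The formula obtained in Step 6 can be checked numerically. The sketch below (plain Python; the helper names are ours) verifies that it satisfies the initial condition y(0) = 200 and the equation y' + 4y = 10 implied by the integrating factor e^(4x) used in Step 3, using a central-difference estimate of y'.

```python
import math

def y(x):
    # closed-form solution from Step 6: y(x) = 5/2 + (200 - 5/2) e^{-4x}
    return 5 / 2 + (200 - 5 / 2) * math.exp(-4 * x)

def dy(x, h=1e-6):
    # central-difference estimate of y'(x)
    return (y(x + h) - y(x - h)) / (2 * h)

# initial condition
assert abs(y(0) - 200) < 1e-9
# the first order linear equation behind the integrating-factor steps: y' + 4y = 10
for x in [0.0, 0.25, 0.5, 1.0, 2.0]:
    assert abs(dy(x) + 4 * y(x) - 10) < 1e-3
```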

Exercises 4.3 Find y(t) in each of the following:

1. y' = 4y, y(0) = 100

2. y' = −2y, y(0) = 1200

3. y' = −4(y − c), y(0) = y0

4. L (dy/dt) + Ry = E, y(0) = y0

5. y' + 3y(t) = 32, y(0) = 0

6. y' = ty, y(0) = y0

   Hint: y' = ty gives (1/y)(dy/dt) = t, so (1/y) dy = t dt and ∫ (1/y) dy = t²/2 + c.

7. The population P(t) of a certain country is given by the equation

   P'(t) = 0.02 P(t), P(0) = 2 million.

   (i) Find the time when the population will double.

   (ii) Find the time when the population will be 3 million.

8. Money grows at the rate of r% compounded continuously if

   A'(t) = (r/100) A(t), A(0) = A0,

   where A(t) is the amount of money at time t.

   (i) Determine the time when the money will double.

   (ii) If A0 = $5000, determine the time for which A(t) = $15,000.

168 CHAPTER 4. APPLICATIONS OF DIFFERENTIATION

9. A radioactive substance satisfies the equation

   A'(t) = −0.002 A(t), A(0) = A0,

   where t is measured in years.

   (i) Determine the time when A(t) = (1/2) A0. This time is called the half-life of the substance.

   (ii) If A0 = 20 grams, find the time t for which A(t) equals 5 grams.

10. The number of bacteria in a test culture increases according to the equation

    N'(t) = r N(t), N(0) = N0,

    where t is measured in hours. Determine the doubling period. If N0 = 100 and r = 0.01, find t such that N(t) = 300.

11. Newton's law of cooling states that the time rate of change of the temperature T(t) of a body is proportional to the difference between T and the temperature A of the surrounding medium. If K stands for the constant of proportionality, then this law may be expressed as

    T'(t) = K(A − T(t)).

    Solve for T(t) in terms of time t and T0 = T(0).

12. In a draining cylindrical tank, the level y of the water in the tank drops according to Torricelli's law

    y'(t) = −K y^(1/2)

    for some constant K. Solve for y in terms of t and K.

13. The rate of change P'(t) of a population P(t) is proportional to the square root of P(t). Solve for P(t).

14. The rate of change v'(t) of the velocity v(t) of a coasting car is proportional to the square of v. Solve for v(t).

In exercises 15–30, solve for y.

4.4. LINEAR SECOND ORDER HOMOGENEOUS DIFFERENTIAL EQUATIONS 169

15. y' = x − y, y(0) = 5

16. y' + 3x²y = 0, y(0) = 6

17. x y'(x) + 3y(x) = 2x⁵, y(2) = 1

18. x y' + y = 3x², y(1) = 4

19. y' + y = e^x, y(0) = 100

20. y' = −6xy, y(0) = 9

21. y' = (sin x) y, y(0) = 5

22. y' = x y³, y(0) = 2

23. y' = (1 + √x)/(1 + √y), y(0) = 10

24. y' − 2y = 1, y(1) = 3

25. y' = ry − c, y(0) = A

26. y' − 3y = 2 sin x, y(0) = 12

27. y' − 2y = 4e^(2x), y(0) = 4

28. y' − 3x²y = e^(x³), y(0) = 7

29. y' − (1/(2x)) y = sin x, y(1) = 3

30. y' − 3y = e^(2x), y(0) = 1

4.4 Linear Second Order Homogeneous Differential Equations

Definition 4.4.1 A linear second order differential equation in the variable y is an equation of the form

y'' + p(x)y' + q(x)y = r(x).

If r(x) = 0, we say that the equation is homogeneous; otherwise it is called non-homogeneous. If p(x) and q(x) are constants, we say that the equation has constant coefficients.

Definition 4.4.2 If f and g are differentiable functions, then the Wronskian of f and g is denoted W(f, g) and defined by

W(f, g) = f(x)g'(x) − f'(x)g(x).

Example 4.4.1 Compute the following Wronskians:


(i) W(sin(mx), cos(mx))

(ii) W(e^(px) sin(qx), e^(px) cos(qx))

(iii) W(x^n, x^m)

(iv) W(x sin(mx), x cos(mx))

Part (i) W(sin mx, cos mx) = sin(mx) (d/dx)(cos(mx)) − (d/dx)(sin(mx)) cos(mx)
= −m sin²(mx) − m cos²(mx)
= −m(sin²(mx) + cos²(mx))
= −m.

Part (ii) W(e^(px) sin qx, e^(px) cos qx)
= e^(px) sin qx (p e^(px) cos qx − q e^(px) sin qx) − e^(px) cos qx (p e^(px) sin qx + q e^(px) cos qx)
= −q e^(2px)(sin² qx + cos² qx)
= −q e^(2px).

Part (iii) W(x^n, x^m) = x^n · m x^(m−1) − n x^(n−1) · x^m
= (m − n) x^(n+m−1).

Part (iv) W(x sin mx, x cos mx) = (x sin mx)(cos mx − mx sin mx) − (sin mx + mx cos mx)(x cos mx)
= −m x²(sin² mx + cos² mx)
= −m x².
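These Wronskians can be spot-checked numerically with central-difference derivatives. The sketch below is an illustration of ours (the sample values m = 3 and n = 2, m = 5 are arbitrary); it checks Parts (i) and (iii) at a few points.

```python
import math

def wronskian(f, g, x, h=1e-5):
    # W(f, g)(x) = f(x) g'(x) - f'(x) g(x), derivatives by central differences
    fp = (f(x + h) - f(x - h)) / (2 * h)
    gp = (g(x + h) - g(x - h)) / (2 * h)
    return f(x) * gp - fp * g(x)

m = 3.0
for x in [0.1, 0.7, 2.0]:
    # Part (i): W(sin mx, cos mx) = -m, independent of x
    w = wronskian(lambda t: math.sin(m * t), lambda t: math.cos(m * t), x)
    assert abs(w - (-m)) < 1e-6
    # Part (iii): W(x^2, x^5) = (5 - 2) x^{2+5-1} = 3 x^6
    w = wronskian(lambda t: t ** 2, lambda t: t ** 5, x)
    assert abs(w - 3 * x ** 6) < 1e-3
```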

Definition 4.4.3 Two differentiable functions f and g are said to be linearly independent if their Wronskian, W(f(x), g(x)), is not zero for all x in the domains of both f and g.

Example 4.4.2 Which pairs of functions in Example 4.4.1 are linearly independent?

(i) In Part (i), W(sin mx, cos mx) = −m ≠ 0 unless m = 0. Therefore, sin mx and cos mx are linearly independent if m ≠ 0.

(ii) In Part (ii),

W(e^(px) sin qx, e^(px) cos qx) = −q e^(2px) ≠ 0 if q ≠ 0.

Therefore, e^(px) sin(qx) and e^(px) cos(qx) are linearly independent if q ≠ 0.


(iii) In Part (iii), W(x^n, x^m) = (m − n) x^(n+m−1) ≢ 0 if m ≠ n. Therefore, if m and n are not equal, then x^n and x^m are linearly independent.

(iv) In Part (iv),

W(x sin mx, x cos mx) = −m x² ≢ 0

if m ≠ 0. Therefore, x sin mx and x cos mx are linearly independent if m ≠ 0.

Theorem 4.4.1 Consider the linear homogeneous second order differential equation

y'' + p(x)y' + q(x)y = 0.   (1)

(i) If y1(x) and y2(x) are any two solutions of (1), then every linear combination, with constants A and B,

y(x) = A y1(x) + B y2(x)

is also a solution of (1).

(ii) If y1(x) and y2(x) are any two linearly independent solutions of (1), then every solution y(x) of (1) has the form

y(x) = A y1(x) + B y2(x)

for some constants A and B.

Proof.

Part (i) Suppose that y1 and y2 are solutions of (1) and A and B are any constants. Then

(A y1 + B y2)'' + p(A y1 + B y2)' + q(A y1 + B y2)
= A y1'' + B y2'' + A p y1' + B p y2' + A q y1 + B q y2
= A(y1'' + p y1' + q y1) + B(y2'' + p y2' + q y2)
= A(0) + B(0)   (because y1 and y2 are solutions of (1))
= 0.

Hence, y = A y1 + B y2 is a solution of (1) whenever y1 and y2 are solutions of (1).


Part (ii) Let y be any solution of (1) and suppose that

y = A y1 + B y2   (2)

y' = A y1' + B y2'.   (3)

We solve for A and B from equations (2) and (3) to get

A = (y y2' − y2 y')/(y1 y2' − y2 y1') = W(y, y2)/W(y1, y2),

B = (y1 y' − y1' y)/(y1 y2' − y2 y1') = W(y1, y)/W(y1, y2).

Since y1 and y2 are linearly independent, W(y1, y2) ≠ 0, and hence A and B are uniquely determined.
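The Wronskian quotients in Part (ii) can be illustrated on a concrete equation. In the sketch below (our example, not from the text) we take y'' + y = 0, whose independent solutions are y1 = sin x and y2 = cos x, and recover the coefficients A = 2, B = 5 of a third solution from the formulas above.

```python
import math

def d(f, x, h=1e-5):
    # central-difference derivative
    return (f(x + h) - f(x - h)) / (2 * h)

y1, y2 = math.sin, math.cos                          # solutions of y'' + y = 0
y = lambda x: 2 * math.sin(x) + 5 * math.cos(x)      # another solution of the same equation

for x in [0.3, 1.1, 2.0]:
    W12 = y1(x) * d(y2, x) - d(y1, x) * y2(x)        # W(y1, y2) = -1, never zero
    A = (y(x) * d(y2, x) - y2(x) * d(y, x)) / W12    # W(y, y2) / W(y1, y2)
    B = (y1(x) * d(y, x) - d(y1, x) * y(x)) / W12    # W(y1, y) / W(y1, y2)
    # the same A and B are recovered at every x
    assert abs(A - 2) < 1e-6 and abs(B - 5) < 1e-6
```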

Remark 15 It turns out that the Wronskian of two solutions of (1) is either

identically zero or never zero for any value of x.

Theorem 4.4.2 Let y1 and y2 be any two solutions of the homogeneous equation

y'' + p y' + q y = 0.   (1)

Let

W(x) = W(y1, y2) = y1(x) y2'(x) − y1'(x) y2(x).

Then

W'(x) = −p W(x),

W(x) = c e^(−∫ p(x) dx)

for some constant c. If c = 0, then W(x) = 0 for every x. If c ≠ 0, then W(x) ≠ 0 for every x.

Proof. Since y1 and y2 are solutions of (1),

y1'' = −p y1' − q y1   (2)

y2'' = −p y2' − q y2.   (3)


Then,

W'(x) = (y1 y2' − y1' y2)'
= y1' y2' + y1 y2'' − y1'' y2 − y1' y2'
= y1 y2'' − y2 y1''
= y1(−p y2' − q y2) − y2(−p y1' − q y1)   (from (2) and (3))
= −p[y1 y2' − y2 y1']
= −p W(x).

Thus,

W'(x) + p W(x) = 0.

By Theorem 4.3.1,

W(x) = e^(−∫ p dx) [ ∫ 0 dx + c ] = c e^(−∫ p dx).

If c = 0, W(x) ≡ 0; otherwise W(x) is never zero.
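Theorem 4.4.2 can be illustrated concretely. In the sketch below (our example) the equation is y'' − 5y' + 6y = 0, so p = −5, and two solutions are e^(2x) and e^(3x); their Wronskian is e^(5x), and both W(x) = W(0) e^(−∫ p dx) and W'(x) = −p W(x) are checked numerically.

```python
import math

# For y'' - 5y' + 6y = 0 (so p = -5), two solutions are e^{2x} and e^{3x}.
y1 = lambda x: math.exp(2 * x)
y2 = lambda x: math.exp(3 * x)

def W(x):
    # W = y1 y2' - y1' y2, using the known derivatives of the exponentials
    return y1(x) * 3 * y2(x) - 2 * y1(x) * y2(x)   # = e^{5x}

p = -5.0
for x in [0.0, 0.3, 1.0]:
    # W(x) = W(0) e^{-∫_0^x p dt} = W(0) e^{5x}
    assert abs(W(x) - W(0) * math.exp(-p * x)) < 1e-9 * math.exp(5 * x)
    # W'(x) = -p W(x), checked with a central difference
    h = 1e-6
    Wp = (W(x + h) - W(x - h)) / (2 * h)
    assert abs(Wp - (-p) * W(x)) < 1e-3 * (1 + abs(W(x)))
```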

Theorem 4.4.3 (Homogeneous Second Order) Consider the linear second order homogeneous differential equation with constant coefficients:

a y'' + b y' + c y = 0,  a ≠ 0.   (1)

(i) If y = e^(mx) is a solution of (1), then

a m² + b m + c = 0.   (2)

Equation (2) is called the characteristic equation of (1).

(ii) Let m1 = (−b − √(b² − 4ac))/(2a) and m2 = (−b + √(b² − 4ac))/(2a). Then the following three cases arise:

Case 1. The discriminant b² − 4ac > 0. Then m1 and m2 are real and distinct. The two linearly independent solutions of (1) are e^(m1 x) and e^(m2 x), and its general solution has the form

y(x) = A e^(m1 x) + B e^(m2 x).


Case 2. The discriminant b² − 4ac = 0. Then m1 = m2 = m, and equation (2) has only one real root, repeated. In this case, e^(mx) and x e^(mx) are two linearly independent solutions of (1), and the general solution of (1) has the form

y(x) = A e^(mx) + B x e^(mx) = e^(mx)(A + Bx).

Case 3. The discriminant b² − 4ac < 0. Then m1 = p − iq and m2 = p + iq, where p = −b/(2a) and q = √(4ac − b²)/(2a). In this case, the functions e^(px) sin qx and e^(px) cos qx are two linearly independent solutions of (1), and the most general solution of (1) has the form

y(x) = e^(px)(A sin qx + B cos qx).

Proof. Let y = e^(mx). Then y' = m e^(mx), y'' = m² e^(mx), and

a y'' + b y' + c y = (a m² + b m + c) e^(mx) = 0, a ≠ 0 ⇔
a m² + b m + c = 0, a ≠ 0 ⇔
m = −b/(2a) ± √(b² − 4ac)/(2a).

This proves Part (i).

Case 1. For Case 1, e^(m1 x) and e^(m2 x) are solutions of (1). We show that these are linearly independent by showing that their Wronskian is not zero:

W(e^(m1 x), e^(m2 x)) = e^(m1 x) · m2 e^(m2 x) − m1 e^(m1 x) · e^(m2 x)
= (m2 − m1) e^((m1+m2)x).

Since m1 ≠ m2, W(e^(m1 x), e^(m2 x)) ≠ 0.

Case 2. We already know that e^(mx) ≠ 0 and m = −b/(2a). Let us try y = x e^(mx). Then

a y'' + b y' + c y = a(2m + m²x) e^(mx) + b(1 + mx) e^(mx) + c x e^(mx)
= (b + 2am) e^(mx) + (a m² + b m + c) x e^(mx)
= (b + 2a(−b/(2a))) e^(mx)
= 0.


Therefore, e^(mx) and x e^(mx) are both solutions. We only need to show that they are linearly independent:

W(e^(mx), x e^(mx)) = e^(mx)(e^(mx) + m x e^(mx)) − m e^(mx)(x e^(mx))
= e^(2mx) + m x e^(2mx) − m x e^(2mx)
= e^(2mx)
≠ 0.

Hence, e^(mx) and x e^(mx) are linearly independent and the general solution of (1) has the form

y(x) = A e^(mx) + B x e^(mx) = e^(mx)(A + Bx).

Case 3. In Example 4.4.1, we showed that

W(e^(px) sin qx, e^(px) cos qx) = −q e^(2px) ≠ 0

since q ≠ 0. We only need to show that e^(px) sin qx and e^(px) cos qx are solutions of (1). Let y1 = e^(px) sin qx and y2 = e^(px) cos qx. Then

y1' = p e^(px) sin qx + q e^(px) cos qx

y1'' = p² e^(px) sin qx + p q e^(px) cos qx + p q e^(px) cos qx − q² e^(px) sin qx

a y1'' + b y1' + c y1 = a e^(px)(p² sin qx + 2 p q cos qx − q² sin qx)
+ b e^(px)(p sin qx + q cos qx) + c e^(px) sin qx
= e^(px) sin qx [a(p² − q²) + b p + c] + e^(px) cos qx [2 a p q + b q]
= e^(px) sin(qx) [ a(b²/(4a²) − (4ac − b²)/(4a²)) + b(−b/(2a)) + c ]
+ e^(px) cos(qx) [ 2a(−b/(2a)) · √(4ac − b²)/(2a) + b · √(4ac − b²)/(2a) ]
= e^(px) sin(qx) [ (b² − 2ac − b² + 2ac)/(2a) ]
+ e^(px) cos(qx) · [0]
= 0.

Therefore y1 = e^(px) sin(qx) is a solution of (1). Similarly, we can show that y2 = e^(px) cos qx is a solution of (1). We leave this as an exercise.


Remark 16 Use integral tables or computer algebra in evaluating the indefinite integrals as needed.

Example 4.4.3 Solve the differential equations for y(t):

(i) y'' − 5y' − 14y = 0

(ii) y'' − 6y' + 9y = 0

(iii) y'' − 4y' + 5y = 0

We let y = e^(mt). We then solve for m and determine the solution. We observe that y' = m e^(mt), y'' = m² e^(mt).

Part (i) By substituting y = e^(mt) in the equation we get

m² e^(mt) − 5m e^(mt) − 14 e^(mt) = e^(mt)(m² − 5m − 14) = 0 ⟹
m² − 5m − 14 = (m − 7)(m + 2) = 0 ⟹ m = 7, −2.

Therefore,

y(t) = A e^(−2t) + B e^(7t).

Part (ii) Again, by substituting y = e^(mt), we get

m² e^(mt) − 6m e^(mt) + 9 e^(mt) = 0,
m² − 6m + 9 = (m − 3)² = 0,
m = 3, 3.

The solution is

y(t) = A e^(3t) + B t e^(3t).

Part (iii) By substituting y = e^(mt), we get

m² e^(mt) − 4m e^(mt) + 5 e^(mt) = e^(mt)(m² − 4m + 5) = 0 ⟹
m² − 4m + 5 = 0,
m = (4 ± √(16 − 20))/2 = 2 ± i.

The general solution is

y(t) = e^(2t)(A cos t + B sin t).
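The three characteristic equations above can be solved mechanically with the quadratic formula. A small sketch (using Python's complex square root, so the complex Case 3 needs no special handling; the roots asserted are those obtained from the factorizations above):

```python
import cmath

def char_roots(a, b, c):
    # roots of the characteristic equation a m^2 + b m + c = 0
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b - disc) / (2 * a), (-b + disc) / (2 * a)

# Part (i): m^2 - 5m - 14 = (m - 7)(m + 2)
r = sorted(char_roots(1, -5, -14), key=lambda z: z.real)
assert abs(r[0] - (-2)) < 1e-12 and abs(r[1] - 7) < 1e-12

# Part (ii): m^2 - 6m + 9 = (m - 3)^2, a repeated root
r = char_roots(1, -6, 9)
assert abs(r[0] - 3) < 1e-12 and abs(r[1] - 3) < 1e-12

# Part (iii): m^2 - 4m + 5 = 0 has the complex roots 2 ± i
r = sorted(char_roots(1, -4, 5), key=lambda z: z.imag)
assert abs(r[0] - (2 - 1j)) < 1e-12 and abs(r[1] - (2 + 1j)) < 1e-12
```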


Example 4.4.4 Solve the differential equation for y(t), where y(t) satisfies the conditions

y''(t) − 2y'(t) − 15y(t) = 0;  y(0) = 1, y'(0) = −1.

We assume that y(t) = e^(mt). By substitution we get the characteristic equation

m² − 2m − 15 = 0,  m = 5, −3.

The general solution is

y(t) = A e^(−3t) + B e^(5t).

We now impose the additional conditions y(0) = 1, y'(0) = −1:

y(t) = A e^(−3t) + B e^(5t)
y'(t) = −3A e^(−3t) + 5B e^(5t)
y(0) = A + B = 1
y'(0) = −3A + 5B = −1.

On solving these two equations simultaneously, we get

A = 3/4,  B = 1/4.

Then the exact solution is

y(t) = (3/4) e^(−3t) + (1/4) e^(5t).
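The 2×2 system for A and B, and the resulting solution, can be verified by direct computation. A sketch (Cramer's rule for the system A + B = 1, −3A + 5B = −1, then finite-difference checks of the equation and the initial conditions):

```python
import math

# conditions from Example 4.4.4: A + B = 1 and -3A + 5B = -1,
# solved by Cramer's rule for the 2x2 system
det = 1 * 5 - 1 * (-3)           # = 8
A = (1 * 5 - 1 * (-1)) / det     # = 6/8 = 3/4
B = (1 * (-1) - (-3) * 1) / det  # = 2/8 = 1/4
assert abs(A - 0.75) < 1e-12 and abs(B - 0.25) < 1e-12

def y(t):
    return A * math.exp(-3 * t) + B * math.exp(5 * t)

def dy(t, h=1e-6):
    return (y(t + h) - y(t - h)) / (2 * h)

def d2y(t, h=1e-4):
    return (y(t + h) - 2 * y(t) + y(t - h)) / (h * h)

# y'' - 2y' - 15y = 0 with y(0) = 1, y'(0) = -1
assert abs(y(0) - 1) < 1e-12
assert abs(dy(0) - (-1)) < 1e-6
for t in [0.0, 0.3, 0.8]:
    assert abs(d2y(t) - 2 * dy(t) - 15 * y(t)) < 1e-3
```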

Exercises 4.4 Solve for y(t) from each of the following:

1. y'' − y' − 20y = 0

2. y'' − 8y' + 16y = 0

3. y'' + 9y' + 20y = 0

4. y'' + 4y' + 4y = 0

5. y'' − 8y' + 12y = 0

6. y'' − 6y' + 10y = 0


7. y'' − y' − 6y = 0, y(0) = 10, y'(0) = 15

8. y'' − 4y' + 4y = 0, y(0) = 4, y'(0) = 8

9. y'' + 8y' + 12y = 0, y(0) = 1, y'(0) = 3

10. y'' + 6y' + 10y = 0, y(0) = 5, y'(0) = 7

11. y'' − 4y = 0, y(0) = 1, y'(0) = −1

12. y'' − 9y = 0, y(0) = −1, y'(0) = 1

13. y'' + 9y = 0, y(0) = 2, y'(0) = 3

14. y'' + 4y = 0, y(0) = −1, y'(0) = 2

15. y'' − 3y' + 2y = 0, y(0) = 2, y'(0) = −2

16. y'' − y' − 6y = 0, y(0) = 6, y'(0) = 5

17. y'' + 4y' + 4y = 0, y(0) = 1, y'(0) = 4

18. y'' − 6y' + 9y = 0, y(0) = 1, y'(0) = −1

19. y'' + 6y' + 13y = 0, y(0) = 1, y'(0) = 2

20. y'' − 3y' + 2y = 0

21. y'' + 3y' + 2y = 0

22. y'' + m²y = 0

23. y'' − m²y = 0

24. y'' + 2my' + m²y = 0

25. y'' + 2my' + (m² + 1)y = 0

26. y'' − 2my' + (m² + 1)y = 0

27. y'' + 2my' + (m² − 1)y = 0

28. y'' − 2my' + (m² − 1)y = 0

29. 9y'' − 12y' + 4y = 0

30. 4y'' + 4y' + y = 0

4.5. LINEAR NON-HOMOGENEOUS SECOND ORDER DIFFERENTIAL EQUATIONS 179

4.5 Linear Non-Homogeneous Second Order Differential Equations

Theorem 4.5.1 (Variation of Parameters) Consider the equations

y'' + p(x)y' + q(x)y = r(x)   (1)

y'' + p(x)y' + q(x)y = 0.    (2)

Suppose that y1 and y2 are any two linearly independent solutions of (2). Then the general solution of (1) is

y(x) = c1 y1(x) + c2 y2(x) + [ y2(x) ∫ (y1(x) r(x) / W(y1, y2)) dx − y1(x) ∫ (y2(x) r(x) / W(y1, y2)) dx ].

Proof. It has already been shown that c1 y1(x) + c2 y2(x) is the most general solution of the homogeneous equation (2), where c1 and c2 are arbitrary constants. We observe that the difference of any two solutions of (1) is a solution of (2). Suppose that y*(x) is any solution of (1). We wish to find two functions, u1 and u2, such that

y*(x) = u1(x) y1(x) + u2(x) y2(x).   (3)

By differentiation of (3), we get

y*'(x) = (u1 y1' + u2 y2') + (u1' y1 + u2' y2).   (4)

We impose the following condition (5) on u1 and u2:

u1' y1 + u2' y2 = 0.   (5)

Then

y*'(x) = u1 y1' + u2 y2'

y*''(x) = (u1' y1' + u2' y2') + u1 y1'' + u2 y2''.

Since y*(x) is a solution of (1), we get

r(x) = y*'' + p(x) y*' + q(x) y*
= (u1' y1' + u2' y2') + (u1 y1'' + u2 y2'') + p(x)[u1 y1' + u2 y2'] + q(x)(u1 y1 + u2 y2)
= u1[y1'' + p(x) y1' + q(x) y1] + u2[y2'' + p(x) y2' + q(x) y2] + (u1' y1' + u2' y2')
= u1' y1' + u2' y2'.


Hence, another condition on u1 and u2 is

u1' y1' + u2' y2' = r(x).   (6)

By solving equations (5) and (6) simultaneously for u1' and u2', we get

u1' = −y2 r(x) / (y1 y2' − y2 y1')  and  u2' = y1 r(x) / (y1 y2' − y2 y1').   (7)

The denominator in (7) is the Wronskian of y1 and y2, which is not zero for any x, since y1 and y2 are linearly independent by assumption. By taking the indefinite integrals in (7), we obtain u1 and u2:

u1(x) = −∫ (y2(x) r(x) / W(y1, y2)) dx  and  u2(x) = ∫ (y1(x) r(x) / W(y1, y2)) dx.

By substituting these values in (3), we get a particular solution

y*(x) = u1 y1 + u2 y2
= y2(x) ∫ (y1(x) r(x) / W(y1, y2)) dx − y1(x) ∫ (y2(x) r(x) / W(y1, y2)) dx.

This solution y*(x) is called a particular solution of (1). To get the general solution of (1), we add the general solution c1 y1(x) + c2 y2(x) of (2) to the particular solution y*(x) and get

y(x) = (c1 y1(x) + c2 y2(x)) + [ y2(x) ∫ (y1(x) r(x) / W(y1, y2)) dx − y1(x) ∫ (y2(x) r(x) / W(y1, y2)) dx ].

This completes the proof of this theorem.

Remark 17 The general solution of (2) is called the complementary solution of (1) and is denoted y_c(x):

y_c(x) = c1 y1(x) + c2 y2(x).

The particular solution y* of (1) is generally written as y_p:

y_p = y2 ∫ (r(x) y1(x) / W(y1, y2)) dx − y1 ∫ (r(x) y2(x) / W(y1, y2)) dx.

The general solution y(x) of (1) is the sum of y_c and y_p:

y = y_c(x) + y_p(x).


Example 4.5.1 Solve the differential equation

y'' + 8y' + 12y = e^(−3x).

We find the general solution of the homogeneous equation

y'' + 8y' + 12y = 0.

We let y = e^(mx) be a solution. Then y' = m e^(mx), y'' = m² e^(mx), and

m² e^(mx) + 8m e^(mx) + 12 e^(mx) = 0,
m² + 8m + 12 = 0,  m = −6, −2.

So,

y_c(x) = A e^(−6x) + B e^(−2x)

is the complementary solution. We compute the Wronskian:

W(e^(−6x), e^(−2x)) = e^(−6x)(−2)e^(−2x) − e^(−2x)(−6)e^(−6x)
= e^(−8x)(−2 + 6)
= 4 e^(−8x)
≠ 0.

By Theorem 4.5.1, the particular solution is given by

y_p = e^(−2x) ∫ (e^(−6x) · e^(−3x) / (4e^(−8x))) dx − e^(−6x) ∫ (e^(−2x) · e^(−3x) / (4e^(−8x))) dx
= (1/4) e^(−2x) ∫ e^(−x) dx − (1/4) e^(−6x) ∫ e^(3x) dx
= −(1/4) e^(−2x) e^(−x) − (1/12) e^(−6x) e^(3x)
= −(1/4) e^(−3x) − (1/12) e^(−3x)
= −(1/3) e^(−3x).

The complete solution is the sum of the complementary solution y_c and the particular solution y_p:

y(x) = A e^(−6x) + B e^(−2x) − (1/3) e^(−3x).
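The particular and complete solutions can be checked by substituting back into the equation, here with finite-difference derivatives. A sketch (the homogeneous coefficients A = 2, B = −7 in the sample complete solution are arbitrary choices of ours):

```python
import math

def yp(x):
    # particular solution found above: y_p = -(1/3) e^{-3x}
    return -math.exp(-3 * x) / 3

def residual(f, x, h=1e-4):
    # f'' + 8 f' + 12 f, with central-difference derivatives
    d1 = (f(x + h) - f(x - h)) / (2 * h)
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)
    return d2 + 8 * d1 + 12 * f(x)

for x in [0.0, 0.5, 1.5]:
    # y_p should reproduce the right-hand side e^{-3x} ...
    assert abs(residual(yp, x) - math.exp(-3 * x)) < 1e-4
    # ... and adding homogeneous pieces does not change the residual
    y = lambda t: 2 * math.exp(-6 * t) - 7 * math.exp(-2 * t) + yp(t)
    assert abs(residual(y, x) - math.exp(-3 * x)) < 1e-4
```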


Exercises 4.5 Find the complementary, particular, and complete solution for each of the following. Use tables of integrals or computer algebra to do the integrations, if necessary.

1. y'' + 4y = sin(3x)

2. y'' − 9y = e^(2x)

3. y'' + 9y = cos 2x

4. y'' − 4y = e^(−x)

5. y'' − y = x e^x

6. y'' − 5y' + 6y = 3e^(4x)

7. y'' − 4y' + 4y = e^(−x)

8. y'' + 5y' + 4y = 2e^x

9. m y'' − p y' = mg

10. y'' + 5y' + 6y = x² e^(2x)

In exercises 11–20, compute the complete solution for y.

11. y'' + y = 4x, y(0) = 2, y'(0) = 1

12. y'' − 9y = e^x, y(0) = 1, y'(0) = 5

13. y'' − 2y' − 3y = 4, y(0) = 2, y'(0) = −1

14. y'' − 3y' + 2y = 4x

15. y'' + 4y = sin 2x

16. y'' − 4y = e^(2x)

17. y'' − 4y = e^(−2x)

18. y'' + 4y = cos 2x

19. y'' + 9y = 2 sin 3x + 4 cos 3x

20. y'' + 4y' + 5y = sin x − 2 cos x

Chapter 5

The Definite Integral

5.1 Area Approximation

In Chapter 4, we have seen the role played by the indefinite integral in finding antiderivatives and in solving first order and second order differential equations. The definite integral is very closely related to the indefinite integral. We begin the discussion with finding areas under the graphs of positive functions.

Example 5.1.1 Find the area bounded by the graphs of y = 4, y = 0, x = 0, x = 3.

graph

From geometry, we know that the area is the height, 4, times the width, 3, of the rectangle:

Area = 4 · 3 = 12.

Example 5.1.2 Find the area bounded by the graphs of y = 4x, y = 0, x = 0, x = 3.


184 CHAPTER 5. THE DEFINITE INTEGRAL

graph

From geometry, the area of the triangle is 1/2 times the base, 3, times the height, 12:

Area = (1/2)(3)(12) = 18.

Example 5.1.3 Find the area bounded by the graphs of y = 2x, y = 0, x = 1, x = 4.

graph

The required area is covered by a trapezoid. The area of a trapezoid is 1/2 times the sum of the parallel sides times the distance between the parallel sides:

Area = (1/2)(2 + 8)(3) = 15.

Example 5.1.4 Find the area bounded by the curves y = √(4 − x²), y = 0, x = −2, x = 2.

graph

By inspection, we recognize that this is the area bounded by the upper half of the circle with center at (0, 0) and radius 2. Its equation is

x² + y² = 4, or y = √(4 − x²), −2 ≤ x ≤ 2.

Again from geometry, we know that the area of a circle with radius 2 is πr² = 4π. The upper half of the circle has one half of the total area. Therefore, the required area is 2π.

5.1. AREA APPROXIMATION 185

Example 5.1.5 Approximate the area bounded by y = x², y = 0, x = 0, and x = 3. Given that the exact area is 9, compute the error of your approximation.

Method 1. We divide the interval [0, 3] into six equal subdivisions at the points 0, 1/2, 1, 3/2, 2, 5/2, and 3. Such a subdivision is called a partition of [0, 3]. We draw vertical segments joining these points of division to the curve. Since x² is increasing on [0, 3], on each subinterval [x1, x2] the minimum value x1² of the function occurs at the left-hand endpoint x1 and the maximum value x2² occurs at the right-hand endpoint x2. Therefore,

graph

The lower approximation, denoted L, is given by

L = 0² · (1/2) + (1/2)² · (1/2) + 1² · (1/2) + (3/2)² · (1/2) + 2² · (1/2) + (5/2)² · (1/2)
= (1/2) · [0 + 1/4 + 1 + 9/4 + 4 + 25/4]
= 55/8 ≈ 6.88.

This approximation is also called the left-hand approximation of the area. The error of approximation is 55/8 − 9 ≈ −2.13.

The upper approximation, denoted U, is given by

U = (1/2)² · (1/2) + 1² · (1/2) + (3/2)² · (1/2) + 2² · (1/2) + (5/2)² · (1/2) + 3² · (1/2)
= (1/2) · [1/4 + 1 + 9/4 + 4 + 25/4 + 9]
= (1/2) · (91/4) = 91/8 ≈ 11.38.


The error of approximation is 91/8 − 9 ≈ +2.38. This approximation is also called the right-hand approximation.

Method 2. (Trapezoidal Rule) In this method, for each subinterval [x1, x2], we join the point (x1, x1²) to the point (x2, x2²) by a straight line; the area under this line is that of a trapezoid with area (1/2)(x2 − x1)(x1² + x2²). We add up these areas to get the Trapezoidal Rule approximation, T, which is given by

T = (1/2)(1/2 − 0)(0² + (1/2)²) + (1/2)(1 − 1/2)((1/2)² + 1²)
+ (1/2)(3/2 − 1)(1² + (3/2)²) + (1/2)(2 − 3/2)((3/2)² + 2²)
+ (1/2)(5/2 − 2)(2² + (5/2)²) + (1/2)(3 − 5/2)((5/2)² + 3²)
= (1/4)[0² + 2(1/2)² + 2(1²) + 2(3/2)² + 2(2²) + 2(5/2)² + 3²]
= (1/4)[1/2 + 2 + 9/2 + 8 + 25/2 + 9]
= 73/8 = 9.125.

The error of this Trapezoidal approximation is 73/8 − 9 = +0.125.

Method 3. (Simpson's Rule) In this case we take two adjacent subintervals at a time, say [x1, x2] ∪ [x2, x3], and approximate the area over this interval by

(1/6)[f(x1) + 4f(x2) + f(x3)] · (x3 − x1),

and then add these up. In our case, let x0 = 0, x1 = 1/2, x2 = 1, x3 = 3/2, x4 = 2, x5 = 5/2, and x6 = 3. Then the Simpson's rule approximation, S,


is given by

S = (1/6)[0² + 4(1/2)² + 1²](1) + (1/6)[1² + 4(3/2)² + 2²](1) + (1/6)[2² + 4(5/2)² + 3²](1)
= (1/6)[0² + 4(1/2)² + 2(1²) + 4(3/2)² + 2(2²) + 4(5/2)² + 3²]
= 54/6 = 9 = Exact Value!
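The arithmetic of Methods 1–3 can be reproduced mechanically. The sketch below recomputes the left-endpoint, right-endpoint, trapezoidal, and Simpson approximations for f(x) = x² on [0, 3] with the same six subintervals; the asserted values are the exact sums for this partition.

```python
def f(x):
    return x * x

a, b, n = 0.0, 3.0, 6
h = (b - a) / n
xs = [a + i * h for i in range(n + 1)]

L = h * sum(f(x) for x in xs[:-1])                  # left endpoints (lower sum here)
U = h * sum(f(x) for x in xs[1:])                   # right endpoints (upper sum here)
T = h * (f(xs[0]) / 2 + sum(f(x) for x in xs[1:-1]) + f(xs[-1]) / 2)
# Simpson over the three double intervals [0,1], [1,2], [2,3]
S = sum((1 / 6) * (f(xs[i]) + 4 * f(xs[i + 1]) + f(xs[i + 2])) * (xs[i + 2] - xs[i])
        for i in range(0, n, 2))

assert abs(L - 6.875) < 1e-12    # 55/8, error -2.125
assert abs(U - 11.375) < 1e-12   # 91/8, error +2.375
assert abs(T - 9.125) < 1e-12    # 73/8, error +0.125
assert abs(S - 9.0) < 1e-12      # exact for this quadratic
```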

For positive functions y = f(x), defined over a closed and bounded interval [a, b], we define the following methods for approximating the area A bounded by the curves y = f(x), y = 0, x = a, and x = b. We begin with a common equally-spaced partition

P = {a = x0 < x1 < x2 < x3 < … < xn = b},

such that x_i = a + ((b − a)/n) i, for i = 0, 1, 2, …, n.

Definition 5.1.1 (Left-hand Rule) The left-hand rule approximation for A, denoted L, is defined by

L = ((b − a)/n) · [f(x0) + f(x1) + f(x2) + ⋯ + f(x_{n−1})].

Definition 5.1.2 (Right-hand Rule) The right-hand rule approximation for A, denoted R, is defined by

R = ((b − a)/n) · [f(x1) + f(x2) + f(x3) + ⋯ + f(xn)].

Definition 5.1.3 (Mid-point Rule) The mid-point rule approximation for A, denoted M, is defined by

M = ((b − a)/n) · [f((x0 + x1)/2) + f((x1 + x2)/2) + ⋯ + f((x_{n−1} + xn)/2)].


Definition 5.1.4 (Trapezoidal Rule) The trapezoidal rule approximation for A, denoted T, is defined by

T = ((b − a)/n) · [(1/2)(f(x0) + f(x1)) + (1/2)(f(x1) + f(x2)) + ⋯ + (1/2)(f(x_{n−1}) + f(xn))]
= ((b − a)/n) · [(1/2) f(x0) + f(x1) + f(x2) + ⋯ + f(x_{n−1}) + (1/2) f(xn)].

Definition 5.1.5 (Simpson's Rule) The Simpson's rule approximation for A, denoted S, is defined by

S = ((b − a)/n) · [(1/6)(f(x0) + 4 f((x0 + x1)/2) + f(x1))
+ (1/6)(f(x1) + 4 f((x1 + x2)/2) + f(x2))
+ ⋯ + (1/6)(f(x_{n−1}) + 4 f((x_{n−1} + xn)/2) + f(xn))]
= ((b − a)/(6n)) · [f(x0) + 4 f((x0 + x1)/2) + 2 f(x1) + 4 f((x1 + x2)/2) + 2 f(x2)
+ ⋯ + 2 f(x_{n−1}) + 4 f((x_{n−1} + xn)/2) + f(xn)].


Exercises 5.1

1. The sum of n terms a1, a2, …, an is written in compact form in the so-called sigma notation:

   Σ_{k=1}^{n} ak = a1 + a2 + ⋯ + an.

   The variable k is called the index, the number 1 is called the lower limit, and the number n is called the upper limit. The symbol Σ_{k=1}^{n} ak is read "the sum of ak from k = 1 to k = n."

   Verify the following sums for n = 5:


   (a) Σ_{k=1}^{n} k = n(n + 1)/2

   (b) Σ_{k=1}^{n} k² = n(n + 1)(2n + 1)/6

   (c) Σ_{k=1}^{n} k³ = (n(n + 1)/2)²

   (d) Σ_{k=0}^{n} 2^k = 2^(n+1) − 1

2. Prove the following statements by using mathematical induction:

   (a) Σ_{k=1}^{n} k = n(n + 1)/2

   (b) Σ_{k=1}^{n} k² = n(n + 1)(2n + 1)/6

   (c) Σ_{k=1}^{n} k³ = (n(n + 1)/2)²

   (d) Σ_{k=0}^{n} 2^k = 2^(n+1) − 1

3. Prove the following statements:

   (a) Σ_{k=1}^{n} (c ak) = c Σ_{k=1}^{n} ak

   (b) Σ_{k=1}^{n} (ak + bk) = Σ_{k=1}^{n} ak + Σ_{k=1}^{n} bk

   (c) Σ_{k=1}^{n} (ak − bk) = Σ_{k=1}^{n} ak − Σ_{k=1}^{n} bk

   (d) Σ_{k=1}^{n} (a ak + b bk) = a Σ_{k=1}^{n} ak + b Σ_{k=1}^{n} bk


4. Evaluate the following sums:

   (a) Σ_{i=0}^{6} (2i)

   (b) Σ_{j=1}^{5} (1/j)

   (c) Σ_{k=0}^{4} (1 + (−1)^k)²

   (d) Σ_{m=2}^{5} (3m − 2)

5. Let P = {a = x0 < x1 < x2 < ⋯ < xn = b} be a partition of [a, b] such that x_k = a + ((b − a)/n) k, k = 0, 1, 2, ⋯, n. Let f(x) = x². Let A denote the area bounded by y = f(x), y = 0, x = 0 and x = 2. Show that:

   (a) the Left-hand Rule approximation of A is (2/n) Σ_{k=1}^{n} x_{k−1}²;

   (b) the Right-hand Rule approximation of A is (2/n) Σ_{k=1}^{n} x_k²;

   (c) the Mid-point Rule approximation of A is (2/n) Σ_{k=1}^{n} ((x_{k−1} + x_k)/2)²;

   (d) the Trapezoidal Rule approximation of A is (2/n) [2 + Σ_{k=1}^{n−1} x_k²];

   (e) the Simpson's Rule approximation of A is (1/(3n)) [4 + 4 Σ_{k=1}^{n} ((x_{k−1} + x_k)/2)² + 2 Σ_{k=1}^{n−1} x_k²].


In problems 6–20, use the function f, numbers a, b, and n, and compute the approximations LH, RH, MP, T, S for the area bounded by y = f(x), y = 0, x = a, x = b, using the partition

P = {a = x0 < x1 < ⋯ < xn = b}, where x_k = a + k (b − a)/n, and

(a) LH = ((b − a)/n) Σ_{k=1}^{n} f(x_{k−1})

(b) RH = ((b − a)/n) Σ_{k=1}^{n} f(x_k)

(c) MP = ((b − a)/n) Σ_{k=1}^{n} f((x_{k−1} + x_k)/2)

(d) T = ((b − a)/n) [Σ_{k=1}^{n−1} f(x_k) + (1/2)(f(x0) + f(xn))]

(e) S = ((b − a)/(6n)) [(f(x0) + f(xn)) + 2 Σ_{k=1}^{n−1} f(x_k) + 4 Σ_{k=1}^{n} f((x_{k−1} + x_k)/2)]
      = (1/6){LH + 4 MP + RH}
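Formulas (a)–(e) translate directly into code. The sketch below implements them (the helper name `approximations` is ours) and applies them to problem 9, where the exact area is ∫_0^2 x³ dx = 4; note that Simpson's rule reproduces a cubic's area exactly.

```python
def approximations(f, a, b, n):
    # LH, RH, MP, T, S exactly as defined in (a)-(e) above
    h = (b - a) / n
    x = [a + k * h for k in range(n + 1)]
    LH = h * sum(f(x[k - 1]) for k in range(1, n + 1))
    RH = h * sum(f(x[k]) for k in range(1, n + 1))
    MP = h * sum(f((x[k - 1] + x[k]) / 2) for k in range(1, n + 1))
    T = h * (sum(f(x[k]) for k in range(1, n)) + (f(x[0]) + f(x[n])) / 2)
    S = (LH + 4 * MP + RH) / 6
    return LH, RH, MP, T, S

# problem 9: f(x) = x^3, a = 0, b = 2, n = 4; the exact area is 4
LH, RH, MP, T, S = approximations(lambda x: x ** 3, 0.0, 2.0, 4)
assert LH < 4 < RH                  # x^3 is increasing on [0, 2]
assert abs(T - (LH + RH) / 2) < 1e-12
assert abs(S - 4.0) < 1e-12         # Simpson's rule is exact for cubics
```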

6. f(x) = 2x, a = 0, b = 2, n = 6

7. f(x) = 1/x, a = 1, b = 3, n = 6

8. f(x) = x², a = 0, b = 3, n = 6

9. f(x) = x³, a = 0, b = 2, n = 4

10. f(x) = 1/(1 + x), a = 0, b = 3, n = 6

11. f(x) = 1/(1 + x²), a = 0, b = 1, n = 4

12. f(x) = 1/√(4 − x²), a = 0, b = 1, n = 4


13. f(x) = 1/(4 − x²), a = 0, b = 1, n = 4

14. f(x) = 1/(4 + x²), a = 0, b = 2, n = 4

15. f(x) = 1/√(4 + x²), a = 0, b = 2, n = 4

16. f(x) = √(4 + x²), a = 0, b = 2, n = 4

17. f(x) = √(4 − x²), a = 0, b = 2, n = 4

18. f(x) = sin x, a = 0, b = π, n = 4

19. f(x) = cos x, a = −π/2, b = π/2, n = 4

20. f(x) = sin² x, a = 0, b = π, n = 4

5.2 The Definite Integral

Let f be a function that is continuous on a bounded and closed interval [a, b]. Let p = {a = x0 < x1 < x2 < … < xn = b} be a partition of [a, b], not necessarily equally spaced. Let

m_i = min{f(x) : x_{i−1} ≤ x ≤ x_i}, i = 1, 2, …, n;
M_i = max{f(x) : x_{i−1} ≤ x ≤ x_i}, i = 1, 2, …, n;
Δx_i = x_i − x_{i−1}, i = 1, 2, …, n;
Δ = max{Δx_i : i = 1, 2, …, n};
L(p) = m1 Δx1 + m2 Δx2 + … + mn Δxn;
U(p) = M1 Δx1 + M2 Δx2 + … + Mn Δxn.

We call L(p) the lower Riemann sum and U(p) the upper Riemann sum. Clearly L(p) ≤ U(p) for every partition. Let

L_f = lub{L(p) : p is a partition of [a, b]},
U_f = glb{U(p) : p is a partition of [a, b]}.
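For a monotone f, the minimum m_i and maximum M_i are attained at the endpoints of each subinterval, which makes L(p) and U(p) easy to compute. A sketch (our example) with f(x) = x² on [0, 1], whose integral 1/3 is squeezed between the two Riemann sums:

```python
def riemann_bounds(f, a, b, n):
    # For an increasing f, m_i is attained at the left endpoint and M_i at the
    # right endpoint of each subinterval, so L(p) and U(p) are simple sums.
    h = (b - a) / n
    x = [a + i * h for i in range(n + 1)]
    L = sum(f(x[i - 1]) * h for i in range(1, n + 1))
    U = sum(f(x[i]) * h for i in range(1, n + 1))
    return L, U

f = lambda x: x * x   # increasing on [0, 1]
prev_gap = None
for n in [10, 100, 1000]:
    L, U = riemann_bounds(f, 0.0, 1.0, n)
    assert L <= 1 / 3 <= U          # the integral 1/3 lies between L(p) and U(p)
    gap = U - L
    if prev_gap is not None:
        assert gap < prev_gap       # refining the partition tightens the bounds
    prev_gap = gap
```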

5.2. THE DEFINITE INTEGRAL 193

Definition 5.2.1 If f is continuous on [a, b] and L_f = U_f = I, then we say that:

(i) f is integrable on [a, b];

(ii) the definite integral of f(x) from x = a to x = b is I;

(iii) I is expressed, in symbols, by the equation

I = ∫_a^b f(x) dx;

(iv) the symbol "∫" is called the "integral sign"; the number "a" is called the "lower limit"; the number "b" is called the "upper limit"; the function "f(x)" is called the "integrand"; and the variable "x" is called the (dummy) "variable of integration."

(v) If f(x) ≥ 0 for each x in [a, b], then the area, A, bounded by the curves y = f(x), y = 0, x = a and x = b, is defined to be the definite integral of f(x) from x = a to x = b. That is,

A = ∫_a^b f(x) dx.

(vi) For convenience, we define

∫_a^a f(x) dx = 0,  ∫_b^a f(x) dx = −∫_a^b f(x) dx.

Theorem 5.2.1 If a function f is continuous on a closed and bounded interval [a, b], then f is integrable on [a, b].

Proof. See the proof of Theorem 5.6.3.

Theorem 5.2.2 (Linearity) Suppose that f and g are continuous on [a, b] and c1 and c2 are two arbitrary constants. Then


(i) ∫_a^b (f(x) + g(x)) dx = ∫_a^b f(x) dx + ∫_a^b g(x) dx;

(ii) ∫_a^b (f(x) − g(x)) dx = ∫_a^b f(x) dx − ∫_a^b g(x) dx;

(iii) ∫_a^b c1 f(x) dx = c1 ∫_a^b f(x) dx, ∫_a^b c2 g(x) dx = c2 ∫_a^b g(x) dx, and

∫_a^b (c1 f(x) + c2 g(x)) dx = c1 ∫_a^b f(x) dx + c2 ∫_a^b g(x) dx.

Proof.

Part (i) Since f and g are continuous, f + g is continuous, and hence by Theorem 5.2.1 each of the following integrals exists:

∫_a^b f(x) dx, ∫_a^b g(x) dx, and ∫_a^b (f(x) + g(x)) dx.

Let P = {a = x0 < x1 < x2 < ⋯ < x_{n−1} < xn = b}. For each i, there exist numbers c1, c2, c3, d1, d2, and d3 in [x_{i−1}, x_i] such that

f(c1) = absolute minimum of f on [x_{i−1}, x_i],
g(c2) = absolute minimum of g on [x_{i−1}, x_i],
f(c3) + g(c3) = absolute minimum of f + g on [x_{i−1}, x_i],
f(d1) = absolute maximum of f on [x_{i−1}, x_i],
g(d2) = absolute maximum of g on [x_{i−1}, x_i],
f(d3) + g(d3) = absolute maximum of f + g on [x_{i−1}, x_i].

It follows that

f(c1) + g(c2) ≤ f(c3) + g(c3) ≤ f(d3) + g(d3) ≤ f(d1) + g(d2).

Consequently,

L_f + L_g ≤ L_{f+g} ≤ U_{f+g} ≤ U_f + U_g.   (Why?)

Since f and g are integrable,

L_f = U_f = ∫_a^b f(x) dx;  L_g = U_g = ∫_a^b g(x) dx.

By the squeeze principle,

L_{f+g} = U_{f+g} = ∫_a^b (f(x) + g(x)) dx


and

∫_a^b [f(x) + g(x)] dx = ∫_a^b f(x) dx + ∫_a^b g(x) dx.

This completes the proof of Part (i) of this theorem.

Part (iii) Let k be a positive constant and let F be a function that is continuous on [a, b]. Let P = {a = x0 < x1 < x2 < ⋯ < x_{n−1} < xn = b} be any partition of [a, b]. Then for each i there exist numbers c_i and d_i such that F(c_i) is the absolute minimum of F on [x_{i−1}, x_i] and F(d_i) is the absolute maximum of F on [x_{i−1}, x_i]. Since k is a positive constant,

kF(c_i) = absolute minimum of kF on [x_{i−1}, x_i],
kF(d_i) = absolute maximum of kF on [x_{i−1}, x_i],
−kF(d_i) = absolute minimum of (−k)F on [x_{i−1}, x_i],
−kF(c_i) = absolute maximum of (−k)F on [x_{i−1}, x_i].

Then

L(P) = F(c1)Δx1 + F(c2)Δx2 + ⋯ + F(cn)Δxn,
U(P) = F(d1)Δx1 + F(d2)Δx2 + ⋯ + F(dn)Δxn,
kL(P) = (kF)(c1)Δx1 + (kF)(c2)Δx2 + ⋯ + (kF)(cn)Δxn,
kU(P) = (kF)(d1)Δx1 + (kF)(d2)Δx2 + ⋯ + (kF)(dn)Δxn,
−kU(P) = (−kF)(d1)Δx1 + (−kF)(d2)Δx2 + ⋯ + (−kF)(dn)Δxn,
−kL(P) = (−kF)(c1)Δx1 + (−kF)(c2)Δx2 + ⋯ + (−kF)(cn)Δxn.

Since F is continuous, kF and (−k)F are both continuous, and

L_F = U_F = ∫_a^b F(x) dx,
L_{kF} = U_{kF} = k L_F = k U_F = k ∫_a^b F(x) dx,
L_{−kF} = (−k) U_F,  U_{−kF} = (−k) L_F,

and hence

L_{−kF} = U_{−kF} = (−k) ∫_a^b F(x) dx.


Therefore,

∫_a^b (c1 f(x) + c2 g(x)) dx = ∫_a^b c1 f(x) dx + ∫_a^b c2 g(x) dx   (Part (i))
= c1 ∫_a^b f(x) dx + c2 ∫_a^b g(x) dx.   (Why?)

This completes the proof of Part (iii) of this theorem.

Part (ii) is a special case of Part (iii), where c1 = 1 and c2 = −1. This completes the proof of the theorem.

Theorem 5.2.3 (Additivity) If f is continuous on [a, b] and a < c < b, then

∫_a^b f(x) dx = ∫_a^c f(x) dx + ∫_c^b f(x) dx.

Proof. Suppose that f is continuous on [a, b] and a < c < b. Then f is continuous on [a, c] and on [c, b], and hence f is integrable on [a, b], [a, c], and [c, b]. Let P = {a = x0 < x1 < x2 < ⋯ < xn = b} and suppose that x_{i−1} ≤ c ≤ x_i for some i. Let P1 = {a = x0 < x1 < x2 < ⋯ < x_{i−1} ≤ c} and P2 = {c ≤ x_i < x_{i+1} < ⋯ < xn = b}. Then there exist numbers c1, c2, c3, d1, d2, and d3 such that

f(c1) = absolute minimum of f on [x_{i−1}, c],
f(d1) = absolute maximum of f on [x_{i−1}, c],
f(c2) = absolute minimum of f on [c, x_i],
f(d2) = absolute maximum of f on [c, x_i],
f(c3) = absolute minimum of f on [x_{i−1}, x_i],
f(d3) = absolute maximum of f on [x_{i−1}, x_i].

Also,

f(c3) ≤ f(c1), f(c3) ≤ f(c2), f(d1) ≤ f(d3), and f(d2) ≤ f(d3).

It follows that

L(P) ≤ L(P1) + L(P2) ≤ U(P1) + U(P2) ≤ U(P),

and hence that

∫_a^b f(x) dx = ∫_a^c f(x) dx + ∫_c^b f(x) dx.

This completes the proof of the theorem.
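The additivity property can be checked numerically. A sketch (the cubic integrand, the point c = 1, and the midpoint rule standing in for the exact integrals are all arbitrary choices of ours):

```python
def midpoint_integral(f, a, b, n=20000):
    # midpoint-rule approximation to the definite integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x * x * x - 2 * x + 1
a, c, b = 0.0, 1.0, 2.0
# additivity: the integral over [a, b] equals the sum over [a, c] and [c, b]
whole = midpoint_integral(f, a, b)
parts = midpoint_integral(f, a, c) + midpoint_integral(f, c, b)
assert abs(whole - parts) < 1e-6
```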