
MA2264 /MA1251 - NUMERICAL METHODS

UNIT-I
SOLUTION OF EQUATIONS AND EIGENVALUE PROBLEMS
1. If a function f(x) is continuous in the interval [a, b] and f(a) and f(b) are of
opposite signs, then at least one root of f(x) = 0 lies between a and b.
2. Examples of algebraic equations: (i) x³ − 2x + 5 = 0; (ii) 2x³ − 3x − 6 = 0.
3. Examples of transcendental equations: (i) x − cos x = 0; (ii) x e^x − 2 = 0;
(iii) x log₁₀ x − 1.2 = 0.
4. Regula Falsi Method: x₁ = [a f(b) − b f(a)] / [f(b) − f(a)] (first iteration of the Regula Falsi Method).
5. Iterative Method: x_{n+1} = φ(xₙ).
6. Convergence condition of the iterative method: |φ'(x)| < 1.
7. Order of convergence of the iterative method is linear, i.e. 1.
8. Newton-Raphson Method (Method of Tangents): x_{n+1} = xₙ − f(xₙ) / f'(xₙ).

9. Convergence condition of the N-R method: |f(x) f''(x)| < |f'(x)|².


10. Order of convergence of Newton's method is quadratic, i.e. 2.
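A minimal Python sketch of the Newton-Raphson iteration in item 8, applied to the transcendental example x − cos x = 0 from item 3 (the starting guess, tolerance and iteration cap are illustrative choices, not part of the notes):

```python
import math

def newton_raphson(f, df, x0, tol=1e-8, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until successive values agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: solve x - cos(x) = 0; the root is near 0.739
root = newton_raphson(lambda x: x - math.cos(x), lambda x: 1 + math.sin(x), x0=1.0)
print(root)
```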
11. Direct Method: (i) Gauss Elimination Method, (ii) Gauss Jordan Method.
12. Indirect Method (or) Iteration Method: (i) Gauss-Jacobi Method,
(ii) Gauss-Seidel Method.
13. Gauss Elimination Method: Reduce the augmented matrix [A, B] to an upper triangular
matrix; the unknowns are then found by back substitution.
14. Gauss Jordan Method: Reduce the augmented matrix [A, B] to a diagonal matrix.
Finally each equation of the system has only one unknown, so the unknowns are found directly.
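A bare-bones Python sketch of the Gauss elimination procedure of item 13 (no pivoting, small systems only; written for illustration under those assumptions):

```python
def gauss_elimination(A, B):
    """Solve Ax = B: reduce the augmented matrix [A|B] to upper triangular form, then back substitute."""
    n = len(B)
    a = [row[:] + [B[i]] for i, row in enumerate(A)]   # augmented matrix [A, B]
    for k in range(n - 1):                             # eliminate entries below the diagonal
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n + 1):
                a[i][j] -= m * a[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                     # back substitution
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (a[i][n] - s) / a[i][i]
    return x

print(gauss_elimination([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))   # -> [0.8, 1.4]
```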

15. Diagonally Dominant: An n × n matrix A is said to be diagonally dominant if the
absolute value of each leading diagonal element is greater than or equal to the sum of the
absolute values of the remaining elements in that row.
The given system of equations a₁x + b₁y + c₁z = d₁; a₂x + b₂y + c₂z = d₂;
a₃x + b₃y + c₃z = d₃ is a diagonal system if
|a₁| ≥ |b₁| + |c₁|, |b₂| ≥ |a₂| + |c₂|, |c₃| ≥ |a₃| + |b₃|.
16. Gauss Jacobi Method: If the given system of equations is diagonally dominant, then iterate
x^(n+1) = (1/a₁)[d₁ − b₁y^(n) − c₁z^(n)]
y^(n+1) = (1/b₂)[d₂ − a₂x^(n) − c₂z^(n)]
z^(n+1) = (1/c₃)[d₃ − a₃x^(n) − b₃y^(n)]
17. Gauss Seidel Method: If the given system of equations is diagonally dominant, then iterate
x^(n+1) = (1/a₁)[d₁ − b₁y^(n) − c₁z^(n)]
y^(n+1) = (1/b₂)[d₂ − a₂x^(n+1) − c₂z^(n)]
z^(n+1) = (1/c₃)[d₃ − a₃x^(n+1) − b₃y^(n+1)]
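A small Python sketch of the Gauss-Seidel sweeps of item 17 for a 3 × 3 diagonally dominant system (the sample coefficients below are illustrative, not from the notes):

```python
def gauss_seidel_3x3(a1, b1, c1, d1, a2, b2, c2, d2, a3, b3, c3, d3,
                     x=0.0, y=0.0, z=0.0, iterations=25):
    """Gauss-Seidel update: each equation reuses the newest values already computed in the sweep."""
    for _ in range(iterations):
        x = (d1 - b1 * y - c1 * z) / a1
        y = (d2 - a2 * x - c2 * z) / b2      # uses the freshly computed x
        z = (d3 - a3 * x - b3 * y) / c3      # uses the freshly computed x and y
    return x, y, z

# Diagonally dominant example: 10x+y+z=12, x+10y+z=12, x+y+10z=12 (solution x=y=z=1)
print(gauss_seidel_3x3(10, 1, 1, 12, 1, 10, 1, 12, 1, 1, 10, 12))
```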
18. A sufficient condition for the iterative methods (Gauss Seidel Method and Gauss Jacobi Method)
to converge is that the coefficient matrix should be diagonally dominant.
19. The iteration method is a self-correcting method, since the round off error is smaller.
20. Why is Gauss Seidel iteration a method of successive corrections?
Ans: Because we replace approximations by the corresponding new ones as soon as the latter
have been computed.

21. Compare Gauss Elimination Method and Gauss Jordan Method.

Gauss Elimination Method:
1. Direct method.
2. The coefficient matrix is transformed into an upper triangular matrix.
3. We obtain the solution by back substitution.

Gauss Jordan Method:
1. Direct method.
2. The coefficient matrix is transformed into a diagonal matrix.
3. No back substitution is needed, since finally each equation of the system has only one unknown.

22. Inverse of a Matrix: Let A be an n × n nonsingular matrix. If X is the inverse of the
matrix A then AX = I, i.e. X = A⁻¹. We start with the matrix A augmented by the identity
matrix I of the same order and convert A into the identity matrix; the other half then
becomes the inverse: [A | I] → [I | A⁻¹].
23. Compare Gauss Elimination Method and Gauss Seidel Method.

Gauss Elimination Method:
1. Direct method.
2. It has the advantage that it is finite and works in theory for any nonsingular set of equations.
3. We obtain the exact value.

Gauss Seidel Method:
1. Indirect method.
2. It converges only for diagonally dominant systems.
3. We obtain an approximate value; the method is self-correcting.

24. Compare Gauss Jacobi and Gauss Seidel Methods.

Gauss Jacobi Method:
1. Indirect method.
2. The convergence rate is slow.
3. The condition for convergence is diagonal dominance.

Gauss Seidel Method:
1. Indirect method.
2. The rate of convergence is roughly twice that of the Jacobi method.
3. The condition for convergence is diagonal dominance.

25. Why is the Gauss Seidel method better than Jacobi's method?
Ans: Since the current values of the unknowns at each stage of the iteration are used in
proceeding to the next stage, convergence in the Gauss Seidel method is more rapid than
in the Gauss Jacobi method.

UNIT-II
INTERPOLATION AND APPROXIMATION
26. Explain briefly Interpolation.
Ans: Interpolation is the process of computing the values of a function for any value of
the independent variable within an interval for which some values are given.
27. Definition of Interpolation and Extrapolation.
Ans: Interpolation: It is the process of finding the intermediate values of a function from
a set of its values at specific points given in tabulated form. The process of
computing y corresponding to x, where xᵢ < x < x_{i+1}, i = 0, 1, 2, ..., n−1, is interpolation.
Extrapolation: If x < x₀ or x > xₙ then the process is called extrapolation.
28. State Newton's Forward interpolation formula.
Ans: y(x₀ + uh) = y(x) = y₀ + (u/1!)Δy₀ + (u(u−1)/2!)Δ²y₀ + (u(u−1)(u−2)/3!)Δ³y₀ + ...
+ (u(u−1)(u−2)...(u−(n−1))/n!)Δⁿy₀,
where u = (x − x₀)/h.
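A compact Python sketch of Newton's forward interpolation formula of item 28, assuming equally spaced x-values (the sample table is made up for illustration):

```python
from math import factorial

def newton_forward(xs, ys, x):
    """Evaluate Newton's forward interpolation polynomial at x for equally spaced xs."""
    n = len(ys)
    h = xs[1] - xs[0]
    u = (x - xs[0]) / h
    diffs = list(ys)              # successive forward differences; diffs[0] becomes Δ^k y0
    result = diffs[0]
    u_term = 1.0
    for k in range(1, n):
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        u_term *= (u - (k - 1))   # builds u(u-1)(u-2)...
        result += u_term * diffs[0] / factorial(k)
    return result

# Illustrative data: y = x^2 sampled at x = 0, 1, 2, 3
print(newton_forward([0, 1, 2, 3], [0, 1, 4, 9], 1.5))   # -> 2.25
```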
29. State Newton's Backward interpolation formula.
Ans: y(xₙ + ph) = y(x) = yₙ + (p/1!)∇yₙ + (p(p+1)/2!)∇²yₙ + (p(p+1)(p+2)/3!)∇³yₙ + ...
+ (p(p+1)(p+2)...(p+(n−1))/n!)∇ⁿyₙ,
where p = (x − xₙ)/h.

30. Error in Newton's forward interpolation:
Ans: Error = [u(u−1)(u−2)...(u−n)/(n+1)!] h^{n+1} f^{(n+1)}(ξ), where x₀ < ξ < xₙ.

31. Error in Newton's backward interpolation:
Ans: Error = [p(p+1)(p+2)...(p+n)/(n+1)!] h^{n+1} f^{(n+1)}(ξ), where x₀ < ξ < xₙ.

32. State Newton's divided difference formula.
Ans: f(x) = f(x₀) + (x − x₀) f(x₀, x₁) + (x − x₀)(x − x₁) f(x₀, x₁, x₂) + ...
+ (x − x₀)(x − x₁)(x − x₂)...(x − x_{n−1}) f(x₀, x₁, x₂, ..., xₙ).

33. Show that the divided differences are symmetrical in their arguments.
Ans: f(x₀, x₁) = [f(x₁) − f(x₀)]/(x₁ − x₀) = [f(x₀) − f(x₁)]/(x₀ − x₁) = f(x₁, x₀).
34. Show that the divided difference operator is linear.
Ans: Taking the first divided difference of f(x) + g(x) over x₀, x₁:
{[f(x₁) + g(x₁)] − [f(x₀) + g(x₀)]}/(x₁ − x₀) = [f(x₁) − f(x₀)]/(x₁ − x₀) + [g(x₁) − g(x₀)]/(x₁ − x₀),
which is the divided difference of f(x) plus the divided difference of g(x).

35. Divided difference table (shown for the four points x₀, x₁, x₂, x₃):
x : y
x₀ : y₀;  x₁ : y₁;  x₂ : y₂;  x₃ : y₃
First divided differences:
f(x₀, x₁) = [f(x₁) − f(x₀)]/(x₁ − x₀), f(x₁, x₂) = [f(x₂) − f(x₁)]/(x₂ − x₁), f(x₂, x₃) = [f(x₃) − f(x₂)]/(x₃ − x₂)
Second divided differences:
f(x₀, x₁, x₂) = [f(x₁, x₂) − f(x₀, x₁)]/(x₂ − x₀), f(x₁, x₂, x₃) = [f(x₂, x₃) − f(x₁, x₂)]/(x₃ − x₁)
Third divided difference:
f(x₀, x₁, x₂, x₃) = [f(x₁, x₂, x₃) − f(x₀, x₁, x₂)]/(x₃ − x₀)

36. Write Lagrange's interpolation polynomial formula.
Ans: y = [(x − x₁)(x − x₂)(x − x₃)...(x − xₙ) / ((x₀ − x₁)(x₀ − x₂)(x₀ − x₃)...(x₀ − xₙ))] y₀
  + [(x − x₀)(x − x₂)(x − x₃)...(x − xₙ) / ((x₁ − x₀)(x₁ − x₂)(x₁ − x₃)...(x₁ − xₙ))] y₁ + ...
  + [(x − x₀)(x − x₁)(x − x₂)...(x − x_{n−1}) / ((xₙ − x₀)(xₙ − x₁)(xₙ − x₂)...(xₙ − x_{n−1}))] yₙ
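A short Python sketch of the Lagrange formula in item 36; it works for unequally spaced points, which is exactly the assumption discussed in the next item (the data below are illustrative):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)   # factor of the i-th Lagrange basis polynomial
        total += term
    return total

# Illustrative unequally spaced data: y = x^3 at x = 0, 1, 3, 4
print(lagrange([0, 1, 3, 4], [0, 1, 27, 64], 2.0))   # -> 8.0
```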

35. What is the assumption we make when Lagrange's formula is used?
Ans: It can be used whether the values of x, the independent variable, are equally spaced
or not, and whether the differences of y become smaller or not.
36. Write Lagrange's inverse interpolation formula.
Ans: x = [(y − y₁)(y − y₂)(y − y₃)...(y − yₙ) / ((y₀ − y₁)(y₀ − y₂)(y₀ − y₃)...(y₀ − yₙ))] x₀
  + [(y − y₀)(y − y₂)(y − y₃)...(y − yₙ) / ((y₁ − y₀)(y₁ − y₂)(y₁ − y₃)...(y₁ − yₙ))] x₁ + ...
  + [(y − y₀)(y − y₁)(y − y₂)...(y − y_{n−1}) / ((yₙ − y₀)(yₙ − y₁)(yₙ − y₂)...(yₙ − y_{n−1}))] xₙ

37. Define Cubic Spline.
Ans: Let (xᵢ, f(xᵢ)), i = 0, 1, 2, ..., n be the given (n+1) pairs of data. The third order
curves employed to connect each pair of data points are called cubic splines. (OR) A
smooth piecewise polynomial curve of this kind is known as a cubic spline.
A cubic spline function f(x) w.r.t. the points x₀, x₁, ..., xₙ is a polynomial of
degree three in each interval (x_{i−1}, xᵢ), i = 1, 2, ..., n, such that f(x), f'(x) and f''(x) are
continuous.
38. Write down the formula of the Cubic Spline.
Ans: For x_{i−1} ≤ x ≤ xᵢ,
y(x) = (1/6h)[(xᵢ − x)³ M_{i−1} + (x − x_{i−1})³ Mᵢ] + (1/h)(xᵢ − x)[y_{i−1} − (h²/6) M_{i−1}] + (1/h)(x − x_{i−1})[yᵢ − (h²/6) Mᵢ],
with M_{i−1} + 4Mᵢ + M_{i+1} = (6/h²)[y_{i−1} − 2yᵢ + y_{i+1}] for i = 1, 2, 3, ..., (n−1), where Mᵢ = y''(xᵢ).
(OR)
hᵢ a_{i−1} + 2(hᵢ + h_{i+1}) aᵢ + h_{i+1} a_{i+1} = 6[(f_{i+1} − fᵢ)/h_{i+1} − (fᵢ − f_{i−1})/hᵢ],
where hᵢ = xᵢ − x_{i−1} and fᵢ = yᵢ, i = 1, 2, 3, ...

UNIT III

NUMERICAL DIFFERENTIATION AND INTEGRATION


39. Derivatives of y based on Newton's forward interpolation formula:
dy/dx = (1/h)[Δy₀ + ((2u−1)/2!)Δ²y₀ + ((3u²−6u+2)/3!)Δ³y₀ + ((4u³−18u²+22u−6)/4!)Δ⁴y₀ + ((5u⁴−40u³+105u²−100u+24)/5!)Δ⁵y₀ + ...]
d²y/dx² = (1/h²)[Δ²y₀ + (u−1)Δ³y₀ + ((6u²−18u+11)/12)Δ⁴y₀ + ((2u³−12u²+21u−10)/12)Δ⁵y₀ + ...]
d³y/dx³ = (1/h³)[Δ³y₀ + ((2u−3)/2)Δ⁴y₀ + ((2u²−8u+7)/4)Δ⁵y₀ + ...]
where u = (x − x₀)/h.
If x = x₀ then u = 0 and
(dy/dx) at x = x₀ : (1/h)[Δy₀ − (1/2)Δ²y₀ + (1/3)Δ³y₀ − (1/4)Δ⁴y₀ + (1/5)Δ⁵y₀ − ...]
(d²y/dx²) at x = x₀ : (1/h²)[Δ²y₀ − Δ³y₀ + (11/12)Δ⁴y₀ − (5/6)Δ⁵y₀ + ...]
(d³y/dx³) at x = x₀ : (1/h³)[Δ³y₀ − (3/2)Δ⁴y₀ + (7/4)Δ⁵y₀ − ...]
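A minimal Python sketch of the x = x₀ case of item 39, estimating dy/dx at the start of an equally spaced table from forward differences (the sample table is illustrative):

```python
def derivative_at_x0(ys, h, terms=4):
    """Approximate dy/dx at x0 via (1/h)[Δy0 - Δ²y0/2 + Δ³y0/3 - Δ⁴y0/4 + ...]."""
    diffs = list(ys)
    total = 0.0
    for k in range(1, terms + 1):
        if len(diffs) < 2:
            break
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        total += (-1) ** (k + 1) * diffs[0] / k   # alternating coefficients 1, -1/2, 1/3, -1/4, ...
    return total / h

# Illustrative table: y = x^2 at x = 0, 1, 2, 3, 4; the exact dy/dx at x = 0 is 0
print(derivative_at_x0([0, 1, 4, 9, 16], h=1.0))
```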

40. Derivatives of y based on Newton's backward interpolation formula:
dy/dx = (1/h)[∇yₙ + ((2p+1)/2!)∇²yₙ + ((3p²+6p+2)/3!)∇³yₙ + ((4p³+18p²+22p+6)/4!)∇⁴yₙ + ((5p⁴+40p³+105p²+100p+24)/5!)∇⁵yₙ + ...]
d²y/dx² = (1/h²)[∇²yₙ + (p+1)∇³yₙ + ((6p²+18p+11)/12)∇⁴yₙ + ((2p³+12p²+21p+10)/12)∇⁵yₙ + ...]
d³y/dx³ = (1/h³)[∇³yₙ + ((2p+3)/2)∇⁴yₙ + ((2p²+8p+7)/4)∇⁵yₙ + ...]
where p = (x − xₙ)/h.
If x = xₙ then p = 0 and
(dy/dx) at x = xₙ : (1/h)[∇yₙ + (1/2)∇²yₙ + (1/3)∇³yₙ + (1/4)∇⁴yₙ + (1/5)∇⁵yₙ + ...]
(d²y/dx²) at x = xₙ : (1/h²)[∇²yₙ + ∇³yₙ + (11/12)∇⁴yₙ + (5/6)∇⁵yₙ + ...]
(d³y/dx³) at x = xₙ : (1/h³)[∇³yₙ + (3/2)∇⁴yₙ + (7/4)∇⁵yₙ + ...]

41. What are the two types of errors involved in the numerical computation of
derivatives?
Ans: (i) Truncation error; (ii) Rounding error (produced when an exact result is rounded to
the required number of digits).

42. Define the error of approximation.
Ans: The approximation may further deteriorate as the order of the derivative increases.
The quantity E(r) = f^(r)(x) − Pₙ^(r)(x) is called the error of approximation in the r-th order
derivative, where f(x) is the given function and Pₙ(x) is the polynomial approximating f(x).
43. To find Maxima and Minima of a tabulated function:
Let y = f(x). Use Newton's forward or backward interpolation formula for equal intervals,
or Newton's divided difference interpolation formula for unequal intervals, to obtain dy/dx.
Find dy/dx, equate it to zero and solve for x.
Find d²y/dx²; if d²y/dx² at that x is negative, y has a maximum at that point; if d²y/dx² at
that x is positive, y has a minimum at that point.

44. Newton-Cotes Quadrature formula:
∫_{x₀}^{x₀+nh} f(x) dx = nh[ y₀ + (n/2)Δy₀ + (n(2n−3)/12)Δ²y₀ + (n(n−2)²/24)Δ³y₀ + (1/4!)(n⁴/5 − 3n³/2 + 11n²/3 − 3n)Δ⁴y₀ + ... ]

45. Trapezoidal rule:
∫_{x₀}^{xₙ} f(x) dx = (h/2)[(y₀ + yₙ) + 2(y₁ + y₂ + ... + y_{n−1})]
= (h/2){(sum of the first and last ordinates) + 2(sum of the remaining ordinates)}

46. Error in the Trapezoidal rule:
|E| ≤ [(b − a)h²/12] M, where h = (b − a)/n and M = max{|y₀''|, |y₁''|, ..., |y''_{n−1}|}.
n

47. The error in the Trapezoidal rule is of order h².
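A short Python sketch of the composite trapezoidal rule of item 45 (the integrand and interval below are illustrative choices):

```python
def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule: (h/2)[(y0 + yn) + 2(y1 + ... + y_{n-1})]."""
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    return (h / 2) * (ys[0] + ys[-1] + 2 * sum(ys[1:-1]))

# Illustrative example: integral of 1/(1+x^2) over [0, 1]; the exact value is pi/4 ≈ 0.7854
print(trapezoidal(lambda x: 1 / (1 + x * x), 0.0, 1.0, n=8))
```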


48. Simpson's one-third (1/3) rule:
∫_{x₀}^{xₙ} f(x) dx = (h/3)[(y₀ + yₙ) + 4(y₁ + y₃ + ...) + 2(y₂ + y₄ + ...)]
= (h/3){(sum of the first and last ordinates) + 4(sum of the odd-suffixed ordinates y₁, y₃, ...) + 2(sum of the remaining even-suffixed ordinates)}
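A matching Python sketch of Simpson's one-third rule of item 48, assuming an even number of intervals (same illustrative integrand as above):

```python
def simpson_one_third(f, a, b, n):
    """Composite Simpson's 1/3 rule; n (the number of intervals) must be even."""
    if n % 2 != 0:
        raise ValueError("Simpson's 1/3 rule needs an even number of intervals")
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    odd = sum(ys[1:-1:2])    # y1, y3, ... carry weight 4
    even = sum(ys[2:-1:2])   # y2, y4, ... carry weight 2
    return (h / 3) * (ys[0] + ys[-1] + 4 * odd + 2 * even)

# Same illustrative integrand: 1/(1+x^2) on [0, 1]; the exact value is pi/4
print(simpson_one_third(lambda x: 1 / (1 + x * x), 0.0, 1.0, n=8))
```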

49. Simpson's three-eighth (3/8) rule:
∫_{x₀}^{xₙ} f(x) dx = (3h/8)[(y₀ + yₙ) + 3(y₁ + y₂ + y₄ + y₅ + ...) + 2(y₃ + y₆ + y₉ + ...)]

50. Error in Simpson's one-third rule:
|E| ≤ [(b − a)h⁴/180] M, where h = (b − a)/n and M = max|y^(4)(x)| on (a, b).
51. The error in Simpson's one-third rule is of order h⁴.
52. When does Simpson's rule give an exact result?
Ans: When the integrand y(x) is a polynomial of degree ≤ 3.
53. When is the Trapezoidal rule applicable?
Ans: For any number of intervals.
54. When is Simpson's 1/3 rule applicable?
Ans: When the number of intervals is even.
55. When is Simpson's 3/8 rule applicable?
Ans: When the number of intervals is a multiple of three.

56. Romberg's Method: For I = ∫ₐᵇ f(x) dx, I ≈ I₂ + (I₂ − I₁)/3, where I₁ and I₂ are successive trapezoidal estimates.
57. State Romberg's method integration formula to find the value of I = ∫ₐᵇ f(x) dx using h and h/2.
Ans: I ≈ I_{h/2} + (I_{h/2} − I_h)/3 = (1/3)(4 I_{h/2} − I_h),
where I_h and I_{h/2} are the trapezoidal values obtained with step sizes h and h/2.

58. State the Two Point Gaussian Quadrature formula:
Ans: I = ∫_{−1}^{1} f(x) dx = f(1/√3) + f(−1/√3).

59. State the Three Point Gaussian Quadrature formula:
Ans: I = ∫_{−1}^{1} f(x) dx = (5/9)[f(√(3/5)) + f(−√(3/5))] + (8/9) f(0).

60. In Gaussian Quadrature: if the limits are from a to b, we apply a suitable
change of variable to bring the integral to the range −1 to 1:
Put x = [(b − a)t + (b + a)]/2; then dx = [(b − a)/2] dt.
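A small Python sketch combining items 58-60: three point Gaussian quadrature on [a, b] after the change of variable x = [(b − a)t + (b + a)]/2 (the test integrand is an illustrative choice):

```python
import math

def gauss3(f, a, b):
    """Three point Gaussian quadrature on [a, b] via the map x = ((b-a)t + (b+a))/2."""
    nodes = [-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5)]
    weights = [5 / 9, 8 / 9, 5 / 9]
    half, mid = (b - a) / 2, (b + a) / 2
    return half * sum(w * f(half * t + mid) for t, w in zip(nodes, weights))

# Illustrative check: integral of x^4 over [0, 2] is 32/5 = 6.4 (exact, since degree <= 5)
print(gauss3(lambda x: x ** 4, 0.0, 2.0))
```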

61. State the Trapezoidal formula for Double Integrals:
I = ∫_c^d ∫_a^b f(x, y) dx dy
= (hk/4)[(sum of the values of f at the four corners)
+ 2(sum of the values of f at the remaining nodes on the boundary)
+ 4(sum of the values of f at the interior nodes)]

62. State Simpson's rule for Double Integrals:
I = ∫_c^d ∫_a^b f(x, y) dx dy
= (hk/9)[(sum of the values of f at the four corners)
+ 2(sum of the values of f at the odd positions on the boundary)
+ 4(sum of the values of f at the even positions on the boundary)
+ 4(sum of the values of f at the odd positions on the odd rows of the matrix)
+ 8(sum of the values of f at the even positions on the odd rows of the matrix)
+ 8(sum of the values of f at the odd positions on the even rows of the matrix)
+ 16(sum of the values of f at the even positions on the even rows of the matrix)]

63. Why is Trapezoidal rule so called?


Ans: Because it approximates the integral by the sum of the areas of n trapezoids.

64. Compare the Trapezoidal rule and Simpson's one-third rule.
Ans:
Trapezoidal rule:
1. Any number of intervals.
2. Error: O(h²).
3. The degree of y(x) fitted is one.
Simpson's one-third rule:
1. The number of intervals should be even.
2. Error: O(h⁴).
3. The degree of y(x) fitted is two.

UNIT-IV
INITIAL VALUE PROBLEMS FOR ORDINARY DIFFERENTIAL EQUATIONS

65. Initial Value Problem:


Ans: The general solution of a differential equation of the nth order has n arbitrary
constants. In order to compute the numerical solution of such an equation we need n
conditions. If all the n conditions are specified at the initial point only then it is called
an initial value problem.
If the conditions are specified at two or more points then it is called a
Boundary value problem.
66. Define Single Step Method and Multistep Method.
Ans: A single step (pointwise) method produces a series for y in terms of powers of x, from
which the value of y can be obtained by direct substitution; in each step we use the data of
just one preceding step. A solution of this type is called a pointwise solution.
A method that uses values from more than one preceding step is called a
multistep method.

67. State the Taylor series formula:
y(x) = y₀ + (h/1!)y₀' + (h²/2!)y₀'' + (h³/3!)y₀''' + (h⁴/4!)y₀^(4) + ..., where h = x − x₀.

68. What are the merits and demerits of Taylor's method?
Ans: Merits: It is a powerful single step method if we are able to find the successive
derivatives easily.
Demerits: (i) the derivatives may be complicated; (ii) at the given point the derivative may
be infinite.
69. State Euler's Method formula:
y_{n+1} = yₙ + h f(xₙ, yₙ); n = 0, 1, 2, ...

70. State the Modified Euler's Method formula:
y_{n+1} = yₙ + h f(xₙ + h/2, yₙ + (h/2) f(xₙ, yₙ)); n = 0, 1, 2, ...
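A brief Python sketch of Euler's method (item 69) and the modified Euler method (item 70) for dy/dx = f(x, y); the test problem y' = x + y, y(0) = 1 is an illustrative choice:

```python
def euler(f, x0, y0, h, steps):
    """Euler: y_{n+1} = y_n + h f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

def modified_euler(f, x0, y0, h, steps):
    """Modified Euler: y_{n+1} = y_n + h f(x_n + h/2, y_n + (h/2) f(x_n, y_n))."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x + h / 2, y + (h / 2) * f(x, y))
        x += h
    return y

f = lambda x, y: x + y                  # illustrative ODE y' = x + y, y(0) = 1
print(euler(f, 0.0, 1.0, 0.1, 10), modified_euler(f, 0.0, 1.0, 0.1, 10))
```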

71. State the Fourth Order Runge-Kutta Method formula (for first order differential equations):
y_{n+1} = yₙ + Δy,
where Δy = (1/6)(k₁ + 2k₂ + 2k₃ + k₄) and
k₁ = h f(xₙ, yₙ)
k₂ = h f(xₙ + h/2, yₙ + k₁/2)
k₃ = h f(xₙ + h/2, yₙ + k₂/2)
k₄ = h f(xₙ + h, yₙ + k₃)
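A direct Python sketch of the RK4 step of item 71 (same illustrative test problem as above):

```python
def rk4_step(f, x, y, h):
    """One fourth order Runge-Kutta step for y' = f(x, y)."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Illustrative: advance y' = x + y, y(0) = 1 to x = 0.2 using two steps of h = 0.1
y = 1.0
for i in range(2):
    y = rk4_step(lambda x, y: x + y, 0.1 * i, y, 0.1)
print(y)
```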

72. State the Fourth Order Runge-Kutta Method formula (for first order simultaneous differential equations):
Let dy/dx = f₁(x, y, z) and dz/dx = f₂(x, y, z). Then
y_{n+1} = yₙ + Δy, where Δy = (1/6)(k₁ + 2k₂ + 2k₃ + k₄), and
z_{n+1} = zₙ + Δz, where Δz = (1/6)(l₁ + 2l₂ + 2l₃ + l₄), with
k₁ = h f₁(xₙ, yₙ, zₙ);  l₁ = h f₂(xₙ, yₙ, zₙ)
k₂ = h f₁(xₙ + h/2, yₙ + k₁/2, zₙ + l₁/2);  l₂ = h f₂(xₙ + h/2, yₙ + k₁/2, zₙ + l₁/2)
k₃ = h f₁(xₙ + h/2, yₙ + k₂/2, zₙ + l₂/2);  l₃ = h f₂(xₙ + h/2, yₙ + k₂/2, zₙ + l₂/2)
k₄ = h f₁(xₙ + h, yₙ + k₃, zₙ + l₃);  l₄ = h f₂(xₙ + h, yₙ + k₃, zₙ + l₃)

73. State the Fourth Order Runge-Kutta Method formula (for second order differential equations):
Put dy/dx = z = f₁(x, y, z), so that d²y/dx² = dz/dx = f₂(x, y, z), and proceed as for simultaneous first order equations:
y_{n+1} = yₙ + Δy, where Δy = (1/6)(k₁ + 2k₂ + 2k₃ + k₄), and
z_{n+1} = zₙ + Δz, where Δz = (1/6)(l₁ + 2l₂ + 2l₃ + l₄), with
k₁ = h f₁(xₙ, yₙ, zₙ);  l₁ = h f₂(xₙ, yₙ, zₙ)
k₂ = h f₁(xₙ + h/2, yₙ + k₁/2, zₙ + l₁/2);  l₂ = h f₂(xₙ + h/2, yₙ + k₁/2, zₙ + l₁/2)
k₃ = h f₁(xₙ + h/2, yₙ + k₂/2, zₙ + l₂/2);  l₃ = h f₂(xₙ + h/2, yₙ + k₂/2, zₙ + l₂/2)
k₄ = h f₁(xₙ + h, yₙ + k₃, zₙ + l₃);  l₄ = h f₂(xₙ + h, yₙ + k₃, zₙ + l₃)

74. Euler's algorithm:
Error = (h²/2) y''(ξ) ≤ (h²/2) M₂, where M₂ = max|y''(x)| over xᵢ ≤ x ≤ x_{i+1}.
The order of the local truncation error is O(h²).

75. Modified Euler's method:
The order of the local truncation error is O(h³); Error ≤ (h³/3) M₃.

76. State which is better: Taylor's method or the R-K method?
Ans: The R-K method, since it does not require prior calculation of the higher derivatives of
y(x), as Taylor's method does.

77. Milne's predictor and corrector methods:
Formula:
Predictor: y_{n+1,p} = y_{n−3} + (4h/3)(2y'_{n−2} − y'_{n−1} + 2y'_n)
Corrector: y_{n+1,c} = y_{n−1} + (h/3)(y'_{n−1} + 4y'_n + y'_{n+1})
78. Adams-Bashforth predictor and corrector methods:
Predictor: y_{n+1,p} = yₙ + (h/24)(55y'_n − 59y'_{n−1} + 37y'_{n−2} − 9y'_{n−3})
Corrector: y_{n+1,c} = yₙ + (h/24)(9y'_{n+1} + 19y'_n − 5y'_{n−1} + y'_{n−2})
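A compact Python sketch of one Milne predictor-corrector step from item 77, assuming the four starting values have already been obtained (e.g. by RK4); the starting data are illustrative:

```python
import math

def milne_step(f, xs, ys, h):
    """One Milne step: predict y at the next point from the last four values, then correct once."""
    d = [f(x, y) for x, y in zip(xs, ys)]           # y' at the four known points
    x_next = xs[-1] + h
    y_pred = ys[0] + (4 * h / 3) * (2 * d[1] - d[2] + 2 * d[3])          # predictor
    y_corr = ys[2] + (h / 3) * (d[2] + 4 * d[3] + f(x_next, y_pred))     # corrector
    return x_next, y_corr

# Illustrative: y' = -y, y(0) = 1, starting values taken from the exact solution e^(-x)
h = 0.1
xs = [0.0, 0.1, 0.2, 0.3]
ys = [math.exp(-x) for x in xs]
print(milne_step(lambda x, y: -y, xs, ys, h))       # y(0.4) is about 0.6703
```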

79. Compare the R-K method with Predictor-Corrector methods.
Ans:
R-K method:
1. Self-starting.
2. It is not possible to get the truncation error.
Predictor-Corrector method:
1. Not self-starting; it requires prior values.
2. It is easy to get the truncation error.

80. How many prior values required to predict the next value in Adams and Milnes
method?
Ans: Four prior values.
81. What is meant by initial value problem and give an example it.
Ans: Problems for finding solutions of differential equation in which all the initial
conditions are specified at the initial point only are called initial value
problems.
Example: y' = f(x, y) with y(x₀) = y₀.
82. Write the name of any two self-starting methods to solve dy/dx = f(x, y) given y(x₀) = y₀.
Ans: Euler's Method and the Runge-Kutta Method.

83. Mention the multistep methods available for solving ordinary differential equation.
Ans: Milnes Predictor-Corrector Method and Adams Bashforth Predictor-Corrector
Method.
84. What is a Predictor-Corrector Method of solving a differential equation?
Ans: We first use a formula to find the value of y at x_{n+1}; this is known as the
predictor formula. The value of y so obtained is then improved or corrected by another
formula, known as the corrector formula.
85. Why is the Runge-Kutta formula called fourth order?
Ans: The fourth order Runge-Kutta method agrees with the Taylor series solution up to
the terms in h⁴; hence it is called the fourth order R-K method.

86. Error: [Milne's Predictor]
E = (14/45) h⁵ y^(5)(ξ), where x_{n−3} < ξ < x_{n+1}.
[Milne's Corrector]
E = (1/90) h⁵ y^(5)(ξ), where x_{n−1} < ξ < x_{n+1}.

87. Round off error: When we are working with decimal numbers, we approximate the
decimals to the required degree of accuracy. The error due to these approximations is
called round off error.
Truncation error: The error caused by using an approximate formula in computations
is known as truncation error.

88. Error: Adams Predictor
E = (251/720) h⁵ y^(5)(ξ), where x_{n−3} < ξ < x_{n+1}.
Error: Adams Corrector
E = (19/720) h⁵ y^(5)(ξ), where x_{n−2} < ξ < x_{n+1}.

89. Compare the Milne and Adams-Bashforth Predictor-Corrector methods for solving
ordinary differential equations.
Ans:
Adams Method:
1. It requires four starting values of y.
2. It does not have the same instability problems as the Milne method.
3. A modification of the Adams method is more widely used than Milne's.
Milne's Method:
1. It requires four starting values of y.
2. It is about as efficient as the Adams method.
3. It is simple and has a good local error, O(h⁵).

UNIT V
BOUNDARY VALUE PROBLEMS FOR ORDINARY AND PARTIAL
DIFFERENTIAL EQUATIONS
90. Define Boundary value problem.
Ans: When the differential equation is to be solved satisfying the conditions
specified at the end points of an interval the problem is called boundary value
problem.
91. Define Difference Quotients.
Ans: A difference quotient is the quotient obtained by dividing the difference
between two values of a function by the difference between the two corresponding
values of the independent variable, e.g. yᵢ' ≈ (y_{i+1} − y_{i−1})/(2h).

92. Finite Difference Methods:
yᵢ' = (y_{i+1} − y_{i−1})/(2h)
yᵢ'' = (y_{i+1} − 2yᵢ + y_{i−1})/h²

93. Classification of Partial Differential Equations of the Second Order:
For A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + f(x, y, u, ∂u/∂x, ∂u/∂y) = 0,
(i) if B² − 4AC < 0 the equation is Elliptic;
(ii) if B² − 4AC > 0 the equation is Hyperbolic;
(iii) if B² − 4AC = 0 the equation is Parabolic.

94. Bender-Schmidt's Difference Equation (Explicit Method) for ∂u/∂t = α² ∂²u/∂x²:
u_{i,j+1} = λ u_{i−1,j} + (1 − 2λ) u_{i,j} + λ u_{i+1,j},
where λ = k/(a h²) and a = 1/α² (h is the step in x, k the step in t).
Bender-Schmidt's Difference Equation: if λ = 1/2, then
u_{i,j+1} = (1/2)(u_{i−1,j} + u_{i+1,j}).
This is valid only if k = a h²/2.
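A tiny Python sketch of the Bender-Schmidt explicit scheme of item 94 with λ = 1/2, where each new value is simply the average of its two neighbours at the previous time level (the initial and boundary temperatures are illustrative):

```python
def bender_schmidt(u0, left, right, time_steps):
    """Explicit scheme with lambda = 1/2: u[i, j+1] = (u[i-1, j] + u[i+1, j]) / 2."""
    u = list(u0)                                    # values at the x-nodes, boundaries included
    for _ in range(time_steps):
        u = [left] + [(u[i - 1] + u[i + 1]) / 2 for i in range(1, len(u) - 1)] + [right]
    return u

# Illustrative rod: ends held at 0, initial interior temperatures 4, 8, 4
print(bender_schmidt([0, 4, 8, 4, 0], left=0, right=0, time_steps=3))
```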

95. Crank-Nicholson's Difference Equation (Implicit Method) for ∂u/∂t = α² ∂²u/∂x²:
λ u_{i−1,j+1} − 2(λ + 1) u_{i,j+1} + λ u_{i+1,j+1} = 2(λ − 1) u_{i,j} − λ (u_{i−1,j} + u_{i+1,j}),
where λ = k/(a h²) and a = 1/α².
Crank-Nicholson's Difference Equation when λ = 1 (i.e. k = a h²): the Crank-Nicholson
difference equation becomes
u_{i,j+1} = (1/4)[u_{i−1,j+1} + u_{i+1,j+1} + u_{i−1,j} + u_{i+1,j}].

96. Write down the implicit formula to solve the one dimensional heat flow equation ∂u/∂t = α² ∂²u/∂x².
Ans: λ u_{i−1,j+1} − 2(λ + 1) u_{i,j+1} + λ u_{i+1,j+1} = 2(λ − 1) u_{i,j} − λ (u_{i−1,j} + u_{i+1,j}).
97. Why is Crank Nicholsons Scheme called an implicit scheme?
Ans: The solution value at any point (i,j+1) on the (j+1)th level is dependent on the
solution values at the neighbouring points on the same level and on three values
of the jth level. Hence it is an implicit method.
98. What type of equations can be solved by Crank-Nicholson's formula (implicit)?
Ans: It is used to solve the parabolic equation (one dimensional heat equation) of the
form ∂u/∂t = α² ∂²u/∂x².
99. What type of equations can be solved by Bender-Schmidt's formula (explicit)?
Ans: It is used to solve the parabolic equation (one dimensional heat equation) of the
form ∂u/∂t = α² ∂²u/∂x².
100. State the explicit scheme formula for the solution of the wave equation.
Ans: The formula to solve the wave equation a² u_xx = u_tt numerically is
u_{i,j+1} = 2(1 − λ²a²) u_{i,j} + λ²a² (u_{i+1,j} + u_{i−1,j}) − u_{i,j−1}, where λ = k/h.
When λ²a² = 1, i.e. k = h/a, the wave equation scheme reduces to
u_{i,j+1} = u_{i−1,j} + u_{i+1,j} − u_{i,j−1},
and the condition u_t(x, 0) = 0 gives u_{i,1} = (1/2)(u_{i−1,0} + u_{i+1,0}).

101. Define the local truncation error.
Ans: The local truncation error is the residual left when the exact solution is substituted
into the difference equation; for y'' = f(x, y, y') it is
Eᵢ = [(y_{i+1} − 2yᵢ + y_{i−1})/h²] − f(xᵢ, yᵢ, (y_{i+1} − y_{i−1})/(2h)).
102. State the finite difference approximation for d²y/dx² and state the order of the truncation error.
Ans: yᵢ'' ≈ (y_{i+1} − 2yᵢ + y_{i−1})/h², and the order of the truncation error is O(h²).

103. Forward finite difference formulae:
u_x(x₀, y₀) ≈ [u(x₀ + h, y₀) − u(x₀, y₀)]/h
u_y(x₀, y₀) ≈ [u(x₀, y₀ + k) − u(x₀, y₀)]/k
The truncation error is (h/2) u_xx(ξ, y₀), where x₀ < ξ < x₀ + h.

104. Backward finite difference formulae:
u_x(x₀, y₀) ≈ [u(x₀, y₀) − u(x₀ − h, y₀)]/h
u_y(x₀, y₀) ≈ [u(x₀, y₀) − u(x₀, y₀ − k)]/k
The truncation error is (h/2) u_xx(ξ, y₀), where x₀ − h < ξ < x₀.

105. Second order finite difference formulae:
u_xx(x₀, y₀) ≈ [u(x₀ + h, y₀) − 2u(x₀, y₀) + u(x₀ − h, y₀)]/h²
u_yy(x₀, y₀) ≈ [u(x₀, y₀ + k) − 2u(x₀, y₀) + u(x₀, y₀ − k)]/k²
The truncation error is (h²/12) u_xxxx(ξ, y₀), where x₀ − h < ξ < x₀ + h.

106. Name at least two numerical methods that are used to solve one dimensional
diffusion equation.
Ans: (i) Bender-Schmidt Method; (ii) Crank-Nicholson Method.
107. State the standard five point formula for solving u_xx + u_yy = 0.
Ans: u_{i,j} = (1/4)[u_{i−1,j} + u_{i+1,j} + u_{i,j+1} + u_{i,j−1}]
108. State the diagonal five point formula for solving u_xx + u_yy = 0.
Ans: u_{i,j} = (1/4)[u_{i−1,j−1} + u_{i−1,j+1} + u_{i+1,j−1} + u_{i+1,j+1}]
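A small Python sketch that applies the standard five point formula of item 107 repeatedly (Liebmann's iteration) to a square mesh; the boundary values below are illustrative:

```python
def liebmann(grid, sweeps=50):
    """Replace every interior node by the average of its four neighbours, sweep after sweep."""
    u = [row[:] for row in grid]
    rows, cols = len(u), len(u[0])
    for _ in range(sweeps):
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                u[i][j] = (u[i - 1][j] + u[i + 1][j] + u[i][j - 1] + u[i][j + 1]) / 4
    return u

# Illustrative 4 x 4 mesh: top edge held at 100, the other edges at 0
mesh = [[100, 100, 100, 100],
        [0,   0,   0,   0],
        [0,   0,   0,   0],
        [0,   0,   0,   0]]
for row in liebmann(mesh):
    print([round(v, 2) for v in row])
```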
109. Write down the one dimensional wave equation and its boundary conditions.
Ans: ∂²u/∂t² = a² ∂²u/∂x²
Boundary conditions:
(i) u(0, t) = 0, t ≥ 0
(ii) u(l, t) = 0, t ≥ 0
(iii) u(x, 0) = f(x), 0 < x < l
(iv) u_t(x, 0) = 0, 0 < x < l

110. State the explicit formula for the one dimensional wave equation with 1 − λ²a² = 0,
where λ = k/h and a² = T/m.
Ans: u_{i,j+1} = u_{i−1,j} + u_{i+1,j} − u_{i,j−1}

111. Write down the finite difference form of the equation ∇²u = f(x, y).
Ans: u_{i+1,j} + u_{i−1,j} + u_{i,j+1} + u_{i,j−1} − 4u_{i,j} = h² f(ih, jh)

112. Write Laplace's equation, its finite difference analogue and the standard five point formula.
Ans: Laplace's equation is u_xx + u_yy = 0.
Finite difference analogue: [u_{i+1,j} − 2u_{i,j} + u_{i−1,j}]/h² + [u_{i,j+1} − 2u_{i,j} + u_{i,j−1}]/k² = 0
Standard five point formula (h = k): u_{i,j} = (1/4)[u_{i+1,j} + u_{i−1,j} + u_{i,j+1} + u_{i,j−1}].
113. What type of equation is the one dimensional wave equation?
Ans: It is a hyperbolic equation, of the form ∂²u/∂t² = a² ∂²u/∂x².
114. Write down the one dimensional heat flow equation and its boundary conditions.
Ans: ∂u/∂t = α² ∂²u/∂x²
Boundary conditions:
(i) u(0, t) = T₀, t ≥ 0
(ii) u(l, t) = T₁, t ≥ 0
(iii) u(x, 0) = f(x), 0 < x < l
115. Write down the two dimensional (steady state) heat flow equation and its boundary conditions.
Ans: ∂²u/∂x² + ∂²u/∂y² = 0
Boundary conditions:
(i) u(0, y) = 0, for 0 < y < b
(ii) u(a, y) = 0, for 0 < y < b
(iii) u(x, 0) = f(x), for 0 < x < a
(iv) u(x, b) = g(x), for 0 < x < a
