
Computer Arithmetic

1. There are two types of arithmetic operations available in a computer. These are:
a. Integer Arithmetic
b. Real or floating point Arithmetic
2. Integer arithmetic deals with integer operands i.e., numbers without fractional parts.
3. Integer arithmetic is used mainly in counting and as subscripts.
4. Real arithmetic uses numbers with fractional parts as operands and is used in most
computations.
5. In normalized floating point mode of representing and storing real numbers, a real number is
expressed as a combination of a mantissa and an exponent.
6. The mantissa is made less than 1 and greater than or equal to 0.1, and the exponent is the power of 10 by which the mantissa is multiplied.
7. In .4485E8, the mantissa is .4485 and the exponent is 8.
8. E8 is used to represent 10^8.
9. The shifting of the mantissa to the left till its most significant digit is non-zero is called
normalization.
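
A minimal Python sketch of points 5-9 (the function name normalize and the sample calls are my own, for illustration only): it splits a positive real number into a mantissa in [0.1, 1) and an integer power of 10.

```python
def normalize(x):
    """Return (mantissa, exponent) with 0.1 <= |mantissa| < 1, so x == mantissa * 10**exponent."""
    if x == 0:
        return 0.0, 0
    mantissa, exponent = abs(x), 0
    # Shift left (multiply by 10) while the most significant digit is zero.
    while mantissa < 0.1:
        mantissa, exponent = mantissa * 10, exponent - 1
    # Shift right (divide by 10) while the mantissa is 1 or larger.
    while mantissa >= 1:
        mantissa, exponent = mantissa / 10, exponent + 1
    return (-mantissa if x < 0 else mantissa), exponent

print(normalize(44850000))   # roughly (0.4485, 8), i.e. .4485E8, up to binary rounding
print(normalize(0.00321))    # roughly (0.321, -2)
```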
10. When numbers are stored using normalized floating point notation, the range of magnitudes that may be stored extends from .1000 x 10^-99 to .9999 x 10^99.
11. If the two numbers in normalized floating point notation are to be added the exponents of
the two numbers must be made equal and the mantissa shifted appropriately.
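
A sketch of point 11 in Python, keeping only four mantissa digits to mimic a machine with a fixed-length mantissa (the function name fp_add and the digit count are my own choices):

```python
def fp_add(m1, e1, m2, e2, digits=4):
    """Add two normalized floating-point numbers, each pair (m, e) meaning m * 10**e."""
    # Make the exponents equal by shifting the mantissa of the smaller-exponent operand right.
    if e1 < e2:
        m1, e1, m2, e2 = m2, e2, m1, e1
    m2 = m2 / 10 ** (e1 - e2)          # align m2 to exponent e1
    m, e = round(m1 + m2, digits), e1  # add, keeping a fixed number of mantissa digits
    # Re-normalize if the sum moved outside [0.1, 1).
    while abs(m) >= 1:
        m, e = round(m / 10, digits), e + 1
    while m != 0 and abs(m) < 0.1:
        m, e = round(m * 10, digits), e - 1
    return m, e

# .4485E8 + .3000E6  ->  .4485E8 + .0030E8  =  .4515E8
print(fp_add(0.4485, 8, 0.3000, 6))    # approximately (0.4515, 8)
```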
12. The operation of subtraction is nothing but adding a negative number.
13. Two numbers are multiplied in the normalized floating point mode by multiplying the
mantissas and adding the exponents. After the multiplication of the mantissas the result
mantissa is normalized as in addition/subtraction operation and the exponent appropriately
adjusted.
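
Point 13 in the same sketched notation (again with a hypothetical four-digit mantissa):

```python
def fp_mul(m1, e1, m2, e2, digits=4):
    """Multiply two normalized floating-point numbers, each pair (m, e) meaning m * 10**e."""
    m = m1 * m2          # multiply the mantissas
    e = e1 + e2          # add the exponents
    # The product of two mantissas in [0.1, 1) lies in [0.01, 1); shift left if needed.
    while m != 0 and abs(m) < 0.1:
        m, e = m * 10, e - 1
    return round(m, digits), e

# .2000E3 * .3000E2: the mantissas give .06, which is normalized to .6000E4
print(fp_mul(0.2, 3, 0.3, 2))    # approximately (0.6, 4)
```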
14. In dividing a number by another the mantissa of the numerator is divided by that of the
denominator. The denominator exponent is subtracted from the numerator exponent. The
quotient mantissa is normalized to make the most significant digit non-zero and the
exponent appropriately adjusted.
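
And point 14, division, in the same sketch:

```python
def fp_div(m1, e1, m2, e2, digits=4):
    """Divide (m1, e1) by (m2, e2), each pair meaning m * 10**e, with m2 nonzero."""
    m = m1 / m2          # divide the mantissa of the numerator by that of the denominator
    e = e1 - e2          # subtract the denominator exponent from the numerator exponent
    # The quotient of two mantissas in [0.1, 1) lies in (0.1, 10); shift right if needed.
    while abs(m) >= 1:
        m, e = m / 10, e + 1
    return round(m, digits), e

# .9999E5 / .1000E3: the mantissas give 9.999, normalized to .9999 with exponent 3
print(fp_div(0.9999, 5, 0.1, 3))    # approximately (0.9999, 3)
```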
15. In performing numerical calculations three types of errors are encountered.
16. Errors due to the finite representation of numbers. For example, 1/3 is not exactly representable using a finite number of digits.
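
The representation error in point 16 is easy to observe in Python (the printed digits assume IEEE double hardware):

```python
from fractions import Fraction

third = 1 / 3                              # rounded to the nearest binary floating-point value
print(f"{third:.20f}")                     # 0.33333333333333331483 on IEEE doubles
print(Fraction(third) == Fraction(1, 3))   # False: the stored value is not exactly one third
```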
17. Errors due to arithmetic operations using normalized floating point numbers. Such errors are
called rounding errors.
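
A small illustration of the rounding error in point 17; the exact behaviour assumes IEEE double arithmetic:

```python
big, small = 1.0e16, 1.0
print((big + small) - big)    # 0.0: aligning exponents shifts the small mantissa out of
                              # the available digits, so the addition rounds it away
print(0.1 + 0.2 == 0.3)       # False: the rounded sum differs from the stored 0.3
```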
18. Errors due to finite representation of an inherently infinite process. For example, the use of a finite number of terms in the infinite series expansion of sin x, cos x, etc. Such errors are called truncation errors.
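
Point 18 can be seen by truncating the Taylor series of sin x after a few terms (a sketch; the function name and the choice of four terms are arbitrary):

```python
import math

def sin_series(x, terms=4):
    """Approximate sin(x) with the first `terms` terms of its Taylor series."""
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
    return total

x = 1.0
print(sin_series(x, 4))                       # about 0.841468, from the truncated series
print(math.sin(x))                            # about 0.841471
print(abs(math.sin(x) - sin_series(x, 4)))    # the truncation error
```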
19. There are two measures of accuracy of the results: a measure of absolute error and a measure
of relative error.
20. If m1 is the correct value of a variable and m2 the value obtained by computation, then |m1 - m2| = ea is called the absolute error.
21. A measure of relative error is defined as |(m1 - m2)/m1| = |ea/m1| = er.
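
Points 20 and 21 translate directly into Python (the function name errors is my own; variable names follow the text):

```python
def errors(m1, m2):
    """m1: correct value, m2: computed value. Returns (absolute error, relative error)."""
    ea = abs(m1 - m2)     # absolute error |m1 - m2|
    er = abs(ea / m1)     # relative error |(m1 - m2)/m1|, assuming m1 != 0
    return ea, er

print(errors(1 / 3, 0.3333))   # ea is about 3.3e-05, er is about 1e-04
```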
22. Computers store numbers in binary form.
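
Because storage is binary, a decimal fraction such as 0.1 has no exact machine representation; a short illustration (output assumes IEEE doubles):

```python
from decimal import Decimal

print(Decimal(0.1))    # exact stored value: 0.1000000000000000055511151231257827021181583404541015625
print((0.1).hex())     # the underlying binary form in hex: 0x1.999999999999ap-4
```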
