
COMPUTER SCIENCE & ENGINEERING DEPARTMENT
OBAFEMI AWOLOWO UNIVERSITY, ILE-IFE, OSUN STATE, NIGERIA

Harmattan Semester, 2012-2013 Session
PRACTICAL LAB 1
CSC 307: Numerical Computations I
THIS DOCUMENT IS NOT FOR SALE!

March 29, 2013

Contents
1 Background
2 Introduction to the laboratory
  2.1 Purpose of the Laboratory
3 Programming
  3.1 Why FORTRAN 95
  3.2 Why Octave
  3.3 Programming Works
4 Laboratory Reports
  4.1 Assignment Submission Dates
5 Laboratory Assignment 1
  5.1 EXPERIMENT I: Error and Error Propagation in Numerical Computations
  5.2 Aim of Experiment I
  5.3 Experiment 1A: Inherent Error in single precision arithmetic
  5.4 Machine epsilon
  5.5 Task set 1
  5.6 Machine Gamma
  5.7 Task set 2
  5.8 Experiment 1B: Inherent Error in Double Precision Arithmetic
  5.9 Task set 3

1 Background
This is a 3-unit course. You are expected to have two hours of lectures and three hours of practical classes every week throughout the duration of this course. Note that CSC 201: Introduction to Programming I is a prerequisite to this course; the knowledge of CSC 201 is required for you to do well here. You are therefore expected to have offered and passed CSC 201 before registering for this course.

This document has been written to assist you during the practical classes. It is important that you are familiar with its contents before coming for your practical classes.

As discussed in our lectures, the aim in numerical computations is to develop accurate, reliable and economic solutions to problems. A number of factors militate against the computation of accurate results. They are highlighted as follows.

Conceptual errors: These errors result from a wrong conception of the problem or of its solution approach. An example is the use of a wrong mathematical model or scientific theory to describe the problem and/or its solution.

Data errors: Numerical computations are carried out with numbers which are usually approximated values. A measuring instrument, e.g. a thermometer in the case of temperature, will only return results accurate to a specified number of significant digits (this depends on the instrument's resolution). The exact value is never obtained, particularly when dealing with real numbers. Blunders can also lead to data errors.

Algorithmic errors: The algorithms underlying the solution methods in numerical computations sometimes use mathematical series with infinite terms. Only a finite number of terms can be taken into account when designing the solution algorithm for a problem, so some terms must be ignored in the computation; hence the error. Taking the first neglected term as the error does not remove this inexactness.

Computational tool errors: There are errors inherent in the computational tool due to the limited capacity of the computer. This depends on what the computer can represent, store, process and/or transmit. Most data cannot be stored exactly, so a compromise must be made vis-a-vis the capability of the computer.

Despite the above, note that the computer does not make mistakes: it carries out only what it is designed and programmed to do. A general conceptualisation of the errors that militate against obtaining accurate computational results is depicted in Fig. 1. Our aim in numerical computations is to find an answer which is correct to a specified level or standard of accuracy while taking the above sources of error into account. Hence, numerical computation comprises the art and science of obtaining practical solutions to computational problems.

[Fig. 1: Context of Computational Errors. The figure's labels include: Problem; Solution Conception; Computational Model; Data; Running Program; Computed Solution; concept representation error; mathematical function truncation; algorithmic error; measuring instrument inherent error; data entry error (blunder); data round-off/chopping; computing machine inherent error; solution interpretation error; solution presentation error.]
The solution obtained to a problem may not be the most accurate. However, a solution is expected to be within a reasonable range of accepted values; the value will serve to inform an engineer or a scientist in making practical decisions. Therefore, the purpose of computation is not numeration. Computation is done to gain informed scientific insight into a real-world situation or phenomenon.

The design and implementation of computational solutions are in the context of what is practical. This is constrained by what can be measured and how it is measured, as well as by the data, algorithm and computing device used to represent and manipulate them. The issues of accuracy, or error, in numerical computation and the computing process are meant to facilitate the construction of an acceptable expression of the extent to which computation results can reasonably be applied. Acceptability, however, is a function of the technology, cost and application environment. Therefore, the emphasis in our practical classes will not be merely on generating correct results, but also on knowing the degree or extent to which the results can be relied upon in engineering decision making. The background information provided above will underpin all the exercises and tasks that will be assigned to you in this laboratory.
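As a small, concrete illustration of the algorithmic (truncation) error described above, the following sketch sums only the first few terms of the Taylor series for exp(x) and compares the result with the intrinsic EXP function. The program name, the test point x = 1.0 and the choice of five terms are illustrative assumptions, not part of the laboratory tasks.

    PROGRAM truncation_demo
      IMPLICIT NONE
      INTEGER, PARAMETER :: nterms = 5      ! number of series terms kept (illustrative)
      INTEGER :: n
      REAL :: x, term, series_sum

      x = 1.0                               ! test point (illustrative)
      term = 1.0                            ! first term, x**0 / 0!
      series_sum = term
      DO n = 1, nterms - 1
         term = term * x / REAL(n)          ! next term, x**n / n!
         series_sum = series_sum + term
      END DO

      WRITE (*, *) 'Truncated series for exp(x)   : ', series_sum
      WRITE (*, *) 'Intrinsic EXP(x)              : ', EXP(x)
      WRITE (*, *) 'Algorithmic (truncation) error: ', EXP(x) - series_sum
    END PROGRAM truncation_demo

Increasing nterms reduces the truncation error, but can never remove the other error sources discussed above.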

2 Introduction to the laboratory


The practical classes in this course will, among other things, provide insights into the design of accurate numerical algorithms and their implementation on digital computers. You will be expected to gain experience in:

i. The fundamentals of the numerical rendering of problems.
ii. Data representation and manipulation on digital computing machines.

iii. Numerical algorithm construction methodology.
iv. Algorithm implementation as programs (Plato Fortran and Octave).
v. Error management techniques: detection, reduction or avoidance strategies.
vi. Issues in the use of modern numerical software and tools.
vii. Hints on numerical software development and deployment processes.

Skill development through practice with programming exercises will provide a background for much of the course work. The programming exercises will focus on the utilization of numerical concepts and methods which have engineering applications. The practical exercises will emphasise good programming style as well as adequate and accurate documentation procedures.

The fundamental questions that underlie the activities in the laboratory include the following:

1. How can we accurately approximate continuous or infinite processes by finite discrete processes?
2. How do we cope with the errors arising from these approximations?
3. How rapidly can a given class of equations be solved for a given level of accuracy?
4. How can symbolic manipulations on equations, such as integration, differentiation, and reduction to minimal terms, be most accurately performed?
5. Given assorted numerical solution methods, how can we select or synthesise the best solution method for a target problem?
6. How can the solutions to problems be incorporated into efficient, reliable, high-quality mathematical software packages when such is required?

2.1 Purpose of the Laboratory


The laboratory assignments in this course have four (4) basic goals:

1. They help you to understand the general concepts and principles discussed in the CSC 307 course lectures, as well as during your private studies, by allowing you to experiment with and confirm specific concepts expressed in some of the ideas presented.

2. They help to improve your proficiency in algorithm design and computational problem solving. This proficiency will be useful in the modelling and simulation of engineering concepts and systems.

3. They help you to understand the principles of scientific and engineering experiment documentation.

4. They allow your instructor(s) to evaluate your practical skills in Numerical Computations.

Accordingly, you are strongly advised to make sure that all the work which you submit for assessment arises out of your own efforts and ideas. You are permitted, and encouraged, to discuss general algorithm design with the course lecturers as well as the laboratory coordinators. You may also receive help with specific debugging problems from the laboratory coordinators during the practical sessions. However, you are expected to work independently, or only within your own team (where applicable), when in the laboratory. Remember that the motto of this university is "For Learning and Culture", so do not let your behaviour suggest a lack of culture or home training, inside or outside the laboratory.

3 Programming
We will be using a programming language, FORTRAN with the Plato IDE (Integrated Development Environment), and a programming tool, the Octave modelling and simulation software. You may ask why we have selected these two platforms, given that there are more popular programming tools such as Java, and simulation and modelling software such as MatLab, Maple and Mathematica. One of the reasons is that programming languages such as Java are designed for writing general-purpose software; they are not developed specifically for engineering and scientific computing, as is the case for FORTRAN. Modelling and simulation tools such as MatLab, Maple and Mathematica are very expensive and outside our reach. The Plato IDE and Octave are freely available software and can therefore be used for teaching and research at no cost. The more technical reasons for our choices are as follows.

3.1 Why FORTRAN 95


The FORTRAN (FORmula TRANslator) programming language was originally written for engineering and scientific programming. It was therefore designed to provide the necessary programming structure for expressing and solving engineering and scientific problems. Specifically:

1. It provides greater expressive power: FORTRAN statements are intuitive to engineers and scientists.
2. It enhances safety (i.e. it provides tools for the detection of computational errors).
3. It enhances regularity, since it uses a standard syntax (i.e. rules for writing program instructions).
4. It provides extra fundamental features (such as dynamic storage; a short illustration is given at the end of this subsection).
5. It exploits modern computer hardware better.
6. It has better portability between different machines.

7. There is a lot of free support, in terms of program code, suggestions, patches, etc., from research groups and online open-source communities for FORTRAN.
8. FORTRAN produces more efficient code, since the code is compiled rather than interpreted as in Java.

In addition, the Plato IDE (Integrated Development Environment) that we will be using in this course is freely available and can be downloaded from the Internet. The IDE is also easy to use, as it provides a lot of supporting documents and examples online. The main disadvantage of FORTRAN is that it was developed before several important advances in the modern programming paradigm. This limitation notwithstanding, the drill provided by the FORTRAN programming language platform offers the kind of environment suitable for educational development and for improving students' learning experiences. Also, several modern features have been added to FORTRAN in the last decade; one such addition is the object-oriented programming concept.
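As a brief illustration of the "dynamic storage" feature mentioned in item 4 above, the following sketch allocates an array whose size is only known at run time. This is a minimal example; the program and variable names are illustrative choices, not part of any assignment.

    PROGRAM dynamic_storage_demo
      IMPLICIT NONE
      REAL, ALLOCATABLE :: values(:)     ! size fixed at run time, not at compile time
      INTEGER :: n, i

      WRITE (*, *) 'How many values should be stored?'
      READ  (*, *) n
      ALLOCATE (values(n))               ! dynamic storage obtained here

      DO i = 1, n
         values(i) = REAL(i) / REAL(n)
      END DO
      WRITE (*, *) 'Number of elements allocated: ', SIZE(values)
      WRITE (*, *) 'Sum of the stored values    : ', SUM(values)

      DEALLOCATE (values)                ! memory released when no longer needed
    END PROGRAM dynamic_storage_demo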

3.2 Why Octave


We will be using the Octave platform to plot graphs and write simple simulation programs in this course. Although these tasks could be achieved using FORTRAN, it would take more programming effort to achieve comparable results. Note that FORTRAN will allow us to access the features of the computer system directly in a manner that modelling and simulation languages like Octave, or even MatLab, will not. The Octave package provides close compatibility with MatLab. This gives us the opportunity to learn the syntax and power of both packages without the financial and/or licence restrictions associated with MatLab.

3.3 Programming Works


You will be required to carry out your programming work inside the laboratory. You may design your algorithms and test your code before coming to the laboratory. You may also create the source code and make some corrections by removing errors (debugging). You will, however, be required to explain the design procedure and the contents of your program. Familiarity with the contents of your program code, and your creativity and ingenuity at problem solving, will be recognised and rewarded accordingly. Documentation on Plato and Octave can be obtained from the Internet; the respective URLs, and a pointer to the websites from which to download the packages, are provided on the Computing and Intelligent Systems Research Group webpage, www.ifecisrg.org. If you have issues with these packages, please seek assistance from the laboratory coordinators.

4 Laboratory Reports
The documentation of your experiments is very important and will be given much attention during the grading of your laboratory work. You should, therefore, ensure that your laboratory reports follow the format stated in Table 1. A report should include information about your observations as much as possible. Also note that the items listed as 1, 2, 3, 6, 7, 8 and 13 in Table 1 must be included in your reports; the items listed as 5, 9, 10 and 11 are important; and the items listed as 4 and 12 should be included when used. The presentation of your laboratory report should be clean, clear and legible.

4.1 Assignment Submission Dates


The dates on which you will be required to submit the reports of your assignments will be announced during lectures. You should endeavour to submit your reports on or before such dates. Late submission may attract stiff penalties, for example rejection of the submission or deduction of marks.

Table 1: Laboratory Document Contents

1. Name and Title of Report: (i) NAME: Numerical Computations, Laboratory 1. (ii) TITLE: Error and Error Propagation.
2. Author and date: (i) Your full name(s), identification numbers and department of major (these must be listed for all group members if it is group work).
3. Tables of Contents: (i) Table of Contents. (ii) List of Figures (if any). (iii) List of Tables (if any).
4. Glossary of Notation and Terms: (i) List all uncommon symbols and terms and specify their meanings.
5. Introduction: (i) Brief discussion stating the background of the present work.
6. Problem statement: (i) Statement of the problem. (ii) Supporting theory and physical laws (if any). (iii) Mathematical models of the problem and solution.
7. Objectives of Experiment: (i) General objective or aim of the laboratory work. (ii) Specific objective to be accomplished in this experiment.
8. Experiment procedure: (i) Initial setting/materials and tools. (ii) Experiment processes. (iii) Measurement and recording processes. (iv) Algorithm design. (v) Program development and running. (vi) Final setting (if any).
9. Discussion of results: (i) Analysis of the method used and the significance of the results obtained.
10. Technological interpretation and application of results: (i) Illustrate, with examples, the real-life interpretation and applications of the result of your experiment.
11. Conclusion and suggestions for further work: (i) Summary and general observations; next course of action.
12. Appendixes: (i) Tables of values and results. (ii) Program listings.
13. References: (i) List all the literature cited in your work.

5 Laboratory Assignment 1
5.1 EXPERIMENT I: Error and Error Propagation in Numerical Computations
The aim of most activities in numerical computations is to develop and deploy accurate, effective and efficient numerical solutions. Numerical algorithms are therefore designed with the utmost aim of eliminating errors that can affect the computation of exact results. Numerical analysis has provided us with a recipe of mathematical methods that can be employed in solving general numerical problems. Computational techniques are required for crafting a solution which can give accurate results when implemented on a computer; built-in mechanisms for estimating the expected error are also used in this process. The task of minimising error in a computational process is, therefore, a relative one which depends on many factors. Some of the factors include:

1. The software environment for implementing the computational algorithm.
2. The hardware capability of the computational tool used: this will affect how numbers are stored.
3. The features of the problem being solved, e.g. the kind of mathematical models used.

An adequate understanding of how these three factors combine to affect the results of computational processes is necessary for the proper interpretation of the resulting model. Familiarity with subtle, but critical, factors which may give rise to computational errors is also important: for example, how a programming language's built-in functions are implemented. This will help in putting in place adequate measures to reduce the occurrence of avoidable errors. It must, however, be noted that this course focuses only on errors which may arise at the algorithm design and implementation level of numerical solutions to problems. Errors which have to do with the conception of the problems and/or their solution methods are not of primary concern.

5.2 Aim of Experiment I


This laboratory experiment is meant to familiarise students with some of the sources of error in computational methods implemented as algorithms on digital computer systems.

5.3 Experiment 1A: Inherent Error in single precision arithmetic


The accuracy of any real number stored on a computer system depends heavily on the word length of the system. The word length is the number of signal communication lines on the processor's data bus. Errors related to a limited capability of the computational tool such as this are called machine inherent errors. The format for storing real numbers on the computer can also influence the accuracy of numeric data. (Note that no stored "real" number is truly real: all numbers are representations of an abstract entity or object.)

The representation of real numbers is similar to that used in scientific notation. A real number is represented as

    F = M × β^E    (1)

where:

M is called the mantissa of the number. The number of bits in M determines the precision of the number; it determines the smallest number that can be represented by the system. Therefore, the more bits assigned to the mantissa, the more precise the data that can be stored on the computer system.

β is the base of the number. In the case of digital computers, β = 2, because digital computers use the binary number system (β = 10 for the decimal number system). Since β is the same for all the numbers that will be stored on a machine, it is generally not stored, but implied.

E is the exponent. It determines the magnitude of the number, and the number of bits used for representing E determines the magnitude of the numbers that can be stored.

For example, the decimal number 23.567635 × 10^5 is smaller but more precise than 23.56 × 10^10. This is because there are more digits in the mantissa, while the exponent is smaller, i.e. 5 < 10. Generally, the more bits assigned to the mantissa M, the better the precision of the number stored.
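To see how these quantities are set on the particular machine and compiler you are using, Fortran 95 provides intrinsic inquiry functions. The sketch below only queries the standard intrinsics RADIX, DIGITS, PRECISION and RANGE; the program name and output wording are illustrative choices.

    PROGRAM storage_format_demo
      IMPLICIT NONE
      REAL :: x   ! any default (single precision) real

      x = 1.0
      WRITE (*, *) 'Base (beta) of the representation : ', RADIX(x)
      WRITE (*, *) 'Model digits in the mantissa M    : ', DIGITS(x)
      WRITE (*, *) 'Decimal precision of the mantissa : ', PRECISION(x)
      WRITE (*, *) 'Decimal exponent range (from E)   : ', RANGE(x)
    END PROGRAM storage_format_demo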

5.4 Machine epsilon


The epsilon (represented using the symbol ε) of a computer is the smallest number it can store using the machine's floating-point arithmetic. Any number smaller than the machine epsilon will be stored as zero. Determining the epsilon of a machine is critical to the implementation of most numerical algorithms and to the interpretation of their results. Also, the accuracy of real numbers, and of the results of arithmetic operations on them, is constrained by the machine epsilon. Algorithms for determining the machine epsilon are meant to find the smallest number such that, when added to one (1.0), the result is greater than one, that is, 1.0 + epsilon > 1.0. A way to do this is to start with a small number and repeatedly divide it by 2 until the expression 1.0 + epsilon > 1.0 becomes false; the last number obtained before this is taken as the machine epsilon. The algorithm in Table 2 implements this computation process for machine epsilon.

5.5 Task set 1


Table 2: Algorithm: Machine Epsilon

    START:
        REAL epsilon
        REAL Testeps = 1.0
    5   Testeps := Testeps / 2.0
        IF ((1.0 + Testeps) > 1.0) THEN
            epsilon = Testeps
            GO TO 5
        ENDIF
        WRITE epsilon
    END:

1. Write a program to implement the algorithm in Table 2 (a sketch of one possible implementation follows this task list).
2. Experiment with various divisors for the epsilon by modifying line 5 of the algorithm, i.e. "5 Testeps := Testeps / 2.0", using divisors such as 3.0, 5.0 and 10.0.
3. Plot a graph (use Octave) relating the epsilon you obtained to the divisor you used.
4. Discuss your observations and results.
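The following is a minimal Fortran 95 sketch of the Table 2 algorithm, offered as one possible starting point rather than the required solution. The program and variable names, and the DIVISOR parameter (change it to 3.0, 5.0 or 10.0 for Task 2), are illustrative assumptions. Note also that some compilers may evaluate the comparison in higher-precision registers, which can affect the value you obtain.

    PROGRAM machine_epsilon_demo
      IMPLICIT NONE
      REAL :: testeps, eps_found
      REAL, PARAMETER :: divisor = 2.0   ! change to 3.0, 5.0 or 10.0 for Task 2

      testeps   = 1.0
      eps_found = 1.0
      DO
         testeps = testeps / divisor            ! "line 5" of the algorithm
         IF ((1.0 + testeps) > 1.0) THEN
            eps_found = testeps                 ! last value that still changes 1.0
         ELSE
            EXIT                                ! test failed: eps_found is the estimate
         END IF
      END DO

      WRITE (*, *) 'Estimated machine epsilon : ', eps_found
      WRITE (*, *) 'Intrinsic EPSILON(1.0)    : ', EPSILON(1.0)
    END PROGRAM machine_epsilon_demo

The last line prints the intrinsic EPSILON only for comparison; the intrinsic is defined slightly differently from the loop above, so the two values need not agree exactly.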

5.6 Machine Gamma


The machine gamma (represented using the symbol Γ) of a computer is the largest real number that it can correctly store. Any number bigger than the machine gamma is considered Not a Number and is represented as NaN. The machine gamma is related to the machine epsilon by the formula

    Γ = 1 / ε²    (2)

5.7 Task set 2


1. Write a program to compute the machine gamma based on the machine epsilon you obtained in Task set 1.
2. Plot a graph relating the machine epsilon and the machine gamma.
3. Discuss your observations and results.

The numerical space of the computer is defined by the closed interval [ε, Γ]. Any computation that results in values outside this range cannot be handled by the computer, and the result is said to be out of scope for the respective machine. If a computation produces a result smaller than the machine epsilon ε, the operation is said to have resulted in an underflow error. If a computation produces a result bigger than the machine gamma Γ, the operation is said to have resulted in an overflow error.


Overflow and underflow operations are outside the scope of the computing device. However, a wider numerical scope can be achieved on the same computer system by increasing the precision of data and operations, using the data declaration facilities of the programming language.
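As an illustration of these limits, the sketch below queries the standard Fortran 95 intrinsics HUGE and TINY for the largest and smallest positive normalised default reals, and then deliberately exceeds the upper limit. On IEEE-style hardware the product typically prints as Infinity, although the exact behaviour depends on your compiler and its settings. The laboratory tasks still expect you to determine ε and Γ with your own program, not with these intrinsics.

    PROGRAM scope_demo
      IMPLICIT NONE
      REAL :: big, small, blow_up

      big   = HUGE(1.0)     ! largest finite default real the machine can hold
      small = TINY(1.0)     ! smallest positive normalised default real

      WRITE (*, *) 'Largest finite REAL  (HUGE) : ', big
      WRITE (*, *) 'Smallest normal REAL (TINY) : ', small

      blow_up = big * 2.0   ! exceeds the upper limit: overflow
      WRITE (*, *) 'HUGE(1.0) * 2.0             : ', blow_up

      WRITE (*, *) 'TINY(1.0) / 2.0             : ', small / 2.0   ! underflow region
    END PROGRAM scope_demo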

5.8 Experiment 1B: Inherent Error in Double Precision Arithmetic


The values for machine epsilon and gamma obtained in Experiment 1A are for single precision arithmetic, which is the native precision of the computer. It is possible to extend this precision using program instructions. When a variable is defined as double precision, the size of the mantissa used is doubled. This has the effect of increasing the precision of the numbers that can be processed. It, however, comes at the cost of slower computation.
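The sketch below shows one common way of declaring double precision variables in Fortran 95, using a KIND parameter, and compares the single and double precision values reported by the intrinsics EPSILON and HUGE as a quick preview of what Task set 3 should reveal. The kind parameter name dp and the program name are illustrative choices.

    PROGRAM precision_compare
      IMPLICIT NONE
      ! Kind parameter giving at least 15 decimal digits (double precision).
      INTEGER, PARAMETER :: dp = SELECTED_REAL_KIND(15)
      REAL     :: xs          ! default, single precision
      REAL(dp) :: xd          ! double precision

      xs = 1.0
      xd = 1.0_dp

      WRITE (*, *) 'Single precision EPSILON : ', EPSILON(xs)
      WRITE (*, *) 'Double precision EPSILON : ', EPSILON(xd)
      WRITE (*, *) 'Single precision HUGE    : ', HUGE(xs)
      WRITE (*, *) 'Double precision HUGE    : ', HUGE(xd)
    END PROGRAM precision_compare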

5.9 Task set 3


1. Repeat Task set 1 and Task set 2 using double precision arithmetic.
2. Discuss your observations and results.

