
Past Qs

1) What do you mean by SDLC? Steps in SDLC. Errors encountered while developing software?

2) Difference between: a. Primary and secondary storage devices b. System and application software c. Master and Transaction Files d. Front-end and Back-end computing.

3) How is on-line processing different from batch processing? Illustrate different characteristics of these two types of processing systems and their suitability for different areas of application. Give two examples of each.

4) Design Master and Transaction Table structures for a Leave Accounting System for a medium-sized manufacturing setup. Explain the process of updating the master data table for the application.

5) Discuss the utility of: a. Spreadsheet software b. Text processing software c. Presentation graphics software d. Database managers

6) Discuss in MS-Office: a. What-if analysis b. Mail Merge c. Slide Transition d. Data Query. Take the example of a courier company for explaining the above.

7) Short notes on: a. Computer Networks b. I-P-O Cycle c. Generations of languages d. Internet-based computing

8) Discuss the organization you have studied as a part of your field study and illustrate the following aspects of computerization w.r.t. this organization: a. H/W Platforms b. S/W Platforms c. DBMS d. Networking e. Most Critical Applications

9) What do you understand by an I-P-O cycle in data processing? Discuss the relevance of an I-P-O cycle for: a. Batch processing system b. On-line processing system, detailing various components of the I-P-O cycle.

10) What is the significance of Data Tables in business data processing? Discuss the various types of Data Tables required for processing, taking one example each of: a. Inventory Accounting System b. Savings Bank Accounting System

11) Critically evaluate the Indian IT Industry and its growth. What do you think is responsible for such a growth of this industry? Specifically identify the major components of Indian IT industry and respective key players.

12) How do we develop IT portfolio for an organization? Illustrate with the help of an example. Distinguish between generic and business specific applications for the chosen organization.

13)Discuss the following in MS-Office Suite:

a. Goal Seeking Analysis in Excel b. Consolidation of Data in Excel c. Mail Merge in Word d. Report Creation In Access e. Animation in Power point

14)Short Notes:

a. Programming b. E-Commerce c. Flow Charting d. Computer Networking

15) Discuss the organization you have studied as a part of your field study and illustrate the following aspects of computerization in the context of the organization: a. Which H/W systems are being used by this organization? b. Which are the key DBs being used by the organization? c. Comment on the computer networking in the organization. d. Discuss the critical business applications of computers in this organization. e. If you were to recommend three improvements in the IT systems, what would these recommendations be?

16)a) Give a brief outline of development of modern computing. b) What are the factors that affect the performance of a desktop computer? c) Make a brief comparison between MS-Windows and Linux

17) What do you understand by the concept of a data centre? What types of organizations would you expect to have a data centre? At what levels are redundancies created in a data centre? Explain with the help of examples.

18) Introduce the organization of your field study and discuss the following aspects of this study: a. Major Applications b. Major Databases c. IT Manpower

19) What do you understand by the term computer system setup? Discuss its major elements? Discuss its relevance in planning the computerization of an organization.

20)Discuss the significance of Data Tables in data processing activity taking the following examples: a. Sales Accounting System for a departmental store b. Leave accounting system for a 200 employee company.

21)Trace the growth of Indian IT Industry and comment on its status in view of the global scenario. How has the implementation of IT based solutions grown in Indian Economy?

22) Utility of MS Office facilities: a. Goal Seeking b. Mail Merge c. Data Filtering d. Styles

23)Short Note: a. Internet and business b. I-P-O Cycle c. Online data processing d. Software Development Process

24)a) What do you understand by the term Operating System? b) What are the essentials of an Operating System? c) Draw a comparison between any two operating systems. d) What are Applications? And how do they differ from System Software?

25)What are the advantages of Computer Networks? How do these help in managerial decision making?

26) Discuss various phases of the development of modern computing. How has this affected the way business is conducted?

27)At what levels are redundancies created in a data center? Explain with the help of examples.

28)What is the difference between Primary and Secondary storage? Use example of a financial application to illustrate your answer.

29) Why are certain types of databases called relational DBs?

30)What do you understand by the term I-P-O cycle? How does it help understand the functioning of Data Processing Systems? Illustrate with the help of a few examples.

31) Discuss the relevance of master and transaction data tables in data processing activity taking the following examples: a. Inventory Accounting Systems for a Hospital. b. Payroll accounting system for a 200 employee company.

32)Explain the following MS Office commands:

a. Goal Seek analysis b. Track Changes c. Name range d. IF function

33)Short notes on: a. Indian Computing environment b. World Wide Web c. Embedded Systems d. URL e. Internet and Business f. Software Development Process

What do you mean by SDLC? Steps in SDLC. Errors encountered while developing software?

Software Development Life Cycle (SDLC) is the process of building an application through distinct phases. The five phases are: Requirement Analysis, Design, Coding, Testing and Maintenance. The SDLC model is also known as the Classic Life Cycle Model, the Linear Sequential Model, or the Waterfall Method. This model has the following activities.

1. System/Information Engineering and Modeling
Work begins by establishing the requirements for all system elements and then allocating some subset of these requirements to software. This system view is essential when the software must interface with other elements such as hardware, people and other resources. If the system is not in place, it should be engineered and put in place. In some cases, to extract the maximum output, the system should be re-engineered and spruced up. Once the ideal system is engineered or tuned, the development team studies the software requirements for the system.

2. Software Requirement Analysis
This process is also known as the feasibility study. In this phase, the development team visits the customer and studies their system, investigating the need for possible software automation. By the end of the feasibility study, the team furnishes a document that holds the specific recommendations for the candidate system, along with personnel assignments, costs, project schedule and target dates. The requirement gathering process is then intensified and focused specifically on software. To understand the nature of the program(s) to be built, the system engineer or "analyst" must understand the information domain for the software, as well as the required function, behavior, performance and interfacing. The essential purpose of this phase is to find the need and to define the problem that needs to be solved.

3. System Analysis and Design
In this phase, the software's overall structure and its nuances are defined. In terms of client/server technology, the number of tiers needed for the package architecture, the database design, the data structure design and so on are all defined in this phase. A software development model is thus created. Analysis and design are crucial in the whole development cycle: any glitch in the design phase can be very expensive to fix at a later stage of development, so much care is taken during this phase. The logical system of the product is developed in this phase.

4. Code Generation

The design must be translated into a machine-readable form; the code generation step performs this task. If the design has been done in sufficient detail, code generation can be accomplished without much complication. Programming tools like compilers, interpreters and debuggers are used to generate the code. Different high-level programming languages like C, C++, Pascal and Java are used for coding; the right programming language is chosen according to the type of application.

5. Testing
Once the code is generated, software testing begins. Different testing methodologies are available to unravel the bugs committed during the previous phases. Various testing tools and methodologies already exist, and some companies build their own testing tools, tailor-made for their own development operations. Types of testing include: data set testing, unit testing, system testing, integration testing, black box testing, white box testing, regression testing, automation testing, user acceptance testing, performance testing, and operations and maintenance testing.

6. Maintenance
The software will definitely undergo change once it is delivered to the customer. There can be many reasons for this change: change could happen because of unexpected input values into the system, and changes in the system environment could directly affect the software's operation. The software should therefore be developed to accommodate changes that could happen during the post-implementation period.

Basic Roles and Responsibilities

In the analysis phase, people from the company and from the client or customer side participate in a meeting called the kickoff meeting. The client provides the information, and on the company side the Business Analyst gathers the information from the client. The Business Analyst is expected to be strong in domain skills, technical skills and functional skills. From the gathered information, the Business Analyst prepares the BRS (Business Requirement Specification) document; later the same document, refined, is also called the FRD (Functional Requirement Document). The Project Manager prepares the SRS (System Requirement Specification) document, and the Test Lead prepares the Test Plan document. All these documents are then verified by the Quality Analyst, who checks for gaps or loopholes by mapping the client's specification document against the Business Requirement Specification document. The Business Analyst is again involved to prepare the Use Case document, and all these documents are then maintained as the baseline documents; a baseline document is also called a stable document.

But It Doesn't Work! The waterfall model is well understood, but it's not as useful as it once was. In a 1991 Information Center Quarterly article, Larry Runge says that SDLC "works very well when we are automating the activities of clerks and accountants. It doesn't work nearly as well, if at all, when building systems for knowledge workers -- people at help desks, experts trying to solve problems, or executives trying to lead their company into the Fortune 100." Another problem is that the waterfall model assumes that the only role for users is in specifying requirements, and that all requirements can be specified in advance. Unfortunately, requirements grow and change throughout the process and beyond, calling for considerable feedback and iterative consultation. Thus many other SDLC models have been developed. The fountain model recognizes that although some activities can't start before others -- such as you need a design before you can start coding -- there's a considerable overlap of activities throughout the development cycle. The spiral model emphasizes the need to go back and reiterate earlier stages a number of times as the project progresses. It's actually a series of short waterfall cycles, each producing an early prototype representing a part of the entire project.

This approach helps demonstrate a proof of concept early in the cycle, and it more accurately reflects the disorderly, even chaotic evolution of technology.

Build and fix is the crudest of the methods: write some code, then keep modifying it until the customer is happy. Without planning, this is very open-ended and can be risky.

In the rapid prototyping (sometimes called rapid application development) model, initial emphasis is on creating a prototype that looks and acts like the desired product, in order to test its usefulness. The prototype is an essential part of the requirements determination phase, and may be created using tools different from those used for the final product. Once the prototype is approved, it is discarded and the "real" software is written.

The incremental model divides the product into builds, where sections of the project are created and tested separately. This approach is likely to find errors in user requirements quickly, since user feedback is solicited for each stage and code is tested soon after it is written.

Strengths and weaknesses of SDLC
Few people in the modern computing world would use a strict waterfall model for their Systems Development Life Cycle (SDLC), as many modern methodologies have superseded this thinking. Some will argue that the SDLC no longer applies to models like Agile computing, but it is still a term widely used in technology circles. The SDLC practice has advantages in traditional models of software development that lend themselves to a structured environment. A disadvantage of the SDLC methodology appears where there is a need for iterative development (e.g. web development or e-commerce), in which stakeholders need to review the software being designed on a regular basis. Instead of viewing SDLC from a strength-or-weakness perspective, it is far more important to take the best practices from the SDLC model and apply them to whatever may be most appropriate for the software being designed.

A comparison of the strengths and weaknesses of SDLC

Strengths:
Control.
Monitors large projects.
Detailed steps.
Evaluates costs and completion targets.
Documentation.
Well-defined user input.
Ease of maintenance.
Development and design standards.
Tolerates changes in MIS staffing.

Weaknesses:
Increased development time.
Increased development cost.
Systems must be defined up front.
Rigidity.
Hard to estimate costs; project overruns.
User input is sometimes limited.

Errors while developing software
o Not understanding the users' needs.
o Lack of user input, or not even asking.
o Underestimating the size of the project.
o Rushing through the planning stage, or avoiding planning altogether.
o Not testing early enough, often, or at all.
o Choosing the "cool" methodology of the moment instead of one that has worked in the past, or not using a methodology at all.
o Letting a software developer run the software development project.
o A bored, unmotivated team.
o Planning on catching up later.
o No source control.
o Deciding to switch your development tools when you're already into the project.
o Allowing feature creep; don't entertain just any request.
o Omitting necessary tasks to shorten the project plan.
o Insufficient management controls in the development project.
o Adding people at the end of the project to speed things up.
o No unit testing.
o Lack of error handling.
o Typos.
o No naming style or code conventions.
o Using global variables everywhere.
o Not asking for help at all during the software development process.
o Not commenting your code.
o Hoarding all information to yourself.
o Performing database operations at the application layer instead of the database layer. Not only does this put the processing load on your application instead of your database server, it also puts your database at risk of data integrity issues and bad data.
o Not validating your data.
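The last error listed, not validating your data, is easy to illustrate. The sketch below checks a payment transaction record before it would be posted; the field names and validation rules are illustrative assumptions, not from the text.

```python
# Minimal input-validation sketch: reject bad transaction records
# before they reach the master file. Field names are illustrative.

def validate_payment(record):
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    # the key field must be a numeric account number
    if not str(record.get("customer_no", "")).isdigit():
        errors.append("customer_no must be numeric")
    # a payment amount of zero or less makes no sense
    if record.get("amount_paid", 0) <= 0:
        errors.append("amount_paid must be positive")
    return errors

print(validate_payment({"customer_no": "123", "amount_paid": 50.0}))  # []
print(validate_payment({"customer_no": "A1", "amount_paid": -5}))
# ['customer_no must be numeric', 'amount_paid must be positive']
```

Validating at entry time keeps bad data out of the master file, which is far cheaper than cleaning it up later.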

PROGRAMMING
Computer programming (often shortened to programming or coding) is the process of writing, testing, debugging/troubleshooting, and maintaining the source code of computer programs. This source code is written in a programming language. The code may be a modification of an existing source or something completely new. The purpose of programming is to create a program that exhibits a certain desired behavior (customization). The process of writing source code often requires expertise in many different subjects, including knowledge of the application domain, specialized algorithms and formal logic.

Overview
Within software engineering, programming (the implementation) is regarded as one phase in a software development process. There is an ongoing debate on the extent to which the writing of programs is an art, a craft or an engineering discipline. In general, good programming is considered to be the measured application of all three, with the goal of producing an efficient and evolvable software solution. The discipline differs from many other technical professions in that programmers, in general, do not need to be licensed or pass any standardized (or governmentally regulated) certification tests in order to call themselves "programmers" or even "software engineers"; however, representing oneself as a "Professional Software Engineer" without a license from an accredited institution is illegal in many parts of the world. Another ongoing debate is the extent to which the programming language used in writing computer programs affects the form that the final program takes: different language patterns yield different patterns of thought.

Modern programming

1) Quality requirements
Whatever the approach to software development may be, the final program must satisfy some fundamental properties. The following are among the most relevant:
a) Efficiency/performance: the amount of system resources a program consumes (processor time, memory space, slow devices such as disks, network bandwidth and, to some extent, even user interaction): the less, the better. This also includes the correct disposal of resources, such as cleaning up temporary files, and the absence of memory leaks.
b) Reliability: how often the results of a program are correct. This depends on the conceptual correctness of algorithms and the minimization of programming mistakes, such as mistakes in resource management (e.g., buffer overflows and race conditions) and logic errors (such as division by zero).
c) Robustness: how well a program anticipates problems not due to programmer error. This includes situations such as incorrect, inappropriate or corrupt data; unavailability of needed resources such as memory, operating system services and network connections; and user error.
d) Usability: the ease with which a person can use the program for its intended purpose, or in some cases even unanticipated purposes.
e) Portability: the range of computer hardware and operating system platforms on which the source code of a program can be compiled/interpreted and run.
f) Maintainability: the ease with which a program can be modified by its present or future developers in order to make improvements or customizations, fix bugs and security holes, or adapt it to new environments. This quality may not be directly apparent to the end user, but it can significantly affect the fate of a program over the long term.

2) Algorithmic complexity

The academic field and the engineering practice of computer programming are both largely concerned with discovering and implementing the most efficient algorithms for a given class of problem. For this purpose, algorithms are classified into orders using so-called Big O notation, O(n), which expresses resource use, such as execution time or memory consumption, in terms of the size of an input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities, and use this knowledge to choose the algorithms best suited to the circumstances.

The first step in most formal software development projects is requirements analysis, followed by modeling, implementation, and failure elimination (debugging). There exist many differing approaches to each of these tasks:
a) One approach popular for requirements analysis is Use Case analysis. Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-Driven Architecture (MDA). The Unified Modeling Language (UML) is a notation used for both OOAD and MDA.
b) A similar technique used for database design is Entity-Relationship Modeling (ER Modeling). Implementation techniques include imperative languages (object-oriented or procedural), functional languages, and logic languages.
c) Measuring language usage: it is very difficult to determine which modern programming languages are the most popular. Some languages are very popular for particular kinds of applications (e.g., COBOL is still strong in the corporate data center, often on large mainframes; FORTRAN in engineering applications; scripting languages in web development; and C in embedded applications), while some languages are regularly used to write many different kinds of applications. Methods of measuring programming language popularity include counting the number of job advertisements that mention the language, the number of books sold that teach the language (this overestimates the importance of newer languages), and estimates of the number of existing lines of code written in the language (this underestimates the number of users of business languages such as COBOL).

3) Debugging
Debugging is a very important task in the software development process, because an incorrect program can have significant consequences for its users. Some languages are more prone to certain kinds of faults because their specification does not require compilers to perform as much checking as other languages do. Use of a static analysis tool can help detect some possible problems. Debugging is often done with IDEs like Visual Studio, NetBeans, and Eclipse. Standalone debuggers like gdb are also used; these often provide less of a visual environment, usually using a command line.

4) Programming languages
Different programming languages support different styles of programming (called programming paradigms). The choice of language used is subject to many considerations, such as company policy, suitability to the task, availability of third-party packages, or individual preference. Trade-offs from this ideal involve finding enough

programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute. The details look different in different languages, but a few basic instructions appear in just about every language:
o input: get data from the keyboard, a file, or some other device.
o output: display data on the screen, or send data to a file or other device.
o arithmetic: perform basic arithmetical operations like addition and multiplication.
o conditional execution: check for certain conditions and execute the appropriate sequence of statements.
o repetition: perform some action repeatedly, usually with some variation.
Many computer languages provide a mechanism to call functions provided by libraries. Provided the functions in a library follow the appropriate runtime conventions (e.g., the method of passing arguments), these functions may be written in any other language.

5) Programmers
Computer programmers are those who write computer software. Their jobs usually involve: coding, compilation, documentation, integration, maintenance, requirements analysis, software architecture, software testing, specification and debugging.
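The five basic instruction types above can be seen together in a few lines of Python; the values and the 50-point threshold are made-up illustrations.

```python
# All five basic instruction types in one small program.
# The data and the threshold of 50 are illustrative values.

def describe(values):
    total = 0
    for v in values:               # repetition: loop over the inputs
        total += v                 # arithmetic: accumulate the sum
    avg = total / len(values)      # arithmetic: compute the average
    if avg >= 50:                  # conditional execution
        label = "high"
    else:
        label = "low"
    return avg, label

values = [40, 55, 70]              # input: stands in for keyboard/file input
avg, label = describe(values)
print(avg, label)                  # output: prints 55.0 high
```

The same five ingredients appear in COBOL, C or Java; only the syntax differs.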

BASIC FUNDAMENTALS OF PROGRAMMING


Programming is like any other language, say, English. In English we have: character sets (A-Z, 0-9, special characters) - words - statements (clauses/phrases) - sentences - paragraphs - parts 1, 2, 3 - chapters - books - shelf - library. Similarly, each programming language has: a character set - words - clauses - sentences - paragraphs - sections - program - module (say, a Payroll module) - sub-systems (say, a Compensation sub-system) - systems (HR systems) - software base.

Each system will have a number of programs, and each program has its own objective, scope, I-P-O, processing logic, etc. A program written in a language that human beings can edit and understand is called the source code/program; what the machine understands is the object code/program. The source code (editable program) is translated into the object code (executable program) by an assembler, translator/interpreter or compiler:
o Translator: analyses the program and eliminates syntax errors.
o Interpreter: used by xBase-style ("3 1/2 generation") languages; interprets each line of code one by one.
o Compiler: compiles the typed code into a form the system can understand.

Developing software - the development life cycle:
o Start with the objective/scope.
o Analyse the I-P-O.
o Processing logic: create a pseudo code/flow chart (prototyping).
o Code using a programming language: the source code/program.
o Translate (an error here is called a Type 1 error; go back to the last step and debug). If there is no error, move to the next step.
o Create the object program/code.
o Test the object program (an error here is a Type 2 error; debug again). If OK, the program is ready to use.

Error types - Type 1: syntax error. Type 2: semantics (logical mistake).

Processing logic: each program works as follows:
o Initiation of the program
o Process
o Termination
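The two error types can be demonstrated concretely. The sketch below uses Python as a stand-in language: a Type 1 (syntax) error is caught when the source is translated/compiled, while a Type 2 (semantic) error translates cleanly but produces wrong results and is only found by testing.

```python
# Type 1: a syntax error, caught at translation (compile) time.
try:
    compile("total = 1 +", "<src>", "exec")   # incomplete expression
except SyntaxError:
    print("Type 1 error: syntax")             # prints: Type 1 error: syntax

# Type 2: a logic error; the code compiles and runs but is wrong.
def average(a, b):
    return a + b / 2        # bug: missing parentheses around (a + b)

print(average(4, 6))        # prints 7.0, not the intended 5.0
```

This is why translation errors send you back one step (fix the source), while test failures send you back to the processing logic.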

Programming: a set of commands in a chosen programming language, bunched up in a sequential manner so as to build up the processing logic for a business problem. It makes use of the following constructs:
o A sequence
o A selection
o A loop
The exact syntax will depend upon the available phrases, clauses, etc. of the chosen language and its programming rules.

Traditional flow charts (draw these as flow charts):
o A sequence:
Total = A + B + C
Discount = Total * 0.12
NET = Total - Discount

o A selection (If/Then/Else):
If Salary > 100 then Bonus = Yes; else Bonus = No

o A loop: make a flow chart showing a loop.

Draw a complete flow chart with a sequence, a selection and a loop, assuming a salary-and-bonus example. This flow chart is called the program/logic flow chart.
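The three constructs drawn in the flow charts above can also be sketched as code. Python is used here purely for illustration; the discount rate and the 100-salary bonus threshold come from the examples above, while the sample values are made up.

```python
# Program-logic sketch combining the three flow-chart constructs:
# a sequence (discount computation), a selection (bonus rule),
# and a loop (over a list of salaries).

def net_after_discount(a, b, c):
    # Sequence: statements executed one after another
    total = a + b + c
    discount = total * 0.12
    net = total - discount
    return net

def bonus_flags(salaries):
    flags = []
    for salary in salaries:        # Loop
        if salary > 100:           # Selection (If/Then/Else)
            flags.append("Yes")
        else:
            flags.append("No")
    return flags

print(net_after_discount(100, 200, 300))  # 528.0
print(bonus_flags([90, 150]))             # ['No', 'Yes']
```

A program/logic flow chart for this code would show the three sequence boxes, one decision diamond, and one loop-back arrow.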

Another flow chart, called the system flow chart, would also be required; it shows which data tables and programs are affected and which programs update which tables.

DIFFERENCE BETWEEN Master and Transaction Files

A file is an organized collection of data. The term file may be used in reference to a database, where a file is equivalent to a table, although any data can be stored in a file. When data processing takes place, generally two types of files are used: master files and transaction files.

Master file: contains records of permanent data. Master files are created at the time you set up your business: if you wish to computerize your company, you create the master files by taking your manual file folders and keying the data onto storage devices, for example the name of a customer, date of birth, gender, etc.; these are permanent data. Master files contain descriptive data, such as name and address, as well as summary information, such as amount due and year-to-date sales.

The following are the kinds of fields that make up a typical master record in a business information system (there can be many more fields, depending on the organization). The "key" fields are the ones that are generally indexed for matching against transaction records, as well as for fast retrieval in queries.

EMPLOYEE MASTER RECORD
key: Employee account number
key: Name (last)
Name (first)
Address, city, state, zip
Hire date
Birth date
Title
Job class
Pay rate
Year-to-date gross pay

CUSTOMER MASTER RECORD
key: Customer account number
key: Name
Bill-to address, city, state, zip
Ship-to address, city, state, zip
Credit limit
Date of first order
Sales-to-date
Balance due

VENDOR MASTER RECORD
key: Vendor account number
key: Name
Address, city, state, zip
Terms
Quality rating
Shipping method

PRODUCT MASTER RECORD
key: Product number
key: Name
Description
Quantity on hand
Location
Primary vendor
Secondary vendor

Transaction file: the data in transaction files is used to update the master files, which contain the data about the subjects of the organization (customers, employees, vendors, etc.). Transaction files also serve as audit trails and history for the organization. Where before they were transferred to offline storage after some period of time, they are increasingly being kept online for routine analyses.

There is no hard and fast rule to distinguish the two. In application programming, a file used to store transactions prior to posting to a summary or a general ledger, for example, would be called a transaction file. Where data entry is separated from the main actors, that is, done by data entry operators rather than accounting clerks, the transactions entered are stored in a transaction file and subjected to manual and automated checks for accuracy before being committed to a master file.

The following are the kinds of fields that make up a typical transaction record in a business information system (again, there can be many more). The "key" fields are the ones generally indexed for fast matching against the master record.

EMPLOYEE PAYROLL RECORD
key: Employee account number
Today's date
Hours worked

ORDER RECORD
key: Customer account number
Today's date
Quantity
Product number

PAYMENT RECORD
key: Customer number
Today's date
Invoice number
Amount paid
Check number

PURCHASE ORDER
key: Purchase order number
Today's date
Department
Authorizing agent
Vendor account number
Quantity
Product number
Due date
Total cost

WAREHOUSE RECEIPT
key: Purchase order number
key: Invoice number
Today's date
Quantity
Product number
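The master/transaction relationship above can be sketched as a small posting routine: payroll transactions (hours worked) are matched to the employee master on the key field and used to update its summary field. The pay rates, hours and the reject-list handling are illustrative assumptions.

```python
# Sketch of a batch master-file update: employee payroll transactions
# are posted to the employee master, matching on the indexed key
# (employee account number). All values are illustrative.

def update_master(master, transactions):
    """Post each transaction to its master record; return unmatched ones."""
    unmatched = []
    for txn in transactions:
        rec = master.get(txn["emp_no"])          # match on the key field
        if rec is None:
            unmatched.append(txn)                # kept as an audit trail
            continue
        gross = txn["hours"] * rec["pay_rate"]   # this period's pay
        rec["ytd_gross"] += gross                # update the summary field
    return unmatched

master = {
    101: {"name": "Sharma", "pay_rate": 50.0, "ytd_gross": 1000.0},
    102: {"name": "Verma",  "pay_rate": 40.0, "ytd_gross": 800.0},
}
txns = [
    {"emp_no": 101, "hours": 8},
    {"emp_no": 999, "hours": 5},   # no such employee: goes to rejects
]
rejects = update_master(master, txns)
print(master[101]["ytd_gross"])    # 1400.0
print(len(rejects))                # 1
```

Note how the transaction carries only the key and the event data (hours, date), while the descriptive and summary data live in the master.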

I-P-O Cycle (for an insurance company); difference between I-P-O for on-line and batch processing

In the case of an insurance company, the following information would need to be covered in the I-P-O cycle:
o Policyholder's information: name, age, sex, address, etc.
o Policy master: policy no., policy type, branch, policyholder's information, maturity, surrender value, total premiums paid, policy frequency, etc.
o Investments
o Premium payments

For an insurance company, the policy table would be the master table. Another master would be the portfolio master (called the Term Table Reference in the case of LIC of India Ltd.). The key is to identify the "engine" application, which can be developed first because it can grow into various business opportunities. For instance, a PF application is not an engine application, whereas payroll is. Master and transaction tables in the case of, say, LIC:
POLICY MASTER

Data Name                          Data Type   Width   Decimal
POLICY_NO                          N           20      0
FNAME                              C           30      0
LNAME                              C           30      0
GENDER                             C           6       0
DOB                                D           10      0
ADD_1                              C           30      0
ADD_2                              C           30      0
ADD_3                              C           30      0
CELL_NO                            N           12      0
LANDLINE_NO                        N           12      0
OFFICE_CODE                        N           6       0
AGENT_CODE                         N           8       0
TT_REF (Term table ref)            N           5       0
SUM_ASSRD                          N           15      2
INST_AMT                           N           15      2
INST_MO                            N           2       0
LAST_INST                          N           15      2
FUP_INST (First unpaid premium)    N           15      2
BONUS_ACCR                         N           15      2
NOMINEE                            C           30      0
REL_NOMINEE                        C           10      0

POLICY TRANSACTION (for a period)

Data Name                          Data Type   Width   Decimal
TXN_ID                             N           12      0
POLICY_NO                          N           12      0
TXN_DATE                           D           8       0
TXN_MODE                           C           1       0
TXN_AMT                            N           10      2
INST_AMT                           N           10      2
INST_LATE                          N           10      2
TXN_DETAIL                         N           30      0
UPDATE_FLAG                        C           1       0
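The UPDATE_FLAG column suggests a posting routine: unposted POLICY TRANSACTION rows are matched to the POLICY MASTER on POLICY_NO and then flagged as processed. The sketch below assumes one plausible update rule (the premium paid reduces the outstanding dues); the actual fields affected would depend on how LIC defines FUP_INST.

```python
# Sketch of posting POLICY TRANSACTION rows to the POLICY MASTER,
# keyed on POLICY_NO. The update rule and all values are
# illustrative assumptions, not from the source tables.

policy_master = {
    1001: {"FNAME": "Asha", "INST_AMT": 500.0, "DUES": 1500.0},
}

policy_txns = [
    {"TXN_ID": 1, "POLICY_NO": 1001, "TXN_AMT": 500.0, "UPDATE_FLAG": "N"},
]

def post_transactions(master, txns):
    for txn in txns:
        if txn["UPDATE_FLAG"] == "Y":      # already posted; skip
            continue
        rec = master[txn["POLICY_NO"]]     # match on the key
        rec["DUES"] -= txn["TXN_AMT"]      # premium paid reduces dues
        txn["UPDATE_FLAG"] = "Y"           # mark the transaction posted

post_transactions(policy_master, policy_txns)
print(policy_master[1001]["DUES"])       # 1000.0
print(policy_txns[0]["UPDATE_FLAG"])     # Y
```

Because posted rows are flagged rather than deleted, the transaction table doubles as an audit trail for the period.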

I-P-O Cycle for On-Line Transaction Processing (e.g., Railway Inquiry)

(Diagram: keyboard input of transaction data, together with the master data file(s), feeds the PROGRAM, which produces an on-screen inquiry result (soft copy), optionally a report (hard copy), and the updated master data files.)

For a railway inquiry, after one ticket booking the just-updated master becomes out of date again the moment the next transaction is received, so the master is updated continuously, transaction by transaction.

I-P-O Cycle for Batch Processing (e.g., Electricity Billing)

Input: the master data file(s) as on 1/1/2010, plus a transaction data file covering 1/1/2010 to 31/3/2010 (data collected over the billing period).
Process: one PROGRAM run applies the whole batch of transactions.
Output: reports (hard copy) and the updated master data file(s) as on 1/4/2010.

In batch processing the transactions simply accumulate between runs; there is no transaction-by-transaction update of the master.

Difference between On-line and Batch Processing

1. Infrastructure: on-line needs dedicated 24x7 infrastructure; batch can run on shared infrastructure.
2. Master data: on-line keeps the master always up to date; in batch it is up to date only as of the last process run.
3. Suitability: some applications are best suited to on-line processing (e.g., ticket booking), others to batch processing (e.g., NDPL electricity billing); some applications are hybrids of the two.
4. Interaction: on-line is used where there is high customer interaction; batch is used where customer interaction is not required.

In the 1950s, at the start of computerization, everything was batch processing. By the 1970s about 6-7% of processing was on-line; by the 1980s about 40%; by the 1990s, 70-85%; and in 2010, roughly 75-85%. Some batch processing may always be needed.
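The contrast in rows 1-2 of the comparison above can be sketched in a few lines of Python. The account data is made up for illustration; the point is only when the master gets updated:

```python
# Online: the master is updated the moment each transaction arrives.
# Batch: transactions accumulate in a file; the master is updated in one run.

master = {"A1": 100, "A2": 250}          # account -> balance (made-up data)

def online_post(master, account, amount):
    """Update the master immediately (ticket booking, ATM withdrawal)."""
    master[account] += amount

def batch_post(master, txn_file):
    """Apply a whole period's transactions in one run (periodic billing)."""
    for account, amount in txn_file:
        master[account] += amount

online_post(master, "A1", -40)           # master is up to date at once: A1 == 60

txn_file = [("A1", 10), ("A2", -50), ("A1", 5)]   # collected over the period
batch_post(master, txn_file)             # master current only after the run
print(master)                            # {'A1': 75, 'A2': 200}
```

Between batch runs the master is stale, which is exactly the "up to date only till last process run" entry in the table.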

INDIAN IT SCENARIO

IT industry: on a steady growth track

The total revenues for the Indian IT industry were estimated to touch US$ 71.7 billion in 2008-09. The Indian IT industry has been growing at a compound annual growth rate (CAGR) of 27 per cent for the last five years.

India maintains lead in ITeS-BPO

The Indian IT/ITeS sector has matured considerably with:
o expansion into varied verticals
o well differentiated service offerings
o increasing geographic penetration

India's importance among emerging economies, as both a supply and a demand centre, is fuelling further IT/ITeS growth.

IT/ITeS continues to be one of the fastest growing industries in India, and India maintains its leading position as a strategic off-shoring destination for multinationals worldwide.

Indian ITeS-BPO (domestic and exports) revenues are estimated at US$ 14.7 billion, and the sector grew at 18.9 per cent in 2008-09.

India, earlier the primary global off-shoring destination for low-end back-office services, is now emerging as an innovation and research hub. It is expected to continue attracting substantial investment in the sector, with the cost-arbitrage factor expected to prevail for another 10-15 years.

The ITeS segment is expected to leverage the penetration of the IT segment, complementing and completing end-to-end customer requirements with the aid of offshore and onshore service offerings.

INVENTORY MANAGEMENT SYSTEM

Inventory Management = INVENTORY ACCOUNTING (+) INVENTORY CONTROL

ITEM MASTER (As on a date, balance always a part of master)

Data Name          Data Type  Width  Decimal
Item_Code              N        10      0
Item_Description       C        30      0
Units                  N        10      0
MaxLimit               N        10      0
MinLimit               N        10      0
Safety_Stock           N        10      0
EOQ                    N        10      0
ROL                    N        10      0
Location_Code          N         5      0
Category               C        10      0
Balance                N        10      0

ITEM TRANSACTION (for a period)

Data Name          Data Type  Width  Decimal
Item_Code              N        10      0
Txn_Id                 N        10      0
Txn_Date               D         8      0
Txn_Party              C        20      0
Description            C        30      0
Ref_No                 N        10      0
Transaction_Qty        N        10      0
Transaction_Type       C         1      0

Inventory Reports

Status or As-On Reports:
STOCK STATUS REPORT
STOCK VALUATION REPORT
OVERSTOCKED ITEMS REPORT
UNDERSTOCKED ITEMS REPORT
GOODS ON ORDER REPORT
PHYSICAL STOCK VERIFICATION REPORT

For-the-Period Reports:
STOCK REGISTER
CONSUMPTION REPORT
RECEIPT REPORT
COSTING REPORT

STOCK STATUS REPORT
Office:          Date:          Page:
S.No. | Item Code | Item Description | Unit | Stock | Location | Remarks (Under/Over Stock)

STOCK VALUATION REPORT
Office:          Date:          Page:
S.No. | Item Code | Item Description | Unit | Stock | Rate | Location | Remarks
(Sub Totals by group; Total at the foot)

UNDER STOCKED ITEMS REPORT
Office:          Date:          Page:
S.No. | Item Code | Item Description | Unit | Stock | Min Stock | ROL | Remarks
(Sub Totals by group; Total at the foot)

OVER STOCKED ITEMS REPORT
Office:          Date:          Page:
S.No. | Item Code | Item Description | Unit | Stock | Max Stock | Equivalent No. of Days Inventory | Remarks
(Sub Totals by group; Total at the foot)

GOODS ON ORDER REPORT
Office:          Date:          Page:
S.No. | Item Code | Item Description | Unit | Stock | ROL (For Ref Only) | ROQ | Remarks
(Sub Totals by group; Total at the foot)

PHYSICAL STOCK VERIFICATION REPORT
Office:          Date:          Page:
S.No. | Item Code | Item Description | Unit | Stock (Records) | Stock (Physical) | Variance | Location Code
(Sub Totals by group; Total at the foot)

Leave Accounting System

LEAVE MASTER (As on a date; balance always a part of master)

Data Name                 Data Type  Width  Decimal
Emp_Id                        N        10      0
Leave_Cat                     C         3      0
Leave_Bal (gets updated)      N         3      1
Emp_Name                      C        40      0
Emp_Add1                      C        40      0
Emp_Add2                      C        40      0
Emp_AddCity                   C        15      0
Emp_Add_PINCODE               N         6      0
Emp_Workloc                   C        40      0
Emp_DOJ                       D        10      0
Emp_Designation               C        20      0
Leave_CFBal                   N         3      0
Emp_Mgr                       N        10      0
Emp_BU (Code)                 N        10      0
Max_Allowed                   N         3      0

LEAVE TRANSACTION (for a period)

Data Name                            Data Type  Width  Decimal
Txn_Id                                   N        10      0
Emp_Id                                   N        10      0
Leave_Cat                                C         3      0
From Date                                D        10      0
To Date                                  D        10      0
No of days                               N         2      0
Status (Applied/Approved/Rejected)       C         3      0
Approval Date                            D        10      0
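The approval step that updates the leave master can be sketched like this. Field names follow the tables above; the rule that an application is approved only if the balance covers the requested days, and the status codes APP/APR/REJ, are assumptions:

```python
from datetime import date

# Sketch: approve a leave application and post it to the leave master.
# The master is keyed by (Emp_Id, Leave_Cat) since balances are per category.

leave_master = {("E101", "CL"): {"Leave_Bal": 8.0}}

txn = {"Emp_Id": "E101", "Leave_Cat": "CL",
       "From_Date": date(2010, 3, 1), "To_Date": date(2010, 3, 3),
       "Status": "APP"}                                  # Applied

def approve_leave(master, txn):
    """Approve the application if the balance covers it, else reject."""
    days = (txn["To_Date"] - txn["From_Date"]).days + 1  # inclusive of both ends
    bal = master[(txn["Emp_Id"], txn["Leave_Cat"])]
    if bal["Leave_Bal"] >= days:
        bal["Leave_Bal"] -= days
        txn["Status"] = "APR"                            # Approved
    else:
        txn["Status"] = "REJ"                            # Rejected
    return txn["Status"]

print(approve_leave(leave_master, txn), leave_master[("E101", "CL")]["Leave_Bal"])
```

Note the "+ 1": leave from 1/3 to 3/3 is three days, which is why the day count is inclusive of both end dates.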

DATA MINING
Data that has relevance for managerial decisions is accumulating at an incredible rate due to a host of technological advances. Electronic data capture has become inexpensive and ubiquitous as a by-product of innovations such as the internet, e-commerce, electronic banking, point-of-sale devices, bar-code readers, and intelligent machines. Such data is often stored in data warehouses and data marts specifically intended for management decision support. Data mining is a rapidly growing field concerned with developing techniques to help managers make intelligent use of these repositories. Successful applications have been reported in areas such as credit rating, fraud detection, database marketing, customer relationship management, and stock market investment. The field of data mining has evolved from the disciplines of statistics and artificial intelligence.

Definition

A term for the confluence of ideas from statistics and computer science (machine learning and database methods) applied to large databases in science, engineering and business.

Gartner Group: "Data mining is the process of discovering meaningful new correlations, patterns and trends by sifting through large amounts of data stored in repositories, using pattern recognition technologies as well as statistical and mathematical techniques."

Drivers

Market: from focus on product/service to focus on customer

IT: from focus on up-to-date balances to focus on patterns in transactions (data warehouses, OLAP)
Dramatic drop in storage costs, enabling huge databases: e.g. Walmart, 20 million transactions/day on a 10-terabyte database; Blockbuster, 36 million households
Automatic data capture of transactions: bar codes, POS devices, mouse clicks, location data (GPS, cell phones)
Internet: personalized interactions, longitudinal data

Process
1. Develop an understanding of the application and its goals
2. Create a dataset for study (often from a data warehouse)
3. Data cleaning and preprocessing
4. Data reduction and projection
5. Choose the data mining task
6. Choose the data mining algorithms
7. Use the algorithms to perform the task
8. Interpret, and iterate through steps 1-7 if necessary
9. Deploy: integrate into operational systems

SEMMA Methodology
Sample from the data sets; partition into training, validation and test datasets
Explore the data set statistically and graphically
Modify: transform variables, impute missing values
Model: fit models, e.g. regression, classification tree, neural net
Assess: compare models using the partition and test datasets

Data mining proper covers steps 4-8 of the process above.
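The SEMMA "Sample" step, partitioning a dataset into training, validation and test sets, can be sketched in a few lines. The 60/20/20 split and the fixed seed are assumptions for illustration:

```python
import random

# Sketch: random partition of a dataset into training, validation and
# test sets, as in the Sample step of SEMMA (60/20/20 split assumed).

def partition(rows, seed=42, train=0.6, valid=0.2):
    """Shuffle a copy of the rows and cut it into three partitions."""
    rows = rows[:]                        # copy so the caller's list is untouched
    random.Random(seed).shuffle(rows)
    a = int(len(rows) * train)
    b = int(len(rows) * (train + valid))
    return rows[:a], rows[a:b], rows[b:]

data = list(range(100))                   # stand-in for 100 observation rows
tr, va, te = partition(data)
print(len(tr), len(va), len(te))          # 60 20 20
```

Fixing the seed makes the split reproducible, which matters when the Assess step later compares several models on the same partitions.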

Applications
Customer Relationship Management
Target Marketing
Attrition Prediction / Churn Analysis
Fraud Detection
Credit Scoring

Target Marketing
Business problem: use a list of prospects for a direct mailing campaign.
Solution: use data mining to identify the most promising respondents, combining demographic and geographic data with data on past purchase behavior.
Benefit: better response rate, savings in campaign cost.
Example: Fleet Financial Group. Redesigned its customer service infrastructure, including a $38 million investment in a data warehouse and marketing automation. Used logistic regression to predict response probabilities to a home-equity product for a sample of 20,000 customer profiles from a 15-million customer base, and to predict profitable customers as well as customers who would be unprofitable even if they responded.

Churn Analysis: Telcos
Business problem: prevent loss of customers and avoid adding churn-prone customers.
Solution: use neural nets and time series analysis to identify typical patterns of telephone usage of likely-to-defect and likely-to-churn customers.
Benefit: retention of customers, more effective promotions.
Example: IDEA CELLULAR. A churn/customer-profiling system implemented as part of a major custom data warehouse solution. Preventive action is based on customer characteristics: known cases of churning and non-churning customers are used to identify the characteristics significant for churn. Early detection comes from customer profiling based on matching usage patterns against known churn cases.

Fraud Detection
Business problem: fraud increases costs or reduces revenue.
Solution: use logistic regression or neural nets to identify characteristics of fraudulent cases, to prevent fraud in future or prosecute it more vigorously.
Benefit: increased profits by screening out undesirable customers.

Risk Analysis
Business problem: reduce the risk of loans to delinquent customers.
Solution: use credit scoring models based on discriminant analysis to create score functions that separate out risky customers.
Benefit: decrease in the cost of bad debts.

Finance
Business problem: pricing of corporate bonds depends on several factors: the risk profile of the company, seniority of the debt, dividends, prior history, etc.
Solution approach: through data mining, develop more accurate models for predicting prices.

Recommendation Systems
Business opportunity: users rate items on the web (Amazon.com, CDNOW.com, MovieFinder.com). How can information from other users be used to infer ratings for a particular user?
Solution: a technique known as collaborative filtering.
Benefit: increased revenue through cross-selling and up-selling.
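A minimal user-based collaborative filtering sketch, in the spirit of the solution above. The users, items and ratings are made up; a missing rating is predicted from the most similar user who did rate the item:

```python
# User-based collaborative filtering: predict a missing rating from the
# most similar user (cosine similarity over commonly rated items).

ratings = {
    "alice": {"book": 5, "cd": 3, "movie": 4},
    "bob":   {"book": 5, "cd": 3},
    "carol": {"book": 1, "cd": 5, "movie": 2},
}

def similarity(u, v):
    """Cosine similarity over the items both users rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = sum(u[i] ** 2 for i in common) ** 0.5
    nv = sum(v[i] ** 2 for i in common) ** 0.5
    return dot / (nu * nv)

def predict(ratings, user, item):
    """Take the rating of the most similar user who has rated the item."""
    candidates = [(similarity(ratings[user], r), r[item])
                  for name, r in ratings.items()
                  if name != user and item in r]
    return max(candidates)[1] if candidates else None

print(predict(ratings, "bob", "movie"))   # 4 (bob rates like alice, not carol)
```

Real systems average over the k most similar users and weight by similarity; taking the single nearest neighbour keeps the sketch short.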

Clicks to Customers
Business problem: 50% of Dell's clients order their computers through the web, but the retention rate is only 0.5%, i.e. only 0.5% of visitors to Dell's web page become customers.
Solution approach: using the sequence of their clicks, cluster visitors and design the website and interventions to maximize the number of visitors who eventually buy.
Benefit: increased revenue.

Emerging Major Data Mining Applications
Spam filtering
Bioinformatics / Genomics
Medical history data, insurance claims
Personalization of services in e-commerce
RF tags
Security: container shipments, network intrusion detection

Core Concepts
Types of data: numeric (continuous ratio and interval, or discrete) and categorical (ordered and unordered, or binary); need for binning
Overfitting and generalization
Regularization: a penalty for model complexity
Distance, and the curse of dimensionality
Random and stratified sampling; resampling
Loss functions
Typical characteristics of mining data
Standard format is a spreadsheet: row = observation unit, column = variable
Many rows and many columns
Many rows, moderate number of columns (e.g. telephone calls)
Many columns, moderate number of rows (e.g. genomics)
Opportunistic (often a by-product of transactions): not from designed experiments; often has outliers and missing data

Techniques
Supervised techniques
o Classification: k-Nearest Neighbors, Naive Bayes, classification trees, discriminant analysis, logistic regression, neural nets
o Prediction (estimation): regression, regression trees, k-Nearest Neighbors
Unsupervised techniques
o Cluster analysis, principal components
o Association rules, collaborative filtering
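The first supervised technique listed, k-Nearest Neighbors, is simple enough to sketch in full. The tiny two-feature dataset is made up for illustration:

```python
from collections import Counter

# k-Nearest Neighbors classification: label a new point by majority vote
# among the k training points nearest to it (Euclidean distance).

train = [((1.0, 1.0), "low"), ((1.5, 2.0), "low"),
         ((5.0, 5.0), "high"), ((6.0, 5.5), "high"), ((5.5, 6.5), "high")]

def knn_classify(train, x, k=3):
    """Majority vote among the k training points nearest to x."""
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda row: dist(row[0], x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn_classify(train, (5.2, 5.0)))   # high
print(knn_classify(train, (1.2, 1.5)))   # low
```

There is no training phase at all: the "model" is the stored data, which is why k-NN appears under both classification and prediction in the list above.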

IT SOLUTIONS

An IT solution should help achieve:
o Cost decrease, for you, for the customer, and for the overall business
o Speed increase
o Quality increase

For an organization to implement IT (the 5 Cs):
o Conviction
o Courage
o Clarity
o Capability
o Commitment and Continuity

DIFFERENCE BETWEEN Primary and secondary storage devices

Computer data storage, often called storage or memory, refers to computer components, devices, and recording media that retain digital data used for computing for some interval of time. Computer data storage provides one of the core functions of the modern computer, that of information retention. It is one of the fundamental components of all modern computers, and coupled with a central processing unit (CPU, a processor), implements the basic computer model used since the 1940s.

In contemporary usage, memory usually refers to a form of semiconductor storage known as random-access memory (RAM), and sometimes other forms of fast but temporary storage. Similarly, storage today more commonly refers to mass storage: optical discs, forms of magnetic storage like hard disk drives, and other types that are slower than RAM but of a more permanent nature.

Historically, memory and storage were respectively called main memory and secondary storage. The terms internal memory and external memory are also used.

Purpose of storage

Many different forms of storage, based on various natural phenomena, have been invented. So far, no practical universal storage medium exists, and all forms of storage have some drawbacks. Therefore a computer system usually contains several kinds of storage, each with an individual purpose.

A digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 1 or 0. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data.
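The bits-and-bytes representation described above can be seen directly in Python (the two-character sample text is made up):

```python
# Any information reduces to bits; bits are grouped into 8-bit bytes.

ch = "A"
print(ord(ch))                      # 65: the numeric value stored for 'A'
print(format(ord(ch), "08b"))       # 01000001: the same value as one 8-bit byte

text = "Hi"
print(list(text.encode("ascii")))   # [72, 105]: one byte per character
print([format(b, "08b") for b in text.encode("ascii")])
```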

Traditionally the most important part of every computer is the central processing unit (CPU, or simply a processor), because it actually operates on data, performs any calculations, and controls all the other components.

Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior.

In practice, almost all computers use a variety of memory types, organized in a storage hierarchy around the CPU, as a trade-off between performance and cost. Generally, the lower storage sits in the hierarchy, the lesser its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary, tertiary and off-line storage is also guided by cost per bit.

Hierarchy of storage

Primary storage: Primary storage (or main memory or internal memory), often referred to simply as memory, is the only one directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in uniform manner.

Random-access memory (RAM) is small-sized, light, but quite expensive at the same time.

Main memory is directly or indirectly connected to the CPU via a memory bus, which is actually two buses: an address bus and a data bus. The CPU first sends a number through the address bus, called the memory address, that indicates the desired location of data. Then it reads or writes the data itself using the data bus. Additionally, a memory management unit (MMU), a small device between CPU and RAM, recalculates the actual memory address, for example to provide an abstraction of virtual memory or perform other tasks.

As the RAM types used for primary storage are volatile, a computer containing only such storage would not have a source to read instructions from, in order to start the computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from nonvolatile secondary storage to RAM and start to execute it. A non-volatile technology used for this purpose is called ROM, for read-only memory.

Secondary Storage: A typical example of secondary storage is a hard disk drive. Secondary storage (or external memory) differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfers the desired data using an intermediate area in primary storage. Secondary storage does not lose the data when the device is powered down; it is non-volatile. Per unit, it is typically also an order of magnitude less expensive than primary storage. Consequently, modern computer systems typically have an order of magnitude more secondary storage than primary storage, and data is kept there for a longer time.

In modern computers, hard disk drives are usually used as secondary storage. The time taken to access a given byte of information stored on a hard disk is typically a

few thousandths of a second, or milliseconds. By contrast, the time taken to access a given byte of information stored in random access memory is measured in billionths of a second, or nanoseconds. Rotating optical storage devices, such as CD and DVD drives, have longer access times. With disk drives, once the disk read/write head reaches the proper placement and the data of interest rotates under it, subsequent data on the track are very fast to access. As a result, in order to hide the initial seek time and rotational latency, data are transferred to and from disks in large contiguous blocks.

Some other examples of secondary storage technologies are: flash memory (e.g. USB flash drives or keys), floppy disks, magnetic tape, paper tape, punched cards, standalone RAM disks, and Iomega Zip drives.

The secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, providing also additional information (called metadata) describing the owner of a certain file, the access time, the access permissions, and other information.

Tertiary storage

Tertiary storage, or tertiary memory, provides a third level of storage. Typically it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; this data is often copied to secondary storage before use. It is primarily used for archival of rarely accessed information, since it is much slower than secondary storage (e.g. 5-60 seconds vs. 1-10 milliseconds). This is primarily useful for extraordinarily large data stores accessed without human operators. Typical examples include tape libraries and optical jukeboxes.

When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library.

Off-line storage

Off-line storage is computer data storage on a medium or a device that is not under the control of a processing unit. The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction.

Off-line storage is used to transfer information, since the detached medium can be easily physically transported. Additionally, in case a disaster, for example a fire, destroys the original data, a medium in a remote location will probably be unaffected, enabling disaster recovery. Off-line storage increases general information security, since it is physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if the information stored for archival purposes is accessed seldom or never, off-line storage is less expensive than tertiary storage.

In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are most popular, and to a much lesser extent removable hard disk drives. In enterprise use, magnetic tape is predominant. Older examples are floppy disks, Zip disks, and punched cards.

PRIMARY STORAGE DEVICES
1. These devices are temporary (volatile).
2. These devices are expensive.
3. These devices are faster, and therefore expensive.
4. These devices have less storage capacity.
5. These devices refer to RAM.

SECONDARY STORAGE DEVICES
1. These devices are permanent (non-volatile).
2. These devices are cheaper.
3. These devices connect to the computer via cables or buses, and are thus slower and cheaper.
4. These devices have high storage capacity.
5. These devices refer to drives such as the FDD (floppy disk drive).
DIFFERENCE BETWEEN System and application software

Software is divided into two main categories: system software and application software. System software includes all the programs designed by the manufacturers to run the system; a computer cannot run without system software installed. System software includes the operating system (Windows, DOS, etc.). Application software, in contrast, consists of user-oriented applications and user-defined programs, for example Microsoft PowerPoint, Word, Excel, games, Notepad and other applications. In application software the functions and instructions are written by (or for) the user, whereas in system software the instructions are pre-coded by the manufacturer. Both are integral parts of a computer system.

The software hierarchy is:

The software hierarchy is:
End User
Application program
Utilities
Operating System
Hardware

Both Utilities and the Operating System are system software.

Actually, system software is any computer software that manages and controls computer hardware so that application software can perform a task. Operating systems, such as Microsoft Windows, Mac OS X or Linux, are prominent examples of system software. System software contrasts with application software, which are programs that enable the end-user to perform specific, productive tasks, such as word processing or image manipulation. System software performs tasks like transferring data from memory to disk, or rendering text onto a display device. Specific kinds of system software include loading programs, operating systems, device drivers, programming tools, compilers, assemblers, linkers, and utility software. Software libraries that perform generic functions also tend to be regarded as system software, although the dividing line is fuzzy; while a C runtime library is generally agreed to be part of the system, an OpenGL or database library is less obviously so. If system software is stored on non-volatile memory such as integrated circuits, it is usually termed firmware.

Application software, by contrast, is the subclass of computer software that employs the capabilities of a computer directly and thoroughly for a task that the user wishes to perform. This should be contrasted with system software, which integrates a computer's various capabilities but typically does not directly apply them in the performance of tasks that benefit the user. In this context the term application refers to both the application software and its implementation. A simple analogy in the world of hardware is the relationship of an electric light bulb (an application) to an electric power generation plant (a system): the power plant merely generates electricity, which is of no real use until harnessed to an application like the electric light that performs a service benefiting the user.

Typical applications include:
industrial automation
business software
video games
quantum chemistry and solid state physics software
telecommunications (i.e., the Internet and everything that flows on it)
databases
educational software
medical software
military software
molecular modeling software
image editing
spreadsheets
simulation software
word processing
decision-making software

Multiple applications bundled together as a package are sometimes referred to as an application suite. Microsoft Office and OpenOffice.org, which bundle together a word processor, a spreadsheet, and several other discrete applications, are typical examples. The separate applications in a suite usually have a user interface with some commonality, making it easier for the user to learn and use each application. In some types of embedded systems, the application software and the operating system software may be indistinguishable to the user, as in the case of software used to control a VCR, DVD player or microwave oven.

Difference between Front-End and Back-end computing

Back-end storage is the most basic way to understand the back end: when a picture is to be displayed on a web page or in an online auction, it sits waiting on a back-end server until the file is requested, and only for the moment it is viewed does it appear on the front end. Front-end and back-end are generalized terms that refer to the initial and the end stages of a process. The front-end is responsible for collecting input in various forms from the user and processing it to conform to a specification the back-end can use. The front-end is thus a kind of interface between the user and the back-end.

In software architecture there are many layers between the hardware and end-user. Each can be spoken of as having a front- and back-end. The "front" is an abstraction, simplifying the underlying component by providing a user-friendly interface.

In software design, the separation of software systems into "front-ends" and "backends" simplifies development and separates maintenance.

For major computer subsystems, the front-end faces the user and the back-end launches the programs of the operating system in response.

Using the CLI (command-line interface) requires the acquisition of special terminology and the memorization of commands, so a GUI (graphical user interface) often acts as a front-end desktop environment instead.

In compilers, the front-end translates a computer programming source language into an intermediate representation, and the back-end works with the internal representation to produce code in a computer output language. The back-end usually optimizes to produce code that runs faster. The front-end/back-end distinction can separate the parser section that deals with source code and the back-end that does code generation and optimization.

In speech synthesis, the front-end refers to the part of the synthesis system that converts the input text into a symbolic phonetic representation, and the back-end converts the symbolic phonetic representation into actual sounds.

In the context of WWW applications, a mediator is a service that functions simultaneously as a server on its front end and as a client on its back end.

How is on-line processing different from batch processing? Illustrate different characteristics of these two types of processing systems and their suitability for different areas of applications. Give two examples of each.


Batch processing is the execution of a series of programs ("jobs") on a computer without manual intervention. Batch jobs are set up so they can be run to completion without manual intervention, so all input data is preselected through scripts or command-line parameters. This is in contrast to "online" or interactive programs, which prompt the user for such input. A program takes a set of data files as input, processes the data, and produces a set of output data files. This operating environment is termed "batch processing" because the input data are collected into batches of files and are processed in batches by the program.

Benefits

Batch processing has these benefits:
It allows sharing of computer resources among many users and programs.
It shifts the time of job processing to when the computing resources are less busy.
It avoids idling the computing resources with minute-by-minute manual intervention and supervision.
By keeping the overall rate of utilization high, it better amortizes the cost of a computer, especially an expensive one.

History

Batch processing has been associated with mainframe computers since the earliest days of electronic computing in the 1950s. Because such computers were enormously costly, batch processing was the only economically viable way to use them. In those days, interactive sessions with either text-based computer terminal interfaces or graphical user interfaces were not widespread, and initially computers were not even capable of having multiple programs loaded into main memory.

Batch processing has grown beyond its mainframe origins and is now frequently used in UNIX environments and Microsoft Windows too. UNIX systems use shells and other scripting languages; DOS systems use batch files powered by COMMAND.COM; Microsoft Windows has cmd.exe, Windows Script Host and the more advanced Windows PowerShell.

Modern Systems

Despite their long history, batch applications are still critical in most organizations. While online systems are now the norm wherever immediate interaction is desired, they are not well suited to high-volume, repetitive tasks. Therefore, even new systems usually contain a batch application for cases such as updating information at the end of the day, generating reports, and printing documents. Modern batch applications make use of batch frameworks such as Spring Batch, written for Java, to provide the fault tolerance and scalability required for high-volume processing. To ensure high-speed processing, batch applications are often integrated with grid computing solutions to partition a batch job over a large number of processors.

Common batch processing usage

Data processing: A typical batch processing procedure is end-of-day (EOD) reporting, especially on mainframes. Historically, systems were designed to have a batch window in which online subsystems were turned off and system capacity was used to run jobs common to all data (accounts, users or customers) on the system. In a bank, for example, EOD jobs include interest calculation, generation of reports and data sets for other systems, printing (statements), and payment processing.

Printing

A popular computerized batch processing procedure is printing. This normally involves the operator selecting the documents to be printed and indicating to the batch printing software when and where they should be output, along with the priority of the print job. The job is then sent to the print queue, from where the printing daemon sends it to the printer.

Databases Batch processing is also used for efficient bulk database updates and automated transaction processing, as contrasted to interactive online transaction processing (OLTP) applications.

Images Batch processing is often used to perform various operations with digital images. There exist computer programs that let one resize, convert, watermark, or otherwise edit image files.

Converting: Batch processing is also used for converting a number of computer files from one format to another. This makes files portable and versatile, especially for proprietary and legacy formats for which viewers are not easy to come by.
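A batch conversion run of this kind can be sketched in a few lines: every CSV file in a folder is converted to JSON with no manual intervention. The folder layout and file names are made up for illustration:

```python
import csv
import json
import pathlib
import tempfile

# Batch format conversion: convert each .csv file in a folder to .json.

def convert_all(folder):
    """Convert every .csv in `folder` to a .json file alongside it."""
    converted = []
    for path in sorted(pathlib.Path(folder).glob("*.csv")):
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))      # each row becomes a dict
        out = path.with_suffix(".json")
        out.write_text(json.dumps(rows))
        converted.append(out.name)
    return converted

# Usage: set up one small CSV in a temporary folder, then run the batch.
with tempfile.TemporaryDirectory() as d:
    pathlib.Path(d, "orders.csv").write_text("id,qty\n1,5\n2,3\n")
    print(convert_all(d))                       # ['orders.json']
```

All the input is preselected by the glob pattern rather than prompted for, which is exactly the batch characteristic described at the start of this section.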

Online transaction processing

Online transaction processing, or OLTP, refers to a class of systems that facilitate and manage transaction-oriented applications, typically for data entry and retrieval transaction processing. OLTP has also been used to refer to processing in which the system responds immediately to user requests. An automatic teller machine (ATM) for a bank is an example of a commercial transaction processing application. The technology is used in a number of industries, including banking, airlines, mail order, supermarkets, and manufacturing. Applications include electronic banking, order processing, employee time clock systems, e-commerce, and eTrading. The most widely used OLTP system is probably IBM's CICS.

Requirements

Online transaction processing increasingly requires support for transactions that span a network and may involve more than one company. For this reason, modern OLTP software uses client/server processing and brokering software that allows transactions to run on different computer platforms in a network. In large applications, efficient OLTP may depend on sophisticated transaction management software (such as CICS) and/or database optimization tactics to facilitate the processing of large numbers of concurrent updates to an OLTP-oriented database. For even more demanding decentralized database systems, OLTP brokering programs can distribute transaction processing among multiple computers on a network. OLTP is often integrated into service-oriented architecture (SOA) and Web services. In short, wherever transactions must be handled as they occur, online processing is required.
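The heart of transaction processing is atomic commit/rollback. A minimal sketch using SQLite (account numbers and amounts invented for illustration): a funds transfer either fully commits or is fully rolled back, so no one ever sees a half-applied update:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100.0), (2, 50.0)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move money between accounts atomically; overdrafts roll back."""
    try:
        with conn:  # BEGIN ... COMMIT, or automatic ROLLBACK on exception
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                         (amount, src))
            (bal,) = conn.execute("SELECT balance FROM accounts WHERE id = ?",
                                  (src,)).fetchone()
            if bal < 0:
                raise ValueError("insufficient funds")
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                         (amount, dst))
    except ValueError:
        pass  # the debit above was rolled back automatically

transfer(conn, 1, 2, 30.0)    # succeeds
transfer(conn, 1, 2, 500.0)   # fails the funds check and rolls back
print(conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall())
# [(70.0,), (80.0,)]
```

This all-or-nothing behaviour is what an ATM withdrawal or airline booking relies on.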

Benefits

Online Transaction Processing has two key benefits: simplicity and efficiency. Reduced paper trails and faster, more accurate forecasts of revenues and expenses are both examples of how OLTP makes things simpler for businesses.

Disadvantages

As with any information processing system, security and reliability are considerations. Online transaction systems are generally more susceptible to direct attack and abuse than their offline counterparts. When organizations choose to rely on OLTP, operations can be severely impacted if the transaction system or database is unavailable due to data corruption, systems failure, or network availability issues. Additionally, like many modern online information technology solutions, some systems require offline maintenance which further affects the cost-benefit analysis.

Another way of analyzing the difference between Online and Batch Processing

Compared to on-line processing, batch processing is much slower. A complete payroll, from balancing to preparation of checks, can be done in three hours by batch processing methods; an on-line application, which handles individual transactions, is measured in seconds. An inquiry into the availability of a seat on an airline flight can be completed in two seconds. There is a fundamental trade-off between serial/sequential processing with magnetic tape and direct on-line processing with magnetic disk. On-line processing provides extremely fast access to relatively small amounts of data on a random basis; batch processing provides an efficient and economical way to process relatively large amounts of information. Many contemporary applications have both batch and on-line processing components.

Advantages of Spreadsheet Programs


A spreadsheet is a computer application that simulates a paper accounting worksheet. It displays multiple cells that together make up a grid of rows and columns, each cell containing either alphanumeric text or numeric values. A spreadsheet cell may alternatively contain a formula that defines how the content of that cell is to be calculated from the contents of any other cell (or combination of cells) each time any cell is updated. Spreadsheets are frequently used for financial information because of their ability to re-calculate the entire sheet automatically after a change to a single cell is made. Spreadsheets have evolved to use powerful programming languages like VBA; specifically, they are functional, visual, and multiparadigm languages.

Many people find it easier to perform calculations in spreadsheets than by writing the equivalent sequential program. This is due to two traits of spreadsheets:

1) They use spatial relationships to define program relationships. Like all animals, humans have highly developed intuitions about spaces and about dependencies between items. Sequential programming usually requires typing line after line of text, which must be read slowly and carefully to be understood and changed.

2) They are forgiving, allowing partial results and functions to work. One or more parts of a program can work correctly even if other parts are unfinished or broken. This makes writing and debugging programs much easier and faster. Sequential programming usually needs every program line and character to be correct for a program to run; one error usually stops the whole program and prevents any result.
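The automatic recalculation that makes spreadsheets so convenient can be sketched in a toy model (all names invented for illustration): a cell holds either a value or a formula, and reading a cell re-evaluates its formula, so dependent cells always reflect the latest inputs.

```python
class Sheet:
    """Toy spreadsheet: cells hold plain values or formulas (callables)."""

    def __init__(self):
        self.cells = {}

    def set(self, name, value):
        self.cells[name] = value

    def get(self, name):
        v = self.cells[name]
        # A formula is re-evaluated every time the cell is read, so it
        # always reflects the current values of the cells it refers to.
        return v(self) if callable(v) else v

s = Sheet()
s.set("A1", 100)
s.set("A2", 200)
s.set("A3", lambda sh: sh.get("A1") + sh.get("A2"))  # like =A1+A2

print(s.get("A3"))  # 300
s.set("A1", 150)    # change one input cell ...
print(s.get("A3"))  # 350  ... and the dependent cell updates
```

Real spreadsheets track dependency graphs and recalculate eagerly, but the effect seen by the user is the same.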

Spreadsheets usually attempt to automatically update cells when the cells on which they depend have been changed. A spreadsheet allows users to enter and calculate numerical data. Using a spreadsheet greatly increases productivity for anyone who needs to manage receipts, create budgets, generate financial reports or even keep track of inventories and similar lists. A spreadsheet combines the features of a general ledger with the flexibility of powerful data analysis and reporting. Here are just a few of the uses for Excel:

Number crunching: Create budgets, analyze survey results, and perform just about any type of financial analysis you can think of.

Creating charts: Create a wide variety of highly customizable charts.

Organizing lists: Use the row-and-column layout to store lists efficiently.

Accessing other data: Import data from a wide variety of sources.

Creating graphics and diagrams: Use Excel AutoShapes to create simple (and not-so-simple) diagrams.

Automating complex tasks: Perform a tedious task with a single mouse click with Excel's macro capabilities.

Evaluation: A spreadsheet is a useful tool for data management. It is easy to use and accessible to many people because it is part of the Microsoft Office software package. Excel spreadsheets, and spreadsheets created with other software, can become complicated quickly when the data being analyzed is very complex in nature. A spreadsheet also has problems dealing with multiple variables describing the same subject. When detail becomes too great for Excel to handle effectively, spreadsheets should be abandoned in favor of databases.

Shortcomings

While spreadsheets are a great step forward in quantitative modeling, they have deficiencies, as mentioned below:

1) Spreadsheets have significant reliability problems. Research studies estimate that roughly 94% of spreadsheets deployed in the field contain errors, and 5.2% of cells in unaudited spreadsheets contain errors.

2) The practical expressiveness of spreadsheets is limited. Authors have difficulty remembering the meanings of hundreds or thousands of cell addresses that appear in formulas.

3) Collaboration in authoring spreadsheet formulas is difficult because such collaboration must occur at the level of cells and cell addresses.

4) Some sources advocate the use of specialized software instead of spreadsheets for some applications.

5) Many spreadsheet software products, such as Microsoft Excel (versions prior to 2007) and OpenOffice.org Calc, have a capacity limit of 65,536 rows by 256 columns. This can present a problem for people using very large datasets and may result in lost data.

6) Lack of auditing and revision control. This makes it difficult to determine who changed what and when, which can cause problems with regulatory compliance. Lack of revision control greatly increases the risk of errors due to the inability to track, isolate and test changes made to a document.

7) Lack of security. Generally, if one has permission to open a spreadsheet, one has permission to modify any part of it. This, combined with the lack of auditing above, can make it easy for someone to commit fraud.

8) Because they are loosely structured, it is easy for someone to introduce an error, either accidentally or intentionally, by entering information in the wrong place or expressing dependencies among cells (such as in a formula) incorrectly.

9) The sheer volume of spreadsheets that sometimes exists within an organization, without proper security or audit trails, together with the unintentional introduction of errors and the other items listed above, can become overwhelming to manage. While there are built-in and third-party tools for desktop spreadsheet applications that address some of these shortcomings, awareness and use of these is generally low. A good example of this is that 55% of capital market professionals "don't know" how their spreadsheets are audited; only 6% invest in a third-party solution.

Advantages of Text Processing software system

A word processor (more formally known as document preparation system) is a computer application used for the production (including composition, editing, formatting, and possibly printing) of any sort of printable material.

Word processor may also refer to an obsolete type of stand-alone office machine, popular in the 1970s and 80s, combining the keyboard text-entry and printing functions of an electric typewriter with a dedicated computer for the editing of text.

Although features and design varied between manufacturers and models, with new features added as technology advanced, word processors for several years usually featured a monochrome display and the ability to save documents on memory cards or diskettes. Later models introduced innovations such as spell-checking programs, increased formatting options, and dot-matrix printing.

Word processors are descended from early text formatting tools (sometimes called text justification tools, from their only real capability). Word processing was one of the earliest applications for the personal computer in office productivity.

Although early word processors used tag-based markup for document formatting, most modern word processors take advantage of a graphical user interface providing some form of What You See Is What You Get editing. Most are powerful systems consisting of one or more programs that can produce any arbitrary combination of images, graphics and text, the latter handled with type-setting capability.

Microsoft Word is the most widely used computer word processing system; Microsoft estimates over five hundred million people use the Office suite, which includes Word. Open-source applications such as OpenOffice.org Writer are rapidly gaining in popularity. Online word processors such as Google Docs are a relatively new category.

Characteristics

1) Word processing typically implies the presence of text manipulation functions that extend beyond a basic ability to enter and change text, such as automatic generation of:

2) Batch mailings using a form letter template and an address database (also called mail merging);

3) Indices of keywords and their page numbers;

4) Tables of contents with section titles and their page numbers;

5) Tables of figures with caption titles and their page numbers;

6) Cross-referencing with section or page numbers;

7) Footnote numbering;

8) Other word processing functions include "spell checking" (actually checks against wordlists), "grammar checking" (checks for what seem to be simple grammar errors), and a "thesaurus" function (finds words with similar or opposite meanings).

9) Other common features include collaborative editing, comments and annotations, support for images and diagrams, and internal cross-referencing.

10) Most current word processors can calculate various statistics pertaining to a document. These usually include: character count, word count, sentence count, line count, paragraph count, page count; word, sentence and paragraph length; editing time; etc.
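The statistics in item 10 can be approximated in a few lines; this sketch uses simple regular expressions, so its counts are rough (real word processors use more careful rules for sentence and word boundaries):

```python
import re

def stats(text):
    """Rough document statistics like those a word processor reports."""
    words = re.findall(r"\b\w+\b", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    return {
        "characters": len(text),
        "words": len(words),
        "sentences": len(sentences),
        "paragraphs": len(paragraphs),
    }

doc = "Word processors count things. They count words.\n\nAnd paragraphs!"
print(stats(doc))
# {'characters': 64, 'words': 9, 'sentences': 3, 'paragraphs': 2}
```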

Typical usage

Word processors have a variety of uses and applications within the business world, home, and education.

1) Business: Within the business world, word processors are extremely useful tools. Typical uses include legal copies, letters and letterhead, memos, and reference documents.

Businesses tend to have their own format and style for any of these. Thus, versatile word processors with layout editing and similar capabilities find widespread use in most businesses.

2) Education: Many schools have begun to teach typing and word processing to their students, starting as early as elementary school. Typically these skills are developed throughout secondary school in preparation for the business world. Undergraduate students typically spend many hours writing essays. Graduate and doctoral students continue this trend, as well as creating works for research and publication.

3) Home: While many homes have word processors on their computers, word processing in the home tends to be educational, planning or business related (dealing with assignments or work being completed at home) or occasionally recreational, e.g. writing short stories. Some use word processors for letter writing, résumé creation, and card creation. However, many of these home publishing tasks have been taken over by desktop publishing programs specifically oriented toward home use, such as The Print Shop, which is better suited for these types of documents.

Other advantages are: It allows easy and near-instant textual input. There is a large variety of formatting options, including a variety of colour choices and the ability to change font weight (bold/regular), underlining, etc. Images can be dragged and dropped into the application, inserted manually, or imported from the word-art galleries. Lists can be easily created and formatted. Content can be displayed using standard HTML formatting, allowing great compatibility and creating a standardised, easy-to-control layout. It is compatible with a large number of standard document formats, and has extensive help documentation and self-explanatory dialogue boxes and menu options. The biggest advantage of MS Word is its ubiquity: you will find it installed on most business computers and a large percentage of home computers. This also means that there are a lot of people who know how to use it, and if you send someone a Word document they will have the tools to open it.

Few Downsides

High cost, and the fact that it has a lot of extra features that most people won't use and that can sometimes get in the way of getting things done. Importing pictures, tables, graphs, etc. is very unpredictable, and users often can't get an imported graphic to look the way they want.

The grammar/spelling check can be tricky: sometimes it tells you to change a word to something else, and once you do, it asks you to change it back to what you had the first time!

Advantages of Presentation graphics software system

A presentation program is a computer software package used to display information, normally in the form of a slide show. It typically includes three major functions: an editor that allows text to be inserted and formatted, a method for inserting and manipulating graphic images and a slide-show system to display the content.

It is a type of business software that enables users to create highly stylized images for slide shows and reports. The software includes functions for creating various types of charts and graphs and for inserting text in a variety of fonts. Most systems enable you to import data from a spreadsheet application to create the charts and graphs.

Presentation software (sometimes called "presentation graphics") is a category of application program used to create sequences of words and pictures that tell a story or help support a speech or public presentation of information. Presentation software can be divided into business presentation software and more general multimedia authoring tools, with some products having characteristics of both. Business presentation software emphasizes ease- and quickness-of-learning and use. Multimedia authoring software enables you to create a more sophisticated presentation that includes audio and video sequences. Business presentation software usually enables you to include images and sometimes audio and video developed with other tools.

Some very popular presentation software, such as Microsoft's PowerPoint and Lotus's Freelance Graphics, are sold stand-alone or can come as part of office-oriented suites or packages of software. Other popular products include Adobe Persuasion, Astound, Asymetrix Compel, Corel Presentations, and Harvard Graphics. Among the most popular multimedia authoring tools are Macromedia Director and Asymetrix's Multimedia Toolbook. These authoring tools also include presentation capability as well. Most if not all of these products come in both PC and Mac versions.

Database manager

A database is an organized collection of data; a database management system (DBMS) is the software used to store, delete, update and retrieve that data. A database can be limited to a single desktop computer or can be stored on large server machines, like an IBM mainframe. There are various database management systems available in the market, such as Sybase, Microsoft SQL Server, Oracle RDBMS, PostgreSQL and MySQL.

The most commonly used type of database management system is the relational database management system (RDBMS). In an RDBMS, data is stored as tuples (read: rows) in tables.

Database management systems have brought about systematization in data storage, along with data security.

The advantages of the database management systems can be enumerated as under:

1) Warehouse of Information

The database management systems are warehouses of information, where large amount of data can be stored. The common examples in commercial applications are inventory data, personnel data, etc. It often happens that a common man uses a database management system, without even realizing, that it is being used. The best examples for the same would be the address book of a cell phone, digital diaries, etc. Both these equipments store data in their internal database.

2) Defining Attributes The unique data field in a table is assigned a primary key. The primary key helps in the identification of data. It also checks for duplicates within the same table, thereby reducing data redundancy. There are tables, which have a secondary key in addition to the primary key. The secondary key is also called 'foreign key'. The secondary key refers to the primary key of another table, thus establishing a relationship between the two tables.

3) Systematic Storage The data is stored in the form of tables. The tables consist of rows and columns. The primary and secondary key help to eliminate data redundancy, enabling systematic storage of data.

4) Changes to Schema The table schema can be changed, and it is not platform dependent. Therefore, tables in the system can be edited to add new columns and rows without hampering the applications that depend on that particular database.

5) No Language Dependence The database management systems are not language dependent. Therefore, they can be used with various languages and on various platforms.

6) Table Joins The data in two or more tables can be integrated into a single result. This helps reduce the size of the database and also makes retrieval of data easy.

7) Multiple Simultaneous Usage The database can be used simultaneously by a number of users. Various users can retrieve the same data simultaneously. The data in the database can also be modified, based on the privileges assigned to users.

8) Data Security Data is the most important asset. Therefore, there is a need for data security. Database management systems help to keep the data secured.

9) Privileges Different privileges can be given to different users. For example, some users can edit the database, but are not allowed to delete the contents of the database.

10) Abstract View of Data and Easy Retrieval A DBMS enables easy and convenient retrieval of data. A database user can view only the abstract form of the data; the complexities of the internal structure of the database are hidden from them. The data fetched is presented in a user-friendly format.

11)Data Consistency Data consistency ensures a consistent view of data to every user. It includes the accuracy, validity and integrity of related data. The data in the database must satisfy certain consistency constraints, for example, the age of a candidate appearing for an exam should be of number datatype and in the range of 20-25. When the database is updated, these constraints are checked by the database systems.

What if analysis

One of the most appealing aspects of Excel is its ability to create dynamic models. A dynamic model uses formulas that instantly recalculate when you change values in cells to which the formulas refer. When you change values in cells in a systematic manner and observe the effects on specific formula cells, you're performing a type of what-if analysis. What-if analysis is the process of asking such questions as "What if the interest rate on the loan changes to 7.5 percent rather than 7.0 percent?" or "What if we raise our product prices by 5 percent?" If you set up your spreadsheet properly, answering such questions is simply a matter of plugging in new values and observing the results of the recalculation. Excel provides useful tools to assist you in your what-if endeavors.

A What-If Example

Figure shows a spreadsheet that calculates information pertaining to a mortgage loan. The worksheet is divided into two sections: the input cells and the result cells (which contain formulas).

With this worksheet, you can easily answer the following what-if questions: What if I can negotiate a lower purchase price on the property? What if the lender requires a 20-percent down payment? What if I can get a 40-year mortgage? What if the interest rate increases to 7.0 percent?

Types of What-If Analyses

As you may expect, Excel can handle much more sophisticated models than the preceding example. To perform a what-if analysis using Excel, you have three basic options:

Manual what-if analysis: Plug in new values and observe the effects on formula cells.

Data tables: Create a table that displays the results of selected formula cells as you systematically change one or two input cells.

Scenario Manager: Create named scenarios and generate reports that use outlines or pivot tables.

Manual What-If Analysis

This method doesn't require too much explanation. In fact, the example that opens this chapter demonstrates how it's done. Manual what-if analysis is based on the idea that you have one or more input cells that affect one or more key formula cells. You change the value in the input cells and see what happens to the formula cells. You may want to print the results or save each scenario to a new workbook. The term scenario refers to a specific set of values in one or more input cells. This is how most people perform what-if analysis. There is nothing wrong with manual what-if analysis, but you should be aware of some other techniques.

What-if analysis is the process of changing the values in cells to see how those changes will affect the outcome of formulas on the worksheet. For example, you can use a data table to vary the interest rate and term length that are used in a loan to determine possible monthly payment amounts.

Three kinds of what-if analysis tools come with Excel: scenarios, data tables, and Goal Seek. Scenarios and data tables take sets of input values and determine possible results. A data table works only with one or two variables, but it can accept many different values for those variables. A scenario can have multiple variables, but it can accommodate only up to 32 values. Goal Seek works differently from scenarios and data tables in that it takes a result and determines possible input values that produce that result. In addition to these three tools, you can install add-ins that help you perform what-if analysis, such as the Solver add-in. The Solver add-in is similar to Goal Seek, but it can accommodate more variables. You can also create forecasts by using the fill handle and various commands that are built into Excel. For more advanced models, you can use the Analysis ToolPak add-in.

Creating Data Tables

A data table is a range of cells that shows how changing one or two variables in your formulas (formula: A sequence of values, cell references, names, functions, or operators in a cell that together produce a new value. A formula always begins with an equal sign (=).) will affect the results of those formulas. Data tables provide a shortcut for calculating multiple results in one operation and a way to view and compare the results of all the different variations together on your worksheet.

Like scenarios, data tables help you explore a set of possible outcomes. Unlike scenarios, data tables show you all the outcomes in one table on one worksheet. Using data tables makes it easy to examine a range of possibilities at a glance. Because you focus on only one or two variables, results are easy to read and share in tabular form.

A data table cannot accommodate more than two variables. If you want to analyze more than two variables, you should instead use scenarios. Although it is limited to only one or two variables (one for the row input cell and one for the column input cell), a data table can include as many different variable values as you want. A scenario can have a maximum of 32 different values, but you can create as many scenarios as you want.

Data table basics

You can create one-variable or two-variable data tables, depending on the number of variables and formulas that you want to test.

Use a one-variable data table if you want to see how different values of one variable in one or more formulas will change the results of those formulas. For example, you can use a one-variable data table to see how different interest rates affect a monthly mortgage payment by using the PMT function. You enter the variable values in one column or row, and the outcomes are displayed in an adjacent column or row.

A one-variable data table
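A one-variable data table like the one illustrated can be reproduced outside Excel. This sketch implements the standard annuity payment formula that PMT computes (Excel's sign convention ignored) and substitutes each interest rate into it; the loan figures are invented for illustration:

```python
def pmt(rate, nper, pv):
    """Periodic payment on a loan of pv over nper periods at per-period rate."""
    return pv * rate / (1 - (1 + rate) ** -nper)

principal = 200_000   # loan amount (illustrative)
months = 360          # 30-year term

# One-variable data table: one input (annual rate) varies; one formula.
for annual in (0.060, 0.065, 0.070, 0.075):
    print(f"{annual:.1%}  {pmt(annual / 12, months, principal):10.2f}")
```

Each output row is one substitution of the input cell, exactly what Excel builds when you fill in the Column input cell box.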

Two-variable data tables: Use a two-variable data table to see how different values of two variables in one formula will change the results of that formula. For example, you can use a two-variable data table to see how different combinations of interest rates and loan terms will affect a monthly mortgage payment. In the following illustration, cell C2 contains the payment formula, =PMT(B3/12,B4,-B5), which uses two input cells, B3 and B4.

Create a one-variable data table

A one-variable data table has input values that are listed either down a column (column-oriented) or across a row (row-oriented). Formulas that are used in a one-variable data table must refer to only one input cell (input cell: the cell in which each input value from a data table is substituted. Any cell on a worksheet can be the input cell. Although the input cell does not need to be part of the data table, the formulas in data tables must refer to the input cell.).

1) Type the list of values that you want to substitute in the input cell either down one column or across one row. Leave a few empty rows and columns on either side of the values. 2) Do one of the following:

If the data table is column-oriented (your variable values are in a column), type the formula in the cell one row above and one cell to the right of the column of values. The one-variable data table illustration shown in the Overview section is column-oriented, and the formula is contained in cell D2.

If you want to examine the effects of various values on other formulas, type the additional formulas in cells to the right of the first formula.

If the data table is row-oriented (your variable values are in a row), type the formula in the cell one column to the left of the first value and one cell below the row of values.

If you want to examine the effects of various values on other formulas, type the additional formulas in cells below the first formula.

3) Select the range of cells that contains the formulas and values that you want to substitute. Based on the first illustration in the preceding Overview section, this range is C2:D5.

4) On the Data tab, in the Data Tools group, click What-If Analysis, and then click Data Table.

5) Do one of the following:

If the data table is column-oriented, type the cell reference (cell reference: the set of coordinates that a cell occupies on a worksheet; for example, the reference of the cell at the intersection of column B and row 3 is B3) for the input cell in the Column input cell box. Using the example shown in the first illustration, the input cell is B3.

If the data table is row-oriented, type the cell reference for the input cell in the Row input cell box.

Note: After you create your data table, you might want to change the format of the result cells. In the illustration, the result cells are formatted as currency.

Create a two-variable data table

A two-variable data table uses a formula that contains two lists of input values. The formula must refer to two different input cells.

1) In a cell on the worksheet, enter the formula that refers to the two input cells. In the following example, in which the formula's starting values are entered in cells B3, B4, and B5, you type the formula =PMT(B3/12,B4,-B5) in cell C2.

2) Type one list of input values in the same column, below the formula. In this case, type the different interest rates in cells C3, C4, and C5.

3) Enter the second list in the same row as the formula, to its right. Type the loan terms (in months) in cells D2 and E2.

4) Select the range of cells that contains the formula (C2), both the row and column of values (C3:C5 and D2:E2), and the cells in which you want the calculated values (D3:E5). In this case, select the range C2:E5.

5) On the Data tab, in the Data Tools group, click What-If Analysis, and then click Data Table.

6) In the Row input cell box, enter the reference to the input cell for the input values in the row. Type B4 in the Row input cell box.

7) In the Column input cell box, enter the reference to the input cell for the input values in the column. Type B3 in the Column input cell box.

8) Click OK.

Example: A two-variable data table can show how different combinations of interest rates and loan terms will affect a monthly mortgage payment. In the following illustration, cell C2 contains the payment formula, =PMT(B3/12,B4,-B5), which uses two input cells, B3 and B4.
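The same two-variable table can be sketched in code: rows vary the rate, columns vary the term, and every cell applies one payment formula (a plain implementation of the annuity formula behind PMT, sign convention ignored; figures invented for illustration):

```python
def pmt(rate, nper, pv):
    """Periodic payment on a loan of pv over nper periods at per-period rate."""
    return pv * rate / (1 - (1 + rate) ** -nper)

loan = 100_000
rates = (0.06, 0.07, 0.08)   # row input values (annual interest rate)
terms = (180, 360)           # column input values (term in months)

# Two-variable data table: each cell is the formula applied to one
# rate/term combination.
print("rate  " + "".join(f"{t:>10}" for t in terms))
for r in rates:
    print(f"{r:.0%}   " + "".join(f"{pmt(r / 12, t, loan):10.2f}" for t in terms))
```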

GOAL SEEK ANALYSIS IN EXCEL

Goal Seek

Goal Seek is used when you know what answer you want, but don't know the exact figure to input to get that answer. For example, you're quite certain that 8 multiplied by something equals 56; you're just not sure what that missing number is. Is it 8 multiplied by 6? Or is it 8 multiplied by 7? Goal Seek will tell you the answer. We'll test that example out right now. So start a new spreadsheet, and create one the same as in the image below:

Before you can use Goal Seek, Excel needs certain things from you. First it needs some sort of formula to work with. In the image above we have the simple formula =B1 * B2. We've put this in cell B3. But the answer is wrong for us. We had a Goal of 56 (8 times something). We want to know which number you have to multiply 8 by in order to get the answer 56. We tried 8 times 6, and that gave the answer of 48. So we have to try again. Instead of us puzzling the answer out, we can let Goal Seek handle it. So do the following:

From the Excel menu bar, click on Tools From the drop down menu, click on Goal Seek A dialogue box pops up like the one below:

The dialogue box needs a little explaining. "Set cell" is the answer you're looking for; this is the Goal. Set cell needs a formula or function to work with. Our formula is in cell B3, so if your "Set cell" text box does not say B3, click inside it and type B3.

"To Value" is the actual answer you're looking for. With "Set cell", you're just telling Excel where the formula is. With "To Value" you have to tell Excel what answer you're looking for. We wanted an answer of 56 for our formula. So click inside the "To Value" text box and type 56. "By Changing Cell" is the missing bit. This is the part of the formula that needs to change in order to get the answer you want. In our formula we have an 8 and a 6. Clearly, the 6 is the number that has to go. So the cell that needs to change is B2. So go ahead and enter B2 in the "By Changing Cell" text box. Your dialogue box should now look like this:

Click OK when your dialogue box looks like the one above. Excel will then Set the cell B3 to the Value of 56, and change the figure in cell B2. You'll also get a dialogue box like the one below:

Click OK on the dialogue box. Your new spreadsheet will look like this one:

So Goal Seek has given us the answer we wanted: it is 7 that, when multiplied by 8, equals 56.
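Under the hood, Goal Seek is simply an iterative numeric search. A minimal sketch in Python (this is our own illustration by bisection, not Excel's actual algorithm; the function stands in for the formula in the "Set cell"):

```python
def goal_seek(f, target, lo, hi, tol=1e-9):
    """Find x in [lo, hi] with f(x) == target by bisection.
    Assumes f is monotonic on the interval (a hypothetical simplification)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if abs(f(mid) - target) < tol:
            return mid
        # keep the half of the bracket that still contains the target
        if (f(mid) - target) * (f(lo) - target) < 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# "Set cell" formula: =8 * B2; "To value": 56; "By changing cell": B2
x = goal_seek(lambda b2: 8 * b2, target=56, lo=0, hi=100)
print(round(x, 6))  # 7.0
```

The same search, applied to a loan-payment formula, is how Goal Seek can back out an interest rate from a desired monthly payment.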

Another Solution

Consider the following what-if question: what is the total profit if sales increase by 20 percent? If you set up your worksheet properly, you can change the value in one cell to see what happens to the profit cell. Goal seeking takes the opposite approach. If you know what a formula result should be, Excel can tell you the values that you need to enter in one or more input cells to produce that result. In other words, you can ask a question such as: how much do sales need to increase to produce a profit of $1.2 million? Excel provides two tools that are relevant:
Goal seeking: determines the value that you need to enter in a single input cell to produce the result that you want in a dependent (formula) cell.
Solver: determines the values that you need to enter in multiple input cells to produce the result that you want.

Use Goal Seek to find out how to get a desired result If you know the result that you want from a formula, but you are not sure what input value the formula requires to get that result, you can use the Goal Seek feature. For example, suppose that you need to borrow some money. You know how much money you want, how long a period you want in which to pay off the loan, and how much you can afford to pay each month. You can use Goal Seek to determine what interest rate you must secure in order to meet your loan goal.

NOTE Goal Seek works with only one variable input value. If you want to determine more than one input value, for example, the loan amount and the monthly payment amount for a loan, you should instead use the Solver add-in.

MAIL MERGE

Sometimes the term mail merge can be a little misleading. We assume from the title that the intent of mail merge is to produce letters for mass mailing purposes. That's not necessarily the case. Mail merge is for simplifying repetitive documents and tasks. Mail merge can be used for creating many documents at once that contain identical formatting, layout, text, graphics, etc., and where only certain portions of each document vary. Mail merge is also used for generating mailing labels, envelopes, address lists, personalized training handouts, etc. As well as hard-copy mailshots, it can be used to generate multiple emails and electronic faxes. And it can even be used to create a friendly front-end to spreadsheet or database information. Whenever you need to assemble similar data, mail merge is the answer!

Mail merge primarily involves two files, the Main Document and the Data Source. The Main Document contains the information that will remain the same in each record, and the Data Source contains all the variable information, in the form of fields. This is the information that will change in the Main Document when the merge is completed. Along with the information that remains the same, the Main Document also contains merge fields, which are references to the fields in the Data Source. When the Main Document and Data Source are merged, Microsoft Word replaces each merge field in the Main Document with the data from the respective field contained in the Data Source. The end result is a third document, a combination of the Main Document and Data Source. You can also mail merge directly to the printer (or fax or email) without creating a merged document on screen, and you can preview the mail merge without actually merging (using the View Merged Data button).

Start a mail merge

To start a mail merge, follow these steps, as appropriate for the version of Word that you are running.
Microsoft Word 2002: On the Tools menu, click Letters and Mailings, and then click Mail Merge Wizard.
Microsoft Office Word 2003: On the Tools menu, click Letters and Mailings, and then click Mail Merge.
Microsoft Office Word 2007: On the Mailings tab, click Start Mail Merge, and then click Step by Step Mail Merge Wizard.

Select document type

In the Mail Merge task pane, click Letters. This will allow you to send letters to a group of people and personalize the results of the letter that each person receives. Click Next: Starting document.

Select the starting document

1. Click one of the following options:
o Use the current document: Use the currently open document as your main document.
o Start from a template: Select one of the ready-to-use mail merge templates.
o Start from existing document: Open an existing document to use as your mail merge main document.
2. In the Mail Merge task pane, click Next: Select recipients.

Select recipients

When you open or create a data source by using the Mail Merge Wizard, you are telling Word to use a specific set of variable information for your merge. Use one of the following methods to attach the main document to the data source.

Method 1: Use an existing data source

To use an existing data source, follow these steps:
1. In the Mail Merge task pane, click Use an existing list.
2. In the Use an existing list section, click Browse.
3. In the Select Data Source dialog box, select the file that contains the variable information that you want to use, and then click Open. Note: If the data source is not listed in the list of files, select the appropriate drive and folder. If necessary, select the appropriate option in the All Data Sources list. Select the file, and then click Open. Word displays the Mail Merge Recipients dialog box. You can sort and edit your data if you want to.
4. Click OK to return to the main document.
5. Save the main document. When you save the main document at this point, you are also saving the data source and attaching the data source to the main document.
6. Type the name that you want to give to your main document, and then click Save.

Method 2: Use names from a Microsoft Outlook Contacts List

To use an Outlook Contact List, follow these steps:
1. In the Mail Merge task pane, click Next: Select recipients.
2. Click Select from Outlook contacts.
3. In the Select from Outlook contacts section, click Choose Contacts Folder.

4. In the Select Contact List Folder dialog box, select the Outlook contacts folder that you want, and then click OK. Word displays the Mail Merge Recipients dialog box. You can sort and edit your data if you want.
5. Click OK to return to the main document.

Method 3: Create a database of names and addresses

To create a new database, follow these steps:
1. In the Mail Merge task pane, click Next: Select Recipients.
2. Click Type a new list.
3. Click Create. The New Address List dialog box appears. In this dialog box, enter the address information for each record. If there is no information for a particular field, leave the box blank. By default, Word skips blank fields, so the merge is not affected if blank entries are in the data form. The set of information in each form makes up one data record.
4. After you type the information for a record, click New Entry to move to the next record. To delete a record, click Delete Entry. To search for a specific record, click Find Entry. To customize your list, click Customize. In the Customize Address List dialog box, you can add, delete, rename, and reorder the merge fields.
5. In the New Address List dialog box, click OK. In the Save Address List dialog box, type the name that you want to give to your data source in the File name box, and then click Save.
6. In the Mail Merge Recipients dialog box, make any changes that you want, and then click OK.
7. Click Next: Write your letter to finish setting up your letter.
8. Save the main document. When you save the main document at this point, you are also saving the data source and attaching the data source to the main document.
9. Type the name that you want to give to your main document, and then click Save.

To proceed to the next step, click Next: Write your letter.

Write your letter

In this step, you set up your main document.
1. Type or add any text and graphics that you want to include in your letter.
2. Add the field codes where you want the variable information to appear. In the Mail Merge task pane, you have four options:
o Address block: Use this option to insert a formatted address.
o Greeting line: Use this option to insert a formatted salutation.
o Electronic postage: Use this option to insert electronic postage. Note: This option requires that you have a postage software program installed on your computer.
o More items: Use this option to insert individual merge fields. When you click More Items, the Insert Merge Field dialog box appears. Note: Make sure that your cursor is where you want to insert the information from your data source before you click More Items. In the Insert Merge Field dialog box, click the merge field that you want to use, and then click Insert. Note: You can insert all of your fields and then go back and add any spaces or punctuation. Alternatively, you can insert one field at a time, close the Insert Merge Field dialog box, add any spaces or punctuation that you want, and then repeat this step for each additional merge field that you want to insert. You can also format (apply bold or italic formatting to) the merge fields, just like regular text.
3. When you finish editing the main document, click Save or Save As on the File menu.

Note In Word 2007, click the Microsoft Office Button, and then click Save or Save As. Name the file, and then click Save. To proceed to the next step, click Next: Preview your letters.

Preview your letters

This step allows you to preview your merged data, one letter at a time. You can also make changes to your recipient list or personalize individual letters. To proceed to the next step, click Next: Complete the merge.

Complete the merge

This step merges the variable information with the form letter. You can output the merge result by using either of the following options:

Print: Select this option to send the merged document directly to the printer. You will not be able to view the document on your screen. When you click Print, the Merge to Printer dialog box appears, where you can choose which records to merge. When you click OK, the Print dialog box appears. Click Print to print the merged document.
Edit individual letters: Select this option to display the merged document on your screen. When you click Edit individual letters, the Merge to New Document dialog box appears, where you can choose which records to merge. When you click OK, the documents are merged to a new Word document. To print the file, on the File menu, click Print. Note: In Word 2007, click the Microsoft Office Button, and then click Print.

Glossary

Address list: A file that contains the data that varies in each copy of a merged document. For example, a data source can include the name and address of each recipient of a form letter.
Boilerplate: Generic information that is repeated in each form letter, mailing label, envelope, or directory (catalog).
Data field: A category of information in a data source. A data field corresponds to one column of information in the data source. The name of each data field is listed in the first row (header row) of the data source. "PostalCode" and "LastName" are examples of data field names.
Data record: A complete set of related information in a data source. A data record corresponds to one row of information in the data source. All information about one client in a client mailing list is an example of a data record.
Delimited file: A text file that has data fields separated (or delimited) by tab characters or commas, and data records delimited by paragraph marks.
Header row: The first row (or record) in a mail merge data source. The header row contains the field names for the categories of information in the data source; for example, "Name" and "City." The header row can also be stored in a separate document called the header source.
Main document: In a mail merge operation, the document that contains the text and graphics that remain the same for each version of the merged document; for example, the return address and body of a form letter.
Merge field: A placeholder that you insert in the main document. Merge fields tell Microsoft Word where to insert specific information from the data source. For example, insert the merge field "City" to have Word insert a city name, such as "Paris," that is stored in the City data field.
Merged document: The document that is created by merging the data from the data source into the main document.
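The whole idea can be captured in a few lines of code: a main document with merge fields, a data source of records, and a merge step that substitutes each field. A minimal sketch in Python (the `<<Field>>` delimiter and the courier-style records are our own illustration, not Word's internal format):

```python
# Main Document: boilerplate text plus merge fields (<<...>> is a made-up delimiter)
main_document = "Dear <<FirstName>>,\nYour parcel will arrive in <<City>> on <<Date>>."

# Data Source: one data record per letter; the keys are the data fields
data_source = [
    {"FirstName": "Asha", "City": "Delhi", "Date": "Monday"},
    {"FirstName": "Ravi", "City": "Mumbai", "Date": "Tuesday"},
]

def merge(template, record):
    """Replace every merge field in the template with the record's value."""
    for field, value in record.items():
        template = template.replace(f"<<{field}>>", value)
    return template

# The merged document: one personalized copy per data record
letters = [merge(main_document, r) for r in data_source]
print(letters[0].splitlines()[0])  # Dear Asha,
```

Each merged letter is the Main Document with its merge fields filled in from one data record, which is exactly what Word produces for the courier company's mailshot.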

SLIDE TRANSITION

What is a Slide Transition?

A slide transition is the visual motion when one slide changes to the next during a presentation. By default, one slide simply replaces the previous one on screen, much the same way that a slide show of photographs would change from one to the next. Most presentation software programs provide many different transition effects that you can use to liven up your slide show.

Slide Transition Choices

Transitions range from a simple Cover Down, where the next slide covers the current one from the top of the screen, to a Wheel Clockwise where the new slide spins in like spokes on a wheel to cover the previous one. You can also have slides dissolve into each other, push each other off the screen, or open up like horizontal or vertical blinds.

Common Mistakes When Using Slide Transitions

While all this choice may seem like a great thing, common mistakes are to use too many transitions, or to use one that doesn't fit well with the subject matter. In most cases, find one transition that doesn't detract from the presentation and use it throughout the show.

Add a Different Slide Transition to Slides Needing Special Emphasis

If there is a slide that needs special emphasis, you might consider using a separate transition for it, but don't choose a separate transition for each slide. Your slide show will look amateurish and your audience will quite likely be distracted from the presentation itself, as they wait and watch for the next transition.

Slide transitions are one of the many finishing touches to a presentation. Wait until you have the slides edited and arranged in the preferred order before setting transitions.

Slide Transition Effects

Slide transition effects include Blinds, Box, Checkerboard, Dissolve, Random, Split, Strips, and Wipe. There are four other effects for slide transitions:
Cover: brings the next slide over the current one from a direction you specify.
Cut: similar to the Appear effect; the slide appears over the previous slide.
Fade: similar to the Dissolve effect, except that it starts with an all-black screen before dissolving the new slide onto the screen.
Uncover: takes away the previous slide to reveal the new slide underneath.

Many of the transitions have a choice of what speed the transition should happen at and you can choose whether the transition should happen on the slide advance or at a certain time. There is also an option to randomly choose the transition effect from the list of possibilities. You can choose to apply the transition to one slide only or to all the slides in the slide show.

It is important not to let special effects have a detrimental effect on your presentation. Just because sound and motion are available to you, they should not necessarily be used. More often than not, these features are used inappropriately and become more of a focus than the content of the presentation. Beware of this as you create your presentation. Used sparingly, these effects can have a greater impact on audience attention when they do appear. To apply a transition, select "Slide Transition" from the "Slide Show" menu shown below:

The following pop-up window will appear.

From the "Effect" section of this window, you will be able to select a transition from the drop-down menu (shown above with "No Transition" selected). Click on the downward arrow to view the various options. When a transition is selected, you will be able to choose the speed of the transition by clicking on the radio button for slow, medium, or fast. On this window, you will also be able to determine how you advance to the next slide. The default is a mouse click (see the checked box above in the "Advance" section). You may also choose to have your presentation automatically move on to the next slide after a number of seconds. There is also an option to play a sound when moving between slides. Again, beware of using too many special effects. They can all too easily make your presentation turn tacky and distracting.

When you have set your transition preferences, click the "Apply" button to set the preferences for just the slide you have selected or click "Apply to All" to apply this transition to all of the slides in your presentation.

Another way to set slide transitions is to go to the "Slide Sorter" view. To select this view, go to the "View" menu and select "Slide Sorter" as follows:

There are also shortcut buttons for each viewing option. These are located in the lower left corner of the PowerPoint window and look like the following.

In the "Slide Sorter" view, all of the slides are viewed at once. Drop-down menus will appear at the top of the window that will allow you select slide transition effects and text animation effects easily. These menus will also display the chosen special effect for a selected slide. You can select a slide by clicking on it. More than one slide can be selected by holding down the Shift button on your keyboard as you click on additional slides.

[Screenshots: the transition options as they appear in PowerPoint 2007, and the Slide Sorter view.]

DATA QUERY

A database "query" is basically a "question" that you ask the database. The results of the query are the information returned by the database management system. Queries are usually constructed using SQL (Structured Query Language), which resembles a high-level programming language. SQL was originally based upon relational algebra. Its scope includes data query and update, schema creation and modification, and data access control.

Queries

The most common operation in SQL is the query, which is performed with the declarative SELECT statement. SELECT retrieves data from one or more tables, or expressions. Standard SELECT statements have no persistent effects on the database. Some non-standard implementations of SELECT can have persistent effects, such as the SELECT INTO syntax that exists in some databases.

A query includes a list of columns to be included in the final result immediately following the SELECT keyword. An asterisk ("*") can also be used to specify that the query should return all columns of the queried tables. SELECT is the most complex statement in SQL, with optional keywords and clauses that include:

The FROM clause indicates the table(s) from which data is to be retrieved. The FROM clause can include optional JOIN subclauses to specify the rules for joining tables.
The WHERE clause eliminates all rows from the result set for which the comparison predicate does not evaluate to True.
The GROUP BY clause is used to project rows having common values into a smaller set of rows. The WHERE clause is applied before the GROUP BY clause.
The HAVING clause includes a predicate used to filter rows resulting from the GROUP BY clause.
The ORDER BY clause identifies which columns are used to sort the resulting data, and in which direction they should be sorted (ascending or descending). Without an ORDER BY clause, the order of rows returned by an SQL query is undefined.
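The clauses above can be seen working together in one runnable query. A small sketch using Python's built-in sqlite3 module as a stand-in database (the `orders` table and its rows are our own example data):

```python
import sqlite3

# Build a throwaway in-memory table of courier orders
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (city TEXT, amount INTEGER)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("Delhi", 100), ("Delhi", 250), ("Mumbai", 80), ("Pune", 300)])

rows = con.execute("""
    SELECT city, SUM(amount) AS total
    FROM orders                -- which table to read
    WHERE amount > 50          -- filter rows BEFORE grouping
    GROUP BY city              -- one output row per city
    HAVING SUM(amount) >= 300  -- filter the grouped rows
    ORDER BY total DESC        -- sort the final result
""").fetchall()
print(rows)  # [('Delhi', 350), ('Pune', 300)]
```

Mumbai survives the WHERE clause but is dropped by HAVING, which illustrates the order of evaluation: WHERE first, then GROUP BY, then HAVING, then ORDER BY.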

Data manipulation

The Data Manipulation Language (DML) is the subset of SQL used to add, update and delete data:
INSERT adds rows to an existing table, e.g.:
INSERT INTO My_table (field1, field2, field3) VALUES ('test', 'N', NULL);
UPDATE modifies a set of existing table rows, e.g.:
UPDATE My_table SET field1 = 'updated value' WHERE field2 = 'N';
DELETE removes existing rows from a table, e.g.:
DELETE FROM My_table WHERE field2 = 'N';
TRUNCATE deletes all data from a table in a very fast way. It usually implies a subsequent COMMIT operation.
MERGE is used to combine the data of multiple tables. It combines the INSERT and UPDATE elements. It is defined in the SQL:2003 standard.
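The three core DML statements above can be run as-is against SQLite (used here only as a convenient stand-in; note SQLite supports neither TRUNCATE nor MERGE, so those are omitted):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE My_table (field1 TEXT, field2 TEXT, field3 TEXT)")

# INSERT adds a row
con.execute("INSERT INTO My_table (field1, field2, field3) VALUES ('test', 'N', NULL)")
# UPDATE modifies matching rows
con.execute("UPDATE My_table SET field1 = 'updated value' WHERE field2 = 'N'")
print(con.execute("SELECT field1 FROM My_table").fetchall())  # [('updated value',)]
# DELETE removes matching rows
con.execute("DELETE FROM My_table WHERE field2 = 'N'")
print(con.execute("SELECT COUNT(*) FROM My_table").fetchone()[0])  # 0
```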

Transaction controls

Transactions, if available, wrap DML operations: COMMIT causes all data changes in a transaction to be made permanent.

ROLLBACK causes all data changes since the last COMMIT or ROLLBACK to be discarded, leaving the state of the data as it was prior to those changes. Once the COMMIT statement completes, the transaction's changes cannot be rolled back. COMMIT and ROLLBACK terminate the current transaction and release data locks.
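COMMIT and ROLLBACK behave the same way through Python's sqlite3 driver, where `commit()` and `rollback()` issue the corresponding SQL statements (the `accounts` table is our own example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
con.execute("INSERT INTO accounts VALUES ('A', 100)")
con.commit()    # COMMIT: the inserted row is now permanent

con.execute("UPDATE accounts SET balance = 0 WHERE name = 'A'")
con.rollback()  # ROLLBACK: discard every change since the last COMMIT

balance = con.execute("SELECT balance FROM accounts WHERE name = 'A'").fetchone()[0]
print(balance)  # 100 -- the uncommitted update was discarded
```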

Data definition

The Data Definition Language (DDL) manages table and index structure. The most basic items of DDL are the CREATE, ALTER, RENAME, DROP and TRUNCATE statements:
CREATE creates an object (a table, for example) in the database.
DROP deletes an object in the database, usually irretrievably.
ALTER modifies the structure of an existing object in various ways, for example, adding a column to an existing table.
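A quick DDL round-trip, again using SQLite as a stand-in (SQLite expresses RENAME as a form of ALTER TABLE and has no TRUNCATE; the `staff` table is our own example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE staff (name TEXT)")            # CREATE: a new object
con.execute("ALTER TABLE staff ADD COLUMN dept TEXT")    # ALTER: add a column
con.execute("ALTER TABLE staff RENAME TO employees")     # RENAME the table

# Inspect the modified structure
cols = [row[1] for row in con.execute("PRAGMA table_info(employees)")]
print(cols)  # ['name', 'dept']

con.execute("DROP TABLE employees")                      # DROP: delete it, irretrievably
```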

Data types

Each column in an SQL table declares the type(s) that column may contain. ANSI SQL includes the following data types.

Character strings
CHAR(n): fixed-width n-character string, padded with spaces as needed.
VARCHAR(n): variable-width string with a maximum size of n characters.

Bit strings
BIT(n): an array of n bits.

Numbers
INTEGER and SMALLINT; FLOAT, REAL and DOUBLE PRECISION.

Date and time
DATE, TIME, TIMESTAMP, INTERVAL.

Data control

The Data Control Language (DCL) authorizes users and groups of users to access and manipulate data. Its two main statements are:
GRANT authorizes one or more users to perform an operation or a set of operations on an object.
REVOKE eliminates a grant, which may be the default grant.

Criticisms of SQL

Implementations are inconsistent and, usually, incompatible between vendors.
The language makes it too easy to do a Cartesian join (joining all possible combinations), which results in "run-away" result sets when WHERE clauses are mistyped. It is also possible to misconstruct a WHERE clause on an update or delete, thereby affecting more rows in a table than desired.
The grammar of SQL is perhaps unnecessarily complex.

Alternatives to SQL

D is a query language for truly relational database management systems (TRDBMS).
DMX is a query language for data mining models.
LDAP is an application protocol for querying and modifying directory services running over TCP/IP.
MDX is a query language for OLAP databases.
XQuery is a query language for XML data sources.
XPath is a language for navigating XML documents.

COMPUTER NETWORKS

Computer networks range from small segments called local area networks (LANs), which connect computers within an office building; through medium-sized metropolitan area networks (MANs), which connect, for example, two offices in a city; to wide area networks (WANs), which connect computers that may be thousands of miles apart in other cities or countries.

Networking is the practice of linking two or more computers or devices with each other. The connectivity can be wired or wireless. A computer network can be categorized in different ways, depending on the geographical area it covers, as mentioned above.

WAN connectivity is achieved by a device known as a router. The Internet is the world's largest WAN, in which millions of computers from all over the globe are connected with each other.

There are two main types of computer networking: client-server and peer-to-peer.

In client-server computing, one computer plays a major role, known as the server, where the files, data in the form of web pages, documents or spreadsheet files, video, databases and other resources are placed. All the other computers in a client/server network are called clients, and they get data from the server. In peer-to-peer networks, all the computers play the same role and no computer acts as a centralized server. In most major businesses around the world, the client-server model is the one in major use.
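The request/response pattern at the heart of client-server computing can be sketched with Python sockets. Here the server holds a resource and a client requests it (the file name and port handling are our own illustration; port 0 asks the OS for any free port):

```python
import socket
import threading

def serve(srv):
    """Server role: wait for a client, read its request, return the resource."""
    conn, _ = srv.accept()
    request = conn.recv(1024)              # client's request arrives here
    conn.sendall(b"payroll.xls contents")  # server sends back the shared data
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=serve, args=(srv,), daemon=True).start()

# Client role: connect to the server and ask for a file
client = socket.create_connection(srv.getsockname())
client.sendall(b"GET payroll.xls")
response = client.recv(1024).decode()
print(response)  # payroll.xls contents
client.close()
srv.close()
```

In a peer-to-peer network, by contrast, every machine would run both the serving and the requesting code.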

A network topology defines the structure, design or layout of a network. There are different topologies, such as bus, ring, star, mesh and hybrid. The star topology is the most commonly used. In the star topology, all the computers in the network are connected to a centralized device such as a hub or switch, thus forming a star-like structure. If the hub or switch fails for any reason, all connectivity and communication between the computers is halted.

A common communication language used by the computers and the communication devices is known as a protocol. The most commonly used and popular protocol on the Internet and in home and other networks is TCP/IP. TCP/IP is not a single protocol but a suite of several protocols. A computer network can be wired or wireless, and TCP/IP can work in both types of network. Data flow, or communication, can be divided into seven logical layers known as the OSI reference model, which was standardized by the ISO.

1. Application layer
2. Presentation layer
3. Session layer
4. Transport layer
5. Network layer
6. Data Link layer
7. Physical layer

A computer network, or simply network, is a collection of computers and devices connected by communications channels that facilitate communications among users and allow users to share resources with other users.

Purpose

a. Facilitating communications: Using a network, people can communicate efficiently and easily via e-mail, instant messaging, chat rooms, telephony, video telephone calls, and videoconferencing.

b. Sharing hardware: In a networked environment, each computer on a network can access and use hardware on the network. Suppose several personal computers on a network each require the use of a laser printer. If the personal computers and a laser printer are connected to a network, each user can then access the laser printer on the network, as they need it.

c. Sharing files, data, and information: In a network environment, any authorized user can access data and information stored on other computers on the network.

d. Sharing software: Users connected to a network can access application programs on the network.

Network classification

The following list presents categories used for classifying networks.

Connection method

Computer networks can be classified according to the hardware and software technology that is used to interconnect the individual devices in the network. Wired technologies include Ethernet, optical fiber, HomePNA and power line communication (G.hn); wireless technologies include terrestrial microwave, communications satellites, cellular and PCS systems, wireless LANs and Bluetooth.

Scale

Networks are often classified as local area network (LAN), wide area network (WAN), metropolitan area network (MAN), personal area network (PAN), virtual private network (VPN), campus area network (CAN), storage area network (SAN), and others, depending on their scale, scope and purpose. Usage, trust level, and access right often differ between these types of network.

Functional relationship (network architecture)

Computer networks may be classified according to the functional relationships which exist among the elements of the network, e.g., active networking, client-server and peer-to-peer (workgroup) architecture.

Network topology

Computer networks may be classified according to the network topology upon which the network is based, such as bus network, star network, ring network, mesh network, star-bus network, tree or hierarchical topology network. Network topology is the arrangement by which devices in the network relate logically to one another, independent of physical arrangement. Even if networked computers are physically placed in a linear arrangement, if they are connected to a hub the network has a star topology rather than a bus topology. In this regard the visual and operational characteristics of a network are distinct.

Types of networks

Common types of computer networks may be identified by their scale.

Personal area network

A personal area network (PAN) is a computer network used for communication among computers and other information technology devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and even video game consoles. A PAN may include wired and wireless connections between devices.

Local area network

A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as home, school, computer laboratory, office building, or closely positioned group of buildings. Each computer or device on the network is a node.

Home area network

A home area network (HAN) or home network is a residential local area network. It is used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices.

Campus area network

A campus area network (CAN) is a computer network made up of an interconnection of local area networks (LANs) within a limited geographical area. It can be considered one form of a metropolitan area network, specific to an academic setting. In the case of a university campus-based campus area network, the network is likely to link a variety of campus buildings, including academic departments, the university library and student residence halls. A campus area network is larger than a local area network but smaller than a wide area network (WAN) (in some cases).

A CAN may be considered a type of MAN (metropolitan area network), but is generally limited to a smaller area than a typical MAN. This term is most often used to discuss the implementation of networks for a contiguous area. A metropolitan area network (MAN) is a network that connects two or more local area networks or campus area networks together but does not extend beyond the boundaries of the immediate town/city. Routers, switches and hubs are connected to create a metropolitan area network.

Wide area network

A wide area network (WAN) is a computer network that covers a large geographic area such as a city or country, or even spans intercontinental distances, using a communications channel that combines many types of media such as telephone lines, cables, and air waves. A WAN often uses transmission facilities provided by common carriers, such as telephone companies.

Global area network

A global area network (GAN) is a model for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc.

Virtual private network

A virtual private network (VPN) is a computer network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network when this is the case. A VPN allows computer users to appear to be connecting from an IP address location other than the one which connects the actual computer to the Internet.

Internetwork

An internetwork is the connection of two or more distinct computer networks via a common routing technology.

Internet

The Internet is a global system of interconnected governmental, academic, public, and private computer networks. It is based on the networking technologies of the Internet Protocol Suite. The Internet is also the communications backbone underlying the World Wide Web (WWW). The 'Internet' is most commonly spelled with a capital 'I' as a proper noun, for historical reasons and to distinguish it from other generic internetworks.

Intranets and extranets

Intranets and extranets are parts or extensions of a computer network, usually a local area network. An intranet is a set of networks, using the Internet Protocol and IP-based tools such as web browsers and file transfer applications, that is under the control of a single administrative entity. That administrative entity closes the intranet to all but specific, authorized users. Most commonly, an intranet is the internal network of an organization. A large intranet will typically have at least one web server to provide users with organizational information. An extranet is a network that is limited in scope to a single organization or entity and also has limited connections to the networks of one or more other usually, but not necessarily, trusted organizations or entities. Technically, an extranet may also be categorized as a CAN, MAN, WAN, or other type of network, although, by definition, an extranet cannot consist of a single LAN; it must have at least one connection with an external network.

IPO Cycle

The IPO (Input-Process-Output-Storage) cycle describes how a computer takes in data, processes the data, outputs information, and then saves the information.

1. Input: The computer receives data from an input device.

2. Processing: The computer's central processing unit (CPU) processes the data into information.

3. Output: Meaningful information is displayed on the monitor or printed out.

4. Storage: Results are saved to the computer's hard drive or other types of secondary storage.
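The four stages above can be illustrated with a minimal program sketch (the file name, the example data, and the "sum" processing step are all illustrative assumptions, not part of any standard):

```python
# Minimal sketch of the Input-Process-Output-Storage cycle.
# The file name "result.txt" and the summing step are illustrative only.

def ipo_cycle(raw_values):
    # 1. Input: receive raw data (a list stands in for an input device).
    data = list(raw_values)
    # 2. Processing: the CPU turns data into information (here, a sum).
    information = sum(data)
    # 3. Output: present meaningful information to the user.
    print("Total:", information)
    # 4. Storage: save the result to secondary storage for later use.
    with open("result.txt", "w") as f:
        f.write(str(information))
    return information

ipo_cycle([10, 20, 30])  # prints "Total: 60" and writes result.txt
```

Each call walks the full cycle once; in a real system the same four stages repeat continuously as new data arrives.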

IPO Cycle in Detail

1. Input Devices Devices which transfer data, programs, or signals into a computer system are called input devices. These devices are used to give raw data to the computer to perform specific tasks. First, the data, programs, or signals are fed into the input devices in a suitable form; the device then converts them from human-readable format into electrical signals that are transmitted to the central processing unit of the computer.

2. The Processing Device

The main unit inside the computer is the CPU, the processor. This unit is responsible for all events inside the computer. It controls all internal and external devices and performs arithmetic and logic operations. The CPU (Central Processing Unit) is the device that interprets and executes instructions. The operations a microprocessor performs are called the instruction set of that processor. Processors differ from one another in their instruction sets.

Today's single-chip central processing units, called microprocessors, make personal computers and workstations possible. By definition, the CPU is the chip that functions as the brain of a computer. In some instances, however, the term encompasses both the processor and the computer's memory or, even more broadly, the main computer console.

The CPU is composed of several units:

The Control Unit (CU) directs and controls the activities of the internal and external devices. It interprets the instructions fetched into the computer, determines what data are needed, where they are stored, and where to store the results of the operations, and sends the control signals to the devices involved in the execution of the instructions.

The Arithmetic and Logic Unit (ALU) is the part where actual computations take place. It consists of circuits that perform arithmetic operations (e.g., addition, subtraction, multiplication, division) on data received from memory and that are capable of comparing numbers.

3. Output Devices Output devices are used to get the final result from the computer. First, output is displayed on the monitor; we can then print it on paper with the help of a printer. The purpose of output devices is to translate data and information from electrical impulses into human-readable format. The output device necessary for the computer to display messages to the user is the monitor. If we want to keep a copy of the work on paper, we use printers. Plotters are devices that are more suitable for large-scale outputs such as engineering drawings and high-quality graphics.

Linkage to Batch Processing Batch jobs can be stored up during working hours and then executed during the evening or whenever the computer is idle. Batch processing is particularly useful for operations that require the computer or a peripheral device for an extended period of time. Once a batch job begins, it continues until it is done or until an error occurs. Note that batch processing implies that there is no interaction with the user while the program is being executed. An example of batch processing is the way that credit card companies process billing. The customer does not receive a bill for each separate credit card purchase but one monthly bill for all of that month's purchases. The bill is created through batch processing, where all of the data are collected and held until the bill is processed as a batch at the end of the billing cycle. The opposite of batch processing is transaction processing or interactive processing. In interactive processing, the application responds to commands as soon as you enter them.

Linkage to Online Processing Online transaction processing increasingly requires support for transactions that span a network and may include more than one company.
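The credit card example can be sketched as a small batch job: transactions accumulate over the billing cycle and are then processed together with no user interaction (the customer names and amounts below are invented for illustration):

```python
# Batch processing sketch: transactions are collected during the billing
# cycle, then the whole batch is processed in one run, start to finish,
# with no user interaction. All names and amounts are illustrative.
from collections import defaultdict

transactions = [          # collected over the month
    ("alice", 120.00),
    ("bob", 45.50),
    ("alice", 30.25),
    ("bob", 99.75),
]

def run_billing_batch(txns):
    bills = defaultdict(float)
    for customer, amount in txns:   # the batch runs to completion
        bills[customer] += amount
    return dict(bills)              # one consolidated bill per customer

print(run_billing_batch(transactions))
# {'alice': 150.25, 'bob': 145.25}
```

An interactive system, by contrast, would compute and show each balance the moment a purchase was entered, rather than deferring all work to the end of the cycle.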

GENERATION OF LANGUAGES

Programming languages have evolved tremendously since the early 1950s, and this evolution has resulted in hundreds of different languages being invented and used in the industry.

1) We start with the first- and second-generation languages of the period 1950-60, which most experienced programmers would identify as machine and assembly languages. Computers then were programmed in binary notation, which was very prone to errors. A simple algorithm resulted in lengthy code. This was then improved with mnemonic codes to represent operations.

2) Symbolic assembly codes came next in the mid-1950s, in second-generation programming languages like AUTOCODER, SAP and SPS. Symbolic addresses allowed programmers to represent memory locations, variables and instructions with names. This kind of programming is still considered fast, but programming in machine language required detailed knowledge of the CPU and the machine's instruction set. This also meant high hardware dependency and lack of portability.

3) The early 1960s through 1980 saw the emergence of third-generation programming languages. Languages like ALGOL 58, 60 and 68, COBOL, FORTRAN IV, ADA and C are examples, and were considered high-level languages. Most of these languages had compilers, and the advantage of this was speed. Independence was another factor, as these languages were machine independent and could run on different machines. The advantages of high-level languages include support for abstraction, so that programmers can concentrate on finding the solution to the problem rapidly rather than on low-level details of data representation. The comparative ease of use and learning, improved portability, and simplified debugging, modification and maintenance led to reliability and lower software costs. These languages mostly followed von Neumann constructs, with sequential procedural operations and code executed using branches and loops. Although the syntax of these languages differed, they shared similar constructs and were more readable by programmers and users than assembly languages.
COBOL (COmmon Business-Oriented Language), a business data processing language, is an example of a language constantly improving over the decades. The new COBOL 97 includes new features like object-oriented programming to keep up with current languages. One likely reason for this is that existing code is important, and developing a totally new language from scratch would be a lengthy process. This was also the rationale behind the development of C and C++.

4) Third-generation languages often followed procedural code, meaning the language performs functions defined in specific procedures that state how something is done. In comparison, most fourth-generation languages are non-procedural. A disadvantage of fourth-generation languages was that they were slow compared to compiled languages, and they also lacked control. The features expected of fourth-generation languages are quite clear: they must be user friendly, portable and independent of operating systems, usable by non-programmers, have intelligent default options about what the user wants, and allow the user to obtain results fast using minimal, bug-free code generated from high-level expressions (employing database and dictionary management, which makes applications easy and quick to change); none of this was possible using COBOL or PL/I. Examples of this generation of languages are IBM's ADRS2, APL, CSP and AS, Power Builder, and Access.

5) The 1990s saw the development of fifth-generation languages like PROLOG, referring to systems used in the fields of artificial intelligence, fuzzy logic and neural networks. This means computers may in the future have the ability to think for themselves and draw their own inferences using programmed information in large databases.

6) What does the next generation of languages hold for us? The sixth generation? The current trend of the Internet and the World Wide Web could cultivate a whole new breed of radical programmers for the future, now exploring new boundaries with languages like HTML and Java. What happens next is entirely dependent on the future needs of the whole computer and communications industry.

Another explanation 1GL, or first-generation language, was (and still is) machine language: the level of instructions and data that the processor is actually given to work on (which in conventional computers is a string of 0s and 1s).

2GL, or second-generation language, is assembler (sometimes called "assembly") language.

3GL, or third-generation language, is a "high-level" programming language, such as PL/I, C, or Java. A compiler converts the statements of a specific high-level programming language into machine language. (In the case of Java, the output is called bytecode, which is converted into appropriate machine language by a Java virtual machine that runs as part of an operating system platform.) A 3GL requires a considerable amount of programming knowledge.

4GL, or fourth-generation language, is designed to be closer to natural language than a 3GL. Languages for accessing databases are often described as 4GLs. A 4GL statement might look like this: EXTRACT ALL CUSTOMERS WHERE "PREVIOUS PURCHASES" TOTAL MORE THAN $1000

5GL, or fifth-generation language, is programming that uses a visual or graphical development interface to create source language that is usually compiled with a 3GL or 4GL compiler. Microsoft, Borland, IBM, and other companies make 5GL visual programming products for developing applications in Java, for example. Visual programming allows you to easily envision object-oriented programming class hierarchies and drag icons to assemble program components.
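The hypothetical 4GL statement above corresponds roughly to a declarative SQL query. A sketch using Python's built-in sqlite3 module (the table and column names are assumptions made for this example) shows the same idea issued from a 3GL: the programmer states what records are wanted, not how to loop over them:

```python
import sqlite3

# In-memory database with an assumed "customers" table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (name TEXT, previous_purchases REAL)")
con.executemany("INSERT INTO customers VALUES (?, ?)",
                [("Acme", 2500.0), ("Bolt", 800.0), ("Corex", 1200.0)])

# Declarative query: a rough SQL analogue of
#   EXTRACT ALL CUSTOMERS WHERE "PREVIOUS PURCHASES" TOTAL MORE THAN $1000
rows = con.execute(
    "SELECT name FROM customers WHERE previous_purchases > 1000"
).fetchall()
print([r[0] for r in rows])  # ['Acme', 'Corex']
```

The SELECT statement plays the role of the 4GL sentence; the surrounding Python is the 3GL host that compiles into ordinary procedural code.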

INTERNET BASED COMPUTING

Internet computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This Internet model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.

Essential Characteristics:

1) On-demand self-service: A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically, without requiring human interaction with each service provider.

2) Broad network access: Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).

3) Resource pooling: The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.

4) Rapid elasticity: Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

5) Measured Service: Internet systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

Service Models:

1. Internet Software as a Service (SaaS). The capability provided to the consumer is to use the provider's applications running on an Internet infrastructure. The applications are accessible from various client devices through a thin client interface such as an Internet browser (e.g., Internet-based email). The consumer does not manage or control the underlying Internet infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

2. Internet Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the Internet infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying Internet infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.

3. Internet Infrastructure as a Service (IaaS). The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying Internet infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

Deployment Models:

1. Private Internet: The Internet infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.

2. Community Internet: The Internet infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.

3. Public Internet: The Internet infrastructure is made available to the general public or a large industry group and is owned by an organization selling Internet services.

4. Hybrid Internet: The Internet infrastructure is a composition of two or more Internets (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., Internet bursting for load-balancing between Internets).

DATA CONSOLIDATION IN EXCEL

To summarize and report results from multiple worksheets, you can consolidate data from each worksheet into a master worksheet. The worksheets can be in the same workbook or other workbooks. When you consolidate data, you are assembling data so you can more easily update and aggregate it on a regular or ad hoc basis.

For example, if you have a worksheet of expense figures for each of your regional offices, you might use a consolidation to roll up these figures into a corporate expense worksheet. This master worksheet might contain sales totals and averages, current inventory levels, and highest selling products for the entire enterprise.

To consolidate data, you use the Consolidate command from the Data menu to display the Consolidate dialog box. You can use this dialog box in several ways to consolidate your data:

1. Consolidate by Position: Use this approach when the data in all worksheets is arranged in identical order and location.

2. Consolidate by Category: Use this approach when each worksheet organizes the data differently, but has the same row and column labels, which you can use to match the data.

3. Consolidate by using 3-D formulas: Use this approach when the worksheets do not have a consistent pattern you can rely on. You can create formulas that refer to cells in each range of data that you're combining. Formulas that refer to cells on multiple worksheets are called 3-D formulas.

4. Other ways to combine data.
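The "by position" approach can be pictured outside Excel: equal-shaped ranges are combined cell by cell, with no labels involved. A minimal Python sketch (the region data is invented) shows the idea:

```python
# Sketch of "Consolidate by Position": every sheet lays out the same
# range identically, so values are combined purely by cell position.
# The two regions below are invented example data.
region_a = [[10, 20], [30, 40]]   # e.g., an expense range on sheet 1
region_b = [[ 5, 15], [25, 35]]   # the same range, same layout, on sheet 2

def consolidate_by_position(*ranges, combine=sum):
    # zip pairs up rows across ranges, then cells across rows,
    # so matching is entirely positional (no row/column labels).
    return [[combine(cells) for cells in zip(*rows)]
            for rows in zip(*ranges)]

print(consolidate_by_position(region_a, region_b))
# [[15, 35], [55, 75]]
```

If the sheets were laid out differently, this positional pairing would silently combine the wrong cells, which is exactly why Excel offers the by-category alternative.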

How to do: If you have two or more Microsoft Excel worksheets that are identical to each other (except the values are different), you can have Excel's Data Consolidate feature consolidate the worksheets into a summary report. For example, suppose you have a workbook that consists of two worksheets. One worksheet has your students' names in A1:A20 and their corresponding midterm grades in B1:B20. The second worksheet lists the students' names in column A and their final grades in column B.

To create a worksheet listing the students' average grade, follow these steps:

1. Create a new worksheet and click A1.
2. Go to Data | Consolidate.
3. Select Average from the Function drop-down list.
4. Click the Collapse dialog button.
5. Select A1:B20 in the Midterm Grades sheet.
6. Click the Collapse dialog button and click Add.
7. Click the Collapse dialog button and select A1:B20 in the Final Grades sheet.
8. Click the Collapse dialog button and click Add.
9. Under Use Labels In:, select the Left Column check box. Click OK.

The students' average grades are now listed in the new worksheet. Data Consolidation for Excel is an Excel add-in which allows you to consolidate spreadsheet data from one or several sheets, from one or many open workbooks, quickly and easily.
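The same Average consolidation can be reproduced outside Excel. A minimal Python sketch (the student names and grades are invented) matches rows by the label in the left column, as step 9 does, and averages the values:

```python
# Sketch of consolidation with the Average function: rows are matched
# by the label in the left column, then their values are averaged.
# Student names and grades below are invented example data.
midterm = {"Ann": 78, "Ben": 85, "Cy": 90}   # Midterm Grades sheet (A:B)
final   = {"Ann": 82, "Ben": 91, "Cy": 88}   # Final Grades sheet (A:B)

def consolidate_average(*sheets):
    totals, counts = {}, {}
    for sheet in sheets:
        for name, grade in sheet.items():
            totals[name] = totals.get(name, 0) + grade
            counts[name] = counts.get(name, 0) + 1
    return {name: totals[name] / counts[name] for name in totals}

print(consolidate_average(midterm, final))
# {'Ann': 80.0, 'Ben': 88.0, 'Cy': 89.0}
```

Because matching is by label rather than position, a student missing from one sheet is still averaged correctly over the sheets that do contain them.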

Key Features of Data Consolidation for Excel include:

1. Data Consolidator - This useful tool simplifies the process of consolidating information coming from different workbooks but from the same range.

a. Consolidator (Creating models) - This utility is contained within the Consolidator tool. It is useful when you need to frequently create and execute the same model with the same data. The consolidation tool allows you to save those models and simply execute them the next time they are required.

b. Consolidator (Existing models) - After a model has been saved, it can be executed using this tool.

c. Consolidator (Copy models) - Enables you to copy consolidation models from one workbook to another. For this purpose, the workbooks that contain the consolidation models must be open.

d. Multi-Sheet Consolidator - This tool copies the selected sheets from each chosen workbook to an existing workbook or to a new workbook. Afterward, it consolidates the same range from each sheet in a new sheet (in the case of a new workbook) or in the selected sheet (in the case of an open workbook). The process of data consolidation can also be automated.

2. Detailed Consolidation - This type of consolidation is appropriate to see the detail along with the consolidation totals.

3. Export sheets - This tool allows you to export chosen sheets from a workbook, with options such as converting formulas to values, keeping colors, and ordering.

4. Toggle settings - This tool saves time by saving settings that can be recalled and reused for repetitive tasks.

5. Freeze or Divide panes - This tool helps in creating, navigating through, and editing excessively long models, and in keeping an eye on them.

REPORT CREATION IN MS-ACCESS


The sample images in this tutorial were created using Access 2000. If you are running an earlier version of Access, your screen images may appear slightly different. However, the same general principles still apply and you should be able to follow along. Once again, we're going to use the Northwind sample database (FOR THAT MATTER ANY DATABASE CAN BE USED). Before we get started, open up Microsoft Access and then open the Northwind database.

1. Choose the Reports menu. Once you've opened Northwind, you'll be presented with the main database menu shown below. Go ahead and click on the "Reports" selection and you'll see a list of the various reports Microsoft included in the sample database.

2. Create a new report. Click on the "New" button and we'll begin the process of creating a report from scratch.

Create a new report

3. Select the Report Wizard. The next screen that appears will ask you to select the method you wish to use to create the report. We're going to use the Report Wizard, which will walk us through the creation process step-by-step. After you've mastered the wizard, you might want to return to this step and explore the flexibility provided by the other creation methods.

4. Choose a table or query. Before leaving this screen, we want to choose the source of data for our report. If you want to retrieve information from a single table, you can select it from the drop-down box below. Alternatively, for more complex reports, we can choose to base our report on the output of a query that we previously designed. For our example, all of the data we need is contained within the Employees table, so choose this table and click on OK.

Select a creation method

Next, we'll select exactly which table data to include in the report and learn how to apply formatting to our finished product.

5. Select the fields to include. Use the > button to move over the desired fields. Note that the order you place the fields in the right column determines the default order in which they will appear in your report. Remember that we're creating an employee telephone directory for our senior management. Let's keep the information contained in it simple: the first and last name of each employee, their title and their home telephone number. Go ahead and select these fields. When you are satisfied, click the Next button.

6. Select the grouping levels. At this stage, you can select one or more grouping levels to refine the order in which our report data is presented. For example, we may wish to break down our telephone directory by department so that all of the members of each department are listed separately. However, due to the small number of employees in our database, this is not necessary for our report. Go ahead and simply click on the Next button to bypass this step. You may wish to return here later and experiment with grouping levels.

Select the fields to include

Choose the grouping levels

7. Choose your sorting options. In order to make reports useful, we often want to sort our results by one or more attributes. In the case of our telephone directory, the logical choice is to sort by the last name of each employee. Select this attribute from the first drop-down box and then click the Next button to continue.

Choose the sorting options

8. Choose the formatting options. In the next screen, we're presented with some formatting options. We'll accept the default tabular layout, but let's change the page orientation to landscape to ensure the data fits properly on the page. Once you've completed this, click the Next button to continue.

9. Select a report style. The next screen asks you to select a style for your report. Click on the various options and you'll see a preview of your report in that style in the left portion of the screen. We'll use the Corporate style for this report. Select this option and then click the Next button to move on.

Choose Formatting Options

Select a Report Style

10. Add the title. Finally, we need to give the report a title. Access will automatically provide a nicely formatted title at the top of the screen, with the appearance shown in the report style you selected during the previous step. Let's call our report Employee Home Phone List. Make sure that the Preview the report option is selected and click on Finish to see our report!

Adding a Title

Our Finished Product

Congratulations, you've successfully created a report in Microsoft Access! The final report you see should appear similar to the one presented above. When you close this report, you'll once again see the main database menu illustrated below. Notice that your report now appears in the list (I've added a red box to the figure below for your viewing convenience; this won't appear on your screen). In the future, you can simply double-click on the report title and a new report will instantly be generated with up-to-date information from your database.

Updated Reports Menu

ANIMATION IN POWER POINT


Animations in Microsoft PowerPoint refer to the way that items, such as text boxes, bullet points or images, move onto a slide during a slide show. There are two types of animations available in PowerPoint: a) Preset Animation Schemes, which affect all of the content on a slide, and b) Custom Animations, which allow you to apply a variety of animation effects to individual items on a slide. Note: all versions of PowerPoint have the Custom Animations feature, but Animation Schemes are specific to PowerPoint 2003.

While PowerPoint animations can certainly add variety and interest to your presentation, be careful in how you use them. The most common mistake in using animations is applying too many, which can overwhelm and distract your audience. Stick to one, or at most two, different animations throughout the show. Choose animations that are appropriate to the subject matter. Animations are one of the finishing touches to a presentation; wait until you have the slides edited and arranged in the preferred order before setting animations.

Custom Animation Custom Animation is a set of effects which can be applied to objects in PowerPoint so that they will animate in the Slide Show. They can be added under the Custom Animation function (Slide Show | Custom Animation) or through the use of Visual Basic for Applications (VBA). PowerPoint 2000 and earlier versions introduced basic effects such as Appear, Dissolve, Fly In and so forth. In PowerPoint 2002/XP and later versions, the Custom Animation feature was improved, adding new animation effects grouped into four categories: Entrance, Emphasis, Exit and Motion Paths. Entrance effects can be set on objects so that they enter with animations during the Slide Show. Emphasis effects animate objects on the spot. Exit effects allow objects to leave the Slide Show with animations. Motion Paths allow objects to move around the Slide Show.
Each effect contains variables such as start (On click, With previous, After previous), delay, speed, repeat and trigger. This makes animations more flexible and interactive. Animation Trigger: Animation Trigger is another feature introduced in Microsoft PowerPoint 2002/XP and the later versions. This feature allows animators to apply effects that can be triggered when a specific object on the Slide Show is clicked. This feature is the basis for the majority of PowerPoint games, which usually involve clicking objects to advance in the game.

While PowerPoint offers various distribution formats, notably PowerPoint Show (.pps, .ppsx) and web page (.html), it should be noted that not all animation functions work accurately when saved as a web page or executed with a PowerPoint Viewer. A standalone EXE, with PowerPoint Viewer embedded, is also an alternative way for a creator to distribute their work. This allows an audience without access to PowerPoint to view these works as well. A screen capture can also be used to manually convert a PowerPoint movie into a more viable format (e.g., WMV).

Drawbacks It may be much more tedious to complete a project in PowerPoint than in professional animation programs such as Adobe Flash, due to the absence of key frames and tweening. When effects such as Emphasis Grow/Shrink and Spin are applied to objects, they may appear jagged or pixelated when previewed in the slide show. In addition, excessive use of these effects may degrade the slide show's performance. (PowerPoint's built-in hardware graphics acceleration feature does help in minimizing these setbacks; however, it requires a video card that supports Microsoft Direct3D.) PowerPoint 2000 and later versions introduced macro security to help protect computers from malicious code within a PowerPoint presentation. This led to all VBA or macro code being disabled by default, causing presentations containing code to fail to run properly. This complication can be fixed by adjusting the macro security settings to Low. The Security Warning in PowerPoint 2007 alerts the user to macros in a presentation as soon as it is opened, giving the option to run the presentation with or without the macros enabled.

E-Commerce
E-commerce (electronic commerce or EC) is the buying and selling of goods and services on the Internet. In practice, this term and e-business are often used interchangeably. For online retail selling, the term e-tailing is sometimes used. Aspects of e-commerce include:

Websites with online catalogs.
The gathering and use of demographic data through Web contacts.
Electronic Data Interchange (EDI), the business-to-business exchange of data.
Email, instant messaging, and social networking as media for reaching prospects and established customers.
Business-to-business buying and selling.
The security of business transactions.

As a place for direct retail shopping, with its 24-hour availability, global reach, the ability to interact and provide custom information and ordering, and multimedia prospects, the Web is a multi-billion-dollar source of revenue for the world's businesses. As early as the middle of 1997, Dell Computers reported orders of a million dollars a day. By early 1999, projected e-commerce revenues for business were in the billions of dollars, and the stocks of companies deemed most adept at e-commerce were skyrocketing. Web retailing continues to grow. The Internet is now a flourishing industry. With technology advancing at a fast rate, more and more people are exposed to computers and the Internet. Increasingly they are learning to utilize the Internet for their day-to-day needs. Here e-commerce websites take a front seat, reaching out to the millions of people searching online for your kind of product or service.

Electronic commerce that is conducted between businesses is referred to as business-to-business or B2B. B2B can be open to all interested parties (e.g., a commodity exchange) or limited to specific, pre-qualified participants (a private electronic market). Electronic commerce that is conducted between businesses and consumers, on the other hand, is referred to as business-to-consumer or B2C. This is the type of electronic commerce conducted by companies such as Amazon.com. Electronic commerce is generally considered to be the sales aspect of e-business. It also consists of the exchange of data to facilitate the financing and payment aspects of business transactions.

Evolution of E-Business, E-Commerce, and E-Governance

1) Presence on the Internet (e.g., access a portal)
2) Interaction with visitors (e.g., download a form)
3) Transaction (e.g., buy something; e-business becomes e-commerce from this point)
4) Transformation

5) Concept of B2B e-commerce (e.g., inter-organizational)
6) Concept of B2C e-commerce (e.g., Amazon.com)
7) Concept of C2C e-commerce
8) Concept of B2G e-governance (e.g., central excise taxes)
9) Concept of C2G e-governance (e.g., income tax online, tax refunds)
10) Concept of G2G e-governance (e.g., one state interacting with another state, centre to centre, country to country, etc.)

E-commerce - is it for you? Putting it simply, e-commerce or electronic commerce means the buying and selling of goods and services on the Internet. Before making any decision in business, it is worth taking into consideration the benefits the company would reap on implementation of the new strategy of e-commerce. So, the first and foremost thing that you need to know is whether your kind of business needs an e-commerce-enabled website. The arrival of e-commerce websites has brought upheaval in the process of purchasing and selling goods. In this regard, it is important to consider websites like eBay, Amazon, etc. They have largely used the e-commerce procedure to generate sales revenue. So, whether you have an existing business or are launching a brand new business, and whether the volume of your business is large or small, you can always generate profit by demonstrating your products or services online, thereby acquiring a large amount of viewer exposure. In a nutshell, just about any selling/buying business can profit from the e-commerce method. The benefits of having an e-commerce website are many.

a) Revelation - Your products showcased on your website get huge exposure to the millions of visitors on the web. For example, if you have a computer showroom in a city, the visitors that you get will only be people from the city itself, and possibly some from in and around the city. Once in a while there will be visitors from places outside your city. Thus your product exposure is limited.
On the contrary, if your products are demonstrated on a website, you are connected to the plentiful people who access the internet looking for products similar to yours.
b) Time and Convenience - Time is one of the crucial factors in our lives nowadays. A customer may find it difficult to visit your store physically every time. On the other hand, if your store is on view on the Internet, anyone can pay a visit to your online store at a time convenient to them. Your physical store shuts down at some point each day, but your e-store works 24x7 to bring in customers. Moreover, with all your product images and descriptions provided on your e-store, the customer gets a detailed idea of your product, and you do not have to squander time explaining the same thing to each and every visitor to your store. It further saves time per transaction: a sales executive will take some time to illustrate your product to each customer, while your ecommerce website does

the same task for hundreds and thousands of your likely customers at the same time.
c) Cost Effective - Sustaining a store in a prime locality is highly expensive. In contrast, ecommerce is a far cheaper way of demonstrating and providing information about your products. Moreover, promotion of your store and its products has to be carried out from time to time, and web advertisement costs less than print or audio-visual media such as radio or TV.

Business applications
Some common applications related to electronic commerce are the following:
a) Email
b) Enterprise content management
c) Instant messaging
d) Newsgroups
e) Online shopping and order tracking
f) Online banking
g) Online office suites
h) Domestic and international payment systems
i) Shopping cart software
j) Teleconferencing
k) Electronic tickets

Electronic commerce or e-commerce refers to a wide range of online business activities for products and services. It also pertains to any form of business transaction in which the parties interact electronically rather than by physical exchanges or direct physical contact. E-commerce is usually associated with buying and selling over the Internet, or conducting any transaction involving the transfer of ownership or rights to use goods or services through a computer-mediated network. Though popular, this definition is not comprehensive enough to capture recent developments in this new and revolutionary business phenomenon. A more complete definition is: e-commerce is the use of electronic communications and digital information processing technology in business transactions to create, transform, and redefine relationships for value creation between or among organizations, and between organizations and individuals. International Data Corp (IDC) estimated the value of global e-commerce in 2000 at US$350.38 billion, projected to climb to as high as US$3.14 trillion by 2004.

Is e-commerce the same as e-business?
While some use e-commerce and e-business interchangeably, they are distinct concepts. In e-commerce, information and communications technology (ICT) is used in inter-business or inter-organizational transactions (transactions between and among firms/organizations) and in business-to-consumer transactions (transactions between firms/organizations and individuals). In e-business, on the other hand, ICT is used to enhance one's business. It includes any process that a business organization

(either a for-profit, governmental, or non-profit entity) conducts over a computer-mediated network. A more comprehensive definition of e-business is: the transformation of an organization's processes to deliver additional customer value through the application of technologies, philosophies, and computing paradigms of the new economy. Three primary processes are enhanced in e-business:
1. Production processes, which include procurement, ordering and replenishment of stocks; processing of payments; electronic links with suppliers; and production control processes, among others;
2. Customer-focused processes, which include promotional and marketing efforts, selling over the Internet, processing of customers' purchase orders and payments, and customer support, among others; and
3. Internal management processes, which include employee services, training, internal information-sharing, video-conferencing, and recruiting. Electronic applications enhance information flow between production and sales forces.

What are the different types of e-commerce?
The major types of e-commerce are: business-to-business (B2B); business-to-consumer (B2C); business-to-government (B2G); consumer-to-consumer (C2C); and mobile commerce (m-commerce).

What is B2B e-commerce?
B2B e-commerce is simply defined as e-commerce between companies. This is the type of e-commerce that deals with relationships between and among businesses. About 80% of e-commerce is of this type, and most experts predict that B2B e-commerce will continue to grow faster than the B2C segment. The B2B market has two primary components: e-frastructure and e-markets.
E-frastructure is the architecture of B2B, primarily consisting of the following:
o logistics: transportation, warehousing and distribution (e.g., Procter and Gamble);
o application service providers: deployment, hosting and management of packaged software from a central facility (e.g., Oracle and Linkshare);
o outsourcing of functions in the process of e-commerce, such as Web hosting, security and customer care solutions (e.g., outsourcing providers such as eShare, NetSales, iXL Enterprises and Universal Access);
o auction solutions software for the operation and maintenance of real-time auctions on the Internet (e.g., Moai Technologies and OpenSite Technologies);
o content management software for the facilitation of Web site content management and delivery (e.g., Interwoven and ProcureNet); and
o Web-based commerce enablers (e.g., Commerce One, a browser-based, XML-enabled purchasing automation software).
Most B2B applications are in the areas of supplier management (especially purchase order processing), inventory management (i.e., managing order-ship-bill

cycles), distribution management (especially in the transmission of shipping documents), channel management (i.e., information dissemination on changes in operational conditions), and payment management (e.g., electronic payment systems or EPS).

What is B2C e-commerce?
Business-to-consumer e-commerce, or commerce between companies and consumers, involves customers gathering information; purchasing physical goods (i.e., tangibles such as books or consumer products) or information goods (goods of electronic material or digitized content, such as software or e-books); and, for information goods, receiving products over an electronic network. It is the second largest and the earliest form of e-commerce. Its origins can be traced to online retailing (or e-tailing). Thus, the more common B2C business models are the online retailing companies such as Amazon.com, Drugstore.com, Beyond.com, Barnes and Noble and Toys R Us. Other B2C examples involving information goods are E-Trade and Travelocity. The more common applications of this type of e-commerce are in the areas of purchasing products and information, and personal finance management, which pertains to the management of personal investments and finances with the use of online banking tools (e.g., Quicken).

What is B2G e-commerce?
Business-to-government e-commerce or B2G is generally defined as commerce between companies and the public sector. It refers to the use of the Internet for public procurement, licensing procedures, and other government-related operations. This kind of e-commerce has two features: first, the public sector assumes a pilot/leading role in establishing e-commerce; and second, it is assumed that the public sector has the greatest need for making its procurement system more effective. Web-based purchasing policies increase the transparency of the procurement process (and reduce the risk of irregularities).
To date, however, the size of the B2G e-commerce market as a component of total e-commerce is insignificant, as government e-procurement systems remain undeveloped.

What is C2C e-commerce?
Consumer-to-consumer e-commerce or C2C is simply commerce between private individuals or consumers. This type of e-commerce is characterized by the growth of electronic marketplaces and online auctions, particularly in vertical industries where firms/businesses can bid for what they want from among multiple suppliers. It perhaps has the greatest potential for developing new markets. This type of e-commerce comes in at least three forms:
o auctions facilitated at a portal, such as eBay, which allows online real-time bidding on items being sold on the Web;
o peer-to-peer systems, such as the Napster model (a protocol for sharing files directly between users) and other file

exchange and, later, money exchange models; and
o classified ads at portal sites such as Excite Classifieds and eWanted (an interactive, online marketplace where buyers and sellers can negotiate and which features "Buyer Leads & Want Ads").
There is little information on the relative size of global C2C e-commerce. However, figures from popular C2C sites such as eBay and Napster indicate that this market is quite large. These sites produce millions of dollars in sales every day.
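The portal-style auction model described above can be sketched as a minimal bid tracker. This is an illustrative toy under simplified assumptions, not any real site's API; the class, item, and bidder names are invented for the example.

```python
class Auction:
    """Minimal sketch of portal-style C2C bidding (illustrative only)."""

    def __init__(self, item, starting_price):
        self.item = item
        self.highest_bid = starting_price
        self.highest_bidder = None

    def bid(self, bidder, amount):
        # A new bid is accepted only if it beats the current highest bid.
        if amount > self.highest_bid:
            self.highest_bid = amount
            self.highest_bidder = bidder
            return True
        return False


auction = Auction("used laptop", 100.0)
auction.bid("alice", 120.0)   # accepted: above the current highest bid
auction.bid("bob", 110.0)     # rejected: below the current highest bid
print(auction.highest_bidder, auction.highest_bid)
```

A real portal would add persistence, concurrency control, and a closing time for each auction; the core accept/reject rule, however, is just the comparison shown here.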

What is m-commerce?
M-commerce (mobile commerce) is the buying and selling of goods and services through wireless technology, i.e., handheld devices such as cellular telephones and personal digital assistants (PDAs). Japan is seen as a global leader in m-commerce. As content delivery over wireless devices becomes faster, more secure, and scalable, some believe that m-commerce will surpass wireline e-commerce as the method of choice for digital commerce transactions. This may well be true for the Asia-Pacific, where there are more mobile phone users than there are Internet users. Industries affected by m-commerce include:
o Financial services, including mobile banking (when customers use their handheld devices to access their accounts and pay their bills), as well as brokerage services (in which stock quotes can be displayed and trading conducted from the same handheld device);
o Telecommunications, in which service changes, bill payment and account reviews can all be conducted from the same handheld device;
o Service/retail, as consumers are given the ability to place and pay for orders on-the-fly; and
o Information services, which include the delivery of entertainment, financial news, sports scores and traffic updates to a single mobile device.

What forces are fueling e-commerce?
There are at least three major forces fueling e-commerce: economic forces, marketing and customer interaction forces, and technology, particularly multimedia convergence.
Economic forces: One of the most evident benefits of e-commerce is economic efficiency, resulting from the reduction in communications costs, low-cost technological infrastructure, speedier and more economical electronic transactions with suppliers, lower global information sharing and advertising costs, and cheaper customer service alternatives. Economic integration is either external or internal. External integration refers to the electronic networking of corporations, suppliers, customers/clients, and independent contractors into one community communicating in a virtual environment (with the Internet as medium). Internal integration, on the other hand, is the networking of the various departments within a corporation, and of business operations and processes. This allows critical business information to be stored in digital form that can be retrieved instantly and transmitted electronically. Internal integration is best exemplified by

corporate intranets. Among the companies with efficient corporate intranets are Procter and Gamble, IBM, Nestle and Intel.
Market forces: Corporations are encouraged to use e-commerce in marketing and promotion to capture international markets, both big and small. The Internet is likewise used as a medium for enhanced customer service and support. It is a lot easier for companies to provide their target consumers with more detailed product and service information using the Internet.
Technology forces: The development of ICT is a key factor in the growth of e-commerce. For instance, technological advances in digitizing content, compression and the promotion of open systems technology have paved the way for the convergence of communication services into one single platform. This in turn has made communication more efficient, faster, easier, and more economical, as the need to set up separate networks for telephone services, television broadcast, cable television, and Internet access is eliminated. From the standpoint of firms/businesses and consumers, having only one information provider means lower communications costs.

How can government use e-commerce?
Government can use e-commerce in the following ways:
E-procurement: Government agencies should be able to trade electronically with all suppliers using open standards, through agency enablement programs, supplier enablement programs, and e-procurement information systems.
Customs clearance: With the computerization of customs processes and operations (i.e., electronic submission, processing and electronic payment, and automated systems for data entry to integrate customs tables, codes and pre-assessment), one can expect more predictable and more precise information on clearing times and delivery of shipments, and increased legitimate revenues.
Tax administration: This includes a system for electronic processing and transmission of tax return information, online issuance of tax clearances, permits, and licenses, and an electronic process for registration of businesses and new taxpayers, among others.
More often than not, the e-commerce initiatives of government are a barometer of whether or not the infrastructure supports e-commerce use by private firms. This means that if government is unable to engage in e-procurement, secure records online, or have customs fees remitted electronically, then the private sector will also have difficulties in e-commerce uptake. Many of the benefits from e-commerce accrue to government itself, as the experiences of some countries show.
E-Government: Government should be the lead user of e-commerce if various business and private-sector related activities are to be prompted to move online. In effect, government becomes a positive influence. E-government can take the form of various online transactions such as company registration, taxation, applications for a variety of employee- and business-related requirements, and the like.

FLOW CHARTING
INTRODUCTION
A flowchart is a means of visually presenting the flow of data through an information processing system, the operations performed within the system, and the sequence in which they are performed. The program flowchart can be likened to the blueprint of a building. Just as a designer draws a blueprint before starting construction on a building, a programmer prefers to draw a flowchart prior to writing a computer program. As with a blueprint, the flowchart is drawn according to defined rules, using standard flowchart symbols prescribed by the American National Standards Institute (ANSI).

MEANING OF A FLOWCHART
A flowchart is a diagrammatic representation that illustrates the sequence of operations to be performed to arrive at the solution of a problem. Flowcharts are generally drawn in the early stages of formulating computer solutions. Flowcharts facilitate communication between programmers and business people. They play a vital role in programming a problem and are quite helpful in understanding the logic of complicated and lengthy problems. Once the flowchart is drawn, it becomes easy to write the program in any high-level language. Flowcharts are also helpful in explaining the program to others. Hence, a flowchart is a must for the better documentation of a complex program.

GUIDELINES FOR DRAWING A FLOWCHART
Flowcharts are usually drawn using standard symbols; however, some special symbols can also be developed when required. Some standard flowchart symbols, which are frequently required for flowcharting many computer programs, are shown in the figure below.

Types of flowcharts

High-Level Flowchart
A high-level (also called first-level or top-down) flowchart shows the major steps in a process. It illustrates a "bird's-eye view" of a process, such as the example in the figure entitled High-Level Flowchart of Prenatal Care. It can also include the intermediate outputs of each step (the product or service produced) and the sub-steps involved. Such a flowchart offers a basic picture of the process and identifies the changes taking place within it. It is especially useful for identifying appropriate team members (those who are involved in the process) and for developing indicators for monitoring the process, because of its focus on intermediate outputs. Most processes can be adequately portrayed in four or five boxes that represent the major steps or activities of the process. In fact, it is a good idea to use only a few boxes, because doing so forces one to consider the most important steps. Other steps are usually sub-steps of the more important ones.

Detailed Flowchart
The detailed flowchart provides a detailed picture of a process by mapping all of the steps and activities that occur in the process. This type of flowchart indicates the steps or activities of a process and includes such things as decision points, waiting periods, tasks that frequently must be redone (rework), and feedback loops. It is useful for examining areas of the process in detail and for looking for problems or areas of inefficiency. For example, the Detailed Flowchart of Patient Registration reveals the delays that result when the record clerk and clinical officer are not available to assist clients.

Deployment or Matrix Flowchart
A deployment flowchart maps out the process in terms of who is doing the steps. It is in the form of a matrix, showing the various participants and the flow of steps among these participants. It is chiefly useful in identifying who is providing inputs or services to whom, as well as areas where different people may be needlessly doing the same task.

Which Flowchart Should be Used
Each type of flowchart has its strengths and weaknesses; the high-level flowchart is the easiest to construct but may not provide sufficient detail for some purposes. In choosing which type to use, the group should be clear on their purpose for flowcharting.

Another common classification distinguishes four general types of flowchart:
o Document flowcharts, showing controls over a document flow through a system
o Data flowcharts, showing controls over data flows in a system
o System flowcharts, showing controls at a physical or resource level
o Program flowcharts, showing the controls in a program within a system
Notice that every type of flowchart focuses on some kind of control, rather than on the particular flow itself.

Software to make flowcharts


o Manual: Some tools offer special support for flowchart drawing, e.g., Visio and OmniGraffle.
o Automatic: Many software packages can create flowcharts automatically, either directly from source code or from a flowchart description language.
o Web-based: Online flowchart solutions have also become available, e.g., Creately.
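As a small illustration of the "flowchart description language" approach mentioned above, the sketch below emits Graphviz DOT text for a linear three-step process flowchart. The step names are invented for the example, and rendering the output into an image requires the separate Graphviz dot tool.

```python
def to_dot(steps):
    """Emit Graphviz DOT for a linear flowchart: one box per step, arrows in order."""
    lines = ["digraph flowchart {", "  node [shape=box];"]
    # Each consecutive pair of steps becomes one directed edge.
    for a, b in zip(steps, steps[1:]):
        lines.append(f'  "{a}" -> "{b}";')
    lines.append("}")
    return "\n".join(lines)


print(to_dot(["Receive order", "Check stock", "Ship goods"]))
```

Saving the printed text to a file and running, say, `dot -Tpng flow.dot -o flow.png` would produce the diagram; decision diamonds and loops would need extra node shapes and edges beyond this linear sketch.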

DEVELOPMENT OF MODERN COMPUTING


Computer History
Early personal computers, called "microcomputers", were downright expensive and cost-prohibitive for the masses. It's no surprise, then, that the personal computer revolution really launched with the introduction of the Apple Computer Company and its 1977 Apple II model. The Apple II was the first computer to offer color graphics; it sported 4 KB of RAM and sold for around $1300.

In 1981 IBM introduced its IBM PC, featuring an operating system known as "DOS" made by a small, 32-person company known as Microsoft. This marked the start of the competitive development of the modern-day computer. From this point on, the world of computers took two sides. On one side was Apple Computer and its Macintosh line of computers. On the other, the army of personal computers whose ranks included not only IBM, but eventually names like Gateway, Dell, HP, Sony Vaio and Compaq.
Shortly after the first mass-produced calculator (1820), Charles Babbage begins work on his difference engine, developed through the 1820s and 1830s. In 1854, George Boole, professor of Mathematics at Queen's College, Cork, publishes An Investigation of the Laws of Thought; he is generally recognized as a father of computer science. The 1890 US census is tabulated on electrically powered (non-mechanical) punched-card machines developed by Herman Hollerith. In 1892, William Burroughs, a sickly ex-teller, introduces a commercially successful printing calculator. In 1925, unaware of the work of Charles Babbage, Vannevar Bush of MIT builds a machine he calls the differential analyzer. Using a set of gears and shafts, much like Babbage, the machine can handle simple calculus problems, but accuracy is a problem. In 1935, Konrad Zuse, a German construction engineer, begins building a mechanical calculator to handle the math involved in his profession, and completes a programmable mechanical computer in 1938. In 1943, development begins in earnest on the Electronic Numerical Integrator And Computer (ENIAC) at the University of Pennsylvania. Designed by John Mauchly and J. Presper Eckert of the Moore School, they get help from John von Neumann and others. In 1944, the Harvard Mark I is introduced; it computes complex tables for the U.S. Navy. In 1945, von Neumann proposes the concept of a "stored program" in a paper that is never officially published.
In 1947, scientists employed by Bell Labs complete work on the transistor (John Bardeen, Walter Brattain and William Shockley receive the Nobel Prize in Physics in 1956). IBM introduces the 701, its first commercially successful computer. In 1957 FORTRAN is introduced. In 1961 Fairchild Semiconductor introduces the integrated circuit; within ten years all computers use these instead of the transistor. Formerly building-sized computers are now room-sized, and are considerably more powerful. On April 7, 1964, IBM introduces the System/360. While a technical marvel, the main feature of this machine is business-oriented: IBM guarantees the "upward compatibility" of the system, reducing the risk that a business would invest in outdated technology. In 1969 Bell Labs, unhappy with the direction of the Multics project with MIT, leaves and develops its own operating system, UNIX. One of the many precursors to today's Internet, ARPANet, is quietly launched. Alan Kay, who will later work at Apple, proposes the "personal computer."

Around the same time, unhappy with Fairchild Semiconductor, a group of its people leave to form their own company, which would be known as Intel. In 1971, Texas Instruments introduces the first "pocket calculator"; it weighs 2.5 pounds. In 1973, Xerox introduces the mouse on its Alto workstation, and proposals are made for the first local area networks. In 1975 the first personal computer, the Altair, is marketed in kit form; it features 256 bytes of memory. Bill Gates, with others, writes a BASIC interpreter for the machine. The next year Apple begins to market PCs, also in kit form. During the next few years the personal computer explodes on the American scene. Microsoft, Apple and many smaller PC-related companies form (and some die). By 1977 stores begin to sell PCs. Continuing today, companies strive to reduce the size and price of PCs while increasing capacity. Entering the fray, IBM introduces its PC in 1981. Time names the computer its "Machine of the Year" for 1982, and Tron, a computer-generated special-effects extravaganza, is released the same year.

FACTORS AFFECTING THE PERFORMANCE OF A DESKTOP COMPUTER


The three top factors that can slow a computer down are spyware, a cluttered registry, and unwanted desktop items.
Spyware is one of the leading causes of a slow computer. Spyware programs are malicious programs that install themselves quietly on your computer so that their makers can monitor your internet activity. It helps to invest in a good anti-spyware program to prevent this from happening. This usually comes with an anti-virus program, which scans your system to remove the spyware and other viruses.
Having a cluttered system registry can really slow a computer down when it is packed with unnecessary registry entries. Registry entries are very important to a computer, as they contain preferences and settings for your computer. However, unwanted entries get added into the system's registry each time you add new software or hardware to your computer. Unfortunately, they continue to clutter your system's registry even after you remove or delete the software or hardware, because they usually remain in the registry long after you uninstall the programs, and you might not even be aware of their existence. Nonetheless, you can fix this problem by using a registry cleaner, which scans your computer and removes these unnecessary registry entries. When a scan is done, it generates a list of unneeded registry entries for you to delete.
The last cause of a slow-running computer can be found on your desktop. Many love the convenience of having shortcuts on their desktops, as well as high-resolution wallpaper and animated cursors. Unfortunately, all of these things can slow down your computer, as they use up valuable system resources. It is better to turn these items off and not use them when you are working on the computer.
This will free up some memory in your computer, letting it run more smoothly and efficiently. In a nutshell, be careful with:
- Computer viruses
- Spyware and adware
- Too many installed programs (programs running in the background)
- A hard drive that is too full
- Software conflicts
- Hardware conflicts
- Not performing regular routine maintenance
Even the process of adding and deleting programs can affect your PC's performance. When you install and uninstall Windows programs, they leave behind parts or applications that can slow down your computer. You may even unknowingly delete a file needed by other software applications. This can result in a minor glitch, or a complete failure of some programs or even your whole system.
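Of the causes listed above, a hard drive that is too full is the easiest to check programmatically. The sketch below uses Python's standard shutil.disk_usage; the 90% threshold is an arbitrary illustration, not an established rule.

```python
import shutil


def drive_nearly_full(path="/", threshold=0.90):
    """Return True if the drive holding `path` is more than `threshold` full."""
    usage = shutil.disk_usage(path)  # named tuple: total, used, free (bytes)
    return usage.used / usage.total > threshold


print("Drive nearly full:", drive_nearly_full())
```

A maintenance script could run such a check periodically and warn the user to clear space before performance suffers.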

a) What do you understand by the term Operating System?
b) What are the essentials of an Operating System?
c) Draw a comparison between any two operating systems.
d) What are Applications, and how do they differ from System Software?
Meaning of an Operating System
The operating system is a set of services which simplifies the development of applications. Executing a program involves the creation of a process by the operating system. The kernel creates a process by assigning memory and other resources, establishing a priority for the process (in multi-tasking systems), loading the program code into memory, and executing the program. The program then interacts with the user and/or other devices and performs its intended function.
In computing, an operating system (OS) is an interface between hardware and user which is responsible for the management and coordination of activities and the sharing of the resources of a computer, and which acts as a host for computing applications run on the machine. One of the purposes of an operating system is to handle the resource allocation and access protection of the hardware. This relieves application programmers from having to manage these details.
Operating systems offer a number of services to application programs and users. Applications access these services through application programming interfaces (APIs) or system calls. By invoking these interfaces, the application can request a service from the operating system, pass parameters, and receive the results of the operation. Users may also interact with the operating system through a software user interface, either by typing commands at a command-line interface (CLI) or by using a graphical user interface (GUI). For hand-held and desktop computers, the user interface is generally considered part of the operating system. On large systems such as Unix-like systems, the user interface is generally implemented as an application program that runs outside the operating system. While servers generally run Unix or some Unix-like operating system, embedded system markets are split amongst several operating systems, and the Microsoft Windows line of operating systems has almost 90% of the client PC market.
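The API/system-call relationship described above can be seen from Python, whose standard os module wraps common system calls; each call below asks the kernel to do the work and returns the result to the program. The file name is invented for the example.

```python
import os

pid = os.getpid()    # system call: ask the kernel for this process's ID
cwd = os.getcwd()    # system call: ask the kernel for the current directory

# File I/O also goes through the kernel: open, write and close are system calls.
fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"written via system calls\n")
os.close(fd)

print("process", pid, "running in", cwd)
```

In each case the program passes parameters (a path, a buffer, a file descriptor), the kernel performs the privileged work, and the result comes back to the application, exactly the request/response pattern the text describes.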

Essentials of an Operating System

1) Interrupts: Interrupts are central to operating systems, as they provide an efficient way for the operating system to interact with and react to its environment. Interrupt-based programming is directly supported by most modern CPUs. Interrupts provide a computer with a way of automatically saving local register contexts and running specific code in response to events. Even very basic computers support hardware interrupts, and allow the programmer to specify code which may be run when an event takes place. When an interrupt is received, the computer's hardware automatically suspends whatever program is currently running, saves its status, and runs computer code previously associated with the interrupt; this is analogous to placing a bookmark in a book in response to a phone call. In modern operating systems, interrupts are handled by the operating system's kernel. Interrupts may come from either the computer's hardware or from the running program.
2) Protected mode and supervisor mode: Modern CPUs support dual-mode operation, with two modes: protected mode and supervisor mode, which allow certain CPU functions to be controlled and affected only by the operating system kernel. When a computer first starts up, it is automatically running in supervisor mode. The first few programs to run on the computer (the BIOS, the bootloader, and the operating system) have unlimited access to hardware; this is required because, by definition, initializing a protected environment can only be done outside of one. However, when the operating system passes control to another program, it can place the CPU into protected mode. In protected mode, programs may have access to a more limited set of the CPU's instructions. A user program may leave protected mode only by triggering an interrupt, causing control to be passed back to the kernel.
In this way the operating system can maintain exclusive control over things like access to hardware and memory. 3) Memory management: Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory currently in use by programs. This ensures that a program does not interfere with memory already used by another program. Since programs time-share, each program must have independent access to memory. 4) Virtual memory: The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks. In modern operating systems, memory which is accessed less frequently can be temporarily stored on disk or other media to make that space available for use by other programs. 5) Multitasking: Multitasking refers to the running of multiple independent computer programs on the same computer, giving the appearance that it is performing the tasks at the same time. Since most computers can do at most one or two things at one time, this is generally done via time-sharing, which means that each program uses a share of the computer's time to execute. An operating system kernel contains a piece of software called a scheduler, which determines how much time each program will spend executing, and in which order execution control should be passed to programs. 6) Virtual file system: Access to data stored on disks is a central feature of all operating systems. Computers store data on disks using files, which are structured in specific ways in order to allow for faster access, higher reliability, and better use of the drive's available space. The specific way in which files are stored on a disk is called a file system; it enables files to have names and attributes, and allows them to be stored in a hierarchy of directories or folders arranged in a directory tree. 7) Device driver: A device driver is a specific type of computer software developed to allow interaction with hardware devices. It is a specialized, hardware-dependent (and operating-system-specific) program that enables another program, typically the operating system, an applications package, or a program running under the kernel, to interact transparently with a hardware device. It usually also provides the interrupt handling required for asynchronous, time-dependent hardware interfaces. 8) Computer network: Currently most operating systems support a variety of networking protocols, hardware, and applications for using them. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections. Networks can essentially allow a computer's operating system to access the resources of a remote computer to support the same functions as it could if those resources were connected directly to the local computer. This includes everything from simple communication to using networked file systems, or even sharing another computer's graphics or sound hardware. Client/server networking involves a program on one computer connecting via a network to another computer, called a server. Servers offer (or host) various services to other network computers and users. 9) Computer security: A computer being secure depends on a number of technologies working properly.
A modern operating system provides access to a number of resources, which are available to software running on the system, and to external devices like networks via the kernel. The operating system must be capable of distinguishing between requests which should be allowed to be processed and others which should not. While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester identity, such as a user name. To establish identity there may be a process of authentication. Often a username must be quoted, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially connections from the network, resources may be accessed with no authentication at all (such as reading files over a network share). 10) File system support in modern operating systems: Support for file systems is highly varied among modern operating systems, although there are several common file systems for which almost all operating systems include support and drivers. Operating systems vary on file system support and on the disk formats they may be installed on. 11) Graphical user interfaces: Most modern computer systems support graphical user interfaces (GUIs), and often include them. In some computer systems, such as the original implementations of Microsoft Windows and the Mac OS, the GUI is integrated into the kernel. While technically a graphical user interface is not an operating system service, incorporating support for one into the operating system kernel can allow the GUI to be more responsive by reducing the number of context switches required for the GUI to perform its output functions. Many computer operating systems allow the user to install or create any user interface they desire. The X Window System in conjunction with GNOME or KDE is a commonly found setup on most Unix and Unix-like (BSD, Linux, Solaris) systems. A number of Windows shell replacements have been released for Microsoft Windows, which offer alternatives to the included Windows shell, but the shell itself cannot be separated from Windows. 12) Various OS: a. Windows b. Linux/Unix c. Ubuntu d. Mac OS X e. Google Chrome OS f. Plan 9 g. Real-time operating systems (RTOS)

COMPARISON BETWEEN MS-WINDOWS AND LINUX

Both Linux and Windows are operating systems. An operating system is the most important program that runs on a computer. Every general-purpose computer must have an operating system to run other programs. Operating systems perform basic tasks, such as recognizing input from the keyboard, sending output to the display screen, keeping track of files and directories on the disk, and controlling peripheral devices such as disk drives and printers.

Other differences between Linux and Windows: 1) Linux is customizable in a way that Windows is not. 2) For desktop or home use, Linux is very cheap or free, while Windows is expensive. For server use, Linux is very cheap compared to Windows. Microsoft allows a single copy of Windows to be used on only one computer; starting with Windows XP, it uses software to enforce this rule (activation). In contrast, once you have purchased Linux, you can run it on any number of computers for no additional charge. 3) You have to log on to Linux with a userid and password. This is not true of Windows. 4) Linux has a reputation for fewer bugs than Windows. 5) Windows must boot from a primary partition. Linux can boot from either a primary partition or a logical partition inside an extended partition.

6) Windows uses a hidden file for its swap file; Linux uses a dedicated partition for its swap space. 7) Windows uses FAT12, FAT16, FAT32 and/or NTFS, with NTFS almost always being the best choice. Linux also has a number of its own native file systems (ext3 and ext4 are common defaults). All of these file systems use directories and subdirectories. Windows separates directories with a backslash; Linux uses a forward slash. Windows file names are not case sensitive; Linux file names are. 8) Windows and Linux use different concepts for their file hierarchy. Windows uses a volume-based file hierarchy; Linux uses a unified scheme. Windows uses letters of the alphabet to represent different devices and different hard disk partitions. Under Windows, you need to know what volume (C:, D:, ...) a file resides on to select it; the file's physical location is part of its name. In Linux all directories are attached to the root directory, which is identified by a forward slash, "/". 9) Both support the concept of hidden files. Linux implements this with a filename that starts with a period; Windows tracks it as a file attribute in the file metadata (along with things like the last update date). Windows allows programs to store user information (files and settings) anywhere, which makes it very hard to back up user data files and settings and to switch to a new computer. In contrast, Linux stores all user data in the home directory, making it much easier to migrate from an old computer to a new one. If home directories are segregated in their own partition, you can even upgrade from one version of Linux to another without having to migrate user data and settings. 10) Full access vs. no access 11) Licensing freedom vs. licensing restrictions 12) Online peer support vs. paid help-desk support 13) Full vs. partial hardware support 14) Command line vs. no command line 15) Automated vs. non-automated removable media 16) Multilayered run levels vs. a single-layered run level
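The path-convention differences in points 7 to 9 above can be demonstrated with Python's standard pathlib module, which models both conventions without needing either operating system (the file names here are invented for illustration):

```python
from pathlib import PurePosixPath, PureWindowsPath

# Windows: drive letters and backslashes; names compare case-insensitively
win = PureWindowsPath(r"C:\Users\Alice\Report.TXT")
print(win.drive)  # the volume (C:) is part of the file's name

assert PureWindowsPath("C:/data/FILE.txt") == PureWindowsPath("c:/data/file.txt")

# Linux: one unified tree rooted at "/"; names compare case-sensitively
home = PurePosixPath("/home/alice")
assert home.root == "/"
assert PurePosixPath("/data/FILE.txt") != PurePosixPath("/data/file.txt")

# Linux marks hidden files with a leading dot in the name itself
assert PurePosixPath("/home/alice/.bashrc").name.startswith(".")
```

The same comparisons would behave accordingly on real paths when run under each operating system; the Pure* classes simply let both rule sets be shown side by side.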

What do you understand by the concept of a data centre? What types of organizations are likely to have a data centre? At what levels are redundancies created in a data centre? Explain with the help of examples.

A data center or datacenter, also called a server farm, is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and security devices.

Requirements for modern data centers

1) One of the main concerns is business continuity; companies rely on their information systems to run their operations. If a system becomes unavailable, company operations may be impaired or stopped completely. It is necessary to provide a reliable infrastructure for IT operations, in order to minimize any chance of disruption. 2) Information security is also a concern, and for this reason a data center has to offer a secure environment which minimizes the chances of a security breach. A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment. This is accomplished through redundancy of both fiber optic cables and power, which includes emergency backup power generation.

Data center classification

The higher the tier, the greater the availability. The levels are:
Tier I - Basic site infrastructure, guaranteeing 99.671% availability
Tier II - Redundant site infrastructure capacity components, guaranteeing 99.741% availability
Tier III - Concurrently maintainable site infrastructure, guaranteeing 99.982% availability
Tier IV - Fault tolerant site infrastructure, guaranteeing 99.995% availability
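Those availability percentages translate directly into permitted downtime per year, and a quick calculation (assuming a plain 365-day year) makes the gap between tiers concrete:

```python
# Convert a tier's guaranteed availability into allowed downtime per year.
HOURS_PER_YEAR = 365 * 24  # 8760 hours, ignoring leap years

def downtime_hours(availability_percent):
    """Hours per year the site may be down at the given availability."""
    return HOURS_PER_YEAR * (1 - availability_percent / 100)

tiers = {"Tier I": 99.671, "Tier II": 99.741,
         "Tier III": 99.982, "Tier IV": 99.995}

for tier, pct in tiers.items():
    print(f"{tier}: {downtime_hours(pct):.1f} hours of downtime per year")
```

Tier I's 99.671% allows roughly 28.8 hours of downtime per year, while Tier IV's 99.995% allows under half an hour, which is why Tier IV demands fully fault-tolerant infrastructure.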

Physical layout

[Figure: a typical server rack, commonly seen in colocation.] A data center can occupy one room of a building, one or more floors, or an entire building. Most of the equipment is often in the form of servers mounted in 19-inch rack cabinets, which are usually placed in single rows forming corridors between them. This allows people access to the front and rear of each cabinet. Very large data centers may use shipping containers packed with 1,000 or more servers each; when repairs or upgrades are needed, whole containers are replaced (rather than repairing individual servers). Local building codes may govern the minimum ceiling heights.

The physical environment of a data center is rigorously controlled. Air conditioning is used to control the temperature and humidity in the data center. A temperature range of 16–24 °C (61–75 °F) and a humidity range of 40–55%, with a maximum dew point of 15 °C, is recommended as optimal for data center conditions.

The electrical power used heats the air in the data center. Unless the heat is removed, the ambient temperature will rise, resulting in electronic equipment malfunction. By controlling the air temperature, the server components at the board level are kept within the manufacturer's specified temperature/humidity range. Air conditioning systems help control humidity by cooling the return space air below the dew point. With too much humidity, water may begin to condense on internal components.

Modern data centers try to use economizer cooling, where they use outside air to keep the data center cool. Washington state now has a few data centers that cool all of the servers using outside air 11 months out of the year. They do not use chillers/air conditioners, which creates potential energy savings in the millions.

Backup power consists of one or more uninterruptible power supplies and/or diesel generators. To prevent single points of failure, all elements of the electrical systems, including backup system, are typically fully duplicated. This arrangement is often made to achieve N+1 Redundancy in the systems. Static switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.
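The N+1 arrangement described above can be stated as a one-line rule: if N units are needed to carry the load, the site survives any single-unit failure only when at least N+1 units are installed. A minimal sketch (the function name is ours, not an industry API):

```python
def survives_single_failure(units_installed, units_needed):
    """N+1 redundancy: losing any one unit must still leave N working."""
    return units_installed - 1 >= units_needed

# 4 UPS units needed to carry the load, 5 installed: N+1, safe
assert survives_single_failure(5, 4)

# Exactly N installed: any single failure drops below the required capacity
assert not survives_single_failure(4, 4)
```

The same rule applies unchanged to CRAC units, generators, and network uplinks, which is why the tier standards talk about redundancy per subsystem rather than per site.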

Data cabling is typically routed through overhead cable trays in modern data centers, but some still recommend cabling under the raised floor for security reasons, and to allow for the addition of cooling systems above the racks if that enhancement becomes necessary.

Data centers feature fire protection systems, including passive and active design elements, as well as implementation of fire prevention programs in operations. Smoke detectors are usually installed to provide early warning of a developing fire by detecting particles generated by smoldering components prior to the development of flame. A fire sprinkler system is often provided to control a full scale fire if it develops. Clean agent fire suppression gaseous systems are sometimes installed to suppress a fire earlier than the fire sprinkler system. Passive fire protection elements include the installation of fire walls around the data center, so a fire can be restricted to a portion of the facility for a limited time in the event of the failure of the active fire protection systems, or if they are not installed.

Physical security also plays a large role with data centers. Physical access to the site is usually restricted to selected personnel. Video camera surveillance and permanent security guards are almost always present if the data center is large or contains sensitive information on any of the systems within. The use of fingerprint-recognition mantraps is starting to become commonplace.

Network infrastructure

Communications in data centers today are most often based on networks running the IP protocol suite. Data centers contain a set of routers and switches that transport traffic between the servers and to the outside world. Redundancy of the Internet connection is often provided by using two or more upstream service providers. Some of the servers at the data center are used for running the basic Internet and intranet services needed by internal users in the organization, e.g., e-mail servers, proxy servers, and DNS servers. Network security elements are also usually deployed: firewalls, VPN gateways, intrusion detection systems, etc. Also common are monitoring systems for the network and some of the applications. Additional off site monitoring systems are also typical, in case of a failure of communications inside the data center.

Applications

The main purpose of a data center is running the applications that handle the core business and operational data of the organization. Such systems may be proprietary and developed internally by the organization, or bought from enterprise software vendors. Common examples of such applications are ERP and CRM systems.

Other Services

A data center may be concerned with just operations architecture or it may provide other services as well.

Data centers are also used for off-site backups. Backups can be taken of servers locally onto tapes; however, tapes stored on site pose a security threat and are also susceptible to fire and flooding. Larger companies may also send their backups off site for added security. This can be done by backing up to a data center. Encrypted backups can be sent over the Internet to another data center where they can be stored securely.

Need for Data Centers

Every organization is under pressure to accomplish more with fewer resources (financial, technological, and human capital):
o Many are finding that specific targets for reduction have been set, especially in the near term.
o Server and storage growth does not appear to be keeping pace with economic times; in some cases, the needs for technology are on the rise.
o Server sprawl continues in many cases, even with virtualized servers.
o The impact of storage growth is stressing backup and recovery subsystems, and the frequency of missed backup windows remains a problem for many.
o The typical data center will run out of power and cooling; many already have.
o Facilities and IT continue to struggle with an integrated capacity planning approach.

Disaster recovery is something most organizations hope they can achieve, but would not want to put to the test.

Companies likely to have a data center

A typical example of a company that almost certainly has a data center is a bank or other kind of financial institution. A bank's data center will have a mainframe or other kind of computer network, on which customers' account information and other data are stored. A university will also have a data center, which includes not only personal information about the university's employees and students, but also information on the university's buildings, construction projects, and physical and intellectual history. These kinds of data centers contain information that is critical to the continued operation of the bank, university, or other business. Other kinds of data centers can be found in government institutions; companies that have multiple headquarters; and providers of electronic services such as television, mobile phones, and the like.

A data center can also be a single computer, storing and accessing one company's or one person's critical data. Smaller data centers usually have less complicated forms of data protection. No matter the size, all data centers serve the same function: to compile and protect the data of a person or company.

Data Center Infrastructure

Data centers are valuable resources; as they get close to capacity, those resources must be carefully managed. Their infrastructure includes:
o Racks
o Switches and switch ports
o VLANs
o Patch panels and cables (of all types)
o Power utilization and monitoring
o Generators
o High voltage power components
o HVAC components

By accurately tracking the usage of systems and their placement in the data center, we can ensure that overload conditions do not occur.

Features of a data center

Disaster Recovery

Virtualization (e.g., Cloud Computing)

LEVELS AT WHICH REDUNDANCIES ARE CREATED IN A DATA CENTER

Redundancy in the data center industry refers to computer or network system components, such as fans, hard disk drives, servers, operating systems, switches, and telecommunication links, that are installed to back up primary resources in case they fail, according to techtarget.com. One can clearly deduce that the more redundant a data center, the better it is for the consumer. Cooling or power interruptions can dramatically affect a data center's environment. Because system failure can be prohibitively expensive, reliability and redundancy are vital. Redundancy in cooling systems minimizes the risk of system failure and increases system performance by decreasing downtime for maintenance and repairs. The main advantage of redundancy is that it increases system reliability. Running a data center is a tricky business. Operational controls and efficiencies are paramount. From the customer perspective, though, the key word is: uptime. While the day-to-day operations are important, it's the operations that result in maintaining customer uptime that are the most critical.

Many data centers tout uptime statistics. In fact, the Uptime Institute is an organization that has developed standards surrounding levels of redundancy within data centers and the types of uptime that are to be expected. Most data centers strive to achieve what is known as Tier IV, the highest tier. To get there, a data center must have multiple active systems (cooling, electrical, etc.) such that failure of any single system goes unnoticed. This allows the data center to target 99.995% uptime. Because of the expense, most data centers choose to build to Tier III, or close to it. They sacrifice some of the inherent redundancies for operational and capital cost. Types of redundancy:
o Cooling Redundancy: One major aspect of data center redundancy is in the cooling systems. Most data centers employ multiple CRAC (Computer Room Air Conditioner) units to keep the data center area cool. Generally, an extra unit is installed such that failure of any single unit will go unnoticed by the end user. This is generally known as an N+1 setup, meaning N is the number of units needed to operate, and +1 denotes an extra unit running as a backup. While this design is sound, the CRAC units are not the only part of the entire cooling system that is critical. The CRAC units themselves must be serviced by another piece of the HVAC chain, be it a refrigerant-based system or a chilled water-based system. Thus, there should also be some redundancy in those servicing systems as well.
o Electrical Redundancy: Most data centers will offer redundant electrical circuits, commonly called A+B feeds. Again, it is necessary to follow the distribution chain of these circuits back to the source. Do they go to separate power distribution units? If not, there is a single point of failure. Do they go back to separate UPS units? If not, there is a single point of failure. Does the power come in from two separate transformers? If not, there is a single point of failure. Furthermore, what happens during a power outage? Just about every critical data center will have backup generator capacity, but is it just a single generator? What happens if the generator fails to start? What happens if the service department is performing an oil change on the generator when the power goes out? Again, this is another single-point-of-failure risk that must be analyzed.
o Data Redundancy: RAID (redundant array of independent disks; originally, redundant array of inexpensive disks) is a way of storing the same data in different places (thus, redundantly) on multiple hard disks. By placing data on multiple disks, I/O (input/output) operations can overlap in a balanced way, improving performance. Since multiple disks increase the mean time between failures (MTBF), storing data redundantly also increases fault tolerance. There are at least nine types of RAID, plus a non-redundant array (RAID-0): RAID-0, RAID-1, RAID-2, RAID-3, RAID-4, RAID-5, RAID-6, RAID-7, RAID-10, RAID-50 (or RAID-5+0), RAID-53 (or RAID-5+3), RAID-S.
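The fault tolerance of parity-based RAID levels such as RAID-5 rests on XOR parity: the parity block is the XOR of the data blocks, so any one lost block can be rebuilt from the survivors. A deliberately simplified sketch (whole blocks in memory, no striping across real disks):

```python
def xor_blocks(a, b):
    """Byte-wise XOR of two equal-sized blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def parity(blocks):
    """Parity block = XOR of all given blocks (as in RAID-5)."""
    result = bytes(len(blocks[0]))  # all-zero block
    for block in blocks:
        result = xor_blocks(result, block)
    return result

data = [b"disk", b"arry", b"demo"]  # three data "disks"
p = parity(data)

# "Disk" 1 fails; rebuild its block from the surviving disks plus parity.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```

Rebuilding works because XOR is its own inverse: XOR-ing the parity with the surviving blocks cancels them out, leaving exactly the missing block.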

DATA FILTERING (Filtering Data in MS Excel)

A single data list may contain records that fall into several categories or groups. Depending on the size of the data list, it may be difficult to focus on all of the records that belong to a specific group. Data filtering in MS Excel enables the user to work with a subset of data within the data list. When a filter is created, only the records that contain the values specified are displayed. Other records in the data list are hidden temporarily. AutoFilter is an automated filtering tool included in Excel. When AutoFilter is applied to a data list, the column headings change to drop-down list boxes. Each drop-down list contains all of the unique values that are found within a data field. Selecting a value from the drop-down list box will automatically filter the data to display only the rows or records matching the field value you have specified. Data can be filtered using filter values in multiple fields. In order to use the AutoFilter tool, the data list must be organized according to Excel data list guidelines. AutoFilter assumes that the top row of the worksheet is the header row. If that is not the case, select the header row before activating AutoFilter. Follow the steps below to use the AutoFilter feature:
o Position the active cell anywhere within the data list or within the header row.
o Choose Data | Filter | AutoFilter. Drop-down arrows now appear along the top row of the list.

o Select a field value to filter from any AutoFilter drop-down list. Select additional filter values from other fields, if you desire. When additional filter values are selected from other fields in the data list, the additional filter criteria are combined with the original filter value.
o The drop-down arrows of filtered fields become highlighted in blue, as does the record number of filtered records.
To remove a filter, follow the steps below:
o Choose All from the AutoFilter drop-down list in the desired column.
o Repeat the above step to remove any additional filters.
o You can also use Data | Filter | Show All.
o When the filter is removed, all of the records in the data list are displayed.
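AutoFilter's combination rule, where criteria chosen in different columns are AND-ed together, can be mimicked on a plain list of dictionaries. This sketch (sample records are invented) illustrates only the combination logic, not Excel's interface:

```python
records = [
    {"City": "Delhi",  "Status": "Delivered"},
    {"City": "Mumbai", "Status": "Pending"},
    {"City": "Delhi",  "Status": "Pending"},
]

def auto_filter(rows, **criteria):
    """Keep rows matching every field=value pair (AND, as in AutoFilter)."""
    return [r for r in rows if all(r[f] == v for f, v in criteria.items())]

# One filter value: every Delhi record (2 rows survive)
delhi = auto_filter(records, City="Delhi")

# Filters on two fields combine, like selecting from two drop-downs (1 row)
delhi_pending = auto_filter(records, City="Delhi", Status="Pending")
```

Calling `auto_filter(records)` with no criteria returns every record, which corresponds to removing all filters with Show All.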

Apply the Excel Advanced Filter
1. Select a cell in the database.

2. From the Data menu, choose Filter, Advanced Filter. (In Excel 2007, click the Data tab on the Ribbon, then click Advanced Filter.)
3. You can choose to filter the list in place, or copy the results to another location.
4. Excel should automatically detect the list range. If not, you can select the cells on the worksheet.
5. Select the criteria range on the worksheet.
6. If you are copying to a new location, select a starting cell for the copy. Note: If you copy to another location, all cells below the extract range will be cleared when the Advanced Filter is applied.
7. Click OK.

Filter Unique Records

You can use an Excel Advanced Filter to extract a list of unique items in the database. For example, get a list of customers from an order list, or compile a list of products sold. Note: The list must contain a heading, or the first item may be duplicated in the results.
1. Select a cell in the database.
2. From the Data menu, choose Filter, Advanced Filter. (In Excel 2007, click the Data tab on the Ribbon, then click Advanced Filter.)
3. Choose 'Copy to another location'.
4. For the List range, select the column(s) from which you want to extract the unique values.
5. Leave the Criteria Range blank.
6. Select a starting cell for the Copy to location.
7. Add a check mark to the Unique records only box.
8. Click OK.
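The 'Unique records only' option behaves like de-duplication that keeps the first occurrence of each value. On a single column, the same effect can be sketched as (the customer names are invented sample data):

```python
def unique_records(values):
    """Remove duplicates while keeping first occurrences, mirroring
    the 'Unique records only' Advanced Filter option on one column."""
    seen = set()
    result = []
    for v in values:
        if v not in seen:
            seen.add(v)
            result.append(v)
    return result

customers = ["Acme", "Globex", "Acme", "Initech", "Globex"]
assert unique_records(customers) == ["Acme", "Globex", "Initech"]
```

Note the parallel with the heading warning above: the function treats every element as data, so if a header row were included in `values` it would simply survive as the first "record".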

MS OFFICE STYLES
Styles are arguably the most important feature in Microsoft Word. Why? Because everything that you do in Word has a style attached. The definition of a style is twofold. First, you can think of a style as a set of pre-defined formatting instructions that you can use repeatedly throughout the document. Let's say each heading in a document must be centered, uppercase, bold, and a slightly larger font size. Each time you need to apply formatting to the heading, you have to go through the entire process to get the text the way you want it. If you store the formatting commands in a style, you can apply that style any time you need it without having to do all of the reformatting. Possibly more important, however, is that styles are used to "tag" or identify parts of a document. An example of this is whether text is part of a heading, a footnote, a hyperlink, or body text. These are all examples of styles in Word. Styles are the architecture upon which Word is based. Just about everything in Word is style-driven; in fact, many people in the industry refer to Word as a "style-driven" program. Styles allow for quick formatting modifications throughout the document and can be tied into numbering to make working with outline numbered lists easier. Microsoft recommends that you use numbering linked to styles to get the best result. There are several reasons for using styles in a document:
o Consistency: When you use styles to format your document, each section is formatted the same and therefore provides a professional, clean-looking document.
o Easier to modify: If you use styles in your document consistently, you only need to update a given style once if you want to change the characteristics of all text formatted in that style.
o Efficiency: You can create a style once, and then apply it to any section in the document without having to format each section individually.
o Table of contents: Styles can be used to generate a table of contents quickly.
o Faster navigation: Using styles lets you quickly move to different sections in a document using the Document Map feature.
o Working in Outline view: Styles allow you to outline and organize your document's main topics with ease.
o Legal outline numbering: Numbering, when linked to styles, allows you to generate and update consistent outline numbering in legal documents, even ones with complicated numbering schemes like municipal law, tax law, and mergers and acquisitions documents.
o Efficiency of Word: Files which are predominantly manually formatted are less efficient than those whose formatting has been imposed by styles.

o HTML and XML: A fully structured, styled document will move into HTML and XML incredibly well.

Styles are an essential part of Microsoft Word. In fact, everything you type into a document has a style attached to it, whether you design the style or not. When you start Microsoft Word, the new blank document is based on the Normal template, and text that you type uses the Normal style. This means that when you start typing, Word uses the font name, font size, line spacing, indentation, text alignment, and other formats currently defined for the Normal style. The Normal style is the base style for the Normal template, meaning that it's a building block for other styles in the template. Whenever you start typing in a new document, unless you specify otherwise, you are typing in the Normal style.

Paragraph vs. Character Styles

There are two types of styles in Microsoft Word: character and paragraph. Paragraph styles are used more frequently than character styles, and they are easier to create. It's important to understand both, however, since understanding styles is so important. Character styles can be applied to individual words, even (you guessed it) single characters. Character formatting is built from the formatting options available from the Format menu, by selecting Font; settings from the Tools menu, by selecting Language, and then selecting Set Language; and in certain cases from the Format menu by selecting Borders and Shading, and looking on the Borders and Shading tabs of the Borders and Shading dialog box. A paragraph style contains both font and paragraph formatting, which makes it more flexible than a character style. When you apply a paragraph style, the formatting affects the entire paragraph. For example, when you center text, you cannot center a single word; instead, the entire paragraph is centered. Other types of paragraph-level formats that styles control are line spacing (single-space, double-space, etc.), text alignment, bullets, numbers, indents, tabs and borders.

There are actually four style types in Word. Each has an icon that appears next to it in the Styles and Formatting task pane. When you use the New Style dialog box to create a new style, the types are available on the Style type list.
o Paragraph: Applies to all the text within the end paragraph mark of where your pointer is positioned.
o Character: Applies at the character level, to blocks of words and letters.
o List: Provides a consistent look to lists.
o Table: Provides a consistent look to tables.

Applying Styles

The same rules that apply to direct formatting of text apply to style formatting of text. If you want to apply a text attribute to a single word, you can click anywhere in the word and select a formatting option such as bold, italics or underline; Word applies the selected format to the entire word. Similarly, if you want to format multiple words you must first select the multiple words. The same is true for applying character styles. To apply a character style, you can click in the middle of any word and select the character style to format the entire word. If you want to change a group of words you must first select the text before applying the character style. Applying formatting to paragraphs is a little different. Just click anywhere in a paragraph and apply direct formats, such as dragging the ruler to change indentation; since paragraph formats affect an entire paragraph, you don't have to select the paragraph. If you want to affect multiple paragraphs, you must first select the multiple paragraphs. And, similar to applying text formatting and character styles, to apply a paragraph style, click within the paragraph and apply the paragraph style. Or, select multiple paragraphs to apply the same style to each of the selected paragraphs.
Display Paragraph Style Names in Normal View

Sometimes it's useful to see what style has been applied to text within a document. You can turn on Word's Style Area feature to see what paragraph styles have been applied throughout the document. The Style Area is a re-sizeable pane on the left side of the window that lists the paragraph style applied to each paragraph. It is only available in Normal View.

Replacing Styles

Let's say you just finished applying styles to a long agreement only to find that you applied the Heading 2 style where you should have applied the Heading 1 style. This can easily be remedied by using Word's Find and Replace feature. Instead of searching for text, however, you can tell Word to search for and replace text formatted with a specific style.

Create a New Style

The easiest way to create a new style is to format text with the attributes that you want to apply to the style. It doesn't matter what you type, only what type of paragraph and character formatting you have applied to the text. Formatting is the only thing that is applied when you apply a style.

Modifying Existing Styles

There are two ways to modify an existing style. One of these methods is through the Style dialog box. However, an easier method is changing the style by example using the Style drop-down toolbar button. The Style drop-down is useful if changes have already been manually made to a paragraph formatted in the style to be changed. If this is not the case, styles can be changed using the Style dialog box.

INTERNET AND BUSINESS


How Internet started affecting businesses Fifteen years ago, the early pioneers who launched the World Wide Web were not aiming to make money. But within a few years, the streets around Palo Alto, California, the home of Stanford University, were buzzing with venture capitalists and dot.com entrepreneurs. An extraordinary group of entrepreneurs emerged who convinced US investors that the pot of internet gold was just around the corner. The first internet company to attract widespread stock market attention was Netscape, led by Jim Clark and Marc Andreessen, who had originally developed the Mosaic browser. When its shares were offered for sale to the public on 9 August 1995, they tripled in value on the first day of trading. Netscape soon fell victim to Microsoft's rival Internet Explorer browser (precipitating a long-running anti-trust case), but other companies in turn attracted investor attention. Portals like Yahoo, Lycos and AltaVista were the next big thing. Then there were the companies that provided the internet's backbone, like MCI and WorldCom. The company that made the switching gear for the internet, Cisco Systems, briefly overtook Microsoft to become the world's largest company by market capitalisation, worth over $400bn in March 2000. At its peak, one billion dollars a week was flowing into Silicon Valley, and its venture capital firms were desperately searching for dot.com investments with viable business plans. Retailers found backing to launch websites to sell everything from toys (eToys) to pet food (pets.com) to medical advice (webMD.com). The culmination of the dot.com boom was the takeover by AOL, the biggest internet service provider in the US, of Time Warner, the biggest media company, for more than $200bn in January 2000. In the five years from 1995 to 2000, the main US tech stock index, the Nasdaq, rose five-fold.

The crash - and its effects The speed of the boom - which soon led to excesses among advisers and those hyping internet shares - made some sort of correction inevitable. And in the spring of 2000 the stock prices of many internet firms - and other high-tech companies - plunged. Many firms with weak cash flow went bust or were forced into mergers. The stronger firms that survived generally consolidated their position, with companies like Amazon, Yahoo, eBay and Google emerging as the dominant players in their class. It soon became clear that the "network effects" of the internet led to a greater, rather than a lesser, degree of concentration online, despite the openness of the internet's formal structure. From the point of view of the economy as a whole, the internet was dramatically lowering the cost of transactions, especially in the services sector. And even the over-investment in networks laid the basis for the broadband revolution, which made the internet faster and prepared the ground for the next round of internet expansion.

The quiet revolution The growth of outsourcing, which led to manufacturing companies moving much of their production to cheaper, overseas locations, could not have happened without the internet. Indeed, almost every Silicon Valley firm - from Apple to Cisco - outsourced their production to locations abroad, mainly in Asia. And many back-office service functions, from data processing to personnel, were also moving offshore, particularly to India, where new offshore business services centres were emerging in Bangalore and Hyderabad.

Global - and local Six years into the new century, it is clear that the internet has become mainstream. Today few big businesses can afford not to have an internet site to advertise and sell their wares. And it has become second nature for many people to check out products, prices and availability online before buying. China is growing even faster, and may have more internet users than the US by the end of the decade.

Benefits of Internet for Businesses

Businesses have discovered various benefits of the Internet. Unlike grassroots companies that don't use the Internet, firms that do have the potential to grow their business, earn greater revenue and save money by doing a large percentage of their business online. New businesses and established companies also increase their visibility because of the accessibility of the web.

Small Businesses: One of the benefits of the Internet for small businesses is that the Internet creates a competitive marketplace in which small businesses have the opportunity to grow as much as larger companies. Marketing and Advertising: Creating a website benefits businesses because people can market their products and services without using traditional marketing techniques such as fliers, mailings and newspaper ads. Online marketing saves the company money that would otherwise be spent on traditional means of advertising. Larger Customer Base: A key benefit of the Internet for business is the potential for customer growth. A small business without a website may be able to compete only with other local businesses. However, people conducting business on the Internet have the potential to gain customers from around the world because Internet companies are open 24 hours a day.

Networking: Another benefit of the Internet for business includes the availability to network with other businesspeople and organizations. Many Internet businesspeople have created organizations with others in their field in which they can talk about the challenges and rewards of Internet business. This interchange of encouragement often helps new businesses experience growth. Saves Money on Office Supplies: Businesses that use the Internet for transactions save money on paper and other office supplies. Instead of mailing or faxing multiple letters to clients and other businesses, they can correspond via email or set up paperless eFax accounts. Affiliate Programs: Internet businesses that participate in affiliate programs gain extra income by marketing the products and services of other companies on their websites. Many companies that regularly do business with certain companies join these programs, which help both companies gain more customers and revenue.

Difference between system software and application software


The Operating System is the System Software that makes the Computer work. We can say that an Operating System (OS) is software that acts as an interface between you and the hardware. It not only contains drivers used to speak the hardware's language, but also offers you a very specific graphical user interface (GUI) to control the computer. An OS can also act as an interface (from the hardware) to the other software. A complex OS like Windows or Linux or Mac OS offers the services of an OS, but also has applications built in. Solitaire, Paint, Messenger, etc. are all applications. Application software is the software that you install onto your Operating System. It consists of the programs that actually let you do things with your computer. These applications are written to run under the various Operating Systems. They include things like your word processing programs, spreadsheets, email clients, web browsers, games, etc. Many programs, such as most of the Microsoft Office suite of programs, are written in both Mac and Windows versions, but you still have to have the right version for your OS. So, the Operating System of a Computer is the software that allows the Computer to work. It provides the framework under which the Applications run. An operating system is the type of computer system you have, such as Windows XP, Windows 95/98, Mac OS, etc. The Applications are the software that actually allows the user to do something with the Computer. Without the applications, all you can do is change settings and navigate among the folders. System software is any computer software which manages and controls computer hardware so that application software can perform a task. Operating systems, such as Microsoft Windows, Mac OS X or Linux, are prominent examples of system software. System software contrasts with application software, which are programs that enable the end-user to perform specific, productive tasks, such as word processing or image manipulation.

System software performs tasks like transferring data from memory to disk, or rendering text onto a display device. Specific kinds of system software include loading programs, operating systems, device drivers, programming tools, compilers, assemblers, linkers, and utility software. If system software is stored on non-volatile memory such as integrated circuits, it is usually termed firmware while an application software is a subclass of computer software that employs the capabilities of a computer directly and thoroughly to a task that the user wishes to perform. This should be contrasted with system software which is involved in integrating a computer's various capabilities, but typically does not directly apply them in the performance of tasks that benefit the user. A simple, if imperfect analogy in the world of hardware would be the relationship of an electric light bulb (an application) to an electric power generation plant (a system). The power plant merely generates electricity, not itself of any real use until harnessed to an application like the electric light that performs a service that benefits the user. Typical examples of software applications are word processors, spreadsheets, and media players. Multiple applications bundled together as a package are sometimes referred to as an application suite. Microsoft Office and OpenOffice.org, which bundle together a word processor, a spreadsheet, and several other discrete applications, are typical examples. The separate applications in a suite usually have a user interface that has some commonality making it easier for the user to learn and use each application. And often they may have some capability to interact with each other in ways beneficial to the user. For example, a spreadsheet might be able to be embedded in a word processor document even though it had been created in the separate spreadsheet application. 
In some types of embedded systems, the application software and the operating system software may be indistinguishable to the user, as in the case of software used to control a VCR, DVD player or Microwave Oven. System software is an essential part of computer operations. The function of the systems software is to manage the resources of the computer, automate its operation and facilitate program development. It is generally provided by the computer manufacturer or a specialized programming firm (for example: Microsoft is a company that specializes in system software). While, the Application software are designed to perform specific data processing or computational tasks for the user. These programs are specifically designed to meet end-user requirements. (e.g: spreadsheets, word processors, media players and database applications). Application software is a set of one or more programs, designed to solve a specific problem or do a specific task.

DIFFERENCE: o System software makes the physical machine do work. o Application software makes the system software do work. As described above, the operating system acts as an interface between you and the hardware, while an application is software that serves a specific function. It allows you to communicate, create and modify documents, play games, etc.

What are the advantages of Computer Networks? How do these help in managerial decision making?
Advantages of Computer Networks Following are some of the advantages of computer networks.

File Sharing: The major advantage of a computer network is that it allows file sharing and remote file access. A person sitting at one workstation of a network can easily see the files present on another workstation, provided he is authorized to do so. It saves the time which is wasted in copying a file from one system to another by using a storage device. In addition, many people can access or update the information stored in a database, keeping it up-to-date and accurate. Resource Sharing: Resource sharing is also an important benefit of a computer network. For example, if there are four people in a family, each having their own computer, they will require four modems (for the Internet connection) and four printers if they want to use the resources at the same time. A computer network, on the other hand, provides a cheaper alternative by the provision of resource sharing. In this way, all four computers can be interconnected, using a network, and just one modem and printer can efficiently provide the services to all four members. The facility of shared folders can also be availed by family members. Increased Storage Capacity: As there is more than one computer on a network which can easily share files, the issue of storage capacity gets resolved to a great extent. A standalone computer might fall short of storage memory, but when many computers are on a network, the memory of the different computers can be used in such a case. One can also design a storage server on the network in order to have a huge storage capacity. Increased Cost Efficiency: Much of the software available in the market is costly and takes time to install. Computer networks resolve this issue, as the software can be stored or installed on a system or a server and used by the different workstations.

Disadvantages of Computer Networks Following are some of the major disadvantages of computer networks.

Security Issues: One of the major drawbacks of computer networks is the security issues involved. If a computer is standalone, physical access becomes necessary for any kind of data theft. However, if a computer is on a network, a computer hacker can get unauthorized access by using different tools. In the case of big organizations, various network security software packages are used to prevent the theft of any confidential and classified data. Rapid Spread of Computer Viruses: If any computer system in a network gets affected by a computer virus, there is a possible threat of other systems getting affected too. Viruses spread on a network easily because of the interconnectivity of workstations. Such spread can be dangerous if the computers have important databases which can get corrupted by the virus. Expensive Set Up: The initial set up cost of a computer network can be high depending on the number of computers to be connected. Costly devices like routers, switches, hubs, etc., can add to the bill of a person trying to install a computer network. He will also have to buy NICs (Network Interface Cards) for each of the workstations, in case they are not inbuilt. Dependency on the Main File Server: In case the main File Server of a computer network breaks down, the system becomes useless. In the case of big networks, the File Server should be a powerful computer, which often makes it expensive.

How do networks help in managerial decision making?

----TBD

Why are certain types of databases called relational DBs?

In simplest terms, a "database" is a collection of records. There are many types of database models. Databases can be as simple as flat files. Databases can follow the hierarchical model, the relational model, the object-oriented model or the XML model.

A database is something that stores data. Using tools called Database Management Systems(like Oracle, Informix, Sybase, DB2), you can create, view, modify, and delete databases. Databases can be -Relational -Object Oriented -Object Relational

o Relational databases store data in tables (called relations). These tables are related to each other, just as in a family our relations are related to each other.

o In Object Oriented Databases, the information is stored in the form of objects, as in Object Oriented Programming. An OODBMS makes database objects appear as programming language objects in one or more programming languages.

o Object Relational databases combine the features of both Object Oriented and Relational databases. Here you can not only store simple data like text, as in a relational database, but you can also store complex objects like images, audio and video in tables.

A relational database is a collection of data items organized as a set of formally described tables from which data can be accessed or reassembled in many different ways without having to reorganize the database tables. The relational database was invented by E. F. Codd at IBM in 1970.

The standard user and application program interface to a relational database is the structured query language (SQL). SQL statements are used both for interactive queries for information from a relational database and for gathering data for reports. A relational database is a set of tables containing data fitted into predefined categories. Each table (which is sometimes called a relation) contains one or more data categories in columns. Each row contains a unique instance of data for the categories defined by the columns. For example, a typical business order entry database would include a table that described a customer with columns for name, address, phone number, and so forth. Another table would describe an order: product, customer, date, sales price, and so forth. A user of the database could obtain a view of the database that fitted the user's needs. For example, a branch office manager might like a view or report on all customers that had bought products after a certain date. A financial services manager in the same company could, from the same tables, obtain a report on accounts that needed to be paid.
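As a hedged sketch of the order-entry example above (the table and column names are illustrative, not prescribed by any standard), the same SQL can be tried through Python's built-in sqlite3 module:

```python
import sqlite3

# In-memory database; the customer/order schema mirrors the example above.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("""CREATE TABLE customer (
    id      INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    address TEXT,
    phone   TEXT)""")

cur.execute("""CREATE TABLE orders (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customer(id),
    product     TEXT,
    sale_date   TEXT,
    price       REAL)""")

cur.execute("INSERT INTO customer VALUES (1, 'Acme Traders', '12 Main St', '555-0101')")
cur.execute("INSERT INTO orders VALUES (1, 1, 'Stapler', '2007-03-15', 12.50)")

# Reassemble data from both tables without reorganizing them:
# customers who bought products after a certain date.
rows = cur.execute("""
    SELECT c.name, o.product, o.sale_date
    FROM customer c JOIN orders o ON o.customer_id = c.id
    WHERE o.sale_date > '2007-01-01'
""").fetchall()
print(rows)  # → [('Acme Traders', 'Stapler', '2007-03-15')]
```

The final SELECT is exactly the kind of "view" the branch office manager would ask for: the data stays in two tables, but the query reassembles it on demand.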

When creating a relational database, you can define the domain of possible values in a data column and further constraints that may apply to that data value. For example, a domain of possible customers could allow up to ten possible customer names but be constrained in one table to allowing only three of these customer names to be specifiable.

The definition of a relational database results in a table of metadata or formal descriptions of the tables, columns, domains, and constraints.

Relational databases are currently the predominant choice in storing financial records, manufacturing and logistical information, personnel data and much more.

Terminology

Relations or Tables: A relation is usually described as a table, which is organized into rows and columns. All the data referenced by an attribute are in the same domain and conform to the same constraints. Relations can be modified using the insert, delete, and update operators. New tuples can supply explicit values or be derived from a query. Similarly, queries identify tuples for updating or deleting. It is necessary for each tuple of a relation to be uniquely identifiable by some combination (one or more) of its attribute values. This combination is referred to as the primary key.

Base and derived relations: In a relational database, all data are stored and accessed via relations. Relations that store data are called "base relations", and in implementations are called "tables". Other relations do not store data, but are computed by applying relational operations to other relations. These relations are sometimes called "derived relations". In implementations these are called "views" or "queries". Derived relations are convenient in that though they may grab information from several relations, they act as a single relation.

Domain: A domain describes the set of possible values for a given attribute. Because a domain constrains the attribute's values and name, it can be considered a constraint.

Constraints: Constraints allow you to further restrict the domain of an attribute. For instance, a constraint can restrict a given integer attribute to values between 1 and 10. Constraints provide one method of implementing business rules in the database. SQL implements constraint functionality in the form of check constraints. Since every attribute has an associated domain, there are constraints (domain constraints). The two principal rules for the relational model are known as entity integrity and referential integrity.
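The integer-range constraint mentioned above can be tried directly. This sketch uses a CHECK constraint in SQLite via Python's sqlite3 module (the table name is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A CHECK constraint restricting an integer attribute to values 1..10.
cur.execute("""CREATE TABLE rating (
    score INTEGER CHECK (score BETWEEN 1 AND 10))""")

cur.execute("INSERT INTO rating VALUES (7)")       # accepted: within the domain

try:
    cur.execute("INSERT INTO rating VALUES (42)")  # rejected: violates the constraint
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # → True
```

The database itself enforces the business rule: any statement that would violate the constraint fails with an integrity error rather than storing bad data.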

Primary Keys: A primary key uniquely identifies each tuple (row) within a relation. In order for an attribute to be a good primary key it must not repeat. While natural attributes are sometimes good primary keys, surrogate keys are often used instead. A surrogate key is an artificial attribute assigned to an object which uniquely identifies it (for instance, in a table of information about students at a school they might all be assigned a Student ID in order to differentiate them). The surrogate key has no intrinsic meaning, but rather is useful through its ability to uniquely identify a tuple. Another common occurrence, especially with regard to N:M cardinality, is the composite key. A composite key is a key made up of two or more attributes within a table that (together) uniquely identify a record. (For example, in a database relating students, teachers, and classes, classes could be uniquely identified by a composite key of their room number and time slot, since no other class could have that exact same combination of attributes.)

Foreign keys: A foreign key is a reference to a key in another relation, meaning that the referencing tuple has, as one of its attributes, the values of a key in the referenced tuple. Foreign keys need not have unique values in the referencing relation.
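The key concepts from the last few paragraphs can be sketched in SQLite via Python's sqlite3 module. The student/class schema follows the hypothetical example in the text, and all names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
cur = conn.cursor()

# Surrogate primary key: student_id has no intrinsic meaning.
cur.execute("""CREATE TABLE student (
    student_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL)""")

# Composite primary key: room and time slot together identify a class.
cur.execute("""CREATE TABLE class (
    room      TEXT,
    time_slot TEXT,
    subject   TEXT,
    PRIMARY KEY (room, time_slot))""")

# Foreign keys: the referencing tuples carry the key values of the
# referenced tuples (an N:M enrolment between students and classes).
cur.execute("""CREATE TABLE enrolment (
    student_id INTEGER REFERENCES student(student_id),
    room       TEXT,
    time_slot  TEXT,
    FOREIGN KEY (room, time_slot) REFERENCES class(room, time_slot))""")

cur.execute("INSERT INTO student VALUES (1, 'Asha')")
cur.execute("INSERT INTO class VALUES ('R101', 'Mon 9am', 'Maths')")
cur.execute("INSERT INTO enrolment VALUES (1, 'R101', 'Mon 9am')")

# A second class in the same room at the same time violates the composite key.
try:
    cur.execute("INSERT INTO class VALUES ('R101', 'Mon 9am', 'History')")
    ok = False
except sqlite3.IntegrityError:
    ok = True
print(ok)  # → True
```

Note that the foreign key columns in enrolment may repeat (one student, many classes), while the composite primary key in class may not.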

Stored procedures: A stored procedure is executable code that is associated with, and generally stored in, the database. Stored procedures usually collect and customize common operations, like inserting a tuple into a relation, gathering statistical information about usage patterns, or encapsulating complex business logic and calculations. Stored procedures are not part of the relational database model, but all commercial implementations include them.

Indices: An index is one way of providing quicker access to data. Indices can be created on any combination of attributes on a relation. Queries that filter using those attributes can find matching tuples quickly using the index, without having to check each tuple in turn. Relational databases typically supply multiple indexing techniques, each of which is optimal for some combination of data distribution, relation size, and typical access pattern. Indices are usually not considered part of the database, as they are considered an implementation detail, though indices are usually maintained by the same group that maintains the other parts of the database.
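A small experiment with SQLite (via Python's sqlite3 module) shows an index being picked up by the query planner; the account table here is an invented example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE account (acc_no INTEGER, holder TEXT, balance REAL)")
cur.executemany("INSERT INTO account VALUES (?, ?, ?)",
                [(i, f"holder{i}", 100.0 * i) for i in range(1000)])

# Without an index, the filter below scans every row; with one,
# SQLite can jump straight to the matching tuples.
cur.execute("CREATE INDEX idx_account_acc_no ON account(acc_no)")

plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM account WHERE acc_no = 500").fetchall()
uses_index = any("idx_account_acc_no" in str(row) for row in plan)
print(uses_index)  # → True
```

The EXPLAIN QUERY PLAN output names the index, confirming that the query is served by an index search rather than a full-table scan.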

Relational operations: For example: UNION, INTERSECT, MINUS, CROSS JOIN, INNER JOIN, DISTINCT.

Normalization: Normalization was first proposed by Codd as an integral part of the relational model. It encompasses a set of best practices designed to eliminate the duplication of data, which in turn prevents data manipulation anomalies and loss of data integrity. Normalization is criticized because it increases complexity and processing overhead required to join multiple tables representing what are conceptually a single item.
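Both sides of the normalization trade-off can be illustrated without a database at all. This plain-Python sketch (with invented data) shows the duplication an unnormalized table carries and the join cost a normalized design pays:

```python
# Unnormalized: the customer's address is repeated on every order row,
# so updating it in one row but not the other corrupts the data.
unnormalized = [
    ("ORD1", "Acme", "12 Main St", "Stapler"),
    ("ORD2", "Acme", "12 Main St", "Paper"),
]

# Normalized: the address is stored once and referenced by a key.
customers = {"C1": ("Acme", "12 Main St")}
orders = [("ORD1", "C1", "Stapler"), ("ORD2", "C1", "Paper")]

# An address change now touches a single entry instead of every order.
customers["C1"] = ("Acme", "99 New Rd")

# Joining the two tables reproduces the full rows -- at the cost of the
# extra lookup that critics of normalization point to.
joined = [(oid, *customers[cid], product) for oid, cid, product in orders]
print(joined)
```

The update anomaly disappears in the normalized form, but every full view of an order now requires a join, which is the processing overhead the paragraph above mentions.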

Relational database management systems

Relational databases, as implemented in relational database management systems, have become a predominant choice for the storage of information in new databases used for financial records, manufacturing and logistical information, personnel data and much more. Relational databases have often replaced legacy hierarchical databases and network databases because they are easier to understand and use, even though they are much less efficient. However, relational databases have been challenged by Object Databases, which were introduced in an attempt to address the object-relational impedance mismatch in relational database, and XML databases. The three leading commercial relational database vendors are Oracle, Microsoft, and IBM. The three leading open source implementations are MySQL, PostgreSQL, and SQLite.

MICROSOFT WORD TRACK CHANGES

o Track Changes is a way for Microsoft Word to keep track of the changes you make to a document. You can then choose to accept or reject those changes. Let's say Bill creates a document and emails it to his colleague, Lee, for feedback. Lee can edit the document with Track Changes on. When Lee sends the document back to Bill, Bill can see what changes Lee had made.

o Track Changes is also known as redline, or redlining. This is because some industries traditionally draw a vertical red line in the margin to show that some text has changed.

o To use Track Changes, you need to know that there are three entirely separate things that might be going on at any one time:

o First, at some time in the past (last week, yesterday, one millisecond ago), Word might have kept track of the changes you made. It did this because you turned on Track Changes. Word then remembered the changes you made to your document, and stored the changes in your document. o Second, if Word has stored information about changes you've made to your document, then you can choose to display those changes, or to hide them. Hiding them doesn't make them go away. It just hides them from view. (The only way to remove the tracked changes from your document is to accept or reject them.) o Third, at this very moment in time, Word may be tracking the changes you make to your document.

How to turn track changes on and off

In Word 2002 and 2003: Tools > Track Changes. In Word 2000 and earlier versions: Tools > Track Changes > Highlight Changes. Tick Track Changes while editing. Look at the TRK text in the Status Bar at the bottom of the screen. If it's black, Word is tracking changes. If it's dimmed, Word is not tracking changes.

How to display the tracked changes

There are several ways to do this, depending on what you need: In Word 2002 and 2003, on the Reviewing toolbar, choose Final with Markup or Original with Markup. This will show you what changes have been made. In Word 2000 and earlier, Tools > Track Changes > Highlight Changes. Tick Highlight Changes on Screen.

How do I control how Word displays tracked changes?

Tools > Options. Click the Track Changes tab. Here you choose how Word displays tracked changes whenever they are shown on screen.

How to hide (but not delete) track changes

In Word 2002 and 2003, on the Reviewing toolbar, choose Final. This displays your document as if you had accepted all the tracked changes in the document. It hides (but does not remove) the tracked changes. In Word 2002 and 2003, on the Reviewing toolbar, choose Original. This displays your document as if you had rejected all the tracked changes in the document. It hides (but does not remove) the tracked changes. In earlier versions of Word, Tools > Track Changes > Highlight Changes. Untick Highlight Changes on Screen. This displays your document as if you had accepted all the tracked changes. It hides (but does not remove) the tracked changes.

How to remove tracked changes

o To delete a tracked change, either accept it or reject it. o To accept one tracked change in Word 2002 or Word 2003, click within the change and then, on the Reviewing toolbar, click the Accept Change button (it's the one with the blue tick). Or, right-click on the tracked change and choose Accept Insertion or Accept Deletion or Accept Format Change etc. o To reject (i.e. delete) one tracked change in Word 2002 or Word 2003, on the Reviewing toolbar, click the Reject Change button (it's the one with the red cross). Or, right-click on the tracked change and choose Reject Insertion or Reject Deletion or Reject Format Change etc.

In Word 2000 and earlier, Tools > Track Changes > Accept or Reject Changes. Click one of the Find buttons (with the green arrow) to go through the changes one by one. Accept or reject the change.

How do I accept or reject all tracked changes in the document in one step? o To accept all changes in Word 2002 or Word 2003: on the Reviewing toolbar, hover over the Accept Change button (the one with the blue tick). Click on the arrow you see to the right of the button. Choose Accept all Changes in Document. To reject (or delete) all changes in Word 2002 or Word 2003: on the Reviewing toolbar, hover over the Reject Change button (the one with the red cross). Click on the arrow you see to the right of the button. Choose Reject all Changes in Document. In Word 2000 and earlier, Tools > Track Changes > Accept or Reject Changes. You can choose to accept or reject all the changes in the document.

Printing tracked changes

Word 2002 and 2003: File > Print. In the "Print What" box, choose Document showing Markup.

Word 2000 and earlier: Tools > Track Changes > Highlight Changes. Tick Highlight Changes in Printed Document.

How do I print out my document without showing the tracked changes?

Word 2002 and 2003: File > Print. In the "Print What" box, choose Document.

Word 2000 and earlier: Tools > Track Changes > Highlight Changes. Un-tick Highlight Changes in Printed Document.

How can I tell if there are Tracked Changes in my document?

In Word 2002 and Word 2003, on the Reviewing toolbar, click the Next button (it's the one with the blue arrow). If the message box says "The document contains no comments or tracked changes", then there are no comments or tracked changes. Otherwise, the cursor will move to the first tracked change in the document. In Word 2000 and earlier, Tools > Track Changes > Accept or Reject Changes. Click one of the Find buttons (with the green arrow).

How can I make sure that Word always displays tracked changes when I open a document?

In Word 2003, Tools > Options > Security. Tick "Make hidden markup visible when opening or saving." This functionality isn't available in earlier versions of Word.

NAME RANGE

You can create Excel names that refer to cells, a range of cells, a constant value, or a formula.

After you define the Excel names, you can use those names in formulas, to replace values or cell references.

If Excel names refer to cells or a range of cells, you can use the names for navigation, to quickly select the Excel named range.

Name a Range - Excel Name Box You can create an Excel named range quickly by typing in the Excel Name Box.
1. Select the cell(s) to be named
2. Click in the Excel Name box, to the left of the formula bar
3. Type a one-word name for the list, e.g. FruitList.
4. Press the Enter key.

Use Excel Names After creating Excel names that refer to a range, you can select an Excel name in the Name Box dropdown list, to select the Excel named range on the worksheet.

You can also use Excel names in formulas. For example, you could have a group of cells with sales amounts for the month of January. Name those cells JanSales, then use this formula to calculate the total amount: =SUM(JanSales)

More Explanation: Instead of using something like = SUM(A2:A5) to add up a column of numbers, you can replace the A2:A5 part of the function with a more descriptive name. This is known as a Named Range. Examine the spreadsheet below:

In the Results Row, cell B5 is a result of adding up cells B2 to B4. The formula used is just this: =Sum(B2:B4) Now examine the same spreadsheet, but with a Named Range used:

This time, cell B5 no longer contains the formula =Sum(B2:B4). As you can see, it has =SUM(Monthly_Totals) instead. This is the label from B1: we have created a Named Range. The formula in cell B5 is now more descriptive; we can tell at a glance what it is we're adding up. Excel has replaced the B2:B4 part with the name we gave it. Behind the scenes, though, we're still adding up the numbers in cells B2 to B4. Excel has just hidden the cell references behind our descriptive name. You'll now see how to create your own Named Ranges.

Creating a Named Range

Start a new spreadsheet, and enter the same data as in the image below:

Make sure you have the same formula in cell B5 =Sum(B2:B4). We're going to create a Named Range, and then pop it in cell B5. To create a Named Range then, do the following:

1. Highlight the B column, from B2 to B4 (don't include the formula cell when highlighting; just highlight the same cells as the ones in the function)
2. From the menu bar, click on Insert
3. From the drop-down menu, select Name

A sub menu appears like the one below:

There's a two-step process involved in setting up a Named Range. The first thing to do is Define the name. You then Apply the name to your formula.

So select Define from the sub menu. The Define Name dialogue box pops up. This one:

With the B column highlighted, Excel will use your label at the top as the name (Monthly_Totals for us). But you can change it if you want. Notice the narrow text box at the bottom, "Refers to": this shows the highlighted cells. Click OK on the dialogue box. You are returned to your spreadsheet. Nothing will happen yet. This is because we haven't done step two of the two-step process: applying the name. To apply your new name to a formula, do this:

1. Click inside the cell where your formula is (B5 in our case)
2. Click on Insert from the menu bar
3. From the drop-down menu, select Name
4. From the sub menu that appears, click on Apply

A dialogue box will appear showing a list of all the Names you have set up.

You'll have only one Name set up, so there's not much to do except click the OK button. When you click OK, Excel should adapt your formula in cell B5. If you've done it right, your spreadsheet should look like the one below:

As you can see, the cell B5 now reads =SUM(Monthly_Totals). Excel has hidden the cell references behind the Name we defined.

Advantages of using Named Ranges

In addition to providing an alternative to repeatedly typing in cell addresses and cell ranges, using named ranges has several other advantages:

1) They improve readability and make your formulas much easier to understand, meaning there is less chance of errors.
2) Moving or shifting cells that have a named range means the formulas are adjusted automatically. There is no need to worry about whether the formulas use absolute or relative references.
3) Inserting and deleting cells, rows or columns will not change the location of your named ranges. Moving cells, rows or columns will, though.
4) Typing a descriptive name is much easier than remembering a specific cell address, simplifying your formulas.
5) You can move to particular areas of your workbook (or worksheet) very quickly, using either the Name Box or the Edit > Goto dialog box.
6) You can create 3-D named ranges that represent the same cell or range of cells across multiple worksheets.
7) They allow you to define Named Constants, which are single, frequently used values.
8) They allow you to define Named Formulas, which are common, frequently used formulas (saving you re-typing them).
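The idea behind a named range can be sketched in plain Python (a toy model for illustration only, not Excel itself): a dictionary maps a descriptive name to a range of cells, and a formula resolves the name before computing. The sheet values and the `SUM` helper below are invented for the example.

```python
# Toy model of an Excel named range: a name maps to a list of cells,
# and a "formula" resolves the name before computing.

# A tiny "worksheet": cell address -> value
sheet = {"B2": 120, "B3": 95, "B4": 143}

# The named-range table, like Insert > Name > Define
names = {"Monthly_Totals": ["B2", "B3", "B4"]}

def SUM(range_name):
    """Resolve a named range to its cells, then add up the values."""
    cells = names[range_name]
    return sum(sheet[cell] for cell in cells)

total = SUM("Monthly_Totals")   # behaves like =SUM(Monthly_Totals)
print(total)
```

Just as in Excel, the name hides the cell references: the caller says what is being added up (Monthly_Totals), while the lookup table remembers where those numbers live.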

WORLD WIDE WEB

The World Wide Web, abbreviated as WWW and commonly known as the Web, is a system of interlinked hypertext documents accessed via the Internet. With a web browser, one can view web pages that may contain text, images, videos, and other multimedia, and navigate between them using hyperlinks. "The World-Wide Web (W3) was developed to be a pool of human knowledge, which would allow collaborators in remote sites to share their ideas and all aspects of a common project." If two projects are created independently, the two bodies of information can still be linked into one cohesive piece of work, without a central figure having to make the changes.

The terms Internet and World Wide Web are often used in every-day speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet is a global system of interconnected computer networks. In contrast, the Web is one of the services that runs on the Internet. It is a collection of interconnected documents and other resources, linked by hyperlinks and URLs. In short, the Web is an application running on the Internet.

Viewing a web page on the World Wide Web normally begins either by typing the URL of the page into a web browser, or by following a hyperlink to that page or resource. The web browser then initiates a series of communication messages, behind the scenes, in order to fetch and display it.

First, the server-name portion of the URL is resolved into an IP address using the global, distributed Internet database known as the domain name system, or DNS. This IP address is necessary to contact the Web server. The browser then requests the resource by sending an HTTP request to the Web server at that particular address. In the case of a typical web page, the HTML text of the page is requested first and parsed immediately by the web browser, which then makes additional requests for images and any other files that form parts of the page. Statistics measuring a website's popularity are usually based either on the number of 'page views' or associated server 'hits' (file requests) that take place.
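After the DNS lookup, the browser's HTTP request is just structured text sent to the server's IP address. The sketch below, using Python's standard `urllib.parse` module, builds a minimal HTTP/1.1 GET request from a URL; `build_request` is a hypothetical helper name, and a real browser would add many more headers.

```python
from urllib.parse import urlparse

def build_request(url):
    """Construct the minimal HTTP/1.1 GET request a browser would
    send to the Web server once DNS has resolved the server name."""
    parts = urlparse(url)
    path = parts.path or "/"          # an empty path means the root document
    if parts.query:
        path += "?" + parts.query     # the query string rides along with the path
    return ("GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "Connection: close\r\n"
            "\r\n").format(path, parts.netloc)

print(build_request("http://example.org/index.html"))
```

The `Host:` header carries the server name from the URL, which is how one IP address can serve many websites.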

While receiving these files from the web server, browsers may progressively render the page onto the screen as specified by its HTML, CSS, and other web languages. Any images and other resources are incorporated to produce the onscreen web page that the user sees.

Most web pages will themselves contain hyperlinks to other related pages and perhaps to downloads, source documents, definitions and other web resources.

EMBEDDED SYSTEMS

An embedded system is a computer system designed to perform one or a few dedicated functions often with real-time computing constraints. It is embedded as part of a complete device often including hardware and mechanical parts. By contrast, a general-purpose computer, such as a personal computer (PC), is designed to be flexible and to meet a wide range of end-user needs. Embedded systems control many devices in common use today.

Physically, embedded systems range from portable devices such as digital watches and MP3 players, to large stationary installations like traffic lights, factory controllers, or the systems controlling nuclear power plants.

Characteristics

Embedded systems are designed to do some specific task, rather than be a general-purpose computer for multiple tasks. Some also have real-time performance constraints that must be met, for reasons such as safety and usability; others may have low or no performance requirements, allowing the system hardware to be simplified to reduce costs.

Embedded systems are not always standalone devices. Many embedded systems consist of small, computerized parts within a larger device that serves a more general purpose. An embedded system in an automobile provides a specific function as a subsystem of the car itself.

The program instructions written for embedded systems are referred to as firmware, and are stored in read-only memory or Flash memory chips. They run with limited computer hardware resources: little memory, small or non-existent keyboard and/or screen.

User interface: Embedded systems range from devices with no user interface at all, dedicated to a single task, to complex graphical user interfaces that resemble modern desktop operating systems. Simple embedded devices use buttons, LEDs, and graphic or character LCDs with a simple menu system. More sophisticated devices use a graphical screen with touch sensing or screen-edge buttons, which provides flexibility while minimizing the space used: the meaning of the buttons can change with the screen, and selection involves the natural behaviour of pointing at what's desired. Handheld systems often have a screen with a "joystick button" for a pointing device.

Processors in embedded systems: Embedded processors can be broken into two broad categories: ordinary microprocessors and microcontrollers, which have many more peripherals on chip, reducing cost and size.

URL
In computing, a Uniform Resource Locator (URL) is a subset of the Uniform Resource Identifier (URI) that specifies where an identified resource is available and the mechanism for retrieving it.

Every URL consists of some of the following: the scheme name (commonly called protocol), followed by a colon; then, depending on the scheme, a hostname (alternatively, an IP address), a port number, and the path of the resource to be fetched or the program to be run; then, for programs such as Common Gateway Interface (CGI) scripts, a query string; and, for HTML documents, an optional anchor for where the page should start to be displayed.

The combined syntax is

resource_type://username:password@domain:port/path?query_string#anchor

The scheme name, or resource type, defines its namespace, purpose, and the syntax of the remaining part of the URL. For example, a Web browser will usually dereference the URL http://example.org:80 by performing an HTTP request to the host example.org, at port number 80. Other examples of scheme names include https:, gopher:, wais: and ftp:. URLs that specify https as a scheme (such as https://example.com/) denote a secure website.

The registered domain name or IP address gives the destination location for the URL. The domain google.com, or its IP address 72.14.207.99, is the address of Google's website. The hostname and domain name portion of a URL are case-insensitive, since DNS is specified to ignore case. http://en.wikipedia.org/ and HTTP://EN.WIKIPEDIA.ORG/ both open the same page.

The port number is optional; if omitted, the default for the scheme is used. For example, if http://myvncserver.no-ip.org:5800 is typed into the address bar of a browser it will connect to port 5800 of myvncserver.no-ip.org; this port is used by the VNC remote control program and would set up a remote control session. If the port number is omitted a browser will connect to port 80, the default HTTP port.

The path is used to find the resource specified. It is case-sensitive, though it may be treated as case-insensitive by some servers, especially those based on Microsoft Windows. If the server is case-sensitive and http://en.wikipedia.org/wiki/URL is correct, http://en.wikipedia.org/WIKI/URL/ or http://en.wikipedia.org/wiki/url/ will display an HTTP 404 error page.

The query string contains data to be passed to web applications such as CGI programs. It contains name/value pairs separated by ampersands, with the name and value in each pair separated by an equals sign, for example first_name=John&last_name=Doe.

The anchor part when used with HTTP specifies a location on the page. For example, http://en.wikipedia.org/wiki/URL#Syntax addresses the beginning of the Syntax section of the page.
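All of the components described above can be pulled apart with Python's standard `urllib.parse` module (the username, password and query values in the sample URL are made up for illustration):

```python
from urllib.parse import urlparse, parse_qs

# A URL exercising every component: scheme, credentials, host, port,
# path, query string and anchor (fragment).
url = ("http://user:secret@en.wikipedia.org:80/wiki/URL"
       "?first_name=John&last_name=Doe#Syntax")
parts = urlparse(url)

print(parts.scheme)            # the scheme name / protocol
print(parts.hostname)          # the domain (case-insensitive)
print(parts.port)              # the port number
print(parts.path)              # the path (case-sensitive)
print(parse_qs(parts.query))   # query string as name/value pairs
print(parts.fragment)          # the anchor
```

Note that `parse_qs` splits the query string on ampersands and equals signs exactly as described above, returning each name mapped to a list of values.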

IT APPLICATION LANDSCAPE IN A COMPANY

1. Applications at HO = Applications at all Regional Offices + Applications specific to the HO
2. Applications at a RO = Applications at all its branches + Applications specific to that RO
3. Applications at a branch

Applications at Branch:

1. Invoicing Accounting System
2. Sales Accounting System
3. Cash/Bank Accounting System
4. Stock Accounting System
5. Attendance Accounting System

Applications at RO

1. Applications at branches getting consolidated at RO level, PLUS
2. Regional warehousing accounting system
3. Payroll accounting system

Applications at HO

Applications at ROs getting consolidated at HO level, PLUS:
1. Purchase accounting system
2. Payroll accounting system
3. Inventory Management Systems
4. Financial Accounting Systems
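The branch-to-RO-to-HO roll-up can be sketched as a simple aggregation. All the figures and unit names below are invented for illustration; the point is that each level's numbers are just the sum of the level below it.

```python
# Illustrative consolidation: branch figures roll up to the RO,
# and RO figures roll up to the HO. All numbers are made up.
branches = {
    "Branch-1": {"sales": 100, "stock": 40},
    "Branch-2": {"sales": 150, "stock": 60},
}

def consolidate(units):
    """Add up each application's figures across all units one level down."""
    totals = {}
    for figures in units.values():
        for app, value in figures.items():
            totals[app] = totals.get(app, 0) + value
    return totals

ro_level = consolidate(branches)                 # RO = all its branches
ho_level = consolidate({"RO-North": ro_level})   # HO = all its ROs
print(ro_level)
```

In a real setup the "figures" would be transaction-file summaries, but the consolidation logic at each level is the same shape.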

VALUE CHAIN ANALYSIS (1985)

The value chain is a systematic approach to examining the development of competitive advantage. It was created by M. E. Porter in his book Competitive Advantage (1985). The chain consists of a series of activities that create and build value, culminating in the total value delivered by an organization. The organization's activities are split into 'primary activities' and 'support activities'.

Primary Activities.

Inbound Logistics: Here goods are received from a company's suppliers. They are stored until they are needed on the production/assembly line, and moved around the organization.

Operations: This is where goods are manufactured or assembled. Individual operations could include room service in a hotel, packing of books/videos/games by an online retailer, or the final tuning of a new car's engine.

Outbound Logistics: The goods are now finished, and they need to be sent along the supply chain to wholesalers, retailers or the final consumer.

Marketing and Sales: In true customer-orientated fashion, at this stage the organization prepares the offering to meet the needs of targeted customers. This area focuses strongly upon marketing communications and the promotions mix.

Service: This includes all areas of service such as installation, after-sales service, complaints handling, training and so on.

Support Activities.

Procurement: This function is responsible for all purchasing of goods, services and materials. The aim is to secure the lowest possible price for purchases of the highest possible quality. They will be responsible for outsourcing (components or operations that would normally be done in-house are done by other organizations), and ePurchasing (using IT and web-based technologies to achieve procurement aims).

Technology Development: Technology is an important source of competitive advantage. Companies need to innovate to reduce costs and to protect and sustain competitive advantage. This could include production technology, Internet marketing activities, lean manufacturing, Customer Relationship Management (CRM), and many other technological developments.

Human Resource Management (HRM): Employees are an expensive and vital resource. An organization would manage recruitment and selection, training and development, and rewards and remuneration. The mission and objectives of the organization would be the driving force behind the HRM strategy.

Firm Infrastructure: This activity includes and is driven by corporate or strategic planning. It includes the Management Information System (MIS) and other mechanisms for planning and control, such as the accounting department.

KAUSHIK FIRST CLASS DEFINITIONS

1) Definition of Universal Computer: The defining feature of a universal computer is programmability, which allows the computer to emulate any other calculating machine by changing a stored sequence of instructions.

2) Definition of Programme: An organized list of instructions that, when executed, causes the computer to behave in a pre-determined manner.

(Diagram: a programme drives storage of program and data, calculation, logic, data storage, and data retrieval.)

3) Counting and calculation devices:


- Tally Stick
- Phoenician Clay
- Abacus
- Slide Rule
- Analog Computers
- Antikythera Mechanism
- Punch Card
- Multiplicative and repetitive addition: by John Napier

Father of modern computing: Wilhelm Schickard built the first mechanical calculator, and so became the father of the computing era.

Mark I: The IBM Automatic Sequence Controlled Calculator (ASCC), called the Mark I by Harvard University, was the first large-scale automatic digital computer in the USA. The electromechanical ASCC was devised by Howard H. Aiken, built at IBM and shipped to Harvard in February 1944. It began computations for the U.S. Navy Bureau of Ships in May and was officially presented to the university on August 7, 1944. The main advantage of the Mark I was that it was fully automatic: it didn't need any human intervention once it started. It was the first fully automatic computer to be completed. It was also very reliable, much more so than early electronic computers. It is considered to be "the beginning of the era of the modern computer" and "the real dawn of the computer age".

IBM 701: The 701 was formally announced on May 21, 1952. It was the unit of the overall 701 Data Processing System in which the actual calculations were performed. That activity involved 274 assemblies executing all the system's computing and control functions by means of electronic pulses emitted at speeds ranging up to one million a second. The 701 contained the arithmetic components, the input and output control circuitry, and the stored program control circuitry. Also mounted on the 701 was the operator's panel. The arithmetic section contained the memory register, accumulator register and the multiplier-quotient register. Each register had a capacity of 35 bits and sign. The accumulator register also had two extra positions called register overflow positions. The control section decoded the stored programs and directed the machine in automatically performing its instructions. Instructions could only be entered into the control section through electrostatic storage or manually from the operator's panel. The entire machine could be manually controlled from the operator's panel through various buttons, keys, switches and signal lights. The operator could manually control the insertion of information into electrostatic storage or the various registers. The contents of the various registers could also be displayed in neon lights for the operator to observe.

The operator's panel was used primarily when beginning an operation on the 701 and when initially testing a program for a new operation. Also included with the Analytical Control Unit were the IBM 736 Power Frame #1, 741 Power Frame #2 and the 746 Power Distribution Unit. Those three power units supplied the power for all units in the 701 system. The functional machine cycle of the 701 was 12 microseconds; the time required to execute an instruction or a sequence of instructions was an integral multiple of this cycle. Thirty-eight such cycles, or 456 microseconds, were required for the execution of a multiply or divide instruction. The 701 could execute 33 different operations. The monthly rental for a 701 unit was approximately $8,100. The 701 was withdrawn from marketing on October 1, 1954.

Integrated Circuits: Our world is full of integrated circuits. You find several of them in computers. For example, most people have probably heard about the microprocessor. The microprocessor is an integrated circuit that processes all information in the computer. It keeps track of what keys are pressed and if the mouse has been moved. It counts numbers and runs

programs, games and the operating system. Integrated circuits are also found in almost every modern electrical device such as cars, television sets, CD players, cellular phones, etc. The integrated circuit is nothing more than a very advanced electric circuit. An electric circuit is made from different electrical components such as transistors, resistors, capacitors and diodes, that are connected to each other in different ways. These components have different behaviors. The transistor acts like a switch. It can turn electricity on or off, or it can amplify current. It is used for example in computers to store information, or in stereo amplifiers to make the sound signal stronger. The resistor limits the flow of electricity and gives us the possibility to control the amount of current that is allowed to pass. Resistors are used, among other things, to control the volume in television sets or radios. The capacitor collects electricity and releases it all in one quick burst; like for instance in cameras where a tiny battery can provide enough energy to fire the flashbulb. The diode stops electricity under some conditions and allows it to pass only when these conditions change. This is used in, for example, photocells where a light beam that is broken triggers the diode to stop electricity from flowing through it. These components are like the building blocks in an electrical construction kit. Depending on how the components are put together when building the circuit, everything from a burglar alarm to a computer microprocessor can be constructed. Of the components mentioned above, the transistor is the most important one for the development of modern computers. Before the transistor, engineers had to use vacuum tubes. Integrated circuits were an essential breakthrough in electronics -- allowing a large amount of circuitry to be mass-produced in reusable components with high levels of functionality. 
Without integrated circuits, many modern things we take for granted would be impossible: the desktop computer is a good example -- building one without integrated circuits would require enormous amounts of power and space; nobody's home would be large enough to contain one, never mind carrying one around like a notebook.

Third Generation Computers (1964-1971): Although transistors were a great improvement over vacuum tubes, they generated heat and damaged the sensitive areas of the computer. The Integrated Circuit (IC) was invented in 1958 by Jack Kilby. It combined electronic components onto a small silicon disc made from quartz, and further advances made it possible to fit even more components on a single chip, or semiconductor. Also in third generation computers, operating systems allowed the machines to run many different applications, monitored and coordinated from the computer's memory. A third generation computer is one built with small-scale-integration integrated circuits, designed after the mid-1960s. Third generation computers use semiconductor memories in addition to, and later instead of, ferrite core memory. The two main types of semiconductor memory are Read-Only Memory (ROM) and read-and-write memories called random-access memory (RAM). A technique called microprogramming became widespread and simplified the design of CPUs and increased their flexibility. This also made possible the development of operating systems as software rather than as hard-wiring. A variety of techniques for improving processing efficiency were invented, such as pipelining (parallel operation of functional units processing a single instruction) and multiprocessing (concurrent execution of multiple programs). As the execution of a program requires that program to be in memory, the concurrent running of several programs requires that all programs be in memory simultaneously. Thus the development of techniques for concurrent processing was matched by the development of memory management techniques such as dynamic memory allocation, virtual memory, and paging, as well as compilers producing relocatable code. The ILLIAC IV is an example of a third generation computer. The CTSS (Compatible Time-Sharing System) was developed at MIT in the early 1960s and had a considerable influence on the design of subsequent time-sharing operating systems. An interesting contrasting development in this generation was the start of mass production of small low-cost "minicomputers".

Von Neumann Architecture: All computers share the same basic architecture, whether it is a multi-million dollar mainframe or a Palm Pilot. All have memory, an I/O system, an arithmetic/logic unit, and a control unit. This type of architecture is named Von Neumann architecture, after the mathematician who conceived of the design.

Memory: Computer Memory is that subsystem that serves as temporary storage for all program instructions and data that are being executed by the computer. It is typically called RAM. Memory is divided up into cells, each cell having a unique address so that the data can be fetched.

Input / Output: This is the subsystem that allows the computer to interact with other devices and communicate to the outside world. It also is responsible for program storage, such as hard drive control.

Arithmetic/Logic Unit: This is the subsystem that performs all arithmetic operations and comparisons for equality. In the Von Neumann design, this and the Control Unit are separate components, but in modern systems they are integrated into the processor. The ALU has 3 sections: the registers, the ALU circuitry, and the pathways in between. A register is basically a storage cell that works like RAM and holds the results of the calculations. It is much faster than RAM and is addressed differently. The ALU circuitry is what actually performs the calculations, and it is designed from AND, OR, and NOT gates just as any chip is. The pathways in between are self-explanatory: pathways for electrical current within the ALU.

Control Unit: The control unit has the responsibility of (1) fetching from memory the next program instruction to be run, (2) decoding it to determine what needs to be done, and then (3) issuing the proper commands to the ALU, memory and I/O controllers to get the job done. These steps are repeated continuously until the last line of the program is done, which is usually QUIT or STOP.

At the machine level, the instructions executed by the computer are expressed in machine language. Machine Language is in binary code and is organized by op code and address fields. Op codes are special binary codes that tell the computer what operations to carry out. The address fields are locations in memory on which that particular op code will act. All machine language instructions are organized with the op code first, then the memory addresses following. The set of all operations a processor can do is called its instruction set.
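The op-code/address layout and the fetch-decode-execute cycle can be sketched as a toy machine. The op codes below (LOAD, ADD, STORE, STOP) and the memory layout are invented for illustration; they are not any real processor's instruction set.

```python
# Toy Von Neumann machine: each instruction is an (op code, address)
# pair, stored in the same memory as the data it operates on.
LOAD, ADD, STORE, STOP = 1, 2, 3, 0

memory = [
    (LOAD, 5),   # 0: accumulator <- memory[5]
    (ADD, 6),    # 1: accumulator += memory[6]
    (STORE, 7),  # 2: memory[7] <- accumulator
    (STOP, 0),   # 3: halt
    None,        # 4: unused
    10,          # 5: data
    32,          # 6: data
    0,           # 7: the result goes here
]

acc, pc = 0, 0               # accumulator register and program counter
while True:
    op, addr = memory[pc]    # fetch the next instruction
    pc += 1
    if op == LOAD:           # decode and execute
        acc = memory[addr]
    elif op == ADD:
        acc += memory[addr]
    elif op == STORE:
        memory[addr] = acc
    elif op == STOP:
        break

print(memory[7])  # 42
```

Note how the op code comes first and the address field follows, exactly as the paragraph describes, and how the control loop does nothing but fetch, decode and issue commands.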

4) Personal Computers:

A single-user computer: a computer built around a microprocessor for use by an individual, as in an office or at home or school. The term was very popular in the 1980s, when individuals began to purchase their own computers for the first time in history. "Microcomputer" was another popular term. Today, the terms PC, desktop, laptop and just plain "computer" are synonymous with personal computer.

Personal Computer Timeline: The industry began in 1977, when Apple, Radio Shack and Commodore introduced the first off-the-shelf computers as consumer products. The first machines used an 8-bit microprocessor with a maximum of 64K of memory and floppy disks for storage. The Apple II, Atari 500, and Commodore 64 became popular home computers, and Apple found success in business after the VisiCalc spreadsheet was introduced. However, the business world was soon dominated by the Z80 processor and CP/M operating system, used by many vendors in the early 1980s, such as Vector Graphic, NorthStar, Osborne and Kaypro. By 1983, hard disks began to show up, but CP/M was soon to be history.

Goodbye CP/M, Hello DOS

Early 1980s - dBASE, Lotus and the Clones

Mid-1980s - Apple's Lisa and Mac

Late 1980s - The Mac Gained Ground

The 1990s - The Winner Is Windows: In the early 1990s, Gateway and other mail-order vendors began to slash hardware prices. All the others followed, and the PC price wars began.

The End of the 1990s - Dot-Com Fever

The 21st Century - The Smartphone

When personal computers were introduced in the late 1970s and early 1980s, they were bought to solve individual problems, such as automating a budget or typing a letter. Within a few years, an entire industry sprang up to support them. All of a sudden, so it seemed, the personal computer became a desktop appliance in every office throughout the developed world. Networked with the organization's mainframes and departmental computers, it became an integral part of the technology infrastructure of every company, small, medium and large. Evolving into an indispensable appliance in almost every home in the developed world, no single technology has impacted more people than the personal computer.

5) Bits: A bit is the basic unit of information in computing and telecommunications; it is the amount of information that can be stored by a device or other physical system that can normally exist in only two distinct states. These may be the two stable positions of an electrical switch, two distinct voltage or current levels allowed by a circuit, two distinct levels of light intensity, two directions of magnetization or polarization, etc. In computing, a bit can also be defined as a variable or computed quantity that can have only two possible values. These two values are often interpreted as binary digits and are usually denoted by the Arabic numerical digits 0 and 1. Indeed, the term "bit" is a contraction of binary digit. The two values can also be interpreted as logical values (true/false, yes/no), algebraic signs (+/-), activation states (on/off), or any other two-valued attribute. In several popular programming languages, numeric 0 is equivalent (or convertible) to logical false, and 1 to true. The correspondence between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program.
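The 0/1 and true/false correspondence described above shows up directly in most programming languages; here it is in Python:

```python
# A bit stores one of two values; eight of them make a byte.
bits = format(75, "08b")   # the number 75 as eight binary digits
print(bits)                # 01001011

# The same two values read as logical true/false:
assert int(True) == 1 and int(False) == 0
assert bool(1) is True and bool(0) is False

# Reassembling the digits recovers the original number,
# so the bit pattern and the number are two views of one value.
assert int(bits, 2) == 75
```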

6) QDOS: QDOS was the forerunner of DOS (Disk Operating System), the first widely-used personal computer operating system. In 1980, when IBM was making plans to enter the personal computer market, it asked Bill Gates, the young owner of a small company called Microsoft, if they could locate an operating system for the new PC that IBM was developing. Microsoft, which had previously furnished IBM with a BASIC language product for the IBM PC, looked around and found an operating system called 86-DOS at a small company called Seattle Computer Products.

86-DOS - often referred to as QDOS, or Quick and Dirty Operating System - was written in six weeks by Tim Paterson, based on ideas in CP/M (Control Program for Microcomputers), an operating system popular with early personal computer users. 86-DOS was designed for use with Seattle Computer's Intel 8086-based computers. It contained about 4,000 lines of assembler language code. Microsoft bought 86-DOS from Seattle Computer Products for $50,000, revised it, renamed it MS-DOS, and then delivered it to IBM for its new PC.

IBM rewrote MS-DOS after finding 300 bugs in it and renamed it PC-DOS, which is why both IBM and Microsoft hold a copyright for it. Bill Gates saw the potential for MS-DOS and persuaded IBM to let Microsoft sell it separately from IBM's PC projects. The initial IBM PC actually offered the user a choice of one of three operating systems: PC-DOS, CP/M-86, and the UCSD p-System, a Pascal-based system. PC-DOS, which was cheaper, proved the most popular and began to come bundled with the IBM PC in its second product release. The IBM PC brought personal computing to the business world for the first time and was successful beyond IBM's imaginings. In 18 months, IBM introduced the PC-XT, which included a hard drive loaded with a newer version of DOS. Microsoft promised a multitasking DOS, but that never happened. Instead, Microsoft developed Windows with multitasking features.

7) Operating System Functions: The operating system is the core software component of your computer. It performs many functions and is, in very basic terms, an interface between your computer and the outside world. In the section about hardware, a computer is described as consisting of several component parts, including your monitor, keyboard, mouse, and other parts. The operating system provides an interface to these parts using what are referred to as "drivers". This is why, when you install a new printer or other piece of hardware, your system will sometimes ask you to install more software called a driver. A driver is a specially written program which understands the operation of the device it interfaces to, such as a printer, video card, sound card or CD-ROM drive. It translates commands from the operating system or user into commands understood by the component computer part it interfaces with. It also translates responses from the component computer part back to responses that can be understood by the operating system, application program, or user. The diagram below gives a graphical depiction of the interfaces between the operating system and the computer components.
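The translating role of a driver can be sketched as a thin adapter layer. Everything below (the class names, the "device commands") is invented for illustration; real drivers speak hardware registers and interrupts, not strings.

```python
# Toy sketch of drivers: the OS issues one generic command,
# and each driver translates it into its own device's protocol.
class PrinterDriver:
    def write(self, text):
        # Translate the generic request into printer-specific commands
        return ["RESET", "FEED-PAGE", "PRINT:" + text]

class ScreenDriver:
    def write(self, text):
        # The same request becomes different commands for a screen
        return ["CLEAR", "DRAW:" + text]

def os_write(driver, text):
    """The OS only knows the generic interface, not the device details."""
    return driver.write(text)

commands = os_write(PrinterDriver(), "Hello")
print(commands)
```

Because the operating system only calls the shared `write` interface, a new device just needs a new driver; nothing in the OS or application has to change.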

Other Operating System Functions

The operating system provides for several other functions including:

- System tools (programs) used to monitor computer performance, debug problems, or maintain parts of the system.
- A set of libraries or functions which programs may use to perform specific tasks, especially relating to interfacing with computer system components.

The operating system makes these interfacing functions, along with its other functions, operate smoothly, and these functions are mostly transparent to the user.

Operating System Concerns

As mentioned previously, an operating system is a computer program. Operating systems are written by human programmers who make mistakes. Therefore there can be errors in the code even though there may be some testing before the product is released. Some companies have better software quality control and testing than others so you may notice varying levels of quality from operating system to operating system.

Errors in operating systems cause three main types of problems:

- System crashes and instabilities - These can happen due to a software bug, typically in the operating system, although programs running on the operating system can make the system more unstable or may even crash the system by themselves. This varies depending on the type of operating system. A system crash is the system freezing and becoming unresponsive, forcing the user to reboot.
- Security flaws - Some software errors leave a door open for the system to be broken into by unauthorized intruders. As these flaws are discovered, intruders may try to use them to gain illegal access to your system. Patching these flaws promptly will help keep your computer system secure. How this is done will be explained later.
- Peripheral incompatibilities - Sometimes errors in the operating system will cause the computer not to work correctly with some peripheral devices, such as printers.

Operating System Types

There are many types of operating systems. The most common is the Microsoft suite of operating systems. They include from most recent to the oldest:

- Windows XP Professional Edition - A version used by many businesses on workstations. It has the ability to become a member of a corporate domain.
- Windows XP Home Edition - A lower-cost version of Windows XP which is for home use only and should not be used at a business.
- Windows 2000 - An improved version of the Windows NT operating system which works well both at home and as a workstation at a business. It includes technologies which allow hardware to be automatically detected, along with other enhancements over Windows NT.
- Windows ME - An upgraded version of Windows 98, but it has historically been plagued with programming errors which may be frustrating for home users.
- Windows 98 - This was produced in two main versions. The first Windows 98 version was plagued with programming errors, but the Windows 98 Second Edition which came out later was much better, with many errors resolved.
- Windows NT - A version of Windows made specifically for businesses, offering network administrators better control over workstation capabilities.
- Windows 95 - The first version of Windows after the older Windows 3.x versions, offering a better interface and better library functions for programs.

There are other worthwhile types of operating systems not made by Microsoft. The greatest problem with these operating systems lies in the fact that not as many application programs are written for them.

However if you can get the type of application programs you are looking for, one of the systems listed below may be a good choice.

- Unix - A system that has been around for many years and is very stable. It is primarily used as a server rather than a workstation and should not be used by anyone who does not understand the system; it can be difficult to learn. Unix must normally run on a computer made by the same company that produces the software.
- Linux - Linux is similar to Unix in operation, but it is free. It also should not be used by anyone who does not understand the system, and it can be difficult to learn.
- Apple Macintosh - Most recent versions are based on Unix but have a good graphical interface, so the system is both stable (it does not crash often or have as many software problems as other systems may have) and easy to learn. One drawback to this system is that it can only be run on Apple-produced hardware.

CPU

The CPU is the brain of the computer; it takes care of all the computations and processes. The components of the CPU include:

- CU (Control Unit): Directs and manages the activities of the processor.
- ALU (Arithmetic and Logic Unit): Performs arithmetic and logical operations (+, -, x, /, >, <, =).
- FPU (Floating Point Unit): Performs division and large decimal operations.
- Cache Memory: Predicts and anticipates the data that the processor needs.
- I/O Unit (Input/Output Unit): The gateway for the processor.
- Registers: Hold temporary data for a specific purpose or function.
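The ALU's role in the list above can be pictured as a small dispatch table of the arithmetic and logical operations it supports. This is a toy software model for illustration only, not how silicon actually works.

```python
# Toy model of an ALU: a table mapping the operations listed above
# (+, -, x, /, >, <, =) to functions that act on two register values.
import operator

ALU_OPS = {
    "+": operator.add,
    "-": operator.sub,
    "x": operator.mul,
    "/": operator.truediv,
    ">": operator.gt,
    "<": operator.lt,
    "=": operator.eq,
}

def alu(op, a, b):
    """Perform one ALU operation on two operand values."""
    return ALU_OPS[op](a, b)

print(alu("+", 6, 7))   # 13
print(alu(">", 6, 7))   # False
```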

Microprocessor Speed

Microprocessor speeds can be measured in a variety of ways:

- Megahertz (MHz)
- MIPS (millions of instructions per second)
- Megaflops (millions of floating-point operations per second)
- FSB (front-side bus speed)
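Clock speed and MIPS are related through how many clock cycles an average instruction takes (CPI). A rough, simplified calculation (real processors vary widely, so treat this only as an illustration of the relationship):

```python
# Simplified relationship between clock speed and MIPS:
#   instructions per second = clock rate / average cycles per instruction (CPI)
# This ignores pipelining, caches, and superscalar execution, which make
# real CPI figures vary from program to program.

def mips(clock_hz, cycles_per_instruction):
    """Millions of instructions per second for a given clock rate and CPI."""
    return (clock_hz / cycles_per_instruction) / 1_000_000

# A hypothetical 2 GHz processor averaging 2 cycles per instruction:
print(mips(2_000_000_000, 2))  # 1000.0 (i.e. 1000 MIPS)
```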

Types of Processors

Hyper-Threading

A technology developed by Intel that enables multithreaded software applications to execute threads in parallel on a single processor, instead of processing threads one after another in a linear fashion. Older systems took advantage of dual processors by splitting instructions into multiple streams in software, so that more than one processor could act on them at once.

Load Balancing

Load balancing is a technique to distribute workload evenly across two or more computers, network links, CPUs, hard drives, or other resources, in order to get optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may also increase reliability through redundancy. The load balancing service is usually provided by a dedicated program or hardware device (such as a multilayer switch or a DNS server). It is commonly used to mediate internal communications in computer clusters, especially high-availability clusters. If the load on one server becomes too high, a secondary server takes over some of the load while the first continues processing requests.
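One of the simplest load-balancing strategies is round robin, where incoming requests are handed to each server in the pool in turn. A minimal sketch (the server names are hypothetical, and real balancers add health checks and weighting):

```python
# Minimal round-robin load balancer: each incoming request is routed
# to the next server in the pool, distributing the workload evenly.
from itertools import cycle
from collections import Counter

class RoundRobinBalancer:
    def __init__(self, servers):
        self._servers = cycle(servers)   # endless rotation over the pool

    def route(self, request):
        """Return the server chosen to handle this request."""
        return next(self._servers)

balancer = RoundRobinBalancer(["server-a", "server-b", "server-c"])
assignments = Counter(balancer.route(f"req-{i}") for i in range(9))
print(assignments)  # each of the three servers receives 3 of the 9 requests
```

Round robin is stateless and cheap but ignores how busy each server actually is; strategies such as least-connections address that at the cost of tracking per-server state.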
