
Computers

Computer
A computer is a machine that manipulates data according to a list of instructions.
The first devices that resemble modern computers date to the mid-20th century (1940-1945), although the computer concept and various machines similar to computers existed earlier. Early electronic computers were the size of a large room, consuming as much power as several hundred modern personal computers.[1] Modern computers are based on tiny integrated circuits and are millions to billions of times more capable while occupying a fraction of the space.[2] Today, simple computers may be made small enough to fit into a wristwatch and be powered from a watch battery. Personal computers, in various forms, are icons of the Information Age and are what most people think of as "a computer"; however, the most common form of computer in use today is the embedded computer. Embedded computers are small, simple devices that are used to control other devices; for example, they may be found in machines ranging from fighter aircraft to industrial robots, digital cameras, and children's toys.
The ability to store and execute lists of instructions called programs makes computers extremely versatile and distinguishes them from calculators. The Church-Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers with capability and complexity ranging from that of a personal digital assistant to a supercomputer are all able to perform the same computational tasks given enough time and storage capacity.
History of computing
It is difficult to identify any one device as the earliest computer, partly because the term "computer" has been subject to varying interpretations over time. Originally, the term "computer" referred to a person who performed numerical calculations (a human computer), often with the aid of a mechanical calculating device.
The history of the modern computer begins with two separate technologies - that
of automated calculation and that of programmability.
Examples of early mechanical calculating devices included the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150-100 BC). Hero of Alexandria (c. 10-70 AD) built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered to be a means of deciding which parts of the mechanism performed which actions and when.[3] This is the essence of programmability.
The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliest programmable analog computer.[4] It displayed the zodiac, the solar and lunar orbits, a crescent moon-shaped pointer travelling across a gateway causing automatic doors to open every hour,[5][6] and five robotic musicians who played music when struck by levers operated by a camshaft attached to a water wheel. The length of day and night could be re-programmed every day in order to account for the changing lengths of day and night throughout the year.[4]

The end of the Middle Ages saw a re-invigoration of European mathematics and engineering, and Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers. However, none of those devices fit the modern definition of a computer because they could not be programmed.
In 1801, Joseph Marie Jacquard made an improvement to the textile loom that used a series of punched paper cards as a template to allow his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.
It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer that he called "The Analytical Engine".[7] Due to limited finances, and an inability to resist tinkering with the design, Babbage never actually built his Analytical Engine.
Large-scale automated data processing of punched cards was performed for the U.S. Census in 1890 by tabulating machines designed by Herman Hollerith and manufactured by the Computing Tabulating Recording Corporation, which later became IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.
During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.
A succession of steadily more powerful and flexible computing devices were constructed in the 1930s and 1940s, gradually adding the key features that are seen in modern computers. The use of digital electronics (largely invented by Claude Shannon in 1937) and more flexible programmability were vitally important steps, but defining one point along this road as "the first digital electronic computer" is difficult (Shannon 1940). Notable achievements include:
* Konrad Zuse's electromechanical "Z machines". The Z3 (1941) was the first working machine featuring binary arithmetic, including floating point arithmetic, and a measure of programmability. In 1998 the Z3 was proved to be Turing complete, therefore being the world's first operational computer.
* The non-programmable Atanasoff-Berry Computer (1941) which used vacuum tube based computation, binary numbers, and regenerative capacitor memory.
* The secret British Colossus computers (1943),[8] which had limited programmability but demonstrated that a device using thousands of tubes could be reasonably reliable and electronically reprogrammable. It was used for breaking German wartime codes.
* The Harvard Mark I (1944), a large-scale electromechanical computer with limited programmability.
* The U.S. Army's Ballistics Research Laboratory ENIAC (1946), which used decimal arithmetic and is sometimes called the first general purpose electronic computer (since Konrad Zuse's Z3 of 1941 used electromagnets instead of electronics). Initially, however, ENIAC had an inflexible architecture which essentially required rewiring to change its programming.
Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the "stored program architecture" or von Neumann architecture. This design was first formally described by John von Neumann in the paper First Draft of a Report on the EDVAC, distributed in 1945. A number of projects to develop computers based on the stored-program architecture commenced around this time, the first of these being completed in Great Britain. The first to be demonstrated working was the Manchester Small-Scale Experimental Machine (SSEM or "Baby"), while the EDSAC, completed a year after SSEM, was the first practical implementation of the stored program design. Shortly thereafter, the EDVAC, the machine originally described by von Neumann's paper, was completed but did not see full-time use for an additional two years.
Nearly all modern computers implement some form of the stored-program architecture, making it the single trait by which the word "computer" is now defined. While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture.
Computers that used vacuum tubes as their electronic elements were in use throughout the 1950s. Vacuum tube electronics were largely replaced in the 1960s by transistor-based electronics, which are smaller, faster, cheaper to produce, require less power, and are more reliable. In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, further decreased size and cost and further increased speed and reliability of computers. By the 1980s, computers became sufficiently small and cheap to replace simple mechanical controls in domestic appliances such as washing machines. The 1980s also witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers are becoming as common as the television and the telephone in the household.
Stored program architecture
Main articles: Computer program and Computer programming
The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that a list of instructions (the program) can be given to the computer and it will store them and carry them out at some time in the future.
In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.
Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.
Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in about a millionth of a second.[9]
However, computers cannot "think" for themselves in the sense that they only solve problems in exactly the way they are programmed to. An intelligent human faced with the above addition task might soon realize that instead of actually adding up all the numbers one can simply use the equation

1 + 2 + 3 + ... + n = n(n + 1) / 2

and arrive at the correct answer (500,500) with little work.[10] In other words, a computer programmed to add up the numbers one by one as in the example above would do exactly that without regard to efficiency or alternative solutions.
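The contrast above can be sketched in a few lines of Python (a modern illustration of the idea, not anything from the period):

```python
def sum_by_repeated_addition(n):
    """Add the numbers 1..n one by one, as a naively programmed computer would."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_by_formula(n):
    """Use the closed-form expression n(n+1)/2 directly, as the human would."""
    return n * (n + 1) // 2

print(sum_by_repeated_addition(1000))  # 500500
print(sum_by_formula(1000))            # 500500
```

Both routes reach 500,500; the computer running the first one simply performs the thousand additions it was told to perform.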

Programs
In practical terms, a computer program may run from just a few instructions to many millions of instructions, as in a program for a word processor or a web browser. A typical modern computer can execute billions of instructions per second (gigahertz or GHz) and rarely make a mistake over many years of operation. Large computer programs comprising several million instructions may take teams of programmers years to write, so it is highly unlikely that the entire program has been written without error.
Errors in computer programs are called "bugs". Bugs may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases they may cause the program to "hang" (become unresponsive to input such as mouse clicks or keystrokes) or to completely fail or "crash". Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an "exploit": code designed to take advantage of a bug and disrupt a program's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[11]
In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode, the command to multiply them would have a different opcode and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer just as if they were numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
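The idea that instructions and data share one memory can be sketched with an invented three-instruction machine; the opcode numbers (1 = ADD, 2 = MUL, 0 = HALT) and the instruction layout are made up purely for illustration:

```python
# Hypothetical machine: each instruction occupies three cells (opcode,
# address of operand A, address of operand B); the result overwrites A.
# Instructions and data live side by side in the same list of numbers.
memory = [
    1, 9, 10,   # ADD:  mem[9] = mem[9] + mem[10]
    2, 9, 11,   # MUL:  mem[9] = mem[9] * mem[11]
    0, 0, 0,    # HALT
    4, 6, 5,    # data in cells 9, 10 and 11
]

pc = 0  # program counter: which cell the next instruction starts at
while memory[pc] != 0:
    opcode, a, b = memory[pc], memory[pc + 1], memory[pc + 2]
    if opcode == 1:
        memory[a] = memory[a] + memory[b]
    elif opcode == 2:
        memory[a] = memory[a] * memory[b]
    pc += 3

print(memory[9])  # (4 + 6) * 5 = 50
```

Nothing in `memory` marks a cell as "instruction" or "data"; only the program counter's position gives some numbers the role of instructions, which is exactly the von Neumann property described above.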
While it is possible to write computer programs as long lists of numbers (machine language) and this technique was used with many early computers,[12] it is extremely tedious to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember: a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held videogame) cannot understand the machine language of an Intel Pentium or the AMD Athlon 64 computer that might be in a PC.[13]
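At its core, an assembler is a translation table from mnemonics to opcodes. A minimal sketch, with an invented opcode assignment for the mnemonics named above:

```python
# Hypothetical opcode numbers; a real architecture defines its own encoding.
OPCODES = {"ADD": 1, "SUB": 2, "MULT": 3, "JUMP": 4}

def assemble(lines):
    """Translate lines like 'ADD 5' into (opcode, operand) number pairs."""
    machine_code = []
    for line in lines:
        mnemonic, operand = line.split()
        machine_code.append((OPCODES[mnemonic], int(operand)))
    return machine_code

print(assemble(["ADD 5", "MULT 3", "JUMP 0"]))
# [(1, 5), (3, 3), (4, 0)]
```

A real assembler also resolves labels, addressing modes and data directives, but the essential job is this mechanical substitution of names by numbers.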
Though considerably easier than in machine language, writing long programs in assembly language is often difficult and error prone. Therefore, most complicated programs are written in more abstract high-level programming languages that are able to express the needs of the computer programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[14] Since high level languages are more abstract than assembly language, it is possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.
The task of developing large software systems is an immense intellectual effort. Producing software with an acceptably high reliability on a predictable schedule and budget has proved historically to be a great challenge; the academic and professional discipline of software engineering concentrates specifically on this problem.
Example
Suppose a computer is being employed to drive a traffic light. A simple stored program might say:
1. Turn off all of the lights
2. Turn on the red light
3. Wait for sixty seconds
4. Turn off the red light
5. Turn on the green light
6. Wait for sixty seconds
7. Turn off the green light
8. Turn on the yellow light
9. Wait for two seconds
10. Turn off the yellow light
11. Jump to instruction number (2)
With this set of instructions, the computer would cycle the light continually through red, green, yellow and back to red again until told to stop running the program.
However, suppose there is a simple on/off switch connected to the computer that is intended to be used to make the light flash red while some maintenance operation is being performed. The program might then instruct the computer to:
1. Turn off all of the lights
2. Turn on the red light
3. Wait for sixty seconds
4. Turn off the red light
5. Turn on the green light
6. Wait for sixty seconds
7. Turn off the green light
8. Turn on the yellow light
9. Wait for two seconds
10. Turn off the yellow light
11. If the maintenance switch is NOT turned on then jump to instruction number 2
12. Turn on the red light
13. Wait for one second
14. Turn off the red light
15. Wait for one second
16. Jump to instruction number 11
In this manner, the computer is either running the instructions from number (2) to (11) over and over or it is running the instructions from (11) down to (16) over and over, depending on the position of the switch.[15]
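The second program above can be sketched as a list of instructions walked by a program counter; the representation (tuples, a `jump_if_not_switch` operation, waits recorded rather than slept, a bounded step count so the demo terminates) is invented for illustration:

```python
def run(program, switch_on, steps=40):
    """Execute the instruction list for a fixed number of steps."""
    log, pc = [], 0
    for _ in range(steps):
        op, arg = program[pc]
        if op == "jump":                    # unconditional jump (instruction 16)
            pc = arg
        elif op == "jump_if_not_switch":    # conditional jump (instruction 11)
            pc = arg if not switch_on else pc + 1
        else:
            log.append((op, arg))           # a "turn" or "wait" instruction
            pc += 1
    return log

# Instructions 1-16 from the text; jump targets are list indices
# (instruction number minus one).
PROGRAM = [
    ("turn", "all lights off"),   # 1
    ("turn", "red on"),           # 2
    ("wait", 60),                 # 3
    ("turn", "red off"),          # 4
    ("turn", "green on"),         # 5
    ("wait", 60),                 # 6
    ("turn", "green off"),        # 7
    ("turn", "yellow on"),        # 8
    ("wait", 2),                  # 9
    ("turn", "yellow off"),       # 10
    ("jump_if_not_switch", 1),    # 11: back to instruction 2
    ("turn", "red on"),           # 12
    ("wait", 1),                  # 13
    ("turn", "red off"),          # 14
    ("wait", 1),                  # 15
    ("jump", 10),                 # 16: back to instruction 11
]

# With the switch on, the light settles into the flashing-red loop.
print(run(PROGRAM, switch_on=True)[-4:])
```

Flipping `switch_on` changes which of the two loops the program counter cycles through, just as flipping the physical switch would.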

How computers work


Main articles: Central processing unit and Microprocessor
A general purpose computer has four main sections: the arithmetic and logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.
The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with these) are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components but since the mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.
Control unit
Main articles: CPU design and Control unit
The control unit (often called a control system or central controller) directs the various components of a computer. It reads and interprets (decodes) instructions in the program one by one. The control system decodes each instruction and turns it into a series of control signals that operate the other parts of the computer.[16] Control systems in advanced computers may change the order of some instructions so as to improve performance.
A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[17]
Diagram showing how a particular MIPS architecture instruction would be decoded by the control system.


The control system's function is as follows (note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU):
1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
8. Jump back to step (1).
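The numbered steps above form a fetch-decode-execute loop, which can be sketched for a toy CPU; the instruction set (LOAD/ADD/STORE/JUMP/HALT), its encoding as tuples, and the single-accumulator design are all invented for this sketch:

```python
def run_cpu(memory):
    """A toy fetch-decode-execute loop over a shared instruction/data list."""
    pc = 0       # program counter (step 1 reads from here)
    acc = 0      # a single register, the accumulator
    while True:
        opcode, operand = memory[pc]      # 1. fetch the next instruction
        pc += 1                           # 3. increment the program counter
        if opcode == "LOAD":              # 2. decode, then 4-7. execute
            acc = memory[operand]         #    read required data from memory
        elif opcode == "ADD":
            acc += memory[operand]        #    hand data to the "ALU"
        elif opcode == "STORE":
            memory[operand] = acc         #    write the result back to memory
        elif opcode == "JUMP":
            pc = operand                  #    a jump simply rewrites pc
        elif opcode == "HALT":
            return memory

# Program: mem[5] = mem[6] + mem[7]
memory = [("LOAD", 6), ("ADD", 7), ("STORE", 5), ("HALT", 0), 0, 0, 40, 2]
print(run_cpu(memory)[5])  # 42
```

Each pass through the `while` loop is one iteration of steps 1 through 8; `HALT` is the only way out, and `JUMP` shows how rewriting the program counter redirects execution.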
Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).
It is noticeable that the sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program - and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer that runs a microcode program that causes all of these events to happen.

Arithmetic/logic unit (ALU)


The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting or might include multiplying or dividing, trigonometry functions (sine, cosine, etc.) and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?").
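The claim that a complex operation can be broken into simple steps is easy to sketch: a machine whose ALU only adds can still multiply, at the cost of extra time.

```python
def multiply_by_addition(a, b):
    """Multiply two non-negative integers using only addition."""
    result = 0
    for _ in range(b):
        result += a        # b repeated additions of a
    return result

print(multiply_by_addition(7, 6))  # 42
```

Hardware without a multiply instruction does something similar (usually a smarter shift-and-add variant), which is why unsupported operations run correctly but more slowly.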
Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful both for creating complicated conditional statements and processing boolean logic.
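The four logic operations named above act bit by bit on binary values, which Python's bitwise operators can demonstrate directly:

```python
a, b = 0b1100, 0b1010      # two 4-bit patterns

print(bin(a & b))          # AND: 0b1000 (bits set in both)
print(bin(a | b))          # OR:  0b1110 (bits set in either)
print(bin(a ^ b))          # XOR: 0b110  (bits set in exactly one)
print(bin(~a & 0b1111))    # NOT: 0b11   (bits flipped, masked to 4 bits)
```

An ALU applies the same operation to every bit position of its operands at once.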
Superscalar computers contain multiple ALUs so that they can process several instructions at the same time. Graphics processors and computers with SIMD and MIMD features often provide ALUs that can perform arithmetic on vectors and matrices.
Memory
Main article: Computer storage
Magnetic core memory was popular main memory for computers through the 1960s until it was completely replaced by semiconductor memory.

A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is up to the software to give significance to what the memory sees as nothing but a series of numbers.
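The memory model just described maps directly onto a list of numbered cells; the cell addresses below are the ones used in the text, and the value 77 in cell 2468 is an arbitrary stand-in:

```python
memory = [0] * 4096          # 4096 cells, each holding one number

memory[1357] = 123           # "put the number 123 into the cell numbered 1357"

memory[2468] = 77            # some value assumed to be in memory already
memory[1595] = memory[1357] + memory[2468]   # "add cell 1357 to cell 2468
                                             #  and put the answer into cell 1595"
print(memory[1595])  # 200
```

Nothing about cell 1595 records that it holds a sum; to the memory it is just another number, as the paragraph above notes.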
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers; either from 0 to 255 or -128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory as long as it can be somehow represented in numerical form. Modern computers have billions or even trillions of bytes of memory.
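Python's byte-conversion functions can illustrate both points above: the same bit pattern reads as 0-255 unsigned or as -128 to +127 in two's complement, and larger numbers occupy several consecutive bytes.

```python
# The bit pattern 0xFF is 255 when read unsigned...
print(int.from_bytes(b"\xff", "big", signed=False))  # 255

# ...but -1 when read as a two's complement signed byte.
print(int.from_bytes(b"\xff", "big", signed=True))   # -1

# A number too large for one byte spills into consecutive bytes.
print((1000).to_bytes(2, "big"))                     # b'\x03\xe8'
```

The interpretation (signed vs. unsigned, one byte vs. several) lives in the software, not in the stored bits themselves.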
The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. Since data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.
Computer main memory comes in two principal varieties: random access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off while ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the software required to perform the task may be stored in ROM. Software that is stored in ROM is often called firmware because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM by retaining data when turned off but being rewritable like RAM. However, flash memory is typically much slower than conventional ROM and RAM so its use is restricted to applications where high speeds are not required.[18]
In more sophisticated computers there may be one or more RAM cache memories which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
Input/output (I/O)
I/O is the means by which a computer receives information from the outside world and sends results back. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.
Often, I/O devices are complex computers in their own right with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.
Multitasking
Main article: Computer multitasking
While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.
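Time-sharing can be sketched with Python generators standing in for programs: each `yield` plays the role of the interrupt that ends a time slice, and a tiny round-robin scheduler switches between the programs in turn. (This models the scheduling idea only; real interrupts are hardware signals, not cooperative yields.)

```python
def program(name, steps):
    """A 'program' that does one unit of work per time slice."""
    for i in range(steps):
        yield f"{name} step {i}"     # yield = give up the CPU at slice end

def scheduler(programs):
    """Round-robin: run each program for one slice, then move to the next."""
    log = []
    while programs:
        prog = programs.pop(0)       # pick the next program in turn
        try:
            log.append(next(prog))   # run it for one time slice
            programs.append(prog)    # put it back at the end of the queue
        except StopIteration:
            pass                     # the program finished; drop it
    return log

print(scheduler([program("A", 2), program("B", 2)]))
# ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

Only one program ever executes at a given instant, yet the interleaved log shows both apparently making progress "at the same time".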
Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.
Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly - in direct proportion to the number of programs it is running. However, most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run at the same time without unacceptable speed loss.
Multiprocessing
Main article: Multiprocessing
Cray designed many supercomputers that used multiprocessing heavily.

Some computers may divide their work among two or more separate CPUs, creating a multiprocessing configuration. Traditionally, this technique was utilized only in large and powerful computers such as supercomputers, mainframe computers and servers. However, multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers have become widely available and are beginning to see increased usage in lower-end markets as a result.
Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general purpose computers.[19] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
Networking and the Internet
Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems like Sabre.
In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. This effort was funded by ARPA (now DARPA), and the computer network that it produced was called the ARPANET. The technologies that made the ARPANET possible spread and evolved. In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.
Computer software
Computer software, or just software, is a general term for a collection of computer programs, procedures and documentation that perform tasks on a computer system.[1] The term includes application software, such as word processors, which perform productive tasks for users; system software, such as operating systems, which interfaces with hardware to provide the necessary services for application software; and middleware, which controls and coordinates distributed systems. Software includes websites, programs, video games, and the like, written in programming languages such as C and C++.
"Software" is sometimes used in a broader context to mean anything which is not hardware but which is used with hardware, such as film, tapes and records.[2]
Overview

Computer software is usually regarded as anything but hardware: the "hard" parts of a computer are the tangible components, while the "soft" parts are the intangible instructions and data inside it. Software encompasses an extremely wide array of products and technologies developed using techniques such as programming languages and scripting languages. Types of software include web pages developed with technologies like HTML, PHP, Perl, JSP, ASP.NET, and XML, and desktop applications like Microsoft Word and OpenOffice developed in languages like C, C++, Java, and C#. Software usually runs on an underlying operating system (which is itself software), such as Microsoft Windows, Linux (running GNOME or KDE), or Sun Solaris. Software also includes video games such as Super Mario and Call of Duty for personal computers or video game consoles; such games can be created using CGI (computer-generated imagery) designed with applications like Maya and 3ds Max.
Software also often runs on a software platform such as Java or .NET. Software written for one platform will not usually run on another; for instance, Microsoft Windows software will not run on Mac OS, because how the software is written differs between the systems (platforms). Such applications can be made to work on another platform through porting, interpreters, or rewriting the source code for that platform.
Relationship to computer hardware
Main article: Computer hardware
Computer software is so called to distinguish it from computer hardware, which encompasses the physical interconnections and devices required to store and execute (or run) the software. At the lowest level, software consists of a machine language specific to an individual processor. A machine language consists of groups of binary values signifying processor instructions, which change the state of the computer from its preceding state. Software is thus an ordered sequence of instructions for changing the state of the computer hardware. It is usually written in high-level programming languages that are easier and more efficient for humans to use (closer to natural language) than machine language. High-level languages are compiled or interpreted into machine-language object code. Software may also be written in an assembly language, essentially a mnemonic representation of a machine language using a natural-language alphabet. Assembly language must be assembled into object code via an assembler.
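The relationship between assembly mnemonics and machine code can be sketched in a few lines of Python; the mnemonics and opcode numbers below are invented for illustration and are not taken from any real processor:

```python
# A toy "assembler": each hypothetical mnemonic maps to a numeric opcode,
# and assembling a program just translates names into numbers.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0xFF}

def assemble(lines):
    """Translate 'MNEMONIC operand' lines into (opcode, operand) pairs."""
    code = []
    for line in lines:
        mnemonic, *operand = line.split()
        code.append((OPCODES[mnemonic], int(operand[0]) if operand else 0))
    return code

program = ["LOAD 7", "ADD 35", "STORE 0", "HALT"]
print(assemble(program))   # the binary-ready form of the program
```

A real assembler also resolves labels and addressing modes, but the core idea is the same one-to-one translation from mnemonic to machine instruction.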
The term "software" was first used in this sense by John W. Tukey in 1958.[3] In computer science and software engineering, computer software is all computer programs. The theory that is the basis for most modern software was first proposed by Alan Turing in his 1936 essay On Computable Numbers, with an Application to the Entscheidungsproblem.[4]
Types
Practical computer systems divide software into three major classes: system software, programming software and application software, although the distinction is arbitrary and often blurred.
    * System software helps run the computer hardware and computer system. It includes operating systems, device drivers, diagnostic tools, servers, windowing systems, utilities and more. The purpose of system software is to insulate the applications programmer as much as possible from the details of the particular computer being used, especially memory and other hardware features, and from accessory devices such as communications, printers, readers, displays and keyboards.
    * Programming software usually provides tools to assist a programmer in writing computer programs in various programming languages in a more convenient way. The tools include text editors, compilers, interpreters, linkers, debuggers, and so on. An integrated development environment (IDE) merges those tools into a single bundle, and a programmer may not need to type multiple commands for compiling, interpreting, debugging, and tracing, because the IDE usually has an advanced graphical user interface (GUI).
    * Application software allows end users to accomplish one or more specific (non-computer-related) tasks. Typical applications include industrial automation, business software, educational software, medical software, databases, and computer games. Businesses are probably the biggest users of application software, but almost every field of human activity now uses some form of it.
Program and library
A program may not be sufficiently complete for execution by a computer. In particular, it may require additional software from a software library in order to be complete. Such a library may include software components that are used by stand-alone programs but cannot work on their own. Thus, programs may include standard routines that are common to many programs, extracted from these libraries. Libraries may also include stand-alone programs which are activated by some computer event and/or perform some function (e.g., computer 'housekeeping') but do not return data to their calling program. Libraries may be called by one to many other programs; programs may call zero to many other programs.
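A minimal Python illustration of a program relying on a library routine rather than re-implementing it (the `hypotenuse` helper is invented for the example; `math.sqrt` is a standard-library routine):

```python
# A program rarely stands alone: this one pulls a common routine (sqrt)
# from Python's standard math library instead of re-implementing it.
import math

def hypotenuse(a, b):
    """Length of a right triangle's hypotenuse, built on a library routine."""
    return math.sqrt(a * a + b * b)

print(hypotenuse(3, 4))   # → 5.0
```

The square-root routine is exactly the kind of standard routine the paragraph describes: common to many programs, maintained once in a library, and unable to do anything useful on its own.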
Three layers
Users often see things differently than programmers. People who use modern gener
al purpose computers (as opposed to embedded systems, analog computers, supercom
puters, etc.) usually see three layers of software performing a variety of tasks
: platform, application, and user software.
Platform software
    Platform software includes the firmware, device drivers, an operating system, and typically a graphical user interface which, in total, allow a user to interact with the computer and its peripherals (associated equipment). Platform software often comes bundled with the computer. On a PC, you will usually have the ability to change the platform software.
Application software
    Application software, or applications, are what most people think of when they think of software. Typical examples include office suites and video games. Application software is often purchased separately from computer hardware. Sometimes applications are bundled with the computer, but that does not change the fact that they run as independent applications. Applications are almost always independent programs from the operating system, though they are often tailored for specific platforms. Most users think of compilers, databases, and other "system software" as applications.
User-written software
    End-user development tailors systems to meet users' specific needs. User software includes spreadsheet templates, word-processor macros, scientific simulations, and scripts for graphics and animations. Even e-mail filters are a kind of user software. Users create this software themselves and often overlook how important it is. Depending on how competently user-written software has been integrated into purchased application packages, many users may not be aware of the distinction between the purchased packages and what has been added by co-workers.

Creation
Main article: Computer programming
Operation
Computer software has to be "loaded" into the computer's storage (such as a hard drive or RAM). Once the software has loaded, the computer is able to execute it. This involves passing instructions from the application software, through the system software, to the hardware, which ultimately receives each instruction as machine code. Each instruction causes the computer to carry out an operation: moving data, carrying out a computation, or altering the control flow of instructions.
Data movement is typically from one place in memory to another. Sometimes it involves moving data between memory and registers, which enable high-speed data access in the CPU. Moving data, especially large amounts of it, can be costly; this is sometimes avoided by using "pointers" to data instead. Computations include simple operations such as incrementing the value of a variable data element. More complex computations may involve many operations and data elements together.
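Python has no raw pointers, but its object references behave similarly for this purpose; the sketch below (names invented for illustration) works on a reference to the data rather than on a copy:

```python
# Passing a reference hands over the location of the data, not a duplicate:
# the function below reads the original list without copying any element.
big = list(range(1_000_000))        # a large block of data in memory

def total_by_reference(data):
    """Receives a reference to the list -- nothing is copied."""
    return sum(data)

duplicate = big[:]                  # an explicit copy duplicates every element
print(total_by_reference(big) == total_by_reference(duplicate))   # → True
```

The explicit copy costs time and memory proportional to the data's size; passing the reference costs essentially nothing, which is why pointers are the usual way to "move" large data.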
Instructions may be performed sequentially, conditionally, or iteratively. Sequential instructions are those operations that are performed one after another. Conditional instructions are performed such that different sets of instructions execute depending on the value(s) of some data. In some languages this is known as an "if" statement. Iterative instructions are performed repetitively and may depend on some data value. This is sometimes called a "loop." Often, one instruction may "call" another set of instructions that are defined in some other program or module. When more than one computer processor is used, instructions may be executed simultaneously.
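The control-flow forms described above can be seen together in a short Python sketch (the routine and data are invented for illustration):

```python
# Sequential, conditional ("if"), and iterative ("loop") execution,
# plus a "call" to a routine defined elsewhere -- all in one sketch.
def classify(numbers):
    evens, odds = 0, 0          # sequential: statements run one after another
    for n in numbers:           # iterative: a "loop" over the data
        if n % 2 == 0:          # conditional: an "if" statement
            evens += 1
        else:
            odds += 1
    return evens, odds

print(classify([1, 2, 3, 4, 5]))   # a "call" to the routine above → (2, 3)
```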
A simple example of the way software operates is what happens when a user selects an entry such as "Copy" from a menu. In this case, a conditional instruction is executed to copy text from data in a 'document' area residing in memory, perhaps to an intermediate storage area known as a 'clipboard' data area. If a different menu entry such as "Paste" is chosen, the software may execute the instructions to copy the text from the clipboard data area to a specific location in the same or another document in memory.
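A toy model of that copy/paste scenario, with a hypothetical Editor class standing in for the application:

```python
# A toy copy/paste model: a 'clipboard' area holds text between a
# "Copy" command and a later "Paste" command.
class Editor:
    def __init__(self, document=""):
        self.document = document    # the 'document' area in memory
        self.clipboard = ""         # the intermediate 'clipboard' area

    def copy(self, start, end):
        """'Copy': place a slice of the document into the clipboard area."""
        self.clipboard = self.document[start:end]

    def paste(self, position):
        """'Paste': insert the clipboard contents at a position."""
        self.document = (self.document[:position] + self.clipboard
                         + self.document[position:])

ed = Editor("hello world")
ed.copy(0, 5)                  # clipboard now holds "hello"
ed.paste(len(ed.document))
print(ed.document)             # → hello worldhello
```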
Depending on the application, even the example above could become complicated. The field of software engineering endeavors to manage the complexity of how software operates. This is especially true for software that operates in the context of a large or powerful computer system.
Currently, almost the only limitation on the use of computer software in applications is the ingenuity of the designer/programmer. Consequently, large areas of activity (such as playing grandmaster-level chess) formerly assumed to be incapable of software simulation are now routinely programmed. The only area that has so far proved reasonably secure from software simulation is the realm of human art, especially pleasing music and literature.[citation needed]
Kinds of software by operation: computer program as executable, source code or script, configuration.
Quality and reliability
Software reliability considers the errors, faults, and failures related to the design, implementation and operation of software.

See Software auditing, Software quality, Software testing, and Software reliability.
License
The software's license gives the user the right to use the software in the licensed environment. Some software comes with the license when purchased off the shelf, or with an OEM license when bundled with hardware. Other software comes with a free software license, granting the recipient the rights to modify and redistribute the software. Software can also be in the form of freeware or shareware. See also License Management.
Computer hardware
Typical PC hardware
A typical personal computer consists of a case or chassis, in desktop or tower form, and the following parts:
Motherboard
    * Motherboard - the "body" or mainframe of the computer, through which all other components interface.
    * Central processing unit (CPU) - performs most of the calculations which enable a computer to function; sometimes referred to as the "brain" of the computer.
          o Computer fan - used to lower the temperature of the computer; a fan is almost always attached to the CPU, and the computer case will generally have several fans to maintain a constant airflow. Liquid cooling can also be used, though it focuses more on individual parts than on the overall temperature inside the chassis.
    * Random Access Memory (RAM) - also known as the physical memory of the computer; fast-access memory that is cleared when the computer is powered down. RAM attaches directly to the motherboard and is used to store programs that are currently running.
    * Firmware - loaded from read-only memory (ROM); provided by the Basic Input/Output System (BIOS) or, in newer systems, an Extensible Firmware Interface (EFI) compliant firmware.
    * Internal Buses - connections to various internal components.
          o PCI (being phased out for graphics cards but still used for other purposes)
          o PCI-E
          o USB
          o HyperTransport
          o CSI (expected in 2008)
          o AGP (being phased out)
          o VLB (outdated)
    * External Bus Controllers - used to connect to external peripherals, such as printers and input devices. These ports may also be based upon expansion cards attached to the internal buses.
Power supply
Main article: Computer power supply
Includes a case, control switches, and (usually) a cooling fan, and supplies power to run the rest of the computer. The most common types of power supplies were AT and Baby AT (now old); the current standards for PCs are ATX and Micro ATX.

Storage controllers


Controllers for hard disk, CD-ROM and other drives (such as internal Zip and Jaz drives); conventionally, for a PC, these are IDE/ATA. The controllers sit directly on the motherboard (on-board) or on expansion cards, such as a disk array controller. IDE is usually integrated, unlike SCSI (Small Computer System Interface), which can be found in some servers. The floppy drive interface is a legacy MFM interface which is now slowly disappearing. All these interfaces are gradually being phased out in favor of SATA and SAS.
Video display controller
Main article: Graphics card
Produces the output for the visual display unit. This will either be built into the motherboard or attached in its own separate slot (PCI, PCI-E, PCI-E 2.0, or AGP), in the form of a graphics card.
Removable media devices
Main article: Computer storage
    * CD (compact disc) - the most common type of removable media, inexpensive but with a short life-span.
          o CD-ROM Drive - a device used for reading data from a CD.
          o CD Writer - a device used for both reading and writing data to and from a CD.
    * DVD (digital versatile disc) - a popular type of removable media that is the same dimensions as a CD but stores up to 6 times as much information. It is the most common way of transferring digital video.
          o DVD-ROM Drive - a device used for reading data from a DVD.
          o DVD Writer - a device used for both reading and writing data to and from a DVD.
          o DVD-RAM Drive - a device used for rapid writing and reading of data from a special type of DVD.
    * Blu-ray - a high-density optical disc format for the storage of digital information, including high-definition video.
          o BD-ROM Drive - a device used for reading data from a Blu-ray disc.
          o BD Writer - a device used for both reading and writing data to and from a Blu-ray disc.
    * HD DVD - a high-density optical disc format and successor to the standard DVD; a discontinued competitor to the Blu-ray format.
    * Floppy disk - an outdated storage device consisting of a thin disk of a flexible magnetic storage medium.
    * Zip drive - an outdated medium-capacity removable disk storage system, first introduced by Iomega in 1994.
    * USB flash drive - a flash memory data storage device integrated with a USB interface; typically small, lightweight, removable, and rewritable.
    * Tape drive - a device that reads and writes data on a magnetic tape, used for long-term storage.
Internal storage
Hardware that keeps data inside the computer for later use; the data remains persistent even when the computer has no power.
    * Hard disk - for medium-term storage of data.
    * Solid-state drive - a device similar to a hard disk but containing no moving parts.
    * Disk array controller - a device to manage several hard disks, to achieve performance or reliability improvement.
Sound card
Main article: Sound card
Enables the computer to output sound to audio devices, as well as accept input from a microphone. Most modern computers have sound cards built into the motherboard, though it is common for a user to install a separate sound card as an upgrade.
Networking
Main article: Computer networks
Connects the computer to the Internet and/or other computers.
    * Modem - for dial-up connections.
    * Network card - for DSL/cable Internet and/or connecting to other computers.
    * Direct Cable Connection - use of a null modem cable connecting two computers' serial ports, or a Laplink cable connecting their parallel ports.
Other peripherals
Main article: Peripheral
In addition, hardware devices can include external components of a computer system. The following are either standard or very common.
Wheel mouse
Includes various input and output devices, usually external to the computer system.
Input
Main article: Input
    * Text input devices
          o Keyboard - a device to input text and characters by depressing buttons (referred to as keys), similar to a typewriter. The most common English-language key layout is the QWERTY layout.
    * Pointing devices
          o Mouse - a pointing device that detects two-dimensional motion relative to its supporting surface.
          o Trackball - a pointing device consisting of an exposed protruding ball housed in a socket that detects rotation about two axes.
          o Xbox 360 Controller - a controller used for the Xbox 360 which, with the application Switchblade(tm), can also serve as an additional pointing device via the left or right thumbstick.
    * Gaming devices
          o Joystick - a general control device that consists of a handheld stick that pivots around one end, to detect angles in two or three dimensions.
          o Gamepad - a general handheld game controller that relies on the digits (especially thumbs) to provide input.
          o Game controller - a specific type of controller specialized for certain gaming purposes.
    * Image, video input devices
          o Image scanner - a device that provides input by analyzing images, printed text, handwriting, or an object.
          o Webcam - a low-resolution video camera used to provide visual input that can be easily transferred over the Internet.
    * Audio input devices
          o Microphone - an acoustic sensor that provides input by converting sound into electrical signals.
Output
Main article: Output
    * Image, video output devices
o Printer
o Monitor
* Audio output devices
o Speakers
o Headset
