
Stored program architecture

Main articles: Computer program and Computer programming


The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say, a list of instructions (the program) can be given to the computer and it will store them and carry them out at some time in the future.
In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.
Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.
Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions.

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in about a millionth of a second.[9]
However, computers cannot "think" for themselves, in the sense that they only solve problems in exactly the way they are programmed to. An intelligent human faced with the above addition task might soon realize that instead of actually adding up all the numbers one can simply use the equation

1 + 2 + 3 + ... + n = n(n+1)/2

and arrive at the correct answer (500,500) with little work.[10] In other words, a computer programmed to add up the numbers one by one as in the example above would do exactly that without regard to efficiency or alternative solutions.
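The two approaches can be contrasted in a short sketch (Python here, purely illustrative; both function names are invented for this example):

```python
# Both functions compute the sum 1 + 2 + ... + 1000. The loop mirrors
# the literal-minded stored program; the closed form mirrors the
# human insight n(n+1)/2.

def sum_by_repeated_addition(n):
    """Add the numbers one by one, as the programmed computer would."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_by_formula(n):
    """Use the equation n(n+1)/2 in a single step."""
    return n * (n + 1) // 2

print(sum_by_repeated_addition(1000))  # 500500
print(sum_by_formula(1000))            # 500500
```

Both give the same answer; the second simply does far less work.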

Programs

In practical terms, a computer program may run from just a few instructions to many millions of instructions, as in a program for a word processor or a web browser. A typical modern computer can execute billions of instructions per second (gigahertz or GHz) and rarely makes a mistake over many years of operation. Large computer programs comprising several million instructions may take teams of programmers years to write, so it is highly unlikely that the entire program has been written without error.
Errors in computer programs are called "bugs". Bugs may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases they may cause the program to "hang" (become unresponsive to input such as mouse clicks or keystrokes) or to completely fail or "crash". Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an "exploit", code designed to take advantage of a bug and disrupt a program's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[11]
In most computers, individual instructions are stored as machine code, with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode, the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer just as if they were numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
While it is possible to write computer programs as long lists of numbers (machine language), and this technique was used with many early computers,[12] it is extremely tedious to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember, a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held videogame) cannot understand the machine language of an Intel Pentium or the AMD Athlon 64 computer that might be in a PC.[13]
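What an assembler does can be sketched in miniature. The mnemonics below match the article's examples, but the opcode values and the two-field instruction format are invented for illustration; real assemblers and instruction sets are far more involved:

```python
# A toy assembler: translate mnemonics into numeric opcodes.
# The opcode numbers here are made up for this example.
OPCODES = {"ADD": 1, "SUB": 2, "MULT": 3, "JUMP": 4}

def assemble(source):
    """Convert lines like 'ADD 5' into (opcode, operand) number pairs."""
    machine_code = []
    for line in source.strip().splitlines():
        mnemonic, operand = line.split()
        machine_code.append((OPCODES[mnemonic], int(operand)))
    return machine_code

program = """
ADD 5
SUB 3
JUMP 0
"""
print(assemble(program))  # [(1, 5), (2, 3), (4, 0)]
```

The output is "machine language" for this imaginary machine: a plain list of numbers that could itself be stored in memory.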
Though considerably easier than in machine language, writing long programs in assembly language is often difficult and error prone. Therefore, most complicated programs are written in more abstract high-level programming languages that are able to express the needs of the computer programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[14] Since high level languages are more abstract than assembly language, it is possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.
The task of developing large software systems is an immense intellectual effort. Producing software with an acceptably high reliability on a predictable schedule and budget has proved historically to be a great challenge; the academic and professional discipline of software engineering concentrates specifically on this problem.
Example
Suppose a computer is being employed to drive a traffic light. A simple stored program might say:
1. Turn off all of the lights
2. Turn on the red light
3. Wait for sixty seconds
4. Turn off the red light
5. Turn on the green light
6. Wait for sixty seconds
7. Turn off the green light
8. Turn on the yellow light
9. Wait for two seconds
10. Turn off the yellow light
11. Jump to instruction number (2)

With this set of instructions, the computer would cycle the light continually through red, green, yellow and back to red again until told to stop running the program.
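The eleven-step program above can be sketched as a loop. The turn_on/turn_off helpers and the sleep hook are hypothetical stand-ins for whatever interface would actually drive the lamps:

```python
import time

# A sketch of the stored program above. Instruction 11's "jump back to
# instruction 2" becomes the repeat of the cycle; cycles is a parameter
# only so the demonstration can stop.

def run_traffic_light(cycles, sleep=time.sleep):
    events = []
    def turn_on(light):
        events.append(f"{light} on")
    def turn_off(light):
        events.append(f"{light} off")

    for light in ("red", "green", "yellow"):   # instruction 1
        turn_off(light)
    for _ in range(cycles):                    # instructions 2-11
        for light, seconds in (("red", 60), ("green", 60), ("yellow", 2)):
            turn_on(light)
            sleep(seconds)
            turn_off(light)
    return events

# One full cycle, with the waiting skipped for demonstration:
print(run_traffic_light(1, sleep=lambda s: None))
```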
However, suppose there is a simple on/off switch connected to the computer that is intended to be used to make the light flash red while some maintenance operation is being performed. The program might then instruct the computer to:
1. Turn off all of the lights
2. Turn on the red light
3. Wait for sixty seconds
4. Turn off the red light
5. Turn on the green light
6. Wait for sixty seconds
7. Turn off the green light
8. Turn on the yellow light
9. Wait for two seconds
10. Turn off the yellow light
11. If the maintenance switch is NOT turned on then jump to instruction number 2
12. Turn on the red light
13. Wait for one second
14. Turn off the red light
15. Wait for one second
16. Jump to instruction number 11

In this manner, the computer is either running the instructions from number (2) to (11) over and over or it is running the instructions from (11) down to (16) over and over, depending on the position of the switch.[15]
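The effect of the conditional jump at instruction 11 can be sketched as an if-test performed on each pass. The maintenance_switch_is_on flag is a hypothetical stand-in for reading the real switch:

```python
# One pass of the second traffic light program. The conditional jump
# (instruction 11) becomes a Python if-statement choosing between the
# normal cycle (instructions 2-10) and the red flash (12-16).

def traffic_light_step(maintenance_switch_is_on):
    """Return the (light, seconds) steps for one pass through the loop."""
    if maintenance_switch_is_on:
        return [("red", 1)]                            # flash red
    return [("red", 60), ("green", 60), ("yellow", 2)] # normal cycle

print(traffic_light_step(False))  # [('red', 60), ('green', 60), ('yellow', 2)]
print(traffic_light_step(True))   # [('red', 1)]
```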

How computers work


Main articles: Central processing unit and Microprocessor
A general purpose computer has four main sections: the arithmetic and logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.
The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with these) are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components, but since the mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.
Control unit
Main articles: CPU design and Control unit
The control unit (often called a control system or central controller) directs the various components of a computer. It reads and interprets (decodes) instructions in the program one by one. The control system decodes each instruction and turns it into a series of control signals that operate the other parts of the computer.[16] Control systems in advanced computers may change the order of some instructions so as to improve performance.
A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[17]
Diagram showing how a particular MIPS architecture instruction would be decoded by the control system.
The control system's function is as follows (note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU):
1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
8. Jump back to step (1).
Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).
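The fetch-decode-execute cycle and a program-counter-modifying jump can be sketched with a toy machine. The three instructions and their encoding are invented for illustration:

```python
# A toy fetch-decode-execute loop. "SET", "ADD" and "JUMP_IF_LESS"
# are made-up instructions for this example only.

def run(program, memory):
    pc = 0                              # the program counter
    while pc < len(program):
        op, a, b = program[pc]          # fetch the next instruction
        pc += 1                         # increment the program counter
        if op == "SET":                 # decode, then execute
            memory[a] = b               # store constant b in cell a
        elif op == "ADD":
            memory[a] += memory[b]      # add cell b into cell a
        elif op == "JUMP_IF_LESS":
            if memory[a] < b:           # a jump is just overwriting pc
                pc = 1
    return memory

# Count up in cell 0 by repeatedly adding the 1 stored in cell 1:
mem = run([("SET", 0, 0), ("ADD", 0, 1), ("JUMP_IF_LESS", 0, 5)], {1: 1})
print(mem[0])  # 5
```

The conditional jump here is what gives the machine its loop: the program repeats until the internal condition (cell 0 reaching 5) is met, exactly the flow of control described earlier.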

It is noticeable that the sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program - and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer that runs a microcode program that causes all of these events to happen.

Arithmetic/logic unit (ALU)


The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting or might include multiplying or dividing, trigonometry functions (sine, cosine, etc.) and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?").
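The claim that simple operations suffice can be illustrated directly: a machine whose ALU offers only addition and comparison can still multiply (the sketch assumes a non-negative multiplier):

```python
# Multiplication broken down into the two primitive operations the
# text mentions: addition and comparison.

def multiply_by_addition(a, b):
    """Compute a * b (b >= 0) using only addition and comparison."""
    total = 0
    count = 0
    while count < b:       # comparison: "is count less than b?"
        total += a         # addition
        count += 1
    return total

print(multiply_by_addition(6, 7))  # 42
```

It takes b additions instead of one multiply instruction, which is exactly the time penalty the text describes for an ALU without direct support.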
Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful both for creating complicated conditional statements and for processing boolean logic.
Superscalar computers contain multiple ALUs so that they can process several instructions at the same time. Graphics processors and computers with SIMD and MIMD features often provide ALUs that can perform arithmetic on vectors and matrices.
Memory
Main article: Computer storage
Magnetic core memory was popular main memory for computers through the 1960s until it was completely replaced by semiconductor memory.

A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is up to the software to give significance to what the memory sees as nothing but a series of numbers.
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers; either from 0 to 255 or -128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory as long as it can be somehow represented in numerical form. Modern computers have billions or even trillions of bytes of memory.
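The byte ranges and multi-byte storage described above can be demonstrated with Python's built-in int.to_bytes and int.from_bytes:

```python
# One byte holds 256 values: 0..255 unsigned, or -128..127 signed.
assert (255).to_bytes(1, "big") == b"\xff"
assert int.from_bytes(b"\xff", "big", signed=True) == -1  # two's complement

# Larger numbers span several consecutive bytes (here, four):
n = 123456789
four_bytes = n.to_bytes(4, "big")
print(list(four_bytes))                   # the four stored byte values
print(int.from_bytes(four_bytes, "big"))  # 123456789

# A negative number in 16-bit two's complement notation:
print((-2).to_bytes(2, "big", signed=True).hex())  # fffe
```

Note how the same byte pattern 0xFF means 255 or -1 depending only on how the software chooses to interpret it, echoing the point that the CPU itself does not differentiate between types of information.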
The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. Since data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.
Computer main memory comes in two principal varieties: random access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off while ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the software required to perform the task may be stored in ROM. Software that is stored in ROM is often called firmware because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM by retaining data when turned off but being rewritable like RAM. However, flash memory is typically much slower than conventional ROM and RAM, so its use is restricted to applications where high speeds are not required.[18]
In more sophisticated computers there may be one or more RAM cache memories which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
Input/output (I/O)
I/O is the means by which a computer receives information from the outside world and sends results back. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.
Often, I/O devices are complex computers in their own right with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.
Multitasking
Main article: Computer multitasking
While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.
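The interrupt-driven switching described above can be sketched with a round-robin scheduler. Generators stand in for the saved execution state of each interrupted program:

```python
# Time-sharing in miniature: each program yields after one "slice" of
# work, as if interrupted; the scheduler remembers where it was and
# requeues it, so the two instruction streams interleave even though
# only one runs at any instant.

def program(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"     # one time slice of work

def scheduler(programs):
    """Round-robin: give each program one slice, then move on."""
    log = []
    while programs:
        prog = programs.pop(0)
        try:
            log.append(next(prog))   # run until the next "interrupt"
            programs.append(prog)    # remember where it was; requeue
        except StopIteration:
            pass                     # this program has finished
    return log

print(scheduler([program("A", 2), program("B", 2)]))
# ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```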
Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.
Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running. However, most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run at the same time without unacceptable speed loss.
Multiprocessing
Main article: Multiprocessing
Cray designed many supercomputers that used multiprocessing heavily.
Some computers may divide their work between two or more separate CPUs, creating a multiprocessing configuration. Traditionally, this technique was utilized only in large and powerful computers such as supercomputers, mainframe computers and servers. However, multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers have become widely available and are beginning to see increased usage in lower-end markets as a result.
Supercomputers in particular often have highly unusual architectures that differ significantly from the basic stored-program architecture and from general purpose computers.[19] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
Networking and the Internet
Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems like Sabre.
In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. This effort was funded by ARPA (now DARPA), and the computer network that it produced was called the ARPANET. The technologies that made the ARPANET possible spread and evolved. In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.
Computer software
Computer software, or just software, is a general term used to describe a collection of computer programs, procedures and documentation that perform some tasks on a computer system.[1] The term includes application software such as word processors, which perform productive tasks for users; system software such as operating systems, which interface with hardware to provide the necessary services for application software; and middleware, which controls and co-ordinates distributed systems. Software includes websites, programs, video games, etc. that are coded in programming languages like C, C++, etc.
"Software" is sometimes used in a broader context to mean anything which is not hardware but which is used with hardware, such as film, tapes and records.[2]
Overview
Computer software is usually regarded as anything but hardware, meaning that the "hard" parts are those that are tangible (can be physically held), while the "soft" part comprises the intangible objects inside the computer. Software encompasses an extremely wide array of products and technologies developed using different techniques like programming languages, scripting languages, etc. The types of software include web pages developed with technologies like HTML, PHP, Perl, JSP, ASP.NET and XML, and desktop applications like Microsoft Word and OpenOffice developed with technologies like C, C++, Java, C#, etc. Software usually runs on an underlying operating system (which is itself software) like Microsoft Windows, Linux (running GNOME and KDE), Sun Solaris, etc. Software also includes video games like Super Mario and Call of Duty for personal computers or video game consoles. These games can be created using CGI (computer-generated imagery) that can be designed with applications like Maya, 3ds Max, etc.
Also, software usually runs on a software platform, such as Java or .NET. For instance, Microsoft Windows software will not be able to run on Mac OS, because how the software is written differs between the systems (platforms). Such applications can be made to work across platforms by porting, by using interpreters, or by re-writing the source code for that platform.
Relationship to computer hardware
Main article: Computer hardware
Computer software is so called to distinguish it from computer hardware, which encompasses the physical interconnections and devices required to store and execute (or run) the software. At the lowest level, software consists of a machine language specific to an individual processor. A machine language consists of groups of binary values signifying processor instructions which change the state of the computer from its preceding state. Software is an ordered sequence of instructions for changing the state of the computer hardware in a particular sequence. It is usually written in high-level programming languages that are easier and more efficient for humans to use (closer to natural language) than machine language. High-level languages are compiled or interpreted into machine language object code. Software may also be written in an assembly language, essentially a mnemonic representation of a machine language using a natural language alphabet. Assembly language must be assembled into object code via an assembler.
The term "software" was first used in this sense by John W. Tukey in 1958.[3] In computer science and software engineering, computer software is all computer programs. The theory that is the basis for most modern software was first proposed by Alan Turing in his 1936 essay On Computable Numbers, with an Application to the Entscheidungsproblem.[4]
Types
Practical computer systems divide software systems into three major classes: system software, programming software and application software, although the distinction is arbitrary, and often blurred.
* System software helps run the computer hardware and computer system. It includes operating systems, device drivers, diagnostic tools, servers, windowing systems, utilities and more. The purpose of systems software is to insulate the applications programmer as much as possible from the details of the particular computer complex being used, especially memory and other hardware features, and accessory devices such as communications, printers, readers, displays, keyboards, etc.
* Programming software usually provides tools to assist a programmer in writing computer programs and software using different programming languages in a more convenient way. The tools include text editors, compilers, interpreters, linkers, debuggers, and so on. An integrated development environment (IDE) merges those tools into a software bundle, and a programmer may not need to type multiple commands for compiling, interpreting, debugging, tracing, and so on, because the IDE usually has an advanced graphical user interface, or GUI.
* Application software allows end users to accomplish one or more specific (non-computer related) tasks. Typical applications include industrial automation, business software, educational software, medical software, databases, and computer games. Businesses are probably the biggest users of application software, but almost every field of human activity now uses some form of application software.
Program and library
A program may not be sufficiently complete for execution by a computer. In particular, it may require additional software from a software library in order to be complete. Such a library may include software components used by stand-alone programs, but which cannot work on their own. Thus, programs may include standard routines that are common to many programs, extracted from these libraries. Libraries may also include 'stand-alone' programs which are activated by some computer event and/or perform some function (e.g., of computer 'housekeeping') but do not return data to their calling program. Libraries may be called by one to many other programs; programs may call zero to many other programs.
Three layers

Users often see things differently than programmers. People who use modern general purpose computers (as opposed to embedded systems, analog computers, supercomputers, etc.) usually see three layers of software performing a variety of tasks: platform, application, and user software.
Platform software
    Platform software includes the firmware, device drivers, an operating system, and typically a graphical user interface which, in total, allow a user to interact with the computer and its peripherals (associated equipment). Platform software often comes bundled with the computer. On a PC you will usually have the ability to change the platform software.
Application software
    Application software, or applications, are what most people think of when they think of software. Typical examples include office suites and video games. Application software is often purchased separately from computer hardware. Sometimes applications are bundled with the computer, but that does not change the fact that they run as independent applications. Applications are almost always independent programs from the operating system, though they are often tailored for specific platforms. Most users think of compilers, databases, and other "system software" as applications.
User-written software
    End-user development tailors systems to meet users' specific needs. User software includes spreadsheet templates, word processor macros, scientific simulations, and scripts for graphics and animations. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. Depending on how competently the user-written software has been integrated into purchased application packages, many users may not be aware of the distinction between the purchased packages, and what has been added by fellow co-workers.
