Computer
A computer is a machine that manipulates data according to a list of instructions.
The first devices that resemble modern computers date to the mid-20th century (1940-1945), although the computer concept and various machines similar to computers existed earlier. Early electronic computers were the size of a large room, consuming as much power as several hundred modern personal computers.[1] Modern computers are based on tiny integrated circuits and are millions to billions of times more capable while occupying a fraction of the space.[2] Today, simple computers may be made small enough to fit into a wristwatch and be powered from a watch battery. Personal computers, in various forms, are icons of the Information Age and are what most people think of as "a computer"; however, the most common form of computer in use today is the embedded computer. Embedded computers are small, simple devices that are used to control other devices; for example, they may be found in machines ranging from fighter aircraft to industrial robots, digital cameras, and children's toys.
The ability to store and execute lists of instructions called programs makes computers extremely versatile and distinguishes them from calculators. The Church-Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers with capability and complexity ranging from that of a personal digital assistant to a supercomputer are all able to perform the same computational tasks, given enough time and storage capacity.
History of computing
It is difficult to identify any one device as the earliest computer, partly because the term "computer" has been subject to varying interpretations over time. Originally, the term "computer" referred to a person who performed numerical calculations (a human computer), often with the aid of a mechanical calculating device.
The history of the modern computer begins with two separate technologies - that of automated calculation and that of programmability.
Examples of early mechanical calculating devices included the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150-100 BC). Hero of Alexandria (c. 10-70 AD) built a mechanical theater which performed a play lasting 10 minutes, operated by a complex system of ropes and drums that might be considered a means of deciding which parts of the mechanism performed which actions and when.[3] This is the essence of programmability.
The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliest programmable analog computer.[4] It displayed the zodiac, the solar and lunar orbits, a crescent moon-shaped pointer travelling across a gateway causing automatic doors to open every hour,[5][6] and five robotic musicians who played music when struck by levers operated by a camshaft attached to a water wheel. The length of day and night could be re-programmed every day in order to account for the changing lengths of day and night throughout the year.[4]
The end of the Middle Ages saw a re-invigoration of European mathematics and engineering, and Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers. However, none of those devices fit the modern definition of a computer because they could not be programmed.
In 1801, Joseph Marie Jacquard made an improvement to the textile loom that used a series of punched paper cards as a template to allow his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.
It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer that he called "The Analytical Engine".[7] Due to limited finances, and an inability to resist tinkering with the design, Babbage never actually built his Analytical Engine.
Large-scale automated data processing of punched cards was performed for the U.S. Census in 1890 by tabulating machines designed by Herman Hollerith and manufactured by the Computing Tabulating Recording Corporation, which later became IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.
During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.
A succession of steadily more powerful and flexible computing devices was constructed in the 1930s and 1940s, gradually adding the key features that are seen in modern computers. The use of digital electronics (largely invented by Claude Shannon in 1937) and more flexible programmability were vitally important steps, but defining one point along this road as "the first digital electronic computer" is difficult (Shannon 1940). Notable achievements include:
* Konrad Zuse's electromechanical "Z machines". The Z3 (1941) was the first working machine featuring binary arithmetic, including floating point arithmetic, and a measure of programmability. In 1998 the Z3 was proved to be Turing complete, making it, in retrospect, the world's first operational computer.
* The non-programmable Atanasoff-Berry Computer (1941), which used vacuum tube based computation, binary numbers, and regenerative capacitor memory.
* The secret British Colossus computers (1943),[8] which had limited programmability but demonstrated that a device using thousands of tubes could be reasonably reliable and electronically reprogrammable. They were used for breaking German wartime codes.
* The Harvard Mark I (1944), a large-scale electromechanical computer with limited programmability.
* The U.S. Army's Ballistics Research Laboratory ENIAC (1946), which used decimal arithmetic and is sometimes called the first general purpose electronic computer (since Konrad Zuse's Z3 of 1941 used electromagnets instead of electronics). Initially, however, ENIAC had an inflexible architecture which essentially required rewiring to change its programming.
Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the "stored program architecture" or von Neumann architecture. This design was first formally described by John von Neumann in the paper First Draft of a Report on the EDVAC, distributed in 1945.
Adding together all of the numbers from 1 to 1,000 on a simple calculator would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake, and a modern PC can complete the task in about a millionth of a second.[9]
However, computers cannot "think" for themselves, in the sense that they only solve problems in exactly the way they are programmed to. An intelligent human faced with the above addition task might soon realize that instead of actually adding up all the numbers, one can simply use the equation

1 + 2 + 3 + ... + n = n(n + 1) / 2

and arrive at the correct answer (500,500) with little work.[10] In other words, a computer programmed to add up the numbers one by one, as in the example above, would do exactly that, without regard to efficiency or alternative solutions.
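The contrast between the two approaches can be sketched in a few lines of Python: the loop does exactly what it is told, one addition at a time, while the closed-form formula reaches the same answer in a single step.

```python
def sum_by_loop(n):
    """Add the numbers 1..n one at a time, as a naively programmed computer would."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_by_formula(n):
    """Use the identity 1 + 2 + ... + n = n(n + 1) / 2."""
    return n * (n + 1) // 2

print(sum_by_loop(1000))     # 500500
print(sum_by_formula(1000))  # 500500
```

Both calls print 500,500; the loop performs a thousand additions while the formula performs two arithmetic operations.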
Programs
In practical terms, a computer program may run from just a few instructions to many millions of instructions, as in a program for a word processor or a web browser. A typical modern computer can execute billions of instructions per second (gigahertz or GHz) and rarely makes a mistake over many years of operation. Large computer programs comprising several million instructions may take teams of programmers years to write; thus it is highly unlikely that the entire program has been written without error.
Errors in computer programs are called "bugs". Bugs may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases they may cause the program to "hang" - become unresponsive to input such as mouse clicks or keystrokes - or to completely fail or "crash". Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an "exploit" - code designed to take advantage of a bug and disrupt a program's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[11]
In most computers, individual instructions are stored as machine code, with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode, the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer just as if they were numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
While it is possible to write computer programs as long lists of numbers (machine language), and this technique was used with many early computers,[12] it is extremely tedious to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember (a mnemonic) such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held videogame) cannot understand the machine language of an Intel Pentium or AMD Athlon 64 computer that might be in a PC.[13]
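The core of what an assembler does can be sketched in Python. The mnemonics and opcode numbers here are invented for illustration (they belong to no real architecture); the sketch only shows the translation step from readable names to the numeric machine code described above.

```python
# Hypothetical mnemonic-to-opcode table for a toy machine.
OPCODES = {"HALT": 0, "LOAD": 1, "ADD": 2, "STORE": 3}

def assemble(lines):
    """Turn 'ADD 9'-style assembly lines into a flat list of numbers."""
    machine_code = []
    for line in lines:
        parts = line.split()
        machine_code.append(OPCODES[parts[0]])   # mnemonic -> opcode number
        if len(parts) > 1:                       # instructions with an operand
            machine_code.append(int(parts[1]))
    return machine_code

program = ["LOAD 8", "ADD 9", "STORE 10", "HALT"]
print(assemble(program))  # [1, 8, 2, 9, 3, 10, 0]
```

A real assembler also resolves symbolic labels, handles addressing modes, and emits object-file metadata, but the essential mapping from names to numbers is the same.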
Though considerably easier than in machine language, writing long programs in assembly language is often difficult and error prone. Therefore, most complicated programs are written in more abstract high-level programming languages that are able to express the needs of the computer programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[14] Since high level languages are more abstract than assembly language, it is possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.
The task of developing large software systems is an immense intellectual effort. Producing software with an acceptably high reliability on a predictable schedule and budget has proved historically to be a great challenge; the academic and professional discipline of software engineering concentrates specifically on this problem.
Example
Suppose a computer is being employed to drive a traffic light. A simple stored program might say:

1. Turn off all of the lights
2. Turn on the red light
3. Wait for sixty seconds
4. Turn off the red light
5. Turn on the green light
6. Wait for sixty seconds
7. Turn off the green light
8. Turn on the yellow light
9. Wait for three seconds
10. Turn off the yellow light
11. Jump to instruction number (2)

With this set of instructions, the computer would cycle the light continually through red, green, yellow and back to red again until told to stop running the program.
However, suppose there is a simple on/off switch connected to the computer that is intended to be used to make the light flash red while some maintenance operation is being performed. The program might then instruct the computer to:

1. Turn off all of the lights
2. Turn on the red light
3. Wait for sixty seconds
4. Turn off the red light
5. Turn on the green light
6. Wait for sixty seconds
7. Turn off the green light
8. Turn on the yellow light
9. Wait for three seconds
10. Turn off the yellow light
11. If the maintenance switch is NOT turned on then jump to instruction number 2
12. Turn on the red light
13. Wait for one second
14. Turn off the red light
15. Wait for one second
16. Jump to instruction number 11

In this manner, the computer is either running the instructions from number (2) to (11) over and over, or it is running the instructions from (11) down to (16) over and over, depending on the position of the switch.[15]
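One pass of the switch-controlled program above can be sketched in Python. The light-setting and waiting functions are stand-ins injected as parameters (a real controller would drive hardware); the durations mirror the numbered instructions.

```python
def traffic_light_step(switch_on, set_light, wait):
    """Run one pass of the program: a normal red/green/yellow cycle,
    or one red flash if the maintenance switch is on."""
    if switch_on:
        # Flashing-red maintenance mode (instructions 12-16 above).
        set_light("red", "on")
        wait(1)
        set_light("red", "off")
        wait(1)
    else:
        # Normal cycle (instructions 2-10 above).
        for colour, seconds in (("red", 60), ("green", 60), ("yellow", 3)):
            set_light(colour, "on")
            wait(seconds)
            set_light(colour, "off")

# Demonstration with recorded (not real) lights and no actual waiting.
events = []
traffic_light_step(False, lambda c, s: events.append((c, s)), lambda s: None)
print(events[0])  # ('red', 'on')
```

Looping over `traffic_light_step`, re-reading the switch each pass, reproduces the behaviour described in the text: the program flips between the two cycles depending on the switch position.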
Graphics processors and computers with SIMD and MIMD features often provide ALUs that can perform arithmetic on vectors and matrices.
Memory
Main article: Computer storage
[Image: Magnetic core memory, which was popular until it was completely replaced by semiconductor memory.]
A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is up to the software to give significance to what the memory sees as nothing but a series of numbers.
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers: either 0 to 255 or -128 to +127. To store larger numbers, several consecutive bytes may be used (typically two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory as long as it can be somehow represented in numerical form. Modern computers have billions or even trillions of bytes of memory.
The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers, depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. Since data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.
Computer main memory comes in two principal varieties: random access memory or RAM, and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, while ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the software required to perform the task may be stored in ROM. Software that is stored in ROM is often called firmware because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM by retaining data when turned off but being rewritable like RAM. However, flash memory is typically much slower than conventional ROM and RAM, so its use is restricted to applications where high speeds are not required.[18]
In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally, computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
Input/output (I/O)
I/O is the means by which a computer receives information from the outside world and sends results back. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.
Often, I/O devices are complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.
Multitasking
Main article: Computer multitasking
While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.
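A toy model of this time-slicing can be written with Python generators: each "program" is a generator, a `yield` stands in for the point where an interrupt suspends it, and a simple scheduler gives each program one slice in turn. This is only an analogy for the hardware mechanism, not how a real operating system is built.

```python
def program(name, steps):
    """A pretend program that runs for a given number of steps."""
    for i in range(steps):
        yield f"{name} step {i}"   # yield = "interrupted; resume here later"

def round_robin(programs):
    """Interleave programs one step at a time until all have finished."""
    trace = []
    while programs:
        prog = programs.pop(0)     # take the program at the front of the queue
        try:
            trace.append(next(prog))
            programs.append(prog)  # not finished: go to the back of the queue
        except StopIteration:
            pass                   # this program has finished
    return trace

trace = round_robin([program("A", 2), program("B", 2)])
print(trace)  # ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

The interleaved trace shows why, at human timescales, both programs appear to run "at the same time" even though only one advances in any given instant.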
Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.
Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running. However, most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run at the same time without unacceptable speed loss.
Multiprocessing
Main article: Multiprocessing
Cray designed many supercomputers that used multiprocessing heavily.
Some computers may divide their work between two or more separate CPUs, creating a multiprocessing configuration. Traditionally, this technique was utilized only in large and powerful computers such as supercomputers, mainframe computers and servers. However, multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers have become widely available and are beginning to see increased usage in lower-end markets as a result.
Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general purpose computers.[19] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
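An "embarrassingly parallel" workload is one whose inputs are independent, so the work can be split among CPUs with no coordination beyond distributing the inputs and collecting the results. A minimal sketch using Python's standard multiprocessing module (the `simulate` function is a stand-in for an expensive independent computation):

```python
from multiprocessing import Pool

def simulate(seed):
    # Stand-in for an expensive, independent computation per input.
    return seed * seed

if __name__ == "__main__":
    # Each input is handed to one of four worker processes; results come
    # back in input order. No worker needs to talk to any other worker.
    with Pool(processes=4) as pool:
        results = pool.map(simulate, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because the tasks share nothing, adding CPUs speeds such workloads up almost linearly; tasks whose steps depend on each other gain far less from this structure.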
Networking and the Internet
Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems like Sabre.
In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. This effort was funded by ARPA (now DARPA), and the computer network that it produced was called the ARPANET. The technologies that made the ARPANET possible spread and evolved. In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.
Computer software
Computer software, or just software, is a general term used to describe a collection of computer programs, procedures and documentation that perform some tasks on a computer system.[1] The term includes application software, such as word processors, which perform productive tasks for users; system software, such as operating systems, which interface with hardware to provide the necessary services for application software; and middleware, which controls and co-ordinates distributed systems. Software includes websites, programs, video games, etc. that are coded in programming languages like C, C++, etc.
"Software" is sometimes used in a broader context to mean anything which is not hardware but which is used with hardware, such as film, tapes and records.[2]
Overview
Computer software is usually regarded as anything but hardware, meaning that the "hard" parts are the tangible ones (those that can be physically held) while the "soft" parts are the intangible objects inside the computer. Software encompasses an extremely wide array of products and technologies developed using different techniques such as programming languages and scripting languages. The types of software include web pages developed with technologies like HTML, PHP, Perl, JSP, ASP.NET and XML, and desktop applications like Microsoft Word and OpenOffice developed with technologies like C, C++, Java and C#. Software usually runs on an underlying operating system (which is itself software) such as Microsoft Windows, Linux (running GNOME or KDE) or Sun Solaris. Software also includes video games, such as Super Mario or Call of Duty, for personal computers or video game consoles. These games can be created using CGI (computer-generated imagery) designed with applications like Maya or 3ds Max.
Software also often runs on a software platform, such as Java or .NET. Software written for one platform will generally not run on another; for instance, Microsoft Windows software will not run on Mac OS, because how the software is written differs between the systems (platforms). Such applications can be made to work on another platform through software porting, interpreters, or re-writing the source code for that platform.
Relationship to computer hardware
Main article: Computer hardware
Computer software is so called to distinguish it from computer hardware, which encompasses the physical interconnections and devices required to store and execute (or run) the software. At the lowest level, software consists of a machine language specific to an individual processor. A machine language consists of groups of binary values signifying processor instructions which change the state of the computer from its preceding state. Software is an ordered sequence of instructions for changing the state of the computer hardware in a particular sequence. It is usually written in high-level programming languages that are easier and more efficient for humans to use (closer to natural language) than machine language. High-level languages are compiled or interpreted into machine language object code. Software may also be written in an assembly language, essentially a mnemonic representation of a machine language using a natural language alphabet. Assembly language must be assembled into object code via an assembler.
The term "software" was first used in this sense by John W. Tukey in 1958.[3] In computer science and software engineering, computer software is all computer programs. The theory that is the basis for most modern software was first proposed by Alan Turing in his 1936 essay "On Computable Numbers, with an Application to the Entscheidungsproblem".[4]
Types
Practical computer systems divide software systems into three major classes: system software, programming software and application software, although the distinction is arbitrary and often blurred.
* System software helps run the computer hardware and computer system. It includes operating systems, device drivers, diagnostic tools, servers, windowing systems, utilities and more. The purpose of systems software is to insulate the applications programmer as much as possible from the details of the particular computer complex being used, especially memory and other hardware features, and accessory devices such as communications, printers, readers, displays and keyboards.
* Programming software usually provides tools to assist a programmer in writing computer programs and software in different programming languages in a more convenient way. The tools include text editors, compilers, interpreters, linkers, debuggers, and so on. An integrated development environment (IDE) merges those tools into a software bundle, and a programmer may not need to type multiple commands for compiling, interpreting, debugging, tracing, and so on, because the IDE usually has an advanced graphical user interface, or GUI.
* Application software allows end users to accomplish one or more specific (non-computer related) tasks. Typical applications include industrial automation, business software, educational software, medical software, databases, and computer games. Businesses are probably the biggest users of application software, but almost every field of human activity now uses some form of application software.
Program and library
A program may not be sufficiently complete for execution by a computer. In particular, it may require additional software from a software library in order to be complete. Such a library may include software components used by stand-alone programs, but which cannot work on their own. Thus, programs may include standard routines that are common to many programs, extracted from these libraries. Libraries may also include 'stand-alone' programs which are activated by some computer event and/or perform some function (e.g., computer 'housekeeping') but do not return data to their calling program. Libraries may be called by one to many other programs; programs may call zero to many other programs.
Three layers
Users often see things differently than programmers. People who use modern general purpose computers (as opposed to embedded systems, analog computers, supercomputers, etc.) usually see three layers of software performing a variety of tasks: platform, application, and user software.
Platform software
    Platform software includes the firmware, device drivers, an operating system, and typically a graphical user interface which, in total, allow a user to interact with the computer and its peripherals (associated equipment). Platform software often comes bundled with the computer. On a PC you will usually have the ability to change the platform software.
Application software
    Application software or Applications are what most people think of when they think of software. Typical examples include office suites and video games. Application software is often purchased separately from computer hardware. Sometimes applications are bundled with the computer, but that does not change the fact that they run as independent applications. Applications are almost always independent programs from the operating system, though they are often tailored for specific platforms. Most users think of compilers, databases, and other "system software" as applications.
User-written software
    End-user development tailors systems to meet users' specific needs. User software includes spreadsheet templates, word processor macros, scientific simulations, and scripts for graphics and animations. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. Depending on how competently the user-written software has been integrated into purchased application packages, many users may not be aware of the distinction between the purchased packages and what has been added by fellow co-workers.
Creation
Main article: Computer programming
Operation
Computer software has to be "loaded" into the computer's storage (such as a hard drive, memory, or RAM). Once the software has loaded, the computer is able to execute it. This involves passing instructions from the application software, through the system software, to the hardware, which ultimately receives the instruction as machine code. Each instruction causes the computer to carry out an operation: moving data, carrying out a computation, or altering the control flow of instructions.
Data movement is typically from one place in memory to another. Sometimes it involves moving data between memory and registers, which enable high-speed data access in the CPU. Moving data, especially large amounts of it, can be costly, so this is sometimes avoided by using "pointers" to data instead. Computations include simple operations such as incrementing the value of a variable data element. More complex computations may involve many operations and data elements together.
Instructions may be performed sequentially, conditionally, or iteratively. Sequential instructions are those operations that are performed one after another. Conditional instructions are performed such that different sets of instructions execute depending on the value(s) of some data. In some languages this is known as an "if" statement. Iterative instructions are performed repetitively and may depend on some data value. This is sometimes called a "loop." Often, one instruction may "call" another set of instructions that are defined in some other program or module. When more than one computer processor is used, instructions may be executed simultaneously.
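The four flavors of instruction flow described above can be illustrated with a short Python sketch (the names are purely illustrative):

```python
def double(x):             # a set of instructions defined elsewhere,
    return x * 2           # available to be "called"

total = 0                  # sequential: statements run one after another
value = 5

if value > 3:              # conditional: an "if" statement selects which
    total = double(value)  # instructions execute; this line also "calls"
                           # the instructions defined in double()

for _ in range(3):         # iterative: a "loop" repeats instructions
    total += 1

print(total)  # 13
```

Real machine code expresses the same ideas with jumps and branches rather than structured statements, but the control-flow categories are the same.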
A simple example of the way software operates is what happens when a user selects an entry such as "Copy" from a menu. In this case, a conditional instruction is executed to copy text from data in a "document" area residing in memory, perhaps to an intermediate storage area known as a "clipboard" data area. If a different menu entry such as "Paste" is chosen, the software may execute the instructions to copy the text from the clipboard data area to a specific location in the same or another document in memory.
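A toy version of this Copy/Paste example can be written in a few lines of Python; here the "document" and "clipboard" are just string variables standing in for data areas in memory, and all names and positions are made up for illustration:

```python
document = "Hello, world"
clipboard = ""

def menu_select(entry, start=0, end=5, paste_at=12):
    """Run the instructions behind a (hypothetical) menu entry."""
    global document, clipboard
    if entry == "Copy":        # conditional instruction selects what to do
        clipboard = document[start:end]           # document -> clipboard
    elif entry == "Paste":
        document = (document[:paste_at]           # clipboard -> document,
                    + clipboard                   # at a specific location
                    + document[paste_at:])

menu_select("Copy")   # clipboard now holds "Hello"
menu_select("Paste")  # text is inserted at the end of the document
print(document)       # Hello, worldHello
```

A real editor adds selection tracking, undo history, and interaction with the operating system's clipboard, which is where the complexity discussed below comes from.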
Depending on the application, even the example above could become complicated. The field of software engineering endeavors to manage the complexity of how software operates. This is especially true for software that operates in the context of a large or powerful computer system.
Currently, almost the only limitation on the use of computer software in applications is the ingenuity of the designer/programmer. Consequently, large areas of activity (such as playing grandmaster-level chess) formerly assumed to be incapable of software simulation are now routinely programmed. The only area that has so far proved reasonably secure from software simulation is the realm of human art, especially pleasing music and literature.[citation needed]
Kinds of software by operation: computer program as executable, source code or script, configuration.
Quality and reliability
Software reliability considers the errors, faults, and failures related to the design, implementation, and operation of software. See Software auditing, Software quality, Software testing, and Software reliability.
License
The software's license gives the user the right to use the software in the licensed environment. Some software comes with the license when purchased off the shelf, or with an OEM license when bundled with hardware. Other software comes with a free software license, granting the recipient the rights to modify and redistribute the software. Software can also be in the form of freeware or shareware. See also License Management.
Computer hardware
Typical PC hardware
A typical personal computer consists of a case or chassis, in a tower or desktop shape, and the following parts:
Motherboard
    * Motherboard - The main board of the computer, through which all other components interface.
    * Central processing unit (CPU) - Performs most of the calculations which enable a computer to function; sometimes referred to as the "brain" of the computer.
          o Computer fan - Used to lower the temperature of the computer; a fan is almost always attached to the CPU, and the computer case will generally have several fans to maintain a constant airflow. Liquid cooling can also be used to cool a computer, though it focuses more on individual parts rather than the overall temperature inside the chassis.
    * Random Access Memory (RAM) - Also known as the physical memory of the computer; fast-access memory that is cleared when the computer is powered down. RAM attaches directly to the motherboard and is used to store programs that are currently running.
    * Firmware - Loaded from read-only memory (ROM) and run as the Basic Input/Output System (BIOS) or, in newer systems, an Extensible Firmware Interface (EFI)-compliant equivalent.
* Internal Buses - Connections to various internal components.
          o PCI (being phased out for graphics cards but still used for other uses)
o PCI-E
o USB
o HyperTransport
o CSI (expected in 2008)
o AGP (being phased out)
o VLB (outdated)
    * External Bus Controllers - Used to connect to external peripherals, such as printers and input devices. These ports may also be based upon expansion cards attached to the internal buses.
Power supply
Main article: Computer power supply
The power supply sits in the case, includes (usually) a cooling fan, and supplies power to run the rest of the computer. The most common older types of power supply are AT and Baby AT, but the current standards for PCs are ATX and Micro ATX.