
History of Computers and Networks

Webster's Dictionary defines "computer" as any programmable electronic device
that can store, retrieve, and process data. The basic idea of computing
develops in the 1200s, when a Muslim cleric proposes solving problems with a
series of written procedures.

As early as the 1640s mechanical calculators are manufactured for sale.
Records exist of earlier machines, but Blaise Pascal invents the first
commercial calculator, a hand-powered adding machine. Although Gottfried
Leibniz attempts mechanical multiplication in the 1670s, the first true
multiplying calculator appears in Germany shortly before the American
Revolution.

In 1801 a Frenchman, Joseph-Marie Jacquard, builds a loom that weaves by
reading punched holes stored on small sheets of hardwood. These plates are
then inserted into the loom, which reads (retrieves) the pattern and creates
(processes) the weave. Powered by water, this "machine" comes 140 years
before the development of the modern computer.

Shortly after the first mass-produced calculator (1820), Charles Babbage
begins his lifelong quest for a programmable machine. Although Babbage is a
poor communicator and record-keeper, his Analytical Engine design is
sufficiently developed by 1843 that Ada Lovelace publishes a translation of a
paper describing it, along with her own extensive notes on programming the
machine. She is generally regarded as the first programmer. In 1854 George
Boole, while professor of mathematics at Queen's College, Cork, writes An
Investigation of the Laws of Thought, and is generally recognized as the
father of computer science.

The 1890 census is tabulated on punch cards similar to the ones used 90 years
earlier to create weaves. Developed by Herman Hollerith of MIT, the system
uses electric power rather than purely mechanical action. The Hollerith
Tabulating Company is a forerunner of today's IBM.

Just prior to the introduction of Hollerith's machine, the first printing
calculator appears. In 1892 William Burroughs, a sickly ex-teller, introduces
a commercially successful printing calculator. Although hand-powered at
first, Burroughs quickly introduces an electric model.

In 1925, unaware of the work of Charles Babbage, Vannevar Bush of MIT builds
a machine he calls the differential analyzer. Using a set of gears and
shafts, much like Babbage, the machine can handle simple calculus problems,
but accuracy is a problem.

The period from 1935 through 1952 gets murky with claims and counterclaims of
who invents what and when. Part of the problem lies in the international
situation, which makes much of the research secret. Other problems include
poor record-keeping, deception, and a lack of agreed definitions.

In 1935 Konrad Zuse, a German construction engineer, builds a mechanical
calculator to handle the math involved in his profession. Shortly after
completing it, Zuse starts on a programmable machine, which he completes in
1938.

John Vincent Atanasoff begins work on a digital computer in 1936 in the
basement of the physics building on the campus of Iowa State. A graduate
student, Clifford Berry, assists. The "ABC" is designed to solve the systems
of linear equations common in physics. It displays some early features of
later computers, including electronic calculation. Atanasoff shows it to
others in 1939 and leaves the patent application with attorneys for the
school when he departs for a job in Washington during World War II.
Unimpressed, the school never files, and the ABC is cannibalized by students.

The Enigma, a complex mechanical encoder, is used by the Germans, who believe
it to be unbreakable. Several people, most notably Alan Turing, conceive
machines to attack the problem, but none are yet technically feasible. In
1937 Turing proposes a "Universal Machine" capable of "computing" any
algorithm. That same year George Stibitz creates his Model K(itchen), a
conglomeration of otherwise useless and leftover material, to solve complex
calculations. He improves the design while working at Bell Labs, and on
September 11, 1940, Stibitz uses a teletype machine at Dartmouth College in
New Hampshire to transmit a problem to his Complex Number Calculator in New
York and receives the results. It is the first example of a network.

First in Poland, and later in Great Britain and the United States, the Enigma
code is broken. Information gained this way shortens the war. To break German
codes the British, with Turing playing a leading role, build a series of
code-breaking machines culminating in the electronic Colossus Mark I. The
existence of this machine is a closely guarded secret of the British
Government until 1970. The United States Navy, aided to some extent by the
British, builds machines capable of breaking not only German codes but
Japanese codes as well.

In 1943 development begins in earnest on the Electronic Numerical Integrator
And Computer (ENIAC) at the University of Pennsylvania. It is designed by
John Mauchly and J. Presper Eckert of the Moore School, who get help from
John von Neumann and others. In 1944 the Harvard Mark I is introduced. Based
on a series of proposals from Howard Aiken in the late 1930s, the Mark I
computes complex tables for the U.S. Navy. It uses paper tape to store
instructions, and Aiken hires Grace Hopper ("Amazing Grace") as one of three
programmers working on the machine. Thomas J. Watson Sr. plays a pivotal role
by involving his company, IBM, in the machine's development.

Early in 1945, with the Mark I stopped for repairs, Hopper notices a moth in
one of the relays, possibly causing the problem. From this day on, Hopper
refers to fixing the system as "debugging." The same year von Neumann
proposes the concept of a "stored program" in a paper that is never
officially published.

Work on ENIAC is completed in 1946. Although the design is only three years
old, the machine is already woefully behind the state of the art, but the
inventors opt to finish it while working on a more modern machine, the EDVAC.
Programming ENIAC requires it to be rewired; a later modification eliminates
this problem. To make the machine appear more impressive to reporters during
its unveiling, a team member (possibly Eckert) puts translucent spheres
(halved ping-pong balls) over the lights. The U.S. Patent Office will later
recognize ENIAC as the first computer.

The next year scientists at Bell Labs complete work on the transistor (John
Bardeen, Walter Brattain, and William Shockley receive the Nobel Prize in
Physics for it in 1956), and by 1948 teams around the world are working on
"stored program" machines. The first, nicknamed "Baby," is a prototype of a
much larger machine under construction in Britain and runs its first program
in June 1948.

The impetus for advances in computers over the next five years comes mostly
from government and the military. UNIVAC, delivered in 1951 to the Census
Bureau, results in a tremendous financial loss for its manufacturer,
Remington Rand. The next year Grace Hopper, now an employee of that company,
proposes "reusable software": code segments that could be extracted and
assembled according to instructions in a "higher level language." The concept
of compiling is born. Hopper revises this concept over the next twenty years,
and her ideas become an integral part of all modern computers. CBS uses one
of the 46 UNIVAC computers produced to predict the outcome of the 1952
presidential election. They do not air the prediction for three hours because
they do not trust the machine.

IBM introduces the 701 the following year; it is IBM's first commercially
successful computer. In 1957 FORTRAN is introduced (proposed in 1954, it
takes nearly three years to develop the compiler). Two additional languages,
LISP and COBOL, follow in the late 1950s. Other early languages include ALGOL
and BASIC. Although never widely used itself, ALGOL is the basis for many of
today's languages.

With the introduction of Control Data's CDC 1604 in 1958, one of the first
fully transistorized computers, a new age dawns. The brilliant Seymour Cray
heads the development team. That same year the integrated circuit is
conceived by two men working independently, Jack Kilby and Robert Noyce. The
second network is developed at MIT. Over the next three years computers begin
affecting the day-to-day lives of most Americans; the addition of MICR
characters at the bottom of checks becomes common.

In 1961 Fairchild Semiconductor introduces the commercial integrated circuit.
Within ten years all computers use these instead of discrete transistors.
Formerly building-sized computers are now room-sized and considerably more
powerful. The following year the Atlas becomes operational, displaying many
of the features that make today's systems so powerful, including virtual
memory, pipelined instruction execution, and paging. Designed at the
University of Manchester, the Atlas benefits from contributions by some of
the people who developed Colossus some twenty years earlier.

On April 7, 1964, IBM introduces the System/360. While a technical marvel,
the main feature of this machine is business-oriented: IBM guarantees the
"upward compatibility" of the system, reducing the risk that a business would
invest in outdated technology. Dartmouth College, where the first network was
demonstrated 25 years earlier, moves to the forefront of the "computer age"
with the introduction of its time-sharing system, a crude (by today's
standards) networking system. It is the first wide area network. In three
years Randy Golden, President and Founder of Golden Ink, would begin working
on this network.

Within a year MIT returns to the top of the intellectual computer community
with the introduction of a greatly refined network that features shared
resources and uses the first minicomputer (DEC's PDP-8) to manage telephone
lines. Bell Labs and GE play major roles in its design.

In 1969 Bell Labs, unhappy with the direction of the MIT project, leaves and
develops its own operating system, UNIX. One of the many precursors to
today's Internet, ARPANET, is quietly launched. Alan Kay, who will later work
for Apple, proposes the "personal computer." A year earlier, in 1968, a group
of engineers unhappy with Fairchild Semiconductor had left to form their own
company, Intel. Also in 1969, The Computer Wore Tennis Shoes becomes the
first feature-length movie with the word "computer" in the title; the next
year the movie Colossus: The Forbin Project casts a supercomputer as the
villain. In 1971 Texas Instruments introduces the first "pocket calculator."
It weighs 2.5 pounds.

With the country embroiled in a crisis of confidence known as Watergate, in
1973 a little-publicized judicial decision takes the patent for the computer
away from Mauchly and Eckert and credits Atanasoff as the inventor. Xerox's
Alto popularizes the mouse, invented by Douglas Engelbart several years
earlier. Proposals are made for the first local area networks.

In 1975 the first personal computer is marketed in kit form. The Altair
features 256 bytes of memory. Bill Gates, with others, writes a BASIC
interpreter for the machine. The next year Apple begins to market its first
computer, also sold in kit form; the buyer supplies the case, keyboard, and
monitor. The earliest RISC platforms become stable. In 1976 Queen Elizabeth
goes on-line with the first royal email message.

During the next few years the personal computer explodes on the American
scene. Microsoft, Apple, and many smaller PC-related companies form (and some
die). By 1977 stores begin to sell PCs. Continuing today, companies strive to
reduce the size and price of PCs while increasing capacity. Entering the
fray, IBM introduces its PC in 1981 (it is actually IBM's second attempt; the
first failed miserably). Time selects the computer as its Machine of the Year
for 1982. Tron, a computer-generated special-effects extravaganza, is
released the same year.

Illustrated History of Computers

Part 1

John Kopplin © 2002

The first computers were people! That is, electronic computers (and the earlier
mechanical computers) were given this name because they performed the work
that had previously been assigned to people. "Computer" was originally a job
title: it was used to describe those human beings (predominantly women) whose
job it was to perform the repetitive calculations required to compute such things
as navigational tables, tide charts, and planetary positions for astronomical
almanacs. Imagine you had a job where hour after hour, day after day, you were
to do nothing but compute multiplications. Boredom would quickly set in, leading
to carelessness, leading to mistakes. And even on your best days you wouldn't
be producing answers very fast. Therefore, inventors have been searching for
hundreds of years for a way to mechanize (that is, find a mechanism that can
perform) this task.
This picture shows what were known as "counting tables" [photo
courtesy IBM]

A typical computer operation back when computers were people.

The abacus was an early aid for mathematical computations. Its only value is
that it aids the memory of the human performing the calculation. A skilled abacus
operator can work on addition and subtraction problems at the speed of a person
equipped with a hand calculator (multiplication and division are slower). The
abacus is often wrongly attributed to China. In fact, the oldest surviving abacus
was used in 300 B.C. by the Babylonians. The abacus is still in use today,
principally in the Far East. A modern abacus consists of rings that slide over rods,
but the older one pictured below dates from the time when pebbles were used for
counting (the word "calculus" comes from the Latin word for pebble).

A very old abacus

A more modern abacus. Note how the abacus is really just a
representation of the human fingers: the 5 lower rings on
each rod represent the 5 fingers and the 2 upper rings
represent the 2 hands.

In 1617 an eccentric (some say mad) Scotsman named John Napier invented
logarithms, a technique that allows multiplication to be performed via
addition. The magic ingredient is the logarithm of each operand, which was
originally obtained from a printed table. Napier also invented an alternative
to tables: a set of numbered rods, carved from ivory and now called Napier's
Bones, that mechanize multiplication digit by digit.
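
If you'd like to see the logarithm trick in action, here is a minimal sketch in
modern C++ (the numbers and variable names are just for illustration, and the
standard library's log10 and pow stand in for the printed tables Napier's users
would have consulted):

    #include <cmath>
    #include <iostream>

    int main() {
        double a = 37.0, b = 52.0;

        // Look up the logarithm of each operand
        // (originally read from a printed table).
        double logA = std::log10(a);
        double logB = std::log10(b);

        // Multiplication becomes addition in the logarithmic domain.
        double sum = logA + logB;

        // Convert back with the antilogarithm (another table lookup back then).
        double product = std::pow(10.0, sum);

        std::cout << a << " x " << b << " = " << product << "\n";  // about 1924
        return 0;
    }

The point is not the code but the shortcut it demonstrates: one addition (plus
two table lookups) replaces a long multiplication.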

An original set of Napier's Bones [photo courtesy IBM]

A more modern set of Napier's Bones

Napier's invention led directly to the slide rule, first built in England in 1632 and
still in use in the 1960's by the NASA engineers of the Mercury, Gemini, and
Apollo programs which landed men on the moon.
A slide rule

Leonardo da Vinci (1452-1519) made drawings of gear-driven calculating
machines but apparently never built any.

A Leonardo da Vinci drawing showing gears arranged for computing

The first gear-driven calculating machine to actually be built was probably the
calculating clock, so named by its inventor, the German professor Wilhelm
Schickard, in 1623. This device got little publicity because Schickard died of
the bubonic plague soon afterward.

Schickard's Calculating Clock


In 1642 Blaise Pascal, at age 19, invented the Pascaline as an aid for his
father, who was a tax collector. Pascal built 50 of these gear-driven
one-function calculators (they could only add) but couldn't sell many because
of their exorbitant cost and because they really weren't that accurate (at
that time it was not possible to fabricate gears with the required precision).
Until car dashboards went digital, the odometer portion of a car's speedometer
used the very same mechanism as the Pascaline to increment the next wheel
after each full revolution of the prior wheel (a sketch of this carry scheme
appears after the photos below). Pascal was a child prodigy: at the age of 12,
he was discovered doing his version of Euclid's thirty-second proposition on
the kitchen floor. Pascal went on to invent probability theory, the hydraulic
press, and the syringe. Shown below is an 8 digit version of the Pascaline,
and two views of a 6 digit version:

Pascal's Pascaline [photo © 2002 IEEE]


A 6 digit model for those who couldn't afford the 8 digit model

A Pascaline opened up so you can observe the gears and cylinders


which rotated to display the numerical result
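
The odometer-style carry described above is easy to mimic in software. Here is
a minimal sketch (an illustration of the idea only, not a model of Pascal's
actual gearing; the six "wheels" simply echo the 6 digit Pascaline):

    #include <array>
    #include <iostream>

    // Six decimal "wheels", least significant first.
    std::array<int, 6> wheels = {0, 0, 0, 0, 0, 0};

    void addOne() {
        for (int i = 0; i < 6; ++i) {
            wheels[i]++;               // turn this wheel one notch
            if (wheels[i] < 10) return;
            wheels[i] = 0;             // a full revolution rolls over...
        }                              // ...and carries into the next wheel
    }

    int main() {
        for (int n = 0; n < 1009; ++n) addOne();
        for (int i = 5; i >= 0; --i) std::cout << wheels[i];
        std::cout << "\n";             // prints 001009
        return 0;
    }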

Just a few years after Pascal, the German Gottfried Wilhelm Leibniz (co-inventor
with Newton of calculus) managed to build a four-function (addition, subtraction,
multiplication, and division) calculator that he called the stepped reckoner
because, instead of gears, it employed fluted drums having ten flutes arranged
around their circumference in a stair-step fashion. Although the stepped reckoner
employed the decimal number system (each drum had 10 flutes), Leibniz was the
first to advocate use of the binary number system which is fundamental to the
operation of modern computers. Leibniz is considered one of the greatest of the
philosophers but he died poor and alone.

Leibniz's Stepped Reckoner (have you ever heard "calculating"
referred to as "reckoning"?)

In 1801 the Frenchman Joseph Marie Jacquard invented a power loom that could
base its weave (and hence the design on the fabric) upon a pattern automatically
read from punched wooden cards, held together in a long row by rope.
Descendants of these punched cards have been in use ever since (remember
the "hanging chad" from the Florida presidential ballots of the year 2000?).

Jacquard's Loom showing the threads and the punched cards


By selecting particular cards for Jacquard's loom you defined the
woven pattern [photo © 2002 IEEE]
A close-up of a Jacquard card

This tapestry was woven by a Jacquard loom

Jacquard's technology was a real boon to mill owners, but put many loom
operators out of work. Angry mobs smashed Jacquard looms and once attacked
Jacquard himself. History is full of examples of labor unrest following
technological innovation, yet most studies show that, overall, technology has
actually increased the number of jobs.
By 1822 the English mathematician Charles Babbage was proposing a steam-driven
calculating machine the size of a room, which he called the Difference
Engine. This machine would be able to compute tables of numbers, such as
logarithm tables. He obtained government funding for this project due to the
importance of numeric tables in ocean navigation. By promoting their
commercial and military navies, the British government had managed to become
the earth's greatest empire. But at the time the British government was
publishing a seven-volume set of navigation tables that came with a companion
volume of corrections showing that the set had over 1000 numerical errors. It
was hoped that Babbage's machine could eliminate errors in these types of
tables. But construction of Babbage's Difference Engine proved exceedingly
difficult and the project soon became the most expensive government-funded
project up to that point in English history. Ten years later the device was
still nowhere near complete, acrimony abounded between all involved, and
funding dried up. The device was never finished.
A small section of the type of mechanism employed in Babbage's
Difference Engine [photo © 2002 IEEE]

Babbage was not deterred, and by then was on to his next brainstorm, which he
called the Analytic Engine. This device, large as a house and powered by 6
steam engines, would be more general purpose in nature because it would be
programmable, thanks to the punched card technology of Jacquard. But it was
Babbage who made an important intellectual leap regarding the punched cards.
In the Jacquard loom, the presence or absence of each hole in the card
physically allows a colored thread to pass or stops that thread (you can see this
clearly in the earlier photo). Babbage saw that the pattern of holes could be used
to represent an abstract idea such as a problem statement or the raw data
required for that problem's solution. Babbage saw that there was no requirement
that the problem matter itself physically pass thru the holes.

Furthermore, Babbage realized that punched paper could be employed as a
storage mechanism, holding computed numbers for future reference. Because of
the connection to the Jacquard loom, Babbage called the two main parts of his
Analytic Engine the "Store" and the "Mill", as both terms are used in the weaving
industry. The Store was where numbers were held and the Mill was where they
were "woven" into new results. In a modern computer these same parts are
called the memory unit and the central processing unit (CPU).

The Analytic Engine also had a key function that distinguishes computers from
calculators: the conditional statement. A conditional statement allows a program
to achieve different results each time it is run. Based on the conditional
statement, the path of the program (that is, what statements are executed next)
can be determined based upon a condition or situation that is detected at the
very moment the program is running.

You have probably observed that a modern stoplight at an intersection between a
busy street and a less busy street will leave the green light on the busy street
until a car approaches on the less busy street. This type of street light is
controlled by a computer program that can sense the approach of cars on the
less busy street. That moment when the light changes from green to red is not
fixed in the program but rather varies with each traffic situation. The conditional
statement in the stoplight program would be something like, "if a car approaches
on the less busy street and the more busy street has already enjoyed the green
light for at least a minute then move the green light to the less busy street". The
conditional statement also allows a program to react to the results of its own
calculations. An example would be the program that the I.R.S. uses to detect tax
fraud. This program first computes a person's tax liability and then decides
whether to alert the police based upon how that person's tax payments compare
to his obligations.
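
A hedged sketch of that stoplight rule as a conditional statement might look
like this in C++ (the variable names, the sensed values, and the one-minute
threshold are all made up for illustration):

    #include <iostream>

    int main() {
        // Values the controller might sense at this instant (illustrative only).
        bool carWaitingOnSideStreet = true;
        int  secondsMainStreetGreen = 75;

        // The conditional statement: which branch runs depends on conditions
        // detected while the program is running, not on anything fixed in advance.
        if (carWaitingOnSideStreet && secondsMainStreetGreen >= 60) {
            std::cout << "Switch the green light to the side street\n";
        } else {
            std::cout << "Keep the green light on the main street\n";
        }
        return 0;
    }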

Babbage befriended Ada Byron, the daughter of the famous poet Lord Byron
(Ada would later become the Countess Lady Lovelace by marriage). Though she
was only 19, she was fascinated by Babbage's ideas and thru letters and
meetings with Babbage she learned enough about the design of the Analytic
Engine to begin fashioning programs for the still unbuilt machine. While Babbage
refused to publish his knowledge for another 30 years, Ada wrote a series of
"Notes" wherein she detailed sequences of instructions she had prepared for the
Analytic Engine. The Analytic Engine remained unbuilt (the British government
refused to get involved with this one) but Ada earned her spot in history as the
first computer programmer. Ada invented the subroutine and was the first to
recognize the importance of looping. Babbage himself went on to invent the
modern postal system, cowcatchers on trains, and the ophthalmoscope, which is
still used today to examine the eye.
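
The two ideas credited to Ada here, the subroutine and the loop, are still the
backbone of every program. A minimal modern illustration (in C++, obviously
not in her Analytical Engine notation):

    #include <iostream>

    // A subroutine: a named block of instructions that can be reused
    // from many places instead of being written out each time.
    long factorial(int n) {
        long result = 1;
        for (int i = 2; i <= n; ++i)   // a loop: repeat the same step
            result *= i;               // with a changing value of i
        return result;
    }

    int main() {
        for (int n = 1; n <= 5; ++n)
            std::cout << n << "! = " << factorial(n) << "\n";
        return 0;
    }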

The next breakthrough occurred in America. The U.S. Constitution states that a
census should be taken of all U.S. citizens every 10 years in order to determine
the representation of the states in Congress. While the very first census of 1790
had only required 9 months, by 1880 the U.S. population had grown so much that
the count for the 1880 census took 7.5 years. Automation was clearly needed for
the next census. The census bureau offered a prize for an inventor to help with
the 1890 census, and this prize was won by Herman Hollerith, who proposed and
then successfully adapted Jacquard's punched cards for the purpose of
computation.

Hollerith's invention, known as the Hollerith desk, consisted of a card reader
which sensed the holes in the cards, a gear-driven mechanism which could count
(using Pascal's mechanism which we still see in car odometers), and a large wall
of dial indicators (a car speedometer is a dial indicator) to display the results of
the count.

An operator working at a Hollerith Desk like the one below


Preparation of punched cards for the U.S. census
A few Hollerith desks still exist today [photo courtesy The Computer
Museum]

The patterns on Jacquard's cards were determined when a tapestry was
designed and then were not changed. Today, we would call this a read-only
form of information storage. Hollerith had the insight to convert punched cards to
what is today called a read/write technology. While riding a train, he observed
that the conductor didn't merely punch each ticket, but rather punched a
particular pattern of holes whose positions indicated the approximate height,
weight, eye color, etc. of the ticket owner. This was done to keep anyone else
from picking up a discarded ticket and claiming it was his own (a train ticket did
not lose all value when it was punched because the same ticket was used for
each leg of a trip). Hollerith realized how useful it would be to punch (write) new
cards based upon an analysis (reading) of some other set of cards. Complicated
analyses, too involved to be accomplished during a single pass thru the cards,
could be accomplished via multiple passes thru the cards, using newly punched
cards to remember the intermediate results. Unknown to Hollerith, Babbage had
proposed this long before.
Hollerith's technique was successful and the 1890 census was completed in only
3 years at a savings of 5 million dollars. Interesting aside: the reason that a
person who removes inappropriate content from a book or movie is called a
censor, as is a person who conducts a census, is that in Roman society the
public official called the "censor" had both of these jobs.

Hollerith built a company, the Tabulating Machine Company which, after a few
buyouts, eventually became International Business Machines, known today as
IBM. IBM grew rapidly and punched cards became ubiquitous. Your gas bill
would arrive each month with a punch card you had to return with your payment.
This punch card recorded the particulars of your account: your name, address,
gas usage, etc. (I imagine there were some "hackers" in those days who would
alter the punch cards to change their bill). As another example, when you
entered a toll way (a highway that collects a fee from each driver) you were given
a punch card that recorded where you started and then when you exited from the
toll way your fee was computed based upon the miles you drove. When you
voted in an election the ballot you were handed was a punch card. The little
pieces of paper that are punched out of the card are called "chad" and were
thrown as confetti at weddings. Until recently all Social Security and other checks
issued by the Federal government were actually punch cards. The check-out slip
inside a library book was a punch card. Written on all these cards was a phrase
as common as "close cover before striking": "do not fold, spindle, or mutilate". A
spindle was an upright spike on the desk of an accounting clerk. As he
completed processing each receipt he would impale it on this spike. When the
spindle was full, he'd run a piece of string through the holes, tie up the bundle,
and ship it off to the archives. You occasionally still see spindles at restaurant
cash registers.

Two types of computer punch cards


Incidentally, the Hollerith census machine was the first machine to ever be
featured on a magazine cover.
IBM continued to develop mechanical calculators for sale to businesses to help
with financial accounting and inventory accounting. One characteristic of both
financial accounting and inventory accounting is that although you need to
subtract, you don't need negative numbers and you really don't have to multiply
since multiplication can be accomplished via repeated addition.
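
That claim, that a business machine can get by with nothing but an adder,
rests on the fact that multiplication is just repeated addition. A small
sketch of the idea (illustrative only, and far less efficient than how real
hardware multiplies):

    #include <iostream>

    // Multiply using nothing but addition, the way a tabulating machine
    // could: add the multiplicand to a running total once for every
    // unit of the multiplier.
    int multiplyByRepeatedAddition(int multiplicand, int multiplier) {
        int total = 0;
        for (int i = 0; i < multiplier; ++i)
            total += multiplicand;
        return total;
    }

    int main() {
        std::cout << multiplyByRepeatedAddition(36, 14) << "\n";  // prints 504
        return 0;
    }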

But the U.S. military desired a mechanical calculator more optimized for scientific
computation. By World War II the U.S. had battleships that could lob shells
weighing as much as a small car over distances up to 25 miles. Physicists could
write the equations that described how atmospheric drag, wind, gravity, muzzle
velocity, etc. would determine the trajectory of the shell. But solving such
equations was extremely laborious. This was the work performed by the human
computers. Their results would be published in ballistic "firing tables" published in
gunnery manuals. During World War II the U.S. military scoured the country
looking for (generally female) math majors to hire for the job of computing these
tables. But not enough humans could be found to keep up with the need for new
tables. Sometimes artillery pieces had to be delivered to the battlefield without
the necessary firing tables and this meant they were close to useless because
they couldn't be aimed properly. Faced with this situation, the U.S. military was
willing to invest in even harebrained schemes to automate this type of
computation.

One early success was the Harvard Mark I computer which was built as a
partnership between Harvard and IBM in 1944. This was the first programmable
digital computer made in the U.S. But it was not a purely electronic computer.
Instead the Mark I was constructed out of switches, relays, rotating shafts, and
clutches. The machine weighed 5 tons, incorporated 500 miles of wire, was 8
feet tall and 51 feet long, and had a 50 ft rotating shaft running its length, turned
by a 5 horsepower electric motor. The Mark I ran non-stop for 15 years, sounding
like a roomful of ladies knitting. To appreciate the scale of this machine note the
four typewriters in the foreground of the following photo.
The Harvard Mark I: an electro-mechanical computer

You can see the 50 ft rotating shaft in the bottom of the prior photo. This shaft
was a central power source for the entire machine. This design feature was
reminiscent of the days when waterpower was used to run a machine shop and
each lathe or other tool was driven by a belt connected to a single overhead shaft
which was turned by an outside waterwheel.
A central shaft driven by an outside waterwheel and connected to
each machine by overhead belts was the customary power
source for all the machines in a factory

Here's a close-up of one of the Mark I's four paper tape readers. A paper tape
was an improvement over a box of punched cards as anyone who has ever
dropped -- and thus shuffled -- his "stack" knows.
One of the four paper tape readers on the Harvard Mark I (you can
observe the punched paper roll emerging from the bottom)

One of the primary programmers for the Mark I was a woman, Grace Hopper.
Hopper found the first computer "bug": a dead moth that had gotten into the Mark
I and whose wings were blocking the reading of the holes in the paper tape. The
word "bug" had been used to describe a defect since at least 1889 but Hopper is
credited with coining the word "debugging" to describe the work to eliminate
program faults.

The first computer bug [photo © 2002 IEEE]

In 1953 Grace Hopper invented the first high-level language, "Flow-matic". This
language eventually became COBOL which was the language most affected by
the infamous Y2K problem. A high-level language is designed to be more
understandable by humans than is the binary language understood by the
computing machinery. A high-level language is worthless without a program --
known as a compiler -- to translate it into the binary language of the computer
and hence Grace Hopper also constructed the world's first compiler. Grace
remained active as a Rear Admiral in the Navy Reserves until she was 79
(another record).
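
To make the high-level-language/compiler distinction concrete, here is one
high-level statement in C++ together with, in the comments, a rough sketch of
the kind of lower-level steps a compiler translates it into (the steps are a
simplification for illustration, not the output of any particular compiler):

    #include <iostream>

    int main() {
        double hours = 7.5, rate = 12.0;

        // One high-level statement...
        double pay = hours * rate;

        // ...which a compiler turns into several machine-level steps, roughly:
        // load 'hours' into a register, load 'rate' into another register,
        // multiply the two registers, store the result into 'pay'.
        std::cout << "pay = " << pay << "\n";
        return 0;
    }
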
The Mark I operated on numbers that were 23 digits wide. It could add or
subtract two of these numbers in three-tenths of a second, multiply them in four
seconds, and divide them in ten seconds. Forty-five years later computers could
perform an addition in a billionth of a second! Even though the Mark I had three
quarters of a million components, it could only store 72 numbers! Today, home
computers can store 30 million numbers in RAM and another 10 billion numbers
on their hard disk. Today, a number can be pulled from RAM after a delay of only
a few billionths of a second, and from a hard disk after a delay of only a few
thousandths of a second. This kind of speed is obviously impossible for a
machine which must move a rotating shaft and that is why electronic computers
killed off their mechanical predecessors.

On a humorous note, the principal designer of the Mark I, Howard Aiken of
Harvard, estimated in 1947 that six electronic digital computers would be
sufficient to satisfy the computing needs of the entire United States. IBM had
commissioned this study to determine whether it should bother developing this
new invention into one of its standard products (up until then computers were
one-of-a-kind items built by special arrangement). Aiken's prediction wasn't
actually so bad as there were very few institutions (principally, the government
and military) that could afford the cost of what was called a computer in 1947. He
just didn't foresee the micro-electronics revolution which would allow something
like an IBM Stretch computer of 1959:
(that's just the operator's console, here's the rest of its 33 foot length:)
to be bested by a home computer of 1976 such as this Apple I which sold for
only $600:

The Apple 1 which was sold as a do-it-yourself kit (without the lovely
case seen here)

Computers had been incredibly expensive because they required so much hand
assembly, such as the wiring seen in this CDC 7600:
Typical wiring in an early mainframe computer [photo courtesy The
Computer Museum]
The microelectronics revolution is what allowed the amount of hand-crafted
wiring seen in the prior photo to be mass-produced as an integrated circuit
which is a small sliver of silicon the size of your thumbnail.
An integrated circuit ("silicon chip") [photo courtesy of IBM]
The primary advantage of an integrated circuit is not that the transistors
(switches) are minuscule (that's the secondary advantage), but rather that millions
of transistors can be created and interconnected in a mass-production process.
All the elements on the integrated circuit are fabricated simultaneously via a
small number (maybe 12) of optical masks that define the geometry of each
layer. This speeds up the process of fabricating the computer -- and hence
reduces its cost -- just as Gutenberg's printing press sped up the fabrication of
books and thereby made them affordable to all.

The IBM Stretch computer of 1959 needed its 33 foot length to hold the 150,000
transistors it contained. These transistors were tremendously smaller than the
vacuum tubes they replaced, but they were still individual elements requiring
individual assembly. By the early 1980s this many transistors could be
simultaneously fabricated on an integrated circuit. Today's Pentium 4
microprocessor contains 42,000,000 transistors in this same thumbnail sized
piece of silicon.

It's humorous to remember that in between the Stretch machine (which would be
called a mainframe today) and the Apple I (a desktop computer) there was an
entire industry segment referred to as mini-computers such as the following
PDP-12 computer of 1969:
The DEC PDP-12
Sure looks "mini", huh? But we're getting ahead of our story.

One of the earliest attempts to build an all-electronic (that is, no gears, cams,
belts, shafts, etc.) digital computer occurred in 1937 by J. V. Atanasoff, a
professor of physics and mathematics at Iowa State University. By 1941 he and
his graduate student, Clifford Berry, had succeeded in building a machine that
could solve 29 simultaneous equations with 29 unknowns. This machine was the
first to store data as a charge on a capacitor, which is how today's computers
store information in their main memory (DRAM or dynamic RAM). As far as its
inventors were aware, it was also the first to employ binary arithmetic. However,
the machine was not programmable, it lacked a conditional branch, its design
was appropriate for only one type of mathematical problem, and it was not further
pursued after World War II. Its inventors didn't even bother to preserve the
machine and it was dismantled by those who moved into the room where it lay
abandoned.

The Atanasoff-Berry Computer [photo © 2002 IEEE]

Another candidate for granddaddy of the modern computer was Colossus, built
during World War II by Britain for the purpose of breaking the cryptographic
codes used by Germany. Britain led the world in designing and building
electronic machines dedicated to code breaking, and was routinely able to read
coded German radio transmissions. But Colossus was definitely not a general
purpose, reprogrammable machine. Note the presence of pulleys in the two
photos of Colossus below:
Two views of the code-breaking Colossus of Great Britain

The Harvard Mark I, the Atanasoff-Berry computer, and the British Colossus all
made important contributions. American and British computer pioneers were still
arguing over who was first to do what, when in 1965 the work of the German
Konrad Zuse was published for the first time in English. Scooped! Zuse had built
a sequence of general purpose computers in Nazi Germany. The first, the Z1,
was built between 1936 and 1938 in the parlor of his parents' home.
The Zuse Z1 in its residential setting

Zuse's third machine, the Z3, built in 1941, was probably the first operational,
general-purpose, programmable (that is, software controlled) digital computer.
Without knowledge of any calculating machine inventors since Leibniz (who lived
in the 1600's), Zuse reinvented Babbage's concept of programming and decided
on his own to employ binary representation for numbers (Babbage had
advocated decimal). The Z3 was destroyed by an Allied bombing raid. The Z1
and Z2 met the same fate and the Z4 survived only because Zuse hauled it in a
wagon up into the mountains. Zuse's accomplishments are all the more
incredible given the context of the material and manpower shortages in Germany
during World War II. Zuse couldn't even obtain paper tape so he had to make his
own by punching holes in discarded movie film. Because these machines were
unknown outside Germany, they did not influence the path of computing in
America. But their architecture is identical to that still in use today: an arithmetic
unit to do the calculations, a memory for storing numbers, a control system to
supervise operations, and input and output devices to connect to the external
world. Zuse also invented what might be the first high-level computer language,
"Plankalkul", though it too was unknown outside Germany.

The title of forefather of today's all-electronic digital computers is usually
awarded to ENIAC, which stood for Electronic Numerical Integrator and Calculator. ENIAC
was built at the University of Pennsylvania between 1943 and 1945 by two
professors, John Mauchly and the 24 year old J. Presper Eckert, who got
funding from the war department after promising they could build a machine that
would replace all the "computers", meaning the women who were employed
calculating the firing tables for the army's artillery guns. The day that Mauchly
and Eckert saw the first small piece of ENIAC work, the persons they ran to bring
to their lab to show off their progress were some of these female computers (one
of whom remarked, "I was astounded that it took all this equipment to multiply 5
by 1000").

ENIAC filled a 20 by 40 foot room, weighed 30 tons, and used more than 18,000
vacuum tubes. Like the Mark I, ENIAC employed paper card readers obtained
from IBM (these were a regular product for IBM, as they were a long established
part of business accounting machines, IBM's forte). When operating, the ENIAC
was silent but you knew it was on as the 18,000 vacuum tubes each generated
waste heat like a light bulb and all this heat (174,000 watts of heat) meant that
the computer could only be operated in a specially designed room with its own
heavy duty air conditioning system. Only the left half of ENIAC is visible in the
first picture, the right half was basically a mirror image of what's visible.
Two views of ENIAC: the "Electronic Numerical Integrator and
Calculator" (note that it wasn't even given the name of
computer since "computers" were people) [U.S. Army photo]

To reprogram the ENIAC you had to rearrange the patch cords that you can
observe on the left in the prior photo, and the settings of 3000 switches that you
can observe on the right. To program a modern computer, you type out a
program with statements like:

Circumference = 3.14 * diameter

To perform this computation on ENIAC you had to rearrange a large number of
patch cords and then locate three particular knobs on that vast wall of knobs
and set them to 3, 1, and 4.
Reprogramming ENIAC involved a hike [U.S. Army photo]

Once the army agreed to fund ENIAC, Mauchly and Eckert worked around the
clock, seven days a week, hoping to complete the machine in time to contribute
to the war. Their war-time effort was so intense that most days they ate all 3
meals in the company of the army Captain who was their liaison with their military
sponsors. They were allowed a small staff but soon observed that they could hire
only the most junior members of the University of Pennsylvania staff because the
more experienced faculty members knew that their proposed machine would
never work.

One of the most obvious problems was that the design would require 18,000
vacuum tubes to all work simultaneously. Vacuum tubes were so notoriously
unreliable that even twenty years later many neighborhood drug stores provided
a "tube tester" that allowed homeowners to bring in the vacuum tubes from their
television sets and determine which one of the tubes was causing their TV to fail.
And television sets only incorporated about 30 vacuum tubes. The device that
used the largest number of vacuum tubes was an electronic organ: it
incorporated 160 tubes. The idea that 18,000 tubes could function together was
considered so unlikely that the dominant vacuum tube supplier of the day, RCA,
refused to join the project (but did supply tubes in the interest of "wartime
cooperation"). Eckert solved the tube reliability problem through extremely careful
circuit design. He was so thorough that before he chose the type of wire cabling
he would employ in ENIAC he first ran an experiment where he starved lab rats
for a few days and then gave them samples of all the available types of cable to
determine which they least liked to eat. Here's a look at a small number of the
vacuum tubes in ENIAC:
Even with 18,000 vacuum tubes, ENIAC could only hold 20 numbers at a time.
However, thanks to the elimination of moving parts it ran much faster than the
Mark I: a multiplication that required 6 seconds on the Mark I could be performed
on ENIAC in 2.8 thousandths of a second. ENIAC's basic clock speed was
100,000 cycles per second. Today's home computers employ clock speeds of
1,000,000,000 cycles per second. Built with $500,000 from the U.S. Army,
ENIAC's first task was to compute whether or not it was possible to build a
hydrogen bomb (the atomic bomb was completed during the war and hence is
older than ENIAC). The very first problem run on ENIAC required only 20
seconds and was checked against an answer obtained after forty hours of work
with a mechanical calculator. After chewing on half a million punch cards for six
weeks, ENIAC did humanity no favor when it declared the hydrogen bomb
feasible. This first ENIAC program remains classified even today.

Once ENIAC was finished and proved worthy of the cost of its development, its
designers set about to eliminate the obnoxious fact that reprogramming the
computer required a physical modification of all the patch cords and switches. It
took days to change ENIAC's program. Eckert and Mauchly next teamed up
with the mathematician John von Neumann to design EDVAC, which pioneered
the stored program. Because he was the first to publish a description of this
new computer, von Neumann is often wrongly credited with the realization that
the program (that is, the sequence of computation steps) could be represented
electronically just as the data was. But this major breakthrough can be found in
Eckert's notes long before he ever started working with von Neumann. Eckert
was no slouch: while in high school Eckert had scored the second highest math
SAT score in the entire country.

After ENIAC and EDVAC came other computers with humorous names such as
ILLIAC, JOHNNIAC, and, of course, MANIAC. ILLIAC was built at the University
of Illinois at Champaign-Urbana, which is probably why the science fiction author
Arthur C. Clarke chose to have the HAL computer of his famous book "2001: A
Space Odyssey" born at Champaign-Urbana. Have you ever noticed that you can
shift each of the letters of IBM backward by one alphabet position and get HAL?
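
You can check the letter shift for yourself with a throwaway program (a small
curiosity, nothing more):

    #include <iostream>
    #include <string>

    int main() {
        std::string name = "IBM";
        for (char& c : name)
            c = c - 1;                 // shift each letter back one position
        std::cout << name << "\n";     // prints HAL
        return 0;
    }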

ILLIAC II built at the University of Illinois (it is a good thing


computers were one-of-a-kind creations in these days, can
you imagine being asked to duplicate this?)
HAL from the movie "2001: A Space Odyssey". Look at the previous
picture to understand why the movie makers in 1968
assumed computers of the future would be things you walk
into.

JOHNNIAC was a reference to John von Neumann, who was unquestionably a
genius. At age 6 he could tell jokes in classical Greek. By 8 he was doing
calculus. He could recite books he had read years earlier word for word. He
could read a page of the phone directory and then recite it backwards. On one
occasion it took von Neumann only 6 minutes to solve a problem in his head that
another professor had spent hours on using a mechanical calculator. Von
Neumann is perhaps most famous (infamous?) as the man who worked out the
complicated method needed to detonate an atomic bomb.

Once the computer's program was represented electronically, modifications to
that program could happen as fast as the computer could compute. In fact,
computer programs could now modify themselves while they ran (such programs
are called self-modifying programs). This introduced a new way for a program to
fail: faulty logic in the program could cause it to damage itself. This is one source
of the general protection fault famous in MS-DOS and the blue screen of
death famous in Windows.
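
Here is a deliberately faulty sketch of the kind of self-inflicted damage
described above (illustrative only; the array, the off-by-one loop bound, and
the "checksum" are all invented for the example, and on a modern machine the
program's instructions are usually protected, so it is the program's own data
that gets clobbered):

    #include <iostream>

    int main() {
        // One block of storage holding two logical things side by side:
        // slots 0-4 are "scores", slot 5 is a "checksum" the program relies on.
        int memory[6] = {10, 20, 30, 40, 50, 999};

        // Faulty logic: the loop bound should be 5, not 6, so the program
        // overwrites its own checksum -- it has damaged itself.
        for (int i = 0; i < 6; ++i)
            memory[i] = 0;

        std::cout << "checksum is now " << memory[5] << "\n";  // prints 0, not 999
        return 0;
    }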

Today, one of the most notable characteristics of a computer is the fact that its
ability to be reprogrammed allows it to contribute to a wide variety of endeavors,
such as the following completely unrelated fields:
• the creation of special effects for movies,
• the compression of music to allow more minutes of music to fit within the
limited memory of an MP3 player,
• the observation of car tire rotation to detect and prevent skids in an anti-
lock braking system (ABS),
• the analysis of the writing style in Shakespeare's work with the goal of
proving whether a single individual really was responsible for all these
pieces.

By the end of the 1950's computers were no longer one-of-a-kind hand built
devices owned only by universities and government research labs. Eckert and
Mauchly left the University of Pennsylvania over a dispute about who owned the
patents for their invention. They decided to set up their own company. Their first
product was the famous UNIVAC computer, the first commercial (that is, mass
produced) computer. In the 50's, UNIVAC (a contraction of "Universal Automatic
Computer") was the household word for "computer" just as "Kleenex" is for
"tissue". The first UNIVAC was sold, appropriately enough, to the Census
bureau. UNIVAC was also the first computer to employ magnetic tape. Many
people still confuse a picture of a reel-to-reel tape recorder with a picture of a
mainframe computer.
A reel-to-reel tape drive [photo courtesy of The Computer Museum]

ENIAC was unquestionably the origin of the U.S. commercial computer industry,
but its inventors, Mauchly and Eckert, never achieved fortune from their work and
their company fell into financial problems and was sold at a loss. By 1955 IBM
was selling more computers than UNIVAC and by the 1960's the group of eight
companies selling computers was known as "IBM and the seven dwarfs". IBM
grew so dominant that the federal government pursued anti-trust proceedings
against them from 1969 to 1982 (notice the pace of our country's legal system).
You might wonder what type of event is required to dislodge an industry
heavyweight. In IBM's case it was their own decision to hire an unknown but
aggressive firm called Microsoft to provide the software for their personal
computer (PC). This lucrative contract allowed Microsoft to grow so dominant
that by the year 2000 their market capitalization (the total value of their stock)
was twice that of IBM and a federal court found that they were running an
illegal monopoly.

If you learned computer programming in the 1970's, you dealt with what today
are called mainframe computers, such as the IBM 7094 (shown below), IBM
360, or IBM 370.

The IBM 7094, a typical mainframe computer [photo courtesy of IBM]


There were 2 ways to interact with a mainframe. The first was called time
sharing because the computer gave each user a tiny sliver of time in a round-
robin fashion. Perhaps 100 users would be simultaneously logged on, each
typing on a teletype such as the following:
The Teletype was the standard mechanism used to interact with a
time-sharing computer

A teletype was a motorized typewriter that could transmit your keystrokes to the
mainframe and then print the computer's response on its roll of paper. You typed
a single line of text, hit the carriage return button, and waited for the teletype to
begin noisily printing the computer's response (at a whopping 10 characters per
second). On the left-hand side of the teletype in the prior picture you can observe
a paper tape reader and writer (i.e., puncher). Here's a close-up of paper tape:

Three views of paper tape


After observing the holes in paper tape it is perhaps obvious why all computers
use binary numbers to represent data: a binary bit (that is, one digit of a binary
number) can only have the value of 0 or 1 (just as a decimal digit can only have
the value of 0 thru 9). Something which can only take two states is very easy to
manufacture, control, and sense. In the case of paper tape, the hole has either
been punched or it has not. Electro-mechanical computers such as the Mark I
used relays to represent data because a relay (which is just a motor driven
switch) can only be open or closed. The earliest all-electronic computers used
vacuum tubes as switches: they too were either open or closed. Transistors
replaced vacuum tubes because they too could act as switches but were smaller,
cheaper, and consumed less power.
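
The two-state idea carries straight through to how software handles data
today. A short sketch that reads a number the way a tape reader reads holes,
one two-state position at a time (the number 23 and the eight-bit width are
arbitrary choices for the example):

    #include <iostream>

    int main() {
        unsigned int value = 23;   // 23 is 10111 in binary

        // Test each position and report "punched" (1) or "not punched" (0),
        // most significant position first.
        std::cout << "binary digits of " << value << ": ";
        for (int bit = 7; bit >= 0; --bit)
            std::cout << ((value >> bit) & 1u);
        std::cout << "\n";          // prints 00010111
        return 0;
    }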

Paper tape has a long history as well. It was first used as an information storage
medium by Sir Charles Wheatstone, who used it to store Morse code that was
arriving via the newly invented telegraph (incidentally, Wheatstone was also the
inventor of the accordion).

The alternative to time sharing was batch mode processing, where the
computer gives its full attention to your program. In exchange for getting the
computer's full attention at run-time, you had to agree to prepare your program
off-line on a key punch machine which generated punch cards.
An IBM Key Punch machine which operates like a typewriter except it
produces punched cards rather than a printed sheet of paper

University students in the 1970's bought blank cards a linear foot at a time from
the university bookstore. Each card could hold only 1 program statement. To
submit your program to the mainframe, you placed your stack of cards in the
hopper of a card reader. Your program would be run whenever the computer
made it that far. You often submitted your deck and then went to dinner or to bed
and came back later hoping to see a successful printout showing your results.
Obviously, a program run in batch mode could not be interactive.

But things changed fast. By the 1990's a university student would typically own
his own computer and have exclusive use of it in his dorm room.
The original IBM Personal Computer (PC)

This transformation was a result of the invention of the microprocessor. A
microprocessor (uP) is a computer that is fabricated on an integrated circuit (IC).
Computers had been around for 20 years before the first microprocessor was
developed at Intel in 1971. The micro in the name microprocessor refers to the
physical size. Intel didn't invent the electronic computer. But they were the first to
succeed in cramming an entire computer on a single chip (IC). Intel was started
in 1968 and initially produced only semiconductor memory (Intel invented both
the DRAM and the EPROM, two memory technologies that are still going strong
today). In 1969 they were approached by Busicom, a Japanese manufacturer of
high performance calculators (these were typewriter-sized units; the first
shirt-pocket-sized scientific calculator was the Hewlett-Packard HP-35,
introduced in 1972). Busicom wanted Intel to produce 12 custom calculator chips: one chip
dedicated to the keyboard, another chip dedicated to the display, another for the
printer, etc. But integrated circuits were (and are) expensive to design and this
approach would have required Busicom to bear the full expense of developing 12
new chips since these 12 chips would only be of use to them.
A typical Busicom desk calculator

But a new Intel employee (Ted Hoff) convinced Busicom to instead accept a
general purpose computer chip which, like all computers, could be
reprogrammed for many different tasks (like controlling a keyboard, a display, a
printer, etc.). Intel argued that since the chip could be reprogrammed for
alternative purposes, the cost of developing it could be spread out over more
users and hence would be less expensive to each user. The general purpose
computer is adapted to each new purpose by writing a program which is a
sequence of instructions stored in memory (which happened to be Intel's forte).
Busicom agreed to pay Intel to design a general purpose chip and received a
price break in exchange for allowing Intel to sell the resulting chip to others. But
development of the chip took longer than expected and Busicom pulled out of the
project. Intel knew it had a winner by that point and gladly refunded all of
Busicom's investment just to gain sole rights to the device which they finished on
their own.

Thus became the Intel 4004, the first microprocessor (uP). The 4004 consisted of
2300 transistors and was clocked at 108 kHz (i.e., 108,000 times per second).
Compare this to the 42 million transistors and the 2 GHz clock rate (i.e.,
2,000,000,000 times per second) used in a Pentium 4. One of Intel's 4004 chips
still functions aboard the Pioneer 10 spacecraft, which is now among the
man-made objects farthest from the earth. Curiously, Busicom went bankrupt and never
ended up using the ground-breaking microprocessor.

Intel followed the 4004 with the 8008 and 8080. Intel priced the 8080
microprocessor at $360 as an insult to IBM's famous 360 mainframe, which
cost millions of dollars. The 8080 was employed in the MITS Altair
computer, which was the world's first personal computer (PC). It was personal
all right: you had to build it yourself from a kit of parts that arrived in the mail. This
kit didn't even include an enclosure and that is the reason the unit shown below
doesn't match the picture on the magazine cover.
The Altair 8800, the first PC

A Harvard freshman by the name of Bill Gates decided to drop out of college so
he could concentrate all his time on writing programs for this computer. This early
experience put Bill Gates in the right place at the right time once IBM decided to
standardize on the Intel microprocessors for their line of PCs in 1981. The Intel
Pentium 4 used in today's PCs is still compatible with the Intel 8088 used in
IBM's first PC.

If you've enjoyed this history of computers, I encourage you to try your own hand
at programming a computer. That is the only way you will really come to
understand the concepts of looping, subroutines, high and low-level languages,
bits and bytes, etc. I have written a number of Windows programs which teach
computer programming in a fun, visually-engaging setting. I start my students on
a programmable RPN calculator where we learn about programs, statements,
program and data memory, subroutines, logic and syntax errors, stacks, etc.
Then we move on to an 8051 microprocessor (which happens to be the most
widespread microprocessor on earth) where we learn about microprocessors,
bits and bytes, assembly language, addressing modes, etc. Finally, we graduate
to the most powerful language in use today: C++ (pronounced "C plus plus").
These Windows programs are accompanied by a book's worth of on-line
documentation which serves as a self-study guide, allowing you to teach yourself
computer programming! The home page (URL) for this collection of software is
http://www.computersciencelab.com/.
