FIVE GENERATIONS OF COMPUTERS

1. First Generation (1940-1956) Vacuum Tubes

The first computers used vacuum tubes for circuitry and magnetic drums for memory,
and were often enormous, taking up entire rooms. They were very expensive to operate
and in addition to using a great deal of electricity, generated a lot of heat, which was
often the cause of malfunctions.

First generation computers relied on machine language, the lowest-level programming
language understood by computers, to perform operations, and they could only solve
one problem at a time. Input was based on punched cards and paper tape, and output
was displayed on printouts.

The UNIVAC and ENIAC computers are examples of first-generation computing devices.
The UNIVAC was the first commercially available computer; it was delivered to its first
client, the U.S. Census Bureau, in 1951.

2. Second Generation (1956-1963) Transistors

Transistors replaced vacuum tubes and ushered in the second generation of computers.
The transistor was invented in 1947 but did not see widespread use in computers until
the late 1950s. The transistor was far superior to the vacuum tube, allowing computers
to become smaller, faster, cheaper, more energy-efficient and more reliable than their
first-generation predecessors. Though the transistor still generated a great deal of heat
that subjected the computer to damage, it was a vast improvement over the vacuum
tube. Second-generation computers still relied on punched cards for input and printouts
for output.

Second-generation computers moved from cryptic binary machine language to symbolic,
or assembly, languages, which allowed programmers to specify instructions in words.
High-level programming languages were also being developed at this time, such as early
versions of COBOL and FORTRAN. These were also the first computers that stored their
instructions in their memory, which moved from a magnetic drum to magnetic core
technology.

The first computers of this generation were developed for the atomic energy industry.

3. Third Generation (1964-1971) Integrated Circuits

The development of the integrated circuit was the hallmark of the third generation of
computers. Transistors were miniaturized and placed on silicon chips, called
semiconductors, which drastically increased the speed and efficiency of computers.

Instead of punched cards and printouts, users interacted with third generation
computers through keyboards and monitors and interfaced with an operating system,
which allowed the device to run many different applications at one time with a central
program that monitored the memory. Computers for the first time became accessible to
a mass audience because they were smaller and cheaper than their predecessors.

4. Fourth Generation (1971-Present) Microprocessors

The microprocessor brought the fourth generation of computers, as thousands of
integrated circuits were built onto a single silicon chip. What in the first generation filled
an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in
1971, located all the components of the computer—from the central processing unit
and memory to input/output controls—on a single chip.

In 1981 IBM introduced its first computer for the home user, and in 1984 Apple
introduced the Macintosh. Microprocessors also moved out of the realm of desktop
computers and into many areas of life as more and more everyday products began to
use microprocessors.
As these small computers became more powerful, they could be linked together to form
networks, which eventually led to the development of the Internet. Fourth generation
computers also saw the development of GUIs, the mouse and handheld devices.

5. Fifth Generation (Present and Beyond) Artificial Intelligence

Fifth generation computing devices, based on artificial intelligence, are still in
development, though there are some applications, such as voice recognition, that are
being used today. The use of parallel processing and superconductors is helping to make
artificial intelligence a reality. Quantum computation and molecular and
nanotechnology will radically change the face of computers in years to come. The goal
of fifth-generation computing is to develop devices that respond to natural language
input and are capable of learning and self-organization.

INTRODUCTION TO PROTOCOLS

In a diplomatic context the word protocol refers to a diplomatic document or a rule,
guideline, etc., which guides diplomatic behavior. Synonyms are procedure and policy.
While there is no generally accepted formal definition of "protocol" in computer
science, an informal definition, based on the previous, could be "a description of a set of
procedures to be followed when communicating". In computer science the word
algorithm is a synonym for the word procedure, so a protocol is to communications
what an algorithm is to computations.

A protocol describes the syntax, semantics, and synchronization of communication. A
programming language describes the same for computations, so there is a close analogy
between protocols and programming languages: protocols are to communications what
programming languages are to computations.
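
To make the analogy concrete, the sketch below defines a deliberately tiny, invented
request/response protocol in Python; it is not any real standard, and the verb names and
message format are assumptions made up for illustration. The syntax is one text line per
message, the semantics are a single "TIME" verb, and the synchronization is a strict
request-then-reply exchange.

```python
# A toy, invented protocol for illustration only (not a real standard).
# Syntax: each message is a single UTF-8 text line, "VERB argument\n".
# Semantics: the verb "TIME" asks for the current time; the reply is "OK <timestamp>".
# Synchronization: the client sends exactly one request, then waits for one reply.
import datetime
import socket

def serve_one(conn: socket.socket) -> None:
    """Read one request line from the connection and send back one reply line."""
    request = conn.makefile("r", encoding="utf-8").readline().strip()
    verb, _, _argument = request.partition(" ")
    if verb == "TIME":
        reply = "OK " + datetime.datetime.now().isoformat()
    else:
        reply = "ERR unknown verb"
    conn.sendall((reply + "\n").encode("utf-8"))

# socketpair() returns two already-connected sockets, so the example runs
# in a single process without any network setup.
client, server = socket.socketpair()
client.sendall(b"TIME now\n")     # client side: one request
serve_one(server)                 # server side: parse it and answer
print(client.makefile("r", encoding="utf-8").readline().strip())
client.close()
server.close()
```
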
Diplomatic documents build on each other, thus creating document-trees. The way the
sub-documents making up a document-tree are written has an impact on the complexity
of the tree. By imposing a development model on the documents, overall readability and
complexity can be reduced. An effective model to this end is the layering scheme or
model. In a layering scheme the documents making up the tree are thought to belong to
classes, called layers. The distance of a sub-document to its root-document is called its
level. The level of a sub-document determines the class it belongs to. The sub-
documents belonging to a class all provide similar functionality and, when form follows
function, have similar form.

The communications protocols in use on the Internet are designed to function in very
complex and diverse settings, so they tend to be very complex. For this reason
communications protocols are also structured using a layering scheme as a basis. The
layering scheme in use on the Internet is called the TCP/IP model. The actual protocols
are collectively called the Internet protocol suite. The group responsible for this design
is called the Internet Engineering Task Force (IETF).
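
As a rough sketch of how those layers divide the work in practice, the Python snippet
below sends a plain HTTP request over a TCP socket: the application-layer message is
composed by hand, the transport layer is the TCP socket, and the internet and link layers
are handled by the operating system. It assumes outbound network access and uses
example.com only as a stand-in host.

```python
# A sketch mapping one web request onto the TCP/IP layering scheme.
# Assumes outbound network access; example.com is just a stand-in host.
import socket

HOST = "example.com"

# Application layer: compose an HTTP/1.1 request as plain text.
request = ("GET / HTTP/1.1\r\n"
           "Host: " + HOST + "\r\n"
           "Connection: close\r\n\r\n").encode("ascii")

# Transport layer: a TCP socket provides reliable, ordered byte delivery.
with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request)
    # Internet and link layers (IP, Ethernet, Wi-Fi, ...) are handled by the
    # operating system beneath this socket interface.
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())   # the HTTP status line
```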

Obviously the number of layers of a layering scheme and the way the layers are defined
can have a drastic impact on the protocols involved. This is where the analogies come
into play for the TCP/IP model, because the designers of TCP/IP employed the same
techniques used to conquer the complexity of programming language compilers (design
by analogy) in the implementation of its protocols and its layering scheme.[3]

Like diplomatic protocols, communications protocols have to be agreed upon by the
parties involved. To reach agreement a protocol is developed into a technical standard.
International standards are developed by the International Organization for
Standardization (ISO).

HTML
HTML, which stands for HyperText Markup Language, is the predominant markup
language for web pages. A markup language is a set of markup tags, and HTML uses
markup tags to describe web pages.

HTML is written in the form of HTML elements consisting of "tags" surrounded by angle
brackets (like <html>) within the web page content. HTML tags normally come in pairs
like <b> and </b>. The first tag in a pair is the start tag, the second tag is the end tag
(they are also called opening tags and closing tags).
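
As a small illustration of this pairing, the sketch below uses Python's standard
html.parser to walk a fragment of markup, reporting each start tag, end tag and piece of
content separately, much as any HTML consumer must; the sample markup is invented.

```python
# A sketch using Python's standard html.parser to show how HTML is read as
# pairs of start and end tags surrounding content; the markup is made up.
from html.parser import HTMLParser

class TagLogger(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print("start tag:", tag)

    def handle_endtag(self, tag):
        print("end tag:  ", tag)

    def handle_data(self, data):
        if data.strip():
            print("content:  ", data.strip())

page = "<html><body><b>Hello</b> world</body></html>"
TagLogger().feed(page)
# Prints the start tags, the text content, and the matching end tags in order.
```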

The purpose of a web browser is to read HTML documents and display them as web
pages. The browser does not display the HTML tags, but uses the tags to interpret the
content of the page.

HTML elements form the building blocks of all websites. HTML allows images and
objects to be embedded and can be used to create interactive forms. It provides a
means to create structured documents by denoting structural semantics for text such as
headings, paragraphs, lists, links, quotes and other items. It can embed scripts in
languages such as JavaScript, which affect the behavior of HTML web pages.

HTML can also be used to include Cascading Style Sheets (CSS) to define the appearance
and layout of text and other material. The W3C, maintainer of both HTML and CSS
standards, encourages the use of CSS over explicit presentational markup.

HTTP
HTTP functions as a request-response protocol in the client-server computing model. In
HTTP, a web browser, for example, acts as a client, while an application running on a
computer hosting a web site functions as a server. The client submits an HTTP request
message to the server. The server, which stores or generates resources such as HTML
files and images, or performs other functions on behalf of the client, returns a response
message to the client. A response
contains completion status information about the request and may contain any content
requested by the client in its message body.
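
A minimal sketch of this request/response exchange, using Python's standard http.client;
example.com is a stand-in server and the path is invented.

```python
# A sketch of one HTTP request/response transaction with the standard library.
# example.com is a stand-in host and /index.html an invented path.
import http.client

conn = http.client.HTTPConnection("example.com", 80)
conn.request("GET", "/index.html")         # the client submits a request message
response = conn.getresponse()              # the server returns a response message
print(response.status, response.reason)    # completion status, e.g. 200 OK
body = response.read()                     # any requested content, in the message body
print(len(body), "bytes in the body")
conn.close()
```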

A client is often referred to as a user agent (UA). A web crawler (spider) is another
example of a common type of client or user agent.

The HTTP protocol is designed to permit intermediate network elements to improve or
enable communications between clients and servers. High-traffic websites often benefit
from web cache servers that deliver content on behalf of the original, so-called origin
server to improve response time. HTTP proxy servers at network boundaries facilitate
communication when clients without a globally routable address are located in private
networks by relaying the requests and responses between clients and servers.

HTTP is an Application Layer protocol designed within the framework of the Internet
Protocol Suite. The protocol definitions presume a reliable Transport Layer protocol for
host-to-host data transfer.[2] The Transmission Control Protocol (TCP) is the dominant
protocol in use for this purpose. However, HTTP has found application even with
unreliable protocols, such as the User Datagram Protocol (UDP) in methods such as the
Simple Service Discovery Protocol (SSDP).

HTTP Resources are identified and located on the network by Uniform Resource
Identifiers (URIs)—or, more specifically, Uniform Resource Locators (URLs)—using the
http or https URI schemes. URIs and the Hypertext Markup Language (HTML) form a
system of inter-linked resources, called hypertext documents, on the Internet, which led
to the establishment of the World Wide Web in 1990 by English physicist Tim Berners-
Lee.

The original version of HTTP (HTTP/1.0) was revised in HTTP/1.1. HTTP/1.0 uses a
separate connection to the same server for every request-response transaction, while
HTTP/1.1 can reuse a connection multiple times, for example to download the images for
a just-delivered page. Hence HTTP/1.1 communications experience less latency, as the
establishment of TCP connections presents considerable overhead.
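
A sketch of that reuse with Python's http.client: both requests below travel over the
same TCP connection, provided the server keeps it open. The host and paths are
placeholders.

```python
# A sketch of HTTP/1.1 connection reuse: two request/response transactions
# share one TCP connection. Host and paths are placeholders; reuse works
# only if the server does not close the connection after the first reply.
import http.client

conn = http.client.HTTPConnection("example.com")
for path in ("/", "/logo.png"):      # e.g. a page, then an image on it
    conn.request("GET", path)
    response = conn.getresponse()
    response.read()                  # consume the body before reusing the connection
    print(path, response.status)
conn.close()
```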

ROUTER

A router is an electronic device that receives and forwards signals on a computer
network. The router determines where the signals have to go. Each unit of data it
receives is called a data packet, and the packet contains address information that the
router uses to forward it to the appropriate destination.

When multiple routers are used in a large collection of interconnected networks, the
routers exchange information, so that each router can build up a reference table
showing the preferred paths between any two systems on the interconnected networks.

A router can have many interface connections, for different physical types of network
(such as copper cables, fiber optic, or wireless transmission). It may contain firmware for
different networking protocol standards. Each network interface device is specialized to
convert computer signals from one protocol standard to another.

Routers can be used to connect two or more logical subnets, each having a different
network address. The subnet addresses in the router do not necessarily map directly to
the physical interfaces of the router.

The term "layer 3 switching" is often used interchangeably with the term "routing". The
term switching is generally used to refer to data forwarding between two network
devices with the same network address. This is also called layer 2 switching or LAN
switching.

Conceptually, a router operates in two operational planes (or sub-systems):

• Control plane: where a router builds an address table (called a routing table) that
  records where a packet should be forwarded, and through which physical interface.
  It does this either by using statically configured statements (called static routes) or
  by exchanging information with other routers in the network through a dynamic
  routing protocol.
• Forwarding plane: where the router actually forwards traffic (called data packets in
  Internet Protocol terminology) from incoming interfaces to outgoing interfaces,
  based on the destination addresses contained in the packet headers. It does this by
  following rules derived from the routing table built up in the control plane; a small
  sketch of such a lookup follows this list.
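
A simplified sketch of the forwarding-plane lookup mentioned above, using Python's
ipaddress module; the routing-table entries and interface names are invented, and a real
router would perform this lookup in optimized hardware or kernel code.

```python
# A simplified sketch of a forwarding decision: choose the routing-table entry
# with the longest prefix that matches the packet's destination address.
# Table entries and interface names are invented for illustration.
import ipaddress

routing_table = [
    (ipaddress.ip_network("0.0.0.0/0"),   "eth0"),   # default route
    (ipaddress.ip_network("10.0.0.0/8"),  "eth1"),
    (ipaddress.ip_network("10.1.2.0/24"), "eth2"),
]

def outgoing_interface(destination: str) -> str:
    """Return the interface for the most specific (longest-prefix) match."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, iface) for net, iface in routing_table if addr in net]
    return max(matches, key=lambda entry: entry[0].prefixlen)[1]

print(outgoing_interface("10.1.2.7"))   # eth2 (most specific match)
print(outgoing_interface("10.9.9.9"))   # eth1
print(outgoing_interface("8.8.8.8"))    # eth0 (default route)
```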

FTP

File Transfer Protocol (FTP) is a standard network protocol used to copy a file from one
host to another over a TCP/IP-based network, such as the Internet. FTP is built on a
client-server architecture and utilizes separate control and data connections between
the client and server.[1] FTP users may authenticate themselves using a clear-text sign-in
protocol but can connect anonymously if the server is configured to allow it.
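
A minimal sketch of such a session with Python's standard ftplib; the host name and file
name are placeholders, and the anonymous login only succeeds if the server allows it.

```python
# A sketch of an FTP session: control connection, anonymous sign-in, and a
# file transfer over a separate data connection. Host and file names are
# placeholders for illustration.
from ftplib import FTP

ftp = FTP("ftp.example.com")     # opens the control connection (port 21)
ftp.login()                      # anonymous, clear-text sign-in
ftp.retrlines("LIST")            # directory listing arrives over a data connection
with open("readme.txt", "wb") as local_file:
    ftp.retrbinary("RETR readme.txt", local_file.write)   # download a file
ftp.quit()
```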

The first FTP client applications were interactive command-line tools, implementing
standard commands and syntax. Graphical user interface clients have since been
developed for many of the popular desktop operating systems in use today.
