The first computers used vacuum tubes for circuitry and magnetic drums for memory,
and were often enormous, taking up entire rooms. They were very expensive to operate
and, in addition to consuming a great deal of electricity, generated a lot of heat, which
was often the cause of malfunctions.
The UNIVAC and ENIAC computers are examples of first-generation computing devices.
The UNIVAC was the first commercially available computer in the United States; its first
unit was delivered to the U.S. Census Bureau in 1951.
Transistors replaced vacuum tubes and ushered in the second generation of computers.
The transistor was invented in 1947 but did not see widespread use in computers until
the late 1950s. The transistor was far superior to the vacuum tube, allowing computers
to become smaller, faster, cheaper, more energy-efficient and more reliable than their
first-generation predecessors. Though the transistor still generated a great deal of heat,
which could damage the computer, it was a vast improvement over the vacuum tube.
Second-generation computers still relied on punched cards for input and printouts
for output.
The first computers of this generation were developed for the atomic energy industry.
The development of the integrated circuit was the hallmark of the third generation of
computers. Transistors were miniaturized and placed on silicon chips, called
semiconductors, which drastically increased the speed and efficiency of computers.
Instead of punched cards and printouts, users interacted with third-generation
computers through keyboards and monitors and interfaced with an operating system,
which allowed the device to run many different applications at one time with a central
program that monitored the memory. Computers for the first time became accessible to
a mass audience because they were smaller and cheaper than their predecessors.
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple
introduced the Macintosh. Microprocessors also moved out of the realm of desktop
computers and into many areas of life as more and more everyday products began to
use microprocessors.
As these small computers became more powerful, they could be linked together to form
networks, which eventually led to the development of the Internet. Fourth-generation
computers also saw the development of graphical user interfaces (GUIs), the mouse, and
handheld devices.
INTRODUCTION TO PROTOCOLS
The number of layers in a layering scheme, and the way those layers are defined, can
have a drastic impact on the protocols involved. This is where design by analogy comes
into play for the TCP/IP model: its designers applied the same techniques used to
conquer the complexity of programming-language compilers to the implementation of
its protocols and its layering scheme.[3]
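
As a point of reference, the TCP/IP model is commonly summarized as four layers, each
relying on the services of the layer below it. The protocols listed here are illustrative
examples, not an exhaustive inventory:

    Application layer  -  HTTP, FTP, SMTP, DNS
    Transport layer    -  TCP, UDP
    Internet layer     -  IP, ICMP
    Link layer         -  Ethernet, Wi-Fi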
HTML
HTML, which stands for HyperText Markup Language, is the predominant markup
language for web pages. A markup language is a set of markup tags, and HTML uses
markup tags to describe web pages.
HTML is written in the form of HTML elements consisting of "tags" surrounded by angle
brackets (like <html>) within the web page content. HTML tags normally come in pairs
like <b> and </b>. The first tag in a pair is the start tag, the second tag is the end tag
(they are also called opening tags and closing tags).
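For illustration, a minimal HTML document might look like the following; the title and
text are placeholders:

    <!DOCTYPE html>
    <html>
      <head>
        <title>Example Page</title>
      </head>
      <body>
        <h1>A Heading</h1>
        <p>This is a paragraph with a <b>bold</b> word.</p>
      </body>
    </html>

Here <html> and </html> form one start-tag/end-tag pair enclosing the whole document,
and each nested element follows the same pattern.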
The purpose of a web browser is to read HTML documents and display them as web
pages. The browser does not display the HTML tags, but uses the tags to interpret the
content of the page.
HTML elements form the building blocks of all websites. HTML allows images and
objects to be embedded and can be used to create interactive forms. It provides a
means to create structured documents by denoting structural semantics for text such as
headings, paragraphs, lists, links, quotes, and other items. It can embed scripts in
languages such as JavaScript which affect the behavior of HTML web pages.
HTML can also be used to include Cascading Style Sheets (CSS) to define the appearance
and layout of text and other material. The W3C, maintainer of both HTML and CSS
standards, encourages the use of CSS over explicit presentational markup.
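
As a minimal sketch, a style sheet can be included through a <style> element in the
document head; the rule shown is an arbitrary example:

    <head>
      <style>
        p { color: navy; font-family: sans-serif; }
      </style>
    </head>

Keeping such rules in one place, rather than scattering presentational markup through
the body, is the separation of content and presentation that the W3C recommends.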
HTTP
HTTP functions as a request-response protocol in the client-server computing model. In
HTTP, a web browser, for example, acts as a client, while an application running on a
computer hosting a web site functions as a server. The client submits an HTTP request
message to the server. The server, which stores or generates resources such as HTML
files and images, or performs other functions on behalf of the client, returns a response
message. A response contains completion status information about the request and may
carry any content requested by the client in its message body.
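
For example, a minimal request-response exchange might look like the following;
www.example.com is a placeholder host and the header values are illustrative:

    Client request:
        GET /index.html HTTP/1.1
        Host: www.example.com

    Server response:
        HTTP/1.1 200 OK
        Content-Type: text/html
        Content-Length: 138

        <html> ... page content ... </html>

The status line "200 OK" carries the completion status, and the message body carries
the requested content.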
A client is often referred to as a user agent (UA). A web crawler (spider) is another
example of a common type of client or user agent.
HTTP is an Application Layer protocol designed within the framework of the Internet
Protocol Suite. The protocol definitions presume a reliable Transport Layer protocol for
host-to-host data transfer.[2] The Transmission Control Protocol (TCP) is the dominant
protocol in use for this purpose. However, HTTP has found application even with
unreliable protocols, such as the User Datagram Protocol (UDP) in methods such as the
Simple Service Discovery Protocol (SSDP).
HTTP resources are identified and located on the network by Uniform Resource
Identifiers (URIs), or, more specifically, Uniform Resource Locators (URLs), using the
http or https URI schemes. URIs and the Hypertext Markup Language (HTML) form a
system of inter-linked resources on the Internet, called hypertext documents, which led
to the establishment of the World Wide Web in 1990 by English computer scientist Tim
Berners-Lee.
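
For instance, a URL under the http scheme breaks down as follows; the host and path
are placeholders:

    http://www.example.com/path/page.html

    scheme:  http
    host:    www.example.com
    path:    /path/page.html

The scheme selects the protocol, the host locates the server on the network, and the
path identifies the resource on that server.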
The original version of HTTP (HTTP/1.0) was revised in HTTP/1.1. HTTP/1.0 uses a
separate connection to the same server for every request-response transaction, while
HTTP/1.1 can reuse a connection multiple times, for instance to download images for a
just-delivered page. Hence HTTP/1.1 communications experience less latency, because
establishing a TCP connection presents considerable overhead.
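
As an illustration, fetching a page and one embedded image under HTTP/1.0 would
typically open two TCP connections, one per request; under HTTP/1.1 both requests can
travel over a single connection, saving one connection setup (hosts and paths are
placeholders):

    Single HTTP/1.1 connection:
        GET /page.html HTTP/1.1
        Host: www.example.com

        GET /logo.png HTTP/1.1
        Host: www.example.com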
ROUTER
When multiple routers are used in a large collection of interconnected networks, the
routers exchange information, so that each router can build up a reference table
showing the preferred paths between any two systems on the interconnected networks.
A router can have many interface connections, for different physical types of network
(such as copper cables, fiber optic, or wireless transmission). It may contain firmware for
different networking protocol standards. Each network interface device is specialized to
convert computer signals from one protocol standard to another.
Routers can be used to connect two or more logical subnets, each having a different
network address. The subnet addresses configured on the router do not necessarily map
one-to-one onto its physical interfaces. The term "layer 3 switching" is often used
interchangeably with the term "routing". The term switching is generally used to refer to
data forwarding between two network devices that share the same network address; this
is also called layer 2 switching or LAN switching.
Control plane: the router builds an address table (called a routing table) that records
where a packet should be forwarded, and through which physical interface. It does this
either by using statically configured statements (called static routes) or by exchanging
information with other routers in the network through a dynamic routing protocol.
Forwarding plane: the router actually forwards traffic (called data packets in Internet
Protocol terminology) from incoming interfaces to the outgoing interfaces that
correspond to the destination addresses carried in the packet headers. It performs this
function by following rules derived from the routing table recorded by the control
plane.
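
A simplified routing table might look like the following; all addresses are illustrative
private-range examples:

    Destination      Gateway          Interface  Source
    192.168.1.0/24   (direct)         eth0       connected network
    10.0.0.0/8       192.168.1.254    eth0       static route
    0.0.0.0/0        192.168.1.1      eth0       default route

A packet addressed to 10.1.2.3 matches the 10.0.0.0/8 entry and is forwarded to
192.168.1.254 through eth0. The control plane builds this table; the forwarding plane
consults it for every packet.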
FTP
File Transfer Protocol (FTP) is a standard network protocol used to copy a file from one
host to another over a TCP/IP-based network, such as the Internet. FTP is built on a
client-server architecture and utilizes separate control and data connections between
the client and server.[1] FTP users may authenticate themselves using a clear-text sign-in
protocol but can connect anonymously if the server is configured to allow it.
The first FTP client applications were interactive command-line tools, implementing
standard commands and syntax. Graphical user interface clients have since been
developed for many of the popular desktop operating systems in use today.
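
A command-line session might look roughly like the following transcript;
ftp.example.com is a placeholder server, and the numeric reply codes are defined by the
FTP standard:

    $ ftp ftp.example.com
    Connected to ftp.example.com.
    Name: anonymous
    331 Please specify the password.
    Password: guest@example.com
    230 Login successful.
    ftp> get readme.txt
    150 Opening data connection for readme.txt.
    226 Transfer complete.
    ftp> quit
    221 Goodbye.

The commands and numbered replies travel over the control connection, while the file
itself moves over the separate data connection described above.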