First Generation (1940-1956) Vacuum Tubes
The first computers used vacuum tubes for circuitry and magnetic drums for memory,
and were often enormous, taking up entire rooms. They were very expensive to
operate and in addition to using a great deal of electricity, generated a lot of
heat, which was often the cause of malfunctions.

First generation computers relied on machine language, the lowest-level programming
language understood by computers, to perform operations, and they could only solve
one problem at a time. Input was based on punched cards and paper tape, and output
was displayed on printouts.

The UNIVAC and ENIAC computers are examples of first-generation computing
devices. The UNIVAC was the first commercially produced computer; it was delivered
to its first client, the U.S. Census Bureau, in 1951.

Second Generation (1956-1963) Transistors


Transistors replaced vacuum tubes and ushered in the second generation of computers. The
transistor was invented in 1947 but did not see widespread use in computers until the late 1950s.
The transistor was far superior to the vacuum tube, allowing computers to become smaller,
faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors.
Though the transistor still generated a great deal of heat, which could damage the computer,
it was a vast improvement over the vacuum tube. Second-generation computers still relied on
punched cards for input and printouts for output.

Second-generation computers moved from cryptic binary machine language to symbolic, or
assembly, languages, which allowed programmers to specify instructions in words. High-level
programming languages were also being developed at this time, such as early versions of
COBOL and FORTRAN. These were also the first computers that stored their instructions in
their memory, which moved from a magnetic drum to magnetic core technology.

The first computers of this generation were developed for the atomic energy industry.

Third Generation (1964-1971) Integrated Circuits
The development of the integrated circuit was the hallmark of the third generation of computers.
Transistors were miniaturized and placed on silicon chips, called semiconductors, which
drastically increased the speed and efficiency of computers.

Instead of punched cards and printouts, users interacted with third generation computers through
keyboards and monitors and interfaced with an operating system, which allowed the device to
run many different applications at one time with a central program that monitored the memory.
Computers for the first time became accessible to a mass audience because they were smaller
and cheaper than their predecessors.

Fourth Generation (1971-Present) Microprocessors
The microprocessor brought the fourth generation of computers, as thousands of integrated
circuits were built onto a single silicon chip. What in the first generation filled an entire room
could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the
components of the computer—from the central processing unit and memory to input/output
controls—on a single chip.

In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the
Macintosh. Microprocessors also moved out of the realm of desktop computers and into many
areas of life as more and more everyday products began to use microprocessors.

As these small computers became more powerful, they could be linked together to form
networks, which eventually led to the development of the Internet. Fourth generation computers
also saw the development of GUIs, the mouse and handheld devices.

Fifth Generation (Present and Beyond) Artificial Intelligence
Fifth generation computing devices, based on artificial intelligence, are still in development,
though there are some applications, such as voice recognition, that are being used today. The use
of parallel processing and superconductors is helping to make artificial intelligence a reality.
Quantum computation and molecular and nanotechnology will radically change the face of
computers in years to come. The goal of fifth-generation computing is to develop devices that
respond to natural language input and are capable of learning and self-organization.
Introduction to Data Communications

4. Data Communications

Data Communications is the transfer of data or information between a source and a receiver. The
source transmits the data and the receiver receives it. The actual generation of the information is
not part of Data Communications nor is the resulting action of the information at the receiver.
Data Communications is concerned with the transfer of data, the method of transfer and the
preservation of the data during the transfer process.

In Local Area Networks, we are interested in "connectivity": connecting computers together to
share resources. Even though the computers can have different disk operating systems,
languages, cabling and locations, they can still communicate with one another and share resources.

The purpose of Data Communications is to provide the rules and regulations that allow
computers with different disk operating systems, languages, cabling and locations to share
resources. The rules and regulations are called protocols and standards in Data Communications.

Source: It is the transmitter of data. Examples are:

 Terminal,
 Computer,
 Mainframe

Medium: The communications stream through which the data is being transmitted. Examples
are:

 Cabling,
 Microwave,
 Fibre optics,
 Radio Frequencies (RF),
 Infrared Wireless

Receiver: The receiver of the data transmitted. Examples are:

 Printer,
 Terminal,
 Mainframe,
 Computer

DCE: The interface between the Source & the Medium, and the Medium & the Receiver is called
the DCE (Data Communication Equipment) and is a physical piece of equipment.

DTE: Data Terminal Equipment is the Telecommunication name given to the Source and
Receiver's equipment.

HTTP (Hyper Text Transfer Protocol)

Hypertext Transfer Protocol is a method of transmitting information on the web. HTTP is
used to publish and retrieve hypertext pages on the World Wide Web. HTTP is the language
used to communicate between the browser and the web server. The information that is
transferred using HTTP can be plain text, audio, video, images, and hypertext. HTTP is a
request/response protocol between the client and server. Many proxies, tunnels, and gateways
may exist between the web browser (client) and the web server. An HTTP client
initiates a request by establishing a TCP connection to a particular port on the remote host
(typically 80 or 8080). An HTTP server listens on that port and receives the request message
from the client. Upon receiving the request, the server sends back a response such as 200 OK
together with the requested content, or an error message.
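
As a rough illustration of this request/response cycle, the short Java sketch below opens a TCP connection to port 80, sends a minimal GET request and prints the server's reply. The host name example.com and the headers used are illustrative assumptions, not part of the text above.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    // Sketch of an HTTP GET exchange over a raw TCP socket.
    // The host "example.com" and the headers below are illustrative assumptions.
    public class HttpGetSketch {
        public static void main(String[] args) throws IOException {
            try (Socket socket = new Socket("example.com", 80);
                 PrintWriter out = new PrintWriter(socket.getOutputStream());
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()))) {
                // The request is plain text: a request line, headers, then a blank line.
                out.print("GET / HTTP/1.1\r\n");
                out.print("Host: example.com\r\n");
                out.print("Connection: close\r\n");
                out.print("\r\n");
                out.flush();

                // The response begins with a status line such as "HTTP/1.1 200 OK",
                // followed by headers and the message body.
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }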

POP3 (Post Office Protocol)

In computing, e-mail clients (such as MS Outlook, Outlook Express and Thunderbird) use the
Post Office Protocol to retrieve emails from the remote server over a TCP/IP connection. Nearly
all users of Internet service providers use POP3 in their email clients to retrieve emails from the
email servers, and most email applications support the POP protocol.

SMTP (Simple Mail Transfer Protocol)

Simple Mail Transfer Protocol is a protocol used to send email messages between servers.
Most email systems and email clients use SMTP to send messages from one server to another.
In configuring an email application, you need to configure the POP, SMTP and IMAP protocols
in your email software. SMTP is a simple, text-based protocol: one or more recipients of the
message are specified and then the message is transferred. An SMTP connection is easily tested
with the Telnet utility. SMTP uses TCP port 25 by default.
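
The sketch below illustrates this text-based dialogue in Java: it connects to port 25, issues the HELO, MAIL FROM, RCPT TO and DATA commands and prints each reply. The server name, addresses and message are illustrative assumptions; a real server will usually require authentication and may reject unknown senders.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    // Sketch of the text-based SMTP dialogue on TCP port 25.
    // The server name, addresses and message text are illustrative assumptions.
    public class SmtpSketch {
        public static void main(String[] args) throws IOException {
            try (Socket socket = new Socket("mail.example.com", 25);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()));
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                System.out.println(in.readLine());                 // e.g. "220 ... ESMTP"
                send(out, in, "HELO client.example.com");          // identify the client
                send(out, in, "MAIL FROM:<alice@example.com>");    // specify the sender
                send(out, in, "RCPT TO:<bob@example.com>");        // specify a recipient
                send(out, in, "DATA");                             // server replies "354 ..."
                send(out, in, "Subject: test\r\n\r\nHello via SMTP\r\n.");  // "." ends the message
                send(out, in, "QUIT");
            }
        }

        // Send one command terminated by CRLF and print the server's reply line.
        static void send(PrintWriter out, BufferedReader in, String command) throws IOException {
            out.print(command + "\r\n");
            out.flush();
            System.out.println(in.readLine());
        }
    }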

FTP (File Transfer Protocol)

FTP, or File Transfer Protocol, is used to transfer (upload/download) data from one computer to
another over the Internet or through a computer network. FTP is one of the most commonly used
protocols for transferring files over the Internet. Typically, two computers are involved in
transferring the files: a server and a client. The client computer, running FTP client software
such as CuteFTP or AceFTP, initiates a connection with the remote computer (server). After
successfully connecting to the server, the client computer can perform a number of operations
such as downloading files, uploading, renaming and deleting files, creating new folders, etc.
Virtually every operating system supports FTP.

IP (Internet Protocol)

An Internet Protocol (IP) address is a unique address or identifier of each computer or
communication device on the network and Internet. Any participating networking device, such
as a router, computer, printer, Internet fax machine or switch, may have its own unique IP
address. Some information about a user, such as an approximate location, can sometimes be
inferred from an IP address. Every domain on the Internet must have a unique or shared IP
address.
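
As a small illustration, the Java sketch below prints the local machine's IP address and resolves a host name to its address through DNS; the host name www.example.com is an illustrative assumption.

    import java.net.InetAddress;

    // Sketch showing how host names map to IP addresses.
    // The host name "www.example.com" is an illustrative assumption.
    public class IpLookupSketch {
        public static void main(String[] args) throws Exception {
            InetAddress local = InetAddress.getLocalHost();                  // this machine's address
            InetAddress remote = InetAddress.getByName("www.example.com");   // resolved via DNS
            System.out.println("Local address:  " + local.getHostAddress());
            System.out.println("Remote address: " + remote.getHostAddress());
        }
    }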

DHCP (Dynamic Host Configuration Protocol)

DHCP, or Dynamic Host Configuration Protocol, is a set of rules used by a communication
device such as a router, computer or network adapter to request and obtain an IP address from a
server that maintains a pool of available addresses. DHCP is a protocol used by network
computers to obtain IP addresses and other settings, such as the gateway, DNS servers and
subnet mask, from the DHCP server. DHCP ensures that all IP addresses are unique, and IP
address management is done by the server rather than by a human. Address assignments (leases)
expire after a predetermined period of time. DHCP works in four phases known as DORA:
Discover, Offer, Request and Acknowledge.

IMAP (Internet Message Access Protocol)

The Internet Message Access Protocol, known as IMAP, is an application layer protocol used to
access emails on remote servers. POP3 and IMAP are the two most commonly used email
retrieval protocols. Most email clients, such as Outlook Express, Thunderbird and MS Outlook,
support both POP3 and IMAP. Email messages are generally stored on the email server, and
users retrieve these messages either through a web browser or an email client. IMAP is generally
used in large networks. IMAP allows users to access their messages instantly on their systems.

ARCNET
ARCNET is a local area network technology that uses a token-bus scheme for managing line
sharing among the workstations. When a device on the network wants to send a message, it
inserts a token that is set to 1, and when the destination device has read the message it resets the
token to 0 so that the frame can be used by another device.

FDDI

Fiber Distributed Data Interface (FDDI) provides a standard for data transmission in a local area
network that can extend over a range of up to 200 kilometers. FDDI uses the token ring protocol
as its basis. An FDDI local area network can support a large number of users and can cover a
large geographical area. FDDI uses optical fiber as its standard communication medium and a
dual-attached token-ring topology. An FDDI network contains two token rings, and the primary
ring offers a capacity of 100 Mbit/s. FDDI is an ANSI standard network that can support up to
500 stations with a maximum of 2 kilometers between stations.

UDP

The User Datagram Protocol is one of the most important protocols of the TCP/IP suite and is
used to send short messages known as datagrams. Common network applications that use UDP
are DNS, online games, IPTV, TFTP and VoIP. UDP is very fast and lightweight. UDP is an
unreliable, connectionless protocol that operates at the transport layer, and it is sometimes called
the Universal Datagram Protocol.
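
The Java sketch below illustrates this connectionless behaviour by sending a single short datagram; the destination host and port are illustrative assumptions and no reply is expected.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    // Sketch of sending one short datagram with UDP.
    // The destination (localhost, port 9999) is an illustrative assumption.
    public class UdpSendSketch {
        public static void main(String[] args) throws Exception {
            byte[] payload = "hello".getBytes(StandardCharsets.US_ASCII);
            DatagramPacket packet = new DatagramPacket(
                    payload, payload.length, InetAddress.getByName("localhost"), 9999);
            try (DatagramSocket socket = new DatagramSocket()) {
                // UDP is connectionless: the datagram is sent with no handshake,
                // no acknowledgement and no delivery guarantee.
                socket.send(packet);
            }
        }
    }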

X.25

X.25 is a standard protocol suite for wide area networks using a phone line or ISDN system. The
X.25 standard was approved by the CCITT (now the ITU) in 1976.

TFTP

Trivial File Transfer Protocol (TFTP) is a very simple file transfer protocol with only the most
basic features of FTP. TFTP can be implemented in a very small amount of memory, which
makes it useful for booting devices such as routers. TFTP is also used to transfer files over a
network. TFTP uses UDP and provides no security features.

SNMP

The Simple Network Management Protocol (SNMP) forms part of the TCP/IP suite. SNMP is
used to manage the network-attached devices of a complex network.

PPTP

The Point-to-Point Tunneling Protocol is used in virtual private networks. PPTP works by
tunneling regular PPP sessions. PPTP is one method of implementing a VPN.

OTHER PROTOCOLS
VTP, ARP, IPX, OSPF, RARP, NFS, BOOTP, NNTP, IRC, RADIUS, SOAP, Telnet, RIP, SSH.

In the beginning, there were mainframes. Every program and piece of data was stored in a single
almighty machine. Users could access this centralized computer only by means of dumb
terminals. (See Figure 1.)

Figure 1. Mainframe Architecture

In the 1980s, the arrival of inexpensive network-connected PCs produced the popular two-tier
client-server architecture. In this architecture, there is an application running in the client
machine which interacts with the server—most commonly, a database management system (see
Figure 2). Typically, the client application, also known as a fat client, contained some or all of
the presentation logic (user interface), the application navigation, the business rules and the
database access. Every time the business rules were modified, the client application had to be
changed, tested and redistributed, even when the user interface remained intact. In order to
minimize the impact of business logic alteration within client applications, the presentation logic
must be separated from the business rules. This separation becomes the fundamental principle in
the three-tier architecture.
Figure 2. Two-Tier Client-Server Architecture

In a three-tier architecture (also known as a multi-tier architecture), there are three or more
interacting tiers, each with its own specific responsibilities (see Figure 3):

Figure 3. Three-Tier Architecture

 Tier 1: the client contains the presentation logic, including simple control and user input
validation. This application is also known as a thin client.
 Tier 2: the middle tier is also known as the application server, which provides the
business processes logic and the data access.
 Tier 3: the data server provides the business data.

These are some of the advantages of a three-tier architecture:

 It is easier to modify or replace any tier without affecting the other tiers.
 Separating the application and database functionality means better load balancing.
 Adequate security policies can be enforced within the server tiers without hindering the
clients.

Putting the Theory into Practice

In order to demonstrate these design concepts, the general outline of a simple three-tier
“Hangman” game will be presented (check the source code in the archive file). The purpose of
this game, just in case the reader isn't familiar with it, is to try to guess a mystery word, one letter
at a time, before making a certain number of mistakes.

Figure 4. Hangman Client Running in Windows 98

The data server is a Linux box running the MiniSQL database management system. The database
is used to store the mystery words. At the beginning of each game, one of these words is
randomly selected.
At the client side, a Java applet contained in a web page (originally obtained from a web server)
is responsible for the application's graphical user interface (see Figure 4). The client platform
may be any computer with a web browser that supports applets. The game's logic is not
controlled by the applet; that's the middle tier's job. The client only takes care of the presentation
logic: getting the user's input, performing some simple checking and drawing the resulting
output.

The server in the middle tier is a Java application, also running within a Linux box. The rules of
the “Hangman” game (the business rules) are coded in this tier. Sockets and JDBC, respectively,
are used to communicate with the client and the data server through TCP/IP.
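
The article's actual source code lives in the archive file mentioned above; purely as an illustration of the idea, the hypothetical Java sketch below shows a middle-tier server that accepts a client over a socket and fetches a mystery word from the data server through JDBC. The port number, JDBC URL and table layout are assumptions made for the example, not the article's code.

    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Hypothetical sketch of the middle tier: a server that accepts a client over a
    // socket and fetches a mystery word from the data server through JDBC.
    // Port, JDBC URL, table and column names are assumptions; the mSQL JDBC driver
    // is assumed to be on the classpath.
    public class HangmanMiddleTierSketch {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(4444)) {         // assumed port
                while (true) {
                    try (Socket client = server.accept();
                         PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                        out.println(pickMysteryWord());                   // business logic lives here
                    }
                }
            }
        }

        // Ask the database tier for all words and pick one at random.
        static String pickMysteryWord() throws SQLException {
            String url = "jdbc:msql://dataserver:1114/hangman";           // assumed JDBC URL
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT word FROM words")) {
                List<String> words = new ArrayList<>();
                while (rs.next()) {
                    words.add(rs.getString("word"));
                }
                return words.get(new Random().nextInt(words.size()));
            }
        }
    }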

Figure 5. Diagram of Hardware Nodes

Figure 5 presents a UML (Unified Modeling Language) deployment diagram that shows the
physical relationship among the hardware nodes of the system.

Even though the design described gives the impression of requiring a different machine for each
tier, all tiers (each one running on a different process) can be run in the same computer. This
means the complete application is able to run in a single Linux system with a graphical desktop,
and it doesn't even have to be connected to the Net!
Windows Explorer is a file manager application that is included with releases of the Microsoft Windows
operating system from Windows 95 onwards. It provides a graphical user interface for accessing the file
systems. It is also the component of the operating system that presents many user interface items on
the monitor, such as the taskbar and desktop. Controlling the computer is possible without Windows
Explorer running.

Windows Explorer was first included with Windows 95 as a replacement for the Windows 3.x
File Manager. It could be accessed by double-clicking the new My Computer desktop icon, or
launched from the new Start Menu that replaced the earlier Program Manager. There is also a
shortcut key combination – Windows key + E. Successive versions of Windows (and in some
cases, Internet Explorer) introduced new features and capabilities, removed other features, and
generally progressed from being a simple file system navigation tool into a task-based file
management system.

While “Windows Explorer” is a term most commonly used to describe the file management
aspect of the operating system, the Explorer process also houses the operating system’s search
functionality and File Type associations (based on filename extensions), and is responsible for
displaying the desktop icons, the Start Menu, the Taskbar, and the Control Panel. Collectively,
these features are known as the Windows shell.

After a user logs in, the Explorer process is created by the Userinit process. Userinit performs some
initialization of the user environment (such as running the login script and applying group
policies) and then looks in the registry at the Shell value and creates a process to run the system-
defined shell, by default Explorer.exe. Then Userinit exits. This is why various process explorers
show Explorer.exe with no parent: its parent has exited.
