
1) Features of different network topologies as well as their advantages and disadvantages.

Network topology is the layout and structure of a network. Topologies are either physical (the
physical layout of devices on a network) or logical (the way that signals act on the network media,
or the way that data passes through the network from one device to the next).
Physical Topology: the actual geometric layout of the workstations.
The physical topologies are Mesh, Bus, Ring, Star, Hierarchical, Hybrid, and Extended topology.
Bus Topology: all networking devices (workstations) are connected to a central cable called the
bus – the main wire that connects all devices on a local-area network. T-connectors are used
to attach each device to the central cable.
Advantages (benefits) of Linear Bus Topology
1) It is easy to set up and extend a bus network.
2) The cable length required for this topology is the least compared to other topologies.
3) Bus topology costs very little.
4) Linear bus networks are mostly used in small networks. Good for LANs.

Disadvantages (Drawbacks) of Linear Bus Topology

1) There is a limit on the central cable length and the number of nodes that can be connected.
2) Dependency on the central cable has its disadvantages. If the main cable
(i.e. the bus) encounters a problem, the whole network breaks down.
3) Proper termination is required to absorb signals; the use of terminators is a must.
4) It is difficult to detect and troubleshoot a fault at an individual station.
5) Maintenance costs can get higher with time.
6) The efficiency of a bus network reduces as the number of devices connected to it
increases.
7) It is not suitable for networks with heavy traffic.
8) Security is very low because all the computers receive the signal sent from the
source.
Ring Topology (loop topology): a network topology where all computers and peripherals are
laid out in a circle. Data flows around the circle from device to device. There are two types of ring
topology based on data flow: unidirectional and bidirectional.
A unidirectional ring topology handles data traffic in either the clockwise or the anti-clockwise direction
and can thus be called a half-duplex network, while a bidirectional ring topology handles data traffic in
both directions and is hence full-duplex.
Advantage of Ring Topology
• Easier to manage than a bus network
• Good communication over long distances
• Handles high volumes of traffic
Disadvantages of Ring Topology
• The failure of a single node of the network can cause the entire network
to fail.
• The movement or changes made to network nodes affects the
performance of the entire network.
• In a unidirectional ring, a data packet must pass through all the nodes between the source and the destination.
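The difference between the two ring variants comes down to hop counts, which can be sketched with a little arithmetic. This is an illustrative model only; the node indices and ring size below are invented:

```python
# Illustrative model of hop counts in an N-node ring (indices are arbitrary).
def hops_unidirectional(src, dst, n):
    # Traffic flows one way only, so a packet may traverse almost the whole ring.
    return (dst - src) % n

def hops_bidirectional(src, dst, n):
    # Traffic may flow either way, so a packet takes the shorter direction.
    return min((dst - src) % n, (src - dst) % n)

# In an 8-node ring, reaching the node "just behind" the sender:
print(hops_unidirectional(0, 7, 8))  # 7
print(hops_bidirectional(0, 7, 8))   # 1
```

This also illustrates the last disadvantage above: in a unidirectional ring the worst-case path grows with the size of the ring.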

Star topology:
In star topology, all the components of the network are connected to a central device, which may be
a hub, a router, or a switch. Unlike bus topology (discussed earlier), where nodes are connected to a
central cable, here all the workstations are connected to the central device with a point-to-point
connection. So it can be said that every computer is indirectly connected to every other node with
the help of the central device.
Advantages of Star Topology
1) Compared to bus topology it gives much better performance; signals don't necessarily get transmitted
to all the workstations. A sent signal reaches the intended destination after passing through no more than 3-4
devices and 2-3 links. Performance of the network is dependent on the capacity of the central hub.
2) Easy to connect new nodes or devices. In star topology new nodes can be added easily without affecting the
rest of the network. Similarly, components can also be removed easily.
3) Centralized management. It helps in monitoring the network.
4) Failure of one node or link doesn't affect the rest of the network. At the same time it's easy to detect the failure
and troubleshoot it.

Disadvantages of Star Topology


1) Too much dependency on the central device has its own drawbacks. If it fails, the whole network goes down.
2) The use of a hub, a router, or a switch as the central device increases the overall cost of the network.
3) Performance, as well as the number of nodes that can be added, depends on the capacity of the central
device.

Mesh topology:
In a mesh network topology, each of the network nodes (computers and other devices) is interconnected
with the others. Every node not only sends its own signals but also relays data from other nodes. In
fact, a true mesh topology is one where every node is connected to every other node in the network.
This type of topology is very expensive, as there are many redundant connections, so it is not often
used in wired computer networks. It is commonly used in wireless networks. Flooding or routing techniques
are used in mesh topology.
Types of Mesh Network Topologies:
1) Full Mesh Topology:
In this, like a true mesh, each component is connected to every other component. Even after considering the
redundancy factor and cost of this network, its main advantage is that network traffic can be redirected to
other nodes if one of the nodes goes down. Full mesh topology is used only for backbone networks.

2) Partial Mesh Topology:-


This is far more practical compared to full mesh topology. Here, some of the systems are connected in the
same fashion as in a full mesh, while the rest of the systems are only connected to 1 or 2 devices. It can be
said that in a partial mesh the workstations are 'indirectly' connected to the other devices. This one is less costly and
also reduces redundancy.

Advantages of Mesh topology


1) Data can be transmitted from different devices simultaneously. This topology can withstand high traffic.
2) Even if one of the components fails there is always an alternative present. So data transfer doesn’t get
affected.
3) Expansion and modification in topology can be done without disrupting other nodes.

Disadvantages of Mesh topology


1) There are high chances of redundancy in many of the network connections.
2) Overall cost of this network is way too high as compared to other network topologies.
3) Set-up and maintenance of this topology is very difficult. Even administration of the network is tough.

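The cost differences between the topologies discussed so far come down to link counts. A sketch of the standard formulas, assuming n workstations and not counting the star's central device as a node:

```python
# Number of point-to-point links required to wire n workstations.
def links_full_mesh(n):
    # Every node connects directly to every other node.
    return n * (n - 1) // 2

def links_star(n):
    # One link from each workstation to the central device.
    return n

def links_ring(n):
    # Each node connects to the next, closing the loop.
    return n

for n in (5, 10, 50):
    print(n, links_full_mesh(n), links_star(n), links_ring(n))
```

The full-mesh count grows quadratically with n, which is why full mesh is reserved for small backbone networks while star and ring scale linearly.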

Hybrid Topology: Hybrid topology is an interconnection of two or more basic network topologies, each of which
contains its own nodes.
Types of Hybrid Network Topologies
Star-Wired Ring Network Topology
In a Star-Wired Ring hybrid topology, a set of Star topologies are connected by a Ring topology as the adjoining
topology. Joining each star topology to the ring topology is a wired connection.
Figure 1 is a diagrammatic representation of the star-wired ring topology

Figure 1: A Star-Wired Ring Network Topology


In Figure 1, the individual nodes of a given star topology, like Star Topology 1, are interconnected by a central switch, which in
turn provides an external connection to the other star topologies through a node (A) in the main ring topology.
Information from a given star topology reaching a connecting node in the main ring topology, like A, flows in either
a bidirectional or a unidirectional manner.
A bidirectional flow will ensure that a failure in one node of the main ring topology does not lead to a complete breakdown
of information flow in the main ring topology.

Star-wired bus Network Topology


A Star-Wired Bus topology is made up of a set of Star topologies interconnected by a central Bus topology.
Joining each Star topology to the Bus topology is a wired connection.
Figure 2 is a diagrammatic representation of the star-wired bus topology

Figure 2: A Star-Wired Bus Network Topology


In this setup, the main Bus topology provides a backbone connection that interconnects the individual Star topologies.
The backbone in this case is a wired connection.

Hierarchical Network Topology


Hierarchical Network topology is structured in different levels as a hierarchical tree. It is also referred to as Tree network
topology.
Figure 3 shows a diagrammatic representation of Hierarchical network topology.

Figure 3: A Tree Network Topology


Connection of the lower levels like level 2 to higher levels like level 1 is done through wired connection.
The top most level, level 0, contains the parent (root) node. The next level, level 1, contains child nodes, which in
turn have child nodes in level 2, and so on. All the nodes in a given level have a parent node in the level above, except for the node(s) at the top
most level.
The nodes at the bottom most level are called leaf nodes as they are peripheral and are parent to no other node. At the
basic level, a tree network topology is a collection of star network topologies arranged in different levels.
Each level including the top most can contain one or more nodes.
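The parent/child/leaf structure described above can be represented as a simple mapping. The node names below are invented for illustration:

```python
# A hierarchical topology as a parent -> children mapping (names are made up).
topology = {
    "root":    ["switch1", "switch2"],  # level 0 -> level 1
    "switch1": ["pc1", "pc2"],          # level 1 -> level 2
    "switch2": ["pc3"],
}

def leaf_nodes(tree):
    # Leaves appear as children but never as parents of any node.
    children = {c for kids in tree.values() for c in kids}
    return sorted(children - set(tree))

print(leaf_nodes(topology))  # ['pc1', 'pc2', 'pc3']
```

Note how each parent together with its children is itself a small star, matching the observation that a tree topology is a collection of stars arranged in levels.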

Uses of Hybrid Network Topologies


The decision to use a hybrid network topology over a basic topology is mostly based on the organizational
needs to be addressed by the envisioned network. The following are some of the reasons that can make an organization
pick a hybrid as the preferred network topology:

Where there is need for Flexibility and Ease of network Growth


Network growth is when more network nodes are added to an existing network. A hybrid network eases the addition of new
nodes, as changes can be made at the basic network level as well as on the main network.
For example, in a campus setup, there could be different hostels, each of which has its own network. The
individual hostel networks have the liberty of adding new nodes at any time without affecting the other hostels'
networks. Additionally, new hostel networks can be added to the existing main network.
In case a user needs to leave a hostel network, the other hostels' network configurations are not affected.

Where there is need for Isolation of Individual Network


In certain cases, there is a need for distributed network administration. In this case there is an overall network
administrator (admin), while the individual basic networks are administered locally. In the campus network example, each
hostel network can have a local admin who manages the addition of new nodes or removal of existing nodes. Each
network can have its own network policies and controls, different from the other networks.
Advantages of Hybrid Network Topology
1) Reliable: Unlike other networks, fault detection and troubleshooting are easy in this type of topology. The
part in which a fault is detected can be isolated from the rest of the network, and the required corrective measures can be
taken WITHOUT affecting the functioning of the rest of the network.
2) Scalable: It's easy to increase the size of the network by adding new components, without disturbing the existing
architecture.
3) Flexible: A hybrid network can be designed according to the requirements of the organization and by
optimizing the available resources. Special care can be given to nodes where traffic is high as well as where the
chance of fault is high.
4) Effective: Hybrid topology is the combination of two or more topologies, so we can design it in such a way
that the strengths of the constituent topologies are maximized while their weaknesses are neutralized. For example,
we saw that ring topology has good data reliability (achieved by the use of tokens) and star topology has high
fault tolerance (as each node is connected to the others not directly but through a central device), so these two
can be used effectively in a hybrid star-ring topology.

Disadvantages of Hybrid Topology


1) Complexity of Design: One of the biggest drawbacks of hybrid topology is its design. It's not easy to design
this type of architecture, and it's a tough job for designers. The configuration and installation process needs to be
very efficient.
2) Costly Hub: The hubs used to connect two distinct networks are very expensive. These hubs are different
from usual hubs, as they need to be intelligent enough to work with different architectures and should
function even if a part of the network is down.
3) Costly Infrastructure: As hybrid architectures are usually larger in scale, they require a lot of cables,
cooling systems, sophisticated network devices, etc.

Advantages: - A tree topology is a good choice for large computer networks, as it 'divides' the whole
network into parts that are more easily manageable.
Disadvantages: - The entire network depends on a central hub, and a failure of the central hub can
cripple the whole network.

Tree topology: a tree topology is also known as a star-bus topology. It incorporates elements
of both a bus topology and a star topology. Consider, for example, a tree topology in which the
central nodes of two star networks are connected to one another.
If the main cable, or trunk, between the two star networks were to fail, those networks
would be unable to communicate with each other. However, computers on the same star topology
would still be able to communicate.

There are certain special cases where tree topology is more effective:

 Communication between two networks

 A network structure that requires a root node, intermediate parent nodes, and leaf nodes (just as we see
in an n-ary tree), or a network structure that exhibits three levels of hierarchy, because two levels of
hierarchy are already displayed in the star topology.

Advantages of tree topology:


 Scalable, as leaf nodes can accommodate more nodes in the hierarchical chain.
 Point-to-point wiring runs to the central hub at each intermediate node; each such hub represents a node
on the bus backbone.
 The other hierarchical subnetworks are not affected if one of them gets damaged.
 Easier maintenance and fault finding.

Disadvantages of tree topology:


 A large amount of cabling is needed.
 A lot of maintenance is needed.
 The backbone forms a single point of failure.

Extended topology: it is made up of interconnected individual star topologies; it is a star
network with an additional networking device connected to the main networking device.
Advantages: - It covers a bigger communication area than a star topology.
- It is basically every computer attached to a central hub/router.
Disadvantages: - The system depends heavily on the functioning of the central hub.
- The performance and scalability of the network also depend on the capacity of
the hub.
Logical Topology: refers to the nature of the paths that signals follow from node to
node. The logical topology of a network determines how hosts communicate across the
medium. The two most common types of logical topology are BROADCAST and TOKEN
PASSING.
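Token passing, the second logical topology mentioned, can be sketched as a toy simulation (the station names and frames below are invented): only the station currently holding the token may transmit, after which the token moves to the next station, so no two stations ever transmit at once.

```python
# Toy model of token passing on a 4-station logical ring.
stations = ["A", "B", "C", "D"]
pending = {"B": "hello", "D": "world"}   # stations holding a frame to send

transmissions = []
token = 0                                # index of the station holding the token
for _ in range(len(stations)):           # one full rotation of the token
    holder = stations[token]
    if holder in pending:
        # Holding the token grants the right to transmit, avoiding collisions.
        transmissions.append((holder, pending.pop(holder)))
    token = (token + 1) % len(stations)  # pass the token onward

print(transmissions)  # [('B', 'hello'), ('D', 'world')]
```

In a broadcast logical topology, by contrast, every station would receive each transmission and simply discard frames not addressed to it.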

2) Network performance metrics

Network performance refers to measures of the service quality of a network as seen by the customer.
The following measures are often considered important:

 Bandwidth, commonly measured in bits/second, is the maximum rate at which information can be transferred.
 Throughput is the actual rate at which information is transferred.
 Latency is the delay between the sender transmitting information and the receiver decoding it; this is mainly a function of the signal's travel time
and the processing time at any nodes the information traverses.
 Jitter is the variation in packet delay at the receiver of the information.
 Error rate is the number of corrupted bits expressed as a percentage or fraction of the total sent.

Although, at first glance, bandwidth in bits per second and throughput seem the same, they differ: bandwidth is the
theoretical capacity of the link, while throughput is the rate actually achieved, which is reduced by protocol overhead,
contention, retransmissions, and losses.
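The metrics above can be computed from per-packet send and receive timestamps. A minimal sketch with invented numbers; jitter is taken here as the mean deviation from average latency, which is one of several definitions in use:

```python
# Hypothetical per-packet measurements (all values invented).
send_times = [0.000, 0.010, 0.020, 0.030]   # seconds
recv_times = [0.050, 0.062, 0.071, 0.083]   # seconds
packet_bits = 12_000                        # bits per packet
corrupted_bits = 3

total_bits = packet_bits * len(send_times)
latencies = [r - s for s, r in zip(send_times, recv_times)]
avg_latency = sum(latencies) / len(latencies)

# Throughput: bits actually delivered over the wall-clock span of the transfer.
throughput = total_bits / (recv_times[-1] - send_times[0])

# Jitter: mean deviation of per-packet latency from the average latency.
jitter = sum(abs(l - avg_latency) for l in latencies) / len(latencies)

# Error rate: corrupted bits as a fraction of the total sent.
error_rate = corrupted_bits / total_bits

print(f"avg latency {avg_latency * 1000:.1f} ms, jitter {jitter * 1000:.2f} ms")
print(f"throughput {throughput:.0f} bit/s, error rate {error_rate:.6f}")
```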

Packet loss: occurs because buffers are not infinite in size.
When a packet arrives at a buffer that is full, the packet is discarded.
Packet loss, if it must be corrected, is resolved at higher levels in the network stack (transport or
application layers).
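This tail-drop behaviour can be sketched in a few lines; the buffer size and packet names below are invented:

```python
from collections import deque

BUFFER_SIZE = 3          # finite buffer: only three packets fit
buffer = deque()
dropped = 0

# A burst of five packets arrives before any can be transmitted.
for pkt in ["p1", "p2", "p3", "p4", "p5"]:
    if len(buffer) < BUFFER_SIZE:
        buffer.append(pkt)   # room left: packet is queued
    else:
        dropped += 1         # buffer full: packet is discarded (packet loss)

print(list(buffer), dropped)  # ['p1', 'p2', 'p3'] 2
```

Recovering the two dropped packets, if required, would be handled by a transport-layer protocol such as TCP through retransmission.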
3) Standardization organizations: committees, forums, and government regulatory
agencies.
Standards are essential in creating and maintaining an open and competitive market for
equipment manufacturers and also in guaranteeing national and international interoperability of
data and telecommunication technology and processes.
They provide guidelines to manufacturers, vendors, government agencies and other service
providers to ensure the interconnectivity necessary in today's marketplace and in international
communication.
Data communication standards fall into two categories: de facto (meaning 'by fact' or 'by
convention') and de jure (meaning 'by law' or 'by regulation').
- De facto. Standards that have not been approved by an organized body but have been
adopted as standards through widespread use are de facto standards.
- De jure. De jure standards are those that have been legislated by an officially
recognized body.
Standards are developed through the cooperation of standards creation committees, forums and
government regulatory agencies. Some of the standards establishment organizations are:
International Standards Organization (ISO), International Telecommunication Union-
Telecommunication Standards Sector (ITU-T), American National Standard Institute (ANSI),
Institute of Electrical and Electronics Engineers (IEEE), Electronic Industries Association (EIA).
An association of organizations, governments, manufacturers and users form the standards organizations and are
responsible for developing, coordinating and maintaining the standards. The intent is that all data communications
equipment manufacturers and users comply with these standards. The primary standards organizations for data
communication are:

1. International Standards Organization (ISO):

ISO is the international organization for standardization on a wide range of subjects. It is comprised mainly of members
from the standards committees of various governments throughout the world. It is responsible for developing models
which provide a high level of system compatibility, quality enhancement, improved productivity and reduced costs. The ISO is
also responsible for endorsing and coordinating the work of the other standards organizations.

2. International Telecommunications Union-Telecommunication Sector (ITU-T):

ITU-T is one of the four permanent parts of the International Telecommunications Union, based in Geneva, Switzerland. It
has developed three sets of specifications: the V series for modem interfacing and data transmission over telephone lines;
the X series for data transmission over public digital networks, email and directory services; and the I and Q series for Integrated
Services Digital Network (ISDN) and its extension, Broadband ISDN. ITU-T membership consists of government authorities
and representatives from many countries, and it is the present standards organization for the United Nations.

3. Institute of Electrical and Electronics Engineers (IEEE):

IEEE is an international professional organization founded in the United States and is comprised of electronics, computer
and communications engineers. It is currently the world's largest professional society, with over 200,000 members. It
develops communication and information processing standards with the underlying goal of advancing theory, creativity, and
product quality in any field related to electrical engineering.

4. American National Standards Institute (ANSI):

ANSI is the official standards agency for the United States and is the U.S. voting representative for the ISO. ANSI is a
completely private, non-profit organization comprised of equipment manufacturers and users of data processing equipment
and services. ANSI membership is comprised of people from professional societies, industry associations, governmental
and regulatory bodies, and consumer goods companies.

5. Electronics Industry Association (EIA):

EIA is a non-profit U.S. trade association that establishes and recommends industrial standards. EIA activities include
standards development, increasing public awareness, and lobbying, and it is responsible for developing the RS
(recommended standard) series of standards for data and communications.

6. Telecommunications Industry Association (TIA):

TIA is the leading trade association in the communications and information technology industry. It facilitates business
development opportunities through market development, trade promotion, trade shows, and standards development. It
represents manufacturers of communications and information technology products and also facilitates the convergence of
new communications networks.

7. Internet Architecture Board (IAB):

The IAB, earlier known as the Internet Activities Board, is a committee created by ARPA (Advanced Research Projects Agency)
to analyze the activities of ARPANET, whose purpose was to accelerate the advancement of technologies useful to the U.S.
military. The IAB is a technical advisory group of the Internet Society.

8. Internet Engineering Task Force (IETF):

The IETF is a large international community of network designers, operators, vendors and researchers concerned with the
evolution of the Internet architecture and the smooth operation of the Internet.

9. Internet Research Task Force (IRTF):

The IRTF promotes research of importance to the evolution of the future Internet by creating focused, long-term, small
research groups working on topics related to Internet protocols, applications, architecture and technology.
Forums: telecommunication technology development is moving faster than the ability of
standards committees to ratify standards. Standards committees are procedural bodies and by
nature slow moving. To accommodate the need for working models and agreements, and to
facilitate the standardization process, many special interest groups have developed forums made
up of representatives from interested corporations. The forums work with universities and users to
test, evaluate and standardize new technologies. The forums present their conclusions to
the standards bodies.
Regulatory agencies: All communication is subject to regulation by government agencies. The
purpose of these agencies is to protect the public interest by regulating radio, television and wire
cable communication.

4) Internet: development history and standards

Three organizations under the Internet Society are responsible for the actual work of standards development and publication:

 Internet Architecture Board (IAB): Responsible for defining the overall architecture of the Internet, providing guidance and broad

direction to the IETF.

 Internet Engineering Task Force (IETF): The protocol engineering and development arm of the Internet.

 Internet Engineering Steering Group (IESG): Responsible for technical management of IETF activities and the Internet standards process.

The actual development of new standards and protocols for the Internet is carried out by working groups chartered by the IETF. Membership
in a working group is voluntary; any interested party can participate. During the development of a specification, a working group will make a
draft version of the document available as an Internet Draft, which is placed in the IETF's "Internet-Drafts" online directory. The document
may remain as an Internet Draft for up to six months, and interested parties can review and comment on the draft. During that time, the IESG
may approve publication of the draft as an RFC (Request for Comment). If the draft has not progressed to the status of an RFC during the
six-month period, it is withdrawn from the directory. The working group may subsequently publish a revised version of the draft.

The IETF is responsible for publishing the RFCs, with approval of the IESG. The RFCs are the working notes of the Internet research and
development community. A document in this series may cover essentially any topic related to computer communications and may be
anything from a meeting report to the specification of a standard.

The work of the IETF is divided into eight areas, each with an area director and composed of numerous working groups:

 General: IETF processes and procedures. An example is the process for development of Internet standards.

 Applications: Internet applications. Examples include Web-related protocols, EDI-Internet integration, LDAP.

 Internet: Internet infrastructure. Examples include IPv6, PPP extensions.

 Operations and management: Standards and definitions for network operations. Examples include SNMPv3, remote network monitoring.

 Routing: Protocols and management for routing information. Examples include multicast routing, OSPF.

 Security: Security protocols and technologies. Examples include Kerberos, IPSec, X.509, S/MIME, TLS.

 Transport: Transport layer protocols. Examples include differentiated services, IP telephony, NFS, RSVP.

 User services: Methods to improve the quality of information available to users of the Internet. Examples include responsible use of
the Internet, user services, FYI documents.

The decision of which RFCs become Internet standards is made by the IESG, on the recommendation of the IETF. To become a standard, a
specification must meet the following criteria:

 Be stable and well understood

 Be technically competent

 Have multiple, independent, and interoperable implementations with substantial operational experience

 Enjoy significant public support

 Be recognizably useful in some or all parts of the Internet


The key difference between these criteria and those used for international standards from the ITU is the emphasis here on operational
experience.

The left side of Figure 1 shows the series of steps, called the standards track, that a specification goes through to become a standard; this
process is defined in RFC 2026. The steps involve increasing amounts of scrutiny and testing. At each step, the IETF must make a
recommendation for advancement of the protocol, and the IESG must ratify it. The process begins when the IESG approves the publication
of an Internet Draft document as an RFC with the status of Proposed Standard.

Figure 1

Internet RFC publication process.

The white boxes in the diagram represent temporary states, which should be occupied for the minimum practical time. However, a document
must remain a Proposed Standard for at least six months and a Draft Standard for at least four months to allow time for review and comment.
The shaded boxes represent long-term states that may be occupied for years.

For a specification to be advanced to Draft Standard status, there must be at least two independent and interoperable implementations from
which adequate operational experience has been obtained.

After significant implementation and operational experience has been obtained, a specification may be elevated to Internet Standard. At this
point, the specification is assigned an STD number as well as an RFC number.

Finally, when a protocol becomes obsolete, it is assigned to the Historic state.

All Internet standards fall into one of two categories:

 Technical specification (TS): A TS defines a protocol, service, procedure, convention, or format. The bulk of the Internet standards
are TSs.

 Applicability statement (AS): An AS specifies how and under what circumstances one or more TSs may be applied to support a
particular Internet capability. An AS identifies one or more TSs that are relevant to the capability, and may specify values or ranges for
particular parameters associated with a TS, or functional subsets of a TS, that are relevant for the capability.

There are numerous RFCs that are not destined to become Internet standards. Some RFCs standardize the results of community
deliberations about statements of principle or conclusions about the best way to perform some operations or IETF process functions.
Such RFCs are designated as Best Current Practice (BCP). Approval of BCPs follows essentially the same process as approval of Proposed
Standards. Unlike standards-track documents, there is no three-stage process for BCPs; a BCP goes from Internet Draft status to approved
BCP in one step.


A protocol or other specification that is not considered ready for standardization may be published as an Experimental RFC. After further
work, the specification may be resubmitted. If the specification is generally stable, has resolved known design choices, is believed to be well
understood, has received significant community review, and appears to enjoy enough community interest to be considered valuable, the RFC
will be designated a Proposed Standard.

Finally, an Informational Specification is published for the general information of the Internet community.

Credit for the initial concept that developed into the Internet is typically given to Leonard Kleinrock. In 1961, he wrote about ARPANET, the
predecessor of the Internet, in a paper entitled "Information Flow in Large Communication Nets." Kleinrock, along with other innovators
such as J.C.R. Licklider, the first director of the Information Processing Technology Office (IPTO), provided the backbone
for the ubiquitous stream of emails, media, Facebook postings and tweets that are now shared online every day.
This timeline offers a brief history of the Internet's evolution:

1965: Two computers at MIT Lincoln Lab communicate with one another using packet-switching technology.
1968: Bolt, Beranek and Newman, Inc. (BBN) unveils the final version of the Interface Message Processor (IMP)
specifications. BBN wins the ARPANET contract.
1969: On Oct. 29, UCLA’s Network Measurement Center, Stanford Research Institute (SRI), University of California-
Santa Barbara and University of Utah install nodes. The first message is "LO," which was an attempt by student Charles
Kline to "LOGIN" to the SRI computer from the university. However, the message was unable to be completed because
the SRI system crashed.
1972: BBN’s Ray Tomlinson introduces network email. The Internetworking Working Group (INWG) forms to address
need for establishing standard protocols.
1973: Global networking becomes a reality as University College London (England) and NORSAR
(Norway) connect to ARPANET. The term Internet is born.
1974: The first Internet Service Provider (ISP) is born with the introduction of a commercial version of ARPANET,
known as Telenet.
1974: Vinton Cerf and Bob Kahn (the duo said by many to be the Fathers of the Internet) publish "A Protocol for Packet
Network Interconnection," which details the design of TCP.
1976: Queen Elizabeth II hits the “send button” on her first email.
1979: USENET forms to host news and discussion groups.
1981: The National Science Foundation (NSF) provided a grant to establish the Computer Science Network (CSNET) to
provide networking services to university computer scientists.
1982: Transmission Control Protocol (TCP) and Internet Protocol (IP), as the protocol suite, commonly known as TCP/IP,
emerge as the protocol for ARPANET. This results in the fledgling definition of the Internet as connected TCP/IP
internets. TCP/IP remains the standard protocol for the Internet.
1983: The Domain Name System (DNS) establishes the familiar .edu, .gov, .com, .mil, .org, .net, and .int system for
naming websites. This is easier to remember than the previous numeric designation for sites, such as an IP address like 192.0.2.10.
1984: William Gibson, author of "Neuromancer," is the first to use the term "cyberspace."
1985: Symbolics.com, the website for Symbolics Computer Corp. in Massachusetts, becomes the first registered domain.
1986: The National Science Foundation’s NSFNET goes online to connect supercomputer centers at 56,000 bits per
second — the speed of a typical dial-up computer modem. Over time the network speeds up, and regional research and
education networks, supported in part by NSF, are connected to the NSFNET backbone — effectively expanding the
Internet throughout the United States. The NSFNET was essentially a network of networks that connected academic users
along with the ARPANET.
1987: The number of hosts on the Internet exceeds 20,000. Cisco ships its first router.
1989: World.std.com becomes the first commercial provider of dial-up access to the Internet.
1990: Tim Berners-Lee, a scientist at CERN, the European Organization for Nuclear Research, develops HyperText
Markup Language (HTML). This technology continues to have a large impact on how we navigate and view the Internet
today.
1991: CERN introduces the World Wide Web to the public.
1992: The first audio and video are distributed over the Internet. The phrase "surfing the Internet" is popularized.
1993: The number of websites reaches 600 and the White House and United Nations go online. Marc Andreessen develops
the Mosaic Web browser at the University of Illinois, Champaign-Urbana. The number of computers connected to
NSFNET grows from 2,000 in 1985 to more than 2 million in 1993. The National Science Foundation leads an effort to
outline a new Internet architecture that would support the burgeoning commercial use of the network.
1994: Netscape Communications is born. Microsoft creates a Web browser for Windows 95.
1994: Yahoo! is created by Jerry Yang and David Filo, two electrical engineering graduate students at Stanford
University. The site was originally called "Jerry and David's Guide to the World Wide Web." The company was later
incorporated in March 1995.
1995: Compuserve, America Online and Prodigy begin to provide Internet access. Amazon.com, Craigslist and eBay go
live. The original NSFNET backbone is decommissioned as the Internet’s transformation to a commercial enterprise is
largely completed.
1995: The first online dating site, Match.com, launches.
1996: The browser war, primarily between the two major players Microsoft and Netscape, heats up. CNET buys tv.com
for $15,000.
1996: A 3D animation dubbed "The Dancing Baby" becomes one of the first viral videos.
1997: Netflix is founded by Reed Hastings and Marc Randolph as a company that sends users DVDs by mail.
1997: PC makers can remove or hide Microsoft’s Internet software on new versions of Windows 95, thanks to a settlement
with the Justice Department. Netscape announces that its browser will be free.
1998: The Google search engine is born, changing the way users engage with the Internet.
1998: Internet Protocol version 6 (IPv6) is introduced to allow for future growth of Internet addresses. The most
widely used protocol at the time is version 4. IPv4 uses 32-bit addresses, allowing for 4.3 billion unique addresses; IPv6, with 128-bit
addresses, allows about 3.4 × 10^38 unique addresses, or 340 trillion trillion trillion.
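The address-space arithmetic in that entry can be checked directly (a quick sketch; Python is used here only as a calculator):

```python
# Number of unique addresses for each IP version.
ipv4_addresses = 2 ** 32   # 32-bit addresses: about 4.3 billion
ipv6_addresses = 2 ** 128  # 128-bit addresses: about 3.4 x 10**38

print(f"IPv4: {ipv4_addresses:,}")    # 4,294,967,296
print(f"IPv6: {ipv6_addresses:.3e}")  # 3.403e+38
```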
1999: AOL buys Netscape. Peer-to-peer file sharing becomes a reality as Napster arrives on the Internet, much to the
displeasure of the music industry.
2000: The dot-com bubble bursts. Web sites such as Yahoo! and eBay are hit by a large-scale denial-of-service attack,
highlighting the vulnerability of the Internet. AOL merges with Time Warner.
2001: A federal judge shuts down Napster, ruling that it must find a way to stop users from sharing copyrighted material
before it can go back online.
2003: The SQL Slammer worm spreads worldwide in just 10 minutes. Myspace, Skype and the Safari Web browser debut.
2003: The blog publishing platform WordPress is launched.
2004: Facebook goes online and the era of social networking begins. Mozilla unveils the Mozilla Firefox browser.
2005: YouTube.com launches. The social news site Reddit is also founded.
2006: AOL changes its business model, offering most services for free and relying on advertising to generate revenue. The
Internet Governance Forum meets for the first time.
2006: Twitter launches. The company's founder, Jack Dorsey, sends out the very first tweet: "just setting up my twttr."
2009: The Internet marks its 40th anniversary.
2010: Facebook reaches 400 million active users.
2010: The social media sites Pinterest and Instagram are launched.
2011: Twitter and Facebook play a large role in the Middle East revolts.
2012: President Barack Obama's administration announces its opposition to major parts of the Stop Online Piracy Act and
the Protect Intellectual Property Act, which would have enacted broad new rules requiring internet service providers to
police copyrighted content. The successful push to stop the bill, involving technology companies such as Google and
nonprofit organizations including Wikipedia and the Electronic Frontier Foundation, is considered a victory for sites such
as YouTube that depend on user-generated content, as well as "fair use" on the Internet.
2013: Edward Snowden, a former CIA employee and National Security Agency (NSA) contractor, reveals that the NSA
had in place a monitoring program capable of tapping the communications of thousands of people, including U.S. citizens.
2013: Fifty-one percent of U.S. adults report that they bank online, according to a survey conducted by the Pew Research
Center.
2015: Instagram, the photo-sharing site, reaches 400 million users, outpacing Twitter, which would go on to reach 316
million users by the middle of the same year.
2016: Google unveils Google Assistant, a voice-activated personal assistant program, marking the entry of the Internet
giant into the "smart" computerized assistant marketplace. Google Assistant joins Amazon's Alexa, Siri from Apple, and
Cortana from Microsoft.
An Internet Standard is documented by a Request for Comments or a set of RFCs. A
specification that is to become a Standard or part of a Standard begins as an Internet Draft, and is
later, usually after several revisions, accepted and published by the RFC Editor as an RFC and
labeled a Proposed Standard.
There were previously three standards-track maturity levels: Proposed Standard, Draft Standard and
Internet Standard. RFC 6410 reduced this to two maturity levels:
Proposed Standard
Internet Standard
In October 2011, RFC 6410 merged the old second and third maturity levels (Draft Standard and Internet Standard) into a single Internet Standard level.
Existing older Draft Standards retain that classification. The IESG can reclassify an old Draft
Standard as Proposed Standard after two years (October 2013).

Internet Standard: An internet standard is a specification that has been approved by the Internet
Engineering Task Force (IETF)

An Internet Standard ensures that hardware and software produced by different vendors can work together

The goals of the Internet Standards Process are:

 technical excellence;
 prior implementation and testing;
 clear, concise, and easily understood documentation;
 openness and fairness; and
 timeliness.
5)
a) Challenges (to Network service providers): How to maximize the capacity and minimize
the cost?
Currently, most service providers conduct capacity planning by collecting data from various
systems and pouring it into spreadsheets on a weekly or monthly basis. This labor-intensive
method is no longer fast enough to keep up with the rapid changes that occur as more and more
customers turn to service providers for everything from wireless phone service and cable TV to
cloud-based IT services.
Capacity planning involves determining when the demand for a given resource – bandwidth,
CPU, disk space, memory, etc. – will outweigh the capacity to deliver it. Done correctly,
capacity planning makes it possible to determine which infrastructure upgrades will deliver a
return in increased customer business and reduce churn. Make a mistake and service providers
may find themselves over-provisioned or not delivering capacity upgrades when they need them
most.
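As an illustration of the capacity-planning calculation described above, the sketch below (hypothetical numbers and a simple compound-growth assumption, not a real provider model) estimates how many months remain before demand outgrows installed capacity:

```python
# Toy capacity-planning sketch (illustrative numbers, not real data):
# given current demand and a monthly growth rate, estimate how many
# months remain before demand outgrows installed capacity.
def months_until_exhaustion(demand_gbps, capacity_gbps, monthly_growth):
    """Return the number of whole months until demand exceeds capacity,
    assuming compound growth at `monthly_growth` (e.g. 0.05 = 5%/month)."""
    months = 0
    while demand_gbps <= capacity_gbps:
        demand_gbps *= 1 + monthly_growth
        months += 1
    return months

# Example: 40 Gbit/s used, 100 Gbit/s installed, demand growing 5%/month.
print(months_until_exhaustion(40, 100, 0.05))  # 19 months of headroom
```

A provider would feed real measurements into a model like this to decide when an upgrade must be ordered so it arrives before the exhaustion date.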
To minimize cost, an ISP should consider the following:
- Offer a local access number so that subscribers can avoid long-distance phone charges while they use the Internet.
- Hire good negotiators who understand the workings of an Internet service provider.
- Keep security programs evolving continuously to stay ahead of hackers.
b) Assume a business man wishes to interconnect north and south regions (e.g., Musanze to
Huye). As expert, discuss which transmission medium to suggest?
The most suitable transmission medium to suggest to this businessman is fiber optic cable. Fiber
optic cable can span very long distances, withstand weather and electromagnetic interference
without a break in service, and offers very fast upload and download speeds. Fiber optic
transmission also remains fast over long distances because the cables experience very little signal
loss, known as attenuation. The cables do not break easily, so the businessman does not have to
worry about replacing them frequently.
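The attenuation advantage can be illustrated with a simple link-budget calculation. The loss figures below are typical textbook values assumed for illustration (roughly 0.35 dB/km for single-mode fiber versus tens of dB/km for copper at high frequencies), not measurements:

```python
# Illustrative link-budget sketch: total signal loss over a long span.
# Attenuation accumulates linearly in dB with distance.
def total_loss_db(loss_db_per_km, distance_km):
    return loss_db_per_km * distance_km

distance = 150  # an assumed long-haul span length in km (e.g. a north-south route)
print(f"Fiber : {total_loss_db(0.35, distance):.1f} dB")  # manageable with amplifiers
print(f"Copper: {total_loss_db(20.0, distance):.1f} dB")  # unusable without many repeaters
```

The orders-of-magnitude difference in accumulated loss is why long-haul links are built on fiber.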
c) What happens (throughput or delay) when we send more data on the network?
When we send more data on the network than it can carry, throughput decreases and delay increases. For
example, if several files are being downloaded at the same time, the bits per second the link can
deliver are partitioned among the transfers, which decreases the throughput of each one, and the
resulting queueing increases the time taken for each file to finish downloading.
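One way to make the load-versus-delay relationship concrete is the classic M/M/1 queueing model (an illustrative model, not from the original text): average time in the system is T = 1/(μ − λ), so as the arrival rate λ approaches the link's service rate μ, delay grows without bound:

```python
# M/M/1 queueing sketch: average delay rises sharply as offered load
# (arrival rate) approaches link capacity (service rate).
def mm1_delay(arrival_rate, service_rate):
    """Average time in system for an M/M/1 queue: T = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

mu = 1000.0  # packets per second the link can serve
for lam in (100.0, 500.0, 900.0, 990.0):
    print(f"load {lam / mu:.0%}: delay {mm1_delay(lam, mu) * 1000:.1f} ms")
```

Running this shows delay growing from about 1 ms at 10% load to 100 ms at 99% load, which matches the intuition in the answer above.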
d) If you are a network manager for a company, how will you increase the reliability of
your company’s network connection?

Reliability of a network is concerned with the ability of the network to carry out the desired
operations, such as communication. Network reliability and performance challenges are often
addressed by adding additional bandwidth; however, there are ways to yield more "good-put"
(good net payload throughput) out of the same network infrastructure.
There are a number of ways to improve the reliability of a network:
Protocol of the network:
TCP (Transmission Control Protocol) is one of the most commonly used protocols between any
two network devices today. TCP provides reliability: lost or corrupted segments are detected, acknowledged and retransmitted.
Connection brokering:
By using an Application Delivery Controller (ADC) as a TCP proxy, a network can keep a few TCP connections open with the
server on one side and multiplex users' TCP connections onto them, reducing the
delay caused by establishing a new connection with the server for every user.
Bandwidth Management:
Congestion is unavoidable. In itself, congestion is not necessarily a problem; the challenge is
how to minimize its effects, which can include packet loss, varying delays, and even
time-outs of different application processes.
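One concrete mechanism behind the TCP-style reliability mentioned above is retransmission with exponential backoff. The sketch below is my own illustration of that retry pattern in isolation (the function and its parameters are hypothetical, not a real TCP implementation):

```python
import time

# Retry-with-exponential-backoff sketch: the pattern reliable transports
# use when a send attempt times out or fails.
def send_with_retries(send, max_attempts=5, base_delay=0.1):
    """Call `send()` until it succeeds or attempts are exhausted.
    Doubles the wait between attempts (exponential backoff)."""
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        if send():
            return attempt          # number of attempts that were needed
        time.sleep(delay)
        delay *= 2
    raise TimeoutError("all retransmission attempts failed")

# Example: a flaky "link" that succeeds on the third try.
attempts = iter([False, False, True])
print(send_with_retries(lambda: next(attempts), base_delay=0.01))  # 3
```

Backing off exponentially also helps with the congestion point above: retries slow down exactly when the network is struggling.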

e) If you are a network security engineer, discuss the network vulnerabilities and strategies
for securing the network?
In computer security, a vulnerability is a weakness which can be exploited by a threat actor, such
as an attacker, to perform unauthorized actions within a computer system. Vulnerabilities can
arise in the physical environment of the system, the personnel, the management and administration
procedures and security measures within the organization, business operation and service
delivery, hardware, software, and communication equipment and facilities. Strategies for
securing the network include firewalls (at the Internet edge and data-center access), intrusion
prevention, malware detection/prevention, 802.1X authentication, web filtering, and spam filtering.

f) Is it possible to use a topology alone or to combine different topologies in a network?


Both are possible, because a network topology is simply the arrangement of the elements of a
communication network. Three basic topologies are used in the construction of computer
networks: bus, ring and star. They are used on their own quite often, and so are combined
topologies, among which the most widely used are star-bus and star-ring.
g) Who owns internet?
No one actually owns the Internet, and no single person or organization controls the Internet in
its entirety. The Internet is more of a concept than an actual tangible entity, and it
relies on a physical infrastructure that connects networks to other networks.
Owning pieces of infrastructure: There are many organizations, corporations, governments,
schools, private citizens and service providers that all own pieces of the infrastructure, but there
is no one body that owns it all. There are, however, organizations that oversee and standardize
what happens on the Internet and assign IP addresses and domain names.

6) Discuss (in your respective groups) the IEEE 802 LAN/MAN standards: frequency
range, specifications, data rate, applications/services.
IEEE 802 LAN/MAN standards: IEEE 802 is a family of IEEE standards. The standards
committee develops and maintains networking standards and recommended practices for local,
metropolitan, and other area networks, using an open and accredited process, and advocates them
on a global basis. The IEEE 802 standards are restricted to networks carrying variable-size packets;
by contrast, in cell relay networks data is transmitted in short, uniformly sized units called cells.
*DATA RATE: Data rates vary widely across the family; the wireless 802.11n standard, for example, supports net data rates from 54 Mbit/s to 600 Mbit/s.
*SPECIFICATIONS: The IEEE 802 Standard comprises a family of networking standards that
cover the physical layer specifications of technologies from Ethernet to wireless. IEEE 802 is
subdivided into 22 parts that cover the physical and data-link aspects of networking. The better
known specifications include 802.3 Ethernet, 802.11 Wi-Fi, 802.15 Bluetooth/ZigBee, and
802.16 WiMAX.
*FREQUENCY: The wireless members of the family, notably IEEE 802.11, specify Media Access Control (MAC) and physical layer
standards for implementing wireless local area network (WLAN) computer communication
in the 900 MHz and 2.4, 3.6, 5 and 60 GHz frequency bands. These are the world’s most widely
used wireless computer networking standards, used in most home and office networks to allow
laptops, printers, and smartphones to talk to each other and access the Internet without connecting
wires.
*Applications/services: Used in everyday life. Technology standards ensure that products and
services perform as intended. They improve the quality of life of countless communities and
individuals worldwide.
They are mostly applied in: *Telecommunications
*Information Technology
*Power generation

Many things we fundamentally rely upon, like email, would not be as broadly
available or as dependable without the IEEE 802 network standards.
It is estimated that greater than 98% of all internet traffic crosses one or more IEEE 802
networks during transmission. Without IEEE 802 standards to build upon computer to computer
connections, simple email, internet access, World Wide Web, and mobile broadband would not
have been possible to the extent we see today.
IEEE802 standards are undeniably an essential foundation of today’s networked world.
