
1. INTRODUCTION

Moore's Law has long offered a roadmap for advances in computing power. In 1965, Gordon Moore predicted that computer processing power would double approximately every 12 months, later revising this prediction to roughly every two years. Developments in computer chip design have kept pace with the current version of Moore's Law, but this pace cannot continue indefinitely. As we will describe below, the laws of physics impose a physical limit on how much processing power can be achieved with a silicon chip.

Similarly, medicine's ability to treat diseases more effectively is limited by the ability of current biotechnology to interface with the processes of the body. We do not yet have an effective way for silicon-based computers to interact directly with the chemical processes of the human body in order to diagnose and treat illnesses. Biological computing has the potential to solve both problems. This paper sets the stage for considering this topic by examining the limitations of the silicon-based computing paradigm and discussing the alternative paradigms. It then focuses on biological computing based on DNA (deoxyribonucleic acid) and considers the benefits and possible problems of this radically different form of computing.

Dept. Of Computer Science & Engineering, SIST
2. Limitations of Current Computing Technology

The processing power of current computing technology, i.e. silicon-based computing, can grow only up to the limits imposed on it by the laws of physics. But there are additional problems with silicon-based technology that make finding viable alternatives even more imperative.

First, let's look at the limitations imposed by physics. The 'processing power' of computers is usually measured by speed: how fast circuits can move information from A to B, and how fast the information can be processed once it gets to B. The traditional computing design paradigm focuses on decreasing the distance that information (in the form of electrical signals) has to travel; in other words, shortening the distance between A and B. This has
meant packing more and more processing elements or transistors into the central
processing chip of the computer. Each of these transistors is essentially a tiny
binary on/off switch. Today, this packing of transistors has reached an amazing
level of density. For instance, a common Pentium IV chip packs 55 million
transistors in a space the size of a dime. By relentlessly pursuing this
miniaturization strategy, computing technology has advanced rapidly in a
comparatively short amount of time. To put the advances in perspective, compare a Pentium-equipped desktop with the ENIAC computer of the 1940s. That device, with its 17,000 vacuum tubes (the transistor's predecessor), weighed 30 tons and filled an entire room. Yet its processing power was less than one hundred-thousandth that of the Pentium.
As relentless as this progress has been thus far, it is dependent upon
continuing to find new means to shrink transistors so as to fit ever more of them
onto a chip. In the coming years, transistors will have decreased in size to such a
great extent that the only way to make them smaller is to construct them out of
individual atoms or small groupings of atoms. Unfortunately, quantum effects of
physics operating on that size scale will prevent the effective transmission of
signals. For example, the Heisenberg Uncertainty Principle states that particles
(such as the electrons that make up the information signals that flow through
computers) can exhibit the strange behavior of being in other places than where
they should be. Researchers can never be completely certain where these particles
are at any given moment. This means that electrons, which should ideally be
speeding down the atomic-scale circuit pathways in future silicon computers,
might be someplace else along with the information they were assigned to carry!
Given that the circuit pathways of computers must be able to transmit information reliably, this quantum effect that emerges at the atomic scale is clearly a problem. Thus, there is a lower size limit that the laws of physics impose upon
silicon-based computers. This limit may be reached as soon as 10-15 years from
now.
Reaching this upper limit to the processing power of silicon-based
computers could be very problematic for businesses. Though current and future
computers will have sufficient power for handling many day-to-day operations,
e.g. email, future business software needed for activities such as hyper-realistic
strategic planning simulations, mapping more efficient airline routes, etc. will
ultimately require more processing power than silicon-based computing will be
able to provide. Software designers are always in the process of designing
software that forces hardware designers to keep adding more power to their
creations. As this continues, it will force silicon-based computers up to their
design limits.

There are two other problems with silicon-based computers. First, the components
out of which computer processing chips are made are toxic, e.g. arsenic, and
therefore present challenges in both fabrication and disposal. Second, silicon-based computers are not very energy efficient: much of the energy they consume is wasted as heat. With these
limitations in mind, let’s look at some alternatives to the current computing
paradigm.

3. Alternatives to Silicon based Computing

Researchers have pursued a number of alternatives to silicon-based computing. These have included biological, optical, and quantum computing. Optical computing replaces the electrical signals that carry information through silicon-based computers with light pulses. Information can travel at the speed of light through the information pathways of an optical computer, far faster than through the commonly used pathways in silicon-based computers.

Quantum computers, by contrast, use quantum states of subatomic particles to represent information values. While information is essentially limited to binary (on-off) values in silicon and optical computers, quantum computing permits each information element to carry multiple values simultaneously. Quantum computing also exploits the phenomenon of quantum entanglement, in which the states of two particles remain correlated no matter how far apart they are. It should be noted that a great deal of development work remains to be done before either optical computing or quantum computing can produce practical devices for commercial use.

Molecule cascade computing is the newest area in the development of alternatives to traditional computing. This technique forms circuits by creating a precise pattern of carbon monoxide molecules on a copper surface. By nudging a single molecule, it has been possible to cause a cascade of molecules, much like toppling dominoes. Different molecules can represent the 1s and 0s of binary information, making calculations possible. While this technique
may make possible circuits hundreds of thousands of times smaller than those
used today, it shares with the other alternatives the fact that a number of problems
must be solved for it to ever be suitable for practical applications.

4. Biological Computing

Biological computing is the use of living organisms or their component parts to perform computing operations or operations associated with computing, e.g. storage. The various forms of biological computing take a different route from those used by quantum or optical computing to overcome the performance limitations that silicon-based computers face.

Rather than focusing on increasing the speed of individual computing operations, biological computing focuses on massive parallelism: the allocation of tiny portions of a computing task to many different processing elements. Each element by itself cannot perform its task quickly, but because there is an incredibly huge number of such elements, each performing a small task, the overall processing operation can be completed far more quickly. Silicon-based computers have used massively parallel processing but will never be capable of the level of parallelism that biological computers can demonstrate. The biological nature of biological computing also makes it uniquely suited to controlling processes that require an interface between biological processes and human technology. The table below compares biological computing with silicon-based computing in several areas, including the component materials, processing scheme, maximum operations per second, presence of toxic components, and energy efficiency.
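The divide-and-conquer idea behind massive parallelism can be sketched in ordinary code: a large task is cut into many small chunks, each handed to a separate worker, and the partial results are combined. This is only an illustrative sketch (the worker pool, chunk size, and the `count_matches` task are invented for the example), not a model of how biological elements actually compute.

```python
from concurrent.futures import ThreadPoolExecutor

def count_matches(chunk, target):
    """Tiny unit of work: count occurrences of `target` in one chunk."""
    return sum(1 for item in chunk if item == target)

def parallel_count(data, target, workers=8):
    """Split the data into many small chunks and farm them out to workers."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(lambda c: count_matches(c, target), chunks)
    return sum(partials)  # combine the partial results

data = [i % 10 for i in range(1000)]
print(parallel_count(data, 7))  # each worker handles only a small slice
```

The point of the sketch is the structure: no single worker is fast, but the work is spread so widely that the whole job finishes quickly.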

Research in the field of biological computing is focused on the development of a number of different, though related, forms of technology. While all of these forms share the biological components listed above, they share little else and are best thought of as distant cousins. Some of these technologies will likely be applicable to a variety of problems, while others are best thought of as tools suited to specific purposes.

5. DNA Computing

INTRODUCTION TO DNA

DNA stands for deoxyribonucleic acid. The complete set of instructions for making an organism is called its GENOME. It contains the master blueprint for all cellular structures and activities for the lifetime of the cell or organism. Found in the nucleus of each of a person's many trillions of cells, the human genome consists of tightly coiled threads of deoxyribonucleic acid (DNA) and associated protein molecules, organized into structures called chromosomes.

In humans, as in other higher organisms, a DNA molecule consists of two strands that wrap around each other to resemble a twisted ladder, whose sides, made of sugar and phosphate molecules, are connected by rungs of nitrogen-containing chemicals called bases. Each strand is a linear arrangement of repeating similar units called nucleotides. A nucleotide is composed of one sugar, one phosphate, and a nitrogenous base.
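The base-pairing rule that holds the two strands together (A pairs with T, G with C) is the property DNA computing exploits, and it is simple to state in code. A minimal sketch:

```python
# Watson-Crick pairing: A <-> T, G <-> C
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """Return the complementary strand, read in the opposite direction."""
    return "".join(PAIR[base] for base in reversed(strand))

print(complement("ATTACG"))  # -> CGTAAT
```

Because pairing is deterministic, one strand fully determines its partner; applying `complement` twice returns the original strand.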

PC to PC:

This scenario suits users who already have Internet access and an audio-capable PC. It can take advantage of integration with other Internet services such as the World Wide Web, instant messaging, e-mail, etc.

PC to telephone or telephone to PC:

Figure 2 PC to Phone or Phone to PC Scenario

In this scenario, PC callers can also reach PSTN users. A gateway that converts the Internet call into a PSTN call has to be used. Traditional telephone users can likewise call a PC through the gateway that connects the IP network with the PSTN.

Telephone to telephone:

Figure 3 Phone-to-Phone Scenarios

The IP network can serve as a dedicated backbone connecting segments of the PSTN. Gateways connect the PSTN to the IP network at each end.

6. BASIC SYSTEM COMPONENTS OF VoIP

There are three major system components to VoIP technology: clients, servers, and gateways.

Clients:

The client comes in two basic forms. The first is a suite of software running on a user's PC that allows the user, through a GUI, to set up and clear voice calls; encode, packetize, and transmit outbound voice information from the user's microphone; and receive, decode, and play inbound voice information through the user's speakers or headset. The other type of client, known as a 'virtual' client, does not have a direct user interface, but resides in gateways and provides an interface for users of POTS (plain old telephone service).

Servers:

In order for IP telephony to work and to be viable as a commercial enterprise, a wide range of complex database operations, both real-time and non-real-time, must occur transparently to the user. Such applications include user validation, rating, accounting, billing, revenue collection, revenue distribution, routing (least cost, least latency, or other algorithms), management of the overall service, downloading of clients, fulfillment of service, registration of users, directory services, and more.

Gateways:

VoIP technology allows voice calls originated and terminated at standard telephones supported by the PSTN to be conveyed over IP networks.
VoIP "gateways" provide the bridge between the local PSTN and the IP network
for both the originating and terminating sides of a call. To originate a call, the
calling party will access the nearest gateway either by a direct connection or by
placing a call over the local PSTN and entering the desired destination phone
number.

The VoIP technology translates the destination telephone number
into the data network address (IP address) associated with a corresponding
terminating gateway nearest to the destination number. Using the appropriate
protocol and packet transmission over the IP network, the terminating gateway
will then initiate a call to the destination phone number over the local PSTN to
completely establish end-to-end two-way communications. Despite the additional
connections required, the overall call set-up time is not significantly longer than
with a call fully supported by the PSTN.
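The number-to-address translation described above can be illustrated as a longest-prefix lookup: the dialed number is matched against dialing prefixes to find the terminating gateway nearest the destination. The routing table, prefixes, and IP addresses below are entirely hypothetical (drawn from documentation address ranges); real deployments use signaling protocols and directory services rather than a static dictionary.

```python
# Hypothetical routing table: dialing prefix -> IP of the terminating gateway
ROUTES = {
    "44": "203.0.113.10",    # country-level gateway
    "4420": "203.0.113.11",  # city-level gateway (more specific prefix)
    "1": "198.51.100.5",     # another country-level gateway
}

def resolve_gateway(number):
    """Pick the gateway with the longest matching dialing prefix."""
    best = None
    for prefix, ip in ROUTES.items():
        if number.startswith(prefix) and (best is None or len(prefix) > len(best[0])):
            best = (prefix, ip)
    return best[1] if best else None

print(resolve_gateway("442071234567"))  # -> 203.0.113.11 (longest match wins)
```

Longest-prefix matching mirrors how telephone routing narrows from country to city codes: the most specific known prefix decides where the call leaves the IP network.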

The gateways must employ a common protocol (for example, H.323, SIP, or a proprietary protocol) to support standard telephony signaling.
The gateways emulate the functions of the PSTN in responding to the telephone's
on-hook or off-hook state, receiving or generating DTMF digits and receiving or
generating call progress tones. Recognized signals are interpreted and mapped to
the appropriate message for relay to the communicating gateway in order to
support call set-up, maintenance, billing and call tear down.

7. BENEFITS OF VOIP

Voice communication will certainly remain a basic form of interaction among people, and a wholesale replacement of the PSTN is hard to implement in the short term. The immediate goal for many VoIP service providers is therefore to reproduce existing telephone capabilities at a significantly lower cost while offering a quality of service competitive with the PSTN. In general, the benefits of VoIP technology are the following:

1. Low cost:

By avoiding traditional telephony access charges and settlement fees, a caller can significantly reduce the cost of long-distance calls. Although the cost reduction is somewhat dependent on future regulation, VoIP certainly adds an alternative to existing PSTN services. Only one physical network is required to carry both voice/fax and data traffic instead of two, which lowers both equipment and maintenance costs.

2. Network efficiency:

Packetized voice offers much higher bandwidth efficiency than circuit-switched voice because it consumes no bandwidth while a party is listening or during pauses in a conversation. This is a big saving when we consider that a significant part of any conversation is silence. Network efficiency can also be
improved by removing the redundancy in certain speech patterns. If we were to
use the same 64 Kbps Pulse Code Modulation (PCM) digital-voice encoding
method in both technologies, we would see that bandwidth consumption of
packetized voice is only a fraction of the consumption of circuit-switched voice.
The packetized voice can take advantage of the latest voice-compression
algorithms to improve efficiency.
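The comparison above can be made concrete with a little arithmetic. The 60%-silence assumption and the 8 kbps low-bit-rate codec figure are illustrative values, and packet header overhead is ignored here; the point is the combined effect of compression and silence suppression.

```python
def voice_bandwidth_kbps(codec_kbps, voice_activity=0.4):
    """Average bandwidth when the assumed 60% silence is not transmitted."""
    return codec_kbps * voice_activity

pcm = 64.0   # circuit switching reserves the full 64 kbps PCM channel continuously
lbr = 8.0    # an assumed low-bit-rate codec payload rate
print(voice_bandwidth_kbps(pcm))  # 25.6 kbps: PCM with silence suppression alone
print(voice_bandwidth_kbps(lbr))  # 3.2 kbps: compression plus silence suppression
```

Even without compression, suppressing silence more than halves the average rate; with a low-bit-rate codec the packetized call uses only a small fraction of the 64 kbps a circuit would reserve.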

3. Simplification and consolidation:

An integrated infrastructure that supports all forms of communication allows more standardization and reduces total equipment and management costs. The combined infrastructure can support bandwidth optimization and a fault-tolerant design. Universal use of the IP protocols for all applications reduces complexity and adds flexibility. Directory services and security services can be more easily shared.

4. Single network infrastructure:

When installing VoIP in the office, only a single cable is required to each desk for both telephone and data, eliminating separate telephone wiring.

5. VoIP uses "soft" switching:

VoIP uses "soft" switching, which eliminates most legacy PBX equipment. This reduces the cost of installing a communications infrastructure and the maintenance cost once it is installed.

6. Simple upgrade path:

The VoIP PBX technology is software based, making it easier to expand, upgrade, and maintain than its traditional telephony counterparts.

7. Bandwidth efficiency:

VoIP can compress more voice calls into the available bandwidth than legacy telephony. IP telephony helps eliminate wasted bandwidth by not transporting the roughly 60% of normal speech that is silence.

IP, the underlying protocol, is supported by most platforms and is independent of the transport protocol used.

8. DEVELOPMENT CHALLENGES

The goal of VoIP developers is to add telephone calling capabilities to IP-based networks, interconnect these with the traditional public telephone network and with private voice networks, maintain current voice quality standards, and preserve the features everyone expects from the telephone. The technical challenges can be summarized as follows.

1. Quality of Service (QoS):

The voice quality should be comparable to what is available using the PSTN, even over networks with varying levels of QoS. The following factors decide VoIP quality:

2. Packet loss:

In order to operate a multi-service packet-based network at a commercially viable load level, some random packet loss is inevitable. This is particularly true for communications over the Internet, where traffic profiles are highly unpredictable and the competitive nature of the business drives corporations to load their networks to the maximum.

Voice codecs are becoming better at reducing sensitivity to packet loss. The main approaches are smaller packet sizes, interpolation (algorithmic regeneration of lost sound), and a technique in which a low-bit-rate sample of each voice packet is appended to the subsequent packet. Through these techniques, and at some cost in bandwidth efficiency, good sound quality can be maintained even in relatively high packet-loss scenarios.
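Interpolation-based concealment can be sketched as follows. This toy version simply averages the neighbouring frames of a lost packet; it is a stand-in for the far more sophisticated regeneration real codecs perform, and the frame format (small lists of samples) is invented for the example.

```python
def conceal(frames):
    """Replace lost frames (None) by interpolating the neighbouring frames."""
    out = []
    for i, frame in enumerate(frames):
        if frame is not None:
            out.append(frame)
            continue
        prev = out[-1] if out else None                       # last good frame
        nxt = next((f for f in frames[i + 1:] if f is not None), None)
        if prev and nxt:
            # sample-wise average of the surrounding frames
            out.append([(a + b) / 2 for a, b in zip(prev, nxt)])
        else:
            out.append(prev or nxt or [0.0])                  # nothing to interpolate
    return out

frames = [[0.1, 0.2], None, [0.3, 0.4]]
print(conceal(frames)[1])  # lost frame rebuilt as the neighbour average
```

The listener hears a plausible, smoothly varying signal instead of a gap, which is why moderate random loss is tolerable for voice in a way it is not for data.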

As techniques for reducing sensitivity to packet loss improve, a new opportunity for even greater efficiency presents itself: suppressing the transmission of voice packets whose loss the encoder determines would fall below the threshold of tolerability at the decoder.

This is particularly attractive in the packet based networking world where
statistical multiplexing favors the reuse of freed-up bandwidth.

3. Delay:

Two problems that result from high end-to-end delay in a voice network are echo and talker overlap. Echo becomes a problem when the round-trip
delay is more than 50 milliseconds. Since echo is perceived as a significant
quality problem, VoIP systems must address the need for echo control and
implement some means of echo cancellation. Talker overlap (the problem of one
caller stepping on the other talker’s speech) becomes significant if the one-way
delay becomes greater than 250 milliseconds. The end-to-end delay budget is
therefore the major constraint and driving requirement for reducing delay through
a packet network.
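The delay budget can be checked with simple addition across the contributing stages. The individual contributions below are assumed, illustrative figures; only the 250 ms talker-overlap threshold comes from the text above.

```python
# One-way delay budget: talker overlap becomes significant past this point
BUDGET_MS = 250

# Assumed, illustrative one-way delay contributions in milliseconds
delays = {
    "analog-to-digital encoding": 20,
    "packetization": 20,
    "network propagation": 70,
    "jitter buffer": 60,
    "digital-to-analog decoding": 10,
}

total = sum(delays.values())
print(total, "ms;", "within budget" if total <= BUDGET_MS else "over budget")
```

Designers work backwards from the 250 ms ceiling: every millisecond spent in one stage (say, a deeper jitter buffer) must be recovered from another (say, a shorter route).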

Propagation delay (the time taken for the information wave-front to travel a given distance through a given medium), jitter buffering, packetization,
analog to digital encoding and digital to analog decoding delays are responsible
for most of the overall delay. Service and wait time through the switching and
transmission elements of the network may be considered trivial given the small
packet sizes and relatively wide bandwidths prevalent on the Internet. It is
generally true that when considering the achievable quality of a given service, the
overall geographic distance traveled by a call is far more important than the
complexity of its routing, (i.e. the number of intermediary nodes or "hop-count").

4. Jitter:

Jitter is the variation in inter-packet arrival time introduced by variable transmission delay over the network. Removing jitter requires collecting packets and holding them long enough for the slowest packets to arrive and be played in the correct sequence, which causes additional delay. The jitter buffer thus trades added delay for removal of the delay variation each packet experiences as it transits the packet network.
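A jitter buffer of the kind described can be sketched as a small priority queue: hold a few packets, then always release the one with the lowest sequence number. Real implementations use adaptive depths and timestamps; the fixed three-packet depth here is an assumption for illustration.

```python
import heapq

class JitterBuffer:
    """Hold packets briefly and release them in sequence order."""

    def __init__(self, depth=3):
        self.depth = depth  # packets to accumulate before playout starts
        self.heap = []

    def push(self, seq, payload):
        heapq.heappush(self.heap, (seq, payload))

    def pop(self):
        """Release the lowest-numbered packet once the buffer is deep enough."""
        if len(self.heap) >= self.depth:
            return heapq.heappop(self.heap)
        return None  # still filling: this waiting is the added delay

buf = JitterBuffer()
for seq, payload in [(2, "b"), (1, "a"), (3, "c")]:  # arrival out of order
    buf.push(seq, payload)
print(buf.pop())  # -> (1, 'a'): playout restored to sequence order
```

The buffering delay is exactly the price paid for reordering: a deeper buffer tolerates slower stragglers but eats further into the end-to-end delay budget.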

5. Overhead:

Each packet carries a header of various sizes that contains identification and routing information. This information, necessary for the handling of each packet, constitutes 'overhead' not present with circuit-switching techniques. Small packet size is important for real-time transmission, since
packet size contributes directly to delay and the smaller the packet size, the less
sensitive a given transmission would be to packet loss. Various new techniques
such as header compression are evolving to reduce the packet overhead in IP
networks. It is likely that packet based networks, of one form or another, will
eventually approach the efficiency, with respect to overhead, of circuit-based
networks.
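The overhead can be quantified. An uncompressed IP/UDP/RTP header stack totals 40 bytes (20 + 8 + 12), so the overhead fraction depends strongly on payload size; the 20-byte and 160-byte payloads below are typical codec frame sizes, used here for illustration.

```python
def overhead_fraction(payload_bytes, header_bytes=40):
    """Fraction of each packet consumed by the IP(20)+UDP(8)+RTP(12) headers."""
    return header_bytes / (header_bytes + payload_bytes)

print(round(overhead_fraction(20), 2))   # small low-bit-rate frame: 0.67
print(round(overhead_fraction(160), 2))  # larger PCM frame: 0.2
```

This is the tension the section describes: small packets keep delay and loss sensitivity down but push overhead toward two-thirds of the bandwidth, which is why header compression matters so much for VoIP.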

6. User friendly design:

The user need not know what technology is being used for the call.
He should be able to use the telephone as he does right now.

7. Easy configuration:

An easy-to-use management interface is needed to configure the equipment. A variety of parameters and options, such as telephony protocols, compression algorithm selection, dialing plans, access controls, PSTN fall-back features, port arrangement, etc., must be taken care of.

8. Addressing/Directories:

Telephone numbers and IP addresses need to be managed in a way that is transparent to the user. PCs used for voice calls may need telephone numbers; IP-enabled telephones need IP addresses, or access to one via DHCP. Internet directory services will need to be extended to include mappings between the two types of addresses.

9. Security issues:

VoIP networks introduce some new risks to carriers and their customers, risks that are not yet fully appreciated. Responding to these threats requires specific techniques, comprehensive multi-layer security policies, and firewalls that can handle the special latency and performance requirements of VoIP.

It is important to remember that a VoIP network is an IP network. Any VoIP device is an IP device, and it is therefore vulnerable to the same types of
attacks as any other IP device. In addition, a VoIP network will almost always
have non-VoIP devices attached to it and be connected to other mission-critical
networks.

Every IP network, regardless of how private it is, eventually winds up connected to the global Internet. Even if it is not possible to directly route a
packet from the "private" network onto the Internet, it is extremely likely that
some host on the "private" network will also be connected to a less private
network. Compromising this host provides an attacker with a gateway into the
presumed secure private network. It's important, therefore, to secure all IP
networks, but VoIP networks have special security requirements. Specific
techniques, comprehensive policies, and VoIP-capable firewalls are needed to do
the job right.

9. VOIP APPLICATIONS

Cross-platform connections

Some VoIP products, including Skype and Gizmo Project, run on Windows and Linux, while Apple's iChat AV runs only on OS X.

Skype, SightSpeed, Gizmo Project, and iChat AV allow you to host either
multiparty voice or videoconference calls. Unlike expensive high-end
conferencing systems designed for large businesses, which are often connected to
a telephone system, these simple desktop VoIP apps can make conferencing easier
—and more affordable. All of these applications allow you to call other Internet
users for free. But if you want to call somebody using his or her telephone
number, as permitted by Skype, Gizmo Project, and the Wengo plug-in, you’ll
pay a basic, per-minute fee. At this writing, neither iChat AV nor SightSpeed permits computer-to-phone calling.

Advanced features cost money

While you can make basic calls for free, more-advanced features
will cost you. For instance, Skype’s voice-mail feature carries a small monthly
charge. Obtaining a permanent phone number from Skype (called a SkypeIn
number) involves an additional fee. Also for a fee, Gizmo Project allows you to
forward your incoming calls to another telephone, such as your cell phone, and
SightSpeed offers extended conferencing and video-messaging features for paid
subscribers. iChat AV users can’t call traditional phone numbers, but they can call
each other, using securely encrypted audio channels on the Internet if all
participants are .Mac subscribers.

Once you become accustomed to a desktop VoIP tool, you may find that VoIP calling becomes part of your daily routine. After all, it's a lot
easier to dial a Skype buddy by double-clicking on a name than it is to look up a
number in Address Book and manually punch it in on your telephone’s keypad. If
you’re into multiplayer Internet games, using a tool like Skype to keep in touch
with your teammates is nice, as it relieves you from having to type text-chat

messages during the game. And if you have relatives in other countries, talking to
them over the Internet will cost you a lot less than placing international long-
distance calls.

VoIP in Military Applications

Military organizations worldwide are currently transitioning their telephony infrastructure from legacy TDM to Next Generation Networks (NGN) based on VoIP technology.
The reasons for the migration taking place within the military
have many similarities to those of the migration in the commercial telephony
space. Some of these similarities include lower OPEX resulting from having a
single consolidated network for data and telephony, and the ability to deliver new
services quickly. However, deployments within the military can have additional
specific benefits when moving to VoIP. Voice over IP, as its name implies,
traverses over IP networks which, when designed correctly, can be more resilient
than TDM networks. IP networks are easier to deploy and manage compared to
their older TDM counterparts.

In the past, significant investments were made in legacy telephony equipment by military organizations. In most cases, this investment included
military purpose built TDM equipment. As a result, the process of migration to
NGN will be gradual. It may take many years before full end-to-end VoIP
communication can be realized - where all handsets are SIP based and TDM
trunks are eliminated.

Mobile VoIP

Today, as demand for cost-efficient communications grows, mobile VoIP technologies that turn a mobile device into a SIP client and use a
data network for sending and receiving communications have become
increasingly important to users. The mobile VoIP market is expected to be worth
$32.2 billion by 2013 and by 2019, half of all mobile calls will be made over all-
IP networks, according to recent industry reports. Mobile VoIP provider REVE
Systems offers operators support for the shift to mobile VoIP with their iTel
Mobile Dialer Express, a mobile application that makes it possible to use VoIP
via any mobile phone and that can be branded by operators. iTel Mobile Dialer
Express supports GPRS, Wi-Fi and Bluetooth for Internet connectivity and can
run on any phone on Symbian or Windows Mobile 5 and 6 platforms.

10. IS VOIP THE FUTURE OF TELECOMMUNICATIONS?

VoIP means that the technology used to send data over the
Internet is now being used to transmit voice as well. The technology is known as
packet switching. Instead of establishing a dedicated connection between two
devices (computers, telephones, etc.) and sending the message "in one piece", this
technology divides the message into smaller fragments, called 'packets'. These
packets are transmitted separately over a decentralized network and when they
reach the final destination, they're reassembled into the original message.
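The packetize-and-reassemble cycle described above can be sketched in a few lines. The four-character packet size is arbitrary, and real networks number packets with protocol sequence fields rather than string offsets; the sketch shows only the principle.

```python
def packetize(message, size=4):
    """Split a message into numbered packets: (offset, chunk) pairs."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Packets may arrive in any order; sort by offset and rejoin them."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = packetize("voice over ip")
packets.reverse()           # simulate out-of-order arrival over the network
print(reassemble(packets))  # -> voice over ip
```

Because each packet carries its own position, the network is free to route the fragments independently over a decentralized mesh, which is precisely what makes packet switching so efficient.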

VoIP allows a much higher volume of telecommunications traffic to flow at much higher speeds than traditional circuits do, and at a significantly
lower cost. VoIP networks are significantly less capital intensive to construct and
much less expensive to maintain and upgrade than legacy networks (traditional
circuit-switched networks). Since VoIP networks are based on Internet protocol,
they can seamlessly and cost-effectively interface with the high technology,
productivity-enhancing services shaping today's business landscape. These
networks can seamlessly interface with web-based services such as virtual portals,
interactive voice response (IVR), and unified messaging packages, integrating
data, fax, voice, and video into one communications platform that can
interconnect with the existing telecommunications infrastructure.

Industry experts see VoIP as a tool that will become the standard
platform for the international calling market. It is strongly believed that the profit
realization VoIP will trigger in the global telecommunications industry will dwarf
the impact of the now ancient "digital revolution".

As with any promising new technology, a myriad of companies are trying to climb aboard the VoIP bandwagon. Currently, however, the industry is
characterized by a high degree of confusion. Most companies, including large
resource-rich national and international telecommunications carriers, are
experiencing enormous difficulty in building effective international VoIP

networks. They are unable to harness the power of VoIP or effectively
communicate the benefits of VoIP to their customers.

We must not ignore the problem of scalability. The system has to be designed so that it can grow, and each segment within the system must be able to
grow. Creating an architecture that can handle billions of minutes of use per
month requires a solution with high call processing capabilities. If we are looking
at a global solution, we have to start from the beginning with a global approach.
And that’s one of the reasons why a fully implemented solution won’t be
available tomorrow.

11. CONCLUSION

Data traffic has traditionally been forced to fit onto the voice
network (using modems, for example). The Internet has created an opportunity to
reverse this integration strategy – voice and facsimile can now be carried over IP
networks, with the integration of video and other multimedia applications close
behind. The Internet and its underlying TCP/IP protocol suite have become the
driving force for new technologies, with the unique challenges of real-time voice
being the latest in a series of developments.

Telephony over the Internet cannot compromise on voice quality, reliability, scalability, or manageability. Future extensions will include
innovative new solutions including conference bridging, voice/data
synchronization, combined real-time and message-based services, text-to-speech
conversion and voice response systems.

The market for VoIP products is established and is beginning its rapid growth phase. Producers in this market must look for ways to improve their time to market if they wish to be market leaders. Buying and integrating
predefined and pre-tested software (instead of custom building everything) is one
of the options. Significant benefits of the "buy vs. build" approach include reduced development time, simplified product integration, lower costs, off-loading of standards-compliance issues, and fewer risks. Software that is known to conform to standards, has built-in accommodation for differences in national
telephone systems, has already been optimized for performance and reliability,
and has “plug and play” capabilities can eliminate many very time-consuming
development tasks.
