
BROADBAND WIRELESS TECHNOLOGIES

UNIT I
XDSL SYSTEMS

Digital Subscriber Line (DSL) is a family of technologies that provides digital data transmission over the wires
of a local telephone network. xDSL refers to the different variants of DSL. DSL originally stood for digital
subscriber loop, although in recent years the term digital subscriber line has been widely adopted as a more
marketing-friendly term for Asymmetric Digital Subscriber Line (ADSL), which is the most popular version of
consumer-ready DSL. DSL can be used at the same time as regular telephone service on the same telephone
line, because DSL occupies the high-frequency band while voice telephony uses the low-frequency band.
Typically, the download speed of consumer DSL services ranges from 256 kilobits per second (kbit/s) to 24,000
kbit/s, depending on DSL technology, line conditions and service level implemented. Typically, upload speed is
lower than download speed for ADSL and equal to download speed for the rarer Symmetric Digital Subscriber
Line (SDSL).
The line length limitations from telephone exchange to subscriber are more restrictive for higher data
transmission rates. Technologies such as Very high bit-rate Digital Subscriber Line (VDSL) provide very high
speed, short-range links as a method of delivering "triple play" services (typically implemented in fiber to the
curb network architectures). Technologies like Gigabit Digital Subscriber Line (GDSL) can further increase the
data rate of DSL. Fiber optic technologies exist today that allow copper-based Integrated Services Digital
Network (ISDN), ADSL and DSL services to be carried over fiber optics.
CommVerge Solutions has provided numerous broadband DSL solutions to major service providers across the
region. Our expertise in these areas greatly benefits our carrier customers and, subsequently, their subscribers
and shareholders.
CommVerge offers a wide array of solutions, partnering with well-known and proven strategic partners, that will
address each operator's requirements in terms of DSL deployment while considering factors such as time to
market, cost efficiency and reliability of such DSL infrastructures.
Features & Benefits:

Unlike traditional analog dial-up, DSL is "always up" and ready to use.

Provides digital data transmission over the existing wires of a local telephone network.

Provides downlink speeds of up to 20 Mbps depending on the DSL technology used.

Requires no massive re-wiring, making the cost of initial deployment financially feasible.

Enables Telecom operators to offer multi-media services.

FIBER TO THE HOME (FTTH)

Fiber to the home (FTTH), also called "fiber to the premises" (FTTP), is the installation and use of optical
fiber from a central point directly to individual buildings such as residences, apartment buildings and businesses
to provide unprecedented high-speed Internet access. FTTH dramatically increases the connection speeds
available to computer users compared with technologies now used in most places.

While FTTH promises connection speeds of up to 100 megabits per second (Mbps) -- 20 to 100 times as fast as
a typical cable modem or DSL (Digital Subscriber Line) connection -- implementing FTTH on a large scale will
be costly because it will require installation of new cable sets over the "last links" from existing optical fiber
cables to individual users. Some communities currently enjoy "fiber to the curb" (FTTC) service, which refers
to the installation and use of optical fiber cable to the curbs near homes or businesses, with a "copper" medium
carrying the signals between the curb and the end users.

HFC SYSTEMS

Hybrid fiber coaxial (HFC) refers to a broadband telecommunications network that combines optical fiber and
coaxial cable.
Hybrid fiber coaxial is used for delivering video, voice telephony, data and other interactive services
over coaxial and fiber optic cables. Hybrid fiber coaxial is globally employed by cable operators.
Hybrid fiber coaxial is also known as hybrid fiber coax.
The fiber-optic network extends from the cable operator’s master head end to the regional head ends and then to
the neighborhood hub site and to fiber-optic nodes serving approximately 25 to 2,000 homes. Master head ends
consist of satellite dishes for the reception of distant video signals and IP aggregation routers.
The master head ends may also house telephony equipment that provides telecommunication service to
communities. The area hub receives video signals from the master head end and combines them with public,
educational and government access cable TV channels, as required by franchising authorities.
The different services are encoded, modulated and upconverted onto radio frequency carriers, combined into a
single electrical signal, and inserted into a broadband optical transmitter. The transmitter converts the electrical
signal to a downstream optically modulated signal, which is sent to the nodes. Fiber-optic cables connect the
head end to optical nodes in star topologies or protected ring topologies.

LEGACY SYSTEMS

A legacy system refers to a computing device or equipment that is outdated, obsolete or no longer in production.
This includes all devices that are unsupported or no longer commonly used by most devices and software
applications.
Typically, a legacy device consists of non-plug-and-play (PnP) devices that lack a Peripheral Component
Interconnect (PCI) interface and require manual configuration and jumper installation. A legacy device also
includes computing equipment rendered obsolete by modern technologies.

For example, because CD drives replaced floppy disk drives, few new computers are distributed with
built-in floppy drives. Similarly, native legacy devices are not supported by most modern software applications.

GIGABIT ETHERNET

Gigabit Ethernet is a version of the Ethernet technology broadly used in local area networks (LANs) for
transmitting Ethernet frames at 1 Gbps. It is used as a backbone in many networks, particularly those of large
organizations. Gigabit Ethernet is an extension to the preceding 10 Mbps and 100 Mbps 802.3 Ethernet
standards. It supports 1,000 Mbps bandwidth while maintaining full compatibility with the installed base of
around 100 million Ethernet nodes.
Gigabit Ethernet usually employs optical fiber connection to transmit information at a very high speed over long
distances. For short distances, copper cables and twisted pair connections are used.
Gigabit Ethernet is abbreviated as GbE or 1 GigE.
Ethernet itself was developed by Dr. Robert Metcalfe and introduced by Intel, Digital and Xerox in the early
1970s, and it quickly became the dominant LAN technology for information and data sharing worldwide. In 1998,
the first Gigabit Ethernet standard, labeled 802.3z, was ratified by the IEEE 802.3 Committee.
Gigabit Ethernet is supported by five physical layer standards. The IEEE 802.3z standard incorporates 1000
BASE-SX for data transmission via multimode optical fiber. In addition, the IEEE 802.3z includes 1000 BASE-
LX over single-mode fiber and 1000 BASE-CX via copper cabling for transmission. These standards use
8b/10b encoding, but the IEEE 802.3ab, known as interface type 1000BASE-T, uses a different encoding
sequence for transmission over twisted pair cable.
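
As a quick numeric illustration of the 8b/10b line coding mentioned above (a standalone sketch, not from the
source text), the following computes the on-wire symbol rate needed to deliver 1 Gbps of payload:

# Sketch: effect of 8b/10b line coding on the Gigabit Ethernet line rate.
# Assumption for illustration: 8 payload bits are carried in 10 line bits.
PAYLOAD_RATE_BPS = 1_000_000_000   # 1 Gbps of user data
CODE_EFFICIENCY = 8 / 10           # 8b/10b coding efficiency

line_rate = PAYLOAD_RATE_BPS / CODE_EFFICIENCY
print(f"Required line rate: {line_rate / 1e9:.2f} Gbaud")   # 1.25 Gbaud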
Gigabit Ethernet offers the following benefits over regular 10 to 100 Mbps Ethernet:
 Transmission rate is 100 times greater
 Reduces bottleneck problems and enhances bandwidth capacity, resulting in superior performance
 Offers full-duplex capacity that can provide virtually doubled bandwidth
 Offers cumulative bandwidth for faster speed by employing gigabit server adapters and switches
 Quality of service (QoS) features reduce latency problems and offer better video and audio services
 Highly affordable to own
 Compatible with existing installed Ethernet nodes
 Transfers a large amount of data quickly

10 GIGABIT ETHERNET

10 Gigabit Ethernet (10 GbE, 10 GE or 10 GigE) is a telecommunications technology that transmits data
packets over Ethernet at a rate of 10 billion bits per second. This innovation extended the traditional and
familiar use of Ethernet in the local area network (LAN) to a much wider field of network application, including
high-speed storage area networks (SAN), wide area networks (WAN) and metropolitan area networks (MAN).

10 Gigabit Ethernet is also known as IEEE 802.3ae.

10 GbE differs from traditional Ethernet in that it takes advantage of full-duplex protocol, in which data is
transmitted in both directions simultaneously by using a networking switch to link devices. This means that the
technology strays from the Carrier Sense Multiple Access/Collision Detection (CSMA/CD) protocols, which
are rules used to determine how network devices will respond when two devices attempt to use a data channel
simultaneously, also called a collision. Since the transmission in 10 GbE is bidirectional, the transfer of frames
is faster.
The advantages of 10 Gigabit Ethernet include:
 Low-cost bandwidth
 Faster switching. 10 GbE uses the same Ethernet format, which allows seamless integration of LAN,
SAN, WAN and MAN. This eliminates the need for packet fragmentation, reassembling, address
translation, and routers.
 Straightforward scalability. Upgrading from 1 GbE to a 10 GbE is simple because their upgrade paths
are similar.

The main issue here is that 10 GbE is optimized for data and therefore does not provide built-in quality of
service, although this may be provided in the higher layers.
100 GIGABIT ETHERNET

100 gigabit Ethernet (100 GbE) is a version and series of Ethernet technologies that enables the transmission of
data at a speed of 100 gigabits per second. 100 GbE is developed and maintained under the IEEE 802.3ba
standard committee to provide high-speed data transfer between long distance channels and nodes.
100 GbE is primarily designed for direct communication between switches. It provides the highest achievable
data transmission speeds and maintains support and integration with existing Ethernet technologies/interfaces.
Over copper media, 100 GbE can reach a distance of about 10 meters, whereas over single-mode fiber the reach
extends to tens of kilometers. A 100 GbE modulation scheme involves breaking the whole bandwidth into two
polarized streams, each of which is further broken down into two streams of 25 Gbps each.

LIMITATIONS OF TWISTED PAIR CABLES

A twisted pair of wires is used to transmit analog and digital signals and supports frequencies from 100 Hz to
5 MHz. It features two copper wires of 1 mm thickness. Some major drawbacks of twisted pair wires are:
*Need point-to-point cable connections
*Highly sensitive and prone to electrical interference
*Can be used only over short distances
*Have limited data rate and bandwidth
*Offer less security than a non-twisted pair

UNIT – 2
Fundamentals of broadband distribution systems

COAXIAL CABLE, or coax is a type of electrical cable that has an inner conductor surrounded by a
tubular insulating layer, surrounded by a tubular conducting shield. Many coaxial cables also have an insulating
outer sheath or jacket. Coaxial cable is a type of transmission line, used to carry high frequency electrical
signals with low losses. It is used in such applications as telephone trunklines, broadband internet networking
cables, high speed computer data buses, carrying cable television signals, and connecting radio
transmitters and receivers to their antennas. It differs from other shielded cables because the dimensions of the
cable and connectors are controlled to give a precise, constant conductor spacing, which is needed for it to
function efficiently as a transmission line.
TYPES

Hard line

Hard line is used in broadcasting as well as many other forms of radio communication. It is a coaxial
cable constructed using round copper, silver or gold tubing, or a combination of such metals, as a shield. Some
lower-quality hard line may use aluminum shielding; however, aluminum is easily oxidized and, unlike silver
oxide, aluminum oxide drastically loses effective conductivity. Therefore, all connections must be air- and water-
tight. The center conductor may consist of solid copper or copper-plated aluminum. Since skin effect is an issue
at RF, copper plating provides sufficient surface for an effective conductor. Some internal applications may
omit the insulation jacket. Hard line can be very thick, typically at least a half inch or 13 mm and up to several
times that, and has low loss even at high power. These large-scale hard lines are almost always used in the
connection between a transmitter on the ground and the antenna or aerial on a tower. Hard line may also be
known by trade names such as Heliax or Cablewave. The dielectric in hard line may consist of polyethylene foam, air, or a pressurized
gas such as nitrogen or desiccated air (dried air). In gas-charged lines, hard plastics such as nylon are used as
spacers to separate the inner and outer conductors. The addition of these gases into the dielectric space reduces
moisture contamination, provides a stable dielectric constant, and provides a reduced risk of internal arcing.
Gas-filled hardlines are usually used on high-power RF transmitters such as television or radio broadcasting,
military transmitters, and high-power amateur radio applications but may also be used on some critical lower-
power applications such as those in the microwave bands. However, in the microwave region, waveguide is
more often used than hard line for transmitter-to-antenna, or antenna-to-receiver applications. The various
shields used in hardline also differ; some forms use rigid tubing, or pipe, while others may use corrugated
tubing, which makes bending easier, as well as reduces kinking when the cable is bent to conform. Smaller
varieties of hard line may be used internally in some high-frequency applications, in particular in equipment
within the microwave range, to reduce interference between stages of the device.

Radiating

Radiating or leaky cable is another form of coaxial cable which is constructed in a similar fashion to hard line;
however it is constructed with tuned slots cut into the shield. These slots are tuned to the specific RF
wavelength of operation or tuned to a specific radio frequency band. This type of cable is designed to provide a
tuned bi-directional "desired" leakage effect between transmitter and receiver. It is often used in elevator shafts, US
Navy Ships, underground transportation tunnels and in other areas where an antenna is not feasible. One
example of this type of cable is Radiax.

RG-6

RG-6 is available in four different types designed for various applications. In addition, the core may be copper
clad steel (CCS) or bare solid copper (BC). "Plain" or "house" RG-6 is designed for indoor or external house
wiring. "Flooded" cable is infused with waterblocking gel for use in underground conduit or direct burial.
"Messenger" may contain some waterproofing but is distinguished by the addition of a steel messenger
wire along its length to carry the tension involved in an aerial drop from a utility pole. "Plenum" cabling is
expensive and comes with a special Teflon-based outer jacket designed for use in ventilation ducts to meet fire
codes. It was developed because the plastics used as the outer jacket and inner insulation in "plain" or
"house" cabling give off toxic gases when burned.

Triaxial cable

Triaxial cable or triax is coaxial cable with a third layer of shielding, insulation and sheathing. The outer
shield, which is earthed (grounded), protects the inner shield from electromagnetic interference from outside
sources.

Twin-axial cable

Twin-axial cable or twinax is a balanced, twisted pair within a cylindrical shield. It allows a nearly perfect
differential signal which is both shielded and balanced to pass through. Multi-conductor coaxial cable is also
sometimes used.

Semi-rigid

Semi-rigid cable is a coaxial form using a solid copper outer sheath. This type of coax offers superior screening
compared to cables with a braided outer conductor, especially at higher frequencies. The major disadvantage is
that the cable, as its name implies, is not very flexible, and is not intended to be flexed after initial forming.

Conformable cable is a flexible reformable alternative to semi-rigid coaxial cable used where flexibility is
required. Conformable cable can be stripped and formed by hand without the need for specialized tools, similar
to standard coaxial cable.

Rigid line

Rigid line is a coaxial line formed by two copper tubes maintained concentric every other meter using PTFE-
supports. Rigid lines cannot be bent, so they often need elbows. Interconnection with rigid line is done with an
inner bullet/inner support and a flange or connection kit. Typically, rigid lines are connected using
standardised EIA RF Connectors whose bullet and flange sizes match the standard line diameters. For each
outer diameter, either 75 or 50 ohm inner tubes can be obtained. Rigid line is commonly used indoors for
interconnection between high power transmitters and other RF-components, but more rugged rigid line with
weatherproof flanges is used outdoors on antenna masts, etc. In the interests of saving weight and costs, on
masts and similar structures the outer line is often aluminium, and special care must be taken to prevent
corrosion. With a flange connector, it is also possible to go from rigid line to hard line. Many broadcasting
antennas and antenna splitters use the flanged rigid line interface even when connecting to flexible coaxial
cables and hard line. Rigid line is produced in a number of different standard sizes.

IMPEDANCE

The best coaxial cable impedances in high-power, high-voltage, and low-attenuation applications were
experimentally determined at Bell Laboratories in 1929 to be 30, 60, and 77 Ω, respectively. For a coaxial cable
with air dielectric and a shield of a given inner diameter, the attenuation is minimized by choosing the diameter
of the inner conductor to give a characteristic impedance of 76.7 Ω. When more common dielectrics are
considered, the best-loss impedance drops down to a value between 52–64 Ω. Maximum power handling is
achieved at 30 Ω.
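
To make these numbers concrete, here is a minimal sketch (not from the original text) that evaluates the
standard characteristic-impedance formula for a coaxial line, Z0 = (59.96 / sqrt(eps_r)) * ln(D/d) ohms, and
shows that a shield-to-conductor diameter ratio of about 3.59 reproduces the 76.7 ohm minimum-attenuation
optimum for an air dielectric:

import math

def coax_impedance(D, d, eps_r=1.0):
    """Characteristic impedance of a coaxial line.
    D: inner diameter of the shield, d: diameter of the inner conductor,
    eps_r: relative permittivity of the dielectric (1.0 for air)."""
    return (59.96 / math.sqrt(eps_r)) * math.log(D / d)

# Minimum-attenuation geometry for an air dielectric: D/d = 3.591
print(coax_impedance(3.591, 1.0))              # ~76.7 ohms
# The same geometry with solid polyethylene (eps_r ~ 2.25):
print(coax_impedance(3.591, 1.0, eps_r=2.25))  # ~51 ohms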

The approximate impedance required to match a Centre-fed dipole antenna in free space (i.e., a dipole without
ground reflections) is 73 Ω, so 75 Ω coax was commonly used for connecting shortwave antennas to receivers.
These typically involve such low levels of RF power that power-handling and high-voltage breakdown
characteristics are unimportant when compared to attenuation. Likewise with CATV: although many broadcast
TV installations and CATV headends use 300 Ω folded dipole antennas to receive off-the-air signals, a 4:1 balun
transformer conveniently matches them to 75 Ω coax, which also possesses low attenuation.

The arithmetic mean between 30 Ω and 77 Ω is 53.5 Ω; the geometric mean is 48 Ω. The selection of 50 Ω as a
compromise between power-handling capability and attenuation is in general cited as the reason for the
number. 50 Ω also works out tolerably well because it corresponds approximately to the drive impedance
(ideally 36 ohms) of a quarter-wave monopole, mounted on a less than optimum ground plane such as a vehicle
roof. The match is better at low frequencies, such as for CB Radio around 27 MHz, where the roof dimensions
are much less than a quarter wavelength, and relatively poor at higher frequencies, VHF and UHF, where the
roof dimensions may be several wavelengths. The match is at best poor, because the antenna drive impedance,
due to the imperfect ground plane, is reactive rather than purely resistive, and so a 36 ohm coaxial cable would
not match properly either. Installations which need exact matching will use some kind of matching circuit at the
base of the antenna, or elsewhere, in conjunction with a carefully chosen (in terms of wavelength) length of
coaxial, such that a proper match is achieved, which will be only over a fairly narrow frequency range.

RG-62 is a 93 Ω coaxial cable originally used in mainframe computer networks in the 1970s and early 1980s (it
was the cable used to connect IBM 3270 terminals to IBM 3274/3174 terminal cluster controllers). Later, some
manufacturers of LAN equipment, such as Datapoint for ARCNET, adopted RG-62 as their coaxial cable
standard. The cable has the lowest capacitance per unit-length when compared to other coaxial cables of similar
size.

All of the components of a coaxial system should have the same impedance to avoid internal reflections at
connections between components. Such reflections may cause signal attenuation; multiple reflections may cause
the original signal to be followed by more than one echo, and in analog video or TV systems this causes ghosting
in the image. Reflections also introduce standing waves, which cause increased losses and can even result in
cable dielectric breakdown with high-power transmission (see Impedance matching). Briefly, if a coaxial cable is
open-circuited, the termination has nearly infinite resistance, which causes reflections; if the coaxial cable is
short-circuited, the termination resistance is nearly zero, and there will be reflections with the opposite polarity.
Reflections are nearly eliminated if the coaxial cable is terminated in a pure resistance equal to its impedance.
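
The open, short and matched cases just described can be checked with the standard reflection-coefficient
relation Γ = (ZL - Z0) / (ZL + Z0); the short sketch below (illustrative only) evaluates it for the three
terminations on a 75 ohm line:

def reflection_coefficient(z_load, z0=75.0):
    """Voltage reflection coefficient at a termination."""
    return (z_load - z0) / (z_load + z0)

print(reflection_coefficient(1e12))   # open circuit  -> ~ +1 (full reflection)
print(reflection_coefficient(0.0))    # short circuit -> -1 (inverted reflection)
print(reflection_coefficient(75.0))   # matched load  -> 0 (no reflection)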

ATTENUATION

Attenuation quantifies the loss of signal and is expressed in dB (decibels). In terms of voltage (reception), 6 dB
of attenuation halves the signal; in terms of power (transmission), the signal is halved every 3 dB. Attenuation in
a coaxial cable depends on the frequency and on the length of the cable itself. The higher the operating
frequency, the greater the attenuation. By convention, attenuation figures are quoted for a length of 100 meters.
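
A quick numeric check of the dB rules above (a standalone sketch, not from the source):

def power_ratio(db):
    """Fraction of power remaining after db decibels of attenuation."""
    return 10 ** (-db / 10)

def voltage_ratio(db):
    """Fraction of voltage remaining after db decibels of attenuation."""
    return 10 ** (-db / 20)

print(power_ratio(3))    # ~0.50 -> power halves every 3 dB
print(voltage_ratio(6))  # ~0.50 -> voltage halves every 6 dB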
The cable attenuation is determined by:
- Diameter of the center conductor
- Quality of copper and its drawing
- Dielectric quality

The central conductor (with its dielectric) is the weakest part of the cable. For this reason, when the cable is
pulled into cable pipes, the conductor, the braid and the foil should be joined together and the cable pulled from
the sheath; this is the best option because it allows a force of at least 25 kg to be exerted (for example on the
5 mm DIGISAT 122 Expert model).

RETURN LOSS:

In telecommunications, return loss is the loss of power in the signal returned/reflected by a discontinuity in
a transmission line or optical fiber. This discontinuity can be a mismatch with the terminating load or with a
device inserted in the line. It is usually expressed as a ratio in decibels (dB):

RL(dB) = 10 log10(Pi / Pr)

where RL(dB) is the return loss in dB, Pi is the incident power and Pr is the reflected power.

Return loss is related to both standing wave ratio (SWR) and reflection coefficient (Γ). Increasing return
loss corresponds to lower SWR. Return loss is a measure of how well devices or lines are matched. A
match is good if the return loss is high. A high return loss is desirable and results in a lower insertion
loss.

Return loss is used in modern practice in preference to SWR because it has better resolution for small
values of reflected wave.
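
The relationships between return loss, reflection coefficient and SWR mentioned above can be summarized in a
small sketch (illustrative helper names, assuming the standard definitions):

import math

def return_loss_db(gamma):
    """Return loss in dB from the magnitude of the reflection coefficient."""
    return -20 * math.log10(abs(gamma))

def swr(gamma):
    """Standing wave ratio from the reflection coefficient magnitude."""
    g = abs(gamma)
    return (1 + g) / (1 - g)

gamma = 0.1                    # a well-matched line
print(return_loss_db(gamma))   # 20 dB return loss (high = good match)
print(swr(gamma))              # SWR ~ 1.22 (close to the ideal 1.0)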

SHIELDING

Cable shielding functions as an electromagnetic energy interceptor: it prevents electrical interference from
traveling to the cable's center conductor and disrupting the data signal.
This article focuses on the coaxial cable shield, its effectiveness and what to look for when buying cable. The
shield is easy to examine, which provides a lot of information about the cable quality. Of course, other parts of
the cable are important, too, but they are not discussed here in any detail. The basic components of a coaxial
cable, from the inside out, are the center conductor, dielectric, one or more shield layers and jacket (figure 1). A
significant part of the cost to manufacture coaxial cables is the outer conductor, or shield. Depending on the
cable construction, the shield may use braided bare- or tinned-copper wires, a conductive foil tape such as
aluminum, a corrugated or smooth solid copper or aluminum tube outer conductor, or some combination. It is
intuitive that the more shield coverage, the better. Some shield types, such as a tubular or wrapped shield,
completely enclose the dielectric and center conductor. As a practical matter, a single-braid shield alone cannot
achieve 100% coverage. The best individual braids achieve 95%. Many low-quality cables have around
50-60% coverage, or even less.
Shield purpose
The shield serves four basic purposes. The first is to keep the desired electrical currents inside, and the second is
to keep the undesired currents outside. In radio astronomy, the desired currents are from celestial radio waves
coupled by the radio telescope antenna into the coaxial cable transmission line. Undesired currents are those
from terrestrial transmitters and other radio frequency interference (RFI) sources that are coupled to the cable
from the outside. In all practical cables some undesired RFI energy can leak into the cable and some of the
desired energy can leak out. The basic leakage mechanisms are radio currents diffusing through the shield
materials, inductive coupling via the magnetic fields setup as the currents flow in the cable and by capacitive
(electrostatic) coupling through holes or gaps in the shield, such as in the weave of a braided shield. The third
purpose of the cable shield is to provide a return path for currents used to power tower-mounted electronics
such as a preamplifier through the coax center conductor. Finally, the fourth purpose is to provide a path to
earth ground for foreign voltages and currents on the coaxial cable due to lightning events, accidental power
cross and static build-up (for additional details, see [1]). The last two purposes are beyond the scope of this
article.
Shield coverage and shielding effectiveness
The use of shield coverage in advertising literature and datasheets is a marketing gimmick. It is the percentage
of actual metal surface area to the total surface area of the cylinder underneath the shield. Shield coverage can
be calculated from the geometry and dimensions of the cable components (figure 2) [2], but it provides only a
rough indication of performance or effectiveness. While it is true that cables with higher coverage may provide
better shielding than cables with lower coverage, shield coverage as a percentage is a physical quantity with an
ambiguous relationship to effectiveness and says nothing about frequency effects. Technically appropriate terms
that describe shield effectiveness are screening efficiency, screening factor and transfer impedance. It is very
difficult to accurately calculate shield effectiveness, but it can be measured.
Screening efficiency is a measure of the current transfer ratio and screening factor is a measure of the voltage
transfer ratio, where transfer ratio means from inside to outside or outside to inside. Both can be expressed as a
linear ratio or logarithmic in dB. Transfer impedance is the quantity most commonly used to describe shield
effectiveness .
Impedance is the ratio of voltage to current in a circuit. For cable shields, the transfer impedance is the ratio of
the voltage set up in a disturbed circuit to the current flowing in the disturbing circuit. In a receiving system, such
as a radio telescope, the disturbed circuit is that of the coaxial cable transmission line and the disturbing circuit
is the electrical environment surrounding the cable.

MULTIPLEXING

The purpose of multiplexing is to share the bandwidth of a single transmission channel among several users.
Two multiplexing methods are commonly used in fiber optics:
1. Time-division multiplexing (TDM)
2. Wavelength-division multiplexing (WDM)

Time-Division Multiplexing (TDM)

In time-division multiplexing, time on the information channel, or fiber, is shared among the many data
sources. The multiplexer MUX can be described as a type of “rotary switch,” which rotates at a very high
speed, individually connecting each input to the communication channel for a fixed period of time. The process
is reversed on the output with a device known as a demultiplexer, or DEMUX. After each channel has been
sequentially connected, the process repeats itself. One complete cycle is known as a frame. To ensure that each
channel on the input is connected to its corresponding channel on the output, start and stop frames are added to
synchronize the input with the output. TDM systems may send information using any of the digital modulation
schemes described (analog multiplexing systems also exist). This is illustrated in Figure 8-15.

Figure 8-15 Time-division multiplexing system

The amount of data that can be transmitted using TDM is given by the MUX output rate, defined as

MUX output rate = N × Maximum input rate

where N is the number of input channels and the maximum input rate is the highest data rate in bits/second of
the various inputs.

The bandwidth of the communication channel must be at least equal to the MUX output rate. Another
parameter commonly used in describing the information capacity of a TDM system is the channel-switching
rate. This is equal to the number of inputs visited per second by the MUX and is defined as

Channel switching rate = Input data rate × Number of channels
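
As a small illustration of the two formulas above (hypothetical numbers, not from the source):

# Sketch: TDM capacity calculations from the two formulas above.
N = 8                        # number of input channels
max_input_rate = 2_000_000   # highest input data rate, bits/second

mux_output_rate = N * max_input_rate
channel_switching_rate = max_input_rate * N   # inputs visited per second

print(f"MUX output rate: {mux_output_rate / 1e6:.1f} Mbps")          # 16.0 Mbps
print(f"Channel switching rate: {channel_switching_rate:,} per second")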

Wavelength-Division Multiplexing (WDM)

In wavelength-division multiplexing, each data channel is transmitted using a slightly different wavelength
(different color). With use of a different wavelength for each channel, many channels can be transmitted
through the same fiber without interference. This method is used to increase the capacity of existing fiber optic
systems many times. Each WDM data channel may consist of a single data source or may be a combination of
a single data source and a TDM (time-division multiplexing) and/or FDM (frequency-division multiplexing)
signal. Dense wavelength-division multiplexing (DWDM) refers to the transmission of multiple closely spaced
wavelengths through the same fiber. For any given wavelength λ and corresponding frequency f, the
International Telecommunications Union (ITU) defines a standard frequency spacing ∆f of 100 GHz, which
translates into a ∆λ of 0.8-nm wavelength spacing. This follows from the relationship

∆λ = λ · ∆f / f

WDM systems operate in the 1550-nm window because of the low attenuation characteristics of glass at 1550 nm
and the fact that erbium-doped fiber amplifiers (EDFA) operate in the 1530-nm to 1570-nm range. Commercially
available systems today can multiplex up to 128 individual wavelengths at 2.5 Gb/s or 32 individual
wavelengths at 10 Gb/s (see Figure 8-17). Although the ITU grid specifies that each transmitted wavelength in
a DWDM system is separated by 100 GHz, systems currently under development have been demonstrated that
reduce the channel spacing to 50 GHz and below (< 0.4 nm). As the channel spacing decreases, the number of
channels that can be transmitted increases, thus further increasing the transmission capacity of the system.
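
A quick check of the ∆λ relationship above (standalone sketch):

C = 299_792_458.0    # speed of light, m/s

wavelength = 1550e-9  # operating wavelength, m
delta_f = 100e9       # ITU grid spacing, Hz

f = C / wavelength    # optical carrier frequency (~193.4 THz)
delta_lambda = wavelength * delta_f / f
print(f"Channel spacing: {delta_lambda * 1e9:.2f} nm")   # ~0.80 nm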

COAXIAL PASSIVE COMPONENTS

Filters

High and low pass filters are used to filter out unwanted frequencies. For example, when an area of the
network does not need return path operation, a high pass trunk filter can be used to filter out
possible ingress and also isolate the subscriber area from the return network. Diplex filters are used to combine
or separate forward and return path frequencies.

Attenuators and equalizers

Inline attenuators are used when extra attenuation is needed in the signal path. With F-female / F-male
connectors, the attenuators can be mounted directly on an F-connector without the need for adapters.

Digital line

Digital line splitters, taps and multitaps are a versatile family of indoor passives. By offering a wide choice of
different housings and a complete line of tap-off and splitting ratios, the digital line family offers the highest
possible flexibility in designing and installing an indoor access network.

Galvanic isolators
Galvanic isolators are used to galvanically isolate the coaxial access network from subscriber premises. They
prevent problems caused by potential differences from occurring on sensitive devices such as LCD and plasma
screens, VoIP devices and cable modems when these are connected directly to the system outlet. The isolators
withstand 2000 VDC with 0.7 mA leakage current (1 minute) and 230 VAC RMS with 2.0 mA RMS (50/60 Hz).

OPTICAL BASICS

Optics is the branch of physics that deals with light and its properties and behavior. It is a vast science covering
many simple and complex subjects ranging from the reflection of light off a metallic surface to create an image,
to the interaction of multiple layers of coating to create a high optical density rugate notch filter. As such, it is
important to learn the basic theoretical foundations governing the electromagnetic spectrum, interference,
reflection, refraction, dispersion, and diffraction before picking the best component for one's optics, imaging,
and/or photonics applications.

THE ELECTROMAGNETIC SPECTRUM

Light is a type of electromagnetic radiation usually characterized by the length of the radiation of interest,
specified in terms of wavelength, or lambda (λ). Wavelength is commonly measured in nm (10^-9 meters) or μm
(10^-6 meters). The electromagnetic spectrum encompasses all wavelengths of radiation ranging from long
wavelengths (radio waves) to very short wavelengths (gamma rays); Figure 1 illustrates this vast spectrum. The
most relevant wavelengths to optics are the ultraviolet, visible, and infrared ranges. Ultraviolet (UV) rays,
defined as 1 - 400nm, are used in tanning beds and are responsible for sunburns. Visible rays, defined as 400 -
750nm, comprise the part of the spectrum that can be perceived by the human eye and make up the colors
people see. The visible range is responsible for rainbows and the familiar ROYGBIV - the mnemonic many
learn in school to help memorize the wavelengths of visible light starting with the longest wavelength to the
shortest. Lastly, infrared (IR) rays, defined as 750nm - 1000μm, are used in heating applications. IR radiation
can be broken up further into near-infrared (750nm - 3μm), mid-wave infrared (3 - 30μm) and far-infrared (30 -
1000μm).
Figure 1: Electromagnetic Spectrum
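
Wavelength and frequency are linked by c = λf; this short sketch (not in the original text) converts the band
edges quoted above into frequencies:

C = 299_792_458.0   # speed of light in vacuum, m/s

for name, wavelength_nm in [("UV/visible edge", 400), ("visible/IR edge", 750)]:
    f_hz = C / (wavelength_nm * 1e-9)
    print(f"{name}: {wavelength_nm} nm -> {f_hz / 1e12:.0f} THz")
# 400 nm ~ 749 THz, 750 nm ~ 400 THz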

INTERFERENCE

Isaac Newton (1643 - 1727) was one of the first physicists to propose that light was composed of small
particles. A century later, Thomas Young (1773 - 1829) proposed a new theory of light which demonstrated
light's wave qualities. In his double-slit experiment, Young passed light through two closely spaced slits and
found that the light interfered with itself (Figure 2). This interference could not be explained if light were purely
a particle, but could be if light were a wave. Though light has both particle and wave characteristics, known as
wave-particle duality, the wave theory of light is important in optics while the particle theory is important in
other branches of physics.

Interference occurs when two or more waves of light add together to form a new pattern. Constructive
interference occurs when the peaks and troughs of the waves align with each other, while destructive interference
occurs when the troughs of one wave align with the peaks of the other (Figure 3). In Figure 3, the peaks are
indicated with blue and the troughs with red and yellow. Constructive interference of two waves results in brighter bands
of light, whereas destructive interference results in darker bands. In terms of sound waves, constructive
interference can make sound louder while destructive interference can cause dead spots where sound cannot be
heard.

Interference is an important theoretical foundation in optics. Thinking of light as waves of radiation similar to
ripples in water can be extremely useful. In addition, understanding this wave nature of light makes the
concepts of reflection, refraction, dispersion and diffraction discussed in the following sections easier to
understand.

Figure 2: Thomas Young's Double-Slit Experiment

REFLECTION

Reflection is the change in direction of a wavefront when it hits an object and returns at an angle. The law of
reflection states that the angle of incidence (angle at which light approaches the surface) is equal to the angle of
reflection (angle at which light leaves the surface). Figure 4 illustrates reflection from a first surface mirror.
Ideally, if the reflecting surface is smooth, all of the reflected rays will be parallel, defined as specular, or
regular, reflection. If the surface is rough, the rays will not be parallel; this is referred to as diffuse, or irregular,
reflection. Mirrors are known for their reflective qualities which are determined by the material used and the
coating applied.
Figure 4: Reflection from a First Surface Mirror

REFRACTION

While reflection causes the angle of incidence to equal the angle of reflection, refraction occurs when the
wavefront changes direction as it passes through a medium. The degree of refraction is dependent upon the
wavelength of light and the index of refraction of the medium. Index of refraction (n) is the ratio of the speed of
light in a vacuum (c) to the speed of light within a given medium (v). This can be mathematically expressed by
Equation 1. Index of refraction is a means of quantifying the effect of light slowing down as it enters a high
index medium from a low index medium (Figure 5).

n = c / v     (1)

n1 sin θ1 = n2 sin θ2     (2)

Figure 5: Light Refraction from a Low Index to a High Index Medium


where n1 is the index of the incident medium, θ1 is the angle of the incident ray, n2 is the index of the
refracted/reflected medium, and θ2 is the angle of the refracted/reflected ray.
If the angle of incidence is greater than a critical angle θc (when the angle of refraction = 90°), then light is
reflected instead of refracted. This process is referred to as total internal reflection (TIR). Figure 6 illustrates
TIR within a given medium.
Figure 6: Total Internal Reflection
TIR is mathematically expressed by Equation 3:

θc = sin^-1(n2 / n1)     (3)

Total Internal Reflection is responsible for the sparkle one sees in diamonds. Due to their high index of
refraction, diamonds exhibit a high degree of TIR which causes them to reflect light at a variety of angles, or
sparkle. Another notable example of TIR is in fiber optics, where light entering one end of a glass or plastic
fiber optic will undergo several reflections throughout the fiber's length until it exits the other end (Figure 7).
Since TIR occurs for a critical angle, fiber optics have specific acceptance angles and minimum bend radii
which dictate the largest angle at which light can enter and be reflected and the smallest radii the fibers can be
bent to achieve TIR.
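
A small numeric sketch of Equation 3 and the related fiber acceptance angle (the numerical aperture
NA = sqrt(n1^2 - n2^2) is a standard result, added here for illustration with typical index values):

import math

n_core = 1.48   # core refractive index (typical silica fiber)
n_clad = 1.46   # cladding refractive index

# Equation 3: critical angle for total internal reflection
theta_c = math.degrees(math.asin(n_clad / n_core))
print(f"Critical angle: {theta_c:.1f} deg")   # ~80.6 deg

# Numerical aperture and acceptance half-angle in air
na = math.sqrt(n_core**2 - n_clad**2)
print(f"Acceptance half-angle: {math.degrees(math.asin(na)):.1f} deg")   # ~14 deg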
Figure 7: Total Internal Reflection in a Single Fiber Optic

DISPERSION
Dispersion is a measure of how much the index of refraction of a material changes with respect to wavelength.
Dispersion also determines the separation of wavelengths known as chromatic aberration (Figure 8). A glass
with high dispersion will separate light more than a glass with low dispersion. One way to quantify dispersion is
to express it by the Abbe number. The Abbe number (vd) is a function of the refractive index of a material at the
F (486.1nm), d (587.6nm), and C (656.3nm) wavelengths of light (Equation 4).

vd = (nd - 1) / (nF - nC)     (4)
The chromatic aberration caused by dispersion is responsible for the familiar rainbow effect one sees in optical
lenses, prisms, and similar optical components. Dispersion can be a highly desirable phenomenon, as in the case
of an equilateral prism used to split light into its component colors. However, in other applications, dispersion
can be detrimental to a system's performance.

Figure 8: Dispersion through a Prism
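
Evaluating Equation 4 for a common crown glass (N-BK7 index values quoted from standard glass catalogs;
shown here as an illustrative check, not taken from the source text):

# Abbe number for N-BK7 crown glass (catalog refractive indices)
n_d = 1.5168   # at 587.6 nm
n_F = 1.5224   # at 486.1 nm
n_C = 1.5143   # at 656.3 nm

v_d = (n_d - 1) / (n_F - n_C)
print(f"Abbe number: {v_d:.1f}")   # ~63.8 -> low dispersion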

DIFFRACTION

The interference patterns created by Thomas Young's double-slit experiment can also be characterized by the
phenomenon known as diffraction. Diffraction usually occurs when waves pass through a narrow slit or around
a sharp edge. In general, the greater the difference between the size of the wavelength of light and the width of
the slit or object the wavelength encounters, the greater the diffraction. The best example of diffraction is
demonstrated using diffraction gratings. A diffraction grating's closely spaced, parallel grooves cause incident
monochromatic light to bend, or diffract. The degree of diffraction creates specific interference patterns. Figures
9 and 10 illustrate various patterns achieved with diffractive optics. Diffraction is the underlying theoretical
foundation behind many applications using diffraction gratings, spectrometers, monochromators, laser
projection heads, and a host of other components.

Figure 9: Multi-Line Red Laser Diffraction Pattern

Figure 10: Dot Matrix Red Laser Diffraction Pattern

The basic theoretical foundations governing the electromagnetic spectrum, interference, reflection, refraction,
dispersion, and diffraction are important stepping stones to more complex optical concepts. Light's wave
properties explain a great deal of optics; understanding the fundamental concepts of optics can greatly increase
one's understanding of the way light interacts with a variety of optical, imaging, and photonics components.

SINGLE MODE & MULTI MODE FIBER

Fiber optic cable functions as a "light guide," guiding the light introduced at one end of the cable through to the
other end. The light source can be either a light-emitting diode (LED) or a laser.

The light source is pulsed on and off, and a light-sensitive receiver on the other end of the cable converts the
pulses back into the digital ones and zeros of the original signal.

Even laser light shining through a fiber optic cable is subject to loss of strength, primarily through dispersion
and scattering of the light within the cable itself. The faster the laser fluctuates, the greater the risk of
dispersion. Light strengtheners, called repeaters, may be necessary to refresh the signal in certain applications.
While fiber optic cable itself has become cheaper over time, an equivalent length of copper cable still costs less
per foot, though it carries far less capacity. Fiber optic cable connectors and the equipment needed to install
them are also still more expensive than their copper counterparts.

Single-mode cable is a single strand of glass fiber (most applications use two fibers) with a diameter of 8.3 to 10
microns that has one mode of transmission. Single-mode fiber has a relatively narrow diameter through which
only one mode will propagate, typically at 1310 or 1550 nm. It carries higher bandwidth than multimode fiber,
but requires a light source with a narrow spectral width. Synonyms are mono-mode optical fiber, single-mode
fiber, single-mode optical waveguide and uni-mode fiber.

Single-mode fiber is used in many applications where data is sent at multiple frequencies (WDM, Wavelength-
Division Multiplexing), so only one cable is needed (single-mode on one single fiber).

Single-mode fiber gives you a higher transmission rate and up to 50 times more distance than multimode, but it
also costs more. Single-mode fiber has a much smaller core than multimode. The small core and single light-
wave virtually eliminate any distortion that could result from overlapping light pulses, providing the least signal
attenuation and the highest transmission speeds of any fiber cable type.

Single-mode optical fiber is an optical fiber in which only the lowest order bound mode can propagate at the
wavelength of interest typically 1300 to 1320nm.


Multimode cable has a somewhat larger diameter, with common diameters in the 50-to-100 micron range for
the light-carrying component (in the US the most common size is 62.5 μm). In most applications in which
multimode fiber is used, two fibers are used (WDM is not normally used on multimode fiber). Plastic optical
fiber (POF) is a newer plastic-based cable which promises performance similar to glass cable over very short
runs, but at a lower cost.

Multimode fiber gives you high bandwidth at high speeds over medium distances (10 to 100 Mbps, and Gigabit
over runs of 275 m to 2 km). Light waves are dispersed into numerous paths, or modes, as they travel through the
cable's core, typically at 850 or 1300 nm. Typical multimode fiber core diameters are 50, 62.5, and 100
micrometers. However, in long cable runs (greater than 3000 feet [914.4 meters]), the multiple paths of light can
cause signal distortion at the receiving end, resulting in unclear and incomplete data transmission, so designers
now call for single-mode fiber in new applications using Gigabit and beyond.
Fiber optics did not become generally viable until 1970, when Corning Glass Works was able to produce a
fiber with a loss of 20 dB/km. It had been recognized that optical fiber would be feasible for telecommunication
transmission only if glass could be developed so pure that attenuation would be 20 dB/km or less; that is, 1% of
the light would remain after traveling 1 km. Today's optical fiber attenuation ranges from 0.5 dB/km to
1000 dB/km depending on the optical fiber used. Attenuation limits are based on intended application.
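
The "1% after 1 km at 20 dB/km" statement can be verified directly (standalone sketch, not from the source):

def remaining_fraction(atten_db_per_km, length_km):
    """Fraction of optical power remaining after a fiber run."""
    total_db = atten_db_per_km * length_km
    return 10 ** (-total_db / 10)

print(remaining_fraction(20, 1))    # 0.01 -> 1% remains, as stated
print(remaining_fraction(0.5, 1))   # ~0.89 -> modern low-loss fiber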
The applications of optical fiber communications have increased at a rapid rate, since the first commercial
installation of a fiber-optic system in 1977. Telephone companies began early on, replacing their old copper
wire systems with optical fiber lines. Today's telephone companies use optical fiber throughout their system as
the backbone architecture and as the long-distance connection between city phone systems.

Cable television companies have also begun integrating fiber-optics into their cable systems. The trunk lines
that connect central offices have generally been replaced with optical fiber. Some providers have begun
experimenting with fiber to the curb using a fiber/coaxial hybrid. Such a hybrid allows for the integration of
fiber and coaxial at a neighborhood location. This location, called a node, would provide the optical receiver
that converts the light impulses back to electronic signals. The signals could then be fed to individual homes via
coaxial cable.

A Local Area Network (LAN) is a collective group of computers, or computer systems, connected to each other
and allowing for shared program software or databases. Colleges, universities, office buildings, and industrial
plants, just to name a few, all make use of optical fiber within their LAN systems.

Power companies are an emerging group that have begun to utilize fiber-optics in their communication systems.
Most power utilities already have fiber-optic communication systems in use for monitoring their power grid
systems.

Some 10 billion digital bits can be transmitted per second along an optical fiber link in a commercial network,
enough to carry tens of thousands of telephone calls. Hair-thin fibers consist of two concentric layers of high-
purity silica glass, the core and the cladding, which are enclosed by a protective sheath. Light rays modulated
into digital pulses with a laser or a light-emitting diode move along the core without penetrating the cladding.

The light stays confined to the core because the cladding has a lower refractive index—a measure of its ability
to bend light. Refinements in optical fibers, along with the development of new lasers and diodes, may one day
allow commercial fiber-optic networks to carry trillions of bits of data per second.

Total internal reflection confines light within optical fibers (similar to looking down a mirror made in the shape
of a long paper towel tube). Because the cladding has a lower refractive index, light rays reflect back into the
core if they encounter the cladding at a shallow angle (red lines). A ray that exceeds a certain "critical" angle
escapes from the fiber (yellow line).

STEP-INDEX MULTIMODE FIBER has a large core, up to 100 microns in diameter. As a result, some of the
light rays that make up the digital pulse may travel a direct route, whereas others zigzag as they bounce off the
cladding. These alternative pathways cause the different groupings of light rays, referred to as modes, to arrive
separately at a receiving point. The pulse, an aggregate of different modes, begins to spread out, losing its well-
defined shape. The need to leave spacing between pulses to prevent overlapping limits bandwidth, that is, the
amount of information that can be sent. Consequently, this type of fiber is best suited for transmission over
short distances, in an endoscope, for instance.
GRADED-INDEX MULTIMODE FIBER contains a core in which the refractive index diminishes gradually
from the center axis out toward the cladding. The higher refractive index at the center makes the light rays
moving down the axis advance more slowly than those near the cladding. Also, rather than zigzagging off the
cladding, light in the core curves helically because of the graded index, reducing its travel distance. The
shortened path and the higher speed allow light at the periphery to arrive at a receiver at about the same time as
the slow but straight rays in the core axis. The result: a digital pulse suffers less dispersion.

SINGLE-MODE FIBER has a narrow core (eight microns or less), and the index of refraction between the core
and the cladding changes less than it does for multimode fibers. Light thus travels parallel to the axis, creating
little pulse dispersion. Telephone and cable television networks install millions of kilometers of this fiber every
year.
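
The single-mode condition can be stated with the standard normalized frequency (V-number): a step-index fiber
is single-mode when V = (2πa/λ)·NA < 2.405. This is a textbook result, sketched below with illustrative index
and geometry values (not taken from the text above):

import math

a = 4.1e-6             # core radius, m (8.2 micron core diameter)
wavelength = 1310e-9   # operating wavelength, m
n_core, n_clad = 1.4504, 1.4457   # illustrative refractive indices

na = math.sqrt(n_core**2 - n_clad**2)
v = (2 * math.pi * a / wavelength) * na
print(f"V = {v:.2f} -> {'single-mode' if v < 2.405 else 'multimode'}")   # V ~ 2.29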

UNIT III

OFDM

What is OFDM: Orthogonal Frequency Division Multiplexing


OFDM, Orthogonal Frequency Division Multiplexing, uses multiple closely spaced carriers, each carrying low-
rate data, for resilient communications.
OFDM, Orthogonal Frequency Division Multiplexing is a form of signal waveform or modulation that provides
some significant advantages for data links.
Accordingly, OFDM, Orthogonal Frequency Division Multiplexing is used for many of the latest wide
bandwidth and high data rate wireless systems including Wi-Fi, cellular telecommunications and many more.
The fact that OFDM uses a large number of carriers, each carrying low bit rate data, means that it is very
resilient to selective fading, interference, and multipath effects, as well as providing a high degree of spectral
efficiency.
Early systems using OFDM found the processing required for the signal format was relatively high, but with
advances in technology, OFDM presents few problems in terms of the processing required.
Development of OFDM
The use of OFDM and multicarrier modulation in general has come to the fore in recent years as it provides an
ideal platform for wireless data communications transmissions.
However, the concept of OFDM technology was first investigated in the 1960s and 1970s during research into
methods for reducing interference between closely spaced channels. In addition, there were other requirements
to achieve error-free data transmission in the presence of interference and selective propagation
conditions.
Initially the use of OFDM required large levels of processing and accordingly it was not viable for general use.
Some of the first systems to adopt OFDM were digital broadcasting systems; here OFDM was able to provide a
highly reliable form of data transport over a variety of signal path conditions. One example was DAB digital
radio, which was introduced in Europe and other countries; the Norwegian Broadcasting Corporation (NRK)
launched the first service on 1st June 1995. OFDM was also used for digital television.
Later, processing power increased as a result of rising integration levels, enabling OFDM to be considered for the
4G mobile communications systems which started to be deployed from around 2009. OFDM was also adopted
for Wi-Fi and a variety of other wireless data systems.
What is OFDM?
OFDM is a form of multicarrier modulation. An OFDM signal consists of a number of closely spaced
modulated carriers. When modulation of any form - voice, data, etc. is applied to a carrier, then sidebands
spread out either side. It is necessary for a receiver to be able to receive the whole signal to be able to
successfully demodulate the data. As a result when signals are transmitted close to one another they must be
spaced so that the receiver can separate them using a filter and there must be a guard band between them. This
is not the case with OFDM. Although the sidebands from each carrier overlap, they can still be received without
the interference that might be expected, because they are orthogonal to one another. This is achieved by having
the carrier spacing equal to the reciprocal of the symbol period.
Traditional selection of signals on different channels
To see how OFDM works, it is necessary to look at the receiver. This acts as a bank of demodulators,
translating each carrier down to DC. The resulting signal is integrated over the symbol period to regenerate the
data from that carrier. The same demodulator also demodulates the other carriers. Because the carrier spacing is
equal to the reciprocal of the symbol period, the other carriers have a whole number of cycles in the symbol
period and their contribution sums to zero - in other words, there is no interference contribution.
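
The orthogonality claim can be demonstrated numerically: correlating one subcarrier against another over one
symbol period sums to (essentially) zero when the spacing is the reciprocal of the symbol period. A minimal
numpy sketch with illustrative parameters:

import numpy as np

N = 64               # samples per OFDM symbol
n = np.arange(N)

def subcarrier(k):
    """Complex subcarrier k; spacing of 1/T means k whole cycles per symbol."""
    return np.exp(2j * np.pi * k * n / N)

# Correlate subcarrier 3 against subcarriers 3 and 4 over one symbol period
same = np.vdot(subcarrier(3), subcarrier(3)) / N
other = np.vdot(subcarrier(3), subcarrier(4)) / N
print(abs(same))    # 1.0  -> the carrier demodulates itself
print(abs(other))   # ~0.0 -> the orthogonal neighbor contributes nothing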
Basic concept of OFDM, Orthogonal Frequency Division Multiplexing
One requirement of the OFDM transmitting and receiving systems is that they must be linear. Any non-linearity
will cause interference between the carriers as a result of inter-modulation distortion. This will introduce
unwanted signals that would cause interference and impair the orthogonality of the transmission.
In terms of the equipment to be used, the high peak-to-average ratio of multi-carrier systems such as OFDM
requires the RF final amplifier on the output of the transmitter to be able to handle the peaks whilst the average
power is much lower, and this leads to inefficiency. In some systems the peaks are limited. Although this
introduces distortion that results in a higher level of data errors, the system can rely on the error correction to
remove them.
Data on OFDM
The traditional format for sending data over a radio channel is to send it serially, one bit after another. This
relies on a single channel and any interference on that single frequency can disrupt the whole transmission.
OFDM adopts a different approach. The data is transmitted in parallel across the various carriers within the
overall OFDM signal. The data is split into a number of parallel "substreams": the overall data rate is that of the
original stream, but the rate of each substream is much lower, and the symbols are spaced further apart in
time.
This reduces interference among symbols and makes it easier to receive each symbol accurately while
maintaining the same throughput.
The lower data rate in each stream means that the interference from reflections is much less critical. This is
achieved by adding a guard band time or guard interval into the system. This ensures that the data is only
sampled when the signal is stable and no new delayed signals arrive that would alter the timing and phase of the
signal. This can be achieved far more effectively within a low data rate substream.
Guard interval on OFDM signals
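
In practical OFDM systems the guard interval is usually implemented as a cyclic prefix: the tail of each symbol
is copied to its front, so delayed copies arriving within the guard time do not break orthogonality. A brief sketch
(illustrative only, not from the source text):

import numpy as np

N, CP = 64, 16   # FFT size and cyclic-prefix (guard interval) length
qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j])
data = qpsk[np.random.randint(0, 4, N)]   # random QPSK subcarrier symbols

symbol = np.fft.ifft(data)                      # time-domain OFDM symbol
tx = np.concatenate([symbol[-CP:], symbol])     # prepend the guard interval

# Receiver: discard the guard interval, then demodulate with an FFT
rx = np.fft.fft(tx[CP:])
print(np.allclose(rx, data))   # True -> data recovered intact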
The distribution of the data across a large number of carriers in the OFDM signal has some further advantages.
Nulls caused by multi-path effects or interference on a given frequency only affect a small number of the
carriers, the remaining ones being received correctly. By using error-coding techniques, which does mean
adding further data to the transmitted signal, it enables many or all of the corrupted data to be reconstructed
within the receiver. This can be done because the error correction code is transmitted in a different part of the
signal.
Key features of OFDM
The OFDM scheme differs from traditional FDM in the following interrelated ways:
 Multiple carriers (called subcarriers) carry the information stream
 The subcarriers are orthogonal to each other.
 A guard interval is added to each symbol to absorb the channel delay spread and minimize intersymbol interference.
OFDM advantages & disadvantages
OFDM advantages
OFDM has been used in many high data rate wireless systems because of the many advantages it provides.
 Immunity to selective fading: One of the main advantages of OFDM is that it is more resistant to frequency
selective fading than single carrier systems, because it divides the overall channel into multiple narrowband
signals that are affected individually as flat fading sub-channels.
 Resilience to interference: Interference appearing on a channel may be bandwidth limited and in this way
will not affect all the sub-channels. This means that not all the data is lost.
 Spectrum efficiency: Using closely spaced overlapping sub-carriers, a significant OFDM advantage is that it
makes efficient use of the available spectrum.
 Resilient to ISI: Another advantage of OFDM is that it is very resilient to inter-symbol and inter-frame
interference. This results from the low data rate on each of the sub-channels.
 Resilient to narrow-band effects: Using adequate channel coding and interleaving it is possible to recover
symbols lost due to the frequency selectivity of the channel and narrow band interference. Not all the data is
lost.
 Simpler channel equalisation: One of the issues with CDMA systems was the complexity of the channel
equalisation, which had to be applied across the whole channel. An advantage of OFDM is that, because it uses
multiple sub-channels, the channel equalisation becomes much simpler, as the sketch below shows.
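The simplification can be seen in a few lines: after the receiver's FFT, each sub-channel sees the channel as a single complex gain, so equalisation reduces to one complex division per sub-carrier. The three-tap channel below is an invented example:

import numpy as np

N = 8
rng = np.random.default_rng(2)
data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)   # QPSK symbols

h = np.array([1.0, 0.4, 0.2])          # assumed multipath channel (3 taps)
H = np.fft.fft(h, N)                   # per-subcarrier channel gains

# With a long-enough cyclic prefix the channel acts multiplicatively on each
# subcarrier, so "equalisation" is just one division per subcarrier.
received = H * data                    # frequency-domain view of the channel
equalised = received / H               # one-tap (zero-forcing) equaliser

assert np.allclose(equalised, data)
print("all", N, "subcarriers recovered with one-tap equalisation")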
OFDM disadvantages
Whilst OFDM has been widely used, there are still a few disadvantages to its use which need to be addressed
when considering its use.
 High peak to average power ratio: An OFDM signal has a noise-like amplitude variation and a relatively
large dynamic range, or peak to average power ratio. This impacts the RF amplifier efficiency, as the amplifiers
need to be linear and accommodate the large amplitude variations, and these factors mean the amplifier cannot
operate with a high efficiency level.
 Sensitive to carrier offset and drift: Another disadvantage of OFDM is that it is sensitive to carrier frequency
offset and drift. Single carrier systems are less sensitive.
OFDM, orthogonal frequency division multiplexing has gained a significant presence in the wireless market
place. The combination of high data capacity, high spectral efficiency, and its resilience to interference as a
result of multi-path effects means that it is ideal for the high data applications that have become a major factor
in today's communications scene.
DMA

DMA stands for "Direct Memory Access" and is a method of transferring data from the computer's RAM to another
part of the computer without processing it using the CPU. While most data that is input or output from your
computer is processed by the CPU, some data does not require processing, or can be processed by another device.
In these situations, DMA can save processing time and is a more efficient way to move data from the computer's
memory to other devices. In order for devices to use direct memory access, they must be assigned to a DMA
channel. Each type of port on a computer has a set of DMA channels that can be assigned to each connected device.
For example, a PCI controller and a hard drive controller each have their own set of DMA channels.

For example, a sound card may need to access data stored in the computer's RAM, but since it can process the data
itself, it may use DMA to bypass the CPU. Video cards that support DMA can also access the system memory and
process graphics without needing the CPU. Ultra DMA hard drives use DMA to transfer data faster than previous
hard drives that required the data to first be run through the CPU.
An alternative to DMA is the Programmed Input/Output (PIO) interface, in which all data transmitted between
devices goes through the processor. A newer protocol for the ATA/IDE interface is Ultra DMA, which provides a
burst data transfer rate of up to 33 MBps. Hard drives that come with Ultra DMA/33 also support PIO modes 1, 3, and
4, and multiword DMA mode 2 at 16.6 MBps.
DMA Transfer Types
Memory To Memory Transfer
In this mode a block of data from one memory address is moved to another memory address. The current
address register of channel 0 is used to point to the source address and the current address register of channel 1 is used
to point to the destination address. In the first transfer cycle, the data byte from the source address is loaded into the
temporary register of the DMA controller, and in the next transfer cycle the data from the temporary register is
stored in the memory location pointed to by the destination address. After each data transfer the current address
registers are decremented or incremented according to the current settings. The channel 1 current word count register
is also decremented by 1 after each data transfer. When the word count of channel 1 goes to FFFFH, a terminal
count (TC) is generated, which activates the EOP output, terminating the DMA service.
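A toy simulation of this register behaviour is sketched below. It mirrors the mechanism just described rather than the 8237's real programming interface, and the memory contents, addresses and count are invented for illustration:

def dma_mem_to_mem(memory, src, dst, count):
    """Sketch of an 8237-style memory-to-memory transfer.

    memory : bytearray standing in for system RAM
    src    : channel-0 current address register (source)
    dst    : channel-1 current address register (destination)
    count  : channel-1 current word count register
    """
    while True:
        temp = memory[src]          # cycle 1: source byte -> temporary register
        memory[dst] = temp          # cycle 2: temporary register -> destination
        src += 1                    # current address registers increment
        dst += 1                    # (they could equally be set to decrement)
        count -= 1                  # channel-1 word count decrements per byte
        if count < 0:               # wrapping past zero is the terminal count:
            break                   # EOP is activated and the service ends

memory = bytearray(b"HELLO DMA------")
dma_mem_to_mem(memory, src=0, dst=9, count=5)   # count=5 moves 6 bytes
print(memory)                                    # bytearray(b'HELLO DMAHELLO ')

Note that a programmed count of 5 moves six bytes, which matches the real controller's convention of signalling TC when the count wraps below zero (FFFFH).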
Auto initialize
In this mode, during the initialization the base address and word count registers are loaded simultaneously with the
current address and word count registers by the microprocessor. The address and the count in the base registers
remain unchanged throughout the DMA service.
After the first block transfer i.e. after the activation of the EOP signal, the original values of the current address and
current word count registers are automatically restored from the base address and base word count register of that
channel. After auto initialization the channel is ready to perform another DMA service, without CPU intervention.
DMA Controller
The controller is integrated into the processor board and manages all DMA data transfers. Transferring data
between system memory and an I/O device requires two steps. Data goes from the sending device to the DMA
controller and then to the receiving device. The microprocessor gives the DMA controller the location, destination,
and amount of data that is to be transferred. Then the DMA controller transfers the data, allowing the
microprocessor to continue with other processing tasks. When a device needs to use the Micro Channel bus to send
or receive data, it competes with all the other devices that are trying to gain control of the bus. This process is
known as arbitration. The DMA controller does not arbitrate for control of the bus; instead, the I/O device that is
sending or receiving data (the DMA slave) participates in arbitration. It is the DMA controller, however, that takes
control of the bus when the central arbitration control point grants the DMA slave's request.

NETWORK DESIGN

Network design is a category of systems design that deals with data transport mechanisms. As with other systems
design disciplines, network design follows an analysis stage, where requirements are generated, and precedes
implementation, where the system (or relevant system component) is constructed. The objective of network design
is to satisfy data communication requirements while minimizing expense. Requirement scope can vary widely from
one network design project to another based on geographic particularities and the nature of the data requiring
transport.
Network analysis may be conducted at an inter-organizational, organizational, or departmental level. The
requirements generated during the analysis may therefore define an inter-network connecting two or more
organizations, an enterprise network that connects the departments of a single organization, or a departmental
network to be designed around specific divisional needs. Inter-networks and enterprise networks often span
multiple buildings, some of which may be hundreds or thousands of miles apart. The distance between physical
connections often dictates the type of technology that must be used to facilitate data transmission.
Components that exist within close physical proximity (usually within the same building) and can be connected to
each other directly or through hubs or switches using owned equipment are considered part of a local area network
(LAN). It is generally impractical and often impossible to connect the equipment of multiple buildings as a single
LAN, so individual LANs are instead interconnected to form a greater network, such as a metropolitan area
network (MAN) or wide area network (WAN).
MANs may be constructed where buildings are located close enough to each other to facilitate a reliable high-speed
connection (usually less than 50 kilometers or 30 miles). Greater distances generally result in much slower
connections, which are often leased from common carriers to create WANs. Due to the close proximity of
equipment, LAN connections offer the best performance and control (usually with speeds around 100 Mbps) and
WAN connections the worst (with many machines often sharing a single connection of less than 2 Mbps).

CAPACITY:

Capacity is the complex measurement of the maximum amount of data that may be transferred between network
locations over a link or network path. Because of the number of intertwined measurement variables and scenarios,
measurements of actual network capacity are rarely accurate.
Capacity is also known as throughput.
Capacity depends on the following variables, which are never constant:
 Network engineering
 Subscriber services
 Rate at which handsets enter and leave a covered cell site area
Wireless carriers are pushed to increase network capacity to accommodate user demand for high-bandwidth
services. Until recently, subscribers used wireless networks to make calls or send Short Message Service
(SMS)/Multimedia Message Service (MMS) messages. Today, capacity is required to handle increased subscribers
and additional services, including:
 Web browsing
 Facebook updates
 Digital file downloads, like e-books
 Streaming audio/video
 Online multiplayer games
Because the marginal cost of added network capacity is low, providers focus on offering packaged and a la carte
services, such as location-based add-ons and products, like ring tones, to create additional revenue with negligible
effect on operational expense.

POWER CONTROL:

Power control, broadly speaking, is the intelligent selection of transmitter power output in a communication
system to achieve good performance within the system.[1] The notion of "good performance" can depend on
context and may include optimizing metrics such as link data rate, network capacity, outage probability, geographic
coverage and range, and life of the network and network devices. Power control algorithms are used in many
contexts, including cellular networks, sensor networks, wireless LANs, and DSL modems.
Transmit power control
Transmit power control is a technical mechanism used within some networking devices in order to prevent too
much unwanted interference between different wireless networks (e.g. the owner's network and the neighbour's
network).
The network devices supporting this feature include IEEE 802.11h Wireless LAN devices in the 5 GHz band
compliant with IEEE 802.11a. The idea of the mechanism is to automatically reduce the used transmission output
power when other networks are within range. Reduced power means reduced interference problems and increased
battery capacity. The power level of a single device can be reduced by 6 dB, which should result in an accumulated
power level reduction (the sum of radiated power of all devices currently transmitting) of at least 3 dB (half of the
power).
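The dB arithmetic in that statement can be checked in a couple of lines (a generic dB-to-power-ratio conversion, not anything specific to 802.11h):

import math

def db_to_ratio(db):
    """Convert a power change in dB to a linear power ratio."""
    return 10 ** (db / 10)

print(f"-6 dB -> {db_to_ratio(-6):.2f}x power")   # one device: about 0.25x
print(f"-3 dB -> {db_to_ratio(-3):.2f}x power")   # aggregate: about 0.50x (half)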

WCDMA NETWORK PLANNING:

Planning:
 Planning should meet current standards and demands and also comply with future requirements.
 Uncertainty of future traffic growth and service needs must be accounted for.
 High bit rate services require knowledge of coverage and capacity enhancement methods.
 Real constraints apply: coexistence and co-operation of 2G and 3G for old operators, and environmental
constraints for new operators.
 Network planning depends not only on the coverage but also on the load.
Objectives of Radio network planning
Capacity:
To support the subscriber traffic with sufficiently low blocking and delay.
Coverage:
To ensure the availability of the service across the entire service area.
Quality:
To link the capacity and the coverage while still providing the required QoS.
Costs:
To enable an economical network implementation when the service is established, and a controlled
network expansion during the life cycle of the network.
Radio network planning process [figure omitted]

Capacity estimation in a CDMA cell [figure omitted]

Impact of uncertainties on the capacity in the cell [figure omitted]

MC-CDMA

Multi-carrier code-division multiple access (MC-CDMA) is a multiple access scheme used in OFDM-based
telecommunication systems, allowing the system to support multiple users at the same time over the same
frequency band.
MC-CDMA spreads each user symbol in the frequency domain. That is, each user symbol is carried over multiple
parallel subcarriers, but it is phase-shifted (typically by 0 or 180 degrees) according to a code value. The code values
differ per subcarrier and per user. The receiver combines all subcarrier signals, weighting them to compensate for
varying signal strengths and to undo the code shift. The receiver can separate the signals of different users because
these have different (e.g. orthogonal) code values.
Since each data symbol occupies a much wider bandwidth (in hertz) than the data rate (in bit/s), a ratio of signal to
noise-plus-interference (if defined as signal power divided by total noise plus interference power in the entire
transmission band) of less than 0 dB is feasible.
One way of interpreting MC-CDMA is to regard it as a direct-sequence CDMA signal (DS-CDMA), which is
transmitted after it has been fed through an inverse FFT (fast Fourier transform).
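A minimal sketch of that interpretation follows. The length-4 Walsh codes and user symbols are assumptions chosen for illustration: each user's symbol is spread by a +/-1 code across the subcarriers (the 0/180-degree phase shifts mentioned above), the users are summed and passed through an IFFT, and the receiver despreads in the frequency domain:

import numpy as np

# Length-4 Walsh (orthogonal) codes: one row per user (assumed for illustration).
walsh = np.array([[ 1,  1,  1,  1],
                  [ 1, -1,  1, -1],
                  [ 1,  1, -1, -1],
                  [ 1, -1, -1,  1]])

user_symbols = np.array([1.0, -1.0, 1.0, -1.0])     # one BPSK symbol per user

# Transmit: spread each symbol across all subcarriers with the user's code,
# sum the users, then send the composite through an inverse FFT.
freq = walsh.T @ user_symbols          # per-subcarrier composite signal
tx = np.fft.ifft(freq)                 # the DS-CDMA signal "fed through an IFFT"

# Receive: FFT back to the subcarriers, then despread with each user's code.
rx_freq = np.fft.fft(tx)
for u in range(4):
    estimate = np.real(rx_freq @ walsh[u]) / 4      # correlate and normalise
    print(f"user {u}: sent {user_symbols[u]:+.0f}, recovered {estimate:+.1f}")

Because the codes are orthogonal, each correlation recovers exactly one user's symbol even though all four share the same subcarriers.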
Code Division Multiple Access (CDMA)
In CDMA, the narrowband message signal is multiplied by a very large bandwidth signal called the spreading
signal.
The spreading signal is a pseudo-noise code sequence that has a chip rate which is orders of magnitude greater
than the data rate of the message.
All users use the same carrier frequency and may transmit simultaneously.
Each user has its own pseudorandom codeword which is approximately orthogonal to all other codewords.

Power control:
Provided by each base station in a cellular system and assures that each mobile within the base station coverage
area provides the same signal level to the base station receiver. This solves the problem of a nearby subscriber
overpowering the base station receiver and drowning out the signals of faraway subscribers. Power control is
implemented at the base station by rapidly sampling the radio signal strength indicator (RSSI) levels of each mobile
and then sending a power change command over the forward radio link. Out-of-cell mobiles provide interference
which is not under the control of the receiving base station.
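A much-simplified sketch of that closed loop is given below. The target level, step size and channel model are all invented for illustration; real CDMA systems run this loop many hundreds of times per second:

import random

TARGET_DBM = -100.0    # assumed target receive level at the base station
STEP_DB = 1.0          # assumed per-command power adjustment

tx_power_dbm = 20.0    # mobile's initial transmit power
path_loss_db = 125.0   # assumed (slowly varying) path loss

random.seed(3)
for slot in range(8):
    path_loss_db += random.uniform(-0.5, 0.5)      # channel drifts slightly
    rssi = tx_power_dbm - path_loss_db             # level seen at base station

    # Base station samples the RSSI and sends a power command on the forward link.
    command = -STEP_DB if rssi > TARGET_DBM else +STEP_DB
    tx_power_dbm += command                        # mobile obeys the command

    print(f"slot {slot}: RSSI {rssi:6.1f} dBm -> command {command:+.0f} dB, "
          f"mobile now at {tx_power_dbm:.1f} dBm")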

CELLULAR MOBILE COMMUNICATION BEYOND 3G:

3G, short for third generation, is the third generation of wireless mobile telecommunications technology. It is the
upgrade from 2G and 2.5G GPRS networks, offering faster internet speeds. It is based on a set of standards for
mobile devices and mobile telecommunications services and networks that comply with the International
Mobile Telecommunications-2000 (IMT-2000) specifications of the International Telecommunication Union. 3G
finds application in wireless voice telephony, mobile Internet access, fixed wireless Internet access, video calls and
mobile TV.
3G telecommunication networks support services that provide an information transfer rate of at least 0.2 Mbit/s.
Later 3G releases, often denoted 3.5G and 3.75G, also provide mobile broadband access of several Mbit/s to
smartphones and mobile modems in laptop computers. This ensures it can be applied to wireless voice telephony,
mobile Internet access, fixed wireless Internet access, video calls and mobile TV technologies.
UNIT II
BER
Bit Error Rates
• There is a theoretical limit on how much information a channel can carry
• Bit error rate depends on the SINR and the modulation (see the sketch below)
• This is why wireless link layers use more complex chip/bit encoding
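For example, for BPSK on an additive white Gaussian noise channel the theoretical bit error rate is the standard closed form Pb = Q(sqrt(2*Eb/N0)). The sketch below evaluates it for a few signal-to-noise ratios (the SNR values are arbitrary):

import math

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber(ebn0_db):
    """Theoretical BPSK bit error rate over an AWGN channel."""
    ebn0 = 10 ** (ebn0_db / 10)
    return q_function(math.sqrt(2 * ebn0))

for snr in (0, 4, 8, 12):
    print(f"Eb/N0 = {snr:2d} dB -> BER = {bpsk_ber(snr):.2e}")

The steep fall of the BER with SNR is why link layers trade raw bit rate for more robust chip/bit encodings at low SINR.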
UNIT V

COMPARISON TO ALTERNATE BROADBAND ACCESS NETWORKS:

Optical access network technologies [figure omitted]

ACTIVE VS PASSIVE FTTx COMPARISON [table omitted]

GENERIC ACCESS REFERENCE MODEL [figure omitted]


ACCESS NETWORK DEPLOYMENT:

ADSL

Asymmetric Digital Subscriber Line (ADSL) is asymmetric because of its relatively high capacity to
download data compared with its lower upload capacity. ADSL supports a loop of up to 18,000 feet from the phone
company and is capable of transmitting at speeds of up to 8 Mbps over ordinary twisted copper pairs. ADSL allows
for a splitter box that lets users talk on the telephone at the same time data is being transmitted. The asymmetric
speed of ADSL is appropriate for home users, who typically draw more from the Internet than they send out to it.
ADSL uses carrierless amplitude phase modulation (CAP) or discrete multitone (DMT).
xDSL
xDSL ranges from 6.1 Mbps to 155 Mbps incoming, and from 600 Kbps to 15 Mbps outgoing. The "x" is a wildcard
that can stand for ADSL (asymmetric), SDSL (symmetric) or another variant. xDSL uses digital encoding to provide
more bandwidth over existing twisted-pair telephone lines (POTS). Many iterations of xDSL allow the phone to be
used for voice communication at the same time the line is being used to transmit data. This is because phone
conversations use frequencies below 4 kHz, above which xDSL tends to operate. Several types of xDSL modems
come with "splitters" for using voice and data concurrently.
xDSL connections use frequencies above 4 kHz to achieve their great bandwidth. This comes at the expense of
attenuation. The two most popular types of line coding, CAP and DMT, use lower frequencies and are therefore
able to support longer loops between the user and the phone company.
WIRELESS ACCESS NETWORK:

A wireless network is a computer network that uses wireless data connections between network nodes. Wireless
networking is a method by which homes, telecommunications networks and business installations avoid the costly
process of introducing cables into a building, or as a connection between various equipment locations. Wireless
telecommunications networks are generally implemented and administered using radio communication. This
implementation takes place at the physical level (layer) of the OSI model network structure.
Examples of wireless networks include cell phone networks, wireless local area networks (WLANs), wireless
sensor networks, satellite communication networks, and terrestrial microwave networks.
TYPES OF WIRELESS NETWORKS:
Wireless PAN:
Wireless personal area networks (WPANs) connect devices within a relatively small area, generally within a
person's reach.[5] For example, both Bluetooth radio and invisible infrared light provide a WPAN for
interconnecting a headset to a laptop. ZigBee also supports WPAN applications.[6] Wi-Fi PANs are becoming
commonplace (2010) as equipment designers start to integrate Wi-Fi into a variety of consumer electronic devices.
Intel "My WiFi" and Windows 7 "virtual Wi-Fi" capabilities have made Wi-Fi PANs simpler and easier to set up.
Wireless LAN:
Wireless LANs are often used for connecting to local resources and to the Internet
A wireless local area network (WLAN) links two or more devices over a short distance using a wireless distribution
method, usually providing a connection through an access point for internet access. The use of spread-spectrum or
OFDM technologies may allow users to move around within a local coverage area, and still remain connected to the
network.
Products using the IEEE 802.11 WLAN standards are marketed under the Wi-Fi brand name. Fixed wireless
technology implements point-to-point links between computers or networks at two distant locations, often using
dedicated microwave or modulated laser light beams over line of sight paths. It is often used in cities to connect
networks in two or more buildings without installing a wired link. To connect to Wi-Fi, devices such as routers, or
hotspots shared from mobile smartphones, are often used.
Wireless ad hoc network:
A wireless ad hoc network, also known as a wireless mesh network or mobile ad hoc network (MANET), is a
wireless network made up of radio nodes organized in a mesh topology. Each node forwards messages on behalf of
the other nodes and each node performs routing. Ad hoc networks can "self-heal", automatically re-routing around a
node that has lost power. Various network layer protocols are needed to realize ad hoc mobile networks, such as
Destination-Sequenced Distance-Vector routing, Associativity-Based Routing, Ad hoc On-demand Distance Vector
routing, and Dynamic Source Routing.
Wireless MAN:
Wireless metropolitan area networks are a type of wireless network that connects several wireless LANs.
WiMAX is a type of Wireless MAN and is described by the IEEE 802.16 standard.
Wireless WAN:
Wireless wide area networks are wireless networks that typically cover large areas, such as between neighbouring
towns and cities, or city and suburb. These networks can be used to connect branch offices of business or as a
public Internet access system. The wireless connections between access points are usually point-to-point microwave
links using parabolic dishes on the 2.4 GHz and 5.8 GHz bands, rather than the omnidirectional antennas used with
smaller networks. A typical system contains base station gateways, access points and wireless bridging relays.
Other configurations are mesh systems where each access point acts as a relay also. When combined with
renewable energy systems such as photovoltaic solar panels or wind systems they can be stand-alone systems.
Cellular network:
Example of a frequency reuse factor, or pattern, of 1/4 [figure omitted]
A cellular network or mobile network is a radio network distributed over land areas called cells, each served by at
least one fixed-location transceiver, known as a cell site or base station. In a cellular network, each cell
characteristically uses a different set of radio frequencies from all their immediate neighbouring cells to avoid any
interference.
When joined together these cells provide radio coverage over a wide geographic area. This enables a large number
of portable transceivers (e.g., mobile phones, pagers, etc.) to communicate with each other and with fixed
transceivers and telephones anywhere in the network, via base stations, even if some of the transceivers are moving
through more than one cell during transmission.
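For hexagonal cells, the relationship between the cluster size N (the number of cells sharing the full frequency set) and the co-channel reuse distance D is the standard formula D = R * sqrt(3N). The sketch below evaluates it for a few common cluster sizes; the cell radius is an assumed figure:

import math

def reuse_distance(cell_radius_km, cluster_size):
    """Co-channel reuse distance D = R * sqrt(3N) for hexagonal cells."""
    return cell_radius_km * math.sqrt(3 * cluster_size)

R = 2.0                                  # assumed cell radius in km
for N in (3, 4, 7, 12):                  # common cluster sizes
    print(f"cluster size N={N:2d}: co-channel cells "
          f"{reuse_distance(R, N):.1f} km apart")

A larger cluster pushes interfering co-channel cells further apart, at the cost of fewer channels per cell.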
Although originally intended for cell phones, with the development of smartphones, cellular telephone networks
routinely carry data in addition to telephone conversations:
Global System for Mobile Communications (GSM): The GSM network is divided into three major systems: the
switching system, the base station system, and the operation and support system. The cell phone connects to the
base station system, which then connects to the operation and support station; it then connects to the switching
station, where the call is transferred to where it needs to go. GSM is the most common standard and is used for a
majority of cell phones.
Personal Communications Service (PCS): PCS is a radio band that can be used by mobile phones in North America
and South Asia. Sprint happened to be the first service to set up a PCS.
D-AMPS: Digital Advanced Mobile Phone Service, an upgraded version of AMPS, is being phased out due to
advancement in technology. The newer GSM networks are replacing the older system.
Global area network:
A global area network (GAN) is a network used for supporting mobile communications across an arbitrary number
of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off user
communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of
terrestrial wireless LANs.
Space network:
Space networks are networks used for communication between spacecraft, usually in the vicinity of the Earth. An
example of this is NASA's Space Network.

FIXED WIRELESS MEDIA CHARACTERISTICS

Fixed wireless refers to the operation of wireless devices or systems in fixed locations such as homes and offices.
Fixed wireless devices usually derive their electrical power from the utility mains, unlike mobile wireless or
portable wireless which tend to be battery-powered. Although mobile and portable systems can be used in fixed
locations, efficiency and bandwidth are compromised compared with fixed systems. Mobile or portable, battery-
powered wireless systems can serve as emergency backups for fixed systems in case of a power blackout or natural
disaster.
The technology for wireless connection to the Internet is as old as the Net itself. Amateur radio operators began
"patching" into telephone lines with fixed, mobile, and portable two-way voice radios in the middle of the 20th
century. A wireless modem works something like an amateur-radio "phone patch," except faster. High-end fixed
wireless employs broadband modems that bypass the telephone system and offer Internet access hundreds of times
faster than twisted-pair hard-wired connections or cell-phone modems.
Some of the most important assets of fixed wireless are as follows.
 Subscribers can be added or moved (to a certain extent) without modifying the infrastructure.
 Subscribers in remote areas can be brought into a network without the need for stringing new cables or optical
fibers across the countryside.
 Broad bandwidth is possible because there are no wires or cables to introduce reactance into the connection
(reactance limits bandwidth by preventing signals higher than a certain frequency from efficiently propagating).
 As the number of subscribers increases, the connection cost per subscriber goes down.
Wireless Transmission
Wireless communication technology has developed significantly over the past few decades and has become one of
the most important types of media transmission from one device to another. Without the use of wires or electronic
conductors, information can be transmitted by using electromagnetic waves. The various types of wireless
communication include radio broadcast (RF), Infrared (IR), satellite, microwave, and Bluetooth. Mobile phones,
GPS, Wi-Fi, and cordless telephones are devices that use wireless transmission to exchange data and information.
Frequency Ranges
Have you ever wondered how your television and mobile phone can work at the same time? Both receive signals
via antenna in the form of electromagnetic waves but don't interfere with each other. The reason is that all wireless
devices operate in their own frequency bands within which they transmit and receive signals. For example,
television broadcast operates between 54-216 MHz, FM radio operates between 87.5-108 MHz and cell phones
operate either between 824-894 MHz or 1850-1990 MHz.

WIFI

Wi-Fi is the name of a popular wireless networking technology that uses radio waves to provide wireless high-
speed Internet and network connections. A common misconception is that the term Wi-Fi is short for "wireless
fidelity," however this is no the case. Wi-Fi is simply a trademarked phrase that means IEEE 802.11x.
How Wi-Fi Networks Work
Wi-Fi networks have no physical wired connection between sender and receiver; instead they use radio frequency
(RF) technology -- a frequency within the electromagnetic spectrum associated with radio wave propagation. When
an RF current is supplied to an antenna, an electromagnetic field is created that is then able to propagate through
space.
The cornerstone of any wireless network is an access point (AP). The primary job of an access point is to broadcast
a wireless signal that computers can detect and "tune" into. In order to connect to an access point and join a wireless
network, computers and devices must be equipped with wireless network adapters.
The Wi-Fi Alliance
The Wi-Fi Alliance, the organization that owns the Wi-Fi registered trademark term specifically defines Wi-Fi as
any "wireless local area network (WLAN) products that are based on the Institute of Electrical and Electronics
Engineers' (IEEE) 802.11 standards."
Initially, Wi-Fi was used in place of only the 2.4GHz 802.11b standard, however the Wi-Fi Alliance has expanded
the generic use of the Wi-Fi term to include any type of network or WLAN product based on any of the 802.11
standards, including 802.11b, 802.11a, dual-band and so on, in an attempt to stop confusion about wireless LAN
interoperability.
Wi-Fi Support in Applications and Devices
Wi-Fi is supported by many applications and devices including video game consoles, home networks, PDAs,
mobile phones, major operating systems, and other types of consumer electronics. Any products that are tested and
approved as "Wi-Fi Certified" (a registered trademark) by the Wi-Fi Alliance are certified as interoperable with
each other, even if they are from different manufacturers. For example, a user with a Wi-Fi Certified product can
use any brand of access point with any other brand of client hardware that is also "Wi-Fi Certified".
Products that pass this certification are required to carry an identifying seal on their packaging that states "Wi-Fi
Certified" and indicates the radio frequency band used (2.4 GHz for 802.11b, 802.11g, or 802.11n, and 5 GHz for
802.11a).

WIMAX

Worldwide Interoperability for Microwave Access is a technology standard for long-range wireless networking for
both mobile and fixed connections. While WiMAX was once envisioned to be a leading form of internet
communication as an alternative to cable and DSL, its adoption has been limited.
Primarily owing to its much higher cost, WiMAX is not a replacement for Wi-Fi or wireless hotspot technologies.
However, compared with standard wired hardware such as DSL, WiMAX can be cheaper to implement.
What is WiMAX?
WiMAX equipment comes in two basic forms: base stations, installed by service providers to deploy the
technology in a coverage area; and receivers, installed in clients.
WiMAX is developed by an industry consortium overseen by a group called the WiMAX Forum, which certifies
WiMAX equipment to ensure that it meets technical specifications. Its technology is based on the IEEE 802.16 set
of wide-area communications standards.
WiMAX has some great benefits when it comes to mobility, but that is precisely where its limitations are most
painful.
WiMAX Pros:
WiMAX is popular because of its low cost and flexible nature. It can be installed faster than other internet
technologies because it can use shorter towers and less cabling, supporting even non-line-of-sight coverage across
an entire city or country.
WiMAX isn't just for fixed connections either, like at home. You can also subscribe to a WiMAX service for your
mobile devices since USB dongles, laptops, and phones sometimes have the technology built-in.
In addition to internet access, WiMAX can provide voice and video-transferring capabilities as well as telephone
access. Since WiMAX transmitters can span a distance of several miles with data rates reaching up to 30-40
megabits per second (1 Gbps for fixed stations), it's easy to see its advantages, especially in areas where wired
internet is impossible or too costly to implement.
WiMAX supports several networking usage models:
A means to transfer data across an Internet Service Provider network, commonly called backhaul
A form of fixed wireless broadband internet access, replacing satellite internet service
A form of mobile internet access that competes directly with LTE technology
Internet access for users in extremely remote locations where laying cable would be too expensive
WiMAX Cons:
Because WiMAX is wireless by nature, the further away from the source that the client gets, the slower their
connection becomes. This means that while a user might pull down 30 Mbps in one location, moving away from the
cell site can reduce that speed to 1 Mbps or next to nothing.
Similar to when several devices suck away at the bandwidth when connected to a single router, multiple users on
one WiMAX radio sector will reduce performance for the others.
Wi-Fi is much more popular than WiMAX, so more devices have Wi-Fi capabilities built in than they do WiMAX.
However, most WiMAX implementations include hardware that allows a whole household, for example, to use the
service by means of Wi-Fi, much like how a wireless router provides internet for several devices.

LTE

LTE (Long Term Evolution) is a standard for 4G wireless broadband technology that offers increased network
capacity and speed to mobile device users.
LTE offers higher peak data transfer rates -- up to 100 Mbps downstream and 30 Mbps upstream. It also provides
reduced latency, scalable bandwidth capacity and backward-compatibility with existing GSM and UMTS
technology. Future developments could yield peak throughput on the order of 300 Mbps.
History/development
The 3rd Generation Partnership Project (3GPP), a collaborative industry trade group, developed GSM, a 2G
standard; UMTS, the 3G technologies based on GSM; and, eventually, LTE. 3GPP engineers named the technology
Long Term Evolution because it represented the next step in the process.
Despite the development of GSM in the late 1980s, there wasn't a globally unified standard for wireless broadband.
GSM caught on in parts of Asia and Europe, but other countries, including the U.S. and Canada, adopted the
competing standard, code-division multiple access (CDMA). LTE aimed to merge a fragmented market and offer a
more efficient network for network operators.
In 2004, NTT DoCoMo, a major mobile phone operator in Japan, proposed making LTE the next international
standard for wireless broadband. During a live demonstration two years later, Nokia Networks simultaneously
downloaded HD video and uploaded a game via LTE.
Ericsson, a Swedish telecommunications company, demonstrated LTE with a bit rate of 144 Mbps in 2007. At
Mobile World Congress in 2008, Ericsson demonstrated the first LTE end-to-end phone call. That same year, LTE
was finalized. In 2009, TeliaSonera, a Swedish mobile network operator, made the service available in Oslo and
Stockholm.
How large LTE is around the world
Various telephone companies launched LTE at different times in different countries. Some European countries
adopted the standard as early as 2009, while North American countries adopted it in 2010 and 2011. As of this
writing, South Korea has the best LTE penetration with 97.5% of the country covered by LTE service. The U.S. has
90.3% LTE penetration.
Outside of the U.S. telecommunications market, GSM is the dominant mobile standard, covering more than 80% of
the world's cellular phone users. As a result, HSDPA and LTE are likely the wireless broadband technologies of
choice for most users.
Voice-over-LTE
Voice-over-LTE (VoLTE) is a technology with which users can place phone calls over the LTE network as
data packets instead of as typical circuit-switched calls. This is called packet voice, and it allows packets from
several phone conversations to share the same network.
VoLTE can support many callers and reallocate bandwidth as needed to support them. Pauses in conversation on
phone calls won't waste bandwidth. Packet voice also allows the user to see whether the person they intend to call
is currently busy or whether their phone is available.
Nortel and other telecommunications infrastructure vendors are focusing significant research and development
efforts on the creation of LTE base stations -- or equipment that enables devices to wirelessly communicate with a
network -- to meet the expected demand. When implemented, LTE has the potential to bring pervasive computing
to a global audience with a seamless experience for mobile users everywhere.
Key features
Users enjoy the benefits of the LTE standard compared to older standards, such as 3G and HSPA. Users can see
improved streaming, downloads and even uploads. Globally, the average LTE download speed is 13.5 Mbps.
As a result, mobile device carriers can expect consumers to burn through data more quickly, which can lead to
overage charges on data plans. LTE can also connect consumers with services in real time. Users can talk to others
without experiencing any lag or stutters.
The upper layers of LTE are based on TCP/IP, which will likely result in an all-IP network similar to the current
state of wired communications. LTE supports mixed data, voice, video and messaging traffic.
LTE uses OFDM (orthogonal frequency division multiplexing) and, in later releases, MIMO (multiple input,
multiple output) antenna technology similar to that used in the IEEE 802.11n wireless local area network (WLAN)
standard. The higher signal-to-noise ratio (SNR) at the receiver enabled by MIMO, along with OFDM, provides
improved wireless network coverage and throughput, especially in dense urban areas.
LTE advanced
LTE Advanced (LTE-A), which was meant to improve the current standard, was first tested in 2011 in Spain. LTE-
A improves upon the radio technology and architecture of LTE. LTE-A has been tested to show that the download
and upload speeds are around two to three times faster than standard LTE. 3GPP made sure that all LTE-A devices
would be backward-compatible with standard LTE.
LTE-A supports carrier aggregation for improved speed and reliability. Carrier aggregation improves network
capacity by adding more bandwidth of up to 100 MHz across five component carriers with 20 MHz bandwidth
each. LTE-A handsets combine frequencies from multiple component carriers to improve signal and speed.
LTE-A requires devices with a special chip designed to work with LTE-A. Qualcomm, Nvidia and Broadcom all
manufacture chips that support LTE-A.
Many new flagship mobile devices support the standard. Apple supports it on the iPhone 8 and above. Many
Google Android phones released in the last year support the standard, as well.

WIMEDIA

The WiMedia Alliance, a global nonprofit organization, defines, certifies and supports enabling wireless
technology for multimedia applications. WiMedia's UWB technology represents the next evolution of Wireless
Personal Area Networks (WPANs), offering end users wireless freedom and convenience in a broad range of PC
and consumer electronics products. Current WiMedia products are offered by major OEMs including Imation, Dell
and Toshiba, as well as smaller OEMs such as Atlona and Warpia. These products include Wireless USB docking
stations, hard drives, projectors and laptop-to-HDTV audio/video extenders. The WiMedia Alliance is also focused
on providing specifications for streaming video applications.
Our Mission: “To promote wireless multimedia connectivity and interoperability between devices in a
personal area network.”
WiMedia’s technology is an ISO-published radio standard for high-speed, ultra-wideband (UWB) wireless
connectivity that offers an unsurpassed combination of high data throughput rates and low energy consumption.
With regulatory approval in major markets worldwide, this technology has gained broad industry momentum as
evidenced by its selection for Wireless USB and high-speed Bluetooth.
UNIT-IV
DIGITAL CABLE TELEVISION SYSTEMS

DIGITAL COMPRESSION

The idea of sending multiple programs within the 19.39-Mbps stream is unique to digital TV and is made possible
by the digital compression system being used. To compress the image for transmission, broadcasters use MPEG-2
compression, and MPEG-2 allows you to pick both the screen size and bit rate when encoding the show. A
broadcaster can choose a variety of bit rates within any of the three resolutions.
You see MPEG-2 all the time on Web sites that offer streaming video. For example, if you go to
iFilm.com, you will find that you can view streaming video at 56 kilobits per second (Kbps), 200 Kbps or 500
Kbps. MPEG-2 allows a technician to pick any bit rate and resolution when encoding a file.
There are many variables that determine how the picture will look at a given bit rate. For example:
If a station wants to broadcast a sporting event (where there is lots of movement in the scene) at 1080i, the entire
19.39 megabits per second is needed to get a high-quality image.
On the other hand, a newscast showing a newscaster's head can use a much lower bit rate. A broadcaster might
transmit the newscast at 480p resolution and a 3-Mbps bit rate, leaving 16.39 Mbps of space for other sub-channels.
It's very likely that broadcasters will send three or four sub-channels during the day and then switch to a single
high-quality show that consumes the entire 19.39 Mbps at night. Some broadcasters are also experimenting with 1-
or 2-Mbps data channels that send information and Web pages along with a show to provide additional information.
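The arithmetic behind such channel line-ups is simple subtraction from the fixed 19.39-Mbps payload. The sketch below totals a hypothetical daytime line-up (the sub-channel mix and bit rates are invented for illustration) and reports the spare capacity:

TOTAL_MBPS = 19.39                      # fixed payload for one broadcast channel

# A hypothetical daytime line-up: (name, resolution, bit rate in Mbps).
subchannels = [
    ("newscast",     "480p", 3.0),
    ("weather loop", "480p", 2.0),
    ("movie rerun",  "720p", 8.0),
    ("data channel", "-",    1.0),
]

used = sum(rate for _, _, rate in subchannels)
for name, res, rate in subchannels:
    print(f"{name:<13} {res:>5} {rate:5.2f} Mbps")
print(f"used {used:.2f} Mbps of {TOTAL_MBPS} Mbps; "
      f"{TOTAL_MBPS - used:.2f} Mbps spare")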

MODULATION

Modulation is the process of converting data into radio waves by adding information to an electronic or optical
carrier signal. A carrier signal is one with a steady waveform -- constant height, or amplitude, and frequency.
Information can be added to the carrier by varying its amplitude, frequency, phase, polarization -- for optical
signals -- and even quantum-level phenomena like spin.
Modulation is usually applied to electromagnetic signals: radio waves, lasers/optics and computer networks.
Modulation can even be applied to a direct current -- which can be treated as a degenerate carrier wave with a fixed
amplitude and frequency of 0 Hz -- mainly by turning it on and off, as in Morse code telegraphy or a digital current
loop interface. The special case of no carrier -- a response message indicating an attached device is no longer
connected to a remote system -- is called baseband modulation.
Modulation can also be applied to a low-frequency alternating current -- 50-60 Hz -- as with powerline networking.
Types of modulation
There are many common modulation methods, including the following -- a very incomplete list (the first two are
illustrated in a short sketch after the list):
Amplitude modulation (AM), in which the height -- i.e., the strength or intensity -- of the signal carrier is varied to
represent the data being added to the signal.
Frequency modulation (FM), in which the frequency of the carrier waveform is varied to reflect the frequency of
the data.
Phase modulation (PM), in which the phase of the carrier waveform is varied to reflect changes in the frequency of
the data. In PM, the frequency is unchanged while the phase is changed relative to the base carrier frequency. It is
similar to FM.
Polarization modulation, in which the angle of rotation of an optical carrier signal is varied to reflect transmitted
data.
Pulse-code modulation, in which an analog signal is sampled to derive a data stream that is used to modulate a
digital carrier signal.
Quadrature amplitude modulation (QAM), which uses two AM carriers to encode two or more bits in a single
transmission.
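As promised above, the Python sketch below builds AM and FM versions of the same message signal directly from their definitions; the carrier frequency, sample rate, modulation depth and deviation are arbitrary assumptions:

import numpy as np

fs, fc, fm = 8000, 500, 20          # assumed sample rate, carrier, message (Hz)
t = np.arange(0, 0.1, 1 / fs)       # 100 ms of samples
message = np.sin(2 * np.pi * fm * t)

# AM: the carrier's height (amplitude) follows the message.
am = (1 + 0.5 * message) * np.cos(2 * np.pi * fc * t)

# FM: the carrier's instantaneous frequency follows the message, which is
# equivalent to adding the running integral of the message to the carrier phase.
deviation = 50                                    # assumed peak deviation (Hz)
phase = 2 * np.pi * np.cumsum(deviation * message) / fs
fm_wave = np.cos(2 * np.pi * fc * t + phase)

print("AM peak amplitude:", round(np.abs(am).max(), 2))       # ~1.5, varies
print("FM peak amplitude:", round(np.abs(fm_wave).max(), 2))  # constant ~1.0

The constant envelope of the FM waveform, visible in the output, is one reason FM tolerates amplifier non-linearity better than AM.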

Radio and television broadcasts and satellite radio typically use AM or FM. Most short-range two-way radios -- up
to tens of miles -- use FM, while longer-range two-way radios -- up to hundreds or thousands of miles -- typically
employ a mode known as single sideband (SSB).
More complex forms of modulation include phase-shift keying (PSK) and QAM. Modern Wi-Fi modulation uses a
combination of PSK and QAM64 or QAM256 to encode multiple bits of information into each transmitted symbol.
Modulation and Demodulation
Modulation is the process of encoding information in a transmitted signal, while demodulation is the process of
extracting information from the transmitted signal. Many factors influence how faithfully the extracted information
replicates the original input information. Electromagnetic interference can degrade signals and make the original
signal impossible to extract. Demodulators typically include multiple stages of amplification and filtering in order
to eliminate interference.
A device that performs both modulation and demodulation is called a modem -- a name created by combining the
first letters of MOdulator and DEModulator.
A computer audio modem allows a computer to connect to another computer or to a data network over a regular
analog phone line by using the data signal to modulate an analog audio tone. A modem at the far end demodulates
the audio signal to recover the data stream. A cable modem uses network data to modulate the cable service carrier
signal.
Sometimes a carrier signal can carry more than one modulating information stream. Multiplexing combines the
streams onto a single carrier, for example by encoding a fixed-duration segment of one stream, then of the next,
cycling through all the channels before returning to the first, a process called time-division multiplexing (TDM).
Another form is frequency-division multiplexing (FDM), where multiple carriers of different frequencies are used
on the same medium.
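A minimal round-robin sketch of TDM (the stream contents are invented) shows the fixed-duration cycling just described; FDM would instead place each stream on its own carrier frequency:

# Three independent streams to share one carrier (contents are illustrative).
streams = {"A": list("aaaa"), "B": list("bbbb"), "C": list("cccc")}

# Time-division multiplexing: take one fixed-duration segment from each
# stream in turn, cycling through all channels before returning to the first.
frame = []
for a, b, c in zip(streams["A"], streams["B"], streams["C"]):
    frame += [a, b, c]
print("".join(frame))        # abcabcabcabc

# Demultiplexing: the receiver picks out every third segment per channel.
recovered = {"A": frame[0::3], "B": frame[1::3], "C": frame[2::3]}
print(recovered)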
Why use modulation?
Multiple carriers of different frequencies can often be transmitted over a single medium, with each carrier being
modulated by an independent signal. For example, Wi-Fi uses individual channels to simultaneously transmit data
to and from multiple clients.

ERROR CORRECTION

Definition
Error correction is the process of detecting errors in transmitted messages and reconstructing the original error-free
data. Error correction ensures that corrected and error-free messages are obtained at the receiver side.
Error Correction
Systems capable of requesting the retransmission of bad messages in response to error detection include an
automatic request for retransmission, or automatic repeat request (ARQ) processing, in their communication
software package. They use acknowledgments, negative acknowledgment messages and timeouts to achieve better
data transmission.
ARQ is an error control (error correction) method that uses error-detection codes and positive and negative
acknowledgments. When the transmitter either receives a negative acknowledgment or a timeout happens before
acknowledgment is received, the ARQ makes the transmitter resend the message.
Error-correcting code (ECC) or forward error correction (FEC) is a method that involves adding parity data bits to
the message. These parity bits will be read by the receiver to determine whether an error happened during
transmission or storage. In this case, the receiver checks and corrects errors when they occur. It does not ask the
transmitter to resend the frame or message.
A hybrid method that combines both ARQ and FEC functionality is also used for error correction. In this case, the
receiver asks for retransmission only if the parity data bits are not enough for successful error detection and
correction.
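As a small FEC illustration, the classic Hamming(7,4) code below (chosen here as an example; the text does not name a specific code) adds three parity bits to every four data bits, which lets the receiver locate and correct any single-bit error without a retransmission:

import numpy as np

# Hamming(7,4): generator and parity-check matrices over GF(2).
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

data = np.array([1, 0, 1, 1])
codeword = data @ G % 2                 # append three parity bits

received = codeword.copy()
received[5] ^= 1                        # simulate a single-bit channel error

syndrome = H @ received % 2             # non-zero syndrome = error detected
if syndrome.any():
    # The syndrome matches exactly one column of H; flip that position.
    error_pos = int(np.where((H.T == syndrome).all(axis=1))[0][0])
    received[error_pos] ^= 1

assert np.array_equal(received[:4], data)
print("corrected codeword:", received, "-> data", received[:4])

A hybrid ARQ scheme would fall back to retransmission only when errors exceed what such a code can correct.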
