
1. Home and Access network structures

Last-mile problem

Hindered broadband access in the home, which results from:

Inadequate network infrastructure

Huge cost of new installations

Network topologies

pros and cons, design considerations (interference, power consumption etc)


Noise deteriorates the channel (surprise!)
Channel impairments
Noise
Thermal noise
Intermodulation noise
Crosstalk
Co-channel Interference (Wireless)
Impulse noise: (powerline communications)

Attenuation: signal power attenuates with distance


Delay distortion: velocity of a signal through a guided medium varies with frequency, multipath in
wireless environments

Channel capacity: the maximum rate at which data can be transmitted over a given communication path, or channel, under given conditions
The greater the bandwidth, the higher the information-carrying capacity
Any digital waveform will have infinite bandwidth
BUT the transmission system will limit the bandwidth that can be transmitted
AND, for any given medium, the greater the bandwidth transmitted, the greater the cost
HOWEVER, limiting the bandwidth creates distortions
Factors affecting data rate (see the numeric sketch after this list)
Transmitted power (energy)
Distance between transmitter and receiver
Noise level (including interference level)
Bandwidth
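
To make the bandwidth/noise trade-off concrete, here is a minimal numeric sketch using the Shannon-Hartley capacity formula (Python; the 1-MHz bandwidth and the SNR values are illustrative, not taken from the notes above):

import math

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley limit C = B * log2(1 + SNR), in bit/s."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers: a 1-MHz channel at different SNRs.
for snr_db in (10, 20, 30):
    c = shannon_capacity(1e6, snr_db)
    print(f"B = 1 MHz, SNR = {snr_db} dB -> C ~ {c / 1e6:.2f} Mbit/s")

# Doubling the bandwidth doubles capacity; raising SNR (more power,
# shorter distance, less noise) only grows capacity logarithmically.
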
Wireless/wireline access

Wired: DSL (Digital Subscriber Line); Hybrid fiber coaxial; Fiber; Power line

Wireless: RF; Satellite

A unique and widely used method of multiple access is carrier sense multiple access
with collision detection (CSMA-CD). This is the classical access method used in
Ethernet local-area networks (LANs). It allows multiple users of the network to
access the single cable for transmission. All network nodes listen continuously. When
they want to send data, they listen first and then transmit if no other signals are on the
line. Each transmission is typically one packet or frame, and then the process
repeats. If two or more transmissions occur simultaneously, a collision occurs. The
network interface circuitry can detect a collision, and then the nodes will wait a random
time before retransmitting.

A variation of this method is called carrier sense multiple access with collision avoidance
(CSMA-CA). This method is similar to CSMA-CD. However, a special scheduling
algorithm is used to determine the appropriate time to transmit over the shared channel.
While the CSMA-CD technique is most used in wired networks, CSMA-CA is the
preferred method in wireless networks.

Multiplexing techniques
The capacity of a transmission medium usually exceeds the capacity required for transmission of a single signal. Multiplexing, carrying multiple signals on a single medium, results in more efficient use of the transmission medium.

Frequency-division multiplexing (FDM)

Time-division multiplexing (TDM)

Limiting factors (noise, contention, attenuation).


Transmission media
Twisted pair
Coaxial cable
Optical fiber
Wireless
Properties
Propagation delay
Bandwidth
Shared vs. non-shared
Attenuation
Bandwidth efficiency (bit/s/Hz)

Wireless
Properties
Shared medium
Susceptible to noise
Low bandwidth (?)
Examples: MMDS (microwave distribution, "wireless cable"), LMDS (microwave point-to-point); possible reuse of frequencies, lower power consumption; Bluetooth, ZigBee
Wireline
PSTN: Public Switched Telephone Network
Analog modem; designed for voice communication
ISDN: Integrated Services Digital Network; integrated voice and digital service over regular phone lines; packet service; requires extra power
xDSL: Digital Subscriber Line; uses Unshielded Twisted Pair (UTP) copper local loops to carry digital data; point-to-point service; carries its own power on the line
FTTx: Fiber to the x (e.g., Home); Passive Optical Network (PON)

Example technologies

Power line
xDSL
PON
Mobile

2. Access methods (media access)

Access methods allow many users to share these limited channels to provide the
economy of scale necessary for a successful communications business. There are five
basic access or multiplexing methods: frequency division multiple access (FDMA), time
division multiple access (TDMA), code division multiple access (CDMA), orthogonal
frequency division multiple access (OFDMA), and spatial division multiple access
(SDMA).

FDMA

FDMA is the process of dividing one channel or bandwidth into multiple individual
bands, each for use by a single user (Fig. 1). Each individual band or channel is wide
enough to accommodate the signal spectra of the transmissions to be propagated. The
data to be transmitted is modulated onto each subcarrier, and all of them are linearly
mixed together.

Figure 1: FDMA divides the shared medium bandwidth into individual channels. Subcarriers modulated by the information to be transmitted occupy each subchannel.
The best example of this is the cable television system. The medium is a single coax cable
that is used to broadcast hundreds of channels of video/audio programming to homes.
The coax cable has a useful bandwidth from about 4 MHz to 1 GHz. This bandwidth is
divided up into 6-MHz wide channels. Initially, one TV station or channel used a single
6-MHz band. But with digital techniques, multiple TV channels may share a single band
today thanks to compression and multiplexing techniques used in each channel.
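
As a small worked example of this channelization (using the 4-MHz to 1-GHz coax band and the 6-MHz channel width quoted above; the code itself is only an illustration):

# FDMA channel plan for the cable-TV example: 6-MHz channels
# packed into the usable coax band from 4 MHz to 1 GHz.
BAND_START_MHZ = 4
BAND_END_MHZ = 1000
CHANNEL_WIDTH_MHZ = 6

num_channels = (BAND_END_MHZ - BAND_START_MHZ) // CHANNEL_WIDTH_MHZ
print(f"{num_channels} channels of {CHANNEL_WIDTH_MHZ} MHz fit in the band")

# Edge frequencies of the first few channels
for n in range(3):
    lo = BAND_START_MHZ + n * CHANNEL_WIDTH_MHZ
    print(f"channel {n + 1}: {lo}-{lo + CHANNEL_WIDTH_MHZ} MHz")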

WDMA: Different light sources (Optical domain)

Different wavelength for different channels. Since the wavelengths are dedicated, even if one user doesn't use its wavelength, another one cannot use it (not efficient).

But it is very simple to implement.

Used in Fiber Optic communication systems.

SCMA: Sub-carrier Multiplexing: (Electrical domain- different tuners)

Same implementation as the WDMA

TDMA:

TDMA is a digital technique that divides a single channel or band into time slots. Each
time slot is used to transmit one byte or another digital segment of each signal in
sequential serial data format. This technique works well with slow voice data signals, but
it's also useful for compressed video and other high-speed data.

A good example is the widely used T1 transmission system, which has been used for
years in the telecom industry. T1 lines carry up to 24 individual voice telephone calls on a
single line (Fig. 2). Each voice signal usually covers 300 Hz to 3000 Hz and is digitized
at an 8-kHz rate, which is just a bit more than the minimal Nyquist rate of two times the
highest-frequency component needed to retain all the analog content.
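
The T1 figures can be reproduced with a little arithmetic. The 8-bit PCM samples and the single framing bit per frame are standard T1 parameters assumed here rather than stated in the notes:

# T1 TDM frame arithmetic: 24 voice channels, 8-bit PCM samples,
# 8000 frames per second, plus 1 framing bit per frame.
CHANNELS = 24
BITS_PER_SAMPLE = 8
FRAMES_PER_SECOND = 8000
FRAMING_BITS_PER_FRAME = 1

bits_per_frame = CHANNELS * BITS_PER_SAMPLE + FRAMING_BITS_PER_FRAME  # 193 bits
line_rate = bits_per_frame * FRAMES_PER_SECOND                        # 1,544,000 bit/s
print(f"{bits_per_frame} bits/frame -> T1 line rate = {line_rate / 1e6:.3f} Mbit/s")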

The basic GSM (Global System for Mobile Communications) cellular phone system is TDMA-based. It
divides up the radio spectrum into 200-kHz bands and then uses time division techniques to put
eight voice calls into one channel. Figure 3 shows one frame of a GSM TDMA signal. The eight time
slots can be voice signals or data such as texts or e-mails. The frame is transmitted at a 270-kbit/s
rate using Gaussian minimum shift keying (GMSK), which is a form of frequency shift keying (FSK)
modulation.
Figure 3: This GSM digital cellular method shows how up to eight users can share a 200-kHz channel in different time slots within a frame of 1248 bits.

A static solution, but it can also be used to give more frames to users that need them.

However, sophisticated mechanisms are needed, as the users have to request the time slots.

TDMA is the most widely used method (more efficient, though more complex).

CDMA

CDMA is another pure digital technique. It is also known as spread spectrum because it takes the
digitized version of an analog signal and spreads it out over a wider bandwidth at a lower power
level. This method is called direct sequence spread spectrum (DSSS) as well (Fig. 4). The digitized and
compressed voice signal in serial data form is spread by processing it in an XOR circuit along with a
chipping signal at a much higher frequency. In the cdma IS-95 standard, a 1.2288-Mbit/s chipping
signal spreads the digitized compressed voice at 13 kbits/s.

Figure 4: Spread spectrum is the technique of CDMA. The compressed and digitized voice signal is processed in an XOR logic circuit along with a higher-frequency coded chipping signal. The result is that the digital voice is spread over a much wider bandwidth that can be shared with other users using different codes.

The chipping signal is derived from a pseudorandom code generator that assigns a unique code to
each user of the channel. This code spreads the voice signal over a bandwidth of 1.25 MHz. The
resulting signal is at a low power level and appears more like noise. Many such signals can occupy
the same channel simultaneously. For example, using 64 unique chipping codes allows up to 64 users
to occupy the same 1.25-MHz channel at the same time. At the receiver, a correlating circuit finds
and identifies a specific caller's code and recovers it.
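
A minimal sketch of the CDMA idea: two users spread their bits with different codes, the signals add on the shared channel, and the receiver separates them by correlating with the matching code. The 8-chip Walsh codes below are illustrative stand-ins, not the IS-95 codes:

import numpy as np

# Two orthogonal 8-chip spreading codes (rows of a Walsh-Hadamard matrix).
code_a = np.array([1, 1, 1, 1, 1, 1, 1, 1])
code_b = np.array([1, -1, 1, -1, 1, -1, 1, -1])

bits_a = np.array([1, -1, 1])   # user A data (+1/-1)
bits_b = np.array([-1, -1, 1])  # user B data

# Spread: each bit becomes one code-length chip sequence.
tx_a = np.concatenate([b * code_a for b in bits_a])
tx_b = np.concatenate([b * code_b for b in bits_b])
channel = tx_a + tx_b  # both users occupy the same band at the same time

def despread(rx, code):
    """Correlate each chip block with the wanted user's code and decide the bit."""
    blocks = rx.reshape(-1, len(code))
    return np.sign(blocks @ code)

print(despread(channel, code_a))  # -> [ 1. -1.  1.]
print(despread(channel, code_b))  # -> [-1. -1.  1.]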

The third generation (3G) cell-phone technology called wideband CDMA (WCDMA) uses a similar
method with compressed voice and 3.84-Mbit/s chipping codes in a 5-MHz channel to allow multiple
users to share the same band.

OFDMA

OFDMA is the access technique used in Long-Term Evolution (LTE) cellular systems to
accommodate multiple users in a given bandwidth. Orthogonal frequency division
multiplexing (OFDM) is a modulation method that divides a channel into multiple
narrow orthogonal bands that are spaced so they don't interfere with one another. Each
band is divided into hundreds or even thousands of 15-kHz wide subcarriers.

The data to be transmitted is divided into many lower-speed bit streams and modulated
onto the subcarriers. Time slots within each subchannel data stream are used to package
the data to be transmitted (Fig. 5). This technique is very spectrally efficient, so it
provides very high data rates. It also is less affected by multipath propagation effects.

Used in LTE, DSL, IEEE 802.11a and 802.11g standards and distribution of TV signals.

Optical CDMA:

The phase is always rotating inside the fibre (a drawback).

Not all the spreading codes in the spreading tree can be used, and hence some bandwidth is lost.

Comparison of the various methods

Example technologies (xDSL, Cable Modem, PON, mobile)

3. WLAN

Wireless LAN received its name from the fact that it is primarily based on existing LAN standards.
These standards were initially created by the IEEE for wired interconnection of computers and can
be found in the 802.X standards (e.g., 802.3 [2]). In general, these standards are known as Ethernet
standards. The wireless variant, which is generally known as Wireless LAN, is specified in the 802.11
standard. Only layer 1, the physical layer, is a new development, as WLAN uses airwaves instead of
cables to transport data frames.

Transmission Speeds and Standards

The maximum data rate that could be achieved in a real environment mainly depended on the distance between the sender and the receiver, as well as on the number and kind of obstacles between them, such as walls or ceilings. In practice, around 5 Mbit/s could be achieved with this standard, but only over short distances of a few meters.
IEEE 802.11 (in 1999) originally defined three alternatives:

FHSS (Frequency Hopping Spread Spectrum)

DSSS (Direct Sequence Spread Spectrum)

IR (Infrared).

However, the original 802.11 PHYs never took off.

802.11b defines DSSS operation

802.11a and 802.11g use OFDM (Orthogonal Frequency Division Multiplexing)

IEEE 802.11a

Frequency band = 5 GHz

up to 54 Mbps

based on OFDM (Orthogonal Frequency Division Multiplexing)

Modulation and Coding

modulated using BPSK, QPSK, 16-QAM, or 64-QAM

coded using convolutional codes (R = 1/2, 2/3, and 3/4)

Advantages & Disadvantages

less crowded than ISM band

strong shading due to high frequencies

802.11b

The 802.11b standard used the 2.4-GHz ISM (Industrial, Scientific and Medical) band, which can be
used in most countries without a license. One of the most important conditions for the license-free
use of this frequency band is the limitation of the maximum transmission power to 100 mW. It is
also important to know that the ISM band is not technology restricted. Other wireless systems such
as Bluetooth also use this frequency range.

802.11g

The 802.11g standard specified a much more complicated PHY as compared to the 802.11b standard
to achieve data rates of up to 54 Mbit/s. In practice, around 25 Mbit/s is reached on the application
layer under good signal conditions. Even though standardization has significantly progressed, 11g
devices are still in wide use. This variant of the standard also uses the 2.4-GHz ISM band and has
been designed in a way to be backward compatible to older 802.11b systems. This ensures that
802.11b devices can communicate in new 802.11g networks and vice versa.

802.11a
In addition to the 2.4-GHz ISM band, another frequency range was opened for WLANs in the 5-GHz
band. As with the 802.11g standard, data rates between 6 and 54 Mbit/s were specified. In practice,
however, 802.11a devices never became very popular, as they had to be backward compatible to
802.11b and g, and the support of several frequency bands increased the overall hardware costs.

802.11n

Owing to the rising data rates in local networks and of Internet connections via cable and ADSL, it
was necessary to further increase the speed of Wi-Fi networks. After several years of standardization
work, the companies involved finally agreed on a new air interface that is now specified in IEEE
802.11n. By doubling the channel bandwidth and by using numerous other improvements that are
described in more detail later in this chapter, PHY data transfer speeds of up to 600 Mbit/s can be
achieved. In practice, typical data transfer rates under favourable radio conditions are in the region
of 70-150 Mbit/s. In addition, the specification supports both the 2.4-GHz and the 5-GHz bands. This has become necessary as the 2.4-GHz band is widely used today, and in cities it is not uncommon to find many networks per channel. The 5-GHz band is still much less used today and hence allows higher data rates in favourable transmission conditions.

Network architecture;

Ad-hoc mode

In ad hoc mode, also referred to as Independent Basic Service Set (IBSS), two or more wireless
devices communicate with each other directly. Every station is equal in the system and data is
exchanged directly between two devices. The ad hoc mode therefore works just like a standard
wireline Ethernet, where all devices are equal and where data packets are exchanged directly
between two devices. As all devices share the same transport medium (the airwaves), the packets
are received by all stations that observe the channel.

However, all stations except the intended recipient discard the incoming packets because the
destination address is not equal to their hardware address. All participants of an ad hoc network
have to configure a number of parameters before they can join the network.

The most important parameter is the service set identity (SSID), which serves as the network name.
Furthermore, all users have to select the same frequency channel number (some implementations
select a channel automatically) and ciphering key. While it is possible to use an ad hoc network
without ciphering, it poses a great security risk and is therefore not advisable.

Finally, an individual IP address has to be configured in every device, which the participants of the
network have to agree on. Owing to the number of different parameters that have to be set
manually, WLAN ad hoc networks are not very common.

Infrastructure mode

The AP can be used as a gateway between the wireless and the wire-line networks for all devices of
the BSS. Furthermore, devices in an infrastructure BSS do not communicate directly with each other.
Instead, they always use the AP as a relay.
If device A, for example, wants to send a data packet to device B, the packet is first sent to the AP.
The AP analyzes the destination address of the packet and then forwards the packet to device B. In
this way, it is possible to reach devices in the wireless and wire-line networks without the
knowledge of where the client device is.

The second advantage of using the AP as a relay is that two wireless devices can communicate with
each other over larger distances, with the AP in the middle. In this scenario, shown in Figure 5.2, the
transmit power of each device is enough to reach the AP but not the other device because it is too
far away. The AP, however, is close enough to both devices and can thus forward the packet.

The disadvantage of this method is that a packet that is transmitted between two wireless devices
has to be transmitted twice over the air. Thus, the available bandwidth is cut in half. Owing to this
reason, the 802.11e standard introduces the direct link protocol (DLP). With DLP, two wireless
devices can communicate directly with each other while still being members of an infrastructure
BSS. However, this functionality is declared as optional in the standard and not widely used today.

Extended Service Set (ESS)

The transmission power of a WLAN AP is low and can thus only cover a small area. To increase the
range of a network, several APs that cooperate with each other can be used. If a mobile user
changes his position and the network card detects that a different AP has a better signal quality, it
automatically registers with the new AP. Such a configuration is called an Extended Service Set (ESS)
and is shown in Figure 5.4. When a device registers with another AP of the ESS, the new AP informs
the previous AP of the change. This is usually done via a direct Ethernet connection between the APs
of an ESS, and is referred to as the distribution system. Subsequently, all packets arriving in the
wired distribution system, for example, from the Internet, will be delivered to the wireless device via
the new AP. As the old AP was informed of the location change, it ignores the incoming packets. The
change in APs is transparent for the higher layers of the protocol stack on the client device.
Therefore, the mobile device can keep its IP address and only a short interruption of the data
transfer will occur.

Usually, the SSID is a text string in a human readable form because during the configuration of the
client device the user has to select an SSID if several are found. Many configuration programs on
client devices also refer to the SSID as the network name.

The second parameter is the frequency or channel number. It should be set carefully if several APs
have to coexist in the same area. The ISM band in the 2.4-GHz range uses frequencies from 2.410 to
2.483 GHz. Depending on national regulations, this range is divided into a maximum of 11 (United
States) to 13 (Europe) channels of 5 MHz each. As a WLAN channel requires a bandwidth of 25 MHz,
different APs at close range should be separated by five ISM channels. As can be seen in Figure 5.5,
three infrastructure BSS networks can be supported in the same area or a single ESS with
overlapping areas of three APs. For infrastructure BSS networks, the overlapping is usually not
desired but cannot be prevented if different companies or home users operate their APs close to
each other. To be able to keep the three APs at least five channels apart from each other, channels
1, 6 and 11 should be used.
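
The spacing rule can be checked with the usual 2.4-GHz channel numbering (centre frequency = 2407 MHz + 5 MHz x channel number; this formula is standard Wi-Fi practice assumed here, not stated above):

# 2.4-GHz ISM channel centres: f_c = 2407 MHz + 5 MHz * channel_number
def channel_center_mhz(ch: int) -> int:
    return 2407 + 5 * ch

for ch in (1, 6, 11):
    print(f"channel {ch:2d}: {channel_center_mhz(ch)} MHz")

# Channels 1, 6 and 11 are 25 MHz apart, so their roughly 22-25 MHz wide
# signals do not overlap -> three co-located BSSs are possible.
print("separation:", channel_center_mhz(6) - channel_center_mhz(1), "MHz")
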
802.11a and 11n systems use the spectrum in the 5-GHz range in Europe, between 5.170 and 5.350
GHz and between 5.470 and 5.725 GHz, for data transmission. In this 455 MHz bandwidth, 18
independent networks can be operated. This is quite significant, especially when compared to the
three independent networks that can be operated in the 2.4-GHz band.

PHY layer characteristics (channel allocation, OFDM, bit rates, etc)

PHY-channel Allocation

Not all channels can be used in the same area, since they overlap and thus interfere with each other. Only 3 channels are typically used (mainly 1, 6 and 11).

Non-overlapping channels.

The standard covers the two lower layers: the physical layer and the MAC layer.

PLCP is used on the physical layer to sense the channel (carrier sense)

DSSS

In mobile communication, it is used in UMTS to share capacity among users.

In this technique, the 2.4 GHz band is divided into 14 channels of 22MHz. There are 3 channels that
are completely non-overlapping. Data is sent across one of the channels without hopping to other
channels. The user data is modulated using a pre-defined wideband spreading signal. The receiver
knows this signal and is able to recover the original data.

To compensate for noise on a given channel, 802.11 DSSS uses a technique called chipping. Each
data bit is converted into a series of redundant bit patterns called chips. The transmitter encodes with
an XOR gate all data sent via an 11-bit, high-speed, pseudorandom numerical (PRN) sequence called
the Barker sequence.

Each 11-chip sequence represents a single bit (1 or 0), and is converted to a waveform, called a symbol, which is sent over the air.

The inherent redundancy of each chip, combined with spreading the signal across the 22-MHz channel, provides a form of error checking and correction. Even if part of the signal is damaged, it may still be recovered in many cases, minimizing the need for retransmissions.
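
To illustrate the chipping step, here is a small sketch that spreads a few data bits with an 11-chip Barker sequence and recovers them by correlation even when individual chips are corrupted (the +/-1 Barker representation and the error positions are illustrative):

import numpy as np

# 11-chip Barker sequence (one common +/-1 representation).
BARKER = np.array([1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1])

data_bits = np.array([1, -1, 1, 1])              # bits as +1/-1
chips = np.concatenate([b * BARKER for b in data_bits])

# Flip a few chips to model noise on the channel (made-up error pattern).
noisy = chips.copy()
noisy[[3, 14, 25]] *= -1

# Receiver: correlate each 11-chip block with the Barker sequence.
blocks = noisy.reshape(-1, len(BARKER))
decided = np.sign(blocks @ BARKER)
print(decided)    # -> [ 1. -1.  1.  1.] despite the flipped chips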

Advantage

More robust transmission (more bits)

Reduces frequency selective fading

Disadvantage
short symbol length, risk of inter-symbol interference

insufficient use of bandwidth

FHSS

The 2.4 GHz band is divided into a large number of sub-channels. The peer communication
endpoints agree on the frequency-hopping pattern, and data is sent over the sub-channels. The
transmitter sends data over a sub-channel for a fixed length of time, called the dwell time, then
changes frequency according to the hopping sequence and continues transmission in the new
frequency.

Bit Rates

Coding rate: the data bit rate is usually less than the coded bit rate, due to the redundant bits used for error correction.

MAC layer access control schemes

The MAC protocol on layer 2 has similar tasks in a WLAN as in a fixed-line Ethernet:

It controls access of the client devices to the air interface.

A MAC header is put in front of every frame that contains, among other parameters, the (MAC)
address of the sender (source) of the frame and the (MAC) address of the recipient (destination).

As the air interface is a very unreliable transmission medium, a recipient of a packet is required to
send an ACK frame to inform the sender of the correct reception of the frame.

The same or a different client device is allowed to send the next frame only when the ACK frame has
been received. If no ACK frame is received within a certain time, the sender assumes that the frame
was lost and thus resends the frame. To ensure that the ACK frame can be sent before another
device attempts to send a new data frame, the ACK frame is sent almost immediately after the data
frame has been received. There is only a short delay between the two frames, the short interframe
space (SIFS). All other devices have to delay their transmission by at least a DCF interframe space (distributed coordination function interframe space, or DIFS for short).

ACK: Is a control message, hence it has high priority. It does not need to be delayed much.

OFDM

Maximum data rates of 54Mbps are permitted using OFDM radio.

Different carriers, which are placed orthogonal to each other, are spaced close together.

Hidden Station Problem


Optionally, devices can also reserve the air interface prior to the transmission of a data frame. This
might be useful in situations where devices can reach the AP but are too far away from each other to receive each other's frames.

Under these circumstances, it can happen that two stations might attempt to send a frame to the
AP at the same time. As the two frames will interfere with each other, the AP will not be able to
receive either of the frames correctly. This scenario is also known as the hidden station problem. To
prevent such an overlap, a device can reserve the air interface as shown in Figure 5.12 by sending a
short RTS (Request to Send) frame to the AP. The AP then answers with a CTS (Clear to Send) frame
and the air interface is reserved. While the RTS frame might not be seen by all client devices in the
network because of the large distance between them, the CTS frame can be seen by all devices
because the AP is the central point of the network. Both RTS and CTS frames contain a so-called
Network Allocation Vector (NAV) to inform other devices regarding the period of time during which
the air interface is reserved.

As in a wired network, there is no central instance that controls which device is allowed to send a frame at a certain time. Every device has to decide on its own when it can send a frame. To
minimize the chance of a collision with frames of other devices, a coordination function is necessary.
In WLAN networks, the DCF is used for this purpose.

CSMA/CA

Going back to the standard 802.11b DCF medium access scheme, DCF uses Carrier Sense Multiple
Access/Collision Avoidance (CSMA/CA) to detect if another device is currently transmitting a frame.
This method is quite similar to CSMA/Collision Detect (CD), which is used in fixed-line Ethernet, but it
offers a number of additional functionalities to avoid collisions.

If another device is already sending a data packet, the device has to wait until the data transfer has
finished. Afterward, the device has to observe another delay time, the DIFS period, which has been
described above. Then, the device yet again defers sending its packet for additional back-off time,
which is generated by a random number generator. Therefore, it becomes very unlikely that several
devices attempt to send data waiting in their output queue at the same time. The device with the
smallest backup time will send its data first. All other devices will see the transmission, stop their
backup timer and repeat the procedure once the transmission is over. In spite of this procedure if
two devices still attempt to send packets at the same time, the transmissions will interfere with each
other and thus no ACK frame will be sent. Both stations then have to retransmit their packets. If a
collision occurs, the maximum possible backup time from which the random generator can choose is
increased in the affected devices. This ensures that even in a high-load situation the number of
collisions remains small.

The back-off time is divided into slots of 20 microseconds. For the first transmission attempts, the
random generator will select 1 of the 31 possible slots in 802.11b and 11g devices. If the
transmission fails, the window size is increased to 63 slots, then to 127 slots and so on. The
maximum window size is 1023 slots, which equals 20 milliseconds. In the 802.11n standard, the first
back-off window has been reduced to 15 slots, that is, 0.3 milliseconds.
How is the back-off selected for a collided station the next time? To ensure fairness, a station resumes its back-off countdown where it stopped; it does not start again from the beginning.
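
A compact sketch of the back-off behaviour described above, using the 20-microsecond slots and the 31/63/.../1023 window sizes from the text (the random draw models what a single station would do before each attempt):

import random

SLOT_TIME_US = 20          # 802.11b/g slot duration from the text
CW_MIN, CW_MAX = 31, 1023  # contention-window limits from the text

def contention_window(attempt: int) -> int:
    """Window size for a given (re)transmission attempt: 31, 63, 127, ..., 1023."""
    return min((CW_MIN + 1) * (2 ** attempt) - 1, CW_MAX)

for attempt in range(5):
    cw = contention_window(attempt)
    slots = random.randint(0, cw)    # random back-off drawn by the station
    print(f"attempt {attempt}: window 0..{cw} slots, "
          f"drew {slots} slots = {slots * SLOT_TIME_US} microseconds")

# A station that is interrupted by another transmission freezes its remaining
# slot count and resumes the countdown later; it does not draw a new value.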


4. xDSL broadband access basic network architecture

xDSL is the term for the broadband access technologies based on Digital Subscriber Line (DSL) technology; the "x" signifies that there are various flavours of DSL.

DSL (Digital Subscriber Line) is a technology for bringing high-bandwidth information to homes and
small businesses over ordinary copper telephone lines.

Contrary to the analog modem network access, which uses up to 4-kHz signal frequencies on the telephone wires and is limited to 56-kbit/s data rates, DSL is able to achieve up to 52-Mbit/s data transmission rates by using advanced signal modulation technologies in the 25-kHz to 1.1-MHz frequency range.

xDSL refers to different variations of DSL, such as ADSL, HDSL, and RADSL.

DSL collectively refers to a group of technologies that utilize the unused bandwidth in the existing
copper access network to deliver high-speed data services from the distribution center, or central
office, to the end user.

DSL technology is attractive because it requires little to no upgrading of the existing copper
infrastructure that connects nearly all populated locations in the world. In addition, DSL is inherently
secure due to its point-to-point nature. A simple diagram of a typical DSL system is shown in Figure 1
below:

There are many variations of DSL, each aimed at particular markets, all designed to accomplish the
same basic goals. ADSL, or Asymmetric DSL, is aimed at the residential consumer market. ADSL
provides higher data rates in the downstream direction, from the central office to the end user, than
in the upstream direction, from the end user to the central office. Within the Internet connectivity-
based residential environment, small requests by the end user often result in large transfers of data
in the downstream direction. ADSL is a direct result of the asymmetric nature of the Internet and the
needs of the end user, and was originally designed for video-on-demand applications.

Symmetric DSL

provides the same data rate for upstream and downstream transmissions and includes the
following types:

SDSL: Targets the business sector

Asymmetric DSL

provides higher downstream than upstream data transmission rates and includes the following
types:
Symmetric and Asymmetric DSL

can transmit data both symmetrically and asymmetrically and includes the following type:

Asymmetric Digital Subscriber Line (ADSL) variants are by far the most popular DSL implementations, mostly due to their suitability for Internet browsing applications that are heavily geared towards downstream data transmission (downloads).

Network overlay

Range and bitrates

Evolution from ADSL to ADSL2,

ADSL 2 is similar to ADSL and typically the modems can be interchangeable. The difference
is that ADSL 2 offers a downstream rate of up to 12 Mbps, while the upstream rate remains
the same as regular ADSL, at 1 Mbps. The range of 5.5 km from the central office also remains the same.

ADSL2+ is the next generation of ADSL broadband. ADSL2+ services are capable of download speeds of up to 24 Mbit/s (depending on your equipment and the length of your copper line). ADSL2+ services are capable of upload speeds of up to 2.5 Mbit/s (Annex M) or 1 Mbit/s.

ADSL2+ broadband runs much faster than standard ADSL. This allows you to get faster speeds at longer distances from your telephone exchange (as per the graph), or to get ADSL where you previously have not been able to.

5. ADSL broadband access modulation and frame structures

An ADSL system uses existing telephone wire to allow bidirectional data communications between a
user and the telephone company's central office (CO). Some other popular services, such as an ISDN
line or a standard dial-up modem, also use the phone lines to communicate. However, those services
prevent the simultaneous operation of standard analog phone service on the same phone line. An
important advantage of ADSL is that it allows the plain old telephone system (POTS) signal to co-exist
with the ADSL data signal.

ADSL: Asymmetric Digital Subscriber Line: targets the private sector

Most people use more bandwidth on downlink

Initially: 8 to 10 Mbit/s downstream and 0.5 Mbit/s upstream

Suitable for applications such as web-browsing, MP3 downloading, video on demand (VoD)

Scaling issue:

With copper cables

All traffic aggregation starts at the AP. It is from here that the Erlang formula is applied, which accounts for the probability that only a fraction of the users are using the resources at the same time (users are not using all the capacity at the same time).

Outside the DP is where the users get the guaranteed bandwidth; inside, there is a lot of aggregation.
The important concept in this section is to remember the parts involved in the access part of the ADSL network. It is important to mention why the second figure divides the network into AP, FP and DP.

Below the figure, the number of users that each element aggregates is specified, so the AP, FP and DP elements each symbolize an aggregation level.

The closer we get to the core, the more reliability we need. Equipment, network management and line-fixing costs are higher towards the client, so those costs need to be kept low.

Symmetric

Symmetric => downstream & upstream rates are equal

Suitable for office type apps like Video conferencing

Types of symmetric xDSL

Symmetric DSL (SDSL): based on HDSL but uses a single pair; spectral compatibility is an issue (crosstalk & interference)

High bit-rate DSL (HDSL): the first of the symmetric DSL technologies; uses multiple wire pairs (2 or 3) to achieve high bit rates

CAP, DMT, Spectrum usage

CAP (Carrierless Amplitude Phase Modulation)

The CAP method works by taking the entire bandwidth of the copper wires and simply splitting those
up into 3 distinct sections or bands separated to ease interference. Each signal band is then
allocated a particular task.

The first band is in the signal range of 0 to 4 kHz and is used for telephone conversations (voice).

The second band occupies the range of 25 to 160 kHz, which is used as an upstream channel, while the third band covers from 240 kHz up to a maximum (depending on conditions) of 1.5 MHz and is used as a downstream channel. This method was simple and effective, as poor-quality wires or large amounts of interference wouldn't stop the xDSL from working; instead, they would just limit the range of the third band and result in slightly reduced speeds.
Pros

CAP: carrierless, meaning no power is spent on the carrier; this saves some energy.

Cons

BUT: not efficient, the quality is not so good.

Upstream BW (25 kHz-200 kHz): this part of the spectrum is simple to use as it has less attenuation, which means we can transmit with very low power. This simplifies the end-user equipment (we want the end-user transceivers to be as low-power and simple as possible).

Downstream BW: we can use high power at the operator's end equipment, as the cost is shared at this point by all users; hence very powerful transceivers are used.

DMT (Discrete Multitone)

The DMT system is much more complex. It works by splitting the entire frequency
range (bandwidth) into 247 channels of 4 kHz each and allocating a range of the lower
channels, starting at around 8 kHz, as bidirectional to provide upstream and
downstream channels. By splitting the bandwidth up in this way it effectively allows one
connection to operate as if there were 247 modems connected to it, each of which operating
at 4 kHz. The technology used in the DMT system is vastly more complex than that required
for the CAP method as each of the 247 channels requires constant monitoring and
assessment. If the system detects that a specific channel or range of channels are suffering
from interference or a degradation in quality then the data stream must be automatically
transferred to different channels. For the DMT system, one needs to place low-pass filters into
any telephone socket used for making voice calls, because voice calls take place below the 4-kHz
frequency and the filters simply block anything above this to prevent data signals interfering
with the telephone call.
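
The constant monitoring described above is essentially bit loading: each subchannel carries only as many bits as its measured SNR allows. A minimal sketch of that idea (the SNR profile and the 9.8-dB SNR gap are illustrative assumptions, not values from the text):

import math

SNR_GAP_DB = 9.8       # illustrative gap to capacity for a target error rate
MAX_BITS_PER_BIN = 15  # ADSL limit on bits per tone

def bits_for_bin(snr_db: float) -> int:
    """Bits a DMT bin can carry: floor(log2(1 + SNR/gap)), capped at 15."""
    snr = 10 ** ((snr_db - SNR_GAP_DB) / 10)
    return min(MAX_BITS_PER_BIN, max(0, int(math.log2(1 + snr))))

# Made-up SNR profile: good low bins, a noisy notch, weaker high bins.
snr_profile_db = [45, 42, 38, 12, 8, 30, 25, 18]
bits = [bits_for_bin(s) for s in snr_profile_db]
print(bits)                       # -> [11, 10, 9, 1, 0, 6, 5, 2]
print("symbol carries", sum(bits), "bits ->", sum(bits) * 4000, "bit/s")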

Spectrum Usage

The ADSL PHY was designed so that it could peacefully co-exist with the standard POTS spectrum.
The two services can co-exist because the ADSL spectrum only uses the frequencies above POTS. The
POTS spectrum goes from near DC to approximately 4 kHz. A frequency guard band is placed
between the POTS spectrum and the ADSL spectrum to help avoid interference. The ADSL spectrum
starts above the POTS band and extends up to approximately 1.1 MHz. The lower part of the ADSL
spectrum is for upstream transmission (from the customer to the CO) and the upper part of the
spectrum is for downstream transmission. There are actually two different ways that the upstream
and downstream spectra can be arranged.
In a frequency division multiplexed (FDM) system, the upstream and downstream spectra use separate frequency ranges. They can vary for different implementations, but typically the upstream band is from 25 to 200 kHz and the downstream band is from 200 kHz to 1.1 MHz. Other divisions are also permitted within the ADSL standard. This system is free from the occurrence of a type of interference called self-crosstalk. One drawback, however, is that the downstream bandwidth is reduced in comparison to an echo-cancelled system.

An echo-cancelled system allows the downstream band to overlap with the upstream band. The
upstream band still uses the frequencies from 25 to 200kHz, but the downstream band can now
extend over the upstream band. The main advantage of this system is that it significantly extends
the available downstream bandwidth. However, it does require echo-canceling circuitry due to the
full-duplex transmission. In addition, the presence of self-crosstalk causes additional interference.

Transmission aspects

DSL's advantage of using the legacy PSTN physical plant is offset by several factors. The subscriber's
data rate, or speed, reduces as the distance from the network operator's DSL modem (DSLAM, DSL
Access Multiplexer) to the subscriber's DSL modem increases. A common solution is to place the
DSLAM in the network in a remote terminal (RT), thus reducing the loop length to the subscriber.

DSL performance on the PSTN is also limited by the quality of the physical plant. Old cables damaged
by age, fatigue, corrosion, or even poor handling and installation practice, can reduce DSL capability.
Even the presence of lighter gauge wires (which can range from 0.4 mm to 0.9 mm) or the mix of
different wire diameters reduces capability and impairs DSL service.

Finally, DSL performance is affected by the number of subscribers served within a distribution area,
as well as the coexistence of different services in the same cable. Noise from TWP carrying DSL
degrades service on other pairs in the distribution cable.

Because bundled telephone cable contains many wires for many different users, crosstalk is a
common impairment. These wires radiate electromagnetically and can induce currents in other
wires in the cable. This interference effect is known as crosstalk. There are two basic types of
crosstalk and they both appear at the receiver as additive noise.

Near-end crosstalk (NEXT) occurs when a transmitter interferes with a receiver located on the same
end of the cable. Far-end crosstalk (FEXT) occurs when the transmitter interferes with a receiver on
the opposite end of the cable. The effect of NEXT is more severe than FEXT since the FEXT
interference travels the entire length of the cable and is attenuated by the time it reaches the
receiver.

In echo-cancelled ADSL, the upstream and downstream channels overlap. Since the same frequency
band is being used for transmission and reception, the system will suffer from self- and foreign
crosstalk. However, in FDM ADSL the upstream and downstream channels use separate frequency
bands. This system will not suffer from self-crosstalk, although foreign crosstalk will still be present.

Discrete Multi-Tone (DMT), the most widely used modulation method, separates the ADSL signal
into 255 carriers (bins) centred on multiples of 4.3125 kHz. DMT has 224 downstream frequency bins
and up to 31 upstream bins. Bin 0 is at DC and is not used. When voice (POTS) is used on the same
line, then bin 7 is the lowest bin used for ADSL.

The centre frequency of bin N is (N x 4.3125) kHz. The spectrum of each bin overlaps that of its
neighbours: it is not confined to a 4.3125 kHz wide channel. The orthogonality of COFDM makes this
possible without interference.

Up to 15 bits per symbol can be encoded on each bin on a good quality line.

The frequency layout can be summarised as:

30 Hz-4 kHz, voice.

4-25 kHz, unused guard band.

25-138 kHz, 25 upstream bins (7-31).

138-1104 kHz, 224 downstream bins (32-255).

Typically, a few bins around 31-32 are not used in order to prevent interference between upstream
and downstream bins either side of 138 kHz. These unused bins constitute a guard band to be
chosen by each DSLAM manufacturer - it is not defined by the G.992.1 specification.
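
A short sketch that reproduces the bin layout above (centre frequency = N x 4.3125 kHz, upstream bins 7-31, downstream bins 32-255) and the raw upper bound at 15 bits per bin; the 4000-symbols-per-second DMT rate is a standard figure assumed here, not stated explicitly above:

TONE_SPACING_KHZ = 4.3125
SYMBOLS_PER_SECOND = 4000   # standard DMT symbol rate (assumption here)
MAX_BITS_PER_BIN = 15

def bin_center_khz(n: int) -> float:
    return n * TONE_SPACING_KHZ

upstream_bins = range(7, 32)      # 25 bins
downstream_bins = range(32, 256)  # 224 bins

print("upstream centres:   %.1f - %.1f kHz" % (bin_center_khz(7), bin_center_khz(31)))
print("downstream centres: %.1f - %.1f kHz" % (bin_center_khz(32), bin_center_khz(255)))

# Raw upper bound if every downstream bin carried the maximum 15 bits:
peak = len(downstream_bins) * MAX_BITS_PER_BIN * SYMBOLS_PER_SECOND
print("downstream upper bound ~ %.2f Mbit/s" % (peak / 1e6))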

Transceivers architecture
In the ADSL transmitter shown in Figure 2, an input bit stream is first partitioned into sub-streams
using a serial-to-parallel (S/P) converter. Each sub-stream is then encoded using quadrature
amplitude modulation (QAM), which produces a complex number representing each encoded bit
sub-stream as mentioned in an earlier section. The outputs of the QAM encoder are mirrored and
conjugated before they enter an N point IFFT, where N/2 is the number of sub-channels. The
mirroring creates real values at the output of the IFFT.

The IFFT then maps each QAM symbol into orthogonal frequency bins producing a discrete multi-
tone symbol of N samples. To form a frame, the last v samples of the symbol, known as the cyclic
prefix (CP) are copied and prepended to the symbol. This provides a buffer against Interblock
Interference (IBI), which was discussed earlier. Unfortunately the addition of the CP decreases the
transceiver power efficiency by a factor of N/(N+v) [7]. The final two stages serialize the data and
convert it to analog via the parallel-to-serial (P/S) converter and digital-to-analog (DAC) converter
respectively.

An ADSL receiver receives the data through the channel, which is modelled as an FIR filter. The
operation of the receiver is the dual of that of the transmitter, plus the addition of an equalizer. The
equalizer has two tasks: 1) reduce ISI in the time domain and shorten the channel impulse response
to the CP limit, and 2) compensate for magnitude and phase distortion in the frequency domain [7].
The first task is done by the time-domain equalizer (TEQ), while the latter is performed by the
frequency-domain equalizer (FEQ).
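
A condensed sketch of the transmitter path just described: QAM symbols are placed on the bins, mirrored as complex conjugates so the IFFT output is real, and a cyclic prefix is prepended. N and the CP length are small illustrative values, not the real ADSL parameters:

import numpy as np

N = 16        # IFFT size (illustrative only)
CP_LEN = 4    # cyclic-prefix length v (illustrative only)

# One QAM symbol per usable bin (bins 1 .. N/2 - 1); bins 0 and N/2 stay empty.
qam = np.array([1+1j, -1+1j, 1-1j, -1-1j, 1+1j, -1-1j, 1-1j], dtype=complex)

# Build the Hermitian-symmetric spectrum so that the IFFT output is real-valued.
spectrum = np.zeros(N, dtype=complex)
spectrum[1:1 + len(qam)] = qam
spectrum[-len(qam):] = np.conj(qam[::-1])

dmt_symbol = np.fft.ifft(spectrum).real                     # real time-domain symbol
frame = np.concatenate([dmt_symbol[-CP_LEN:], dmt_symbol])  # prepend the CP

print("imag residue:", np.max(np.abs(np.fft.ifft(spectrum).imag)))  # ~0
print("frame length:", len(frame))                                  # N + CP_LEN samples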

Other xDSL technologies

6. PON and AON network architecture


Overall architecture FTTx

The effective response from telephone operating companies has been to replace the PSTN with fibre
optics. Optical fibre is capable of delivering bandwidth intensive integrated voice, data and video
services in the access network to distances beyond 20 km, e.g. more than 4 times the distances
allowed with TWP cables through the DSL systems.

A fibre optic wire-line broadband network can have several configurations, such as Fibre-to-the-
Home (FTTH), Fibre-to-the-Building (FTTB), Fibre-to-the-Curb (FTTC) and Fibre-to-the-Node (FTTN).
In each case the optical network is terminated at an Optical Network Unit (ONU also known as an
Optical Network Terminal, or ONT).

The versions of FTTx are differentiated by the location of the ONU. For FTTH, the ONU is located on the subscriber's premises and serves as the demarcation between the operator's and customer's facilities. For FTTB and FTTC, the ONU serves as a common interface for several subscribers (e.g., in the basement of an apartment building or on a telephone pole), with the service delivered over the customers' existing TWP drop cables. For FTTN, the ONU is located in an active network node serving dozens to hundreds of subscribers, from which service is delivered by existing TWP local loops.
PON

There are two common architectures for FTTx: point-to-point (PtP) and the Passive optical
network (PON).

In a PtP configuration, enterprise local area network (LAN) architecture is applied to the telephone
access network, with a dedicated optical fibre connection (one or two fibres) from the ONU to the
telephone exchange.

In a PON network, several ONUs, typically up to 32, share a single fibre connection to the network, which is typically split at a passive network node. An example is shown in Figure P-4. A future
configuration of PON, Wavelength Division Multiplexing (WDM) PON, replaces the splitter with a
grating so that each subscriber can be served with a dedicated channel, i.e., wavelength.

In the FTTH approach the optical fibre in the local access network can be used in a point-to-point
topology, with a dedicated fibre running from the local exchange to each end-user subscriber. While
this is a simple architecture, in most cases it is cost prohibitive due to the fact that it requires
significant outside plant fibre deployment as well as connector termination space in the local
exchange. Considering N subscribers at an average distance L km from the central office, a point-to-
point design requires 2N transceivers and N * L total fibre length (assuming single fibre is used for
bidirectional transmission).

To reduce fibre deployment, it is possible to deploy a remote switch (concentrator) close to the
customer (FTTC, FTTB). This reduces fibre consumption to only L km (assuming negligible distance
between the switch and customers), but actually increases the number of transceivers to 2N+2 since
there is one more link added to the network. In addition, a curb-switched architecture requires
electrical power as well as backup power at the curb unit. Currently, one of the highest costs for
local exchange carriers is providing and maintaining electrical power in the local loop. Moreover, as
the service is given over the existing copper subscriber lines, the maximum speed achievable with
very high speed digital subscriber line (VDSL) systems is limited with respect to that of fibre-based
systems.

An alternative solution to the two above is to replace the hardened active curb-side switch with an
inexpensive passive optical component. Passive optical network (PON) is a technology viewed by
many network operators as an attractive solution to minimize the amount of optical transceivers,
central office terminations and fibre deployment. A PON is a point-to-multipoint optical network
with no active element in the signal path from source to destination. The only interior elements used
in a PON are passive optical components such as fibre, splices and splitters. Access networks based
on single-fibre PON require only N+1 transceivers and L km of fibre. The general structure of a PON
network is shown in Figure 2-1.
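
The transceiver and fibre counts quoted above can be compared directly; a tiny sketch with illustrative numbers (N = 32 subscribers at L = 20 km):

N = 32   # subscribers (illustrative)
L = 20   # average distance to the central office, km (illustrative)

architectures = {
    "point-to-point": {"transceivers": 2 * N,     "fibre_km": N * L},
    "curb switch":    {"transceivers": 2 * N + 2, "fibre_km": L},
    "passive PON":    {"transceivers": N + 1,     "fibre_km": L},
}
for name, cost in architectures.items():
    print(f"{name:15s}: {cost['transceivers']:3d} transceivers, "
          f"{cost['fibre_km']:4d} km of fibre")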

OLT and ONUs

At the network side there is an optical line termination (OLT), which is usually installed at the local
central office. The OLT is the interface between all the users connected to the given PON and the
metro network. Such users have access to the services offered by the network, through the network
terminal (NT), and to the optical network through the ONU optical network unit/optical network
termination (ONU/ONT). The OLT and the ONUs are connected via an optical distribution network
(ODN), which in many cases has a point-to-multipoint configuration with one or more splitters.
Typical splitting factors include 1:16 / 1:32 / 1:64 or more.
The OLT handles two flow directions: upstream (collecting and distributing different types of data and voice traffic from users) and downstream (getting data, voice and video traffic from the metro network or from a long-haul network and sending it to all ONT modules on the ODN).

PON splitters can be placed near the OLT or at the user sites, depending on the availability of fibres
in the ODN, and/or on the ODN deployment strategy adopted by network operators.

The PON shown in Figure 2-1 is completely passive and the maximum distance between the OLT and
the ONU is typically limited to 20 km at nominal split ratios. However, there are also solutions that
include deployment of active elements in the network structure (e.g., optical amplifiers) when it is
necessary to achieve a longer reach (e.g., up to 60 km) or to reduce the number of central office
sites (central office concentration), or to connect a larger number of users to a single OLT port (e.g.,
where higher power budget is required due to a higher split ratio). Such solutions are typically
referred to as long-reach PON

As shown in Figure 2-1, a PON can be deployed in a FTTH architecture, where an ONU/ONT is
provided at the subscriber's premises, or in FTTB, FTTC or fibre-to-the-cabinet (FTTCab)
architectures, depending on local demands. In the latter cases, the optical link is terminated at the
ONU, and the last stretch to the subscriber's premises is typically deployed as part of the copper
network using, for example, existing xDSL lines. Various types of xDSL family technologies, e.g.,
VDSL2, are typically used.

Media access and modulation

In the upstream channel (from subscriber to the OLT), access to a shared fibre channel is guaranteed
by the use of the time division multiple access (TDMA) mechanism, where a certain bandwidth is
assigned to each ONU/ONT by the OLT. In the downstream channel (from the OLT to the
subscribers), there is only one transmitter located at the OLT, and data to individual ONUs/ONTs is
transmitted using time division multiplexing (TDM). Figure 2-2 shows the use of these techniques in
downstream and upstream channels.

WDMA, CDMA, TDMA, SCMA

The downstream channel works in continuous mode, i.e., the cells/packets to be sent to the
different ONTs are queued with no time gap between them. Idle cells/packets are generated by the
OLT when necessary, in order to assure a continuous data flow in the downstream direction. This
allows the ONTs to recover their own clock from the downstream data flow.

The upstream channel works in burst mode instead, and when the cells/packets reach the OLT
receiver, they have different amplitude, because the branches of the ODN have very likely different
lengths and attenuation. A suitable guard time is guaranteed between consecutive cells/packets by
the media access control (MAC) protocol. In the case of low upstream traffic, the OLT receiver must be
able to cope with the reception of a cell/packet after a relatively long period of silence. Moreover,
the length of the upstream cells/packets is not fixed, thanks to the dynamic bandwidth assignment
(DBA) algorithm, which assigns more bandwidth to the ONTs that have more upstream traffic to be
transmitted at a particular moment. In the downstream direction, an ONT can receive more
cells/packets than another one, but this is not reflected in Figure 2-2, for the sake of simplicity.
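
A toy sketch of the upstream grant idea behind DBA: the OLT looks at how much each ONT has queued and divides the next cycle's transmission opportunities roughly in proportion. This is a simplified stand-in, not the standardized algorithm; the queue sizes and slot count are made up:

# Queued upstream traffic reported by each ONT (in, say, kilobytes).
queued = {"ONT-1": 120, "ONT-2": 10, "ONT-3": 0, "ONT-4": 70}
SLOTS_PER_CYCLE = 100  # upstream transmission opportunities in one cycle

total = sum(queued.values())
grants = {}
for ont, backlog in queued.items():
    # Proportional share of the cycle; an idle ONT still gets 1 slot so it
    # can report new traffic (a common DBA refinement, assumed here).
    grants[ont] = max(1, round(SLOTS_PER_CYCLE * backlog / total)) if total else 1

print(grants)   # e.g. {'ONT-1': 60, 'ONT-2': 5, 'ONT-3': 1, 'ONT-4': 35}
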
WDMA

In order to reduce the need for dual fibre ODNs, the aforementioned PON systems can take
advantage of the wavelength division multiplexing (WDM) technique, where downstream and
upstream channels are transmitted at different wavelengths on the same optical fibre, e.g., 1260-
1360 nm for the upstream and 1480-1500 nm for the downstream. It is also possible to add another
optical signal, to for example, carry radio-frequency-video signals in the bandwidth 1530-1580 nm
(called the enhancement band).

SCMA: Sub-carrier multiplexing with lasers modulated at different frequencies. (Subcarrier
multiple access enables dedicated point-to-point connectivity over PON architecture by
allocating a different RF frequency for each subscriber. In this scheme, each subscriber
transmits at essentially the same wavelength but is allotted a unique RF frequency for
encoding its data. A single receiver at the OLT detects the N different RF frequencies and
demultiplexes them in the electrical-frequency domain. The RF frequencies are the
subcarriers, while the transmitted upstream optical wavelength is the main carrier)

TDMA: Turning lasers on and off based on grants given by the OLT (scheduling, ranging)

WDMA: Different wavelength for different channels.

CDMA: the multiple users share a common upstream wavelength. Each subscriber is
assigned a unique and effectively orthogonal code for transmission at any time regardless of
when the others are transmitting. At the OLT receiver, all the overlapping codes are
detected using a single receiver and correlated with sets of matching codes associated with
each user-data channel. High correlation peaks occur for matched codes, and very small
correlation peaks occur for the mismatched codes. This allows simultaneous and
independent data transmissions to occur through a single-OLT receiver

GPON

GPON stands for Gigabit Passive Optical Networks. GPON is defined by ITU-T recommendation
series G.984.1 through G.984.6. GPON can transport not only Ethernet, but also ATM and TDM
(PSTN, ISDN, E1 and E3) traffic. A GPON network consists of mainly two active transmission equipment types, namely the Optical Line Termination (OLT) and the Optical Network Unit (ONU) or Optical Network Termination (ONT). GPON supports triple-play services, high bandwidth, long reach (up to 20 km), etc.

EPON is based upon IEEE 802.3 Ethernet that was modified to support point-to-multipoint (P2MP)
connectivity. Ethernet traffic is transported natively and all Ethernet features are fully supported.
GPON, on the other hand, is fundamentally a transport protocol, wherein Ethernet services are
adapted at the OLT and ONT Ethernet interfaces and carried over an agnostic synchronous framing
structure from end to end.

Framing/service adaption: The GPON Transmission Convergence (GTC) layer is responsible for
mapping service-specific interfaces (e.g. Ethernet) into a common service-agnostic framework.

Ethernet frames are encapsulated into GPON Encapsulation Method (GEM) frames, which have a GFP-like format (derived from the Generic Framing Procedure, ITU-T G.7041). GEM frames are, in turn, encapsulated into a SONET/SDH-like GTC frame (in both upstream and downstream directions) that is transported synchronously every 125 µs over the PON.

In contrast, EPON carries Ethernet frames natively on the PON with no changes or modifications.
There is no need for extra adaptation or encapsulation.
Since PON is P2MP in nature, the OLT must be able to uniquely identify and communicate with each
ONT. EPON uses a Logical Link ID (LLID) to uniquely address an ONT. In addition, VLAN_IDs are used
for further addressing in order to deliver VLAN-based services. In the downstream direction, the OLT
attaches the LLID to the preamble of the frame to identify the destination ONT.

In GPON, one or more Traffic Containers (T-CONT) are created between the OLT and an ONT. This T-
CONT allows for the emulation of a point-to-point virtual connection between the OLT and ONT and
the subsequent TDM multiplexing of the downstream bandwidth between T-CONTs. Within each T-
CONT there can exist multiple Port IDs to identify individual ONT ports within a single ONT.

Optionally, both GPON and EPON support DBA. This is used for real-time variation of timeslot
allocation to ONTs, which increases throughput as a function of upstream demand.

7. Power Line Communication (PLC)

PLC system characteristics

Not designed for high frequencies;

Low power (!) signals

Need high frequencies: current power lines are designed for 50-60 Hz (up to 400 Hz)

Government regulations specify maximum emission levels -- must not interfere with existing uses

Power line hostile to signal propagation (for data communication)

Attenuation

Noise

Electromagnetic compatibility

Power line communication uses the electric power line as a transmission medium for data communication.
Motivation:

No new wire !!!

Extensive infrastructure - "Every" building !!!

PLC Challenges

Attenuation: the decrease in amplitude of an electrical signal

Depends on:

Frequency-varied, time-varied, and distance-varied;

Load variation;

Frequency dependent fading

Multiple reflection points in medium

Different types of wires

Sharp turns in wiring

Longer impulse response => inter-symbol interference

Load changes affect channel

Noise in PLC: strong, time-varying, a sum of noise from electrical appliances

3 categories of appliance noise:

Continuous noise (Background Noise)

Impulsive noise - periodic impulsive, e.g. switching power supplies

Impulsive noise - random impulsive noise, e.g. light dimmer switches, hair dryers, blenders, etc.

Radio interference

Electromagnetic Compatibility

Power lines are leaky - Radiate high frequency electromagnetic signals

Require filters to prevent leakage

Interference with nearby wireless devices

Other disturbance

Homeplug Series MAC layer properties


Carrier Sense Multiple Access (CSMA). CSMA with overload detection has been proposed for PLC.
CSMA is a contention based access method in which each station listens to the line before
transmitting data. CSMA is efficient under light to medium traffic loads and for many low-duty-cycle
bursty terminals (e.g. Internet browsing).

i. Collision Detection (CSMA/CD): the sender monitors the channel for a collision while
transmitting. When it senses a collision, it waits a random amount of time before retransmitting.
On power lines, however, the wide variation of received signal and noise levels makes collision
detection difficult and unreliable.

ii. Collision Avoidance (CSMA/CA): as in the CSMA/CD method, each device listens to the signal
level to determine when the channel is idle. Unlike CSMA/CD, it then waits for a random amount of
time before trying to send a packet. Packet size is kept small because of the PLC channel's
hostile characteristics; though this means more overhead, the overall data rate is improved since
fewer retransmissions are needed.

The channel access mechanism used by the HomePlug MAC is a variant of the well-known CSMA/CA
protocol. The overall protocol includes a carrier sensing mechanism, a priority resolution mechanism
and a back-off algorithm.
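
The following is a rough, simplified sketch of how one such access cycle of carrier sensing, priority resolution and random back-off could look (this is not the actual HomePlug algorithm; the window sizes and priority handling are illustrative):

    import random

    # Simplified CSMA/CA sketch in the spirit of the HomePlug MAC: stations first
    # resolve priority, then the winners draw a random back-off slot.  Contention
    # window sizes are invented; the real standard defines its own schedule.

    def contend(stations):
        """stations: list of (name, priority, backoff_stage) tuples."""
        top = max(p for _, p, _ in stations)                # priority resolution period
        contenders = [s for s in stations if s[1] == top]   # only highest priority contends
        draws = {name: random.randrange(8 * 2 ** stage)     # window doubles per stage
                 for name, _, stage in contenders}
        winner_slot = min(draws.values())
        winners = [n for n, d in draws.items() if d == winner_slot]
        return winners[0] if len(winners) == 1 else "collision"

    random.seed(1)
    print(contend([("tv", 3, 0), ("laptop", 3, 0), ("meter", 1, 0)]))
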

HomePlug 1.0

The carrier sense mechanism helps HomePlug nodes to synchronize with each other. At the heart of
this mechanism are the delimiters. HomePlug technology uses a combination of Physical Carrier
Sense (PCS) and Virtual Carrier Sense (VCS) to determine the state of the medium (i.e., if the
medium is idle or busy and for how long).

PCS is provided by the HomePlug PHY and basically indicates whether a preamble signal is detected
on the medium. VCS is maintained by the HomePlug MAC layer and is updated based on the
information contained in the delimiter. Delimiters contain information not only on the duration of
current transmission but also on which priority traffic can contend for the medium after this
transmission. PCS and VCS information is maintained by the MAC to determine the exact state of the
medium.

The communication in power lines can be divided into two main layers:

The Physical Layer and the Medium Access Control (MAC) Layer.

The Physical Layer defines the modulation techniques used to transmit data over the power lines,
while the MAC protocol specifies a resource-sharing strategy, i.e. how multiple users access the
network transmission capacity according to a fixed resource-sharing protocol. Communicating at the
PLC Physical Layer demands robust modulation techniques such as Frequency Shift Keying (FSK),
Code-Division Multiple Access (CDMA) and Orthogonal Frequency Division Multiplexing (OFDM).

For low cost, low data rate applications, such as power line protection and telemetering, FSK is seen
as a good solution.

For data rates up to 1 Mbit/s, the CDMA technique may provide an effective solution. For higher
data rates beyond that, however, OFDM is the technology of choice for PLC.

Priority Resolution Period

--Only frames of highest priority contend

Contention Period

--Random back-off procedures

--Initial number of slots depends on priority level

--Resume the countdown in the next available contention period

In PLC, the contention window is doubled every time, to ensure fairness in the network. In
addition, given the small window size (7), there is a high chance that contending stations will
pick the same slot and collide.

ARQ in HomePlug:
Three types of acknowledgement frames: ACK, NACK and FAIL.
Collisions are inferred from the absence of an expected response or from frame control errors.

With OFDM:
Under frequency-selective fading, retransmission is needed only for the affected symbols; with a
single-carrier mode, everything must be retransmitted.

Why do we use OFDM?

It supports very high data rates.

The channels (subcarriers) are sensed all the time; if a particular channel is really bad, it is
switched off, i.e. that frequency is simply not used in the OFDM signal.
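
A hedged sketch of this idea (adaptive tone mapping, with an invented SNR threshold) is shown below: subcarriers whose measured SNR falls below the threshold are simply excluded from the map used for subsequent transmissions.

    # Illustrative sketch of adaptive tone mapping in an OFDM-based PLC modem:
    # continuously measured per-subcarrier SNR decides which tones stay on.
    # The 10 dB threshold and the SNR figures are invented for the example.

    def build_tone_map(snr_db_per_carrier, threshold_db=10.0):
        return [snr >= threshold_db for snr in snr_db_per_carrier]

    measured_snr = [25.0, 22.5, 4.0, 18.0, 2.5, 30.0]   # dB, one value per subcarrier
    tone_map = build_tone_map(measured_snr)
    print(tone_map)                                      # [True, True, False, True, False, True]
    print(f"{sum(tone_map)}/{len(tone_map)} subcarriers carry data")
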

HomePlug AV
Channel Access

8. USB & Serial communication

Parallel communication: Examples are printers and scanners

sending a whole byte (or more) of data over multiple parallel wires
Control bits to determine the timing for reading and writing the data

Serial Communication

sending data bit by bit over a single wire

Synchronous serial requires the clock signal

Data rate for the link must be the same for the
transmitter and the receiver

RS-232, USB, Firewire, Ethernet, ..

Pros: small connectors, since only one port is needed for data transmission.

Cons: slower compared to parallel (a parallel link sends a whole byte at once).

Buses: contention and latency increases with number of devices since it is shared by all devices.

Synchronous Serial Communication: the clock signal is transmitted from the source.

Asynchronous Serial Communication: transmitter and receiver do not share a common clock, but the
speed must be agreed upon beforehand.

Fundamentals about serial communications

Synchronous serial: one line for clock and one for data

Asynchronous: saves one line (wire), but extra control bits (start and stop bits) must be added to
mark the start and end of the data.

RS-232 Transmission

Transmission process (9600 baud, 1 bit = 1/9600 = 0.104 ms)

Transmit idles high (when no communication).

It goes low for 1 bit (0.104 ms)

It sends out data, LSB first (7 or 8 bits)

There may be a parity bit (even or odd error detection)

There may be a stop bit (or two)
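
A minimal sketch of this framing (here with 8 data bits, even parity and one stop bit; the 9600-baud bit period is used only for illustration) could look like this:

    # Sketch of asynchronous RS-232 framing at the logic level: idle high, one low
    # start bit, data LSB first, optional even parity, one high stop bit.
    # Returns the sequence of logic levels clocked out at 9600 baud.

    def rs232_frame(byte, parity="even"):
        bits = [0]                                   # start bit (line goes low)
        data = [(byte >> i) & 1 for i in range(8)]   # LSB first
        bits += data
        if parity == "even":
            bits.append(sum(data) % 2)               # makes the total number of 1s even
        bits.append(1)                               # stop bit (line returns high)
        return bits

    BIT_TIME = 1 / 9600                              # ~0.104 ms per bit
    frame = rs232_frame(0x41)                        # ASCII 'A'
    print(frame, f"{len(frame) * BIT_TIME * 1000:.3f} ms on the wire")
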

Difference between RS-232 and USB port

USB is intended as a high speed upward extensible fully standardised interface between 1
computing device using a single port and N peripherals using one port each with all control
being accomplished by signals within the data stream. USB is formidably difficult to provide
low level interfaces for. "Simple" interfaces are common but these provide and hide a very
large degree of related complexity.

RS232 was intended as a 1:1 relatively low speed semi-standardised interface between 1
computing device and 1 peripheral per port with hardware control being an integral part of
operation. RS232 is relatively easy to provide low level physical interfaces for.

There are other pins for control, but they are not necessarily used; their main function is flow
control (buffer retention). The protocol in RS-232 is quite simple. Both sides are initially idle
(each TX held at the idle level); when a side wants to transmit a byte, it sends a start bit (a
transition to the opposite level), sends each bit of the byte sequentially, and then finishes with
one or more stop bits. Optionally, there may be a parity bit. It is assumed that both sides have
previously agreed on the same configuration for start and stop bits and on the timing for sending
each bit (the baud rate).

There may be more signaling for error control, but that is not required. So an RS-232 port can
easily be implemented using I/O pins on any microcontroller; the only thing needed is voltage
conversion, since RS-232 lines use roughly ±12 V levels while microcontrollers usually work at
3.3 V.

USB uses a pair of differential lines, in which a bit is made high by placing a voltage difference
between them in one direction, and low by placing the same difference in the other direction. This
is much more effective at rejecting noise, which is why USB can span longer distances and reach
much higher bandwidths. Both sides transmit and receive over the same pair, and there is a complex
data protocol to detect collisions, do error correction, discover device characteristics, etc., not
to mention the support in the spec for standard device-class protocols like mice, keyboards, etc.
In short, to have a USB port you either need a dedicated IC for it or firmware in your
microcontroller that is absolutely not trivial to write, especially if you want to support specific
device capabilities.

USB system and data transmission

USB provides several benefits compared to other communication interfaces, such as ease of use, low
cost, low power consumption, and fast and reliable data transfer.

USB = Universal Serial Bus

Replacement for different serial and parallel interfaces;

USB is able to hot swap or hot plug

Plug and Play


Connecting a new USB device => automatic process

USB 1.0 (1996): started with very low speeds, because only low-speed devices such as keyboards and
mice were being connected to PCs.

The market's increasing need for bigger storage and faster communication links led to the
development of USB 2.0 in early 2000. This new USB standard kept compatibility with LS and FS
and added High Speed (HS) at 480 Mbit/s.

USB 3.0: adds SuperSpeed at about 5 Gbit/s, driven by the need to transfer large data files (music,
video, etc.).

The USB specification defines three data speeds, shown in the table below. These speeds are the
fundamental clocking rates of the system, and as such do not represent possible throughput, which
will always be lower as a result of the protocol overheads.

Name         Speed
Low Speed    1.5 Mbit/s
Full Speed   12 Mbit/s
High Speed   480 Mbit/s

Low Speed

This was intended for cheap, low data rate devices like mice. The low speed captive cable is
thinner and more flexible than that required for full and high speed. LS was, and still is, used
for human interface devices that do not require a lot of bandwidth, such as keyboards, mice and
joysticks.

Full Speed

This was originally specified for all other devices. FS was widely adopted for mass storage
devices, printers, scanners and audio devices.

High Speed

The high speed additions to the specification were introduced in USB 2.0 as a response to the
higher speed of Firewire.

Note: low-speed and full-speed modes were present in the original USB 1.0 specification, while
high-speed was added in USB 2.0. This explains the oddity that full-speed is definitely not the
maximum speed USB can transmit anymore, though it remains confusing at first glance. The speeds
listed are theoretical maximums and are not reached in practice: not only do interference and
similar effects reduce actual transmission rates, but some overhead is required for the
communication between the USB controller and the connected device that is not visible to the user.
Additionally, in places where data rates are most visible to the user, such as the transfer of
large files to an external storage device, the speed is limited by the write speed of the storage
device rather than by the data transfer rate of the USB connection.
Connectors

Host Responsibility

Detect devices

Manage Data Flow

Error checking

Provide power

Exchange data

USB Basic - Host Controller

Different specifications for host controllers are available:

Open Host Controller Interface (OHCI): for USB 1.0 & 1.1, from Compaq, Microsoft and National
Semiconductor. Adopted as the standard by the USB-IF. Hardware-driven, more efficient.

Universal Host Controller Interface (UHCI): for USB 1.0 & 1.1, from Intel (other companies need to
pay). Software-driven, cheaper to implement in silicon.

Enhanced Host Controller Interface (EHCI): for USB 2.0.

eXtensible Host Controller Interface (xHCI): for USB 3.0. Replaces UHCI/OHCI/EHCI - finally !!

System Speed Compatibility

Bus rate vs. data rate: the actual data rates are affected by bus loading, transfer type, overhead,
the OS, and so forth.

Speed detection: performed when a USB device is connected to a host, using pull-up resistors.

USB Cables

USB uses several different types of connectors, each suited to different devices. Type A is used
primarily on computers and other "host" devices, which typically have a fair amount of space for
connectors. Printers, portable hard drives and other medium-sized devices usually use Type B
connectors, while Mini-A and Mini-B ports are typically reserved for cell phones, MP3 players and
other handheld devices. Micro-A and Micro-B are, however, slowly replacing the vast majority of
Mini-A and Mini-B applications.

USB cables themselves are made with a pair of twisted copper wires, known as Unshielded Twisted
Pair (UTP). Using twisted pairs of wires instead of straight wires reduces the effects of crosstalk
between the wires, as well as helping to mitigate the effects of noise from the environment
surrounding the cable. Attenuation of the signal in the cable limits the length of USB cables to
approximately 3 meters. USB also has a maximum round-trip delay of 1.5 microseconds, which limits
the maximum cable length to about 5 meters due to the propagation speed of electrical signals
through copper cable.

In the standard connector pinout, Pin 1, usually simply labelled "+", is used as a positive voltage
carrier, supplying a regulated 5 volts to power the connected device if necessary. This is relative
to Pin 4, which is used as ground and is often labelled "-". Pins 2 and 3 are the "data" pins,
called D- and D+ respectively. These form the twisted pair and use differential signalling to aid
in the removal of noise from the signal.
USB Encoding

USB uses an encoding scheme called Non-Return to Zero Inverted (NRZI). Rather than mapping bits
directly to voltage levels, NRZI encodes bits as transitions: in USB, a logic 0 is represented by
a change in level and a logic 1 by no change. NRZI is, in modern times, used primarily for
short-distance, directed communication between devices, making it ideal for the purposes USB was
designed for. Because long runs of ones produce no transitions and make clock recovery difficult,
USB implements a practice called bit stuffing, which inserts a zero after every six consecutive
ones in the data stream to ensure a transition at least once every seven bits. An example of NRZI
encoding is sketched below:
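
The sketch below is a simplification (the real USB PHY works on differential J/K line states rather than abstract 0/1 levels): bit stuffing is applied first, then NRZI encoding, where a 0 toggles the line level and a 1 leaves it unchanged.

    # Sketch of USB-style bit stuffing followed by NRZI encoding (simplified:
    # abstract 0/1 line levels instead of the real differential J/K states).

    def bit_stuff(bits):
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b == 1 else 0
            if run == 6:               # after six consecutive ones...
                out.append(0)          # ...insert a zero to force a transition
                run = 0
        return out

    def nrzi_encode(bits, initial_level=1):
        level, out = initial_level, []
        for b in bits:
            if b == 0:                 # a 0 is encoded as a change of level
                level ^= 1
            out.append(level)          # a 1 leaves the level unchanged
        return out

    data = [1] * 8 + [0, 1, 0]
    stuffed = bit_stuff(data)
    print(stuffed)                     # a 0 has been inserted after the first six 1s
    print(nrzi_encode(stuffed))
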

Initially, the clock is set using a SYNC field sent by the host, which contains the maximum number
of transitions in a certain number of bits (varying by speed); the bit stuffing described above
exists simply for the purpose of clock recovery. These stuffed bits are recognized and discarded by
the receiver. Data is transmitted in discrete units called packets, each of which includes a
checksum calculated from the actual data being transmitted, called a CRC or Cyclic Redundancy
Check. If the CRC calculated from the received data does not match the CRC received in the packet,
an error is detected and the packet is resent. After three sequential unsuccessfully received
packets, the sender and the program waiting for input are notified that the transfer was
unsuccessful.
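
As a hedged illustration of the CRC idea, the sketch below uses the polynomial x^16 + x^15 + x^2 + 1, the one usually cited for USB data packets; the bit ordering, reflection and final inversion details of the real USB CRC are deliberately glossed over.

    # Sketch of a CRC-16 check with polynomial x^16 + x^15 + x^2 + 1 (0x8005).
    # Simplified: real USB CRC handling of bit order and inversion is omitted.

    POLY = 0x8005

    def crc16(data, crc=0x0000):
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ POLY) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
        return crc

    payload = b"hello usb"
    sent_crc = crc16(payload)
    # The receiver recomputes the CRC over the received payload and compares:
    ok = crc16(payload) == sent_crc
    print(hex(sent_crc), "match" if ok else "error -> request retransmission")
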

9. Bluetooth

802.15.1: Bluetooth
Goals

Cable replacement (e.g. for RS-232)

Low cost,

low power

small size

Bluetooth physical layer properties

Up to version 1.2 of the standard, the maximum data rate of a Bluetooth transmission
channel is 780 kbit/s.

All devices that communicate directly with each other have to share this data rate. The
maximum data rate for a single user thus depends on the following factors:

the number of devices that exchange data with each other at the same time;

activity of the other devices.

The highest transmission speed can be achieved


if only two devices communicate with each other and only one of them has a large amount
of data to transmit.

In 2004, the Bluetooth 2.0+EDR (Enhanced Data Rate) standard [2] was released. This enables data
rates of up to 2178 kbit/s by using additional modulation techniques.

For bidirectional data transmission, the channel is divided into timeslots of 625
microseconds. All devices that exchange data with each other thus use the same channel
and are assigned timeslots at different times. This is the reason for the variable data rates
shown in Figure 6.1. If a device has a large amount of data to send, up to five consecutive
timeslots can be used before the channel is given to another device. If a device has only a
small amount of data to send, only a single timeslot is used. This way, all devices that
exchange data with each other at the same time can dynamically adapt their use of the
channel based on their data buffer occupancy.

Minimizing interference on the shared 2.4 GHz band

As Bluetooth has to share the 2.4-GHz ISM frequency band with other wireless technologies
like Wireless Local Area Network (WLAN), the system does not use a fixed carrier frequency.
Instead, the frequency is changed after each packet. A packet has a length of either one,
three or five slots. This method is called frequency-hopping spread spectrum (FHSS). This
way, it is possible to minimize interference with other users of the ISM band. If some
interference is encountered during the transmission of a packet despite FHSS, the packet is
automatically retransmitted.

For single-slot packets (625 microseconds), the hopping frequency is thus 1600 Hz. If five-
slot packets are used, the hopping frequency is 320 Hz.

Piconet

It is a Bluetooth network, in which several devices communicate with each other. In order
for several Bluetooth piconets to coexist in the same area, each piconet uses its own
hopping sequence. In the ISM band, 79 channels are available. Thus, it is possible for several
WLAN networks and many Bluetooth piconets to coexist in the same area.
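
A very rough sketch of this idea, that each piconet derives its own pseudo-random hopping pattern over the 79 channels from the master's identity, is shown below. This is an illustrative stand-in only, not the actual hop selection kernel defined in the Bluetooth specification.

    import random

    # Illustrative stand-in for per-piconet frequency hopping: the master's device
    # address seeds a pseudo-random sequence over the 79 ISM channels.  The real
    # Bluetooth hop selection algorithm is considerably more involved.

    def hop_sequence(master_addr, length):
        rng = random.Random(master_addr)          # different master -> different sequence
        return [rng.randrange(79) for _ in range(length)]

    piconet_a = hop_sequence(0x00025B00A5A5, length=8)
    piconet_b = hop_sequence(0x00025B001234, length=8)
    print(piconet_a)
    print(piconet_b)                              # independent sequences rarely collide
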

The interference created by WLAN and Bluetooth remains low and hardly noticeable as long
as the load in both the WLAN and the Bluetooth piconet(s) is low. As has been shown in
Chapter 4, a WLAN network only sends short beacon frames while no user data is
transmitted.

If a WLAN network, however, is highly loaded, it blocks a 25-MHz frequency band for most
of the time. Therefore, almost a third of the available channels for Bluetooth are constantly
busy. In this case, the mutual interference of the two systems is high, which leads to a high
number of corrupted packets.

To prevent this, Bluetooth 1.2 introduces a method called Adaptive Frequency Hopping
(AFH). If all devices in a piconet are Bluetooth 1.2 compatible, the master device (see Section
6.3) performs a channel assessment to measure the interference encountered on each of
the 79 channels. The link manager (see Section 6.4.3) uses this information to create a
channel bitmap and marks each channel that is not to be used for the frequency-hopping
sequence of the piconet. The channel bitmap is then sent to all devices of the piconet and
thus, all members of the piconet are aware of how to adapt their hopping sequence.

Available choices are the Received Signal Strength Indication (RSSI) method or other
methods that exclude a channel because of a high packet error rate.
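
As a simplified sketch (the threshold is invented), the master's channel assessment could be turned into an AFH channel bitmap like this, marking channels with a high measured packet error rate as unusable:

    # Simplified AFH sketch: build the 79-entry channel bitmap from per-channel
    # packet error rate measurements.  The 10% threshold is an invented example;
    # a real implementation may also use RSSI-based assessment.

    def afh_channel_map(per_channel_error_rate, max_error_rate=0.10):
        return [err <= max_error_rate for err in per_channel_error_rate]

    # Example: channels 32-56 overlap a busy WLAN and show high error rates.
    error_rates = [0.30 if 32 <= ch <= 56 else 0.02 for ch in range(79)]
    channel_map = afh_channel_map(error_rates)
    print(f"{sum(channel_map)} of 79 channels remain in the hopping sequence")
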

Power Classes

As Bluetooth has been designed for small, mobile and battery-driven devices, the standard
defines three power classes.

Class 3:

Devices like mobile phones usually implement power class 3, with a transmission power of up to
1 mW.

The devices are usually designed to work reliably over a distance of 1 m or through a single wall.

Class 2:

Class 2 devices send with a transmission power of up to 2.5 mW.

The devices are usually designed to work reliably over a distance of around 10 m.

Class 1:

Class 1 devices use a transmission power of up to 100 mW.

The devices can achieve distances of over 100 m or penetrate several walls.

Usually only devices such as some Universal Serial Bus (USB) Bluetooth sticks for notebooks and PCs
are equipped with a class 1 transmitter. This is because its energy consumption is very high
compared to a class 3 transmitter, so class 1 should only be used in devices where energy
consumption is not critical.

The Range of a Piconet

The range of a piconet also depends on the reception qualities of the devices and the
antenna design. In practice, newer Bluetooth devices have a much improved antenna and
receiver design, which increases the size of a piconet without increasing the transmission
power of the devices. All Bluetooth devices can communicate with each other,
independently of the power class. As all connections are bidirectional, however, it is always
the device with the lowest transmission power that limits the range of a piconet.

Master/Slave concept in Piconet

As described previously all devices which communicate with each other for a certain time
form a piconet. As shown in Figure 6.2, the frequency-hopping sequence of the channel is
calculated from the hardware address of the first device that initiates a connection to
another device and thus creates a new temporary piconet. Therefore, devices can
communicate with each other in different piconets in the same area without disturbing each
other. A piconet consists of one master device that establishes the connection and up to
seven slave devices. This seems to be a small number at first. However, as most Bluetooth
applications only require point-to-point connections as described in Section 6.1, this limit is
sufficient for most applications. Even if Bluetooth is used with a personal computer (PC) to
connect a keyboard and a mouse, there are still five more devices that can join the PC's piconet at
any time. Each device can be a master or a slave of a piconet. By definition, the device that
initiates a new piconet becomes the master device, as described in the following scenario.
Master-slave role switch is sometimes necessary, as a slave cannot initiate a new connection
when already in a piconet.

Bluetooth network architecture and setup

Baseband layer: Layer 2

Framing of data packets

ACL-Data Link

For packet data transmission, Bluetooth uses Asynchronous Connectionless (ACL) packets.

As shown in Figure 6.5, an ACL packet consists of a 68- to 72-bit access code, an 18-bit
header and a payload (user data) field of variable size between 0 and 2744 bits.

Before the 18 header bits are transmitted, they are coded into 54 bits by a forward error
correction (1/3 FEC) algorithm. This ensures that transmission errors can be corrected in
most cases. Depending on the size of the payload field, an ACL packet requires one, three or
five slots of 625 microseconds.
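
The 1/3 FEC used for the header is a simple repetition code: each header bit is sent three times and the receiver decodes by majority vote, as in this sketch (the short header used here is only for illustration):

    # Sketch of the 1/3 FEC repetition code used for the packet header: each bit is
    # transmitted three times (the real 18-bit header becomes 54 bits), and the
    # receiver decodes by majority vote, correcting any single error per triplet.

    def fec13_encode(bits):
        return [b for bit in bits for b in (bit, bit, bit)]

    def fec13_decode(coded):
        triplets = [coded[i:i + 3] for i in range(0, len(coded), 3)]
        return [1 if sum(t) >= 2 else 0 for t in triplets]

    header = [1, 0, 1, 1, 0, 0]           # a few header bits for illustration
    coded = fec13_encode(header)          # 6 bits -> 18 bits
    coded[4] ^= 1                         # flip one bit to simulate a channel error
    print(fec13_decode(coded) == header)  # True: the error is corrected
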

The access code at the beginning of the packet is used primarily for the identification of the
piconet to which the current packet belongs. Thus, the access code is derived from the
device address of the piconet master.
The actual header of an ACL packet consists of a number of bits for the following purposes:
The first three bits of the header are the logical transfer address (LT_ADDR) of the slave,
which the master assigns during connection establishment. As three bits are used, up to
seven slaves can be addressed. After the LT_ADDR, the 4-bit packet-type field indicates the
structure of the remaining part of the packet.

Apart from the number of slots used for a packet, another difference is the use of FEC for
the payload. If FEC is used, the receiver is able to correct transmission errors. The
disadvantage of using FEC, though, is the reduction in the number of user data bits that can
be carried in the payload field. If a 2/3 FEC is used, one error correction bit is added for two
data bits. Instead of two bits, three bits will thus be transferred (2/3). Furthermore, ACL
packets can be sent with a cyclic redundancy check (CRC) checksum to detect transmission
errors, which the receiver was unable to correct.

To prevent a buffer overflow, a device can set the flow bit to indicate to the other end to
stop data transmission for some time.

The ARQN bit informs the other end if the last packet has been received correctly. If the bit
is not set, the packet has to be repeated.

The sequence bit (SEQN) is used to ensure that no packet is accidentally lost. This is done by
toggling the bit in every packet.

The last field in the header is the Header Error Check (HEC) field. It ensures that the packet is
discarded if the checksum computed by the receiver over the header does not match.
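
Putting the header fields just described together, a small sketch of packing and unpacking the 18-bit packet header (3-bit LT_ADDR, 4-bit type, flow/ARQN/SEQN flags and 8-bit HEC) could look like this; the exact bit ordering and the HEC computation are simplified for illustration.

    # Sketch of the 18-bit baseband packet header: LT_ADDR (3 bits), type (4 bits),
    # flow/ARQN/SEQN flags (1 bit each) and an 8-bit HEC.  The HEC is a dummy value
    # here instead of the real header error check computation.

    def pack_header(lt_addr, ptype, flow, arqn, seqn, hec=0x00):
        assert lt_addr < 8 and ptype < 16
        return (lt_addr << 15) | (ptype << 11) | (flow << 10) | (arqn << 9) \
               | (seqn << 8) | hec

    def unpack_header(h):
        return {"lt_addr": h >> 15, "type": (h >> 11) & 0xF, "flow": (h >> 10) & 1,
                "arqn": (h >> 9) & 1, "seqn": (h >> 8) & 1, "hec": h & 0xFF}

    hdr = pack_header(lt_addr=5, ptype=0b0011, flow=0, arqn=1, seqn=0)
    print(unpack_header(hdr))
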

SCO-Voice Link

As no bandwidth is guaranteed for an ACL connection, this type of data transmission is not
well suited to the transmission of bidirectional real-time data such as a voice conversation.
For this kind of application, the baseband layer offers a second transmission mode that uses
synchronous connection-oriented (SCO) packets. The difference to ACL packets is the fact
that SCO packets are exchanged between a master and a slave device in fixed intervals. The
interval is chosen in a way that results in a total bandwidth of exactly 64 kbit/s.

When an SCO connection between a master and a slave device is established, the slave
device is allowed to send its SCO packets autonomously even if no SCO packet is received
from the master. This can be done very easily as the timing for the exchange of SCO packets
between two devices is fixed. Therefore, the slave does not depend on a grant from the
master, and thus it is implicitly ensured that only this slave sends in the timeslot. This way, it
is furthermore ensured that the slave device can send its packet containing voice data even
if it has not received the voice packet of the master device.

The header of an SCO packet is equal to the header of an ACL packet with the exception that
the flow, ARQN and SEQN fields are not used. The length of the payload field is always 30
bytes. Depending on the error correction mechanism used, this equals 10, 20 or 30 user data
bytes.

As no CRC and FEC are used for SCO packets, it is not possible to detect whether the user
data in the payload field was received correctly. Thus, defective data is forwarded to higher
layers if a transmission error occurs. This produces audible errors in the reproduced voice
signal. Furthermore, the bandwidth limit of 64 kbit/s of SCO connections prevents the use of
this transmission mechanism for other types of interactive applications such as audio
streaming in MP3 format that usually requires a higher datarate.

eSCO

Bluetooth 1.2 thus introduces a new packet type called eSCO, which improves the SCO
mechanism as follows. The datarate of an eSCO channel can be chosen during channel
establishment. Therefore, a constant datarate of up to 288 kbit/s in full-duplex mode (in
both directions simultaneously) can be achieved. The eSCO packets use a checksum for the
payload part of the packet. If a transmission error occurs, the packet can be retransmitted if
there is still enough time before the next regular eSCO packet has to be transmitted. Figure
6.7 shows this scenario. Retransmitting a bad packet and still maintaining a certain
bandwidth is possible, as an eSCO connection with a constant bandwidth of 64 kbit/s only
uses a fraction of the total bandwidth available in the piconet. Thus, there is still some time
to retransmit a bad packet in the transmission gap to the next packet. Despite transferring
the packet several times, the datarate of the overall eSCO connection remains constant. If a
packet cannot be transmitted by the time another regular packet has to be sent, it is simply
discarded. Thus, it is ensured that the data stream is not slowed down and the constant
bandwidth and delay times required for audio transmissions are maintained.

Higher modulation schemes for improving data rates

For some applications such as wireless printing or transmission of large pictures from a
camera to a PC, the maximum transmission rate of Bluetooth up to version 1.2 is not
sufficient. Thus, the Bluetooth standard was enhanced with a high-speed data transfer mode
called Bluetooth 2.0+EDR.

The core of EDR is the use of a new modulation technique for the payload part of an ACL or
eSCO packet. While the header and the payload of the packet types described before are
modulated using GFSK, the payload of an EDR ACL and eSCO packet is modulated using
DQPSK or 8DPSK. These modulation techniques allow the encoding of several bits per
transmission step. Thus, it is possible to increase the data rate while the total channel
bandwidth of 1 MHz and the slot time of 625 microseconds remain constant. To be
backward compatible, the header of the new packets is still encoded using standard GFSK
modulation.
Link Layer

The link control layer is located on top of the baseband layer that was discussed previously.
As the name suggests, this protocol layer is responsible for the establishment, maintenance
and correct release of connections.

Inquiry Procedure

If a device wants to scan the vicinity for other devices, the link controller is instructed by
higher layer protocols to change into the inquiry state. In this state, the device starts to send
two ID packets per slot on two different frequencies to request listening devices with unknown
frequency hopping patterns to reply to the inquiry.

If a device is set by the user to be detectable by other devices, it has to change into the inquiry
scan state periodically and scan for ID packets on alternating frequencies. The frequency that a
device listens to is changed every 1.28 seconds. To save power, or to be able to maintain already
ongoing connections, it is not necessary to remain in the inquiry scan state continuously. The
Bluetooth standard suggests a scan time of 11.25 milliseconds per 1.28-second interval. The
combination of the fast frequency change of the searching device on the one hand and the slow
frequency change of the detectable device on the other hand results in a 90% probability that a
device can be found within a scan period of 10 seconds.

If a device receives an ID packet, it returns an FHS packet, which includes its address,
frequency hopping and synchronization information. After receiving an FHS packet, the
searching device can continue its search. Alternatively, the inquiry procedure can also be
terminated to establish an ACL connection with the detected device by performing a paging
procedure.

If a user wants their device to remain invisible, it is possible to deactivate the inquiry scan
functionality. A remote device can then only initiate a paging procedure, and thus a connection
with the user's device, if it already knows the device's hardware address. It is useful to activate
this setting once a user has paired all devices (see Section 6.5.1) that are frequently used
together.

Paging Procedure

To establish an ACL connection by initiating a paging procedure, a device must be aware of the
hardware address of the device to connect to, either from a previous connection or as a result of
an inquiry procedure. The paging procedure works in a similar way to the inquiry procedure, that
is, ID packets are sent in a rapid sequence on different frequencies. Instead of a generic address,
the hardware address of the target device is included. The target device in turn replies with an ID
packet and thus enables the requesting device to return an FHS packet that contains its hopping
sequence. Figure 6.8 shows how the paging procedure is performed and how the devices enter the
connected state upon success.

After successful paging, both devices enter the connection-active state and data transfer can
start over the established ACL connection.

Low Power Connected States

During an active connection, the power consumption of a device mainly depends on its
power class (see Section 6.2). Even while active, it is possible that for some time, no data is
to be transferred. Especially for devices such as smartphones, it is very important to
conserve power during these periods to maximize the operating time on a battery charge.
The Bluetooth standard thus specifies three additional power-saving sub-states of the
connected state.

Hold State

The first substate is the connection-hold state. To change into this state, master and slave
have to agree on the duration of the hold state. Afterward, the transceiver can be
deactivated for the agreed time. At the end of the hold period, master and slave implicitly
change back into the connection-active state.

Sniff State

For applications that only transmit data very infrequently, the connection-hold state is too
inflexible. Thus, the connection-sniff state might be used instead, which offers the following
alternative power-saving scheme. When activating the sniff state, master and slave agree on
an interval and the time during the interval in which the slave has to listen for incoming
packets. In practice, it can be observed that the sniff state is activated after a longer
inactivity period (e.g., 15 seconds) and that an interval of several seconds (e.g., 2 seconds) is
used. This reduces the power consumption of the complete Bluetooth chip to below 1 mW.
If renewed activity is detected, some devices immediately leave the sniff state even though
this is not required by the standard

Park State

The connection-park state can be used to even further reduce the power consumption of
the device. In this state, the slave device returns its piconet address (LT_ADDR) to the
master and only checks very infrequently if the master would like to communicate.

Link Management Layer

The next layer in the protocol stack (see Figure 6.4) is the link manager layer. While the
previously discussed link controller layer is responsible for sending and receiving data packets
depending on the state of the connection with the remote device, the link manager's task is to
establish and maintain connections. This includes the following operations:

establishment of an ACL connection with a slave and assignment of a link address (LT_ADDR);

release of connections;

configuration of connections, for example, negotiation of the maximum number of timeslots that can
be used for ACL or eSCO packets;

HCI Transport Layer

A transport layer is needed to get HCI packets from the host to the Bluetooth controller.

Three transport layers are defined by Bluetooth:

USB: Universal Serial Bus

RS-232: serial interface with error correction

UART: Universal Asynchronous Receiver Transmitter

L2CAP (Logical Link Control and Adaptation Protocol)

In the next step of the overall connection establishment, an L2CAP connection is established
over the existing ACL link. The L2CAP protocol layer is located above the HCI layer and allows
the multiplexing of several logical connections to a single device via a single ACL connection.
Thus it is possible, for example, to open a second L_CH between a PC and a mobile phone to
exchange an address book entry, while a Bluetooth dial-up connection is already established
which connects the PC to the Internet via the mobile phone. If further ACL connections exist
to other devices at the same time, L2CAP is also able to multiplex data to and from different
devices.

Service Discovery Protocol (SDP)

Theoretically, it would be possible to begin the transfer of user data between two devices
right after establishing an ACL and L2CAP connection. Bluetooth, however, can be used for
many different applications, and many devices thus offer several different services to
remote devices at the same time. A mobile phone, for example, offers services like wireless
Internet connections (Dial-up Network, DUN), file transfers to and from the local file system,
exchange of addresses and calendar entries, and so on. For a device to detect which services
are offered by a remote device and how they can be accessed, each Bluetooth device
contains a service database that can be queried by other devices. The service database is
accessed via the L2CAP PSM 0x0001 and the protocol to exchange information with the
database is called the Service Discovery Protocol (SDP). The database query can be skipped if
a device already knows how a remote service can be accessed. As Bluetooth is very flexible, it
offers services the option to change their connection parameters at runtime. One of these
connection parameters is the RFCOMM channel number.
10. Security technology in WiFi/Bluetooth based on your own report
