
Hyper Transport Technology

INTRODUCTION
The demand for faster processors, memory and I/O is a familiar refrain in market
applications ranging from personal computers and servers to networking systems and
from video games to office automation equipment. Once information is digitized, the
speed at which it is processed becomes the foremost determinant of product success.
Faster system speed leads to faster processing. Faster processing leads to faster system
performance. Faster system performance results in greater success in the marketplace.
This obvious logic has led a generation of processor and memory designers to focus on
one overriding objective – squeezing more speed from processors and memory devices.
Processor designers have responded with faster clock rates and super pipelined
architectures that use level 1 and level 2 caches to feed faster execution units even faster.
Memory designers have responded with double data rate (DDR) memories that allow data access
on both the leading and trailing clock edges, doubling data throughput. I/O developers have
responded by designing faster and wider I/O channels and introducing new protocols to
meet anticipated I/O needs. Today, processors hit the market with 2+ GHz clock rates,
memory devices provide sub-5 ns access times, and standard I/O buses are 32 and 64 bits
wide, with new, higher-speed protocols on the horizon.

Increased processor speeds, faster memories, and wider I/O channels are not
always practical answers to the need for speed. The main problem is integration of more
and faster system elements. Faster execution units, faster memories and wider, faster I/O
buses lead to crowding of more high-speed signal lines onto the physical printed circuit
board. One aspect of the integration problem is the physical problems posed by speed.
Faster signal speeds lead to manufacturing problems due to loss of signal integrity and
greater susceptibility to noise. Very high-speed digital signals tend to behave like high-frequency
radio waves, exhibiting the same problematic characteristics as high-frequency analog signals.
This wreaks havoc on printed circuit boards manufactured using standard, low-cost materials
and technologies.
Signal integrity problems caused by signal crosstalk, signal and clock skew and
signal reflections increase dramatically as clock speed increases. The other aspect of the
integration problem is the I/O bottleneck that develops when multiple high-speed
execution units are combined for greater performance. While faster execution units
relieve processor performance bottlenecks, the bottleneck moves to the I/O links. Now
more data sits idle, waiting for the processor and I/O buses to clear, and moving large
amounts of data from one subsystem to another slows down overall system performance.

CAUSES LEADING TO THE DEVELOPMENT OF HYPER TRANSPORT TECHNOLOGY

1. I/O bandwidth problem
2. High pin count
3. High power consumption

While microprocessor performance continues to double every eighteen months, the performance
of the I/O bus architecture has lagged, doubling approximately every three years, as illustrated
in Figure 1.

This I/O bottleneck constrains system performance, resulting in diminished actual performance
gains as the processor and memory subsystems evolve. Over the past 20 years, a number of
legacy buses, such as ISA, VL-Bus, AGP, LPC, PCI-32/33, and PCI-X, have emerged that must
be bridged together to support a varying array of devices.
Servers and workstations require multiple high-speed buses, including PCI-64/66,
AGP Pro, and SNA buses like InfiniBand. The hodge-podge of buses increases system
complexity, adds many transistors devoted to bus arbitration and bridge logic, while
delivering less than optimal performance. A number of new technologies are responsible
for the increasing demand for additional bandwidth. High-resolution, texture-mapped 3D
graphics and high-definition streaming video are escalating bandwidth needs between
CPUs and graphics processors. Technologies like high-speed networking (Gigabit
Ethernet, InfiniBand, etc.) and wireless communications (Bluetooth) are allowing more
devices to exchange growing amounts of data at rapidly increasing speeds. Software
technologies are evolving, resulting in breakthrough methods of utilizing multiple system
processors. As processor speeds rise, so will the need for very fast, high-volume inter-
processor data traffic. While these new technologies quickly exceed the capabilities of
today’s PCI bus, existing interface functions like MP3 audio, V.90 modems, USB, 1394,
and 10/100 Ethernet are left to compete for the remaining bandwidth. These functions are
now commonly integrated into core logic products. Higher integration is increasing the
number of pins needed to bring these multiple buses into and out of the chip packages.
Nearly all of these existing buses are single ended, requiring additional power and ground
pins to provide sufficient current return paths. High pin counts increase RF radiation,
which makes it difficult for system designers to meet FCC and VDE requirements.
Reducing pin count helps system designers to reduce power consumption and meet
thermal requirements. In response to these problems, AMD began developing the Hyper
Transport™ I/O link architecture in 1997. Hyper Transport technology has been designed
to provide system architects with significantly more bandwidth, low-latency responses,
lower pin counts, compatibility with legacy PC buses, extensibility to new SNA buses,
and transparency to operating system software, with little impact on peripheral drivers.

As CPUs advanced in terms of clock speed and processing power, the I/O
subsystem that supports the processor could not keep up. In fact, different links
developed at different rates within the subsystem. The basic elements found on a
motherboard include the CPU, Northbridge, Southbridge, PCI bus, and system memory.
Other components are found on a motherboard, such as network controllers, USB ports,
etc., but most generally communicate with the rest of the system through the Southbridge.

Figure 1 shows a layout of these components.

Figure 1: Common motherboard layout

Many of the links above have advanced over the years. They each began with
standard PCI-like performance (33MHz, 32 bits wide, for just over 1Gbps of throughput), but
each has developed differently over time. The link between the CPU and Northbridge has
progressed to a 133MHz (effectively 266MHz, as it is sampled twice per clock cycle) 64-bit
wide bus. This provides a throughput of close to 17Gbps. The Northbridge to system
memory link has advanced to support PC2100 memory: it is a 64-bit wide 133MHz (also
sampled twice per clock cycle) bus. This link also has a bandwidth of almost
17Gbps. The Northbridge to graphics controller connection has stayed at 32 bits wide
and grown to a 66MHz bus, but with 4xAGP it is sampled four times per clock. 8xAGP
(sampling the data eight times per clock) will pull the throughput of this link even with
the other two at nearly 17Gbps.
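All of these throughput figures follow from the same relation: throughput = bus width × clock rate × transfers per clock. The short C sketch below reproduces the numbers quoted above; the helper name and printout are ours, used purely for illustration.

    #include <stdio.h>

    /* Peak throughput in Gbit/s = width_bits * clock_MHz * transfers_per_clock / 1000 */
    static double bus_gbps(unsigned width_bits, double clock_mhz, unsigned transfers_per_clock)
    {
        return (double)width_bits * clock_mhz * transfers_per_clock / 1000.0;
    }

    int main(void)
    {
        printf("PCI 32/33              : %5.1f Gbps\n", bus_gbps(32, 33.0, 1));  /* ~1 Gbps   */
        printf("CPU <-> Northbridge    : %5.1f Gbps\n", bus_gbps(64, 133.0, 2)); /* ~17 Gbps  */
        printf("Northbridge <-> PC2100 : %5.1f Gbps\n", bus_gbps(64, 133.0, 2)); /* ~17 Gbps  */
        printf("4xAGP (32-bit, 66MHz)  : %5.1f Gbps\n", bus_gbps(32, 66.0, 4));  /* ~8.5 Gbps */
        printf("8xAGP (32-bit, 66MHz)  : %5.1f Gbps\n", bus_gbps(32, 66.0, 8));  /* ~17 Gbps  */
        return 0;
    }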

Until recently, however, the Northbridge-Southbridge link has remained the same
standard PCI bus. Although most devices connected to the Southbridge do not demand
high bandwidth, their demands are growing as they evolve, and the aggregate bandwidth
they could require easily exceeds the bandwidth of the Northbridge-Southbridge link.
Many server applications, such as database functions and data mining, require access to a
large amount of data. This requires as much throughput from the disk and network as
possible, which is gated by the Northbridge-Southbridge link.

HYPER TRANSPORT TECHNOLOGY SOLUTION

Hyper Transport technology, formerly codenamed Lightning Data Transfer (LDT), was
developed at AMD with the help of industry partners to provide a high-speed, high-performance,
point-to-point link for interconnecting integrated circuits on a board. With a top signaling rate
of 1.6 GHz on each wire pair, a Hyper Transport technology link can support a peak aggregate
bandwidth of 12.8 Gbytes/s. The Hyper Transport I/O link is a complementary technology for
InfiniBand and 1Gb/10Gb Ethernet solutions. Both InfiniBand and high-speed Ethernet interfaces
are high-performance networking protocols and box-to-box solutions, while Hyper Transport is
intended to support “in-the-box” connectivity. The Hyper Transport specification provides both
link- and system-level power management capabilities optimized for processors and other
system devices. The ACPI-compliant power management scheme is primarily message-based,
reducing pin-count requirements. Hyper Transport technology is targeted at networking,
telecommunications, computer, and high-performance embedded applications, and any other
application in which high speed, low latency, and scalability are necessary.

Hyper Transport technology addresses this bottleneck by providing a point-to-point
architecture that can support bandwidths of up to 51.2Gbps in each direction. Not
all devices will require this much bandwidth, which is why Hyper Transport technology
operates at many different frequencies and widths. Currently, the specification supports a
frequency of up to 800MHz (sampled twice per period) and a width of up to 32-bits in
each direction. Hyper Transport technology also implements fast switching mechanisms,
so it provides low latency as well as high bandwidth. By providing up to 102.4Gbps
aggregate bandwidth, Hyper Transport technology enables I/O-intensive applications to
use the throughput they demand.

In order to ease the implementation of Hyper Transport technology and provide
stability, it was designed to be transparent to existing software and operating systems.
Hyper Transport technology supports plug-and-play features and PCI-like enumeration,
so existing software can interface with a Hyper Transport technology link the same way it
does with current PCI buses. This interaction is designed to be reliable, because the same
software will be used as before. In fact it may become more reliable, as data transfers will
benefit from the error detection features Hyper Transport technology provides.
Applications will benefit from Hyper Transport technology without needing extra support
or updates from the developer.

The physical implementation of Hyper Transport technology is straightforward,
as it requires no glue logic or additional hardware. Hyper Transport technology
specifications also stress a low pin count. This helps to minimize cost, as fewer parts are
required to implement Hyper Transport technology, and reduces Electro-Magnetic
Interference (EMI), a common problem in board layout design. Because Hyper Transport
technology is designed to require no additional hardware, is transparent to existing
software, and simplifies EMI issues, it is a relatively inexpensive, easy-to-implement
technology.

Table 1. Feature and Function Summary



DESIGN GOALS

In developing Hyper Transport technology, the architects of the technology
considered the design goals presented in this section. They wanted to develop a new I/O
protocol for “in-the-box” I/O connectivity that would:
1. Improve system performance
- Provide increased I/O bandwidth
- Reduce data bottlenecks by moving slower devices out of critical information paths
- Ensure low latency responses
- Reduce power consumption
2. Simplify system design
- Reduce the number of buses within the system
- Use as few pins as possible to allow smaller packages and to reduce cost
3. Increase I/O flexibility
- Provide modular bridge architecture
- Allow for differing upstream and downstream bandwidth requirements
4. Maintain compatibility with legacy systems
- Complement standard external buses
- Have little or no impact on existing operating systems and drivers
5. Ensure extensibility to new system network architecture (SNA) buses
6. Provide highly scalable multiprocessing systems



HYPER TRANSPORT TECHNICAL OVERVIEW

Flexible I/O Architecture

The resulting protocol defines a high-performance and scalable interconnect
between the CPU, memory, and I/O devices. Conceptually, the architecture of the Hyper
Transport I/O link can be mapped into five different layers, a structure similar to
the Open Systems Interconnection (OSI) reference model.
In Hyper Transport technology:
1. The physical layer defines the physical and electrical characteristics of the protocol.
This layer interfaces to the physical world and includes data, control, and clock lines.
2. The data link layer includes the initialization and configuration sequence, periodic
cyclic redundancy check (CRC), disconnect/reconnect sequence, information packets
for flow control and error management, and double word framing for other packets.
3. The protocol layer includes the commands, the virtual channels in which they run,
and the ordering rules that govern their flow.
4. The transaction layer uses the elements provided by the protocol layer to perform
actions, such as reads and writes.
5. The session layer includes rules for negotiating power management state changes,
as well as interrupt and system management activities.

PHYSICAL LAYER

Each Hyper Transport link consists of two point-to-point unidirectional data
paths, as illustrated in the figure below. Data path widths of 2, 4, 8, and 16 bits can be
implemented either upstream or downstream, depending on the device-specific
bandwidth requirements. Commands, addresses, and data (CAD) all use the same set of
wires for signaling, dramatically reducing pin requirements. All Hyper Transport
technology commands, addresses, and data travel in packets. All packets are multiples of
four bytes (32 bits) in length. If the link uses data paths narrower than 32 bits, successive
bit-times are used to complete the packet transfers.

Figure. Hyper Transport™ Technology Data Paths
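Because every packet is a multiple of four bytes, a link narrower than 32 bits simply spreads a packet over successive bit-times. The small helper below is not taken from the specification; it just makes that relationship concrete.

    /* Bit-times needed to move one packet across a link of the given CAD width.
     * packet_bytes is a multiple of 4; width_bits is the data path width (2..32). */
    static unsigned ht_bit_times(unsigned packet_bytes, unsigned width_bits)
    {
        return (packet_bytes * 8) / width_bits;
    }

For example, a 4-byte packet needs one bit-time on a 32-bit link, four bit-times on an 8-bit link, and sixteen bit-times on a 2-bit link.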

The Hyper Transport link was specifically designed to deliver a high-performance
and scalable interconnect between CPU, memory, and I/O devices, while using as few
pins as possible. To achieve very high data rates, the Hyper Transport link uses low-
swing differential signaling with on-die differential termination. To achieve scalable
bandwidth, the Hyper Transport link permits seamless scalability of both frequency and
data width.

DATA LINK LAYER

The data link layer includes the initialization and configuration sequence, periodic
cyclic redundancy check (CRC), disconnect/reconnect sequence, information packets for
flow control and error management, and double word framing for other packets.

Initialization

Hyper Transport technology-enabled devices with transmitter and receiver links
of equal width can be easily and directly connected. Devices with asymmetric data paths
can also be linked together easily. Extra receiver pins are tied to logic 0, while extra
transmitter pins are left open. During power-up, when RESET# is asserted and the
Control signal is at logic 0, each device transmits a bit pattern indicating the width of its
receiver. Logic within each device determines the maximum safe width for its
transmitter. While this may be narrower than the optimal width, it provides reliable
communications between devices until configuration software can optimize the link to
the widest common width. For applications that typically send the bulk of the data in one
direction, component vendors can save costs by implementing a wide path for the
majority of the traffic and a narrow path in the lesser used direction. Devices are not
required to implement equal width upstream and downstream links.
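The power-up negotiation described above amounts to each transmitter clamping itself to the width of the receiver at the other end. A minimal sketch of that logic follows; the structure and field names are invented for illustration, and the actual encoding of the width indication is defined by the specification.

    /* One end of a Hyper Transport link; upstream and downstream widths may differ. */
    struct ht_link_end {
        unsigned tx_width_bits;   /* implemented transmitter width */
        unsigned rx_width_bits;   /* implemented receiver width    */
    };

    static unsigned min_width(unsigned a, unsigned b) { return a < b ? a : b; }

    /* During RESET# each device reports its receiver width; each transmitter then
     * operates no wider than the peer's receiver.  Configuration software can later
     * reprogram the link to the widest common width and apply a warm reset. */
    static void ht_negotiate_widths(const struct ht_link_end *a, const struct ht_link_end *b,
                                    unsigned *a_to_b_bits, unsigned *b_to_a_bits)
    {
        *a_to_b_bits = min_width(a->tx_width_bits, b->rx_width_bits);
        *b_to_a_bits = min_width(b->tx_width_bits, a->rx_width_bits);
    }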

PROTOCOL & TRANSACTION LAYER

The protocol layer includes the commands, the virtual channels in which they run,
and the ordering rules that govern their flow. The transaction layer uses the elements
provided by the protocol layer to perform actions, such as read requests and responses.

COMMANDS

All Hyper Transport technology commands are either four or eight bytes long and
begin with a 6-bit command type field. The most commonly used commands are Read
Request, Read Response, and Write. A virtual channel contains requests or responses
with the same ordering priority.

Figure. Command Format

When the command requires an address, the last byte of the command is concatenated
with an additional four bytes to create a 40-bit address.

Hyper Transport commands and data are separated into one of three types of
virtual channels: non-posted requests, posted requests, and responses. Non-posted
requests require a response from the receiver; all read requests and some write requests
are non-posted requests. Posted requests do not require a response from the receiver;
posted writes fall into this category. Responses are replies to non-posted requests. Read
responses or target done responses to non-posted writes are types of response messages.

Command packets are 4 or 8 bytes and include all of the information needed for
inter-device or system-wide communications except in the case of reads and writes, when
the data packet is required for the data payload. Hyper Transport writes require an 8-byte
Write Request control packet, followed by the data packet. Hyper Transport reads require
an 8-byte Read Request control packet (issued from the host or other device), followed by
a 4-byte Read Response control packet (issued by the peripheral or responding device),
followed by the data packet.
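To summarize the packet format and virtual-channel rules above, the sketch below models a control packet with its 6-bit command type field and 40-bit address, and classifies commands into the three virtual channels. The enumerators and helper are hypothetical; the actual bit-level encodings are defined by the Hyper Transport specification.

    #include <stdint.h>

    /* Illustrative command types; the real 6-bit encodings are spec-defined. */
    enum ht_command {
        HT_READ_REQUEST,
        HT_WRITE_POSTED,
        HT_WRITE_NONPOSTED,
        HT_READ_RESPONSE,
        HT_TARGET_DONE
    };

    enum ht_virtual_channel { HT_VC_NONPOSTED, HT_VC_POSTED, HT_VC_RESPONSE };

    /* Control packets are 4 or 8 bytes long; when an address is present, the last
     * command byte is concatenated with four more bytes to form a 40-bit address. */
    struct ht_control_packet {
        uint8_t  command;        /* 6-bit command type field       */
        uint64_t address;        /* 40-bit address, when required  */
        uint8_t  length_bytes;   /* 4 or 8                         */
    };

    static enum ht_virtual_channel ht_channel_for(enum ht_command c)
    {
        switch (c) {
        case HT_READ_REQUEST:
        case HT_WRITE_NONPOSTED: return HT_VC_NONPOSTED;   /* a response is required  */
        case HT_WRITE_POSTED:    return HT_VC_POSTED;      /* no response is required */
        default:                 return HT_VC_RESPONSE;    /* replies to non-posted requests */
        }
    }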

SESSION LAYER

The session layer includes link width optimization and link frequency
optimization along with interrupt and power state capabilities.

Link Width Optimization

The initial link-width negotiation sequence may result in links that do not operate
at their maximum width potential. All 16-bit, 32-bit, and asymmetrically-sized
configurations must be enabled by a software initialization step. At cold reset, all links
power-up and synchronize according to the protocol. Firmware (or BIOS) then
interrogates all the links in the system, reprograms them to the desired width, and takes
the system through a warm reset to change the link widths.

Link Frequency Initialization

At cold reset, all links power-up with 200-MHz clocks. For each link, firmware
reads a specific register of each device to determine the supported clock frequencies. The
reported frequency capability, combined with system-specific information about the
board layout and power requirements, is used to determine the frequency to be used for
each link. Firmware then writes the two frequency registers to set the frequency for each
link. Once all devices have been configured, firmware initiates an LDTSTOP# disconnect
or RESET# of the affected chain to cause the new frequency to take effect.
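The firmware flow just described (read each device's supported-frequency capability, choose a value compatible with both ends and with the board, write it back, then force an LDTSTOP# disconnect or RESET#) can be sketched as below. The bit-mask representation of frequencies is a placeholder, not the specification's actual register layout.

    #include <stdint.h>

    /* Pick the highest frequency supported by both link partners, optionally
     * capped by board-level constraints such as layout and power budget.
     * Frequencies are represented here as bits in a capability mask. */
    static uint16_t ht_choose_link_freq(uint16_t cap_a, uint16_t cap_b, uint16_t board_cap)
    {
        uint16_t common = cap_a & cap_b & board_cap;
        uint16_t best = 0;
        while (common) {                     /* ends holding the most significant set bit */
            best = common & ~(common - 1);
            common &= (uint16_t)(common - 1);
        }
        return best;
    }

    /* Firmware would then write the chosen value into each device's two frequency
     * registers and trigger an LDTSTOP# disconnect (or RESET#) on the chain so the
     * new frequency takes effect; after a cold reset, links always start at 200 MHz. */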

ENHANCED LOW VOLTAGE DIFFERENTIAL SIGNALING

The signaling technology used in Hyper Transport technology is a type of low-voltage
differential signaling (LVDS). However, it is not the conventional IEEE LVDS
standard. It is an enhanced LVDS technique developed to evolve with the performance of
future process technologies. This is designed to help ensure that the Hyper Transport
technology standard has a long lifespan. LVDS has been widely used in these types of
applications because it requires fewer pins and wires. This is also designed to reduce cost
and power requirements because the transceivers are built into the controller chips.

Hyper Transport technology uses low-voltage differential signaling with a
differential impedance (ZOD) of 100 ohms for CAD, Clock, and Control signals, as
illustrated in the figure below. Characteristic line impedance is 60 ohms. The driver supply voltage is 1.2
volts, instead of the conventional 2.5 volts for standard LVDS. Differential signaling and
the chosen impedance provide a robust signaling system for use on low-cost printed
circuit boards. Common four-layer PCB materials with specified dielectric, trace, and
space dimensions and tolerances or controlled impedance boards are sufficient to
implement a Hyper Transport I/O link. The differential signaling permits trace lengths up
to 24 inches for 800 Mbit/s operation.
Hyper Transport technology helps platforms save power by transmitting
signals based on enhanced Low Voltage Differential Signaling (LVDS). By using two
signals for each bit, less voltage is needed per signal, and the noise effects for each signal
are similar, making it easier to filter those effects out. In this way, LVDS supports a very
high frequency signal, enabling Hyper Transport technology to support clock speeds up
to 800MHz.

Figure. Enhanced Low-Voltage Differential Signaling (LVDS)
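The noise-rejection property described above is easy to see numerically: a disturbance that couples equally onto both wires of a pair cancels when the receiver takes their difference. A toy example, with arbitrary voltage values:

    #include <stdio.h>

    int main(void)
    {
        double v_pos = 0.9, v_neg = 0.3;   /* the two wires of one differential pair (volts) */
        double noise = 0.25;               /* common-mode noise coupled onto both wires      */

        /* The receiver evaluates only the difference, so the common term cancels. */
        double received = (v_pos + noise) - (v_neg + noise);
        printf("differential value = %.2f V\n", received);   /* prints 0.60 V, noise-free */
        return 0;
    }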



MINIMAL PIN COUNT

The designers of Hyper Transport technology wanted to use as few pins as possible to
enable smaller packages, reduced power consumption, and better thermal characteristics,
while reducing total system cost. This goal is accomplished by using separate
unidirectional data paths and very low-voltage differential signaling.
The signals used in Hyper Transport technology are summarized in Table 2.

• Commands, addresses, and data (CAD) all share the same bits.
• Each data path includes a Control (CTL) signal and one or more Clock (CLK) signals.
- The CTL signal differentiates commands and addresses from data packets.
- For every grouping of eight bits or less within the data path, there is a forwarded
CLK signal. Clock forwarding reduces clock skew between the reference clock
signal and the signals traveling on the link. Multiple forwarded clocks limit the
number of signals that must be routed closely in wider Hyper Transport links.
• For most signals, there are two pins per bit.
• In addition to CAD, Clock, Control, VLDT power, and ground pins, each
Hyper Transport device has Power OK (PWROK) and Reset (RESET#) pins. These
pins are single-ended because of their low-frequency use.
• Devices that implement Hyper Transport technology for use in lower power
applications such as notebook computers should also implement Stop (LDTSTOP#)
and Request (LDTREQ#) signals. These power management signals are used to enter
and exit low-power states.

Table. Signals Used in Hyper Transport™ Technology

At first glance, the signaling used to implement a Hyper Transport I/O link would
seem to increase pin counts, because it requires two pins per bit and uses separate
upstream and downstream data paths. However, the increase in signal pins is offset by
two factors. First, by using separate data paths, Hyper Transport I/O links are designed to
operate at much higher frequencies than existing bus architectures, which means that buses
delivering equivalent or better bandwidth can be implemented using fewer signals. Second,
differential signaling provides a return current path for each signal, greatly reducing the
number of power and ground pins required in each package.

Table. Total Pins Used for Each Link Width
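Based on the signal list above (two pins per CAD, CTL, and CLK bit because of differential signaling, one forwarded clock per group of up to eight CAD bits, and separate upstream and downstream paths), a rough signal-pin estimate can be computed as below. This is an approximation for illustration only: it counts only the differential signal pins plus PWROK and RESET#, and ignores VLDT power, ground, and the optional LDTSTOP#/LDTREQ# pins, so it will not exactly match the totals in the table above.

    /* Rough signal-pin count for one Hyper Transport link of a given CAD width.
     * Assumes one CTL pair per direction, plus one forwarded-CLK pair per group
     * of up to 8 CAD bits per direction, plus single-ended PWROK and RESET# pins. */
    static unsigned ht_signal_pins(unsigned cad_width_bits)
    {
        unsigned clk_pairs = (cad_width_bits + 7) / 8;                  /* forwarded clocks per direction */
        unsigned pairs_per_direction = cad_width_bits + 1 + clk_pairs;  /* CAD + CTL + CLK pairs          */
        return 2 /* directions */ * 2 /* pins per differential pair */ * pairs_per_direction
               + 2;                                                     /* PWROK and RESET#               */
    }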



GREATLY INCREASED BANDWIDTH

Commands, addresses, and data traveling on a Hyper Transport link are double
pumped, where transfers take place on both the rising and falling edges of the clock
signal. For example, if the link clock is 800 MHz, the data rate is 1600 MHz. An
implementation of Hyper Transport links with 16 CAD bits in each direction with
a 1.6-GHz data rate provides bandwidth of 3.2 Gigabytes per second in each
direction, for an aggregate peak bandwidth of 6.4 Gbytes/s, or 48 times the peak
bandwidth of a 33-MHz PCI bus. A low-cost, low-power Hyper Transport link using two
CAD bits in each direction and clocked at 400 MHz provides 200 Mbytes/s of bandwidth
in each direction, or nearly four times the peak bandwidth of PCI 32/33. Such a link can
be implemented with just 24 pins, including power and ground pins.
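All of the bandwidth figures in this section come from the same relation: bytes per second = (CAD width / 8) × clock rate × 2, the final factor accounting for double pumping. The short sketch below reproduces the two examples quoted above; the helper name is ours.

    #include <stdio.h>

    /* Peak bandwidth of one Hyper Transport data path in Mbytes/s (double pumped). */
    static double ht_mbytes_per_s(unsigned cad_width_bits, double clock_mhz)
    {
        return (cad_width_bits / 8.0) * clock_mhz * 2.0;
    }

    int main(void)
    {
        double wide = ht_mbytes_per_s(16, 800.0);   /* 3200 Mbytes/s in each direction */
        double low  = ht_mbytes_per_s(2, 400.0);    /*  200 Mbytes/s in each direction */
        double pci  = 32 / 8.0 * 33.0;              /* ~132 Mbytes/s for PCI 32/33     */

        printf("16-bit @ 800 MHz: %.0f MB/s per direction, %.0f MB/s aggregate (~%.0fx PCI 32/33)\n",
               wide, 2.0 * wide, 2.0 * wide / pci);
        printf(" 2-bit @ 400 MHz: %.0f MB/s per direction\n", low);
        return 0;
    }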

HYPER TRANSPORT & INFINIBAND ARCHITECTURE

Many large companies depend on enterprise-class servers to provide accurate and
dependable computations on large amounts of data. This requires as much network
bandwidth as possible, which in turn demands a fast, wide pipe from the network
controller to system memory. Hyper Transport technology provides a speedy, dependable
solution for the internal link, while InfiniBand Architecture offers the secure, easily
scalable, reliable external link. Hyper Transport technology and InfiniBand Architecture
complement each other well to form a complete and compelling high-bandwidth solution
for the server market. Many features of Hyper Transport technology and InfiniBand
Architecture correspond closely, and allow a Hyper Transport technology-based platform
supporting a host channel adapter (HCA) to function more closely in concert with the network itself. The
complementary characteristics of Hyper Transport technology and InfiniBand
Architecture include not only their bandwidth, but are found within their protocols,
reliability features, and support for scalability as well.

Figure. Point-to-point technologies



Hyper Transport technology provides the bandwidth support necessary to take full
advantage of InfiniBand Architecture throughput. A 1X InfiniBand link could demand as
much as 5Gbps bandwidth. While PCI 64/66, at over 4Gbps, may be adequate, it can hold
back network performance. Even PCI-X, at just over 8.5Gbps, cannot handle a single 4X
InfiniBand Architecture channel. The figure above illustrates how InfiniBand Architecture’s
bandwidth needs compare to the support that internal I/O technologies can provide.

Regarding bandwidth, Hyper Transport technology provides an appropriate
framework for InfiniBand Architecture. A Hyper Transport technology link can easily
handle even a 12X InfiniBand Architecture channel, whereas PCI-based buses are unable
to handle the slower links.
Hyper Transport technology and InfiniBand Architecture work well together in
terms of bandwidth, protocol, reliability, and scalability. A server network based on
InfiniBand Architecture and made up of Hyper Transport technology-based server
platforms will enjoy a significant performance improvement, while ensuring more secure
protection, reliable data transfer, and ease of service and scalability. Hyper Transport
technology and InfiniBand Architecture will revolutionize the way enterprise business
data centers perform.

IMPLEMENTATION

Hyper Transport technology supports multiple connection topologies, including
daisy chain topologies, switch topologies, and star topologies.

Daisy Chain Topology

Hyper Transport technology has a daisy-chain topology, giving the opportunity to
connect multiple Hyper Transport input/output bridges to a single channel. Hyper
Transport technology is designed to support up to 32 devices per channel and can mix
and match components with different link widths and speeds. This capability makes it
possible to create Hyper Transport technology devices that are building blocks capable of
spanning a range of platforms and market segments.

Figure. Daisy Chain Implementation

The Hyper Transport technology tunneling feature makes daisy chains of up to 31
independent devices possible. A Hyper Transport technology tunnel is a device with two
Hyper Transport technology connections containing a functional device in-between.
Essentially, the Hyper Transport technology host initializes a daisy chain. A host and one
single-ended slave is the smallest possible chain, and a host with 31 tunnel devices is the
largest possible daisy chain. Hyper Transport technology can route the data of up to 31
attached devices at an aggregate transfer rate of 3.2 gigabytes per second over an 8-bit
Hyper Transport technology I/O link, and up to 12.8 gigabytes per second over a 32-bit
link. This gives the designer a significantly larger and faster fabric while still using
existing PCI I/O drivers. In fact, the total end-to-end length for a Hyper Transport
technology chain can be several meters, providing for great flexibility in system
configuration.
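Conceptually, a daisy chain is a host followed by a short list of tunnel devices, terminated by a single end device. The sketch below, with invented type names, is only meant to illustrate the tunnel's two-sided nature and the 31-device limit.

    #include <stddef.h>

    #define HT_MAX_CHAIN_DEVICES 31   /* a host plus up to 31 tunnel/end devices */

    enum ht_device_kind { HT_HOST, HT_TUNNEL, HT_END_DEVICE };

    /* A tunnel has an upstream and a downstream Hyper Transport connection with its
     * own function in between; an end device has no downstream link and closes the chain. */
    struct ht_chain_device {
        enum ht_device_kind     kind;
        struct ht_chain_device *downstream;   /* NULL for the last device in the chain */
    };

    /* Number of devices attached below the host; valid chains never exceed HT_MAX_CHAIN_DEVICES. */
    static unsigned ht_chain_length(const struct ht_chain_device *host)
    {
        unsigned n = 0;
        for (const struct ht_chain_device *d = host->downstream; d != NULL; d = d->downstream)
            ++n;
        return n;
    }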

Switch Topology

A Hyper Transport technology switch is a device designed for use in latency-sensitive
environments supporting multiple processors or special-purpose processors.
These switches are designed to increase available bandwidth, reduce latency, and
support the effective use of multiple Hyper Transport technology links running at
different speeds. A Hyper Transport technology switch passes data to and from one
Hyper Transport technology chain to another. This allows system architects to use
Hyper Transport technology switches to build a switching fabric while isolating a
Hyper Transport technology chain from a host for offloading peer-to-peer traffic. By
extending a fabric beyond a single chain of 31 tunnel devices, Hyper Transport
technology enables a multi-processor/host fabric, or creates a minimal latency tree
topology.
In a switch topology, a device is inserted into the chain allowing it to be branched.
This device is invisible to the original host controller, which believes the devices on the
chain are daisy-chained. All devices stemming from the switch appear to be on the same
bus while the intelligence of the switch determines bandwidth allocation. Branches
connected to Hyper Transport technology switches may contain links of varying widths
based on the discretion of the system designer. The Hyper Transport technology host
communicates directly with the switch chip, which in turn manages multiple independent
slaves including tunnels, bridges, and end device chips. Each port on the switch benefits
from the full bandwidth of the Hyper Transport technology I/O link because the switch
directs the flow of electrical signals between the slave devices connected to it.

Figure. Hyper Transport™ Technology Switch Configuration.

A four-port Hyper Transport technology switch could aggregate data from
multiple downstream ports into a single high-speed uplink, or it could route port-to-port
connections. For downstream chains that are connected to the switch, the Hyper
Transport technology switch port functions as a host port on that chain. So, for peer-to-
peer traffic on that chain, the host reflection is done locally rather than having to be
forwarded back to the actual host. This improves performance considerably.
Hyper Transport technology switches can also support hot-pluggable devices. If
slave devices are attached to switch ports via a connector, they can be hot-plugged while
the rest of the Hyper Transport technology fabric stays up. This also provides system
designers the ability to reroute around failing nodes without having to shut the entire
network down.

Star Topology

Whereas daisy chain configurations offer linear bus topologies much like a
network “backbone,” and switch topologies expand these into parallel chains, a star
topology approach that distributes Hyper Transport technology links in a spoke fashion
around a central host or switch offers a great deal of flexibility. With Hyper Transport
technology tunnels and switches, Hyper Transport technology can be used to support any
type of topology, including star topologies and redundant configurations, where dual star
configurations are utilized to create redundant links.

Figure. Hyper Transport™ Technology Star Configuration.



CONCLUSION

Hyper Transport technology is a new high-speed, high-performance, point-to-point
link for integrated circuits. It provides a universal connection designed to reduce the
number of buses within the system, provide a high-performance link for embedded
applications, and enable highly scalable multiprocessing systems. It is designed to enable
the chips inside of PCs and networking and communications devices to communicate
with each other up to 48 times faster than with existing technologies. Hyper Transport
technology provides an extremely fast connection that complements externally visible
bus standards like the PCI, as well as emerging technologies like InfiniBand and Gigabit
Ethernet. Hyper Transport technology is truly the universal solution for in-the-box
connectivity.
