DEPLOYMENT GUIDE
Abstract
This document provides deployment guidelines for EMC Isilon scale-out NAS validated with Brocade
networking solutions using Brocade VCS Fabric technology.
ADX, AnyIO, Brocade, Brocade Assurance, the B-wing symbol, DCX, Fabric OS, ICX, MLX,
MyBrocade, OpenScript, VCS, VDX, and Vyatta are registered trademarks, and HyperEdge, The
Effortless Network, and The On-Demand Data Center are trademarks of Brocade
Communications Systems, Inc., in the United States and/or in other countries. Other brands,
products, or service names mentioned may be trademarks of their respective owners.
Notice: This document is for informational purposes only and does not set forth any warranty,
expressed or implied, concerning any equipment, equipment feature, or service offered or to
be offered by Brocade. Brocade reserves the right to make changes to this document at any
time, without notice, and assumes no responsibility for its use. This informational document
describes features that may not be currently available. Contact a Brocade sales office for
information on feature and product availability. Export of technical data contained in this
document may require an export license from the United States government.
Copyright 2012-2013 EMC Corporation. All rights reserved. Published in the USA.
Published October 2012
EMC believes the information in this publication is accurate as of its publication date. The
information is subject to change without notice.
EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in
the United States and other countries. All other trademarks used herein are the property of
their respective owners.
For the most up-to-date regulatory document for your product line, go to the technical
documentation and advisories section on the EMC online support website.
Contents
Chapter 1 Preface Summary
Introduction
Purpose of Deployment Guide
Compute
IP Addresses
Step 2: Verify Fabric ISL and Trunk Configurations between VDX switches
Step 5: Configure vLAGs on RB5 and RB6 connecting to Isilon Cluster
Configure Volume
IP Addresses
Description
Assumptions
Configuring Advanced Settings for Isilon Best Practices
Step 1: Connect 10Gb interfaces to RB1 & configure ports for VLAN access
Step 6: Configure additional Options for VMware Clusters and VMs
A) vSphere Optimizations
Appendix A
Bill of Materials
Appendix B
Management Network
Prerequisites
Appendix C
References
Appendix D
About Brocade
Traditionally, scalable, high-capacity, high-performance storage systems were built on SANs:
separate networks designed to accommodate storage-specific data flows. However, new
developments in distributed applications and server virtualization are driving increasing adoption of
Network Attached Storage (NAS) over Ethernet, thereby bringing to the Ethernet
networks supporting storage the same requirements traditionally found in SANs: scalability,
capacity, predictable latency, and reliability. Brocade VCS Fabric technology delivers high-performance,
reliable networks for NAS solutions, which can scale when needed without
disruption to meet the new requirements of NAS storage infrastructure such as
EMC Isilon scale-out NAS. And while virtualization has enhanced the efficiency of servers in
the data center, it has also magnified the challenges of deploying a storage
infrastructure that provides the anticipated end-to-end cost savings and management
advantages of virtualization.
A VCS Fabric is ideal for NAS, providing predictable performance and reliability with
simplified change management. VCS Fabric technology is built on TRILL/FSPF and provides
unique capabilities including distributed intelligence, Automatic Migration of Port Profiles
(AMPP), virtual link aggregation groups (vLAGs), and lossless Ethernet transport, removing
previous limitations of Ethernet for storage traffic.
This document can be used as a reference deployment guide with servers running Red Hat
Enterprise Linux (RHEL) for demonstration purposes. In addition, an example deployment
with VMware is also detailed in Appendix C. In this example, the management network is built
on the Brocade ICX 6610 series switch (optional); its configuration details are documented
in Appendix B.
This deployment guide does not include the configuration of disaster recovery or data protection
mechanisms, such as replication or backup procedures, beyond the basic redundancies
included within the VCS Fabric and the Isilon storage cluster.
Target Audience
This content targets cloud, storage, and network architects and engineers who are evaluating
and deploying Isilon NAS solutions in their networks and who want guidance on deploying
Isilon with Brocade VCS Fabric technology.
The readers of this document are expected to have enough networking expertise and training to
install and configure Brocade VDX series switches, EMC Isilon series storage systems, and
associated infrastructure as required by this implementation. External references are
provided where applicable, and readers are encouraged to become familiar with these
documents.
Key Contributors
The content in this guide was provided by the following key contributors.
Lead Architect: Marcus Thordal, Strategic Solutions Lab
Lead Engineer: Anika Suri, EMC OEM Systems Engineer
In the example deployment configuration we use Brocade's VDX portfolio to demonstrate that
any combination of VDX switches works together in a VCS Fabric, enabling switch selection to
match each specific solution's requirements for port density and interface speeds
(1/10/40 Gbps) as required. The Isilon storage subsystem uses aggregated interfaces (LAG)
across switches for redundancy and increased bandwidth. Within the VCS Fabric, the LAG can
span multiple switches (vLAG), providing redundancy and flexibility. Should one of the
links fail, the storage remains available through a redundant Brocade VDX switch.
x86 physical servers running an operating system such as Linux or Windows are common
compute deployments in scale-out NAS environments. For demonstration
purposes, we use Red Hat Enterprise Linux (RHEL) 6 in our example deployment. This
deployment guide provides the flexibility to design and implement the customer's
choice of server components. The server infrastructure must conform to the following
attributes:
Sufficient number of servers with the required cores and memory to support
customer applications.
Sufficient network connections to enable redundant connectivity to the
network with the Isilon cluster.
Isilon Storage
EMC Isilon scale-out storage solutions are designed for the enterprise, and are
powerful yet simple to install, manage, and scale to virtually any size. With EMC Isilon
scale-out network-attached storage (NAS), you can have massive room for growth, with
over 20 petabytes (PB) of capacity per cluster. The Isilon array provides the
following essentials:
The Isilon X200 series array provides a flexible and comprehensive
storage product line that strikes the right balance between large capacity and
presenting high-performance NFS datastores to hosts.
Brocade Network
All network traffic is carried by Brocade Ethernet Fabric network with redundant
cabling and switching of NFS storage traffic. IP Management is carried over separate
networks, as explained in Appendix B. This deployment utilizing Brocade Ethernet
Fabric Technology enables the implementation of a high performance, efficient, and
resilient networks illustrated in the deployment guide. The Brocade VDX Ethernet
Fabric networking solutions provides the following attributes:
Redundant network links for the hosts, storage and between switches.
The OneFS operating system is the intelligence behind EMC Isilon scale-out storage systems.
OneFS combines the three layers of traditional storage architectures (file system,
volume manager, and data protection) into one unified software layer, creating a single
intelligent file system that spans all nodes within a cluster.
We recommend using the latest OneFS code from EMC. In this example configuration
we are using OneFS 6.5.5.
With the VCS Logical Chassis feature introduced in NOS 4.0, all VDX switches in an
Ethernet fabric are managed as a single logical chassis and appear as a single switch to
any attached network or component.
We recommend using the latest Network OS code from Brocade. In this example
configuration we are using NOS 4.0.
Elastic, non-disruptive network scaling. Brocade VCS fabrics are elastic,
self-forming, and self-healing, allowing administrators to concentrate on service
delivery rather than fabric administration.
Multiple Layer 3 gateways help bring fabric benefits to Layer 3
traffic, providing maximum utilization.
Wire-speed performance: high-density 10 Gigabit Ethernet and 40 Gigabit
Ethernet with ~4 μs latency within the VCS fabric.
Brocade VCS fabrics can be designed to meet virtually any application requirements in
enterprise and service provider data center environments alike. Organizations can start small
at the access layer with pilot projects deploying fixed-form-factor VDX series switches, building
out the Ethernet fabric as network needs grow and adjusting capabilities as required.
Existing VCS fabrics can be elastically scaled with Brocade VDX 6740 and VDX 8770 switches.
The ability to deploy these switches in existing environments, either at the access or
aggregation layer or both, preserves existing investments while future-proofing the
network for 40 Gigabit Ethernet and 100 Gigabit Ethernet (VDX 8770 only) technologies to
come.
Deployment Topology
Brocade VDX networking series switches with VCS Fabric technology and EMC Isilon scale-out
NAS are validated, proven, best-of-breed technologies that combine into a
complete solution and enable you to make an informed decision when deploying Brocade
VDX switches with EMC Isilon scale-out storage. These defined configurations form the basis
for creating a custom deployment design.
Brocade VDX switches with VCS Fabric technology enable designs with fewer tiers (for example,
a 1-tier/spine-only or 2-tier/spine-and-leaf design rather than 3-tier), decreasing cost, complexity,
cabling, and power/heat for operational efficiency. The network design is based on the
maximum number of ports required and the desired oversubscription ratio for the
traffic between compute and storage devices. A single-tier (spline) design that meets short-term
needs may then grow into a two-tier spine-and-leaf design as the next logical step. In this
deployment guide, a two-tier design has spine switches at the top tier and leaf switches at the
bottom tier, with servers/compute and storage always attached to leaf switches at the top of
every rack (or, for higher-density leaf switches, the top of every N racks); leaf switches uplink
to two or more spine switches.
For this deployment we recommend that all solution servers, storage arrays, switch
interconnects, and switch uplinks have redundant connections. Ensure that the uplinks are
connected to the existing customer network, if required. The configuration for the
management switch for management ports of all the deployment guide components is
outlined in Appendix B of this document. The deployment topology used in this design guide
(illustrated in Figure 1) shows the layout of the major components in this deployment.
Note: The Brocade VDX 6740/8770 Hardware Reference Manual and the Brocade VDX 8770
Switch Installation Guide provide instructions on racking, cabling, and powering the VDX
6740s and the VDX 8770s respectively.
Deployment Topology
Below is a network diagram of the deployed topology (Figure 2), showing a spine-leaf architecture with
Brocade VDX 8770s (RB1, RB2) in the spine and VDX 6740s (RB3, RB4, RB5, RB6) as the Top
of Rack (ToR) leaf switches, with the Isilon cluster nodes and the physical servers running
RHEL 6 attached. This provides uniform and redundant access for all servers to all storage
and simplifies scale-out when adding more servers and NAS nodes. Low latency, high
bandwidth, high availability, and simple management are maintained as physical resources
are added.
When connecting the RHEL servers SVR1 and SVR2, it is recommended that separate
dedicated network interfaces be used for management and for storage access. For
high availability, the best practice is to use redundant interfaces for each of these networks. It
is very common to use on-board 1GbE NICs for management and 10GbE interfaces for
storage. Some network adapters, such as the Brocade 1860 Fabric Adapter, provide traffic
separation, and this guide shows a fully redundant deployment example. In this
configuration example topology, we use 10GbE interfaces on the server for storage traffic
and 1GbE for management.
The configuration for the VCS Fabric is covered in this section; the configuration of the
management switch for all components is outlined in Appendix B of this document.
IP Addresses
When deploying a NAS infrastructure, the logical network infrastructure and IP topology must
be planned in advance. In our test environment, we use a separate management network with
all IP addresses in the default VLAN 1. For the VCS network, VLAN separation is used for
storage (VLAN50), as shown in Table 1 below.
Table 1. IP Addresses

Device          Type   Management_IP (VLAN1)   Store_IP (VLAN50)   x200_IP (InfiniBand)
-----------------------------------------------------------------------------------------
BR-VDX8770-4    VDX    192.168.90.93           -                   -
BR-VDX8770-4    VDX    192.168.90.96           -                   -
BR-VDX6740-48   VDX    192.168.90.94           -                   -
BR-VDX6740-48   VDX    192.168.90.95           -                   -
BR-VDX6740-48   VDX    192.168.90.97           -                   -
BR-VDX6740-48   VDX    192.168.90.98           -                   -
IS-x200-1       NAS    192.168.90.101          192.168.50.101      172.16.1.101
IS-x200-2       NAS    192.168.90.102          192.168.50.102      172.16.1.102
IS-x200-3       NAS    192.168.90.103          192.168.50.103      172.16.1.103
IS-x200-4       NAS    192.168.90.104          192.168.50.104      172.16.1.104
SVR1            HOST   192.168.90.105          192.168.50.105      -
SVR2            HOST   192.168.90.106          192.168.50.106      -
Assumptions
1. All physical connections have been made, and all management interfaces for VDX switches
and RHEL servers have IP addresses assigned and are accessible via SSH. On Brocade
VDXs, all ISL ports connected to the same neighbor VDX switch attempt to form a trunk.
This example configuration assumes that trunks have been formed between respective
VDX switches, based on the deployment topology in Figure 2.
2. All VDX switches have the Ports on Demand (POD) licenses already installed, if required.
3. The user has knowledge of user access levels on Brocade VDXs and is familiar with VCS
terminology.
For details on setting IP addresses, licenses, trunk port groups, RBridge IDs, and VCS IDs,
please refer to the Brocade Network OS Administrator's Guide, v4.0.0.
The VDX deployment process is divided into the stages shown in Table 2. Upon completion of
the deployment, the VCS Fabric is ready for integration with the existing customer
management network and server infrastructure.
Table 2. VDX Deployment Stages
When VCS is deployed as a Logical Chassis, it can be managed from a single Virtual IP, and
configuration changes are automatically saved across all switches in the fabric. In the
following example we show the configuration for Logical Chassis mode with RB5 as the principal.
An RBridge ID is a unique identifier for an RBridge (a physical switch in a VCS fabric), and a VCS ID is
a unique identifier for a VCS fabric. The factory default VCS ID is 1, and all switches in a VCS fabric
must have the same VCS ID. In this example configuration we set all VDXs to VCS
ID 1, and RBridge IDs are assigned per the deployment topology in Figure 2.
i) In Privileged EXEC mode, enter the vcs command with options to set the VCS ID and the
RBridge ID and to enable logical chassis mode for the switch. Note that the VCS ID must be
set to the same value on each node that belongs to the cluster; in this example we set it to 1.
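A minimal sketch for RB5 (the vcs command form follows the NOS 4.0 documentation; verify against the Network OS Command Reference for your release):
BRCD6740-RB5# vcs vcsid 1 rbridge-id 5 logical-chassis enable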
ii) The switch reboots after this command, and you are asked whether you want to apply the default
configuration; answer yes.
NOTE: To create a Logical Chassis cluster, perform the above steps on every
VDX in the VCS fabric, changing only the RBridge ID each time based on Figure 2, once all
physical connectivity requirements have been met.
iii) When you have enabled logical chassis mode on each node in the cluster, run the show
vcs command to determine which node has been assigned as the cluster principal node,
which can be used to configure the entire VCS fabric. The arrow (>) denotes the cluster
principal node; the asterisk (*) denotes the current logged-in node.
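Illustrative, abridged output (the exact field layout varies by release, and the WWN shown is a placeholder):
BRCD6740-RB5# show vcs
Config Mode  : Distributed
VCS Mode     : Logical Chassis
VCS ID       : 1
Total Number of Nodes : 6
Rbridge-Id    WWN                         Management IP    VCS Status   HostName
----------------------------------------------------------------------------------
...
5            >10:00:00:05:33:40:31:90*    192.168.90.97    Online       BRCD6740-RB5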
NOTE: Any global and local configuration changes made from now on are distributed automatically to
all nodes in the logical chassis cluster. You can enter the RBridge ID configuration mode for
any RBridge in the cluster from the cluster principal node, by logging into any of the VDXs in
the fabric or by assigning an optional Virtual IP to the entire fabric, as shown below.
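For example (a sketch using the NOS 4.0 vcs virtual ip address command; the address matches the Virtual IP referenced below):
BRCD6740-RB5# conf t
BRCD6740-RB5(config)# vcs virtual ip address 192.168.90.97/24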
In the above example, the entire fabric can now be managed with one Virtual IP, 192.168.90.97.
Note: For details on Logical Chassis, please refer to the Network OS Administration Guide,
v4.0.0.
Step 2: Verify Fabric ISL and Trunk Configurations between VDX switches
It is recommended that the VDXs in this deployment have redundant Fabric ISLs between
them. Between two VDXs this is achieved by connecting a minimum of two cables between any
pair of 10Gbps ports on the two switches. The ISLs are self-forming, as the VDX platform
comes preconfigured with a default port configuration that enables ISLs for easy and automatic
VCS fabric formation. In this deployment there are two ISLs between each spine and leaf
VDX; since we connect ports in the same port group on the two switches, the ISLs
automatically form a Brocade trunk of 20Gbps each, which guarantees frame-based load
balancing across the ISLs. With NOS v4.0.0, the number of ISLs in a trunk can vary from 1 to 16,
depending on customer traffic and the oversubscription ratio. Configuration for the trunk needs
to be done on the trunk master. For details on port groups and trunks, please refer to the
Network OS Administrator's Guide, v4.0.0.
The fabric isl enable, fabric trunk enable, no fabric isl enable, and no fabric trunk enable
commands can be used to toggle ports that are part of a trunked ISL.
The following example shows the running configuration of an ISL port on RB5 -
BRCD6740-RB5# show running-config interface TenGigabitEthernet 5/0/20
interface TenGigabitEthernet 5/0/20
fabric isl enable
fabric trunk enable
no shutdown
!
..
One can verify ISL configurations using the show fabric isl or show fabric trunk commands
on RB5, as shown below -

BRCD6740-RB5# show fabric isl
Rbridge-id: 5 #ISLs: 2

Src   Src        Nbr   Nbr
Index Interface  Index Interface  Nbr-WWN                  BW   Trunk  Nbr-Name
--------------------------------------------------------------------------------
20    Te 5/0/20  20    Te 1/0/20  10:00:00:05:33:40:31:93  10G  Yes    "BRCD6740-RB1"
21    Te 5/0/21  21    Te 2/0/20  10:00:00:05:33:40:31:94  10G  Yes    "BRCD6740-RB2"
..

BRCD6740-RB5# show fabric trunk
Rbridge-id: 5

Trunk  Src    Source     Nbr    Nbr
Group  Index  Interface  Index  Interface  Nbr-WWN
-----------------------------------------------------------------
1      19     Te 5/0/19  19     Te 1/0/19  10:00:00:05:33:40:31:92
1      20     Te 5/0/20  20     Te 1/0/20  10:00:00:05:33:40:31:93
..
Each RHEL server uses vLAGs configured on the connected ToR switches in the respective racks
(see Figure 2) to connect to the Isilon storage. In the following section we go through the
creation and configuration of vLAGs for SVR1, which is connected on ports 3/0/37 and
4/0/37. This is defined as Port-channel 105.
For details on vLAGs, please refer to the Network OS Administrator's Guide, v4.0.0.
i) Create and configure Port Channel 105 for SVR1 -
VDX6740_RB5# conf t
VDX6740_RB5(config)# interface Port-channel 105
VDX6740_RB5(config-Port-channel-105)# description vLAG_SVR1_Storage
VDX6740_RB5(config-Port-channel-105)# switchport
VDX6740_RB5(config-Port-channel-105)# switchport mode access
VDX6740_RB5(config-Port-channel-105)# switchport access vlan 50
VDX6740_RB5(config-Port-channel-105)# no shutdown
ii) Add the physical ports on RB3 & RB4 to the vLAG -
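A minimal sketch (LACP channel-group, matching the RHEL 802.3ad bonding configured later; verify the channel-group syntax against the Network OS Command Reference for your release):
VDX6740_RB5(config)# interface TenGigabitEthernet 3/0/37
VDX6740_RB5(conf-if-te-3/0/37)# channel-group 105 mode active type standard
VDX6740_RB5(conf-if-te-3/0/37)# no shutdown
VDX6740_RB5(config)# interface TenGigabitEthernet 4/0/37
VDX6740_RB5(conf-if-te-4/0/37)# channel-group 105 mode active type standard
VDX6740_RB5(conf-if-te-4/0/37)# no shutdown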
iii) Repeat the configuration for SVR2, which is connected to RB3 and RB4 on 3/0/38 and
4/0/38 respectively, through Port-channel 106.
Step 5: Configure vLAGs on RB5 and RB6 connecting to Isilon Cluster
The Isilon storage system will use bonded interfaces for client connections to increase
performance and availability should one or more 10Gb connections fail. Each Isilon node will
use LACP port channel groups configured on the two leaf VDX 6740 switches, RB5 and RB6. Each node
in the cluster uses vLAGs configured on the connected ToR switches in the respective racks
(see Figure 2).
In the following section we go through the creation and configuration of vLAGs for Node 1,
which is connected on ports 5/0/41 and 6/0/42. This is defined as Port-channel 101.
i) Create and configure Port-channel 101 for Isilon Node 1 -
VDX6740_RB5(config)# interface Port-channel 101
VDX6740_RB5(config-Port-channel-101)# description vLAG_Isilon_Node1
VDX6740_RB5(config-Port-channel-101)# switchport
VDX6740_RB5(config-Port-channel-101)# switchport mode access
VDX6740_RB5(config-Port-channel-101)# switchport access vlan 50
VDX6740_RB5(config-Port-channel-101)# no shutdown
ii) Add the physical ports on RB5 & RB6 (where Isilon node 1 is connected) to the vLAG -
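A minimal sketch (LACP channel-group on the Node 1 ports; verify the syntax against the Network OS Command Reference for your release):
VDX6740_RB5(config)# interface TenGigabitEthernet 5/0/41
VDX6740_RB5(conf-if-te-5/0/41)# channel-group 101 mode active type standard
VDX6740_RB5(conf-if-te-5/0/41)# no shutdown
VDX6740_RB5(config)# interface TenGigabitEthernet 6/0/42
VDX6740_RB5(conf-if-te-6/0/42)# channel-group 101 mode active type standard
VDX6740_RB5(conf-if-te-6/0/42)# no shutdown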
iii) Repeat steps i) and ii) to enable vLAGs for Isilon nodes 2-4 (Port-channels 102-104), based on Figure 2.
Step 6 - Configure Flow Control
i) Enable flow control on the vLAG Port-channel interface -
VDX6740_RB5# conf t
VDX6740_RB5(config)# interface Port-channel 101
VDX6740_RB5(config-Port-channel-101)# qos flowcontrol tx on rx on
ii) Repeat step i) to enable flow control for VDX interfaces connected to Isilon nodes 2-4
as well.
Step 7 - Configure MTU and Jumbo Frames
Brocade VDX series switches support the transport of jumbo frames. This solution for scale-out
NAS recommends an MTU of 9216 (jumbo frames) for efficient storage and migration
traffic. Jumbo frames are enabled by default on the Brocade ISL trunks. However, to
accommodate end-to-end jumbo frame support on the network for the edge systems, this
feature can be enabled under the vLAG interface. Please note that for end-to-end flow control
and jumbo frames, they need to be enabled on the host servers and the storage as well, with
the same MTU size of 9216.
i) Set the MTU to 9216 on the vLAG Port-channel interface -
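A minimal sketch (the mtu command under the Port-channel interface, matching the mtu 9216 line in the validated configuration below):
VDX6740_RB5# conf t
VDX6740_RB5(config)# interface Port-channel 101
VDX6740_RB5(config-Port-channel-101)# mtu 9216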
ii) Repeat step i) on all Port-channel interfaces connecting to the Isilon nodes (101-104)
and the RHEL servers (105-106).
After performing Steps 1-7, we recommend validating each vLAG Port-channel
interface. In the example below, we validate Port-channel 101.
VDX6740_RB5# show running-config interface Port-channel 101
interface Port-channel 101
vlag ignore-split
mtu 9216
switchport
switchport mode access
switchport access vlan 50
qos flowcontrol tx on rx on
no shutdown
The Isilon cluster deployment process is divided into the stages shown in Table 3. Upon
completion of the deployment, the Isilon cluster is ready for integration with the customer
network and server infrastructure.
Table 3. Isilon Cluster Deployment Stages (beginning with Setup Node 1)
i) Log in to the Isilon Administration Console Web GUI using the node1 management
IP address of the cluster just configured.
ii)
iii)
iv)
v)
Description: Datastore
Netmask: 255.255.255.0
Gateway: none
SmartConnect: 192.168.50.111
vi)
Note: SmartConnect has two modes available, Basic and Advanced; the
Advanced mode requires an additional license from EMC. The unlicensed Basic mode balances
client connections by using a round-robin policy, selecting the next available
node on a rotating basis. For more information on the Advanced policies, see the
OneFS Administration Guide.
a. Zone name: zone1
b. Connect Policy: Round Robin
c. Service Subnet: S-Datastore
vii) Add available interfaces to the subnet, choosing aggregated links from each node.
viii) Use LACP for the Aggregation Mode (since we configured the vLAG as LACP).
When the switch port channels are configured properly, the Isilon will show
green indicators for all 10Gb interfaces in the cluster.
We have now completed the network connectivity setup for the Isilon.
Step 5: Setup Isilon NAS Shares
You will need to configure NFS shares on the Isilon for access by clients. We also enable SMB
sharing on the datastore to allow a Windows management station to upload and manipulate files
on the storage. The SMB share also provides another path to show customers the NAS
system's capabilities.
a) Configure Volume
1. Log in to the Web GUI using the node1 management IP address
2. Navigate to File System -> Smart Pools -> Disk Pools
3. Click Manually Add Disk Pool
a. Pool Name: X200_43TB_6GB-RAM
b. Protection Level: +2:1
c. Add all node resources to the pool
4. Click Submit
b)
2. Click Submit
2. Click Submit
Table 4. Server Deployment Steps
While the choice of servers to implement in the compute layer is flexible, it is recommended
to use enterprise-class servers designed for the data center. This type of server has redundant
power supplies and works well with scale-out architectures. In this deployment we used Red
Hat Enterprise Linux 6 as the operating system, but any other OS, such as Microsoft Windows
or VMware ESXi, can be used as well.
ii)
iii)
NOTE: The above commands may vary depending on the Linux distribution used. For the latest
commands, please refer to the Red Hat website (links in the References section).
Step 2: Configure interface bonding on Network Interface Cards (NICs)
RHEL allows administrators to bind NICs together into a single channel using the bonding
kernel module and a special network interface, called a channel bonding interface. Channel
bonding enables two or more network interfaces to act as one, simultaneously increasing the
bandwidth and providing redundancy. Details can be found on Red Hat's website
(refer to the References section of this document). We are using 10Gbps interfaces on the host for
storage traffic and 1Gbps interfaces for the management network.
In this example we configure a bonding device with two slave devices (two separate
interfaces), eth0 and eth1, for LACP aggregation, editing the configuration files with vim, as
shown below. Please note that to be successfully aggregated, both interfaces must have the same speed.
Please make sure that you can see "LACP rate: fast" in the output. If for some reason
the LACP rate shows up as slow, reboot your server.
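The bond state can be checked from /proc (output abridged):
# cat /proc/net/bonding/bond0
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up
802.3ad info
LACP rate: fast
...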
Step 3: Enable Flow Control
For better performance, it is recommended to enable flow control on the hosts. Depending on
the operating system, this can be enabled via Device Manager (in Windows),
ethtool (in Linux), or vSphere (in a VMware environment).
Due to the complexity and the various parameter options available, we do not cover this
step in detail in this deployment guide. For details, please refer to the Red Hat
website (links in the References section).
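As one illustration for Linux (a sketch; whether the pause settings are honored depends on the NIC driver):
# ethtool -A eth0 rx on tx on
# ethtool -a eth0
Pause parameters for eth0:
Autonegotiate: on
RX: on
TX: on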
[Figure: VMware deployment topology with VDX spine and leaf switches (RB2, RB3, RB4, RB5, RB6), ESXi hosts esx231, esx219, esx221, and esx225, and vSphere clients]
IP Addresses
When deploying a NAS infrastructure, the logical network infrastructure and IP topology must
be planned in advance. In the test bed we use a separate management network with all IP
addresses in the default VLAN 1. For the VCS network, VLAN separation is used for storage
(VLAN50), VM application (VLAN60), and vMotion (VLAN70), as shown in the table below.
Device           Type     Management_IP   Store_IP        VM_IP           VMotion_IP      x200_IP
                          (VLAN1)         (VLAN50)        (VLAN60)        (VLAN70)        (InfiniBand)
----------------------------------------------------------------------------------------------------
BR-VDX8770-4     VDX      192.168.90.93   -               -               -               -
BR-VDX8770-4     VDX      192.168.90.95   -               -               -               -
BR-VDX6740-48    VDX      192.168.90.94   -               -               -               -
BR-VDX6740-48    VDX      192.168.90.96   -               -               -               -
BR-VDX6740-48    VDX      192.168.90.97   -               -               -               -
BR-VDX6740-48    VDX      192.168.90.98   -               -               -               -
IS-x200-1        NAS      192.168.90.101  192.168.50.101  -               -               172.16.1.101
IS-x200-2        NAS      192.168.90.102  192.168.50.102  -               -               172.16.1.102
IS-x200-3        NAS      192.168.90.103  192.168.50.103  -               -               172.16.1.103
IS-x200-4        NAS      192.168.90.104  192.168.50.104  -               -               172.16.1.104
esx231           ESXi     192.168.90.231  192.168.50.231  192.168.60.231  192.168.70.231  -
esx219           ESXi     192.168.90.219  192.168.50.219  192.168.60.219  192.168.70.219  -
esx221           ESXi     192.168.90.221  192.168.50.221  192.168.60.221  192.168.70.221  -
esx225           ESXi     192.168.90.225  192.168.50.225  192.168.60.225  192.168.70.225  -
vCenter          VMware   192.168.90.100  -               192.168.60.240  -               -
vmioanalyzer1    VMware   -               -               192.168.60.241  -               -
vmioanalyzer2    VMware   -               -               192.168.60.242  -               -
RH5.5            VM       -               -               192.168.60.243  -               -
w2k8-VM2         VM       -               -               192.168.60.244  -               -
Web Server       VM       -               -               192.168.60.245  -               -
VMS (Security)   VMware   -               -               192.168.60.246  -               -
The Isilon storage system will use bonded interfaces for client connections to increase
performance and availability should one or more 10Gb connections fail. Each node will use
port channel groups configured on the two spine VDX 8770-4 switches (RB1 and RB2):
Node1, 1/2/41 & 2/2/42, port-channel 101
Node2, 1/2/43 & 2/2/44, port-channel 102
Node3, 1/2/45 & 2/2/46, port-channel 103
Node4, 1/2/47 & 2/2/48, port-channel 104
Assumptions
1. The fabric should already be configured and RBridge and VCS IDs assigned to the
switches.
NOTE: When the VCS is deployed in distributed mode, it is configured
as a Logical Chassis from a single entry point using the VCS Virtual IP, and
configuration changes are automatically saved across all switches in the
fabric. In the following example we show the configuration for distributed
mode (Logical Chassis).
Table 5. Deployment Steps for the VMware Example
Step 1: Connect 10Gb interfaces to RB1 & configure ports for VLAN access
1. SSH to the VDX switch or connect to the serial console
2. Configure VLANs
----------
VDX8770_RB1# conf t
VDX8770_RB1(config)# interface Vlan 50
VDX8770_RB1(config-Vlan-50)# description IsilonTest1_Storage
VDX8770_RB1(config)# interface Vlan 60
VDX8770_RB1(config-Vlan-60)# description IsilonTest1_VM_Application
VDX8770_RB1(config)# interface Vlan 70
VDX8770_RB1(config-Vlan-70)# description IsilonTest1_vMotion
-----------
NOTE:
When the VCS is deployed in distributed mode, it operates as a Logical
Chassis, and therefore VLANs only need to be configured once to be available
across the complete VCS fabric.
3. Configure the vLAG (LACP Port-channel) for Isilon Node1 connected to RB1 & RB2
4. Add the physical ports on RB1 & RB2 (where Isilon node 1 is connected) to the vLAG, as sketched below -
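A minimal sketch of steps 3 and 4 (LACP channel-group on the Node1 ports listed in the Description section; the description text is illustrative, and the syntax should be verified against the Network OS Command Reference):
----------
VDX8770_RB1# conf t
VDX8770_RB1(config)# interface Port-channel 101
VDX8770_RB1(config-Port-channel-101)# description vLAG_Isilon_Node1
VDX8770_RB1(config-Port-channel-101)# switchport
VDX8770_RB1(config-Port-channel-101)# switchport mode access
VDX8770_RB1(config-Port-channel-101)# switchport access vlan 50
VDX8770_RB1(config-Port-channel-101)# no shutdown
VDX8770_RB1(config)# interface TenGigabitEthernet 1/2/41
VDX8770_RB1(conf-if-te-1/2/41)# channel-group 101 mode active type standard
VDX8770_RB1(conf-if-te-1/2/41)# no shutdown
VDX8770_RB1(config)# interface TenGigabitEthernet 2/2/42
VDX8770_RB1(conf-if-te-2/2/42)# channel-group 101 mode active type standard
VDX8770_RB1(conf-if-te-2/2/42)# no shutdown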
VDX8770_RB1(conf-if-te-2/2/42)# end
-----------
2. Enable QoS flow control for both tx and rx on RB1 and RB2 -
----------
VDX8770_RB1# conf t
VDX8770_RB1(config)# interface Port-channel 101
VDX8770_RB1(config-Port-channel-101)# qos flowcontrol tx on rx on
-----------
VDX8770_RB1# show running-config interface Port-channel 101
interface Port-channel 101
vlag ignore-split
switchport
switchport mode access
switchport access vlan 50
qos flowcontrol tx on rx on
no shutdown
-----------
3. Repeat steps 1-2 to enable flow control for Isilon nodes 2-4
Each ESXi server uses vLAGs configured on the connected ToR switches in the respective racks
(see Figure 1). In the following we go through the configuration for ESXi_231. Server
ESXi_231 is connected on ports 3/0/37 and 4/0/37. This is defined as Port-channel 231.
Prerequisites
1. Two uplinks per server for the virtual switch used for storage traffic.
2. For each ESXi server, a VMkernel interface is defined to be used for NFS traffic, with an IP
address assigned according to the IP topology.
3. vCenter Server (or vCenter Appliance) is already deployed and the servers are already
added/managed by the vCenter.
A) Configure VCS ports with connected uplink interfaces for ESXi storage path
1. Configure vLAG (Static Port Channel) for ESXi_231 connected to RB3 & RB4
2. Add the physical ports on RB3 & RB4 (where ESXi231 is connected) to the vLAG, as sketched below -
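A minimal sketch of steps 1 and 2 (a static channel-group, i.e. mode on, to match the Route based on IP hash teaming verified later; the description text is illustrative):
----------
VDX8770_RB1# conf t
VDX8770_RB1(config)# interface Port-channel 231
VDX8770_RB1(config-Port-channel-231)# description vLAG_ESXi231_Storage
VDX8770_RB1(config-Port-channel-231)# switchport
VDX8770_RB1(config-Port-channel-231)# switchport mode access
VDX8770_RB1(config-Port-channel-231)# switchport access vlan 50
VDX8770_RB1(config-Port-channel-231)# no shutdown
VDX8770_RB1(config)# interface TenGigabitEthernet 3/0/37
VDX8770_RB1(conf-if-te-3/0/37)# channel-group 231 mode on type standard
VDX8770_RB1(conf-if-te-3/0/37)# no shutdown
VDX8770_RB1(config)# interface TenGigabitEthernet 4/0/37
VDX8770_RB1(conf-if-te-4/0/37)# channel-group 231 mode on type standard
VDX8770_RB1(conf-if-te-4/0/37)# no shutdown
VDX8770_RB1(conf-if-te-4/0/37)# end
-----------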
3. Repeat steps 1-2 for the interfaces on the remaining ESXi nodes.
1. Log in to the vSphere Client and press Ctrl-Shift-N to open the Network inventory
8. Click OK
9. Edit Settings for dvPortGroup
10. Change name to dvPG-50_Storage
NOTE: It is useful to include the VLAN ID in the port group name for easy
identification.
12. Verify that the NIC Teaming option is Route based on IP hash, since we have connected to a
vLAG
------------
VDX6740_RB1# conf t
VDX6740_RB1(config)# vcenter IsilonTest1 url https://192.168.90.100 username root password "Password!"
VDX6740_RB1(config)# vcenter IsilonTest1 activate
VDX6740_RB1(config)# vcenter IsilonTest1 interval 10
-------------
NFS allows multiple connections from a single host, meaning an ESXi host can mount the
same NFS export multiple times as separate datastores to distribute sessions. For demo
purposes, set up at least one datastore using the Isilon SmartConnect IP address for storage
failover. Add multiple datastores using the same IP if desired.
A) Add Isilon Datastores to ESXi Hosts
1. Log in to the vSphere Client and press Ctrl-Shift-H to open Hosts and Clusters
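Datastores can also be added from the ESXi shell with esxcfg-nas (a sketch; the export path /ifs/nfs/datastore1 is hypothetical, while 192.168.50.111 is the SmartConnect IP configured earlier):
~ # esxcfg-nas -a -o 192.168.50.111 -s /ifs/nfs/datastore1 datastore1
~ # esxcfg-nas -l
datastore1 is /ifs/nfs/datastore1 from 192.168.50.111 mounted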
Name: name for this VASA provider, e.g. EMC Isilon Systems
URL: http://<ip_addr>:8081/vasaprovider
Login: root
Password: root password
f. Enable Use Vendor Provider Certificate checkbox
g. Browse to the Certificate location for the certificate on your desktop
h. Click OK.
Note: To disable VASA later, run the following commands from SSH:
isi services apache2 disable
isi services isi_vasa_d disable
The following items were useful specifically for building the Isilon setup in a closed
environment, but they may have application in other environments, so we document them here.
A) vSphere Optimizations
1. Disable Shell Warnings for SSH/remote access in vSphere
NOTE:
The default settings for ESXi show a security warning when SSH is
enabled, and since most production activities do not require SSH, VMware
recommends that administrators only enable SSH when they need it. For
proof-of-concept and demo labs, or full-time SSH access, it's useful to disable
the SSH warning for a clean interface.
2. For IO-intensive VMs, use the PVSCSI adapter (Paravirtual), which increases
throughput and reduces CPU overhead
3. Align VMDK files at 8K boundaries for OneFS & create VM templates
Note: Since Windows Vista and Windows Server 2008, all Windows versions align
automatically during OS installation. Previous versions and upgraded systems are not
aligned.
Note: Red Hat and CentOS Linux version 6 systems align automatically during OS
installation. Previous versions and upgraded systems are not aligned.
a. Format legacy Windows disks with 8K Blocks with diskpart
i. create partition primary align=8
http://support.microsoft.com/kb/923076
4. Use 8192KB allocation unit (block size) when formatting virtual disks
a.
b.
5. Advanced NFS Settings for vSphere are available from VMware in KB #1007909. Heed
all cautions and recommendations from VMware and Isilon. Your mileage may vary.
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1007909#NFSHeap
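As an illustration, such parameters can be set from the ESXi shell with esxcfg-advcfg (the values below are examples only; take the current recommendations from the KB for your ESXi version):
~ # esxcfg-advcfg -s 64 /NFS/MaxVolumes
~ # esxcfg-advcfg -s 32 /Net/TcpipHeapSize
~ # esxcfg-advcfg -s 128 /Net/TcpipHeapMax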
B) Windows VM Optimizations
[Figure: STORE1, STORE2, STORE3, STORE4, ISILONVIP-50]
C) Windows 8 Optimizations
Appendix A
Bill of Materials
The following products are used in this deployment:

Identifier           Vendor    Model          Notes
---------------------------------------------------------------------------------------
Spine Switch         Brocade   VDX 8770-4     Modular switch with 10Gb and 40Gb interfaces
Spine Switch         Brocade   VDX 8770-4     Modular switch with 10Gb and 40Gb interfaces
ToR                  Brocade   VDX 6740-48    48 ports of 10Gb
ToR                  Brocade   VDX 6740-48    48 ports of 10Gb
ToR                  Brocade   VDX 6740-48    48 ports of 10Gb
ToR                  Brocade   VDX 6740-48    48 ports of 10Gb
Management Network   Brocade   ICX 6610-48P   -
RH Server            x86       X3630 M3       Red Hat Enterprise Linux (RHEL) 6
RH Server            x86       X3630 M3       Red Hat Enterprise Linux (RHEL) 6
Isilon Node          EMC       Isilon X200    4 total in cluster
Appendix B
Management Network
For completeness, we briefly describe the setup of the switch used for the management network in
the test bed. All switches, servers, and storage cluster nodes have management network
interfaces separate from the production (dataflow) network. These connect to the Top of
Rack management switch, a Brocade ICX 6610-48P in this configuration
example. We apply basic switch authentication with SSH logins so that the ICX login process is
similar to the VDX, for a consistent management experience.
Prerequisites
1. No directory authentication exists in this setup, so we will use internal accounts and
passwords.
Configure ICX Switch
1. Connect to the serial console of the ICX switch
2. Enter Enable mode and then Config mode
a. enable
b. conf t
4. Configure authentication
a. no telnet server
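The remaining sub-steps set up local accounts and SSH; a minimal sketch using typical FastIron commands (the user name and password are placeholders; verify the syntax against the FastIron configuration guide for your release):
b. username admin privilege 0 password <password>
c. aaa authentication login default local
d. crypto key generate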
Appendix C
References
Data Center Infrastructure: Base Reference Architecture
o VCS Fabric Blocks
o Data Center Template, Server Virtualization
o Data Center Template, VCS Fabric Leaf-Spine
Brocade Network OS Administrator's Guide, v4.0.0
Brocade Network OS Command Reference, v4.0.0
Brocade VDX 6740/6740T Hardware Reference Manual
Red Hat Portal: Using Interface Channel Bonding
Appendix D
About Brocade
Brocade (NASDAQ: BRCD) networking solutions help the world's leading
organizations transition smoothly to a world where applications and information reside
anywhere. This vision is designed to deliver key business benefits such as unmatched
simplicity, non-stop networking, application optimization, and investment protection.
Innovative Ethernet and storage networking solutions for data center, campus, and service
provider networks help reduce complexity and cost while enabling virtualization and cloud
computing to increase business agility.
To help ensure a complete solution, Brocade partners with world-class IT companies and
provides comprehensive education, support, and professional services offerings.
(www.brocade.com)