Reference Architecture: Lenovo Client Virtualization with VMware Horizon
version 1.1
Mike Perks
Kenny Bain
Chandrakandh Mouleeswaran
Table of Contents
1 Introduction ................................................................................................ 1
4.2.2 Intel Xeon E5-2600 v2 processor family servers with Atlantis ILIO ......................................... 15
4.3.2 Intel Xeon E5-2600 v2 processor family servers with Atlantis ILIO ......................................... 20
4.4.1 Intel Xeon E5-2600 v3 processor family servers with VMware VSAN ..................................... 21
4.8 Networking ............................................................................................................. 36
4.9 Racks ..................................................................................................................... 38
4.11.1 Deployment example 1: Flex Solution with single Flex System chassis .................................. 39
4.11.3 Deployment example 3: System x server with Storwize V7000 and FCoE .............................. 43
Resources ....................................................................................................... 66
Document history .......................................................................................... 67
1 Introduction
The intended audience for this document is technical IT architects, system administrators, and managers who
are interested in server-based desktop virtualization and server-based computing (terminal services or
application virtualization) that uses VMware Horizon (with View). In this document, the term client
virtualization is used to refer to all of these variations. Compare this term to server virtualization, which
refers to the virtualization of server-based business logic and databases.
This document describes the reference architecture for VMware Horizon 6.x and also applies to the previous
VMware Horizon 5.x versions. This document should be read with the Lenovo Client Virtualization (LCV)
base reference architecture document that is available at this website: lenovopress.com/tips1275.
The business problem, business value, requirements, and hardware details are described in the LCV base
reference architecture document and are not repeated here for brevity.
This document gives an architecture overview and logical component model of VMware Horizon. The
document also provides the operational model of VMware Horizon by combining Lenovo hardware platforms
such as Flex System, System x, NeXtScale System, and RackSwitch networking with OEM hardware
and software such as IBM Storwize and FlashSystem storage, VMware Virtual SAN, and Atlantis Computing
software. The operational model presents performance benchmark measurements and discussion, sizing
guidance, and some example deployment models. The last section contains detailed bill of materials
configurations for each piece of hardware.
2 Architectural overview
Figure 1 shows all of the main features of the Lenovo Client Virtualization reference architecture with VMware
Horizon on VMware ESXi hypervisor. It also shows remote access, authorization, and traffic monitoring. This
reference architecture does not address the general issues of multi-site deployment and network management.
The figure shows Internet clients that connect through a firewall and proxy server to the View Connection Server, internal clients that connect directly, the vCenter Server that manages the desktop pools, and the compute servers that run the hypervisor with shared storage.

Figure 1: LCV reference architecture with VMware Horizon
3 Component model
Figure 2 is a layered view of the LCV solution that is mapped to the VMware Horizon virtualization
infrastructure.
The layers include the clients and their agents (connected over HTTP/HTTPS and the display and management protocols), administrator GUIs for support services, the management services (vCenter Server, View Composer, and vCenter Operations for View), dedicated and stateless virtual desktops, hosted desktops and applications, accelerator VMs, the hypervisors with local SSD storage, and shared storage with supporting services such as directory, DNS, DHCP, OS licensing, and the Lenovo Thin Client Manager.
vCenter Operations for View: This tool provides end-to-end visibility into the health, performance, and efficiency of the virtual desktop infrastructure.

View Connection Server: The VMware Horizon Connection Server is the point of contact for client devices that are requesting virtual desktops. It authenticates users and directs the virtual desktop request to the appropriate virtual machine (VM) or desktop, which ensures that only valid users are allowed access. After the authentication is complete, users are directed to their assigned VM or desktop. If a virtual desktop is unavailable, the View Connection Server works with the management and the provisioning layer to have the VM ready and available.

View Composer: View Composer is installed in a VMware vCenter Server instance. View Composer is required when linked clones are created from a parent VM.

vCenter Server: vCenter Server for the VMware ESXi hypervisor requires an SQL database. The vCenter SQL server might be Microsoft Data Engine (MSDE), Oracle, or SQL Server. Because the vCenter SQL server is a critical component, redundant servers must be available to provide fault tolerance. Customer SQL databases (including respective redundancy) can be used. VMware Horizon can be configured to record events and their details into a Microsoft SQL Server or Oracle database. Business intelligence (BI) reporting engines can be used to analyze this database.

Clients: VMware Horizon supports a broad set of devices and all major device operating platforms, including Apple iOS, Google Android, and Google ChromeOS. Each client device has a VMware View Client, which acts as the agent to communicate with the virtual desktop.

RDP, PCoIP: The virtual desktop image is streamed to the user access device by using the display protocol. Depending on the solution, the available protocols are Remote Desktop Protocol (RDP) and PC over IP (PCoIP).

ESXi hypervisor: ESXi is a bare-metal hypervisor for the compute servers. ESXi also contains support for VSAN storage. For more information, see VMware Virtual SAN on page 6.

Accelerator VM: An optional storage accelerator, such as Atlantis ILIO, that runs as a VM on each compute server to reduce storage usage and increase IOPS.

Shared storage: Shared storage is used to store user profiles and user data files. Depending on the provisioning model that is used, different data is stored for VM images. For more information, see Storage model.
For more information, see the Lenovo Client Virtualization base reference architecture document that is
available at this website: lenovopress.com/tips1275.
The paging file (or vSwap) is transient data that can be redirected to Network File System (NFS)
storage. In general, it is recommended to disable swapping, which reduces storage use (shared or
local). The desktop memory size should be chosen to match the user workload rather than depending
on a smaller image and swapping, which reduces overall desktop performance.
User profiles (from Microsoft Roaming Profiles) are stored by using Common Internet File System
(CIFS).
Dedicated virtual desktops or stateless virtual desktops that need mobility require the following items to be on
NFS or block I/O shared storage:
• NFS or block I/O is used to store all virtual desktop-associated data, such as the master image, replicas, and linked clones.
• NFS is used to store View Composer persistent disks when View Persona Management is used for user profile and user data files. This feature is not recommended.
• NFS is used to store all virtual images for linked clones. The replicas and linked clones can be stored on local solid-state drive (SSD) storage. These items are discarded when the VM is shut down.
For more information, see the following VMware knowledge base article about creating linked clones on NFS storage:
http://kb.vmware.com/kb/2046165
Type | Min number of servers | Performance tier | Capacity tier | HA | Comments
Hyperconverged | Cluster of 3 | Memory or local flash | DAS | USX HA | Good balance between performance and capacity
Simple Hybrid | 1 | Memory | Shared or flash storage | Hypervisor HA | Functionally equivalent to Atlantis ILIO Persistent VDI
Simple All Flash | 1 | Local flash | Shared flash | Hypervisor HA | Good performance, but lower capacity
Simple In-Memory | 1 | Memory or flash | Memory or flash | N/A (daily backup) | Functionally equivalent to Atlantis ILIO Diskless VDI
Each VM is stored in VSAN as a set of storage objects:
• VM Home
• VM swap (.vswp)
• VMDK (.vmdk)
• Snapshots (.vmsn)
Internally, VM objects are split into multiple components that are based on performance and availability
requirements that are defined in the VM storage profile. These components are distributed across multiple
hosts in a cluster to tolerate simultaneous failures and meet performance requirements. VSAN uses a
distributed RAID architecture to distribute data across the cluster. Components are distributed with the use of
the following main techniques:
• Striping (the number of disk stripes per object), which spreads a component across multiple capacity devices.
• Mirroring (the number of failures to tolerate), which keeps replica copies of a component on different hosts.
For more information about VMware Horizon virtual desktop types, objects, and components, see VMware
Virtual SAN Design and Sizing Guide for Horizon Virtual Desktop Infrastructures, which is available at this
website: vmware.com/files/pdf/products/vsan/VMW-TMD-Virt-SAN-Dsn-Szing-Guid-Horizon-View.pdf
Table 2: VMware Horizon default storage policy values for linked clones
The default storage policy defines values for the following settings for the stateless (floating pool) desktop objects (VM_HOME, Replica, and OS Disk) and for the persistent (dedicated pool) desktop objects (VM_HOME, Replica, OS Disk, and Persistent Disk):
• Number of disk stripes per object (how widely a storage object is distributed): minimum 1, maximum 12
• Flash-memory read cache reservation
• Number of failures to tolerate (FTT)
• Force provisioning
• Object-space reservation
4 Operational model
This section describes the options for mapping the logical components of a client virtualization solution onto
hardware and software. The Operational model scenarios section gives an overview of the available
mappings and has pointers into the other sections for the related hardware. Each subsection contains
performance data, has recommendations on how to size for that particular hardware, and a pointer to the BOM
configurations that are described in section 5 on page 45. The last part of this section contains some
deployment models for example customer scenarios.
Enterprise:
• Traditional - Compute/management servers: x3550, x3650, nx360. Hypervisor: ESXi. Graphics acceleration: NVIDIA GRID K1 or K2. Shared storage: IBM Storwize V7000, IBM FlashSystem 840. Networking: 10GbE, 10GbE FCoE, 8 or 16 Gb FC.
• Converged - Flex System chassis with Flex System x240 compute/management servers. Hypervisor: ESXi. Shared storage: IBM Storwize V7000, IBM FlashSystem 840. Networking: 10GbE, 10GbE FCoE, 8 or 16 Gb FC.
SMB:
• Traditional - Compute/management servers: x3550, x3650, nx360. Hypervisor: ESXi. Graphics acceleration: NVIDIA GRID K1 or K2. Shared storage: IBM Storwize V3700. Networking: 10GbE.
• Converged - Not recommended.
Hyper-converged:
• Compute/management servers: x3650. Hypervisor: ESXi. Graphics acceleration: NVIDIA GRID K1 or K2. Shared storage: not applicable. Networking: 10GbE.
The right-most column represents hyper-converged systems and the software that is used in these systems. For
the purposes of this reference architecture, the traditional and converged columns are merged for enterprise
solutions; the only significant differences are the networking, form factor, and capabilities of the compute
servers.
Converged systems are not generally recommended for the SMB space because the converged hardware
chassis adds overhead when only a few compute nodes are needed. Other compute nodes in the converged
chassis can be used for other workloads to make this hardware architecture more cost-effective.
The VMware ESXi 6.0 hypervisor is recommended for all operational models. Similar performance results were
also achieved with the ESXi 5.5 U2 hypervisor. The ESXi hypervisor is convenient because it can boot from a
USB flash drive or boot from SAN and does not require any extra local storage.
To show the enterprise operational model for different sized customer environments, four different sizing
models are provided for supporting 600, 1500, 4500, and 10000 users.
To show the hyper-converged operational model for different sized customer environments, four different sizing
models are provided for supporting 300, 600, 1500, and 3000 users. The management server VMs for a
hyper-converged cluster can either be in a separate hyper-converged cluster or on traditional shared storage.
Table 3 lists the results for the Login VSI 4.1 office worker workload.
Table 3: Performance with office worker workload
Stateless | Dedicated
188 users | 197 users
232 users | 234 users
239 users | 244 users
243 users | 246 users
Table 4 lists the results for the Login VSI 4.1 knowledge worker workload.
Table 4: Performance with knowledge worker workload
Stateless | Dedicated
189 users | 190 users
191 users | 200 users
These results indicate the comparative processor performance. The following conclusions can be drawn:
The Xeon E5-2650v3 processor has performance that is similar to the previously recommended Xeon
E5-2690v2 processor (IvyBridge), but uses less power and is less expensive.
The Xeon E5-2690v3 processor does not have significantly better performance than the Xeon
E5-2680v3 processor; therefore, the E5-2680v3 is preferred because of the lower cost.
Between the Xeon E5-2650v3 (2.30 GHz, 10C 105W) and the Xeon E5-2680v3 (2.50 GHz, 12C 120W) series
processors are the Xeon E5-2660v3 (2.6 GHz 10C 105W) and the Xeon E5-2670v3 (2.3GHz 12C 120W)
series processors. The cost per user increases with each processor but with a corresponding increase in user
density. The Xeon E5-2680v3 processor has good user density, but the significant increase in cost might
outweigh this advantage. Also, many configurations are bound by memory; therefore, a faster processor might
not provide any added value. Some users require the fastest processor and for those users, the Xeon
E5-2680v3 processor is the best choice. However, the Xeon E5-2650v3 processor is recommended for an
average configuration.
Previous Reference Architectures used Login VSI 3.7 medium and heavy workloads. Table 5 gives a
comparison with the newer Login VSI 4.1 office worker and knowledge worker workloads. The table shows that
Login VSI 3.7 is on average 20% to 30% higher than Login VSI 4.1.
Table 5: Comparison of Login VSI 3.7 and 4.1 workloads
Workload | Stateless | Dedicated
4.1 Office worker | 188 users | 197 users
3.7 Medium | 254 users | 260 users
4.1 Office worker | 243 users | 246 users
3.7 Medium | 316 users | 313 users
4.1 Knowledge worker | 191 users | 200 users
3.7 Heavy | 275 users | 277 users
Table 6 compares the E5-2600 v3 processors with the previous generation E5-2600 v2 processors by using
the Login VSI 3.7 workloads to show the relative performance improvement. On average, the E5-2600 v3
processors are 25% - 30% faster than the previous generation with the equivalent processor names.
Table 6: Comparison of E5-2600 v2 and E5-2600 v3 processors
Workload | Processor generation | Stateless | Dedicated
3.7 Medium | E5-2600 v2 | 202 users | 205 users
3.7 Medium | E5-2600 v3 | 254 users | 260 users
3.7 Medium | E5-2600 v2 | 240 users | 260 users
3.7 Medium | E5-2600 v3 | 316 users | 313 users
3.7 Heavy | E5-2600 v2 | 208 users | 220 users
3.7 Heavy | E5-2600 v3 | 275 users | 277 users
The default recommendation for this processor family is the Xeon E5-2650v3 processor and 512 GB of system
memory because this configuration provides the best coverage for a range of users. For users who need VMs
that are larger than 3 GB, Lenovo recommends the use of 768 GB and the Xeon E5-2680v3 processor.
Lenovo testing shows that 150 users per server is a good baseline and has an average of 76% usage of the
processors in the server. If a server goes down, users on that server must be transferred to the remaining
servers. For this degraded failover case, Lenovo testing shows that 180 users per server have an average of
89% usage of the processor. It is important to keep this 25% headroom on servers to cope with possible
failover scenarios. Lenovo recommends a general failover ratio of 5:1.
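To make the failover arithmetic concrete, the following minimal Python sketch (not part of the reference architecture; function and parameter names are illustrative) applies the rule of thumb above: a 5:1 failover ratio raises the per-server density from 150 to 180 users, and the server count is sized for the normal density. The published tables apply additional rounding and headroom, so their counts can differ slightly.

```python
import math

def compute_servers(total_users, users_per_server=150, failover_ratio=5):
    """Servers needed and the per-server density after one server in a group fails."""
    # A failover ratio of 5:1 keeps one spare server for every five active servers,
    # so each surviving server must absorb 6/5 of the normal load (150 -> 180 users).
    failover_density = math.ceil(users_per_server * (failover_ratio + 1) / failover_ratio)
    servers = math.ceil(total_users / users_per_server)
    return servers, failover_density

if __name__ == "__main__":
    for users in (600, 1500, 4500, 10000):
        servers, density = compute_servers(users)
        print(f"{users:6d} users: {servers:3d} servers, {density} users/server after a failure")
```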
Table 7 lists the processor usage with ESXi for the recommended user counts for normal mode and failover
mode.
Table 7: Processor usage
Processor | Workload | Users | Stateless utilization | Dedicated utilization
Two E5-2650 v3 | Office worker | 150 (normal mode) | 79% | 78%
Two E5-2650 v3 | Office worker | 180 (failover mode) | 86% | 86%
Two E5-2680 v3 | Knowledge worker | 150 (normal mode) | 76% | 74%
Two E5-2680 v3 | Knowledge worker | 180 (failover mode) | 92% | 90%
Table 8 lists the recommended number of virtual desktops per server for different VM memory sizes. The
number of users is reduced in some cases to fit within the available memory and still maintain a reasonably
balanced system of compute and memory.
Table 8: Recommended number of virtual desktops per server
Processor | VM memory size | System memory | Desktops (normal mode) | Desktops (failover mode)
E5-2650v3 | 2 GB (default) | 384 GB | 150 | 180
E5-2650v3 | 3 GB | 512 GB | 140 | 168
E5-2680v3 | 4 GB | 768 GB | 150 | 180
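The counts in Table 8 can be approximated with a small sketch that caps the processor-bound target by the memory that remains after a reservation for the hypervisor and overhead. The 8 GB reservation and the min() model are assumptions for illustration, not values stated in this document.

```python
import math

def desktops_per_server(system_gb, vm_gb, cpu_failover_limit=180,
                        failover_ratio=5, reserve_gb=8):
    """Approximate the failover and normal desktop counts for one compute server."""
    memory_limit = (system_gb - reserve_gb) // vm_gb       # memory-bound desktop count
    failover = min(cpu_failover_limit, memory_limit)       # take the tighter of CPU and memory
    normal = math.floor(failover * failover_ratio / (failover_ratio + 1))
    return normal, failover

for system_gb, vm_gb in ((384, 2), (512, 3), (768, 4)):
    normal, failover = desktops_per_server(system_gb, vm_gb)
    print(f"{vm_gb} GB VMs on {system_gb} GB: {normal} normal / {failover} failover")
```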
Table 9 lists the approximate number of compute servers that are needed for different numbers of users and
VM sizes.
Table 9: Compute servers needed for different numbers of users and VM sizes
Desktop memory size: 2 GB or 4 GB | 600 users | 1500 users | 4500 users | 10000 users
Compute servers (normal mode) |  | 10 | 30 | 68
Compute servers (failover mode) |  |  | 25 | 56
Failover ratio | 4:1 | 4:1 | 5:1 | 5:1
Desktop memory size: 3 GB | 600 users | 1500 users | 4500 users | 10000 users
Compute servers (normal mode) |  | 11 | 33 | 72
Compute servers (failover mode) |  |  | 27 | 60
Failover ratio | 4:1 | 4.5:1 | 4.5:1 | 5:1
For stateless desktops, local SSDs can be used to store the VMware replicas and linked clones for improved
performance. Two replicas must be stored for each master image. Each stateless virtual desktop requires a
linked clone, which tends to grow over time until it is refreshed at log out. Two enterprise high speed 200 GB
SSDs in a RAID 0 configuration should be sufficient for most user scenarios; however, 400 GB or even 800 GB
SSDs might be needed. Because of the stateless nature of the architecture, there is little added value in
configuring reliable SSDs in more redundant configurations.
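A rough sizing sketch for the local SSD capacity follows; the 30 GB master image and the 2 GB average linked-clone growth are illustrative assumptions, so a proof of concept should confirm the growth rate for the actual workload.

```python
def local_ssd_gb(master_images, desktops, image_size_gb=30, clone_growth_gb=2.0):
    """Estimate local SSD capacity for stateless replicas and linked clones."""
    replicas = 2 * master_images * image_size_gb    # two replicas per master image
    clones = desktops * clone_growth_gb             # linked clones grow until refreshed at logout
    return replicas + clones

# Example: one master image and 150 stateless desktops per server.
print(f"{local_ssd_gb(1, 150):.0f} GB needed; 2 x 200 GB SSDs in RAID 0 provide 400 GB")
```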
4.2.2 Intel Xeon E5-2600 v2 processor family servers with Atlantis ILIO
Atlantis ILIO provides storage optimization by using a 100% software solution. There is a cost for processor
and memory usage while offering decreased storage usage and increased input/output operations per second
(IOPS). This section contains performance measurements for processor and memory utilization of ILIO
technology and gives an indication of the storage usage and performance. Dedicated and stateless virtual
desktops have different performance measurements and recommendations.
VMs under ILIO are deployed on a per server basis. It is also recommended to use a separate storage logical
unit number (LUN) for each ILIO VM to support failover. Therefore, the performance measurements and
recommendations in this section are on a per server basis. Note that these measurements are currently for the
E5-2600 v2 processor using Login VSI 3.7.
Table 10: Performance results for dedicated desktops with ILIO Persistent VDI
Workload | Dedicated | Dedicated with ILIO Persistent VDI
Medium | 205 users | 189 users
Medium | 260 users | 232 users
Heavy | 220 users | 198 users
On average, there is a difference of 20% - 30% that can be attributed to the work that is done by the two vCPUs
of the Atlantis ILIO VM. It is recommended that higher-end processors (such as the E5-2690v2) are used to
maximize density.
The ILIO Persistent VDI VM uses 5 GB of RAM. In addition, the ILIO RAM cache requires more RAM and
Atlantis Computing provides a calculator for this RAM. Lenovo testing found that 275 VMs used 35 GB out of
the 50 GB RAM. In practice, most servers host fewer VMs, but each VM is much larger. Proof of concept (POC)
testing can help determine the amount of RAM, but for most situations 50 GB of RAM should be sufficient.
Assuming 4 GB for the hypervisor, 59 GB (50 + 5 + 4) of system memory should be reserved. It is
recommended that at least 384 GB of server memory is used for ILIO Persistent VDI deployments.
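The reservation above can be expressed as a small helper; the figures mirror the 50 + 5 + 4 GB calculation in the text (Table 11 rounds one of the resulting values slightly differently).

```python
def ilio_persistent_memory(system_gb, ilio_vm_gb=5, ram_cache_gb=50, hypervisor_gb=4):
    """Return (reserved_gb, desktop_vm_gb) for an ILIO Persistent VDI compute server."""
    reserved = ilio_vm_gb + ram_cache_gb + hypervisor_gb    # 50 + 5 + 4 = 59 GB
    return reserved, system_gb - reserved

for system_gb in (384, 512, 768):
    reserved, available = ilio_persistent_memory(system_gb)
    print(f"{system_gb} GB server: {reserved} GB reserved, {available} GB for desktop VMs")
```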
Table 11 lists the recommended number of virtual desktops per server for different VM memory sizes for a
medium workload. This configuration can be a more cost-effective, higher-density route for larger VMs that
balance RAM and processor utilization.
Table 11: Recommended number of virtual desktops per server with ILIO Persistent VDI
Processor | VM memory size | System memory | Reserved memory | Memory for desktop VMs | Desktops (normal mode) | Desktops (failover mode)
E5-2690v2 | 2 GB (default) | 384 GB | 59 GB | 325 GB | 125 | 150
E5-2690v2 | 3 GB | 512 GB | 59 GB | 452 GB | 125 | 150
E5-2690v2 | 4 GB | 768 GB | 59 GB | 709 GB | 125 | 150
Table 12 lists the number of compute servers that are needed for different numbers of users and VM sizes. A
server with 384 GB system memory is used for 2 GB VMs, 512 GB system memory is used for 3 GB VMs, and
768 GB system memory is used for 4 GB VMs.
Table 12: Compute servers needed for different numbers of users with ILIO Persistent VDI
 | 600 users | 1500 users | 4500 users | 10000 users
Compute servers (normal mode) |  | 12 | 36 | 80
Compute servers (failover mode) |  | 10 | 30 | 67
Failover ratio | 4:1 | 5:1 | 5:1 | 4:1
The amount of disk storage that is used depends on several factors, including the size of the original image,
the amount of user unique storage, and the de-duplication and compression ratios that can be achieved.
Here is a best case example: A Windows 7 image uses 21 GB out of an allocated 30 GB. For 160 VMs that are
using full clones, the actual storage space that is needed is 3360 GB. For ILIO, the storage space that is used
is 60 GB out of an allocated datastore of 250 GB. This configuration is a saving of 98% and is best case, even
if you add the 50 GB of disk space that is needed by the ILIO VM.
It is still a best practice to separate the user folder and any other shared folders into separate storage. That
means that all of the other changes that might occur in a full clone must be stored in the ILIO data store.
This configuration is highly dependent on the environment. Testing by Atlantis Computing suggests that 3.5 GB
of unique data per persistent VM is sufficient. Comparing against the 4800 GB that is needed for 160 full clone
VMs, this configuration still represents a saving of 88%. It is recommended to reserve 10% - 20% of the total
storage that is required for the ILIO data store.
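The savings quoted above can be reproduced with simple arithmetic; as in the text, the 50 GB that is used by the ILIO VM itself is excluded from these percentages.

```python
def saving_percent(full_clone_gb, ilio_gb):
    """Storage saving of the ILIO datastore relative to full clones."""
    return 100.0 * (1.0 - ilio_gb / full_clone_gb)

vms = 160
best_case = saving_percent(vms * 21, 60)          # 21 GB used per clone vs. 60 GB ILIO datastore
typical = saving_percent(vms * 30, vms * 3.5)     # 30 GB allocated vs. 3.5 GB unique data per VM
print(f"Best case: {best_case:.0f}% saving; with 3.5 GB unique data per VM: {typical:.0f}% saving")
```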
As a result of the use of ILIO Persistent VDI, the only read operations are to fill the cache for the first time. For
all practical purposes, the remaining reads are few and at most 1 IOPS per VM. Writes to persistent storage
are still needed for starting, logging in, remaining in steady state, and logging off, but the overall IOPS count is
substantially reduced.
Assuming the use of a fast, low-latency shared storage device, such as the IBM FlashSystem 840 system, a
single VM boot can take 20 - 25 seconds to get past the display of the logon window and get all of the other
services fully loaded. The time is dominated by read operations during the boot process, although the actual
boot time can vary depending on the VM.
Login time for a single desktop varies, depending on the VM image but can be extremely quick. In some cases,
the login will take less than 6 seconds. Scale-out testing across a cluster of servers shows that one new login
every 6 seconds can be supported over a long period. Therefore, at any one instant, there can be multiple
logins underway and the main bottleneck is the processor.
Table 13: Performance results for stateless desktops with ILIO
Processor | Workload | Stateless | Stateless with local SSD | Stateless with ILIO
Two E5-2650v2 8C 2.7 GHz | Medium | 202 users | 181 users | 159 users
 | Medium | 240 users | 227 users | 224 users
 | Heavy | 208 users | 196 users | 196 users
On average, there is a difference of 20% - 35% that can be attributed to the work that is done by the two vCPUs
of the Atlantis ILIO VM. It is recommended that higher-end processors (such as the E5-2690v2) are used to
maximize density. The maximum number of users that is supported is slightly higher for ILIO Diskless VDI, but
the RAM requirement is also much higher.
For the ILIO Persistent VDI that uses local SSDs, the memory calculation is similar to that for persistent virtual
desktops. It is recommended that at least 384 GB of server memory is used for ILIO Persistent VDI
deployments. For more information about recommendations for ILIO Persistent VDI that use local SSDs for
stateless virtual desktops, see Table 11 and Table 12. The same configuration can also be used for stateless
desktops with shared storage; however, the performance of the write operations likely becomes much worse.
The ILIO Diskless VDI VM uses 5 GB of RAM. In addition, the ILIO RAM cache and RAM data store requires
extra RAM. Atlantis Computing provides a calculator for this RAM. Lenovo testing found that 230 VMs used 69
GB of RAM. In practice, most servers host fewer VMs and each VM has more differences. POC testing can help
determine the amount of RAM, but 128 GB should be sufficient for most situations. Assuming 4 GB for the
hypervisor, 137 GB (128 + 5 + 4) of system memory should be reserved. In general, it is recommended that a
minimum of 512 GB of server memory is used for ILIO Diskless VDI deployments.
Table 14 lists the recommended number of stateless virtual desktops per server for different VM memory sizes
for a medium workload.
Table 14: Recommended number of virtual desktops per server with ILIO Diskless VDI
Processor | VM memory size | System memory | Reserved memory | Memory for desktop VMs | Desktops (normal mode) | Desktops (failover mode)
E5-2690v2 | 2 GB (default) | 512 GB | 137 GB | 375 GB | 125 | 150
E5-2690v2 | 3 GB | 512 GB | 137 GB | 375 GB | 100 | 125
E5-2690v2 | 4 GB | 768 GB | 137 GB | 631 GB | 125 | 150
Table 15 shows the number of compute servers that are needed for different numbers of users and VM sizes. A
server with 512 GB system memory is used for 2 GB and 3 GB VMs, and 768 GB system memory is used for 4
GB VMs.
Table 15: Compute servers needed for different numbers of users with ILIO Diskless VDI
Desktop memory size (2 GB or 4 GB) | 600 users | 1500 users | 4500 users | 10000 users
11 | 30 | 67
25 | 56
Failover ratio | 4:1 | 4.5:1 | 5:1 | 5:1
 | 600 users | 1500 users | 4500 users | 10000 users
12 | 36 | 80
10 | 30 | 67
Failover ratio | 4:1 | 5:1 | 5:1 | 4:1
Disk storage is needed for the master images and each SnapClone data store for ILIO Diskless VDI VMs. This
storage does not need to be fast because it is used only to initially load the master image or to recover an ILIO
Diskless VDI VM that was rebooted.
As with persistent virtual desktops, the addition of the ILIO technology reduces the IOPS that is needed for
boot, login, remaining in steady state, and logoff. This reduces the time to bring a VM online and reduces user
response time.
Table 16: Performance results for hosted desktops
Processor | Workload | Hosted desktops
Two E5-2650 v3 | Office worker | 222 users
Two E5-2690 v3 | Office worker | 298 users
Two E5-2690 v3 | Knowledge worker | 244 users
Lenovo testing shows that 170 hosted desktops per server is a good baseline. If a server goes down, users on
that server must be transferred to the remaining servers. For this degraded failover case, Lenovo recommends
204 hosted desktops per server. It is important to keep a 25% headroom on servers to cope with possible
failover scenarios. Lenovo recommends a general failover ratio of 5:1.
Table 17 lists the processor usage for the recommended number of users.
Table 17: Processor usage
Processor | Workload | Hosted desktops | Utilization
Two E5-2650 v3 | Office worker | 170 (normal mode) | 73%
Two E5-2650 v3 | Office worker | 204 (failover mode) | 81%
Two E5-2690 v3 | Knowledge worker | 170 (normal mode) | 64%
Two E5-2690 v3 | Knowledge worker | 204 (failover mode) | 79%
Table 18 lists the number of compute servers that are needed for different number of users. Each compute
server has 128 GB of system memory for the four VMs.
Table 18: Compute servers needed for different numbers of users and VM sizes
 | 600 users | 1500 users | 4500 users | 10000 users
Compute servers (normal mode) |  | 10 | 27 | 59
Compute servers (failover mode) |  |  | 22 | 49
Failover ratio | 3:1 | 4:1 | 4.5:1 | 5:1
4.3.2 Intel Xeon E5-2600 v2 processor family servers with Atlantis ILIO
Atlantis ILIO provides in-memory storage optimization by using a 100% software solution. There is an effect on
processor and memory usage while offering decreased storage usage and increased IOPS. This section
contains performance measurements for processor and memory utilization of ILIO technology and describes
the storage usage and performance.
VMs under ILIO are deployed on a per server basis. It is recommended to use a separate storage LUN for
each ILIO VM to support failover. The performance measurements and recommendations in this section are on
a per server basis. Note that these measurements are currently for the E5-2600 v2 processor using Login VSI
3.7.
The performance measurements and recommendations are for the use of ILIO Persistent VDI with hosted
desktops. Table 19 lists the processor performance results for the Xeon E5-2600 v2 series of processors.
Table 19: Performance results for hosted desktops using the E5-2600 v2 processors
Workload | Processor | RDP desktops | RDP with ILIO Persistent VM | PCoIP desktops | PCoIP with ILIO Persistent VM
Medium | Two E5-2690v2 | 266 users | 243 users | 220 users | 205 users
Heavy | Two E5-2690v2 | 213 users | 197 users | 173 users | 163 users
On average, there is a difference of 20% - 30% that can be attributed to work that is done by the two vCPUs of
the Atlantis ILIO VM. It is recommended that higher-end processors, such as E5-2690v2, are used to maximize
density.
The ILIO Persistent VDI VM uses 5 GB of RAM. In addition, the ILIO RAM cache requires more RAM. Atlantis
Computing provides a calculator for this RAM. Lenovo testing found that the four VMs used 32 GB. In practice,
most servers host fewer VMs and each VM is much larger. POC testing can help determine the amount of RAM.
However, for most circumstances, 60 GB should be enough. It is recommended that at least 192 GB of server
memory is used for ILIO Persistent VDI deployments of hosted desktops.
Table 20 shows the recommended number of shared hosted desktops per compute server that uses two Xeon
E5-2690v2 series processors, which allows for some processor headroom for the hypervisor and a 5:1 failover
ratio in the compute servers.
Table 20: Recommended number of hosted desktops per server with ILIO Persistent VDI
Workload | Normal case | Normal utilization | Failover case | Failover utilization
Medium | 150 | 73% | 180 | 88%
Heavy | 160 | 73% | 192 | 87%
Table 21 shows the number of compute servers that is needed for different numbers of users. Each compute
server has 256 GB of system memory for the four VMs and the ILIO Persistent VDI VM.
Table 21: Compute servers needed for different numbers of users and VM sizes
 | 600 users | 1500 users | 4500 users | 10000 users
Compute servers (normal mode) |  | 10 | 30 | 67
Compute servers (failover mode) |  |  | 25 | 56
Failover ratio | 4:1 | 4:1 | 5:1 | 5:1
The amount of disk storage that is used depends on several factors, including the size of the original Windows
Server image, the amount of unique storage, and the de-duplication and compression ratios that can be
achieved. A Windows 2008 R2 image uses 19 GB. For four VMs, the actual storage space that is needed is 76
GB. For ILIO, the storage space that is used is 25 GB, which is a saving of 67%.
As a result of the use of ILIO Persistent VDI, the only read I/O operations that are needed are those to fill the
cache for the first time. For all practical purposes, the remaining reads are few and at most 1 IOPS per VM.
Writes to persistent storage are still needed for booting, logging in, remaining in steady state, and logging off,
but the overall IOPS count is substantially reduced.
4.4.1 Intel Xeon E5-2600 v3 processor family servers with VMware VSAN
VMware VSAN is tested by using the office worker and knowledge worker workloads of Login VSI 4.1. Four
Lenovo x3650 M5 servers with E5-2680v3 processors were networked together by using a 10 GbE TOR switch
with two 10 GbE connections per server.
Table 22 lists the Login VSI results for stateless desktops by using linked clones and the VMware default
storage policy of number of failures to tolerate (FTT) of 0 and stripe of 1.
Table 22: Performance results for stateless desktops
Workload | Stateless desktops tested | Used capacity | Users (1 SSD and 6 HDDs) | Users (1 SSD and 3 HDDs)
Office worker | 1000 | 5.26 TB | 888 | 886
Knowledge worker | 800 | 4.45 TB | 696 | 674
Table 23 lists the Login VSI results for persistent desktops by using linked clones and the VMware default
storage policy of fault tolerance of 1 and 1 stripe.
Table 23: Performance results for dedicated desktops
Storage policy | Test scenario | Dedicated desktops tested | Used capacity | Users (1 SSD and 6 HDDs) | Users (1 SSD and 3 HDDs)
OS disk and persistent disk: FTT = 1, Stripe = 1 | Office worker | 1000 | 10.86 TB | 905 | 895
OS disk and persistent disk: FTT = 1, Stripe = 1 | Knowledge worker | 800 | 9.28 TB | 700 | 668
These results show that there is no significant performance difference between disk groups with three and
seven HDDs and there are enough IOPS for the disk writes. Persistent desktops might need more hard drive
capacity or larger SSDs to improve performance for full clones or provide space for growth of linked clones.
The Lenovo M5210 raid controller for the x3650 M5 can be used in two modes: integrated MegaRaid (iMR)
mode without the flash cache module or MegaRaid (MR) mode with the flash cache module and at least 1 GB
of battery backed flash memory. In both modes, RAID 0 virtual drives were configured for use by the disk
groups. For more information, see this website: lenovopress.com/tips1069.html.
Table 24 lists the measured queue depth for the M5210 raid controller in iMR and MR modes. Lenovo
recommends using the M5210 raid controller with the flash cache module because it has a much better queue
depth and has better IOPS performance.
Table 24: RAID controller queue depth
RAID controller: M5210
Measured queue depths: 234 (iMR mode), 895 (MR mode), 128
Lenovo testing shows that 125 users per server is a good baseline and has an average of 77% usage of the
processors in the server. If a server goes down, users on that server must be transferred to the remaining
servers. For this degraded failover case, Lenovo testing shows that 150 users per server have an average of
89% usage rate. It is important to keep 25% headroom on servers to cope with possible failover scenarios.
Lenovo recommends a general failover ratio of 5:1.
Table 25 lists the processor usage for the recommended number of users.
Table 25: Processor usage
Processor | Workload | Users | Stateless utilization | Dedicated utilization
Two E5-2680 v3 | Office worker | 125 (normal mode) | 51% | 50%
Two E5-2680 v3 | Office worker | 150 (failover mode) | 62% | 59%
Two E5-2680 v3 | Knowledge worker | 125 (normal mode) | 62% | 61%
Two E5-2680 v3 | Knowledge worker | 150 (failover mode) | 81% | 78%
Table 26 lists the recommended number of virtual desktops per server for different VM memory sizes. The
number of users is reduced in some cases to fit within the available memory and still maintain a reasonably
balanced system of compute and memory.
Table 26: Recommended number of virtual desktops per server for VSAN
Processor | VM memory size | System memory | Desktops (normal mode) | Desktops (failover mode)
E5-2680 v3 | 2 GB (default) | 384 GB | 125 | 150
E5-2680 v3 | 3 GB | 512 GB | 125 | 150
E5-2680 v3 | 4 GB | 768 GB | 125 | 150
Table 27 shows the number of servers that is needed for different numbers of users. By using the target of 125
users per server, the maximum number of users is 4000. The minimum number of servers that is required for
VSAN is 3, which is reflected in the extra capacity for the 300-user case because the configuration
can actually support up to 450 users.
Table 27: Compute servers needed for different numbers of users for VSAN
 | 300 users | 600 users | 1500 users | 3000 users
Compute servers (normal mode) |  |  | 12 | 24
Compute servers (failover mode) |  |  | 10 | 20
Failover ratio | 3:1 | 4:1 | 5:1 | 5:1
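The VSAN sizing rule above can be summarized in a short sketch; the 32-host cluster limit is an assumption used here to explain the 4000-user maximum (32 x 125).

```python
import math

def vsan_hosts(users, users_per_host=125, min_hosts=3, max_hosts=32):
    """Hosts needed for a single VSAN cluster at the normal-mode density."""
    hosts = max(min_hosts, math.ceil(users / users_per_host))
    if hosts > max_hosts:
        raise ValueError("more than one VSAN cluster is needed")
    return hosts

for users in (300, 600, 1500, 3000):
    print(f"{users:5d} users: {vsan_hosts(users)} hosts")
```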
The processor and I/O usage graphs from esxtop are helpful to understand the performance characteristics of
VSAN. The graphs are for 150 users per server and three servers to show the worst-case load in a failover
scenario.
Figure 5 shows the processor usage with three curves (one for each server). The Y axis is percentage usage
0% - 100%. The curves have the classic Login VSI shape with a gradual increase of processor usage and then
flat during the steady state period. The curves then go close to 0 as the logoffs are completed.
Figure 6: VSAN SSD reads and writes for 450 virtual desktops
Figure 7 shows the HDD reads and writes with 36 curves, one for each HDD. The Y axis is 0 - 2,000 IOPS for
the reads and 0 - 1,000 IOPS for the writes. The number of read IOPS has an average peak of 200 IOPS and
many of the drives are idle much of the time. The number of write IOPS has a peak of 500 IOPS; however, as
can be seen, the writes occur in batches as data is destaged from the SSD cache onto the greater capacity
HDDs. The first group of write peaks is during the logon and last group of write peaks correspond to the logoff
period.
Figure 7: VSAN HDD reads and writes for 450 virtual desktops
The following compute server resiliency use cases were tested:
• Enter maintenance mode by using the VSAN migration mode of ensure accessibility (VSAN default)
• Enter maintenance mode by using the VSAN migration mode of no data migration
• Server power off by using the VSAN migration mode of ensure accessibility (VSAN default)
For each use case, Login VSI was run and then the compute server was removed. This process was done
during the login phase as new virtual desktops are logged in and during the steady state phase. For the steady
state phase, 114 - 120 VMs must be migrated from the failed server to the other three servers with each
server gaining 38 - 40 VMs.
Table 28 lists the completion time or downtime for the VSAN system with the three different use cases.
Table 28: Completion time and downtime for the compute server use cases
 | Login phase: Completion time (seconds) | Login phase: Downtime (seconds) | Steady state: Completion time (seconds) | Steady state: Downtime (seconds)
Maintenance mode, ensure accessibility | 605 |  | 598 | 
Maintenance mode, no data migration | 408 |  | 430 | 
Server power off, ensure accessibility | N/A | 226 | N/A | 316
For the two maintenance mode cases, all of the VMs migrated smoothly to the other servers and there was no
significant interruption in the Login VSI test.
For the power off use case, there is a significant period for the system to readjust. During the login phase for a
Login VSI test, the following process was observed:
1. All logged in users were logged out from the failed node.
2. Login failed for all new users logging to the desktops running on the failed node.
3. Desktop status changed to "Agent Unreachable" for all desktops in the failed node.
4. All desktops are migrated to other nodes.
5. Desktop status changed to "Available".
6. Login continued successfully for all new users.
In a production system, users with persistent desktops that are running on the failed server must log in again
after their VM was successfully migrated to another server. Stateless users can continue working almost
immediately, assuming that the system is not at full capacity and other stateless VMs are ready to be used.
Figure 8 shows the processor usage for the four servers during the login phase and steady state phase by
using Login VSI with the knowledge worker workload when one of the servers is powered off. The processor
spike for the three remaining servers is apparent.
There is an impact on performance and a time lag if a hyper-converged server suffers a catastrophic failure, but
VSAN can recover quite quickly. However, this situation is best avoided, and it is important to build in
redundancy at multiple levels for all mission-critical systems.
4.4.2 Intel Xeon E5-2600 v3 processor family servers with Atlantis USX
Atlantis USX is tested by using the knowledge worker workload of Login VSI 4.1. Four Lenovo x3650 M5
servers with E5-2680v3 processors were networked together by using a 10 GbE TOR switch. Atlantis USX was
installed and four 400 GB SSDs per server were used to create an all-flash hyper-converged volume across
the four servers that were running ESXi 5.5 U2.
This configuration was tested with 500 dedicated virtual desktops on four servers and then three servers to see
the difference if one server is unavailable. Table 29 lists the processor usage for the recommended number of
users.
Table 29: Processor usage for Atlantis USX
Processor | Workload | Servers | Utilization
Two E5-2680 v3 | Knowledge worker | 4 | 66%
Two E5-2680 v3 | Knowledge worker | 3 | 89%
From these measurements, Lenovo recommends 125 users per server in normal mode and 150 users per
server in failover mode. Lenovo recommends a general failover ratio of 5:1.
Table 30 lists the recommended number of virtual desktops per server for different VM memory sizes.
Table 30: Recommended number of virtual desktops per server for Atlantis USX
Processor | VM memory size | System memory | Reserved memory | Memory for desktop VMs | Desktops (normal mode) | Desktops (failover mode)
E5-2680 v3 | 2 GB (default) | 384 GB | 63 GB | 321 GB | 125 | 150
E5-2680 v3 | 3 GB | 512 GB | 63 GB | 449 GB | 125 | 150
E5-2680 v3 | 4 GB | 768 GB | 63 GB | 705 GB | 125 | 150
Table 31 lists the approximate number of compute servers that are needed for different numbers of users and
VM sizes.
Table 31: Compute servers needed for different numbers of users for Atlantis USX
Desktop memory size | 300 users | 600 users | 1500 users | 3000 users
Compute servers (normal mode) |  |  | 12 | 24
Compute servers (failover mode) |  |  | 10 | 20
Failover ratio | 3:1 | 4:1 | 5:1 | 5:1
An important part of a hyper-converged system is the resiliency to failures when a compute server is
unavailable. Login VSI was run and then the compute server was powered off. This process was done during
the steady state phase. For the steady state phase, 114 - 120 VMs were migrated from the failed server to the
other three servers with each server gaining 38 - 40 VMs.
Figure 9 shows the processor usage for the four servers during the steady state phase and when one of the
servers is powered off. The processor spike for the three remaining servers is noticeable.
The following graphics acceleration options are available:
• Dedicated GPU with one GPU per user, which is called virtual dedicated graphics acceleration (vDGA) mode.
• Shared GPU with users sharing a GPU, which is called virtual shared graphics acceleration (vSGA) mode; this mode is not recommended because of user contention for shared use of the GPU.
• GPU hardware virtualization (vGPU) that partitions each GPU for 1 - 8 users. This option requires Horizon 6.1 and is not considered in this release of the Reference Architecture.
VMware also provides software emulation of a GPU, which can be processor-intensive and disruptive to other
users who have a choppy experience because of reduced processor performance. Software emulation is not
recommended for any user who requires graphics acceleration.
The performance of graphics acceleration was tested on the NVIDIA GRID K1 and GRID K2 adapters by using
the Lenovo System x3650 M5 server and the Lenovo NeXtScale nx360 M5 server. Each of these servers
supports up to two GRID adapters. No significant performance differences were found between these two
servers when used for graphics acceleration and the results apply to both.
Because the vDGA option offers a low user density (8 for GRID K1 and 4 for GRID K2), it is recommended that
this configuration is used only for power users, designers, engineers, or scientists that require powerful
graphics acceleration. Horizon 6.1 is needed to support higher user densities of up to 64 users per server with
two GRID K1 adapters by using the hardware virtualization of the GPU (vGPU mode).
Lenovo recommends that a high powered CPU, such as the E5-2680v3, is used for vDGA and vGPU because
accelerated graphics tends to put an extra load on the processor. For the vDGA option, with only four or eight
users per server, 128 GB of server memory should be sufficient even for the high end GRID K2 users who
might need 16 GB or even 24 GB per VM.
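The densities above follow from the number of physical GPUs per adapter; the sketch below assumes four GPUs per GRID K1 board and two per GRID K2 board, and treats the 8-users-per-GPU vGPU profile as the upper bound mentioned in the text.

```python
def gpu_users_per_server(adapters, gpus_per_adapter, users_per_gpu=1):
    """User density for GPU acceleration; vDGA assigns one user per GPU."""
    return adapters * gpus_per_adapter * users_per_gpu

print("GRID K1 vDGA:", gpu_users_per_server(2, 4), "users per server")                    # 8
print("GRID K2 vDGA:", gpu_users_per_server(2, 2), "users per server")                    # 4
print("GRID K1 vGPU:", gpu_users_per_server(2, 4, users_per_gpu=8), "users per server")   # up to 64
```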
The Heaven benchmark is used to measure the per user frame rate for different GPUs, resolutions, and image
quality. This benchmark is graphics-heavy and is fairly realistic for designers and engineers. Power users or
knowledge workers usually have less intense graphics workloads and can achieve higher frame rates.
Table 32 lists the results of the Heaven benchmark as frames per second (FPS) that are available to each user
with the GRID K1 adapter by using vDGA mode with DirectX 11.
Table 32: Performance of GRID K1 vDGA mode by using DirectX 11
Quality | Tessellation | Anti-Aliasing | Resolution | FPS
High | Normal |  | 1024x768 | 15.8
High | Normal |  | 1280x768 | 13.1
High | Normal |  | 1280x1024 | 11.1
Table 33 lists the results of the Heaven benchmark as FPS that is available to each user with the GRID K2
adapter by using vDGA mode with DirectX 11.
Table 33: Performance of GRID K2 vDGA mode by using DirectX 11
Quality | Tessellation | Anti-Aliasing | Resolution | FPS
Ultra | Extreme |  | 1680x1050 | 28.4
Ultra | Extreme |  | 1920x1080 | 24.9
Ultra | Extreme |  | 1920x1200 | 22.9
Ultra | Extreme |  | 2560x1600 | 13.8
The GRID K2 GPU has more than twice the performance of the GRID K1 GPU, even with the higher quality,
tessellation, and anti-aliasing settings. This result is expected because of the relative performance
characteristics of the GRID K1 and GRID K2 GPUs. The frame rate decreases as the display resolution
increases.
Because there are many variables when graphics acceleration is used, Lenovo recommends that testing is
done in the customer environment to verify the performance for the required user workloads.
For more information about the bill of materials (BOM) for GRID K1 and K2 GPUs for Lenovo System x3650
M5 and NeXtScale nx360 M5 servers, see the BOM for enterprise and SMB compute servers section on page 45.
Table 34: Management VM requirements
Management service VM | Virtual processors | System memory | Storage needed | Windows OS | HA | Performance characteristic
vCenter Server |  | 4 GB | 15 GB | 2008 R2 | Yes | Up to 2000 VMs.
vCenter SQL Server |  | 4 GB | 15 GB | 2008 R2 | Yes | More processors and memory are needed for more than 2500 users.
View Connection Server | 4 | 10 GB | 40 GB | 2008 R2 | Yes | Up to 2000 connections.
Table 35 lists the number of management VMs for each size of users following the high-availability and
performance characteristics. The number of vCenter servers is half of the number of vCenter clusters because
each vCenter server can handle two clusters of up to 1000 desktops.
Table 35: Management VMs needed
Horizon management service VM | 600 users | 1500 users | 4500 users | 10000 users
vCenter servers | 2 (1+1) | 2 (1+1) | 4 (3+1) | 7 (5+2)
 | 2 (1+1) | 2 (1+1) | 2 (1+1) | 2 (1+1)
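A sketch of the vCenter sizing rule stated above follows; HA spares (the "+1" and "+2" figures in Table 35) are added on top of these counts, and the function and parameter names are illustrative.

```python
import math

def vcenter_servers(desktops, desktops_per_cluster=1000, clusters_per_vcenter=2):
    """vCenter clusters and servers before HA spares are added."""
    clusters = math.ceil(desktops / desktops_per_cluster)
    return clusters, math.ceil(clusters / clusters_per_vcenter)

for users in (600, 1500, 4500, 10000):
    clusters, vcenters = vcenter_servers(users)
    print(f"{users:6d} desktops: {clusters:2d} clusters, {vcenters} vCenter server(s) plus spares")
```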
Each management VM requires a certain amount of virtual processors, memory, and disk. There is enough
capacity in the management servers for all of these VMs. Table 36 lists an example mapping of the
management VMs to the four physical management servers for 4500 users.
Table 36: Management server VM mapping (4500 users)
Management service for 4500 stateless users | Management server 1 | Management server 2 | Management server 3 | Management server 4
It is assumed that common services, such as Microsoft Active Directory, Dynamic Host Configuration Protocol
(DHCP), domain name server (DNS), and Microsoft licensing servers exist in the customer environment.
For shared storage systems that support block data transfers only, it is also necessary to provide some file I/O
servers that support CIFS or NFS shares and translate file requests to the block storage system. For high
availability, two or more Windows storage servers are clustered.
Based on the number and type of desktops, Table 37 lists the recommended number of physical management
servers. In all cases, there is redundancy in the physical management servers and the management VMs.
Table 37: Management servers needed
Management servers | 600 users | 1500 users | 4500 users | 10000 users
For more information, see BOM for enterprise and SMB management servers on page 57.
Table 38 summarizes the peak IOPS and disk space requirements for stateless virtual desktops on a per-user basis.
Table 38: Stateless virtual desktop shared storage performance requirements
Stateless virtual desktops | Protocol | Size | IOPS | Write %
vSwap (recommended to be disabled) | NFS or Block |  |  | 
User data files | CIFS/NFS | 5 GB |  | 75%
User profiles (Microsoft Roaming Profiles) | CIFS | 100 MB | 0.8 | 75%
Table 39 summarizes the peak IOPS and disk space requirements for dedicated or shared stateless virtual
desktops on a per-user basis. Persistent virtual desktops require a high number of IOPS and a large amount of
disk space for the VMware linked clones. Note that the linked clones also can grow in size over time. Stateless
users that require mobility and have no local SSDs also fall into this category. The last three rows of Table 39
are the same as in Table 38 for stateless desktops.
Table 39: Dedicated or shared stateless virtual desktop shared storage performance requirements
Dedicated virtual desktops | Protocol | Size | IOPS | Write %
Replica | Block/NFS | 30 GB |  | 
Linked clones | Block/NFS | 10 GB | 18 | 85%
vSwap (recommended to be disabled) | NFS or Block |  |  | 
User data files | CIFS/NFS | 5 GB |  | 75%
User profiles (Microsoft Roaming Profiles) | CIFS | 100 MB | 0.8 | 75%
The sizes and IOPS for user data files and user profiles that are listed in Table 38 and Table 39 can vary
depending on the customer environment. For example, power users might require 10 GB and five IOPS for
user files because of the applications they use. It is assumed that 100% of the users at peak load times require
concurrent access to user data files and profiles.
Many customers need a hybrid environment of stateless and dedicated desktops for their users. The IOPS for
dedicated users outweigh those for stateless users; therefore, it is best to bias towards dedicated users in any
storage controller configuration.
The storage configurations that are presented in this section include conservative assumptions about the VM
size, changes to the VM, and user data sizes to ensure that the configurations can cope with the most
demanding user scenarios.
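The per-user figures in Tables 38 and 39 can be aggregated as follows to get a first-order storage requirement; the per-user values below are examples, and the 1 IOPS for user data files is an assumption for a cell that the tables leave unspecified, so they should be replaced with the figures for the actual user mix.

```python
def storage_totals(users, per_user_items):
    """Aggregate per-user IOPS and capacity for one desktop type."""
    total_iops = sum(users * item["iops"] for item in per_user_items)
    total_gb = sum(users * item["size_gb"] for item in per_user_items)
    return total_iops, total_gb

dedicated = [
    {"name": "linked clone", "size_gb": 10, "iops": 18},     # from Table 39
    {"name": "user data files", "size_gb": 5, "iops": 1},    # IOPS assumed
    {"name": "user profile", "size_gb": 0.1, "iops": 0.8},
]
iops, capacity = storage_totals(300, dedicated)
print(f"300 dedicated users: about {iops:,.0f} IOPS and {capacity:,.0f} GB, plus replicas")
```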
This reference architecture describes the following different shared storage solutions:
• Block I/O to IBM Storwize V7000 / Storwize V3700 storage using Fibre Channel (FC)
• Block I/O to IBM Storwize V7000 / Storwize V3700 storage using FC over Ethernet (FCoE)
• Block I/O to IBM Storwize V7000 / Storwize V3700 storage using Internet Small Computer System Interface (iSCSI)
• Block I/O to IBM FlashSystem 840 with Atlantis ILIO storage acceleration
1500 users
4500 users
10000 users
Image size
30 GB
30 GB
30 GB
30 GB
16
120 GB
240 GB
480 GB
960 GB
RAID 1 (2)
RAID 1 (2)
Two RAID 1
Four RAID 1
arrays (4)
arrays (8)
Table 41 lists the Storwize storage configuration that is needed for each of the stateless user counts. Only one
Storwize control enclosure is needed for a range of user counts. Based on the assumptions in Table 41, the
IBM Storwize V3700 storage system can support up to 7000 users only.
Table 41: Storwize storage configuration for stateless users
Stateless storage
600 users
1500 users
4500 users
10000 users
12
28
80
168
12
Table 42 lists the Storwize storage configuration that is needed for each of the dedicated or shared stateless
user counts. The top four rows of Table 42 are the same as for stateless desktops. Lenovo recommends
clustering the IBM Storwize V7000 storage system and the use of a separate control enclosure for every 2500
or so dedicated virtual desktops. For the 4500 and 10000 user solutions, the drives are divided equally across
all of the controllers. Based on the assumptions in Table 42, the IBM Storwize V3700 storage system can
support up to 1200 users.
Table 42: Storwize storage configuration for dedicated or shared stateless users
Dedicated or shared stateless storage
600 users
1500 users
4500 users
10000 users
12
28
80
168
12
40
104
304
672
12
12
32
64
16 (2 x 8)
36 (4 x 9)
desktops
Refer to the BOM for shared storage on page 61 for more details.
1000 users
3000 users
5000 users
10000 users
2 TB flash module
12
4 TB flash module
12
Capacity
4 TB
12 TB
20 TB
40 TB
Refer to the BOM for OEM storage hardware on page 65 for more details.
4.8 Networking
The main driver for the type of networking that is needed for VDI is the connection to shared storage. If the
shared storage is block-based (such as the IBM Storwize V7000), it is likely that a SAN that is based on 8 or
16 Gbps FC, 10 GbE FCoE, or 10 GbE iSCSI connection is needed. Other types of storage can be network
attached by using 1 Gb or 10 Gb Ethernet.
Also, user and management virtual local area networks (VLANs) are needed that require 1 Gb or 10 Gb
Ethernet, as described in the Lenovo Client Virtualization reference architecture, which is available at this
website: lenovopress.com/tips1275.
Automated failover and redundancy of the entire network infrastructure and shared storage is important. This
failover and redundancy is achieved by having at least two of everything and ensuring that there are dual paths
between the compute servers, management servers, and shared storage.
If only a single Flex System Enterprise Chassis is used, the chassis switches are sufficient and no other TOR
switch is needed. For rack servers or more than one Flex System Enterprise Chassis, TOR switches are required.
For more information, see BOM for networking on page 63.
Table 48 shows the number of 1 GbE connections that are needed for the administration network and switches
for each type of device. The total number of connections is the sum over all device types of the device count
multiplied by the number of connections for each device, as shown in the sketch after Table 48.
Table 48: 1 GbE connections needed
Device
TOR switches
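As a minimal sketch of the administration-network calculation (the device names and counts below are placeholders; substitute the values from Table 48):

```python
def admin_connections(devices):
    """Total 1 GbE administration connections: sum of count x connections per device."""
    return sum(count * per_device for count, per_device in devices)

example_devices = [
    (30, 1),   # compute servers, one management connection each (assumed)
    (4, 1),    # management servers
    (2, 1),    # shared storage controllers
]
print(admin_connections(example_devices), "x 1 GbE connections")
```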
4.9 Racks
The number of racks and chassis for Flex System compute nodes depends upon the precise configuration that
is supported and the total height of all of the component parts: servers, storage, networking switches, and Flex
System Enterprise Chassis (if applicable). The number of racks for System x servers is also dependent on the
total height of all of the components. For more information, see the BOM for racks section on page 64.
The figure shows external clients that connect through the F5 BIG-IP APM in the DMZ, which provides SSL decryption, authentication, high availability, and PCoIP proxying (TCP 80, TCP 443, and TCP/UDP 4172) to the View Connection Servers and vCenter pools, while internal clients connect directly.
Internal users can also be secured by directing all traffic through APM. Alternatively, trusted internal users can directly
use VMware connection servers.
Various deployment models are described in the F5 BIG-IP deployment guide. For more information, see
Deploying F5 with VMware View and Horizon, which is available from this website:
f5.com/pdf/deployment-guides/vmware-view5-iapp-dg.pdf
For this reference architecture, the BIG-IP APM was tested to determine if it introduced any performance
degradation because of the added functionality of authentication, high availability, and proxy serving. The
BIG-IP APM also includes facilities to improve the performance of the PCoIP protocol.
External clients often are connected over a relatively slow wide area network (WAN). To reduce the effects of a
slow network connection, the external clients were connected by using a 10 GbE local area network (LAN).
Table 49 shows the results with and without the F5 BIG-IP APM by using LoginVSI against a single compute
server. The results show that APM can slightly increase the throughput. Testing was not done to determine the
performance with many thousands of simultaneous users because this scenario is highly dependent on a
customer's environment and network configuration.
Table 49: Performance comparison of using F5 BIG-IP APM
Without F5 BIG-IP | With F5 BIG-IP
208 users | 218 users
4.11.1 Deployment example 1: Flex Solution with single Flex System chassis
As shown in Table 50, this example is for 1250 stateless users that are using a single Flex System chassis.
There are 10 compute nodes supporting 125 users in normal mode and 156 users in the failover case of up to
two nodes not being available. The IBM Storwize V7000 storage is connected by using FC directly to the Flex
System chassis.
Table 50: Deployment configuration for 1250 stateless users with Flex System x240 Compute Nodes
Stateless virtual desktops | 1250 users
Compute nodes | 10
Total height | 14U
The chassis bays hold the compute nodes, two management nodes, and two Windows storage servers (WSS).
4.11.2 Deployment example 2: Flex System with 4500 stateless users
This example is for 4500 stateless users with three Flex System chassis and Storwize V7000 shared storage:
Compute servers | 36
Management servers | 4
V7000 Storwize controller | 1
V7000 Storwize expansion | 3
Flex System EN4093R switches | 6
Flex System FC3171 switches | 6
Flex System Enterprise Chassis | 3
10 GbE network switches | 2 x G8264R
1 GbE network switches | 2 x G8052
SAN network switches | 2 x SAN24B-5
Total height | 44U
Number of racks | 2
Figure 11 shows the deployment diagram for this configuration. The first rack contains the compute and
management servers and the second rack contains the shared storage.
The management servers M1 to M4 host the vCenter Servers, View Connection Servers, and vCenter SQL Servers, and each compute server hosts 125 user VMs. The TOR switches are 2 x G8052, 2 x SAN24B-5, and 2 x G8124.
Figure 11: Deployment diagram for 4500 stateless users using Storwize V7000 shared storage
Figure 12 shows the 10 GbE and Fibre Channel networking that is required to connect the three Flex System
Enterprise Chassis to the Storwize V7000 shared storage. The detail is shown for one chassis in the middle
and abbreviated for the other two chassis. The 1 GbE management infrastructure network is not shown for the
purpose of clarity.
Redundant 10 GbE networking is provided at the chassis level with two EN4093R switches and at the rack
level by using two G8264R TOR switches. Redundant SAN networking is also used with two FC3171 switches
and two top of rack SAN24B-5 switches. The two controllers in the Storwize V7000 are redundantly connected
to each of the SAN24B-5 switches.
Figure 12: Network diagram for 4500 stateless users using Storwize V7000 shared storage
4.11.3 Deployment example 3: System x server with Storwize V7000 and FCoE
This deployment example is derived from an actual customer deployment with 3000 users, 90% of which are
stateless and need a 2 GB VM. The remaining 10% (300 users) need a dedicated VM of 3 GB. Therefore, the
average VM size is 2.1 GB.
Assuming 125 users per server in the normal case and 150 users in the failover case, 3000 users need 24 compute servers. A maximum of four compute servers can be down, which gives a 5:1 failover ratio. Each compute server needs at least 315 GB of RAM (150 x 2.1 GB), not including the hypervisor. This figure is rounded up to 384 GB, which provides headroom and can cope with up to 125 users that all have 3 GB VMs.
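As a cross-check of the sizing above, the following minimal sketch (Python; the variable names are illustrative and not taken from this document) reproduces the average VM size, compute server count, and per-server memory arithmetic:

# Illustrative sizing cross-check for the 3000-user example (values from the text above).
import math

users_total = 3000
avg_vm_gb = round(0.90 * 2 + 0.10 * 3, 2)   # 90% stateless 2 GB VMs, 10% persistent 3 GB VMs -> 2.1 GB

users_per_server_normal = 125
users_per_server_failover = 150

servers_needed = math.ceil(users_total / users_per_server_normal)                            # 24 servers
servers_allowed_down = servers_needed - math.ceil(users_total / users_per_server_failover)   # 4 (a 5:1 ratio)

ram_per_server_gb = users_per_server_failover * avg_vm_gb   # ~315 GB, not including the hypervisor
print(f"{avg_vm_gb:.1f} GB average VM, {servers_needed} servers, "
      f"{servers_allowed_down} may be down, {ram_per_server_gb:.0f} GB RAM per server")

The 315 GB result is then rounded up to the 384 GB memory configuration that is used in this example.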
Each compute server is a System x3550 server with two Intel Xeon E5-2650 v2 processors, 24 x 16 GB DIMMs of 1866 MHz RAM (384 GB total), an embedded dual-port 10 GbE virtual fabric adapter (A4MC), and a license for FCoE/iSCSI (A2TE). For interchangeability between the servers, all of them have a RAID controller with a 1 GB flash upgrade and two S3700 400 GB MLC enterprise SSDs that are configured as RAID 0 for the stateless VMs.
In addition, there are three management servers. For interchangeability in case of server failure, these extra servers are configured in the same way as the compute servers. All of the servers have a USB key with ESXi 5.5.
There are also two Windows storage servers that are configured differently, with HDDs in a RAID 1 array for the operating system. Some spare, preloaded drives are kept so that a replacement Windows storage server can be deployed quickly if one should fail; the replacement server can be one of the compute servers. The idea is to get a replacement online quickly before the second Windows storage server can also fail. Although there is a low likelihood of this situation occurring, this approach reduces the window of failure for the two critical Windows storage servers.
All of the servers communicate with the Storwize V7000 shared storage by using FCoE through two TOR
RackSwitch G8264CS 10GbE converged switches. All 10 GbE and FC connections are configured to be fully
redundant. As an alternative, iSCSI with G8264 10GbE switches can be used.
For 300 persistent users and 2700 stateless users, a mixture of disk configurations is needed. All of the users require space for user folders and profile data. Stateless users need space for master images, and persistent users need space for the virtual clones. Stateless users have local SSDs to cache everything else, which substantially decreases the amount of shared storage that is needed. Because the stateless servers use local SSDs, vMotion cannot be used; such a server can be taken offline for maintenance only after all of its users are logged off. If a server crashes, this limitation is immaterial.
It is estimated that this configuration requires 96 IBM Storwize V7000 drives in total, which fit into one Storwize V7000 control enclosure and three expansion enclosures.
Figure 13 shows the deployment configuration for this example in a single rack. Because the rack has 36 items, it should have the capability for six power distribution units for 1+1 power redundancy, where each PDU has 12 C13 sockets.
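The PDU count follows from the number of power feeds to be accommodated in the rack. A minimal sketch of that arithmetic is shown below (Python; it assumes two power feeds per item for 1+1 redundancy, which the text implies but does not state explicitly):

# Illustrative PDU socket count for the single-rack example.
import math

rack_items = 36        # servers, storage enclosures, and switches in the rack
feeds_per_item = 2     # assumption: two power feeds per item for 1+1 redundancy
sockets_per_pdu = 12   # C13 sockets per PDU

total_feeds = rack_items * feeds_per_item                  # 72 power feeds
pdus_needed = math.ceil(total_feeds / sockets_per_pdu)     # 6 PDUs
print(f"{total_feeds} power feeds -> {pdus_needed} PDUs with {sockets_per_pdu} C13 sockets each")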
Figure 13: Deployment configuration for 3000 stateless users with System x servers
Description
9532AC1
A5SX
A5TE
A5RM
A5S0
A5SG
Quantity
1
1
1
1
1
1
1
1
1
1
1
1
2
24
16
24
1
1
System x3550 M5
Code
Description
5463AC1
A5BL
A5C0
A58X
A59V
A5AG
A5AX
System x3550 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3550 M5 8x 2.5" Base Chassis
System x3550 M5 Planar
System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x Advanced LCD Light path Kit
6311
A1ML
A5AB
A5AK
A59F
A59W
A3YZ
A3Z2
A4TR
A5UT
A5AF
A5AG
Quantity
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
2
1
1
1
1
1
1
2
1
2
1
1
1
1
2
24
16
24
1
System x3650 M5
Code
Description
5462AC1
A5GW
A5EP
A5FD
A5EA
A5FN
A5R5
System x3650 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3650 M5 2.5" Base without Power Supply
System x3650 M5 Planar
System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x3650 M5 2.5" ODD/LCD Light Path Bay
System x3650 M5 2.5" ODD Bezel with LCD Light Path
Lightpath LCD Op Panel
System Documentation and Software-US English
x3650 M5 8x 2.5" HS HDD Assembly Kit (Single RAID)
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade
A5AX
6311
A1ML
A5FY
A5G3
A4VH
A5EY
A5G6
A3YZ
A3Z2
A4TR
300GB 15K 6Gbps SAS 2.5" G3HS HDD
A5UT
Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x
9297
2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD)
A5UV
Brocade 8Gb FC Dual-port HBA for System x
3591
7595
2U Bracket for Brocade 8GB FC Dual-port HBA for System x
88Y6854 5m LC-LC fiber cable (networking)
Brocade 16Gb FC Dual-port HBA for System x
A2XV
88Y6854 5m LC-LC fiber cable (networking)
Select SSD storage for stateless virtual desktops
5978
Select Storage devices - Lenovo-configured RAID
2302
RAID configuration
A2K6
Primary Array - RAID 0 (2 drives required)
2499
Enable selection of Solid State Drives for Primary Array
A4U4
S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x
Select amount of system memory
A5B7
16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
A5B9
32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM
A5B9
32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM
Select flash memory for ESXi hypervisor
A5R7
32GB Enterprise Value USB Memory Key
Quantity
1
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
1
2
1
1
1
1
1
2
1
2
1
1
1
1
2
24
16
24
1
NeXtScale nx360 M5
Code
Description
5465AC1
A5HH
A5J0
A5JU
A5JX
A1MK
A1ML
Quantity
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
1
2
2
24
16
1
Description
5456HC1
A41D
A4MM
6201
A42S
A4AK
Quantity
1
1
6
6
1
1
Description
5465AC1
A5HH
A5J0
A5JU
A4MB
A5JX
A1MK
Quantity
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
1
2
2
24
16
2
2
1
Description
Quantity
1
1
1
1
1
1
1
1
1
1
8
16
1
1
System x3550 M5
Code
Description
5463AC1
A5BL
A5C0
A58X
A59V
A5AG
A5AX
System x3550 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3550 M5 8x 2.5" Base Chassis
System x3550 M5 Planar
System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x Advanced LCD Light path Kit
6311
A1ML
A5AB
A5AK
A59F
A59W
A3YZ
A3Z2
A5UT
A5AF
A5AG
Quantity
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
1
1
1
1
1
1
2
1
2
8
16
1
System x3650 M5
Code
Description
5462AC1
A5GW
A5EP
A5FD
A5EA
A5FN
A5R5
System x3650 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3650 M5 2.5" Base without Power Supply
System x3650 M5 Planar
System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x3650 M5 2.5" ODD/LCD Light Path Bay
System x3650 M5 2.5" ODD Bezel with LCD Light Path
Lightpath LCD Op Panel
System Documentation and Software-US English
x3650 M5 8x 2.5" HS HDD Assembly Kit (Single RAID)
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade
A5AX
6311
A1ML
A5FY
A5G3
A4VH
A5EY
A5G6
A3YZ
A3Z2
A5UT
9297
Quantity
1
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
1
1
1
1
1
1
2
1
2
8
16
1
NeXtScale nx360 M5
Code
Description
5465AC1
A5HH
A5J0
A5JU
A5JX
A1MK
A1ML
Quantity
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
1
2
1
8
16
1
Description
5456HC1
A41D
A4MM
6201
A42S
1
1
6
6
1
A4AK
Quantity
System x3550 M5
Code
Description
5463AC1
A5BL
A5C0
A58X
A59V
A5AG
A5AX
6311
A1ML
A5AB
A5AK
A59F
A59W
A59X
A3YZ
A3Z2
5977
A5UT
A5AF
A5AG
System x3550 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3550 M5 8x 2.5" Base Chassis
System x3550 M5 Planar
System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x Advanced LCD Light path Kit
System x3550 M5 Slide Kit G4
System Documentation and Software-US English
System x3550 M5 4x 2.5" HS HDD Kit
System x3550 M5 4x 2.5" HS HDD Kit PLUS
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade
Quantity
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
1
1
1
1
1
24
16
24
4
2
6
1
1
System x3650 M5
Code
Description
5462AC1
A5GW
A5EP
A5FD
A5EA
A5FN
A5R5
System x3650 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3650 M5 2.5" Base without Power Supply
System x3650 M5 Planar
System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x 900W High Efficiency Platinum AC Power Supply
2.8m, 13A/125-10A/250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x3650 M5 2.5" ODD/LCD Light Path Bay
System x3650 M5 2.5" ODD Bezel with LCD Light Path
Lightpath LCD Op Panel
System x Enterprise Slides Kit
System Documentation and Software-US English
x3650 M5 16x 2.5" HS HDD Assembly Kit (Dual RAID)
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade
A5EW
6400
A1ML
A5FY
A5G3
A4VH
A5FV
A5EY
A5GG
A3YZ
A3Z2
5977
A5UT
9297
Quantity
1
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
2
2
1
1
1
24
16
24
4
2
14
1
1
Description
5462AC1
A5GW
A5EP
A5FD
A5EA
A5FN
A5R5
System x3650 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3650 M5 2.5" Base without Power Supply
System x3650 M5 Planar
System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x 900W High Efficiency Platinum AC Power Supply
2.8m, 13A/125-10A/250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x3650 M5 2.5" ODD/LCD Light Path Bay
System x3650 M5 2.5" ODD Bezel with LCD Light Path
Lightpath LCD Op Panel
System x Enterprise Slides Kit
System Documentation and Software-US English
x3650 M5 16x 2.5" HS HDD Assembly Kit (Dual RAID)
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade
A5EW
6400
A1ML
A5FY
A5G3
A4VH
A5FV
A5EY
A5GG
A3YZ
A3Z2
5977
A5UT
9297
A5B9
32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM
Select drive configuration for hyper-converged system (all flash or SDD/HDD combination)
A4U4
S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x
A4U4
S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x
A4TP
1.2TB 10K 6Gbps SAS 2.5" G3HS HDD
2498
Install largest capacity, faster drives starting in Array 1
Select GRID K1 or GRID K2 for graphics acceleration
AS3G
NVIDIA Grid K1 (Actively Cooled)
A470
NVIDIA Grid K2 (Actively Cooled)
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5R7
32GB Enterprise Value USB Memory Key
Quantity
1
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
2
2
1
1
1
12
4
2
14
1
2
2
1
Description
9532AC1
A5SX
A5TE
A5RM
A5S0
A5SG
5978
8039
A4TR
A5B8
A5RP
Quantity
1
1
1
1
1
1
1
1
1
8
1
1
1
1
System x3550 M5
Code
Description
5463AC1
A5BL
A5C0
A58X
A59V
A5AG
A5AX
System x3550 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3550 M5 8x 2.5" Base Chassis
System x3550 M5 Planar
System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x Advanced LCD Light path Kit
6311
A1ML
A5AB
A5AK
A59F
A59W
A3YZ
A3Z2
5978
A2K7
A4TR
A5B8
A5UT
A5AF
A5AG
System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD)
A5UV
Brocade 8Gb FC Dual-port HBA for System x
3591
7595
2U Bracket for Brocade 8GB FC Dual-port HBA for System x
88Y6854 5m LC-LC fiber cable (networking)
Brocade 16Gb FC Dual-port HBA for System x
A2XV
88Y6854 5m LC-LC fiber cable (networking)
Quantity
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
1
1
1
8
1
1
1
1
1
1
2
1
2
System x3650 M5
Code
Description
5462AC1
A5GW
A5EP
A5FD
A5EA
A5FN
A5R5
System x3650 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3650 M5 2.5" Base without Power Supply
System x3650 M5 Planar
System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x3650 M5 2.5" ODD/LCD Light Path Bay
System x3650 M5 2.5" ODD Bezel with LCD Light Path
Lightpath LCD Op Panel
System Documentation and Software-US English
x3650 M5 8x 2.5" HS HDD Assembly Kit (Single RAID)
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade
A5AX
6311
A1ML
A5FY
A5G3
A4VH
A5EY
A5G6
A3YZ
A3Z2
5978
A2K7
A4TR
A5B8
A5UT
Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x
9297
2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD)
A5UV
Brocade 8Gb FC Dual-port HBA for System x
3591
7595
2U Bracket for Brocade 8GB FC Dual-port HBA for System x
88Y6854 5m LC-LC fiber cable (networking)
Brocade 16Gb FC Dual-port HBA for System x
A2XV
88Y6854 5m LC-LC fiber cable (networking)
Quantity
1
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
1
1
1
1
8
1
1
1
1
1
2
1
2
NeXtScale nx360 M5
Code
Description
5465AC1
A5HH
A5J0
A5JU
A5JX
A1MK
A1ML
A5KD
A5JF
5978
A2K7
ASBZ
A5JZ
nx360 M5 RAID Riser
A5V2
nx360 M5 2.5" Rear Drive Cage
A5K3
nx360 M5 1x2, 2.5" 12G HDD short cable, HW RAID (stack-up)
A5B8
8GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
A40Q
Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter for System x
A5UX
nx360 M5 ML2 Bracket for Emulex VFA5 ML2 Dual Port 10GbE SFP+
A5JV
nx360 M5 ML2 Riser
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A4NZ
Emulex VFA5 ML2 FCoE/iSCSI License for System x (FoD)
Brocade 8Gb FC Dual-port HBA for System x
3591
88Y6854 5m LC-LC fiber cable (networking)
Brocade 16Gb FC Dual-port HBA for System x
A2XV
88Y6854 5m LC-LC fiber cable (networking)
Quantity
1
1
1
1
1
1
1
1
1
1
1
2
1
1
1
8
1
1
1
1
1
2
1
2
Description
5456HC1
A41D
A4MM
6201
A42S
A4AK
Quantity
1
1
6
6
1
1
Description
619524F
AHE1
AHE2
AHF9
AHH2
AS26
Quantity
1
20
0
0
4
1
Code      Description                                      Quantity
6195524   IBM Storwize V7000 Disk Control Enclosure        1
AHE1      300 GB 2.5-inch 15K RPM SAS HDD                  20
AHE2      600 GB 2.5-inch 15K RPM SAS HDD                  0
AHF9      600 GB 10K 2.5-inch HDD                          0
AHH2      400 GB 2.5-inch SSD (E-MLC)                      4
AHCB      64 GB to 128 GB Cache Upgrade                    2
AS26      Power Cord PDU connection                        1
Select network connectivity of 10 GbE iSCSI or 8 Gb FC
AHB5      10Gb Ethernet 4 port Adapter Cards (Pair)        1
AHB1      8Gb 4 port FC Adapter Cards (Pair)               1
Code      Description                                      Quantity
609924C   IBM Storwize V3700 Disk Control Enclosure        1
ACLB      300 GB 2.5-inch 15K RPM SAS HDD                  20
ACLC      600 GB 2.5-inch 15K RPM SAS HDD                  0
ACLK      600 GB 10K 2.5-inch HDD                          0
ACME      400 GB 2.5-inch SSD (E-MLC)                      4
ACHB      Cache 8 GB                                       2
ACFA      Turbo Performance                                1
ACFN      Easy Tier                                        1
Select network connectivity of 10 GbE iSCSI or 8 Gb FC
ACHM      10Gb iSCSI - FCoE 2 Port Host Interface Card     2
ACHK      8Gb FC 4 Port Host Interface Card                2
ACHS      8Gb FC SW SFP Transceivers (Pair)                2
Description
609924E
ACLB
ACLC
ACLK
ACME
Quantity
1
20
0
0
4
RackSwitch G8052
Code
Description
7309HC1
6201
3802
A3KP
Quantity
1
2
3
1
RackSwitch G8124E
Code
Description
7309HC6
6201
3802
A1DK
Quantity
1
2
1
1
RackSwitch G8264
Code
Description
7309HC3
6201
A3KP
5053
A1DP
A1DM
Quantity
1
2
1
2
1
0
RackSwitch G8264CS
Code
Description
7309HCK
6201
A2ME
A1DK
A1DP
A1DM
5075
Quantity
1
2
2
1
1
0
12
Description
9363RC4
5897
Quantity
1
6
System x rack
Code
Description
9363RC4
6012
Quantity
1
6
Description
8721HC1
IBM Flex System Enterprise Chassis Base Model
A0TA
IBM Flex System Enterprise Chassis
A0UC
IBM Flex System Enterprise Chassis 2500W Power Module Standard
6252
2.5 m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable
A1PH
IBM Flex System Enterprise Chassis 2500W Power Module
3803
2.5 m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable
A0UA
IBM Flex System Enterprise Chassis 80mm Fan Module
A0UE
IBM Flex System Chassis Management Module
3803
3 m Blue Cat5e Cable
5053
IBM SFP+ SR Transceiver
A1DP
1 m IBM QSFP+ to QSFP+ Cable
A1PJ
3 m IBM Passive DAC SFP+ Cable
A1NF
IBM Flex System Console Breakout Cable
5075
BladeCenter Chassis Configuration
6756ND0
Rack Installation >1U Component
675686H
IBM Fabric Manager Manufacturing Instruction
Select network connectivity for 10 GbE, 10 GbE FCoE or iSCSI, 8 Gb FC, or 16 Gb FC
A3J6
IBM Flex System Fabric EN4093R 10Gb Scalable Switch
A3HH
IBM Flex System Fabric CN4093 10Gb Scalable Switch
5075
IBM 8 Gb SFP + SW Optical Transceiver
5605
Fiber Cable, 5 meter multimode LC-LC
A0UD
IBM Flex System FC3171 8Gb SAN Switch
5075
IBM 8 Gb SFP + SW Optical Transceiver
5605
Fiber Cable, 5 meter multimode LC-LC
A3DP
IBM Flex System FC5022 16Gb SAN Scalable Switch
A22R
Brocade 16Gb SFP + SW Optical Transceiver
5605
Fiber Cable, 5 meter multimode LC-LC
Quantity
1
1
2
2
4
4
4
1
2
2
2
4
1
4
1
1
2
2
4
4
2
4
4
2
4
4
Description
9840-AE1
IBM FlashSystem 840
AF11
4TB eMLC Flash Module
AF10
2TB eMLC Flash Module
AF1B
1 TB eMLC Flash Module
AF14
Encryption Enablement Pack
Select network connectivity for 10 GbE iSCSI, 10 GbE FCoE, 8 Gb FC, or 16 Gb FC
AF17
iSCSI Host Interface Card
AF1D
10 Gb iSCSI 8 Port Host Optics
AF15
FC/FCoE Host Interface Card
AF1D
10 Gb iSCSI 8 Port Host Optics
AF15
FC/FCoE Host Interface Card
AF18
8 Gb FC 8 Port Host Optics
3701
5 m Fiber Cable (LC-LC)
AF15
FC/FCoE Host Interface Card
AF19
16 Gb FC 4 Port Host Optics
3701
5 m Fiber Cable (LC-LC)
Quantity
1
12
0
0
1
2
2
2
2
2
2
8
2
2
4
Code      Description                                      Quantity
3873HC2   Brocade 6505 FC SAN Switch                       1
00MT457   Brocade 6505 12 Port Software License Pack       1
Select network connectivity for 8 Gb FC or 16 Gb FC
88Y6416   Brocade 8Gb SFP+ Optical Transceiver             24
88Y6393   Brocade 16Gb SFP+ Optical Transceiver            24
Code      Description                                      Quantity
3873HC3   Brocade 6510 FC SAN Switch                       1
00MT459   Brocade 6510 12 Port Software License Pack       2
Select network connectivity for 8 Gb FC or 16 Gb FC
88Y6416   Brocade 8Gb SFP+ Optical Transceiver             48
88Y6393   Brocade 16Gb SFP+ Optical Transceiver            48
Resources
For more information, see the following resources:
VMware vSphere: vmware.com/products/datacenter-virtualization/vsphere
VMware VSAN: vmware.com/products/virtual-san
planning.pdf
F5 BIG-IP: f5.com/products/big-ip
Document history
Version 1.0   30 Jan 2015
Version 1.1   5 May 2015