Click on the Supporting Materials tab to access the Student Resource Guide and course navigation information.
Here is a typical VMware ESX environment. The Virtual Machine clients are accessing Virtual
Machines on the ESX Servers. When the VNXe is introduced, the ESX Datastores are migrated to
the VNXe for the benefits of storage consolidation, management and protection. The clients’
Virtual Machine Datastores now reside on the VNXe.
The VNXe can store VMware ESX Datastores in the same way that a VMware Server can have
locally attached storage. From a client perspective, users see no difference – they can access
their Virtual Machines as if the storage were contained within the ESX Server. In fact, because the
ESX Server connects to the VNXe using the iSCSI protocol, ESX also treats the storage located on
the VNXe as if it were locally attached.
The iSCSI protocol is a block-level protocol, like the SCSI protocol, but instead of being limited to a
locally connected cable, it can be transmitted over network switches and routers. The ESX Server
connects to the VNXe using either an iSCSI hardware initiator (HBA) or the software initiator. Once
configured, the iSCSI initiator establishes a connection with the VNXe. Optionally, this connection
can be secured by using CHAP, the Challenge Handshake Authentication Protocol, to validate each
end of the connection.
Once the iSCSI connection to the VNXe is established and the storage is formatted at the
operating system level (for ESX, with the VMFS file system), ESX can use it to store its data.
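For reference, the same initiator configuration can be scripted. The following is a minimal sketch using the vSphere API through pyVmomi; it is not part of the course demonstration, and the vCenter address, ESX host name, credentials, and VNXe target address are placeholder values.

```python
# Sketch: enable the ESX software iSCSI initiator and point it at a VNXe iSCSI
# Server using the vSphere API (pyVmomi). All names and addresses are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only; skips certificate checks
si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

# Locate the ESX host and its storage system.
host = content.searchIndex.FindByDnsName(dnsName='esx01.example.com', vmSearch=False)
storage = host.configManager.storageSystem

# Enable the software iSCSI initiator if it is not already enabled.
storage.UpdateSoftwareInternetScsiEnabled(True)

# Find the software iSCSI HBA that was just enabled.
iscsi_hba = next(hba for hba in storage.storageDeviceInfo.hostBusAdapter
                 if isinstance(hba, vim.host.InternetScsiHba))

# Add the VNXe iSCSI Server as a Send Targets discovery address.
target = vim.host.InternetScsiHba.SendTarget(address='192.168.50.10', port=3260)
storage.AddInternetScsiSendTargets(iScsiHbaDevice=iscsi_hba.device, targets=[target])

# (Optional) CHAP could be configured here with
# storage.UpdateInternetScsiAuthenticationProperties(...) before rescanning.

# Rescan so ESX discovers the VNXe LUNs over the new connection.
storage.RescanAllHba()
storage.RescanVmfs()

Disconnect(si)
```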
If multiple iSCSI-attached hosts are being configured, it may be helpful to configure the Internet
Storage Name Service (iSNS) on the VNXe. iSNS provides a central location where iSCSI initiators
and targets register and discover one another, which reduces the amount of manual configuration
that must be done on each host.
Alternatively, the ESX Server can access storage that resides on the VNXe using NFS instead of
iSCSI. As with iSCSI, this is transparent to the client.
In the simplest configuration, ESX Servers can connect to storage on the VNXe using the same
network interface cards (NICs) that clients use to access their Virtual Machines. However, as you
can see from this diagram, there is contention for bandwidth on both the ESX Server NICs and the
network overall. To alleviate this potential bottleneck, and to ensure that the ESX Servers can access
the virtual disks in an optimal environment, it is recommended to have at least one additional NIC
in the ESX Server connected to a private network that is used only for VNXe-to-ESX traffic.
By doing this, the load can be distributed across multiple components.
Creating VMware storage is accomplished through Unisphere, the VNXe management interface. The
Create VMware Storage Wizard steps are listed here. Step details will be described on the following
slides, and the entire process will be demonstrated later in the course.
Two types of datastores can be created for VMware: NFS or VMFS (iSCSI); the desired option is
selected here. In this example, NFS is selected. Later in this course, the Creating a VMware VMFS
(iSCSI) Datastore demonstration will show iSCSI. If NFS is chosen and Show Advanced is selected,
deduplication and compression can be enabled.
Deduplication increases file storage efficiency by eliminating redundant data from the files stored
in the file system, thereby saving storage space and money. For each file system, file-level
deduplication gives the storage server the ability to compress files and to share a single instance
of the data for files whose contents are identical. Deduplication operates on whole files and is
applicable to files that are static or nearly static.
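As a purely conceptual illustration (this is not how the VNXe implements the feature), the Python sketch below shows the file-level idea: whole files are hashed, and only files whose contents are byte-for-byte identical are candidates to share a single stored instance. The directory path is a placeholder.

```python
# Conceptual illustration of file-level deduplication: group files by a digest of
# their full contents; only identical files could be collapsed to one instance.
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicate_files(root: str):
    """Group files under 'root' by the SHA-256 digest of their whole contents."""
    groups = defaultdict(list)
    for path in Path(root).rglob('*'):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            groups[digest].append(path)
    # Only groups with more than one member could share a single stored instance.
    return {d: paths for d, paths in groups.items() if len(paths) > 1}

if __name__ == '__main__':
    for digest, paths in find_duplicate_files('/mnt/share').items():   # placeholder path
        print(f"{len(paths)} identical copies could be reduced to one: {paths}")
```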
A Storage Server is a component that is needed to make the VMware datastores available on the
network. The Storage Server can be created during the Initial Configuration Wizard, or at a later
time. Since the VNXe supports both NFS and iSCSI datastores, the Storage Server can be either a
Shared Folder Server or an iSCSI Server.
Storage pools are associated with Storage Servers. A storage pool is a group of disks of similar
type and speed. Your system may only have one storage pool if you have only one type of disk
device. When you allocate storage for application use, you will be asked which storage pool you
would like to use. Depending on your system and its disk configuration, one or two broad
categories of storage pools will be available for new storage: Capacity and Performance. The
Capacity storage pool, made up of NL-SAS drives, provides the largest quantity of storage for a
system at generally lower performance. The Performance storage pool, made up of SAS drives,
provides the best performance for frequent read and write operations but yields fewer total
bytes of storage than the Capacity pool.
When creating VMware storage, you are asked to select the appropriate Storage Pool. In this
example, the system has only one type of drive, SAS, and therefore only one Storage Pool: the
Performance pool. If needed, you can create a custom storage pool to satisfy specific application
requirements.
Virtual Provisioning can be enabled here. Virtual Provisioning allows you to present more storage
capacity than is physically allocated up front; physical storage is consumed only as data is written.
When creating storage, you are prompted to select the protection strategy. The choices are no
protection, configure protection storage but no snapshot schedule, and configure protection
storage and a snapshot schedule.
In this example, Do not configure protection storage is selected since snapshots will be covered
in another course.
A path failover solution is recommended to protect against a single path failure and the resulting
data unavailability. Two solutions are supported:
• PowerPath on ESX 4 or ESXi
• ESX native failover on any ESX host
PowerPath:
Install PowerPath software on the virtual machine if you configured the storage-system iSCSI
connections to the Windows virtual machine with NICs. If you configured the storage-system
connections to the ESX server, install PowerPath software on the ESX server. Do not configure
the storage-system connections to both the virtual machine and ESX server.
PowerPath can be downloaded from the VNXe Online portal. Use the online PowerPath
documentation for instructions.
ESX or ESXi Server native failover:
ESXi and ESX Server contain native failover to manage the I/O paths between server and storage.
To use the ESXi/ESX Server native failover with the VNXe, one of three failover policies must be
configured.
The three native failover policy options are fixed with failover mode, round robin and most
recently used.
Fixed with failover mode:
Fixed with failover mode uses the designated preferred path, if one is configured. Otherwise, it
uses the first working path discovered when the system reboots. If the preferred path is not
available, the software randomly selects the next available path on the SP if that SP owns the
LUNs (datastores). If that SP is unavailable, it selects a path on the secondary SP. The host
automatically reverts to the preferred path as soon as that path becomes available.
Round Robin:
Round Robin determines which host paths are on the SP that owns the datastores, and alternates
through these host paths, issuing IO to each path for a specific period of time before rotating to
the next path. If all paths to the SP that owns the virtual disk are unavailable, the host switches to
paths on the secondary SP. If the original paths subsequently become available, the host will not
automatically switch back to the original paths.
Most Recently Used:
Most recently used selects the path the ESX host used most recently. If this path becomes
unavailable, the host switches to an alternative path and continues to use the new path while it is
available. If the path loss resulted in virtual disks being moved to the secondary SP, then after the
original paths are available again, the affected datastores must be manually restored to the
primary SP to resume the original workload balance.
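For reference, the native policy can also be set programmatically. The sketch below uses the vSphere API (pyVmomi) and the native multipathing identifiers VMW_PSP_FIXED, VMW_PSP_RR, and VMW_PSP_MRU for the Fixed, Round Robin, and Most Recently Used policies; the host names, credentials, and the choice to apply Round Robin to every LUN are illustrative assumptions, not values from this course.

```python
# Sketch: set the native failover (path selection) policy for LUNs on an ESX host
# via the vSphere API. Connection details and policy choice are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password', sslContext=ctx)
host = si.RetrieveContent().searchIndex.FindByDnsName(
    dnsName='esx01.example.com', vmSearch=False)
storage = host.configManager.storageSystem

# Apply Round Robin to every multipathed LUN on this host
# (in practice you would filter to just the VNXe datastores' LUNs).
policy = vim.host.MultipathInfo.LogicalUnitPolicy(policy='VMW_PSP_RR')
for lun in storage.storageDeviceInfo.multipathInfo.lun:
    storage.SetMultipathLunPolicy(lunId=lun.id, policy=policy)

Disconnect(si)
```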
Host access is configured next. Add ESX Host is selected. By specifying the IP address of a vCenter
Server or an ESX host and providing appropriate credentials, the VNXe discovers the host and the
ESX Servers and Virtual Machines it manages.
The VNXe discovers the vCenter Server and lists all of the ESX Servers that it is managing. The
user selects the appropriate ESX Server from the list.
When the Datastore is complete, you have an option to configure replication. In this example,
replication is not configured. Replication is covered in a separate course.
To attach a VNXe Datastore to VMware, a VMkernel port with network access to the Datastore(s)
must be configured. Before creating a VMkernel port, a virtual switch must be created with a NIC
that has access to the VNXe. In this example, the virtual switch and network interface already
exist.
Use the vSphere Client to configure the VMkernel port: select the Configuration tab, then
Networking, then Add Networking, and choose VMkernel as the connection type.
The appropriate virtual switch is selected, the VLAN and IP information is entered and the
Summary is displayed.
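The same VMkernel configuration can be scripted against the vSphere API. The sketch below, using pyVmomi, assumes an existing virtual switch named vSwitch1; the port group name, VLAN ID, IP addressing, and connection details are placeholder values for illustration.

```python
# Sketch: create a VMkernel port group on an existing virtual switch and give it
# a static IP on the private VNXe storage network. All values are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password', sslContext=ctx)
host = si.RetrieveContent().searchIndex.FindByDnsName(
    dnsName='esx01.example.com', vmSearch=False)
network = host.configManager.networkSystem

# Add a port group for storage traffic to the existing vSwitch.
pg_spec = vim.host.PortGroup.Specification(
    name='VNXe-Storage', vlanId=100, vswitchName='vSwitch1',
    policy=vim.host.NetworkPolicy())
network.AddPortGroup(portgrp=pg_spec)

# Create the VMkernel NIC on that port group with a static IP.
ip = vim.host.IpConfig(dhcp=False, ipAddress='192.168.50.21',
                       subnetMask='255.255.255.0')
vnic_spec = vim.host.VirtualNic.Specification(ip=ip)
vmk = network.AddVirtualNic(portgroup='VNXe-Storage', nic=vnic_spec)
print('Created VMkernel interface:', vmk)   # e.g. 'vmk1'

Disconnect(si)
```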
To add the VNXe NFS Datastore to the ESX Server, use the vSphere Client to select Configuration,
Storage, then Add Storage. Select Network File System (NFS). Enter the server IP address, the
folder (export path), and the datastore name.
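For reference, mounting the NFS export can also be done through the vSphere API. The following pyVmomi sketch uses placeholder values for the Shared Folder Server address, export path, and datastore name.

```python
# Sketch: mount a VNXe NFS export as a datastore on an ESX host via the vSphere
# API. The server address, export path, and datastore name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password', sslContext=ctx)
host = si.RetrieveContent().searchIndex.FindByDnsName(
    dnsName='esx01.example.com', vmSearch=False)

nas_spec = vim.host.NasVolume.Specification(
    remoteHost='192.168.50.10',      # VNXe Shared Folder Server IP
    remotePath='/Share2',            # NFS export (folder) on the VNXe
    localPath='Share2',              # datastore name as seen by ESX
    accessMode='readWrite')
datastore = host.configManager.datastoreSystem.CreateNasDatastore(nas_spec)
print('Mounted datastore:', datastore.name)

Disconnect(si)
```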
Once the VNXe VMware Datastore has been created, data can be migrated from the ESX Server to
the VNXe. If this is a new Datastore that will be consumed over time, no data migration is
necessary.
Data can be migrated either while the Virtual Machine is suspended or by using vMotion.
Please refer to the appropriate VMware documentation for details on migration. The example
included in this course uses vMotion to migrate a Virtual Machine from the local VMFS Datastore
on the ESX Server to the VNXe.
The vMotion method is described here. The Virtual Machine can be either powered on or powered
off. If the Virtual Machine is powered on, either the VM can be migrated to another server or the
Virtual Machine files can be moved to another datastore, but not both at once. If the Virtual Machine
is powered off, either or both the VM and its files can be migrated at the same time. For online
vMotion or Storage vMotion, there is little to no disruption to the VM client.
In this example, the Virtual Machine is powered off. This example moves the Datastore from the
local ESX storage to a VNXe NFS Datastore.
The Virtual Machine, the Virtual Machine’s storage, or both can be migrated.
From the ESX Server, right-click on the VM to be migrated and select Migrate. In this example,
Change datastore is selected.
Select the VNXe Datastore to migrate to. In this example, the NFS Datastore named Share2 is
selected.
Next, select the appropriate format; in this example, same format as source is selected.
The migration process is complete and the VM Datastore resides on the VNXe.
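For administrators who prefer scripting, the following pyVmomi sketch is a rough programmatic equivalent of the migration just shown, moving a powered-off VM's files to the VNXe NFS datastore. The VM name, datastore name, and connection details are placeholders.

```python
# Sketch: programmatic 'Change datastore' migration of a powered-off VM to the
# VNXe NFS datastore 'Share2'. All names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

def find_obj(vimtype, name):
    """Return the first managed object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_obj(vim.VirtualMachine, 'MyVM')          # placeholder VM name
target_ds = find_obj(vim.Datastore, 'Share2')      # VNXe NFS datastore

# Relocate the VM's files to the VNXe datastore, keeping the same disk format.
spec = vim.vm.RelocateSpec(datastore=target_ds)
WaitForTask(vm.RelocateVM_Task(spec))
print('Migration complete:', vm.name, 'now resides on', target_ds.name)

Disconnect(si)
```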
This concludes the instructional portion of this training. These are the key points that have been
covered.
Please proceed to take the assessment.