
EMC Atmos Virtual Edition with EMC VNX Series
Deployment Guide
Part Number h8281.2



Copyright © 2011 EMC Corporation. All rights reserved.
Published October 2011
EMC believes the information in this publication is accurate as of its publication date.
The information is subject to change without notice.
The information in this publication is provided "as is." EMC Corporation makes no
representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or
fitness for a particular purpose. Use, copying, and distribution of any EMC software
described in this publication requires an applicable software license.
EMC², EMC, Data Domain, RSA, EMC Centera, EMC ControlCenter, EMC LifeLine, EMC
OnCourse, EMC Proven, EMC Snap, EMC SourceOne, EMC Storage Administrator,
Acartus, Access Logix, AdvantEdge, AlphaStor, ApplicationXtender, ArchiveXtender,
Atmos, Authentica, Authentic Problems, Automated Resource Manager, AutoStart,
AutoSwap, AVALONidm, Avamar, Captiva, C-Clip, Celerra, Celerra Replicator, Centera,
CenterStage, CentraStar, ClaimPack, CLARiiON, ClientPak, Codebook Correlation
Technology, Common Information Model, Configuration Intelligence, Configuresoft,
Connectrix, CopyCross, CopyPoint, CX, Dantz, DatabaseXtender, Data Domain, Direct
Matrix Architecture, DiskXtender, DiskXtender 2000, Document Sciences,
Documentum, eInput, E-Lab, EmailXaminer, EmailXtender, Enginuity, eRoom, Event
Explorer, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File
Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase, InfoMover,
Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, ISIS, Max Retriever,
MediaStor, MirrorView, Navisphere, NetWorker, nLayers, OnAlert, OpenScale,
PixTools, PowerPath, PowerSnap, QuickScan, Rainfinity, RepliCare, RepliStor,
ResourcePak, Retrospect, RSA, SafeLine, SAN Advisor, SAN Copy, SAN Manager,
Smarts, SnapImage, SnapSure, SnapView, SRDF, StorageScope, SupportMate,
SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder,
UltraFlex, UltraPoint, UltraScale, Unisphere, Vblock, VMAX, VPLEX, Viewlets, Virtual
Matrix, Virtual Matrix Architecture, Virtual Provisioning, VisualSAN, VisualSRM,
Voyence, WebXtender, xPression, xPresso, YottaYotta, the EMC logo, the RSA logo,
and where information lives are registered trademarks or trademarks of EMC
Corporation in the United States and other countries. All other trademarks used
herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to the technical
documentation and advisories section on EMC Online Support.


Contents

Preface ..................................................................................... 11
Chapter 1 Introduction ........................................................................ 13
Introduction to EMC VNX Series .................................................................... 14
Software suites available ..................................................................................... 14
Software packs available ..................................................................................... 15
Introduction to EMC Atmos ............................................................................ 15
Introduction to Atmos on VNX solution ......................................................... 20
Atmos on VNX solution topology ................................................................... 20
Chapter 2 Setup and Configuration ..................................................... 23
Introduction .................................................................................................. 24
Atmos VE on VNX test environment ............................................................... 24
Hardware resources ............................................................................................. 26
Software resources .............................................................................................. 26
Configure VNX storage .................................................................................. 27
Configure file-based storage (NFS) for Atmos VE ................................................... 28
Storage layout - file-based storage (NFS) ............................................................. 28
Configure storage - boot disks of Atmos VE nodes ............................................... 28
Configure storage - data and metadata disks of Atmos VE .................................... 29
Configure storage for application servers ............................................................. 29
Configure block-based storage (FC) for Atmos VE ................................................. 30
Storage layout - block-based storage (FC) ............................................................ 30
Configure storage - boot disk of Atmos VE ............................................................ 30
Configure storage - data and metadata disks of Atmos VE .................................... 31
Configure storage for application servers ............................................................. 31
Configure network ......................................................................................... 32
Configure private and public networks ................................................................. 32
Configure storage network ............................................................................ 33
Configure network for file-based storage (NFS) ..................................................... 33
Configure network for block-based storage (FC) ................................................... 34
Configure vSphere......................................................................................... 35
Configure virtual machines for Atmos VE nodes.................................................... 36
Post-configuration steps for the Atmos VE nodes ................................................. 37
Configure Atmos Virtual Edition .................................................................... 39
Configure multisite object configuration ............................................................... 39
Chapter 3 Monitoring and Management ................................................ 43
Introduction .................................................................................................. 44
EMC Unisphere ............................................................................................. 44
Atmos administration GUI ............................................................................. 44
Ganglia Atmos grid report ............................................................................. 46
EMC VSI for VMware vSphere Unified Storage Management .......................... 46
Migrate Atmos VE nodes ............................................................................... 47
Migrate Atmos nodes with vMotion and Storage vMotion ..................................... 48
Observations ....................................................................................................... 48
Conclusions ......................................................................................................... 51
Chapter 4 Atmos on VNX Performance ................................................. 53
Introduction .................................................................................................. 54
Atmos on VNX performance - file and block .................................................. 54
Test method ......................................................................................................... 54
Result analysis ..................................................................................................... 54
Conclusion ........................................................................................................... 56
Atmos on VNX performance - object and non-object ..................................... 56
Test method ......................................................................................................... 56
Result analysis ..................................................................................................... 57
Conclusion ........................................................................................................... 60
Chapter 5 Storage Efficiency ............................................................... 61
Introduction .................................................................................................. 62
Storage efficiency with thin provisioning ...................................................... 62
Configure thin provisioning on file-based object store .......................................... 62
Configure thin provisioning on block-based object store ...................................... 62
Test method ......................................................................................................... 63
Result analysis ..................................................................................................... 63
Conclusion ........................................................................................................... 68
Storage efficiency with compression ............................................................. 68
Configure compression on file-based object store ................................................ 68
Configure compression on block-based object store ............................................ 70
Test method ......................................................................................................... 72
Result analysis ..................................................................................................... 73
Conclusions ......................................................................................................... 75
Appendix A Using Grinder to Generate REST Workload on Atmos ............ 77
Grinder tool ................................................................................................... 78
Grinder test driver system configuration ............................................................... 78
Grinder script configuration .................................................................................. 78
Run a Grinder test and generate report ................................................................. 78



Figures

Figure 1. Optimize applications across the business with Atmos ...................... 16
Figure 2. Atmos physical hardware ................................................................... 18
Figure 3. Atmos Virtual Edition .......................................................................... 19
Figure 4. Atmos administration GUI ................................................................... 20
Figure 5. Atmos on VNX solution topology ......................................................... 21
Figure 6. Atmos on VNX high-level typical configuration .................................... 25
Figure 7. File-based storage layout for Atmos VE ............................................... 28
Figure 8. Block-based storage layout for Atmos VE ............................................ 30
Figure 9. vSphere Client .................................................................................... 32
Figure 10. Private network configured ................................................................. 33
Figure 11. Storage network diagram - NFS ........................................................... 34
Figure 12. Network diagram - FCoE ...................................................................... 35
Figure 13. Atmos node properties ....................................................................... 37
Figure 14. Atmos VE nodes with DRS resources pools ......................................... 38
Figure 15. Automatic startup for Atmos VE nodes ................................................ 38
Figure 16. RMG list .............................................................................................. 39
Figure 17. Tenant list .......................................................................................... 39
Figure 18. Tenant Basic Information window ....................................................... 40
Figure 19. Policy Specification window ............................................................... 41
Figure 20. Configure storage pool ....................................................................... 44
Figure 21. Add RMG ............................................................................................ 45
Figure 22. System Summary window ................................................................... 45
Figure 23. Ganglia Atmos grid report ................................................................... 46
Figure 24. VSI icon .............................................................................................. 46
Figure 25. Provision storage ................................................................................ 47
Figure 26. REST TPS without vMotion .................................................................. 49
Figure 27. REST TPS with vMotion ....................................................................... 49
Figure 28. Performance comparison for large objects .......................................... 50
Figure 29. Performance comparison for small objects ......................................... 51
Figure 30. Performance of object store for NFS and FCoE configurations .............. 55
Figure 31. Performance of small objects for NFS and FCoE configurations ........... 55
Figure 32. Performance of large objects for NFS and FCoE configurations ............ 56
Figure 33. Performance of object store for NFS and FCoE configurations .............. 58
Figure 34. Small objects in NFS and FC datastores - REST and OLTP workload ..... 59
Figure 35. Large objects in NFS and FC datastores - REST and OLTP workload ..... 60
Figure 36. Thin provisioned FC ............................................................................ 62
Figure 37. Performance of object store - thin provisioned NFS ............................. 64
Figure 38. Performance of object store - thin provisioned FC ................................ 64
Figure 39. Large objects - baseline compared with thin provisioned FCoE ............ 65
Figure 40. Large objects - baseline compared with thin provisioned NFS ............. 66
Figure 41. Small objects - baseline compared with thin provisioned FCoE ........... 66
Figure 42. Small objects - baseline compared with thin provisioned NFS ............. 67
Figure 43. Small objects - thin provisioned NFS compared with FCoE ................... 67
Figure 44. Large objects - thin provisioned NFS compared with FCoE ................... 68
Figure 45. Compression file system .................................................................... 69
Figure 46. Compression ratio .............................................................................. 70
Figure 47. Create LUN ......................................................................................... 71
Figure 48. LUN Properties - compression ............................................................ 72
Figure 49. Performance of large objects - baseline and LUN compression ........... 74
Figure 50. Performance of small objects - baseline and LUN compression .......... 75





Tables

Table 1. Hardware resources ............................................................................ 26
Table 2. Software resources ............................................................................. 26
Table 3. Virtual machine configuration ............................................................. 36
Table 4. Application server configuration specifications .................................. 57
Table 5. Thin provisioned - NFS ........................................................................ 63
Table 6. Thin provisioned - FC .......................................................................... 63
Table 7. LUN compression - FC ........................................................................ 73




Preface
As part of an effort to improve and enhance the performance and capabilities of its
product line, EMC from time to time releases revisions of its hardware and software.
Therefore, some functions described in this guide may not be supported by all
revisions of the software or hardware currently in use. For the most up-to-date
information on product features, refer to your product release notes.
If a product does not function properly or does not function as described in this
document, please contact your EMC representative.
Note: This document was accurate as of the time of publication. However,
as information is added, new versions of this document may be released to
the EMC Online Support website. Check the EMC Online Support website to
ensure that you are using the latest version of this document.
Purpose
This document describes how to configure a REST-based and SOAP-based object
store using EMC® Atmos® Virtual Edition (Atmos VE) and the EMC® VNX® series. This
solution is referred to as the Atmos on VNX solution in this document.
In addition, several aspects of this solution such as management, performance, and
storage efficiency are reviewed and the details are provided in this document.
Scope
This document focuses primarily on how to deploy an object store based on this
solution. The document focuses on key configurations that were found suitable and
includes a high-level description of the end-users' public network and the required
network load balancer.
To learn more about this solution, contact an EMC representative.
Audience
This document is intended for internal EMC personnel, partners, and customers who
are looking to build an object store using Atmos VE on the available VNX series
storage and servers.
Related documents
The following documents, located on the EMC Online Support website, provide
additional, relevant information. Access to these documents depends on your login
credentials. If you do not have access to the following documents, contact your EMC
representative:
EMC Atmos Virtual Edition Best Practices Guide
EMC Atmos Administrator's Guide
Using EMC VNX Storage with VMware vSphere TechBook
EMC VSI for VMware vSphere: Unified Storage Management Product Guide
VMware documents
The following documents are available for download from the VMware website:
ESXi Configuration Guide
Resource Management guide
Note: There are multiple versions of these guides on the VMware website.
Refer to the guide that corresponds to the appropriate vSphere release.




Chapter 1 Introduction
This chapter presents the following topics:
Introduction to EMC VNX Series .................................................................. 14
Introduction to EMC Atmos ......................................................................... 15
Introduction to Atmos on VNX solution ....................................................... 20
Atmos on VNX solution topology ................................................................ 20


Introduction to EMC VNX Series
The EMC® VNX® series delivers uncompromising scalability and flexibility for the
midtier while providing market-leading simplicity and efficiency to minimize total
cost of ownership. The VNX series is powered by Intel® Xeon® processors, for
intelligent storage that automatically and efficiently scales in performance, while
ensuring data integrity and security. Customers can benefit from new VNX features
such as:
Next-generation unified storage, optimized for virtualized applications
Extended cache using Flash drives with FAST Cache and Fully Automated
Storage Tiering for Virtual Pools (FAST VP), which can be optimized for the
highest system performance and lowest storage cost simultaneously on both
block and file.
Multiprotocol support for file, block, and object with object access through
Atmos Virtual Edition (Atmos VE).
Simplified management with EMC Unisphere for a single management
interface for all NAS, SAN, and replication needs.
Up to three times improvement in performance with the latest Intel multicore
CPUs, optimized for Flash.
6 Gb/s SAS back end with the latest drive technologies supported:
o 3.5" 100 GB and 200 GB Flash, 3.5" 300 GB and 600 GB 15k or 10k
rpm SAS, and 3.5" 2 TB 7.2k rpm NL-SAS
o 2.5" 300 GB and 600 GB 10k rpm SAS
Expanded EMC UltraFlex I/O connectivity - Fibre Channel (FC), Internet
Small Computer System Interface (iSCSI), Common Internet File System (CIFS),
Network File System (NFS) including parallel NFS (pNFS), Multi-Path File
System (MPFS), and Fibre Channel over Ethernet (FCoE) connectivity for
converged networking over Ethernet.
The VNX series includes five new software suites and three new software packs,
making it easier and simpler to attain the maximum overall benefits.
Software suites available
VNX FAST Suite - Automatically optimizes for the highest system performance
and the lowest storage cost simultaneously (FAST VP is not part of the FAST
Suite for the VNX5100).
VNX Local Protection Suite - Practices safe data protection and repurposing.
VNX Remote Protection Suite - Protects data against localized failures,
outages, and disasters.
VNX Application Protection Suite - Automates application copies and proves
compliance.
VNX Security and Compliance Suite - Keeps data safe from changes,
deletions, and malicious activity.
Software packs available
Total Efficiency Pack - Includes all five software suites (not available for the
VNX5100 and VNXe series).
Total Protection Pack - Includes the local, remote, and application protection
suites (not available for the VNXe3100).
Total Value Pack - Includes all three protection software suites and the
Security and Compliance Suite (the VNX5100 and VNXe3100 exclusively
support this package).
Introduction to EMC Atmos
EMC® Atmos® is a multi-petabyte platform for information storage and distribution.
It combines massive scalability with automated data placement to efficiently deliver
content worldwide.
The Atmos platform provides the following features:
Information lifecycle management - Atmos includes robust policy-based
information management functions that automate data placement and
protection. The policy engine supports advanced information services, such
as erasure codes, compression, replication, deduplication, and disk drive
spin-down. You can define different policies based on the data's value to the
business over time, or based on the needs of different customers or
departments within the organization.
Multitenancy - Enables you to segregate storage into logical units called
tenants. This multi-tenant architecture enables you to deploy multiple
applications within the same infrastructure, with each application securely
partitioned so that data is only accessible by the tenant who owns it.
A browser-based administration tool - Enables you to efficiently manage a
globally deployed system.
Representational State Transfer (REST) and Simple Object Access Protocol
(SOAP) web service APIs - Provide a standards-based web services API for
creating and managing content. Atmos also supports file-based access for
convenient integration with virtually any application.
A universal namespace - A distributed, hierarchical structure that presents
file system addressing services for the content stored in Atmos. It aggregates
multiple storage segments and sites into a single addressable storage entity,
separated through the use of secure multitenancy.
Advanced auto-managing and auto-healing capabilities - The Atmos platform
consists of a set of redundant, distributed services that handle the underlying
data discovery, data management, and data storage tasks. These services can
restart themselves and rebuild objects when a storage disk is corrupted.
Atmos is global-scale storage on physical or virtual appliances: groups of appliances
that form a storage cloud that can be managed as a single system. Atmos is based
on a unique set of data services with no limits on namespace or location, intelligent
protection and efficiency services, web-based services, and multitenancy. Users can
access an Atmos object store globally with REST over HTTP or HTTPS, or locally over
file services. Atmos can address challenges with storing and managing vast amounts
of unstructured content for custom or packaged applications.
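The REST interface can be exercised from any HTTP client. The following sketch is a
minimal illustration, not EMC sample code: the endpoint, UID, and shared secret are
placeholders, and the request is signed with HMAC-SHA1 over a canonical string
(method, content type, range, date, resource, then the sorted x-emc headers), as
described in the Atmos programmer's documentation. Verify the details against that
guide before using this pattern.

    import base64, hashlib, hmac, time
    import requests  # third-party HTTP client

    # Placeholders for illustration only.
    ENDPOINT = "https://atmos.example.com"  # load-balanced public endpoint
    UID = "tenant1/application1"            # subtenant/application UID
    SECRET = "c2hhcmVkLXNlY3JldC1rZXk="     # base64 shared secret from Atmos

    def emc_signature(method, resource, headers):
        """HMAC-SHA1 over the canonical request string."""
        emc = sorted((k.lower(), v) for k, v in headers.items()
                     if k.lower().startswith("x-emc-"))
        canonical = "\n".join(
            [method,
             headers.get("Content-Type", ""),
             "",                             # Range header (unused here)
             headers.get("Date", ""),
             resource] + ["%s:%s" % kv for kv in emc])
        digest = hmac.new(base64.b64decode(SECRET),
                          canonical.encode("utf-8"), hashlib.sha1).digest()
        return base64.b64encode(digest).decode("ascii")

    date = time.strftime("%a, %d %b %Y %H:%M:%S GMT", time.gmtime())
    headers = {"Content-Type": "application/octet-stream",
               "Date": date,
               "x-emc-date": date,          # echoes Date for proxies that rewrite it
               "x-emc-uid": UID}
    headers["x-emc-signature"] = emc_signature("POST", "/rest/objects", headers)

    # A 201 response carries the new object's path in the Location header.
    resp = requests.post(ENDPOINT + "/rest/objects",
                         headers=headers, data=b"hello, object store")
    print(resp.status_code, resp.headers.get("Location"))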
Figure 1 illustrates how Atmos can optimize applications across the business.

Figure 1. Optimize applications across the business with Atmos
Custom applications can leverage the Atmos Software Development Kit (SDK),
which provides ways to integrate applications of nearly any language or
framework, including:
CAS applications
Mobility capabilities, such as Windows or Linux desktop access to Atmos
through the new Atmos GeoDrive, or iOS access through EMC's partner
Oxygen Cloud
Cross-site content storage for content management applications such as EMC
Documentum® or Microsoft SharePoint®
Movement of inactive content, typically 80 percent of content, from Tier 1 NAS
to the cloud
Multisite storage of medical images
Atmos is managed as a single system over many sites and can be delivered as a self-
service experience to consumers.
The EMC Atmos platform has two storage options: the Atmos physical hardware and
Atmos VE.
The Atmos physical hardware includes Atmos nodes that run the Atmos
software, and Disk Array Enclosures (DAEs) that provide storage. EMC has
strengthened the Atmos scale-out storage platform by leveraging the
capabilities of the latest Intel® Xeon® 5600 processor for a more than 50
percent performance improvement at reduced power per TB for greater
efficiency. Figure 2 on page 18 illustrates this storage option.
Atmos VE includes a virtualized environment that is based on VMware
vSphere. With Atmos VE, the Atmos nodes that run the Atmos software are
mapped to virtual machines that are provisioned on a VMware vSphere
supported storage array such as EMC VNX series. Figure 3 on page 19
illustrates this storage option.
This paper focuses on the configuration that uses Atmos VE on top of
vSphere-managed storage similar to site #4 in Figure 1 on page 16.
Atmos software is a tiered capacity-based software license. For Atmos VE, the
capacity is calculated based on the usable capacity provisioned to virtualized
environments. For example, if 30 TB of usable capacity is provisioned to ESXi, then 30
TB of Atmos capacity license is required.
An Atmos capacity license includes all features offered by Atmos. There are no add-
ons or modular software for Atmos. Figure 2 on page 18 shows the physical hardware
of Atmos.


Figure 2. Atmos physical hardware

Figure 3. Atmos Virtual Edition
Despite the physical differences between the Atmos physical hardware and the
Atmos VE storage options, it is important to note that in both options the same Atmos
software is used. As a consequence, object stores implemented using both storage
options appear the same to end-users. Furthermore, even to Atmos system
administrators, such object stores, once installed and configured, can be similarly
managed using the web-based Atmos administration GUI as shown in Figure 4 on
page 20.

Figure 4. Atmos administration GUI
Introduction to Atmos on VNX solution
The Atmos on VNX solution includes the framework and components for deploying an
object store on VNX platforms using the Atmos object technology and the VMware
vSphere virtualization software suite. The Atmos on VNX solution builds on the
strengths of its three core technologies: Atmos VE, VMware vSphere, and VNX.
Atmos on VNX solution topology
The topology of the Atmos on VNX solution includes four layers: storage, vSphere-
managed servers, Atmos nodes, and the Atmos access/integration layer.

Figure 5 illustrates the topology of the Atmos on VNX solution.

Figure 5. Atmos on VNX solution topology
Following is a review of each layer in this topology:
VNX storage - This layer includes the VNX platform that is used to store the
object store. This solution is viable for both the VNX platform for file and the
VNX platform for block. Because storage in this solution is presented to
VMware vSphere, any storage protocol that is supported by vSphere can be
used. This includes NFS, iSCSI, FC, and FCoE.
vSphere-managed servers - Through a storage network, VNX storage objects
(file systems for file-based storage and LUNs for block-based storage) are
presented to servers that are running the vSphere bare-metal hypervisor,
ESXi. On ESXi, datastores (NFS for file-based storage, and VMFS for block-
based storage) are created on the storage objects that are presented from
VNX. These datastores are used to provision the virtual machines that run the
Atmos software.
Atmos nodes - Each virtual machine that runs the Atmos software functions
as an Atmos node that holds a portion of the Atmos object store. Each Atmos
node is configured with virtual disks created on the datastores that reside on
the VNX storage objects. These virtual disks hold the following two data
elements of an Atmos object store:
o Data - Data that is stored in the object store by the end-users
o Metadata - Information that describes the data that is stored by the
end-users
Atmos access and integration layer - End-users access the object store
through this layer using REST-based and SOAP-based applications. Even
though the object store is contained and managed by multiple Atmos nodes,
end-users refer to the object store as a single entity. To access the integration
layer, end-users rely on a public network.
This solution topology does not require dedicated VNX, server, or vSphere resources.
The VNX platform can be shared with other file-based or block-based applications
and the servers can be used to run other non-Atmos virtual machines. In essence,
available resources in the private cloud can be used to build an object store using
this solution.





Chapter 2 Setup and
Configuration
This chapter presents the following topics:
Introduction ..................................................................................... 24
Atmos VE on VNX test environment ............................................................ 24
Configure VNX storage ............................................................................... 27
Configure network ..................................................................................... 32
Configure storage network ......................................................................... 33
Configure vSphere ..................................................................................... 35
Configure Atmos Virtual Edition .................................................................. 39


Introduction
The configuration of the Atmos on VNX solution consists of three primary parts:
storage configuration on the VNX layer, vSphere configuration on the virtualization
layer, and Atmos VE configuration on the Atmos layer.
The architecture of this solution relies heavily on network resources, not only to
interconnect VNX, vSphere, and Atmos VE, but also to enable end-users to access the
Atmos-based object store. Therefore, a key element in the configuration of this
solution is the network configuration, both the storage network and the end-users'
public network.
This chapter covers the setup and configuration of the following layers:
Configure VNX storage
Configure network (storage, private, and public)
Configure vSphere
Configure Atmos
Atmos VE on VNX test environment
This section includes information on the test environment that was used to develop
this solution. Use this configuration as a guideline to design the actual environment
for this solution.
Different VNX platforms, servers, and network switches can be used as long as they
are supported by VMware vSphere and Atmos VE (where applicable). The Atmos
Virtual Edition Best Practices Guide provides more information about
Atmos VE.
The selected VNX platforms, servers, and network switches must also meet the
required object store capacity, and the network connectivity requirements that are
specified in this chapter. Figure 6 on page 25 shows a high-level typical configuration
of the Atmos on VNX solution that includes two sites.

Figure 6. Atmos on VNX high-level typical configuration
The following two sections provide details of the hardware and software resources
that were used to deploy this solution.

Hardware resources
Table 1 lists the hardware resources used to deploy this solution.
Table 1. Hardware resources

EMC VNX5300 (quantity: two, one for each location) - used as VNX for File,
VNX for Block, and object storage:
- 2 Data Movers (active/passive)
- 2 storage processors (active/active)
- 7 disk-array enclosures (DAEs), fully populated with 15 SAS
  300 GB/15k spindles each

Dell PowerEdge R810 (quantity: four, a pair for each location) - used as
ESXi hosts (NAS, iSCSI, FCoE):
- 4 quad-core CPUs
- 128 GB memory
- 1 GbE network interface card (NIC) for the virtual machine network
- 2 dual-port 10 GbE FCoE CNAs for the storage network

Cisco Nexus 5010 (quantity: four, a pair for each location) - used as
10 GbE FCoE converged network switches:
- Twenty-six 10 GbE FCoE ports

Software resources
Table 2 lists the software components used to deploy this solution.
Table 2. Software resources

Description - Minimum version
- VNX Operating Environment (OE) for File: 7.0.13.0
- VNX OE for Block: 05.31.000.5.006
- VMware vSphere ESXi Installable: 4.1 Update 1
- VMware vCenter Server: 4.1 Update 1
- Operating system: Windows Server 2008 SP2
- EMC VSI for VMware vSphere Unified Storage Management: 4.1
- EMC Atmos Virtual Edition (VE): 1.4.1 Maintenance Release (1.4.1.59250)
Configure VNX storage
This section explains the NFS and FC storage configuration used by the Atmos VE
nodes.
Note: This solution supports iSCSI storage. However, the iSCSI storage
configuration is not included in this document.
To ensure optimal performance and improve troubleshooting, the following principles
were followed in the storage configuration for the Atmos on VNX solution:
Separate object and non-object data.
Separate boot from data. With an Atmos object store, data includes both the
user data and the metadata of the Atmos nodes.
Leverage VNX to self-optimize the virtual storage pools.
Standardize on RAID 5 data protection to simplify the configuration and to
balance between optimal performance and high-capacity needs.
Use the EMC VSI for VMware vSphere: Unified Storage Management (USM)
feature to easily provision the storage and present it to the ESXi servers.
Note: To simplify the configuration procedure, place all ESXi servers from
each site in a single folder or cluster in the vCenter Server inventory. This
enables a much smoother operation of the USM feature to provision the VNX
storage and to present it to the servers in a single operation.
In this section, two examples are provided to illustrate the typical storage
configuration of this solution (with block-based and file-based storage). In both the
examples, storage was configured to contain four Atmos nodes. Each Atmos node
consists of 4 TB of data and 1 TB of metadata (total object store size is 20 TB). In
these examples, the ratio of data to metadata was set to 4:1.
Determine this ratio based on the applications the end-users use with the object
store and the amount of metadata these applications write for each object. Atmos
supports a range of ratios. The Atmos Virtual Edition Best Practices Guide available
on the EMC Online Support website provides more information on the data and
metadata ratio.
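The sizing arithmetic used in these examples is simple enough to script. The helper
below is purely illustrative (its name and interface are invented for this guide) and
reproduces the 20 TB, four-node, 4:1 layout described above:

    def node_layout(total_tb, nodes, data_to_meta):
        """Split a target object store size into per-node data and
        metadata capacities for a given data:metadata ratio."""
        per_node = total_tb / nodes
        meta = per_node / (data_to_meta + 1)
        return per_node - meta, meta

    # The examples in this section: a 20 TB object store on four Atmos VE
    # nodes with a 4:1 data-to-metadata ratio.
    data_tb, meta_tb = node_layout(total_tb=20, nodes=4, data_to_meta=4)
    print("%.0f TB data + %.0f TB metadata per node" % (data_tb, meta_tb))
    # -> 4 TB data + 1 TB metadata per node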
In addition, allocate storage to a separate storage pool to accommodate non-object
application servers.
Figure 7 on page 28 shows the storage configuration with file-based storage.
Figure 8 on page 30 shows the storage configuration with block-based storage.
For each device type in VNX, allocate an adequate number of hot spare drives to
replace any failed drives. In this example, for the 600 GB SAS drives, three hot spare
drives were created in the storage layout.
This section provides more details on each storage configuration.
Configure file-based storage (NFS) for Atmos VE
This section describes the procedures required to configure file-based storage for
Atmos VE in the test environment.
To configure storage for Atmos VE, complete the following steps:

1. Configure storage for the boot disk of the Atmos VE nodes.
2. Configure storage for data and metadata disks for the Atmos VE nodes.
3. Configure storage for non-object data.
The following four sections describe each step of the storage configuration on the
local VNX for the local Resource Management Group (RMG) Atmos VE nodes. Use a
similar configuration on the remote VNX for the remote RMG Atmos VE nodes.
Storage layout - file-based storage (NFS)
Figure 7 shows the file-based storage layout for the Atmos on VNX solution on the
tested storage configuration.

Figure 7. File-based storage layout for Atmos VE
The Unisphere Online Help available on Unisphere provides the procedure to create
storage pools for files and hot spares for VNX.
Configure storage - boot disks of Atmos VE nodes
Create a storage pool with RAID 5 data protection to contain the boot disks of the
Atmos VE nodes.
Create two file systems on this storage pool with the USM feature to contain the boot
virtual disks of the local RMG Atmos VE nodes (the section Configure Atmos Virtual
Edition on page 39 provides more information).
The boot disks of the Atmos VE nodes that are deployed on the VNX storage must be
distributed evenly between these two file systems. This is based on the best practice
for VMware multipathing with VNX file systems. The Using VNX storage with VMware
vSphere TechBook available on the EMC Online Support website provides more
information.
Figure 7 on page 28 shows that a pair of 200 GB file systems was created in storage
pool 2. Storage pool 2 was created to contain the boot disks of the Atmos VE nodes.
Configure storage - data and metadata disks of Atmos VE
Create a storage pool with RAID 5 data protection. This storage pool contains the data
and metadata disks on the Atmos VE nodes.
Create two file systems on this storage pool with the USM feature to contain the data
and metadata virtual disks of the local RMG Atmos VE nodes.
The data and metadata disks of the Atmos VE nodes on the VNX storage must be
evenly distributed between these two file systems. This is based on the best practice for
VMware multipathing with VNX. The Using VNX storage with VMware vSphere
TechBook provides more information.
Figure 7 on page 28 shows the ratio of data disks to metadata disks in the 20 TB
object store was 4:1. Therefore, two 10 TB file systems were created in storage pool 1
for data and metadata disks on four Atmos VE nodes. Each file system contained four
2 TB data disks and two 1 TB metadata disks.
Note: Based on the required object store size, create additional file systems
for the data and metadata disks when the object store exceeds the maximum
VNX file system size. In this case, it is important to create an even number of
file systems to optimize load distribution.
Configure storage for application servers
Typically, the VNX storage is not used exclusively for the Atmos object store.
Allocate a part of the storage for non-object data. For this data, create one (or more)
separate storage pools.
Figure 7 on page 28 shows that storage pool 3 was created with RAID 5 data protection
for the non-object data. This storage pool was created with 20 disks and used for the
application servers.
The following file systems were created for application servers on the storage pools:
Two file systems for application server boot disks
Two file systems for application server data disks
To complete the configuration, three hot spares were created in the storage layout.
Configure block-based storage (FC) for Atmos VE
This section describes the procedures required to configure the block-based storage
for Atmos VE on the test environment.
To configure storage for Atmos VE, complete the following steps:
1. Configure storage for the boot disks of the Atmos VE nodes.
2. Configure storage for the data and metadata disks of the Atmos VE nodes.
3. Configure storage for non-object data.
The following four sections describe each step of the storage configuration for the
local RMG Atmos VE nodes on the local VNX. Use a similar configuration for the
remote RMG Atmos VE nodes on the remote VNX.
Storage layout - block-based storage (FC)
Figure 8 shows the block-based storage layout for the Atmos on VNX solution on a
tested storage configuration.

Figure 8. Block-based storage layout for Atmos VE
Configure storage - boot disk of Atmos VE
Create a storage pool with RAID 5 data protection. This storage pool contains the boot
disks of the Atmos VE nodes.
Use the USM feature to create one VMFS data store on this storage pool. This data
store contains the boot virtual disks of the local RMG Atmos VE nodes (the section
Configure Atmos Virtual Edition on page 39 provides more information).
Figure 8 shows that one 200 GB VMFS datastore was created in storage pool 2.
Storage pool 2 was created to contain the boot disks of the local RMG Atmos VE nodes.
Configure storage - data and metadata disks of Atmos VE
Create a storage pool with RAID 5 data protection. This storage pool contains the data
disks and metadata disks of the Atmos VE nodes.
Typically, Atmos data and metadata disks require VMFS datastores that are much
larger than 2 TB. In VMware vSphere 4, create the data store across multiple LUNs on
the VNX platform that are each up to 2 TB minus 512 bytes in size.
To simplify and streamline the configuration, use a unified LUN size in the data and
metadata storage pool. Use the Create LUN wizard in Unisphere to create LUNs
according to the overall capacity required for the data and metadata disks. Present
the LUNs to the two ESXi servers.
Use the vSphere Client to create two VMFS datastores on the LUNs to hold the data
and metadata virtual disks of the local RMG Atmos VE nodes. Create each data store
on half of the LUNs that are created.
The data and metadata disks of these nodes must be evenly distributed between the
two VMFS datastores. Tests have shown that creating two VMFS datastores for data
and metadata virtual disks produces better disk utilization and performance than a
configuration with just a single data store for data and metadata virtual disks.
Figure 8 on page 30 shows the ratio of data disks to metadata disks in the 20 TB
object store was 4:1. Each of the four nodes of the local RMG was configured with 4
TB of data and 1 TB of metadata. Therefore, two 10 TB VMFS datastores were created
in storage pool 1 for data and metadata disks of the local RMG Atmos VE nodes.
In this example, each datastore was created across five 2 TB LUNs on the VNX
platform (five VMFS extents). Each datastore contains four 2 TB data disks and two
1 TB metadata disks.
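Because each VMFS extent in vSphere 4 is capped at 2 TB minus 512 bytes, the
usable size of a multi-extent datastore is simply the extent count times that limit.
The short check below (a hypothetical helper, using this example's figures) shows
why five nominal 2 TB LUNs can be treated as a 10 TB datastore:

    TB = 1024 ** 4                     # bytes per terabyte (binary)
    MAX_EXTENT = 2 * TB - 512          # vSphere 4 VMFS extent size limit

    def datastore_capacity(extents):
        """Usable capacity of a VMFS datastore built from equal-sized extents."""
        return extents * MAX_EXTENT

    # Five nominal 2 TB LUNs back each data/metadata datastore; the result is
    # only 2,560 bytes short of a full 10 TB, which is negligible in practice.
    print(datastore_capacity(5) / TB)  # -> ~10.0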
Configure storage for application servers
Typically, the VNX storage is not used exclusively for the Atmos object store.
Therefore, allocate a part of the storage for non-object data. For this data, create one
or more separate storage pools.
Figure 8 on page 30 shows that storage pool 3 was created with RAID 5 data protection
for the non-object data. This storage pool was created with 20 disks and was used for
application servers.
The following two datastores were created for application servers on this storage
pool:
One datastore for boot disks of application servers
One datastore for data disks of application servers

Configure network
This section explains how to configure the network for the Atmos VE test
environment.
For the Atmos on VNX solution, the network configuration consists of the following
two parts:
Private and public networks - Connect the Atmos VE nodes and the
end-users to the Atmos object store.
Storage network - Connects the VNX storage to the ESXi servers. The Atmos VE
nodes use this network to access their configured virtual disks.
This section provides more information on the configuration of each of these parts.
Configure private and public networks
For the Atmos on VNX solution, configure both private and public networks.
The private network, to which the Atmos VE nodes connect, is a separate network.
Even though it is technically feasible to configure the private network as an
internal virtual network (using an internal vSwitch), allocate physical network
resources for the private network. This allows seamless migration of an Atmos VE
node between the servers using VMware vMotion (the section Migrate Atmos nodes
with vMotion and Storage vMotion on page 48 provides more information).
To configure the private network for the Atmos on VNX solution, complete the
following steps:
1. Log in to the vSphere Client, and then select the server from the Inventory
panel.
2. Click the Configuration tab, and then click Networking from the left panel.

Figure 9. vSphere Client
3. Click Add Networking to start the Add Networking wizard for the virtual
machine network that serves as the private network.
4. Complete all the steps in the Add Networking wizard to configure the private
network virtual machine network.
5. Repeat steps 1 through 4 to configure the private network on the second
local ESXi host. (A scripted alternative is sketched below.)
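When many hosts are involved, the same port group can also be created
programmatically instead of through the vSphere Client. The sketch below uses the
open-source pyVmomi bindings, which are not part of the software stack tested for
this solution; the vCenter address, credentials, port group name, VLAN ID, and
vSwitch name are all assumptions to adapt to the actual environment:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="changeme", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        hosts = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True).view

        for host in hosts:  # repeat on both local ESXi hosts
            spec = vim.host.PortGroup.Specification(
                name="Atmos Private Network",   # assumed port group name
                vlanId=0,                       # set the VLAN used on site
                vswitchName="vSwitch1",         # assumed existing vSwitch
                policy=vim.host.NetworkPolicy())
            host.configManager.networkSystem.AddPortGroup(portgrp=spec)
    finally:
        Disconnect(si)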
Configure the Atmos nodes with their first virtual NIC connected to the private
network as shown in Figure 10.

Figure 10. Private network configured
The public network is an external network. End-users use this network to access the
object store. This network must provide high availability and redundancy. A network
load balancer (hardware or software) is required in the Atmos public network
configuration. The load balancer redirects end-users from Atmos VE nodes that
are inactive to alternative Atmos VE nodes (local and remote) that are active and
have access to copies of the objects that are inaccessible through the inactive node.
For more information on the configuration of the public network with Atmos, contact
an EMC representative for Atmos.
Similar to the private network, use the Add Networking wizard to configure the public
network in the ESXi hosts. Configure the Atmos nodes with their second virtual NIC
connected to the public network.
The EMC Atmos Virtual Edition Best Practices Guide available on the EMC Online
Support website provides more information on how to set up the private and public
networks of the Atmos VE nodes.
Configure storage network
For the Atmos on VNX solution, configure a storage network. The Atmos VE nodes use
this storage network to access the configured virtual disks on the VNX storage.
Hence, it is important for this network to be highly available, with more than one I/O
path and with no single point of failure.
This section provides details on the configuration of this storage network for the
file-based storage on NFS and for block-based storage on FC.
Configure network for file-based storage (NFS)
For the storage network, a GbE network is required. Because this solution is typically
deployed with non-object workloads, a 10 GbE FCoE converged network is
recommended.
Based on the best practice for VMware multipathing with VNX file storage, the storage
network must include two I/O paths on two network switches between the ESXi hosts
and the VNX Data Movers. Use a combination of NIC Teaming on ESXi, Link
Aggregation on VNX, and multichassis Link Aggregation on the switches for network
multipathing between the two I/O paths. Use two interfaces to create an LACP bond
on the Data Mover. Also, configure NIC Teaming on the ESXi side. Enable trunking on
all the storage network connections.
The Using VNX Storage with VMware vSphere TechBook available on the EMC Online
Support website provides more information on the recommended configuration for
VMware vSphere network multipathing with VNX file-based storage.
Figure 11 shows a storage network configuration for NFS storage using a 10 GbE FCoE
converged network. In this example, four 10 GbE FCoE network connections were
used to connect the two servers to the two Cisco Nexus network switches. Similarly,
four other connections were used to connect the Data Movers to the two Cisco Nexus
network switches.

Figure 11. Storage network diagram - NFS
Configure network for block-based storage (FC)
For the storage network, an FC network is required. Because this solution is typically
deployed with non-object workloads, a 10 GbE FCoE converged network is
recommended.
Based on the best practice for VMware multipathing with the VNX platform for block,
the storage network must include two FC I/O paths on two FC switches between the
ESXi hosts and the VNX storage processors. Use the VMware Native Multipathing (NMP) to
multipath between the two I/O paths. When FCoE is used for the storage network, do
not enable trunking on all the storage network connections.
The Using VNX Storage with VMware vSphere TechBook available on the EMC Online
Support website provides more information on the recommended configuration for
VMware vSphere network multipathing with VNX platform for Block.
Figure 12 shows the storage network configuration for FC storage with a 10 GbE FCoE
converged network. In this example, the following FCoE network connections were
used:

Each ESXi host had two FCoE connections, one to each of the Cisco Nexus
5010 network switches.
Each VNX storage processor had two FCoE connections, one to each of the
Cisco Nexus 5010 network switches.

Figure 12. Network diagram - FCoE
Configure vSphere
This section explains the configuration changes made to the default configuration of
VMware vSphere 4.1 for the Atmos VE software. Use the USM feature to provision the
required datastores as explained in the section Configure VNX storage on page 27.
Use Unisphere and vSphere Client to provision the data and metadata datastores on
VNX platform for Block.

Configure virtual machines for Atmos VE nodes
Virtual machines are used to create the Atmos VE nodes. These virtual machines were
configured to run the Atmos software. For each Atmos VE node, virtual disks were
created on the following two different datastores:
Boot datastore - Boot disk of 20 GB
Data and metadata datastore - One metadata disk and one (or more) data
disks
Table 3 lists the configuration of an Atmos VE node virtual machine used to deploy
the Atmos on VNX solution.
Table 3. Virtual machine configuration
Virtual machine component - Setting
- OS: Atmos + Linux 2.6.x kernel (64-bit)
- CPU: 2
- Memory: 8 GB or 12 GB
- Network adapter 1: Atmos private network
- Network adapter 2: Virtual machine network
- Disk 1: 20 GB - boot
- Disk 2: 1 TB - metadata operations and logging details
- Disk 3: 2 TB - data
- Disk 4: 2 TB - data

For high availability, deploy two Atmos nodes on each ESXi server.
Deploy all the virtual machines on datastores that are provisioned on VNX with the
USM feature.
Figure 13 on page 37 shows the virtual machine properties of the Atmos node.

Figure 13. Atmos node properties
Post-configuration steps for the Atmos VE nodes
Perform the post-configuration steps after the virtual machines are configured and
before the Atmos software is installed on them:
For each site, distribute the Atmos VE nodes evenly between the servers (two
nodes on each server). It is possible to use DRS to distribute the Atmos VE
nodes. However, you must define DRS rules to set the server for each
node.
Optionally, when using DRS, add the nodes to a DRS resource pool in the DRS
cluster while adding other virtual machines to other resource pools in the
cluster, as shown in Figure 14 on page 38. This enables you to control the
virtual resources that are allocated to object and non-object workloads in the
cluster.


Figure 14. Atmos VE nodes with DRS resources pools
Configure the startup/shutdown options for the nodes as shown in Figure 15.
This enables the nodes to start automatically soon after the hosts are powered
on, avoiding the need to manually power on the Atmos VE nodes.

Figure 15. Automatic startup for Atmos VE nodes
The ESXi Configuration Guide and the Resource Management Guide on the VMware
vSphere documentation website provide more details on these steps.

Configure Atmos Virtual Edition
This section includes all the configuration changes that were made to the default
configuration of Atmos VE 1.4.1.

The EMC Atmos Virtual Edition Best Practices Guide available on the EMC Online
Support website provides information to install and configure the Atmos VE nodes.
The EMC Atmos Administrator's Guide available on the EMC Online Support website
provides information to configure the Atmos object store.

Note: With Atmos VE, there is no need to manually install the VMware Tools
package on the nodes after the installation of the Atmos VE software,
because the VMware Tools package is included in the Atmos software that is
installed.
An Atmos VE configuration typically consists of two RMGs: RMG1 represents a local
object store and RMG2 represents a remote object store. Each RMG must contain four
Atmos VE nodes (two on each ESXi server) as shown in Figure 16.

Figure 16. RMG list
One Tenant (t1) was configured as shown in Figure 17.

Figure 17. Tenant list
Configure multisite object configuration
To configure multisite objects, define the object-replication policy. The replication
is done natively in Atmos.
Note: Use VNX-level replication such as Replicator, MirrorView, or
RecoverPoint only for non-object data.
With the Atmos on VNX solution, create two copies for every new object: one local
copy (synchronous) and one remote copy (asynchronous).


To create an object-replication policy, complete the following steps for each tenant in
the object store:
1. Log in to the Atmos node as the tenant admin. The Tenant Basic Information
window appears.

Figure 18. Tenant Basic Information window
2. In the Policy Specification area, click default.
3. In the Policy Specification area, complete the following steps:
a. In the Replica Type list box, select sync, and in the Location list box,
retain same and $clientCreateLoc to create the first replica (Replica 1)
within the local RMG.
b. In the Replica Type list box, select async, and in the Location list box,
select other than and $clientCreateLoc to create the second replica
(Replica 2) within the remote RMG.
c. If required, select Enable Retention and Enable Deletion to set retention
and expiration for the policy.

Figure 19. Policy Specification window
Note Because the object store is deployed on the VNX platform, which
is RAID protected, it is not necessary to enable Erasure Codes as part
of the object-replication policy. Using Erasure Codes with Atmos VE is
not recommended.

Chapter 3 Monitor and Management
This chapter presents the following topics:
Introduction ..................................................................................... 44
EMC Unisphere ..................................................................................... 44
Atmos administration GUI .......................................................................... 44
Ganglia Atmos grid report .......................................................................... 46
EMC VSI for VMware vSphere Unified Storage Management ........................ 46
Migrate Atmos VE nodes ............................................................................ 47


Introduction
This chapter describes the applications used in this solution. It also explains the
procedures used to monitor and manage the Atmos on VNX solution.
The applications used are:
EMC Unisphere
Atmos administration GUI and Ganglia
USM feature
EMC Unisphere
Unisphere is a GUI that is used to manage and monitor the VNX platform used for the
Atmos object store. Unisphere is used to configure storage pools on the VNX
platform. In these storage pools, file systems and LUNs can be created for the Atmos
on VNX solution.
To configure a storage pool for file, select Storage > Storage Configuration > Storage
Pool for File.
To configure a storage pool for block, select Storage > Storage Configuration >
Storage Pools.

Figure 20. Configure storage pool
Atmos administration GUI
The URL that is used to access the Atmos administration GUI is:
https://hostname/mgmt_login
Replace hostname with the IP address of the Atmos node.

Use the Atmos administration GUI to perform various object store configuration tasks
such as:

1. Configure the Atmos nodes
2. Add RMGs
3. Create tenants
4. Define policies for tenants
5. Add nodes to tenants
6. Monitor the Atmos nodes
To add an RMG, click Add RMG in the System Dashboard window.

Figure 21. Add RMG
The Atmos administration GUI provides various options to monitor the performance of
the activities within the Atmos VE nodes.
To view a detailed status of activities within Atmos, select System Management >
System Dashboard. The System Summary window appears.

Figure 22. System Summary window

Ganglia Atmos grid report
The URL that is used to access the Ganglia Atmos Grid Report is:
https://hostname/ganglia
This report helps to monitor the status of the RMGs. To monitor the ongoing read and
write activity on the RMG nodes, click the TPS activity. The specific TPS graph
appears. A reachability sketch for both monitoring URLs follows Figure 23.

Figure 23. Ganglia Atmos grid report
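
Before logging in, a quick scripted check can confirm that the administration GUI and the Ganglia report respond on every node. This minimal sketch uses the Python requests library; the node IP addresses are placeholders, and certificate verification is disabled because the nodes typically present self-signed certificates.

    # Sketch: verify that https://<node>/mgmt_login and https://<node>/ganglia
    # respond on each Atmos VE node (the node IPs below are placeholders).
    import requests
    import urllib3

    urllib3.disable_warnings()  # nodes usually use self-signed certificates

    NODES = ['192.168.1.11', '192.168.1.12', '192.168.1.13', '192.168.1.14']

    for node in NODES:
        for path in ('mgmt_login', 'ganglia'):
            url = 'https://%s/%s' % (node, path)
            try:
                r = requests.get(url, verify=False, timeout=10)
                print('%s: HTTP %d' % (url, r.status_code))
            except requests.RequestException as exc:
                print('%s: unreachable (%s)' % (url, exc))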
EMC VSI for VMware vSphere Unified Storage Management
EMC VSI for VMware vSphere is a vSphere Client plug-in that is used to manage and
monitor EMC storage connected to VMware vSphere environments. Unified Storage
Management (USM) is a VSI feature available to manage the VNX and VNXe
platforms. Administrators can use USM to perform the following management tasks
on VNX storage in the Atmos on VNX solution:
Provision VNX platform for File and VNX platform for Block for the Atmos
object store from the vSphere Client.
Compress and decompress Atmos nodes that are provisioned on VNX
platform for File.
EMC VSI for VMware vSphere is available for download from the EMC Online Support
website. After this plug-in is installed, the VSI icon appears on the vSphere Client
Home page as shown in Figure 24.

Figure 24. VSI icon
To provision storage using VSI, right-click the ESXi server (or the cluster that contains
the hosts), and then select EMC > Unified Storage > Provision Storage.

Figure 25. Provision storage
Note To compress the Atmos nodes, select EMC > Unified Storage >
Compress.
The amount of time to provision storage depends on the size of the Atmos node
datastore. The amount of time to compress the Atmos node depends on the amount
of data available on the virtual disk associated with the Atmos node.
The EMC VSI for VMware vSphere: Unified Storage Management Product Guide
available on the EMC Online Support website provides information on how to install
and configure the USM feature.
Migrate Atmos VE nodes
This section provides details about the Atmos node migration using VMware
vMotion and VMware Storage vMotion.
Migration is required to perform maintenance activities on the following items:
Server: The server on which the Atmos node is running.
Note In the case of a code upgrade, the Atmos node virtual machine is
migrated to another server with vMotion.
VNX storage object: The VNX storage object on which the node is provisioned
(file system or LUN).
Note In this case, the Atmos node virtual machine is migrated to
another storage location with Storage vMotion.

Migrate Atmos nodes with vMotion and Storage vMotion
To migrate Atmos nodes with vMotion or Storage vMotion, complete the following
steps:
1. Configure the Atmos private network on an external Standard vSwitch that is
connected to both ESXi servers.
2. Remove any existing DRS VM Affinity rules for the Atmos node virtual
machine.
3. Right-click the Atmos node virtual machine, then select Migrate to launch the
Migrate Virtual Machine Wizard.
4. Select Change Host to use vMotion to migrate the Atmos VE node from the
source ESXi host to the destination host.
5. Select Change Datastore to use Storage vMotion to migrate the Atmos VE
node from its existing datastores to destination datastores.
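
Steps 4 and 5 can also be driven programmatically. The following minimal pyVmomi sketch triggers the same vMotion and Storage vMotion operations; it assumes the connection and find_obj helper from the earlier sketches, and the destination host and datastore names are placeholders.

    # Sketch: migrate an Atmos VE node with vMotion (change host) or
    # Storage vMotion (change datastore).
    from pyVmomi import vim

    def vmotion(vm, dest_host):
        # Live-migrate the VM to another ESXi host (compute only).
        return vm.MigrateVM_Task(
            pool=None, host=dest_host,
            priority=vim.VirtualMachine.MovePriority.defaultPriority)

    def storage_vmotion(vm, dest_datastore):
        # Move the VM's disks to another datastore while it keeps running.
        return vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=dest_datastore))

    node = find_obj(vim.VirtualMachine, 'atmos-node1')
    task = vmotion(node, find_obj(vim.HostSystem, 'esxi-02.example.com'))
    # or: task = storage_vmotion(node, find_obj(vim.Datastore, 'atmos-data-ds2'))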
Observations
Testing showed that there was no operational impact on the Atmos nodes because of
vMotion. For large objects, the create latency showed an improvement when
compared to the baseline testing. However, for small objects there was a small
degradation in TPS for the create operations. The vMotion operation took only 28
seconds to finish.
With Storage vMotion, the migration took much more time to complete due to the
large size of a typical Atmos object store.
Figure 26 on page 49 shows the performance of the local object store for baseline
operations without vMotion.
Figure 26. REST TPS without vMotion
Figure 27 shows the performance of the local object store while vMotion was in progress.

Figure 27. REST TPS with vMotion

Figure 28 shows the performance of Atmos nodes between baseline operations and
vMotion for large objects.

Figure 28. Performance comparison for large objects

Figure 29 shows the performance of Atmos nodes between baseline operations and
vMotion for small objects.

Figure 29. Performance comparison for small objects
Conclusions
The performance of the Atmos nodes was not affected by vMotion operations.
For both vMotion and Storage vMotion based migrations, the REST workload
continued execution without error while the migration was in progress.
For Storage vMotion based migration, given the typically large size of an Atmos object
store, the migration took much more time. Therefore, consider an offline method to
shorten the time required to migrate the Atmos nodes.

Chapter 4 Atmos on VNX Performance
This chapter presents the following topics:
Introduction ..................................................................................... 54
Atmos on VNX performance: file and block ................................................ 54
Atmos on VNX performance: object and non-object .................................... 56


Introduction
This chapter reviews some of the performance aspects of the Atmos on VNX solution.
This chapter covers the following topics:
1. Comparison of performance between file-based and block-based
deployments of this solution.
2. Performance study of a mixed environment that includes both object and
non-object workloads on the same VNX platform.
Atmos on VNX performance: file and block
This section gives a detailed overview of the performance of Atmos VE on NFS and FC
configurations. The performance of Atmos VE when deployed on FC LUNs and NFS file
systems was compared.
Test method
The storage configurations for both test cases were similar. The section Storage
layout: file-based storage (NFS) on page 28 and the section Storage layout:
block-based storage (FC) on page 30 provide more information on storage
provisioning for NFS and FC, respectively.
For the NFS configuration, NFS file systems were created on the storage pools and
exported to the hosts with the USM feature. The Atmos VE nodes were deployed on
these NFS datastores.
For the FC configuration, FC LUNs were provisioned on the storage pools and
presented to the hosts as VMFS volumes with the USM feature. The Atmos VE nodes
were installed on these VMFS datastores.
The performance of the Atmos VE nodes was measured by running a REST-based
workload that was simulated by the Grinder tool. Grinder was configured to generate
create and read REST operations for large objects (16 MB) and small objects (8 KB).
Appendix A, Using Grinder to Generate REST Workload on Atmos, provides more
information on the Grinder tool and how it is used to test the REST workload with
Atmos.
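
For readers who want to reproduce a single operation of this workload by hand, the sketch below shows the general shape of a signed Atmos REST object-create request in Python. The node address, UID, and shared secret are placeholders, and the canonical string is simplified; the EMC Atmos Programmer's Guide is the authority on the exact signature algorithm.

    # Sketch: create one object through the Atmos REST interface (simplified).
    import base64, hashlib, hmac, time
    import requests

    ATMOS_NODE = 'https://192.168.1.11'         # placeholder node address
    UID = 'tenant1/app-user'                    # placeholder full token UID
    SHARED_SECRET = 'c2VjcmV0LXBsYWNlaG9sZGVy'  # placeholder base64 secret

    def create_object(payload):
        date = time.strftime('%a, %d %b %Y %H:%M:%S GMT', time.gmtime())
        emc_headers = {'x-emc-date': date, 'x-emc-uid': UID}
        # Canonical string: verb, content-type, range, date, resource, then
        # the sorted x-emc-* headers (simplified; see the programmer's guide).
        canonical = '\n'.join(
            ['POST', 'application/octet-stream', '', date, '/rest/objects'] +
            ['%s:%s' % (k, v) for k, v in sorted(emc_headers.items())])
        key = base64.b64decode(SHARED_SECRET)
        signature = base64.b64encode(
            hmac.new(key, canonical.encode('utf-8'), hashlib.sha1).digest())
        headers = dict(emc_headers)
        headers['Date'] = date
        headers['Content-Type'] = 'application/octet-stream'
        headers['x-emc-signature'] = signature.decode('ascii')
        r = requests.post(ATMOS_NODE + '/rest/objects', data=payload,
                          headers=headers, verify=False, timeout=30)
        r.raise_for_status()
        return r.headers['location']            # path of the new object

    # print(create_object(b'x' * 8192))         # one 8 KB object, as tested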
Result analysis
Testing showed that the performance of the Atmos VE nodes was better when
deployed on VMFS datastores than on NFS datastores.
Figure 30 shows the comparison of performance of local object store (in TPS) for NFS
and FCoE configurations while the REST workload was in progress.

Figure 30. Performance of object store for NFS and FCoE configurations
Figure 31 shows the comparison of performance of small objects (REST workload) for
NFS and FCoE configurations as reported by the Grinder tool.

Figure 31. Performance of small objects for NFS and FCoE configurations
For small objects, the performance was better on VMFS datastores than on the NFS
datastores. Also, the latency for FCoE was much smaller than the latency of NFS.
Figure 32 shows the comparison of performance of large objects that use REST
workload for NFS and FCoE configurations.

Figure 32. Performance of large objects for NFS and FCoE configurations
The performance of the large objects was the same for NFS datastores and VMFS
datastores. The response time was also the same for both the configurations.
Conclusion
The performance of Atmos nodes was better on FC LUNs when the small-object
workload was executed.
With the large-object workload, the performance of the Atmos nodes was similar on
both NFS and VMFS datastores.
Atmos on VNX performance: object and non-object
This section provides a detailed overview of the performance of Atmos VE with a
mixed workload that includes both object and non-object workloads.
The performance of the Atmos VE nodes was measured while application servers
running the non-object workload were running alongside the Atmos VE nodes. In this
configuration, the servers and the VNX platform were subject to both object and
non-object workloads.
Test method
The performance of the Atmos VE nodes was measured when two and four non-object
application servers were deployed.
In this case also, the performance of the Atmos VE nodes was measured by running a
REST-based workload that was simulated by the Grinder tool. Grinder was configured
to generate create and read REST operations for large objects (16 MB) and small
objects (8 KB). This test was performed for both NFS and FC storage.
Appendix A, Using Grinder to Generate REST Workload on Atmos, provides details of
the Grinder tool and how it is used to test the REST workload with Atmos.
The section Configure VNX storage on page 27 provides details on how to provision
storage and deploy Atmos nodes on NFS and FC.
Table 4 explains the configuration of the non-object application servers.
Table 4. Application server configuration specifications
Virtual machine component    Setting
OS                           Windows Server 2008 SP2
CPU                          1
Memory                       1 GB
Disk1                        10 GB (OS)
Disk2                        100 GB (data)
Other specifications         Antivirus (AV) updated; 50 GB of data on the data disk
The application servers were installed on a storage pool that is different from the
storage pools used by the Atmos nodes. Based on best practice, the boot and data
virtual disks were placed on different datastores. An OLTP-style IOmeter workload
(mix-8k-50%read-random) was generated on these application servers.
Result analysis
After several iterations, the tests showed that the performance of the Atmos VE
nodes with a non-object workload on the VNX was better when deployed on VMFS
datastores.
The performance of the object stores was almost the same for both the NFS and FC
iterations.
Figure 33 shows the comparison of performance (in TPS) of local object store for NFS
and FCoE configurations.

Figure 33. Performance of object store for NFS and FCoE configurations
The performance of the read operations was better on VMFS datastores than on NFS
datastores for small objects. The latency for FC was also much smaller than the
latency for NFS. However, for create operations, the performance degraded for VMFS
datastores as the number of application servers increased. The performance of the
object stores for both local and remote was the same.

Figure 34 shows the comparison of performance of small objects for NFS and FCoE.

Figure 34. Small objects in NFS and FC datastores: REST and OLTP workload
Figure 35 shows the comparison of performance of large objects for NFS and FCoE.

Figure 35. Large objects in NFS and FC datastores: REST and OLTP workload
The performance of the large objects was the same for NFS datastores and VMFS
datastores. The response time was also the same for both configurations. The
response time of the object stores for both local and remote was also the same.
Conclusion
The performance of the Atmos nodes was better on FC LUNs when the read
operations of the small-object workload were run. However, the performance
degraded for create operations of the small-object workload as the number of
application servers increased.
With the large-object workload, the performance of the Atmos nodes was similar on
both NFS and VMFS datastores.

Chapter 5 Storage Efficiency
This chapter presents the following topics:
Introduction ..................................................................................... 62
Storage efficiency with thin provisioning .................................................... 62
Storage efficiency with compression .......................................................... 68


Introduction
This chapter describes the storage efficiency considerations when the Atmos on VNX
solution is deployed. This chapter explains how to leverage VNX storage efficiency
technologies to improve the overall storage efficiency and reduce the storage costs
for this solution. The focus is on thin provisioning and compression technologies of
VNX platform for File and VNX platform for Block.
Storage efficiency with thin provisioning
This section explains how to use the VNX thin provisioning technology to enhance the
storage efficiency of the Atmos on VNX solution. Thin provisioning is a method to
optimize the utilization of available storage. It allocates storage based on demand
instead of allocating the entire storage at the beginning. In the Atmos unified system,
storage savings were achieved by deploying thin provisioned LUNs and file systems.
The performance of the Atmos unified system was tested on both NFS and FC
configurations. However, the deployment process was different for the two
configurations.
Configure thin provisioning on file-based object store
When file systems are created with the USM feature, thin provisioning is enabled by
default. The EMC VSI for VMware vSphere: Unified Storage Management Product
guide available on the EMC Online Support website provides information on how to
modify the default configuration for thin provisioning with the USM feature and how
to disable the thin provisioning feature.
After the thin file systems and corresponding NFS datastores were created with the
USM feature, the Atmos nodes were deployed on the thin file systems.
Note In this solution, only Atmos data and metadata disks were deployed
on the thin file systems. Atmos boot disks were deployed on thick
provisioned file systems.
Configure thin provisioning on block-based object store
To create thin LUNs, install the Compression Enabler and the Thin Provisioning Enabler on the VNX platform.
In the Unisphere Create LUNs wizard, select Thin to create thin LUNs as shown in
Figure 36.

Figure 36. Thin provisioned FC
Test method
After the Atmos nodes were installed on the thin LUNs and the Grinder script was
executed, tests were conducted to measure the performance of the object store when
deployed on thin provisioned storage. Also, the size of the LUNs was compared to
find the difference. Similar iterations were performed for the file-based object stores.
Result analysis
For both NFS and FC storage, 93-95 percent initial storage savings were achieved.
Table 5 shows the details of space allocations for NFS configurations.
Table 5. Thin provisioned: NFS
                                      Atmos data file system   Atmos metadata file system
Max size                              8 TB                     2 TB
Initial size                          20 GB                    10 GB
Size after Atmos node installation    370 GB                   10 GB
Size after 5th iteration              550 GB                   59 GB
Size after 6th iteration              616 GB                   59 GB

Table 6 shows the details of space allocations for FC configurations.
Table 6. Thin provisioned: FC
                                      Atmos data LUN   Atmos metadata LUN
Max size                              16 TB            4 TB
Size after Atmos node installation    1040 GB          101 GB
Size after 2nd iteration              1349 GB          101 GB
Size after 3rd iteration              1560 GB          101 GB
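
As a sanity check, the 93-95 percent figure follows directly from the allocations in Table 5 and Table 6 (allocated size after Atmos node installation compared with the maximum size); a minimal calculation:

    # Sketch: initial thin provisioning savings implied by Tables 5 and 6.
    cases = {
        'NFS data file system': (370, 8 * 1024),   # GB allocated, GB maximum
        'FC data LUN':          (1040, 16 * 1024),
    }
    for name, (allocated, maximum) in cases.items():
        savings = 100 * (1 - allocated / maximum)
        print('%s: %.0f%% initial savings' % (name, savings))
    # NFS data file system: 95% initial savings
    # FC data LUN: 94% initial savings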

As shown in Table 5 and Table 6, the initial size of the datastores must be about
5 percent of the maximum storage to avoid immediate storage expansion. The initial
capacity of the metadata store depends on the type of application that is running and
the amount of metadata that the application is expected to write.
It was also observed that the data file systems and LUNs expanded after each Grinder
test run. No performance change or fluctuations were noted at the object stores for
any of the test iterations.

Figure 37 shows the performance of the object stores for thick provisioned (baseline)
and thin provisioned scenarios.

Figure 37. Performance of object store: thin provisioned NFS
There was no significant difference in performance of the object stores.
Figure 38 shows the performance of the object stores for thin provisioned scenarios
of FC and NFS.

Figure 38. Performance of object store: thin provisioned FC
There was no significant difference in the performance of the object stores for the
NFS and FC configurations.

Figure 39 shows the comparison of performance of large objects on FCoE.

Figure 39. Large objects: baseline compared with thin provisioned FCoE
Figure 40 shows the comparison of performance of large objects on NFS.

Figure 40. Large objects: baseline compared with thin provisioned NFS
When the performance of thin provisioned Atmos nodes was compared with the
performance of thick provisioned (baseline) Atmos nodes, no difference was
observed. Hence, it is better to provision thin storage to achieve storage savings
without affecting performance.
Figure 41 shows the performance comparison of small objects for FCoE.

Figure 41. Small objects: baseline compared with thin provisioned FCoE
Figure 42 shows the performance comparison of small objects for NFS.

Figure 42. Small objects: baseline compared with thin provisioned NFS
Figure 43 shows the performance of thin provisioned NFS and FCoE for small objects.

Figure 43. Small objects: thin provisioned NFS compared with FCoE
Figure 44 shows the performance of thin provisioned NFS and FCoE for large objects.

Figure 44. Large objects: thin provisioned NFS compared with FCoE
The performance is the same for thin provisioned FC and NFS Atmos nodes for large
objects. However, for small objects, the behavior is different. The thin provisioned FC
Atmos nodes performed better than the thin provisioned NFS Atmos nodes for create
operations. The thin provisioned NFS Atmos nodes performed better than thin
provisioned FC Atmos nodes for read operations.
Conclusion
Storage savings of 93-95 percent were achieved for the thin provisioned NFS and FC
configurations of the Atmos unified system.
Storage efficiency with compression
This section explains the effect of compression on VNX platform for File and VNX
platform for Block.
Configure compression on file-based object store
To enable compression on file systems, complete the following steps:

1. Install the USM feature.
2. To compress the virtual machines from the VMware vSphere Client, right-click
the selected virtual machine, and select EMC > Unified Storage > Compress.
The compression status appears in the vSphere Client.

Figure 45. Compression file system
Note If the virtual machine is a multidisk virtual machine and the
disks are on different file systems, the feature first enables
compression and then compresses the disks on each file system.

3. To view the compression ratio after compression is complete, right-click the
compressed virtual machine, and then select EMC > Unified Storage > Properties.
The Properties dialog box appears.

Figure 46. Compression ratio
Configure compression on block-based object store
To enable compression, install the Compression Enabler and the Thin Provisioning
Enabler on the VNX platform.
To configure compression on a block-based object store, complete the following
steps in Unisphere:

1. Right-click the storage pool, and then click Create LUN. The Create LUN dialog
box appears.

Figure 47. Create LUN
2. Complete all the fields, and then click Apply. The LUN is created.
3. Right-click the created LUN, and then click Properties. The LUN Properties
dialog box appears.
4. Click the Compression tab.
5. Retain the default compression options.

Storage Efficiency

72 EMC Atmos Virtual Edition with EMC VNX Series
Deployment Guide

72


Depl
oyme

Figure 48. LUN Propertiescompression
6. Click OK to submit a compression operation for this LUN.
7. Use the compressed LUNs summary wizard to monitor all active LUN
compression operations.
Test method
The compression test was performed to measure the effect of using VNX compression
with Atmos VE.
Note LUN compression was tested only for VNX platform for Block.
After the Atmos nodes were installed on the LUNs, several Grinder test runs were
performed to accumulate data on the object store. During this time, all the LUNs that
contained the data and metadata datastores were compressed. Also, the size of the
LUNs was compared to find the difference.
Compression tests were conducted to measure the performance of the object store
when deployed on compressed LUNs.

Other details regarding this test are:
Thin LUNs were used.
Unlike the recommended storage layout for Atmos on VNX platform for Block,
the data and metadata disks were placed on two different storage pools.
Result analysis
The compression of ten 2 TB LUNs took about two days to complete. However, the
impact on the existing workload is minimal while compression is in progress because
compression is a background process in VNX.
The maximum time was spent compressing two LUNs (one LUN from the Atmos data
datastore and another from the Atmos metadata datastore). This is because the other
LUNs contained minimal data.
To minimize the compression time, ensure that the compression operations are
evenly distributed between the storage processors. Also, for ongoing compression
operations, consider increasing the throttle rate of these operations from medium
(the default) to high.
Table 7 shows the details of space allocations for the two LUNs that took the
maximum time to compress.
Table 7. LUN compression: FC
                                   Atmos data LUN   Atmos metadata LUN
LUN size                           2 TB             2 TB
Used space (before compression)    1425 GB          194 GB
Used space (after compression)     334 GB           9 GB
Compression ratio                  77%              95%
There was no impact of compression on the Atmos performance. Table 7 shows that
when compression is enabled, 77 percent storage savings is achieved on the Atmos
data LUN. The compression ratio on the Atmos metadata LUN is high because the
Grinder test tool placed relatively minimal metadata during compression. With an
actual application, a smaller compression ratio is expected.
These storage savings results are specific to the test tool that was used to generate
load on the object store. In general, the measured compression ratio depends mostly
on the type of data that is written to the object store.
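
The savings percentages in Table 7 can be reproduced from the used-space figures; a minimal calculation:

    # Sketch: storage savings implied by Table 7 (used space in GB before
    # and after LUN compression).
    luns = {'Atmos data LUN': (1425, 334), 'Atmos metadata LUN': (194, 9)}
    for name, (before, after) in luns.items():
        print('%s: %.0f%% savings' % (name, 100 * (1 - after / before)))
    # Atmos data LUN: 77% savings
    # Atmos metadata LUN: 95% savings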
Figure 49 shows the performance of the Atmos nodes for large objects, both for
baseline operations and after LUN compression is enabled.

Figure 49. Performance of large objects: baseline and LUN compression
The throughput and latency of large objects were the same for compressed and
uncompressed LUNs.
[Figure 49 data: LUN compression, REST workload only, large objects (16 MB). FCoE
baseline throughput: 40 MB/s (create) and 83 MB/s (read); with LUN compression:
41 MB/s (create) and 80 MB/s (read). Test notes: Elias e-patch code, no application
servers, FCoE configuration, Unified plug-in used, SP cache cleared before the test
run.]
Figure 50 shows the performance of the Atmos nodes for small objects, both for
baseline operations and after LUN compression is enabled.

Figure 50. Performance of small objects: baseline and LUN compression
Figure 49 and Figure 50 show that there was no performance impact on the REST
workloads and that a fair amount of storage savings was achieved due to LUN
compression.
Conclusions
The performance of the Atmos nodes was only marginally affected by the LUN
compression operations. The storage savings for the object store data was around
75 percent.
Note This is largely dependent on the data type written to the object store.
Therefore, storage savings with other applications can be different.
The Grinder workload continued execution without error after LUN compression was
enabled. For a typically sized object store (several tens of TB or more), the
compression is expected to take several days to complete.

[Figure 50 data: LUN compression, REST workload only, small objects (8 kB). FCoE
baseline: 89 files/s (create) and 504 files/s (read); with LUN compression: 86 files/s
(create) and 491 files/s (read), with latency in msec. Test notes as for Figure 49.]





Appendix A Using Grinder to Generate REST Workload on Atmos

This appendix presents the following topic:
Grinder tool ..................................................................................... 78


Grinder tool
Grinder is a Java-based load-testing framework that makes it easy to run a
distributed test using many load injector machines. Grinder, which is an open-source
project, can load-test any system that has a Java API. This includes common cases
such as HTTP web servers, SOAP and REST web services, application servers (CORBA,
RMI, JMS, EJBs), and custom protocols.
Grinder is a framework for running test scripts across a number of machines. The
framework consists of the following three types of processes:
Worker: Interprets Jython test scripts and performs tests with a number of
worker threads
Agent: Manages worker processes
Console: Coordinates the other processes, collates and displays statistics,
and performs script editing and distribution
Because Grinder is developed in Java, each of these processes is a Java virtual
machine (JVM).
For testing the Atmos on VNX solution, a Grinder test stress script was used. This
script used the Grinder framework to test the REST-based workload.
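
The EMC perf-regression script itself ships with the test kit, but every Grinder test shares the same structure: a Jython script that defines a TestRunner class, executed by each worker thread. The following is a minimal hedged sketch, not the EMC script; the endpoint is a placeholder, and a real Atmos test would add the signed authentication headers.

    # grinder_rest.py: minimal Grinder (Jython) worker script sketch.
    from net.grinder.script.Grinder import grinder
    from net.grinder.script import Test
    from net.grinder.plugin.http import HTTPRequest

    # One Test object per operation type; Grinder aggregates stats per Test.
    createTest = Test(1, "REST create")
    request = HTTPRequest()
    createTest.record(request)          # instrument the request with the test

    class TestRunner:
        # Grinder instantiates this class once per worker thread.
        def __call__(self):
            result = request.POST("http://atmos-node/rest/objects", "payload")
            grinder.logger.info("HTTP status: %s" % result.getStatusCode())

The script name and the number of agent processes, worker threads, and runs are set in the standard grinder.properties file that the agents read.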
Grinder test driver system configuration
A Grinder virtual machine is required with the following configuration:
Ubuntu Linux operating system (64-bit)
1 CPU
2 GB RAM
20 GB disk
Grinder script configuration
Download the entire perf-regression folder of the Grinder script to the test driver
system. The download target is the folder from which the Grinder script is configured.
Modify the following three configuration files based on the Atmos test environment
(all in perf-regression/conf):
perf.conf
scenario.lst
static.conf

Run a Grinder test and generate report
Run the following script on the test driver under the directory perf-regression/bin:
#nohup ./runPerf.sh -e "grinder" 2>&1 |tee /tmp/perf.testing.log
To generate the test report on the test driver under the directory
perf-regression/bin, run the following script:
#./gen_report.sh <baseline path> <result path> [absolute path of
html report]
For example:
#./gen_report.sh /perf-archives/1.4.0-b55678/2011-02-10-14-50-18
/perf-archives/1.4.0-b55678/2011-02-10-14-50-18 ./test1.htm
