
Storage Decisions 2013: Resource Guide

A compilation of our best educational content from our experts to prepare you for a successful 2013


Dear IT Professional,

The package of information you just downloaded was written and assembled as a takeaway for attendees of the Storage Decisions conference held in San Francisco at the end of 2012. The enthusiasm we saw as folks looked over this material at the conference convinced us to make it available to a wider audience, so we posted the PDF of the report online. Included in this report are:

* The 5 most-read stories from the SearchStorage.com network in 2012
* 4 featured articles from our independent expert speakers from Storage Decisions San Francisco 2012
* 2013 predictions from our speakers, with insights into how the storage landscape is changing and what that means for IT pros


We hope you enjoy this package of information. Learn more about our 2013 lineup of Storage Decisions events, including conferences, one-day seminars and special dinner series events, at our newly designed Storage Decisions website. We hope to see you at one of our events this year.

Best Regards,
The Storage Decisions Delegate Relations Team


P.S. Don't forget to join our online community and be the first to hear breaking news and event announcements! Follow us on Twitter @StorageEvents and connect with Storage Decisions on LinkedIn!



About Storage Decisions conferences & seminars


Storage Decisions conferences and seminars offer a comprehensive curriculum built to address the specific challenges and complex projects facing today's IT professional. Storage Decisions events facilitate education, peer-to-peer networking and vendor interaction that storage pros can use to optimize existing operations. Every Storage Decisions event guarantees cutting-edge technical advice and practical guidance that can be implemented immediately!

Contents

Top Storage Articles from 2012
* Windows 8 Backup: Microsoft Improves Backup Utilities
* Building a DR site vs. outsourcing disaster recovery
* Defining cloud storage: The most popular cloud terms
* vSphere file systems: VMFS 5 plus VMFS vs. NFS
* When using SSD is a bad idea

Speaker Resources
* Find the best spot for flash SSD storage
* The value of VMware's Changed Block Tracking (CBT)
* Long-term archives require detailed planning
* Confusion over storage consolidation

2013 Speaker Predictions
* The biggest storage challenges of 2013
* The changing role of today's storage pro
* Avoiding storage pitfalls in 2013
* What's the next big thing in storage?



Top Storage Articles of 2012


These 5 articles earned the most eyeballs of any new content produced in the SearchStorage.com network this year. Their popularity illustrates some common pain points and areas in need of clarification for storage pros, including backing up Windows 8 deployments, sifting through cloud definitions, use cases for solid-state storage and more.

Windows 8 Backup: Microsoft Improves Backup Utilities


Brien Posey (Click here to read this on SearchStorage.com)

Anyone who has ever tried backing up Windows 7 or Windows Server 2008 using the built-in backup utilities knows that Windows Backup (and Windows Server Backup) leave a lot to be desired. Although these backup tools are alive and well in Windows 8 and Windows Server 2012, Microsoft has made some big changes that should make the native backup utilities much more tolerable.

File History backup

The biggest change for Windows 8 backup is the introduction of a new feature called File History. File History replaces a Windows 7 feature known as Previous Versions and allows the easy restoration of files and documents. File History is designed to use an external hard drive as a backup medium, but Microsoft also gives you the option of backing up Windows 8 machines to a network share.

To enable File History in Windows 8, open the Control Panel and click System and Security, followed by File History. Once the File History applet opens, the first thing you must do is specify a location for storing your file history, as shown in Figure A.

Figure A: You must specify a backup location

At this point, you can enable File History by clicking the Turn On button. Before doing so, however, I recommend clicking the Advanced Settings link. As you can see in Figure B, the Advanced Settings page lets you control how frequently Windows saves data, the size of the offline cache and the retention time for previous file versions.

Figure B: You can fine-tune the backup through the Advanced Settings dialog box

Incidentally, File History is designed to back up the Windows libraries (such as documents, pictures and videos).
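To make the File History behavior described above more concrete, here is a rough conceptual sketch in Python of a versioned copy with a retention window: each run copies the watched folders to a timestamped folder on a backup drive and prunes versions older than the retention period. This is not Microsoft's implementation; the folder paths, drive letter and retention period are assumptions chosen purely for illustration.

```python
# Conceptual sketch of a File History-style versioned backup (not Microsoft's code).
# Paths, the backup drive and the retention period below are illustrative assumptions.
import shutil
from pathlib import Path
from datetime import datetime, timedelta

WATCHED = [Path.home() / "Documents", Path.home() / "Pictures"]
BACKUP_ROOT = Path("E:/FileHistory")       # e.g., an external hard drive
RETENTION = timedelta(days=365)            # keep previous versions for one year

def run_backup():
    stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S")
    for folder in WATCHED:
        if folder.exists():
            # each run produces a new, timestamped copy of the library folder
            shutil.copytree(folder, BACKUP_ROOT / stamp / folder.name)
    prune_old_versions()

def prune_old_versions():
    if not BACKUP_ROOT.exists():
        return
    cutoff = datetime.now() - RETENTION
    for version in BACKUP_ROOT.iterdir():
        if datetime.strptime(version.name, "%Y-%m-%d_%H%M%S") < cutoff:
            shutil.rmtree(version)         # drop versions older than the retention window

run_backup()   # File History itself runs on a schedule; this sketch runs once
```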

The File History applet contains an Exclude link that you can use to exclude specific items from being backed up, but there is no option for adding items outside of the libraries to the backup.

So what do you do if you need to perform a full system backup? In previous versions of Windows, you would accomplish this by using Windows Backup. I have heard that Windows Backup still exists in Windows 8, but that it has been deprecated. In the current build, however, I have been unable to find Windows Backup.

Besides Windows Backup, Microsoft gives you a few other options for recovering your system. If you open the File History applet, you will see a link to restore personal files. This link is used to recover items that have been backed up as part of the file history. If you need a more comprehensive recovery, however, click the Recovery link instead. This opens the Recovery applet shown in Figure C.

Figure C: The Recovery applet gives you multiple options for system recovery

One option is called Refresh, which essentially installs a clean copy of Windows. In doing so, Windows will preserve data that is stored in your libraries, and it will retain any apps that were installed from the Windows App Store. All other applications, however, will be deleted (although Windows does create a report telling you which applications have been removed so that you can reinstall them).

Another option is Reset, which is designed to return the computer to a pristine state. Choosing Reset removes everything, including user files and applications, and installs a clean copy of Windows. Although Reset isn't exactly a restoration mechanism, a badly corrupted PC could be reset and then the user files could be recovered from File History.

What about Windows Server?

Things work a little bit differently in Windows Server 2012 (formerly known as Windows Server 8). File History does not seem to exist in Windows Server 2012, but Windows Server Backup is alive and well. Thankfully, Windows Server Backup has evolved quite a bit since its debut in Windows Server 2008. One of the major shortcomings of the current version of Windows Server Backup is that if you use it to back up a Hyper-V host, there is no way to restore individual virtual machines or granular content within a virtual machine. The new version of Windows Server Backup makes it easy to restore an individual virtual machine, as shown in Figure D. If you need to restore files or folders from within a virtual machine, Windows Server Backup 2012 lets you restore a virtual hard disk file to an alternate location and then mount that virtual hard disk outside of the virtual machine.

It's then possible to copy files and folders from the newly mounted virtual hard disk to the virtual machine.

Figure D: Windows Server Backup 2012 gives you the ability to restore virtual machines

Conclusion: The built-in backup mechanisms in both Windows 8 and Windows Server 2012 have changed considerably since the previous versions. Of course, both of these operating systems are still in beta testing, and Microsoft could conceivably make further changes prior to the final release.

About the author: Brien M. Posey, MCSE, has received Microsoft's MVP award for Exchange Server, Windows Server and Internet Information Server (IIS). Brien has served as CIO for a nationwide chain of hospitals and has been responsible for the Department of Information Management at Fort Knox. You can visit Brien's personal website at www.brienposey.com.

Building a DR site vs. outsourcing disaster recovery


Paul Kirvan (Click here to read this on SearchDisasterRecovery.com)

Depending on your business continuity needs, it may be necessary to set up a secondary DR site. And with a remote DR site, your organization will have to decide whether to use a third-party vendor to provide DR service or to build your own site. In our latest podcast, Paul Kirvan, an expert in the DR industry and board member with the Business Continuity Institute's USA chapter, explains some of the pros and cons of each option.

What are the pros and cons of building your own disaster recovery site versus contracting out the service?

One of the major challenges in disaster recovery planning is how to recover business operations to the point where business can be returned to as close to normal as possible following a disruptive incident. One of the popular strategies is to have an external site that can support business systems, applications and customer data until the primary data center can be returned to normal operation. Two approaches to this challenge are to build your own backup data center or similar facility, or to contract out for these services with suitably qualified third-party organizations.

Key points in favor of building your own backup facility are management control of these specialized resources, utilization of them as alternate processing centers to handle heavy usage periods, security controls managed by your organization, and reduced likelihood of your data being intermingled with other organizations' data. Negative factors include start-up costs associated with building the facility, increased real estate costs and general overhead for the backup space, and costs for staffing the backup site.

Points in favor of outsourcing disaster recovery include minimal or no start-up costs, shared costs of staffing and technology resources, managed security at the site and on-site expertise available 24/7. Downsides of a third-party solution include potential hidden costs or fees associated with declaring a disaster and potential unavailability of facilities if too many subscribers are already using the backup center. Among the key issues to address are costs (both upfront and ongoing), availability of resources (both human and technology) when needed, additional unplanned costs following a disaster, and contractual issues.

Define a hot site, a cold site and a warm site for the purposes of disaster recovery.

According to the international standard ISO 24762, "Information technology: Guidelines for information and communications technology disaster recovery services," we can define these three important options as follows:

A cold site is a type of data center that has its own associated infrastructure (power, telecommunications and environmental controls) designed to support IT systems, applications and data, which are installed only when disaster recovery plans are activated.

A warm site is largely a data center equipped with some or all of the equipment found in a working data center, including hardware and software, network services and supporting personnel, but without customer applications and data; these are introduced at the time DR plans are activated.

Finally, a hot site is a fully equipped data center with the required computing hardware and software and supporting personnel; it has customer data and applications, and it is fully functional and ready for organizations to operate their IT systems when DR plans are activated.

When designing the physical plant, is it enough that it is remote?

The proximity of a backup data center to the primary data center is an important initial design consideration. However, while a sufficient distance between primary and backup centers is key, it is also important to consider the impact of the distance between facilities on your staff. For example, members of your emergency recovery team may be reluctant to go to work at a significant distance for possibly an extended period of time. They may be concerned about their families or other issues.

One of the critical design issues in building or selecting a backup facility is the source of commercial electric power. Often we are advised to find a location that is on a different power grid than the primary data center.

While that is certainly desirable, it is not the only deciding factor.

Do you need to design it so it can resist natural or man-made disasters?

The first thing to do in this situation is to conduct a risk assessment of the areas in which a backup facility may be located. Carefully examine the surrounding region, its utilities, transportation, environment, weather and even crime rates. Review risk assessment findings to pinpoint areas with the least likelihood of disruptive events, knowing that it's still possible for the unexpected to occur. Any facility selected should be reasonably secure and have good physical and information security provisions to prevent unauthorized access. If you use an architect, be sure he or she has experience designing data centers and similar facilities.

How far away should it be?

A distance of 10 to 50 miles from the primary data center ought to be minimally acceptable, provided risk assessments of the prospective backup site locations are conducted. Always consider the impact of a remotely located data center on your staff, especially if it may be necessary for staff to relocate their place of work to a remote location for an extended period of time.

Particularly when contracting for a DR site, should there be provisions in your agreement to practice a DR scenario with the site? Do vendors typically allow this?

Most third-party recovery site vendors encourage regular testing of the facility in accordance with a scheduled DR plan test. Ensure that your vendor will support at least one annual test of the facility. It's good if you can schedule more than one annual test, but this impacts your DR budget.

What about mobile DR sites? We hear about some vendors parking a tractor-trailer with a portable data center in the back and using that. Is that a realistic option, or is a physical plant a better call, no matter what?

Mobile recovery solutions present an excellent alternative to fixed hot, warm or cold sites, as well as to building your own backup data center. Evaluate the costs for a mobile recovery solution based on what you think you will need in a disaster, remembering that the mobile trailer will probably need to travel some distance to your site after you have declared a disaster. Once the trailer has arrived, it will take time to set it up, and you will need to have provisions in place to connect power and communications to the trailer so it can begin functioning. Make sure the organization you contract with for such a service has enough equipped trailers to satisfy your DR requirements.

And make sure the firm has plenty of experience in disaster situations.

About the author: Paul Kirvan, CISA, FBCVI, CBCP, has more than 20 years of experience in business continuity management as a consultant, author and educator. He has been directly involved with dozens of IT/telecom consulting and audit engagements covering governance program development, program exercising, execution and maintenance, and RFP preparation and response. Kirvan currently works as an independent business continuity consultant/auditor and is the secretary of the Business Continuity Institute USA chapter. He can be reached at pkirvan@msn.com.

Defining cloud storage: The most popular cloud terms


Rachel Kossman (Click here to read this on SearchCloudStorage.com)

So many cloud terms to learn, so little time. If you're an IT pro who will be advocating for cloud storage project funding, or if you just want to get a head start on understanding the technology, these cloud storage definitions will help you gain a solid understanding of the fundamentals. These cloud terms cover everything from service-level agreements (SLAs) to the nuts and bolts of a cloud storage infrastructure, and they can help you sound like a cloud storage professional when talking to team members, your chief information officer and your vendor.

1. What is cloud storage? It's no surprise this term is at the top of our list, because analysts and vendors alike are struggling to determine what cloud storage truly is. Cloud storage options are broken into three categories -- public cloud, private cloud and hybrid cloud -- each with its defining factors to help distinguish it from the others.

2. What is a cloud storage SLA? A cloud storage service-level agreement is crucial for any organization looking to move to cloud storage. This contract between a customer and their cloud storage service provider provides guarantees and details the services being offered, such as 99.9% uptime (a quick downtime calculation appears at the end of this list).

3. What is cloud washing? The chances that you've been cloud washed are pretty good. It means a vendor has slapped a cloud label onto a product that isn't truly a cloud offering.

For tips on how to avoid cloud washing, listen to our expert podcast with ESG senior analyst Terri McClure.

4. What is cloud insurance? This is a risk management approach often included in a cloud service-level agreement. This contractual cloud insurance policy guarantees financial compensation if the service provider causes downtime or failures.

5. What is a green cloud? Part of the buzz surrounding cloud technology is the opportunity to reduce your organization's data footprint. Research indicates that the potential for green storage benefits exists for organizations switching to cloud storage solutions, including a nearly 40% reduction in data center energy worldwide by 2020.

6. What is cloud drive storage? This refers to mounting a cloud storage option so that it simply appears as a drive letter for users in the interface. This allows the server to treat the cloud storage drive as if it were on direct-attached storage (DAS) or a shared storage filer.

7. What is a cloud storage service? Any company that provides cloud storage services is considered a cloud storage service provider. These services can range from those supplied by public cloud storage, such as Amazon S3 or Windows Azure, to offerings from private cloud storage companies such as Hitachi, Nasuni or StorSimple.

8. What is a cloud storage gateway? Cloud storage gateways allow users interested in a public cloud solution to form a bridge of sorts between local applications and remote cloud-based storage. This is a necessity in certain cases because legacy applications and public cloud technologies use different protocols, making them incompatible. Be sure to check out the recent in-depth podcast on cloud gateways that assistant site editor Rachel Kossman conducted with Gartner research director Gene Ruth.

9. What is cloud file storage? This Internet-based storage option, often called CFS, is billed as a pay-per-use service that's best for unstructured or semi-structured data: documents, emails, spreadsheets, presentations and so on. ESG senior analyst Terri McClure shared her views on the benefits and best use cases of cloud document sharing in a recent SearchCloudStorage.com podcast.

10. What is a cloud storage infrastructure? Simply put, a cloud infrastructure is the software and hardware components needed to meet the requirements of a cloud storage model. These requirements can include virtualization software, servers and operating systems; what differentiates a cloud storage solution from a normal storage solution is that the system must be able to access files remotely through a network.
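To make the uptime guarantee mentioned in the SLA definition above concrete, here is the quick calculation referenced earlier showing how much downtime a given availability percentage actually allows. The percentages are only examples, not any provider's actual SLA terms; this is a minimal Python sketch.

```python
# How much downtime does an availability guarantee allow per month and per year?
# The percentages are examples, not any provider's actual SLA terms.
HOURS_PER_MONTH = 730    # average month
HOURS_PER_YEAR = 8766    # average year

for uptime in (0.999, 0.9999):
    minutes_per_month = (1 - uptime) * HOURS_PER_MONTH * 60
    hours_per_year = (1 - uptime) * HOURS_PER_YEAR
    print(f"{uptime:.2%} uptime allows ~{minutes_per_month:.0f} min/month "
          f"(~{hours_per_year:.1f} h/year) of downtime")
```

At 99.9% uptime that works out to roughly 44 minutes of allowable downtime per month, a figure worth knowing before you sign.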

vSphere file systems: VMFS 5 plus VMFS vs. NFS


David Davis (Click here to read this on SearchVirtualStorage.com)

Storage admins are no doubt familiar with traditional Windows file systems (NTFS) and Linux file systems (ext3) running on servers with these operating systems installed. But they likely aren't as conversant with the most widely used file system for VMware's vSphere/ESXi hypervisor: VMFS.

VMFS owes its status as the most popular file system for VMware to the fact that it was purpose-built for virtualization. By enabling advanced vSphere features (like Storage vMotion) and powerful VM features (such as snapshots), this cluster-aware file system is a key (but often overlooked) piece of vSphere that must be considered to ensure a successful virtual infrastructure. And the latest version, VMFS 5, delivers a number of updates.

You may be wondering why we need a new file system just for vSphere when NFS can suffice. There are a number of things that make VMFS special and necessary. Consider the following:

* Unlike other file systems, VMFS was designed solely for housing virtual machines.
* Multiple ESXi servers can read/write to the file system at the same time.
* ESXi servers can be connected to or disconnected from the file system without any disruption to the other servers using it or the virtual machines inside.
* VMFS on-disk file locking ensures that two hosts don't try to power on the same virtual machine at the same time.
* It was designed to have performance as close as possible to native SCSI, even for demanding applications.
* In the event of a host failure, VMFS recovers quickly thanks to distributed journaling.
* VMFS can be run on top of iSCSI or Fibre Channel.
* Unlike the file-level NFS, VMFS is a block-level file system.

* Point-in-time snapshots can be taken of each virtual machine to preserve the OS and application state prior to installing patches or upgrades. Snapshots are also used by backup and recovery applications to perform backup without downtime to the VM.
* Running low on disk space? VMFS allows you to hot-add new virtual disks to running virtual machines.

You can't run Windows computers on VMFS, but you can run lots of Windows virtual machines, stored in virtual machine disk files (known as VMDKs), inside VMFS. You can think of the virtual disks that represent each virtual machine as being mounted SCSI disks. This enables you to run any operating system inside a virtual machine disk on your SAN, even if that OS is DOS and wasn't designed to run on, say, an iSCSI SAN.

VMFS vs. NFS

While VMware supports the use of both VMFS (SAN-based block storage) and NFS (NAS-based file storage) for vSphere shared storage, VMware has usually supported VMFS first (and NFS later) when new features are released. Today, there aren't significant differences between using NFS and VMFS, but most people at VMware recommend VMFS (which makes sense, as the company designed it just for this purpose). For more information on the VMFS vs. NFS debate, see this post by NetApp's Vaughn Stewart.

No matter which option you choose, by using shared storage with vSphere, you can use the following advanced features (assuming your version of vSphere is licensed for them):

* vMotion, which moves running virtual machines from one host to another
* Storage vMotion, which moves running virtual machine disk files from one vSphere datastore to another
* Storage Distributed Resource Scheduler (SDRS), which rebalances virtual machine disk files when a vSphere datastore is running slow (high latency) or is running out of storage capacity
* vSphere High Availability, whereby, if a host fails, the virtual machines are automatically restarted on another host

Keep in mind that shared storage and VMFS (or NFS) are required to perform these advanced features. While you might have local VMFS storage on each host, that local storage won't enable these features on its own, unless you use a virtual storage appliance (VSA) such as the vSphere Storage Appliance to achieve shared storage without a physical disk array.

New in VMFS 5

With the release of vSphere 5, VMFS has been updated with a number of new features. They are:

* New partition table: a GUID partition table (GPT) is used instead of a master boot record (MBR)
* Larger volume size, with support for volumes as large as 64 TB
* Unified block size of 1 MB
* Smaller sub-block size
* The capability to upgrade from VMFS 3 to VMFS 5 without disruption to hosts or VMs

While the benefits from these changes may not be immediately evident, they offer the largest volume size and the most efficient virtualization file system yet.

Configuring VMFS

Assuming your server virtualization environment uses VMFS, how do you know what your capacity is, what your VMFS version is and what your block size is? It's easy: Go to the vSphere Client and then to Datastores and Datastore Clusters Inventory. Click on each datastore and you'll see basic information on the Summary tab. However, you'll see more detailed information by clicking on the Configuration tab, as shown below. As you can see in the screenshot, this VMFS local storage is using VMFS 5.54 and has a block size of 1 MB. There is just one path to the datastore, and it has a single extent. If a datastore is running an older version of VMFS (like VMFS 3), this is where you would upgrade it. If you click on the Properties for the datastore, shown below, you can manage the paths, add more extents to increase the size of the volume or enable Storage I/O Control (SIOC).

Using the Datastore Browser (accessed from the Datastore Summary tab), shown below, you can look inside a VMFS or NFS datastore and see what's inside. Unlike with Windows or Linux file systems, you won't see any operating system files inside your VMFS datastores. Instead, you'll just see folders for each of the virtual machines and, inside those, you'll find the virtual machine's VMX configuration file and VMDK file (among other, less important VM files).

David Davis is the author of the best-selling VMware vSphere video training library from TrainSignal. He has written hundreds of virtualization articles on the Web, is a vExpert, VCP, VCAP-DCA, and CCIE #9369 with more than 18 years of enterprise IT experience. His personal website is VMwareVideos.com.
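As a programmatic complement to the vSphere Client walkthrough in the Configuring VMFS section above, similar datastore details can be pulled with the pyVmomi library. This is only a rough sketch under assumptions: the vCenter host name and credentials are placeholders, and certificate checking is disabled, so treat it as lab code rather than the definitive way to do it.

```python
# Hedged pyVmomi sketch: list each datastore's type, capacity, VMFS version and block size.
# Host, user and password are placeholders; certificate checks are disabled for lab use only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        line = f"{s.name}: type={s.type}, capacity={s.capacity / 2**30:.0f} GiB"
        if isinstance(ds.info, vim.host.VmfsDatastoreInfo):   # VMFS-specific details
            line += f", VMFS {ds.info.vmfs.version}, block size {ds.info.vmfs.blockSizeMb} MB"
        print(line)
finally:
    Disconnect(si)
```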

Recommended Speaker Resources


The best spot for flash SSD storage The value of VMwares Changed Block Tracking (CBT) Long-term archives require detailed planning Confusion over storage consolidation

2013 Predictions from our Experts


2012 storage challenges The changing role of the storage pro Avoiding storage pitfalls in 2013 The next big things in storage?

Page 12 of 30

Storage Decisions 2013: Resource Guide

Contents
Top Storage Articles from 2012
Windows 8 Backup DR sites/outsourcing DR Defining cloud storage VMFS 5 When using SSD is a bad idea

When using SSD is a bad idea


Phil Goodwin (Click here to read this on SearchSolidStateStorage.com)

SSD has established its position in the data center. Nearly all major vendors specify a Tier 0 in their best-practice architectures. Server-side SSD is being used to enhance server performance, and storage-side SSD eliminates the boot-storm bottleneck. As with most technologies, though, it's as important to know when not to use it as when to use it. Here are some cases where not to use SSD.

Don't use SSD when applications are not read-intensive. SSD is brilliant for read-access times. It can outperform HDD by 10X or more. There is no free lunch, however, as SSD loses all of its benefits in the write category. Writes not only lag, but they also wear out the SSD memory cells. Memory cells have an average write life, after which the cells begin to burn out (ask your vendor for the details of its specific system). As cells fail, overall performance degrades. Eventually, the SSD must be replaced to restore full performance, and we all know SSD is not cheap. Some vendors do offer extensive warranties. So what is the magic line for a read/write ratio? There probably isn't one, but start with 90/10 as ideal. Application requirements may dictate a compromise in this regard, but knowing the ratio permits IT managers to make a conscious decision. If the ratio is below 50/50, then obviously an HDD would be a better choice; here, from an application performance perspective, the SSD read performance is being offset by the inferior write performance. Finally, if SSD is needed for read performance but writes are an issue, consider some of the vendors that employ wear-leveling mechanisms and minimize write amplification to reduce the impact. SSD size will also be a factor: going cheap on the SSD increases thrashing, as it reduces the chances of a recursive read.

Don't use SSD when data access is highly random. SSD is sometimes referred to as a cache tier, and the name is apropos. Fundamentally, it is a cache that eliminates the need to perform a fetch to a hard drive when the data is cache-resident. Applications with highly random access requirements simply won't benefit from SSD: the read will be directed by the array controller to the HDD, and the SSD will be an expense with little benefit.

Don't use general-purpose SSD in highly virtualized environments. OK, this one will generate some controversy, because there are some really good use cases for SSD with virtual machines, such as boot storms. However, many VMs accessing the same SSD results in highly random data patterns, at least from a storage perspective. When hundreds of VMs are reading and writing from the same storage, one machine is constantly overwriting the other.

However, there are SSD solutions designed specifically for virtual environments, which is why there's a "general-purpose" caveat above.

Don't use server-side SSD for solving storage I/O bottlenecks. Server-side SSD is fundamentally server cache, which solves a processing problem and even a network bandwidth problem. Spreading SSD across hundreds of physical servers, equipping each server with its own SSD, may indeed help with I/O bottlenecks, but not nearly as effectively as the same aggregate capacity in a storage tier.

Don't use Tier 0 for solving network bottlenecks. If data delivery is inhibited by the network, it's obvious that optimizing the storage system behind the network will do little good. Server-side SSD may reduce the need to access the storage system and thereby reduce the network demand.

Don't deploy consumer-grade SSD for enterprise applications. SSD is manufactured in three grades: single-level cell (SLC), multi-level cell (MLC) and enterprise multi-level cell (eMLC). MLC is considered consumer-grade and is found in most off-the-shelf products. It has a life of 3,000 to 10,000 write operations per cell. SLC, or enterprise grade, has a life of up to 100,000 write operations per cell. eMLC attempts to strike a balance between price and performance, offering around 30,000 writes per cell but at a lower price point than SLC. Caveat emptor, as you get what you pay for.

Phil Goodwin is a storage consultant and freelance writer.
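As a rough illustration of the endurance arithmetic above, the sketch below estimates how long a drive of each grade might last at a given write rate. The capacity, daily write volume and write-amplification figures are assumptions for illustration, not vendor specifications.

```python
# Rough SSD wear estimate: how long until the rated write cycles per cell are consumed?
# Capacity, write rate and write amplification are illustrative assumptions; real drives
# vary with wear leveling, over-provisioning and workload.
def years_until_worn_out(capacity_gb, endurance_cycles, host_writes_gb_per_day,
                         write_amplification=2.0):
    total_writable_gb = capacity_gb * endurance_cycles
    effective_daily_writes = host_writes_gb_per_day * write_amplification
    return total_writable_gb / effective_daily_writes / 365

# Example: a 400 GB drive absorbing 500 GB of host writes per day.
for grade, cycles in (("MLC", 3_000), ("eMLC", 30_000), ("SLC", 100_000)):
    print(f"{grade}: roughly {years_until_worn_out(400, cycles, 500):.0f} years")
```

Under those assumptions the MLC drive wears out in a few years while SLC lasts far longer, which is the gap the per-cell write ratings above describe.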

Recommended Speaker Resources


The best spot for flash SSD storage The value of VMwares Changed Block Tracking (CBT) Long-term archives require detailed planning Confusion over storage consolidation

2013 Predictions from our Experts


2012 storage challenges The changing role of the storage pro Avoiding storage pitfalls in 2013 The next big things in storage?

Page 14 of 30

Storage Decisions 2013: Resource Guide

Contents
Top Storage Articles from 2012
Windows 8 Backup DR sites/outsourcing DR Defining cloud storage VMFS 5 When using SSD is a bad idea

RECOMMENDED SPEAKER RESOURCES


These how-to guides, written and recommended by our speakers, will walk you through four of today's hottest storage themes and technologies and let you in on what you need to know about SSD, VMware, archives and storage consolidation for successful 2013 planning.

Find the best spot for flash SSD storage


Marc Staimer (Click here to download the PDF)

There are six different flash SSD storage implementations today. Each is primarily aimed at reducing latency and improving performance in IOPS and throughput, while secondarily aimed at reducing storage total cost of ownership (TCO). This first tip will provide a brief description and reveal the pros and cons of:

* PCIe flash SSD storage card(s) as cache or storage in the server.
* PCIe flash SSD storage card(s) as cache in a storage system (SAN storage or NAS).
* HDD form factor flash SSD(s) as NAS system or storage array cache.

Flash storage technology diversity requires that this subject be spread over two tips; Part Two follows below.

PCIe flash SSD storage card(s) as cache or storage in the server. Putting the flash SSD PCIe card locally in the server on the PCIe bus puts the cache closer to the application. There is no adapter, transceiver, network cable, switch, storage controller, etc., in the path. The short distance reduces latency, speeding up all IO operations such as reads and writes. This is why these cards are typically called application accelerators rather than storage accelerators. This type of flash SSD is primarily block. When used as cache, it requires additional software that relies upon policies, such as first-in, first-out (FIFO), to move data into and out of the cache.

Pros: Lowest latencies between applications and storage or storage caching. Makes a significant, noticeable and quantifiable difference for high-transactional and/or high-performance applications (OLTP, OLAP, rendering, genome processing, protein analysis, etc.).

Cons: High CPU resource utilization, ranging from 5% to 25%. Relatively low capacities (although FusionIO has a 10 TB double PCIe slot card). Cards are not shareable among multiple physical servers; each physical server requires one or more cards.

Not useful for virtual servers except as cache with caching software, because VM portability and resilience require shared storage. Caching software licensing is on a per-physical-server basis. Most of the caching software is block storage, making it somewhat useless for file-based storage or applications (Nevex is the exception). Card management is on a per-card basis, increasing administrator management tasks and resulting in a high total cost of ownership (TCO).

Best fits: Well-suited for high-performance compute clusters (HPC) where performance improvements in nanoseconds to microseconds are huge. Other solid fits include OLTP, OLAP, BI, social media, genome processing, protein processing, rendering, security, facial recognition and seismic processing.

PCIe flash SSD storage card(s) as cache in a storage system (SAN storage or NAS). PCIe flash SSD storage cards provide storage systems (as a storage vendor option) with a lower-cost, higher-capacity and slightly less-performing extension of the system's DRAM. It's a storage accelerator. Algorithms determine less frequently accessed data, which is quickly moved from the system's DRAM to the flash PCIe SSD cache; that cache is an extension to memory. Administrators set policies for these caches, determining what type of data should be retained or pinned in flash cache (data not evicted from the cache). Use of PCIe flash SSDs as cache reduces latency to and from the storage system by reducing disk IO when satisfying read requests and, in the case of NAS, metadata requests as well.

Pros: Reduces latencies from applications to shared storage. It works well with virtual servers, VDI, VM portability and VM resilience. It's shareable among physical and virtual servers. It requires no server resources.

Cons: Flash cache is size-limited by available storage system PCIe slots. Users experience increased latencies and excessive response times when cache misses become more frequent, requiring requests to fetch the data from the HDDs. Any given storage system's flash cache cannot be shared by any other storage system. The most severe performance bottleneck is most often the storage system's CPU; as CPU utilization rises, so do latency and user response times. TCO tends to be very high.

Best fits: Well-suited for virtual servers and VDI. Good at providing a boost to heavy-traffic applications such as email. Does well at accelerating databases when indexes and hot files can be pinned to the cache.
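The cache-miss penalty called out in the cons above is easy to see with a toy model of average read latency. This is only an illustration; the flash and HDD latency figures below are assumptions, not measurements from any product.

```python
# Toy model of average read latency for a flash-cached storage system:
# effective latency = hit_ratio * flash_latency + (1 - hit_ratio) * hdd_latency.
# The latency figures are illustrative assumptions, not measurements.
def effective_latency_ms(hit_ratio, flash_ms=0.2, hdd_ms=8.0):
    return hit_ratio * flash_ms + (1.0 - hit_ratio) * hdd_ms

for hit in (0.95, 0.80, 0.50):
    print(f"hit ratio {hit:.0%}: ~{effective_latency_ms(hit):.2f} ms average read latency")
```

Even a modest drop in hit ratio pulls the average sharply toward HDD latency, which is why an undersized cache shows up quickly in user response times.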

HDD form factor flash SSD(s) as NAS system or storage array cache. HDD form factor flash SSD storage cache is functionally similar to PCIe flash SSD storage as cache. It's a storage accelerator with similar algorithms. Instead of going into the controller as PCIe SSD cards do, HDD form factor SSDs go behind the storage controller in HDD slots. Sitting behind the controller means higher capacities but also higher latencies.

Pros: Reduces latency from applications to shared storage. Works well with virtual servers, VM portability and VM resilience. It's shareable among multiple physical and virtual servers while consuming no server resources. Lower TCO per GB than the PCIe form factor.

Cons: Capacities are larger than with PCIe flash SSDs, but they are limited by both flash SSD capacities and disk controller performance. Users experience increased latencies and excessive response times because cache misses occur more frequently, redirecting requests to the HDDs. A storage system's flash cache cannot be shared by any other storage system. The most severe performance bottleneck is commonly the storage controller, which increases latency and user response times.

Best fits: Well-suited for virtual servers and VDI. Good at providing a boost to virtual environments and heavy-traffic applications such as email. Does a good job at accelerating databases when indexes and hot files can be pinned to the cache.

The next tip will provide a brief description and reveal the pros and cons of:

* HDD form factor flash SSD(s) as Tier 0 storage in a multi-tier NAS or storage array.
* HDD form factor flash SSD(s) as all-SSD NAS or storage array.
* PCIe flash SSD storage card(s) or HDD form factor in a caching appliance on the storage network (TCP/IP, SAN or PCIe).

Determining the best location for flash SSD storage: Part Two

The previous tip examined the pros and cons of flash SSD storage in PCIe flash SSD storage card(s) as cache or storage in the server; PCIe flash SSD storage card(s) as cache in a storage system (SAN storage or NAS); and HDD form factor flash SSD(s) as NAS system or storage array cache. This tip continues where the last left off and provides a brief description and examines the pros and cons of:

* HDD form factor flash SSD(s) as Tier 0 storage in a multi-tier NAS or storage array.
* HDD form factor flash SSD(s) as all-SSD NAS or storage array.

* PCIe flash SSD storage card(s) or HDD form factor in a caching appliance on the storage network (TCP/IP, SAN or PCIe).

HDD form factor flash SSD(s) as Tier 0 storage in a multi-tier NAS or storage array. Tier 0 storage is similar to HDD form factor flash SSD storage as cache. The difference is in how that HDD form factor flash SSD is treated: it makes flash function as the high-performance storage tier, the storage location for the hottest (most frequently accessed) data. It is also designated as the target for data associated with applications requiring very quick response times and low latency. As the data on Tier 0 ages and access becomes less frequent, auto-tiering software moves the data to a lower-performing, lower-cost HDD storage tier (see the short sketch after this section).

Pros: Reduces latencies from applications to shared storage. It works well with virtual servers, VM portability and VM resilience. It's shareable among multiple physical and virtual servers. It requires no server resources. Lower TCO per GB than the PCIe form factor. It can redistribute workloads in a manner that reduces the total number of HDDs without compromising performance or capacity: capacity is shifted to slower, higher-capacity HDDs while performance requirements are pointed at Tier 0 HDD form factor flash SSDs.

Cons: Capacities are limited, similar to HDD form factor flash SSD cache. As working sets grow along with general data growth, there is a diminishing ability for that limited Tier 0 to keep up with demand; more applications and users will be pointed at slower storage tiers, and users experience increased latencies and excessive response times. Similar to other implementations, HDD form factor flash SSDs only benefit the storage system in which they are installed. The most severe performance bottleneck is commonly the storage controller, which increases latency and user response times. Auto-tiering software, which can be costly (although not always), adds to the storage system controller's utilization load, further impinging on performance. Auto-tiering software also tends to move data only in a downward direction (Dell Compellent and XIO Hybrid ISE are distinct exceptions).

Best fits: Well-suited for virtual servers and VDI. Good at providing a boost to heavy-traffic applications such as email. Quite good at accelerating databases where indexes and hot files are never moved out of Tier 0.
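Here is the short sketch referenced above: a purely conceptual, age-based demotion and promotion pass of the kind auto-tiering software applies. It is not any vendor's implementation; the thresholds are assumptions chosen for illustration.

```python
# Conceptual sketch of age-based auto-tiering: demote extents whose last access is
# older than a threshold from Tier 0 (flash) to Tier 1 (HDD), and promote recently
# hot extents back up. Thresholds are illustrative assumptions, not vendor defaults.
import time

DEMOTE_AFTER_SECONDS = 7 * 24 * 3600    # idle for a week: move down to HDD
PROMOTE_WITHIN_SECONDS = 3600           # touched in the last hour: move up to flash

def rebalance(extents, now=None):
    """extents: list of dicts like {'id': ..., 'tier': 0 or 1, 'last_access': epoch seconds}."""
    now = now if now is not None else time.time()
    for ext in extents:
        idle = now - ext["last_access"]
        if ext["tier"] == 0 and idle > DEMOTE_AFTER_SECONDS:
            ext["tier"] = 1              # cold data is demoted
        elif ext["tier"] == 1 and idle < PROMOTE_WITHIN_SECONDS:
            ext["tier"] = 0              # hot data is promoted
    return extents

extents = [{"id": "db-index", "tier": 1, "last_access": time.time()},
           {"id": "old-logs", "tier": 0, "last_access": time.time() - 30 * 24 * 3600}]
print(rebalance(extents))   # the index moves up to Tier 0, the old logs move down
```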

HDD form factor flash SSD(s) as all-SSD NAS or storage array. Implementing a pure HDD form factor, all-flash SSD storage system provides much lower latencies and higher IOPS and throughput while eliminating caching or tiering requirements. All-SSD storage systems have enormous performance and simplicity appeal. Most leverage SSD performance to include some form of data reduction (deduplication and/or lossless compression); system and flash speed are fast enough that applications and users typically don't notice the additional data reduction latency.

Pros: Reduces latencies from applications to shared storage. One storage tier eliminates complicated storage tiering software. Works well with virtual servers, VM portability and VM resilience, and it's shareable among multiple physical and virtual servers while consuming no server resources. HDD elimination markedly reduces power and cooling. Combining power and cooling savings with data reduction capacity savings provides a net effective TCO per GB in line with many HDD storage systems, whereas cost per IOPS or throughput is conspicuously better.

Cons: Scalability tends to be limited to less than 500 TB of raw storage, and in some cases much less (SolidFire is an exception that scales to a petabyte of raw storage). The bottleneck with this type of storage system is storage controller utilization; as controller utilization rises, so do latency and user response times.

Best fits: Well-suited for virtual server and VDI environments. Good at providing a boost to virtual environments and heavy-traffic applications such as email. Does a good job at accelerating databases. Any application requiring more performance and capacity than can be found in either caching or tiering is a good fit, as are data centers with limited power and cooling availability.

PCIe flash SSD storage card(s) or HDD form factor in a caching appliance on the storage network (TCP/IP, SAN or PCIe). Caching appliances sit non-disruptively on the storage network, logically between clients and the storage systems. Caching appliances are primarily read and metadata caches for NAS only. They are loaded up with either PCIe flash SSD cards or HDD form factor SSDs, and capacities tend to be less than 30 TB. These appliances are purpose-built for caching. There are four different types of caching appliances:

* Dumb (severely limited storage system software such as snapshots, thin provisioning, data reduction, replication, etc.), non-app-aware, block-based acceleration approach (Violin, Texas Memory Systems, EMC Thunder, Astute Networks).
* A file-based, dumb, non-app-aware variation (Avere).
* IP network intelligent packet inspection that caches appropriate data to the appliance (CacheIQ).
* File-based application read and metadata acceleration caching appliance (Alacritech ANX and NetApp FlexCache-SAA). Files are stored on the appliance based on read frequency; as frequency declines, they are removed.

Metadata is also kept on the appliance. Typically, this last type of caching produces the lowest file latencies.

Pros: Reduces latencies from applications to shared storage. Caching appliances are the most leverageable and shareable option, working with physical servers, virtual servers and multiple storage systems. The file-based application read and metadata acceleration approach reduces NFS and TCP/IP latencies, making both reads and metadata a lot faster. All of the appliance types reduce controller load on the back-end storage systems, freeing more back-end storage controller cycles for other storage functions and improving overall performance. TCO tends to be the lowest with this type of flash SSD, while the overall cost per IOPS or throughput is equivalent.

Cons: Scalability tends to be less than 10 TB of raw storage in some cases. It is another system that sits between servers and storage, making troubleshooting a bit more complicated. For file caching, it works better with NFS than with CIFS.

Best fits: Well-suited for virtual server and VDI environments. Ideal for lowering overall storage costs while increasing IOPS and throughput. A good fit for HPC (block), rendering (file), and genome and protein sequencing.

One final note: one size or type of flash storage does not fit all. Be prepared to implement different flash storage variations to solve different problems and application requirements.
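The read-frequency behavior described for the file-based caching appliances above can be pictured with a small, purely conceptual sketch; it is not any vendor's eviction algorithm, and the capacity limit and access pattern are arbitrary assumptions.

```python
# Conceptual sketch of frequency-based file caching: files are cached as they are read,
# and when the appliance is full the least frequently read file is evicted.
# Not a vendor algorithm; capacity and access pattern are illustrative assumptions.
from collections import Counter

class FrequencyCache:
    def __init__(self, max_files):
        self.max_files = max_files
        self.reads = Counter()     # read counts per file path
        self.cached = set()

    def on_read(self, path):
        self.reads[path] += 1
        self.cached.add(path)
        if len(self.cached) > self.max_files:
            coldest = min(self.cached, key=lambda p: self.reads[p])
            self.cached.remove(coldest)          # frequency has fallen behind: evict

cache = FrequencyCache(max_files=2)
for path in ("/proj/a.dat", "/proj/a.dat", "/proj/b.dat", "/proj/c.dat"):
    cache.on_read(path)
print(cache.cached)    # the frequently read file stays; a colder one has been dropped
```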

The value of VMware's Changed Block Tracking (CBT)


Marc Staimer (Click here to download the PDF)

VMware's Changed Block Tracking is an important part of VMware's vStorage API for Data Protection (VADP). CBT is part of VMware's efforts to simplify and improve the efficiency of backup processes for VMware virtual machines.

Traditional virtual machine backups require an agent on every VM. Although that works, it has difficult operational issues. Each agent consumes physical server resources when performing a backup or scan. One agent on one server is not a problem; dozens of agents on dozens of virtual servers will reduce the number of VMs that can be supported on that physical server. The trickier problem is managing the timing of the VM backups. Backup software is generally optimized to perform as many backups as possible in the shortest period of time. The objective is to meet the ever-shrinking backup window.

When multiple concurrent backups occur with VMs on the same physical server, performance will suffer. Those VMs are sharing elements such as the IO path, bus, adapters and buffers. As a result, performance can slow to a crawl, causing backups to miss those backup windows or just plain fail.

VMware's first solution aimed at addressing the VM backup problem for the backup software market was VMware Consolidated Backup (VCB). VCB did not adequately or fully address the problem. VCB leveraged ESX snapshot capabilities by mounting snapshots on an external Microsoft Windows proxy server via shared SAN storage; a backup agent installed on that proxy server then backed them up. There were obvious problems with the VCB methodology. It required additional server hardware, network hardware, storage networking, rack space, floor space, cables, power, cooling and other considerations. The number of VMs per proxy server was severely limited, while cost and performance were frequently unacceptable.

VMware's second attempt at addressing the problem came with the release of vSphere 4 back in 2009, which delivered a much superior effort in VADP and CBT. VADP eliminated the proxy server and enabled third-party data protection and backup software to be more tightly integrated with VMware. VADP essentially allows that third-party software to quiesce and initiate a snapshot of one VM, multiple VMs, or all of the VMs. It then retrieves or replicates the snapshot data out to the backup/media server. VADP provides a single-step, source-to-target copy process without a VCB-like proxy. VADP eliminates backup processing or agents within VM guests, and no additional VMware software is required. There is one exception: VMware is not application-aware. Its snapshots do not quiesce a database or structured application in a hot backup mode. In other words, it does not flush the cache, complete the writes in order, and then take the snapshot. To have an easily recoverable snapshot of those VM applications still requires a software agent.

But VADP is, in reality, not all that dissimilar to the APIs from Citrix XenServer or Microsoft Hyper-V server virtualization. In all three cases, the backup software is copying a complete snapshot of each VM. Because it is a complete copy of all the data in the VM, it is difficult to complete the backups within the required backup windows. The general rule of thumb for a typical server application's data change rate is between 0.5% and 1% per week. It makes little sense to have to back up 100% of the data every day when so little has changed. So, VMware took a page from the storage system snapshot vendors as well as common backup software vendors to solve the VM data protection problem with Changed Block Tracking.
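To put that rule of thumb in perspective, here is a quick back-of-the-envelope comparison of full copies versus changed-block copies for a single VM. The 2 TB VM size is an assumption chosen purely for illustration.

```python
# Back-of-the-envelope comparison of full copies vs. changed-block copies for one VM.
# The 2 TB size and the 1%-per-week change rate are illustrative assumptions.
vm_size_gb = 2048
weekly_change_rate = 0.01

full_copies_per_week_gb = vm_size_gb * 7                      # a full image every day
changed_blocks_per_week_gb = vm_size_gb * weekly_change_rate  # only the blocks that changed

print(f"Daily full backups:  ~{full_copies_per_week_gb:,} GB moved per week")
print(f"Changed blocks only: ~{changed_blocks_per_week_gb:,.0f} GB moved per week")
```

Under those assumptions the weekly data movement drops from roughly 14 TB to about 20 GB, which is the scale of saving the article describes.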


Changed Block Tracking is conceptually similar to snapshot differentials, delta block backups or incremental backups. It determines whether any blocks have changed since the last VM snapshot and tags the blocks that have changed. This enables the third-party backup or data protection software to copy out only the blocks that have changed since the last backup, which means it doesn't copy out data that has been copied previously. The amount of data copied out is commonly reduced by more than 99%, which saves a lot of time in the backup process.

Without Changed Block Tracking, the backup or data protection software can only reduce the data after the entire snapshot has first been copied out to the media server, and that remains the case even if that software provides delta block or incremental backups, compresses and/or dedupes. Changed Block Tracking dramatically reduces the amount of data copied before additional data reduction technologies are applied, reducing backup and processing time.

What does all this mean? If you are planning on backing up VMware vSphere virtual server VMs, make sure your backup software makes effective use of VADP and Changed Block Tracking. It will be simpler, it will save time and money, and it will reduce the amount of backup data being stored.
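A rough sense of the savings: at the 0.5% to 1% weekly change rate cited above, a daily CBT-based copy moves well under 1% of the data if changes are spread evenly across the week, which lines up with the reduction described. For readers curious about the mechanics, the sketch below shows in outline how backup software asks vSphere for the changed regions of one virtual disk through the QueryChangedDiskAreas call, again using the pyVmomi Python bindings. It is an illustrative sketch only; the disk device key, the stored change ID and the capacity-lookup helper are assumptions that a real backup product would track per disk and per backup.

# Illustrative sketch: list the byte ranges of one virtual disk that changed
# since a previous backup. Assumes vm is a pyVmomi VirtualMachine with CBT
# enabled and snap is a snapshot taken for this backup; deviceKey 2000 is a
# placeholder for the disk being protected.
from pyVmomi import vim

def disk_capacity_bytes(vm, device_key):
    # Assumed helper: find the virtual disk so the loop knows when the whole
    # device has been covered.
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk) and dev.key == device_key:
            return dev.capacityInBytes
    raise ValueError("no virtual disk with key %d" % device_key)

def changed_extents(vm, snap, device_key=2000, change_id="*"):
    # A change_id of "*" requests all allocated areas (the initial full copy);
    # later backups pass the change ID recorded from the previous snapshot.
    offset, capacity = 0, disk_capacity_bytes(vm, device_key)
    while offset < capacity:
        info = vm.QueryChangedDiskAreas(snapshot=snap, deviceKey=device_key,
                                        startOffset=offset, changeId=change_id)
        for area in info.changedArea:
            yield area.start, area.length      # byte offset and length to copy out
        next_offset = info.startOffset + info.length
        if next_offset <= offset:              # defensive stop if no progress
            break
        offset = next_offset

Each (start, length) pair is then read from the snapshot's copy of the disk and written to the backup target; everything outside those ranges is skipped.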


Long-term archives require detailed planning


Randy Kerns (Click here to download the PDF)

Conversations with IT people about long-term archiving usually begin by focusing on a specific storage device, and then it quickly becomes apparent that much more is involved. Addressing a long-term archive is a complex issue that requires education to understand. There is no single silver-bullet product. The technology discussions include devices/media for storing data and the storage systems and features utilized. Storage systems that automatically and non-disruptively migrate data from one generation of a system to another are key to long-term archiving. I use the analogy of pushing something along in a relay race.

The information maintained in an archive is another key consideration. Information is data with context, where the context is really an understanding of what the data is, what it means, and what its value is. Maintaining information over time requires applications that understand the information, devices that can read the information, and a method for determining when the information no longer has value as part of a data retention policy. Kicking the can of information down the road for years when it has no value makes no sense.
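The retention-policy point lends itself to a small, entirely hypothetical illustration. The record classes and retention periods below are invented, not taken from the article; the sketch simply shows the kind of rule an archive needs so that information with no remaining value can be identified rather than carried forward indefinitely.

# Invented example of a retention rule; classes and periods are placeholders only.
from datetime import date, timedelta

RETENTION_YEARS = {
    "financial_record": 7,
    "project_document": 3,
    "system_log": 1,
}

def is_expired(record_class, created, today=None):
    """Return True once a record has outlived its retention period."""
    today = today or date.today()
    years = RETENTION_YEARS.get(record_class)
    if years is None:
        return False  # unclassified records are kept until someone classifies them
    return created + timedelta(days=365 * years) < today

# A system log written in 2008 is long past a one-year retention period.
print(is_expired("system_log", date(2008, 1, 15), today=date(2013, 1, 15)))  # True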


The ability to read and understand the information years into the future is another major concern for long-term archiving. Without applications that do this, the issue of addressing long-term archiving becomes moot. I try to divide the problem into two parts.

The first is defining information that is the system of record, where the data must be processed by the application to produce results. The simplest example of this is business records that produce reports, statistics or other numbers. In this case, there must be a linkage between the information and the application. If the application changes or is replaced, then the information also must be carried along, with translation, so the new app understands it. If not, the information no longer has value.

The second part of the application issue concerns information that needs to be viewable in the future where no application is needed. This case is handled by putting the information in a viewable format that will persist for a long time. Today that would be a PDF document. At some point that may change, and the PDF documents would have to be translated or transformed for the new viewable format, once again requiring a linkage between the information and the application.

You must address all of these points for a long-term archive to achieve its goal of making information available and readable when it's needed.


Confusion over storage consolidation


Randy Kerns (Click here to download the PDF)

Storage consolidation seems to be a simple concept. If you reduce the number of storage systems, you benefit from fewer devices to manage, less space required and lower power and cooling demands. Yet there is confusion over exactly what the term storage consolidation refers to. The confusion comes from the gap between some vendor messaging and what IT storage professionals actually view as storage consolidation, and it leads to miscommunication and different sets of expectations about storage optimization projects.

For IT storage professionals, storage consolidation is about storage efficiency. A new storage system can be deployed to meet the aggregate performance and capacity demands of the disparate storage systems it replaces. The simplest form of storage consolidation is to reduce the number of boxes on the floor. But storage consolidation does not mean one storage system for all purposes.


There are legitimate reasons why IT operations end up with multiple storage systems over time. While people claim this can be avoided through better management and planning, things just don't work out that way. Multiple storage systems come about because:

- Projects that require more storage come with a budget to purchase new storage systems specifically for that project.
- IT operations consolidate because of acquisitions or mergers.
- New capacity demands require more storage, and it often makes sense to purchase additional systems instead of expanding existing storage systems. That's because adding capacity to existing storage reduces the access density and overall performance. Also, the asset depreciation schedule for the existing storage system may make it impractical to reset the schedule with an addition.


The "single box for everything" concept is not practical. From an economic standpoint, not all data has equal value, and less valuable data can be stored on less expensive, lower-performing storage. The economics of storing data include the cost of the storage system and the operational costs for protecting and migrating data. The data typically has a lifespan that long outlives any storage system, and managing data over its lifespan is more important for the IT storage professional than the box currently in use. Storage systems are transient: they last a maximum of four or five years before they are replaced with the latest, greatest technology.

Tiered storage can lead to consolidation and enable storage efficiency. Using solid-state technology as a performance tier is a hot trend. Tiered storage allows for greater consolidation by managing the variations in performance requirements, which is really an exploitation of the change in the probability of access to data over time. That, in turn, lets a single storage system absorb a greater amount of consolidation while still meeting performance and capacity demands.
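The point about the falling probability of access can be illustrated with a toy placement rule. The tier names and age thresholds below are hypothetical, not drawn from the article or any product; real tiering engines work on sub-volume extents and far richer access statistics.

# Toy illustration: place data on a tier based on how recently it was accessed.
from datetime import date

def pick_tier(last_access, today):
    age_days = (today - last_access).days
    if age_days <= 7:
        return "ssd"            # hot data: highest performance, highest cost
    if age_days <= 90:
        return "fast_disk"      # warm data: occasional access
    return "capacity_disk"      # cold data: rarely accessed, lowest cost per GB

# A file last touched six months ago lands on the capacity tier.
print(pick_tier(date(2012, 7, 1), today=date(2013, 1, 1)))  # capacity_disk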

2013 PREDICTIONS FROM OUR EXPERTS

Our speakers answer some of today's most pressing data storage questions with a look towards the future in this exclusive Q&A session.

1. What do you see as the biggest challenge most storage teams face as we head into 2013?

[Randy Kerns] Dealing with the increased capacity demand while still meeting the specific needs of how the storage is going to be used is the biggest problem. The demands for storage capacity in server virtualization environments are the most significant being dealt with today.

[Dennis Martin] Picking the winner. Diversity is great in people, but too much choice in IT technology can get confusing. There are plenty of good storage technologies available today for everything from the interface, such as Fibre Channel or Ethernet, to various advanced features in storage systems, such as de-duplication, thin provisioning and snapshot technologies. The question of how to deploy SSD technology in the enterprise is sure to be on everybody's mind. Operating systems and hypervisors are also advancing with many new storage-related features.

[Marc Staimer] Managing the digital data tsunami of passive data (e.g., lightly accessed or rarely accessed unstructured data), from storing it and protecting it to keeping it for long periods of time without data degradation.

[Jon Toigo] Unabated acceleration in storage capacity demand, with flat-to-declining budgets for new capacity. Moreover, while folks will want to use public cloud storage offerings, they will wrestle with issues of service level predictability and reliability.

2. There is a lot of talk lately about the changing role of the storage pro as convergence, BYOD and virtualization decisions are driving change in the data center. What is your view of the storage pro's role today and in the coming years?

[Randy Kerns] There is a degree of stratification here regarding the different market segments. In all but the very high-end data center environments, the person dealing with storage now has to deal with more elements in IT, including virtualization, which means operating systems, servers and networking. This means the storage people in these segments must have a much broader base of knowledge.


[Dennis Martin] This may depend on your definition of storage pro, which could be a storage architect or storage administrator. One designs the enterprise storage infrastructure and the other implements and supports it. Both are valid career choices. Going forward, changing demands and technologies continue what the advent of SANs started a decade or so ago. Basic storage technology is becoming a commodity and is beginning to function increasingly as a utility. This means that highly skilled storage administrators will be asked to learn some additional skills as more and more of the low-level storage management functions are abstracted away into software and/or hardware. Storage administrators are increasingly finding the need to understand server virtualization and the impact it has on storage systems. The storage architect will move into designing enterprise-wide solutions that will require knowledge of more than just storage.

[Marc Staimer] Storage pros need to think of themselves as an internal storage service provider, resource, knowledge and skills expert. Convergence, BYOD, virtualization and software-defined data centers are all tools. Someone must understand the underlying storage technologies, how they work, how to effectively protect the data that resides on them, how to recover from a disaster, how to maintain business continuity, and what to do when storage breaks. Lacking this knowledge, skill and experience can and probably will cause catastrophic issues.

[Jon Toigo] None of the things you mentioned (convergence, BYOD, server virtualization, etc.) have anything to do with storage administration. Workload is workload. In the coming years, storage folks may well see their jobs disappear, with functions blended into application or server administrator positions, until problems start to appear that require expertise again. Most storage admins labor with little recognition until something goes wrong. There is a lot of silly marketecture promulgated by VMware and others about the server hypervisor automating the storage below. Even with good storage virtualization technology, you will still need storage infrastructure-savvy planners and administrators to fix problems that occur.

3. Do you see any common blind spots in the storage, backup or disaster recovery approaches of the IT shops you talk to? What should IT pros be worried about that they aren't worrying about now?

[Randy Kerns] Most are just now getting to see the real economic value of implementing a long-term data retention practice. Commonly called archiving, though that term has become somewhat diffused, managing information based on value and probability of access has tremendous payback, and many in IT have not had the time in their schedules to really capitalize on it.


[Dennis Martin] Disaster recovery (DR) is often a blind spot because it can encompass a range of scenarios, from failures in minor components that affect some IT operations all the way to major disasters that can bring down the entire data center. Very few shops actually implement a DR plan and keep it current. Even fewer can verify that their plan actually works. As long as availability is king, IT pros should know, precisely, how to activate DR and what will happen when they do.

[Marc Staimer] Many larger shops have multiple islands of data protection products. They have a general backup system for most of their servers; a specialized VM/VDI image backup product; snapshot, replication and/or mirroring as part of their primary data storage systems; another backup product for laptops; another backup product for cloud-based applications; another product for tablets and smartphones (or they just leave it to the user, which is never a good idea); server-to-server (S2S), CDP or image backups (a.k.a. bare-metal restore, or BMR) for some of their non-virtualized, mission-critical or less common operating system servers; and a myriad of other point products such as deduplication target storage. More often than not, these data protection islands do not integrate with each other, and few if any IT pros know which island is protecting what. This leads to overlapping data protection systems creating multiple copies of the protected data that cannot be resolved with deduplication. The result is many copies of the data and escalating, unnecessary costs. That's obviously bad. What's worse is what happens during recoveries and restores. With multiple data protection systems, admins focused on recovering data from an outage will recover the same data at different RTOs, and later restores will overwrite earlier ones. The potential for data corruption is real and quite high, making recoveries and restores tricky at best. What's needed is a comprehensive data protection plan where all information about what is protected by what, and when, rolls up into one management system. That can be a single data protection product that covers all or the vast majority of the IT organization's requirements, or a third-party management product that ties all of the disparate data protection systems into one intuitive, easy-to-understand user interface.

[Jon Toigo] The biggest blind spot is represented by a failure to think strategically or to work on a coherent and open management scheme (preferably based on REST standards). Either way, you end up with islands of storage automation that work against efficient scaling, management and resource allocation to workload. Nobody is worrying about it right now, but with storage capacity requirements scaling by 300 to 650 percent (depending on whether you read IDC or Gartner) by 2014, management is going to be a huge cost accelerator.

4. What's the next big thing in storage?

[Randy Kerns] Moving to all solid-state technology for primary storage is an inevitable progression. There will be a number of variations as we step towards that reality, and there will be accompanying promotion or paid evangelism along the way, but it will happen more quickly than most believe.


[Dennis Martin] From the big-picture perspective, it will be commodity storage and storage-on-demand. Application owners frequently won't engage with a storage or systems team. They'll just build their application in a virtualized environment and take advantage of automatic provisioning for virtual machines and virtual storage. The virtualization tools will determine and tune things for the best fit of compute power and storage from enterprise-wide pools. We're not completely there yet, but we're moving in that direction, with contributions from the operating systems and the storage hardware. The key will be to have solutions that can stitch together all the layers involved to make a seamless experience for the user.

[Marc Staimer] If you recognize that hybrid SSD/HDD and 100% SSD storage systems are just beginning to gain market traction, you could say the massive move of primary active data to SSD storage. Or, to say it another way, SSD storage with costs low enough to make it the preferred medium for data instead of spinning drives. And with 14-nanometer NAND hitting the market now, those prices are becoming incredibly competitive. But that is not really the next big thing; it has already happened and is moving quickly to mainstream adoption. Therefore, I am predicting the next big thing (meaning it has not happened yet) is the newly defined concept of "software-defined storage" that was introduced as part of VMware's software-defined data center. It will change the nature of storage flexibility and usefulness by allowing applications to consume storage capacity and storage performance on demand. Great concept. Some storage technologies can do some of it, but not all of it. The ability to really do software-defined storage (SDS) will be the next big thing.

[Jon Toigo] 40 TB 2.5-inch disks featuring Bit Patterned Media (higher capacities shortly afterwards with HAMR), 32 TB LTO tape cartridges with BaFe coatings and Type II PMR, and LTFS-based tape NAS becoming a huge game changer.


Free resources for technology professionals


TechTarget publishes targeted technology media that address your need for information and resources for researching products, developing strategy and making cost-effective purchase decisions. Our network of technology-specific Web sites gives you access to industry experts, independent content and analysis, and the Web's largest library of vendor-provided white papers, webcasts, podcasts, videos, virtual trade shows, research reports and more, drawing on the rich R&D resources of technology providers to address market trends, challenges and solutions. Our live events and virtual seminars give you access to vendor-neutral, expert commentary and advice on the issues and challenges you face daily. Our social community, IT Knowledge Exchange, allows you to share real-world information in real time with peers and experts.


What makes TechTarget unique?


TechTarget is squarely focused on the enterprise IT space. Our team of editors and network of industry experts provide the richest, most relevant content to IT professionals and management. We leverage the immediacy of the Web, the networking and face-to-face opportunities of events and virtual events, and the ability to interact with peers, all to create compelling and actionable information for enterprise IT professionals across all industries and markets.


Related TechTarget Websites

