
A Hurwitz White Paper

Best Practices for a Virtualized Infrastructure

Judith Hurwitz,
President and CEO

Marcia Kaufman,
COO and Principal Analyst

Daniel Kirsch,
Research Analyst

Sponsored by IBM

Introduction
The way organizations view their technology infrastructure is changing dramatically. Rather than simply selecting prepackaged options, management is looking to create a balanced approach that combines the best of mature software elements with optimized systems. This unified approach can provide the scalability and manageability to help the business break down silos across divisions and turn computing into a well-tuned utility. What does this mean to IT departments as they support changing business needs?

Take the example of the University of Hamburg's Physics Department. To support its research efforts, the University implemented OpenAFS for file serving (OpenAFS is a popular file server used in many research environments). The Physics Department's IT infrastructure was capacity constrained and could no longer adequately serve its users. IT management needed to improve performance and quality in a way that would keep costs under control. The University had to choose between two alternatives: purchase 10-12 new x86 commodity servers in combination with virtualization software, or buy two IBM PowerLinux servers running PowerVM. Because of the design and pricing of the PowerLinux Open Source Infrastructure Services solution, the Physics Department was able to get the throughput it needed at a lower cost than the x86 option. More importantly, the Physics Department's new IT infrastructure is much more powerful than anticipated, so it can support scientific research and still have capacity for additional services. Taking advantage of this excess capacity on its IBM PowerLinux servers, the department has deployed email for 4,500 users and is considering adding storage management to the same virtualized infrastructure. The University determined that the cost of buying two IBM PowerLinux systems with PowerVM was 30 percent less than the x86 option.

There are many significant issues that require companies to prepare for the unexpected.
For example, a company may acquire another company or add a new set of partners that demand that new IT-based services be created. These types of changes can suddenly put greater pressure on IT to support new workloads and more users than ever before. These changes can happen in a matter of days or weeks; therefore, speed and agility are required. While times change and the circumstances of individual companies evolve, the need to respond to new workloads and new IT initiatives is a common occurrence. Organizations that follow a well-proven best practices approach are therefore more likely to be successful in meeting changing computing infrastructure requirements, and by viewing solutions from a best practices perspective, an organization can get to a solution faster. In this paper we highlight six best practices and then provide an overview of IBM's PowerLinux Open Source Infrastructure Services solution.


Best Practices for Infrastructure Management


React faster and assume that business change is the norm

There are many changes in how companies need to interact with customers, suppliers, and partners. Increasingly, companies are required to leverage complex data in order to understand changing business conditions. Likewise, companies need to be able to support an increasingly mobile world where unpredictable workloads may expand and contract in moments. Companies are also building complex ecosystems of partners that often span extended supply chains. To support these business conditions, IT must create a flexible infrastructure that can support these changing and often complex workloads in a highly secure and predictable manner. The infrastructure therefore needs to be highly scalable, secure, and manageable. The bottom line is that the infrastructure needs to be controlled without unplanned downtime or business interruptions.
Develop a strategy to manage costs while meeting new demands

With unlimited budgets, IT departments could easily meet any new business demand. But in the real world, IT budgets are constrained, so a strategy for managing both the cost of acquiring and the cost of running new resources is critical. When new systems are purchased, two main elements determine the Total Cost of Acquisition (TCA). First, there is the cost of the actual hardware; to evaluate hardware costs, a purchaser must look closely at the capabilities of a system, such as throughput and I/O. Second, there is the cost of supporting software. Often commodity servers appear to be less expensive than alternatives, but they may actually cost more because of supporting components such as virtualization software, which can also consume a high percentage of system resources. In addition, these commodity servers will require management software. The cost of cooling and powering a data center is another important budget factor and can limit an organization's ability to increase its computing resources. Therefore, when evaluating new infrastructure, it is important to investigate systems that are optimized for managing workloads in the most efficient manner.
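As a rough illustration of this acquisition-cost arithmetic, the comparison can be sketched in a few lines of Python. Every figure below is a hypothetical placeholder, not a vendor quote:

```python
# Hypothetical Total Cost of Acquisition (TCA) sketch. All prices are
# illustrative placeholders, not actual vendor figures.

def tca(server_price, server_count, virt_license, mgmt_license,
        annual_power, years=3):
    """Hardware plus supporting software, plus power/cooling over a horizon."""
    per_server = server_price + virt_license + mgmt_license
    return server_count * (per_server + years * annual_power)

# Scenario A: many commodity servers, each carrying virtualization and
# management software licenses.
commodity = tca(server_price=5_000, server_count=12,
                virt_license=2_500, mgmt_license=1_000, annual_power=800)

# Scenario B: two larger servers with virtualization built in.
consolidated = tca(server_price=40_000, server_count=2,
                   virt_license=0, mgmt_license=0, annual_power=1_500)

print(commodity, consolidated)
print(f"savings: {1 - consolidated / commodity:.0%}")
```

Under these made-up numbers the consolidated option comes out roughly 30 percent cheaper, in line with the University of Hamburg outcome described earlier; the point is only that per-server software licenses and power can dominate raw hardware price.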
Plan to support the need for workload flexibility

The nature of the workloads that companies must support is changing. For example, the amount of data that companies are collecting from customers, partners, sensors on devices, and the like is exploding. Companies are finding new ways to monitor equipment and networks, and they are automating more of the digital business services that support the needs of their customers, partners, and suppliers. As a result, workloads may expand and then contract in a matter of hours or days. In order to take advantage of new opportunities in these evolving business and technical environments, companies need to increase the flexibility of the infrastructure. In addition, supporting these workloads requires that the
system be able to manage increased bandwidth demands, compression and decompression of data, the need to reduce data latency, and the overall need for faster throughput.

Virtualization and cloud computing are increasingly seen as having a role in supporting this need for flexibility. Virtualization creates an abstraction layer that allows system resources to be better optimized to support a variety of workloads. There are many different forms of virtualization that can support demanding workload requirements such as increasing throughput and optimizing I/O management. For example, there are virtualization techniques that are, in essence, designed to provide a computing fabric within a server that manages the way images are controlled to optimize performance. Virtualization hypervisors can be software based or implemented within the firmware of systems. The closer the hypervisor gets to the hardware level, the more efficiently and securely it can run. However, virtualization alone is not always sufficient to support an organization's demands for improved infrastructure performance. The way a virtualized environment is designed and managed can significantly determine how well an overall optimized system performs.

For example, over the past decade, conventional wisdom assumed that if an organization required additional throughput and I/O, the x86 platform would be the most economical and straightforward approach. But in order to increase throughput and I/O, an x86 platform has to be coupled with a virtualization environment; without a comprehensive virtualization platform, the x86 system cannot keep up with the demands for throughput. One of the consequences of x86 platform implementations is server sprawl. Scaling an environment to support rapidly increasing computing requirements invariably leads to an overabundance of individual servers.
The increase in the number of servers can quickly get out of hand resulting in an IT environment that is challenging and costly to manage. One way that companies have tried to minimize server sprawl is through server virtualization. For example, some virtualized servers can get up to 95 percent utilization, and virtualization makes creating images easy. Although virtualization streamlines the creation of images, this practice also has a downside. Without proper controls, virtualized images often remain in the system after they are no longer needed, resulting in high storage costs, security risks and governance concerns. Other challenges companies face when adding more x86 servers include lack of data center space and higher energy costs. In addition, the costs of a proprietary operating system and associated infrastructure applications can also detract from the expected benefits of this approach. In contrast, a high performance server designed to scale and with built-in advanced virtualization capabilities can provide more flexibility at a lower cost.
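The image-lifecycle control described above can be as simple as flagging images that have sat unused past a retention window. A minimal sketch follows; the inventory format and the 90-day threshold are illustrative assumptions, not any product's API:

```python
# Sketch: flag virtual machine images that have been idle past a
# retention window -- a basic governance control against image sprawl.
# The inventory structure and threshold are illustrative assumptions.
from datetime import date, timedelta

inventory = [
    {"name": "web-01",  "last_used": date(2012, 6, 1)},
    {"name": "test-07", "last_used": date(2011, 9, 15)},
    {"name": "mail-02", "last_used": date(2012, 5, 20)},
]

def stale_images(images, today, max_idle_days=90):
    """Return names of images idle longer than the allowed window."""
    cutoff = today - timedelta(days=max_idle_days)
    return [img["name"] for img in images if img["last_used"] < cutoff]

print(stale_images(inventory, today=date(2012, 6, 30)))
```

In practice the flagged candidates would be reviewed before deletion; the value is in surfacing forgotten images before they become storage, security, and governance liabilities.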


Be ready for flexible deployment options

To be able to support the changes needed for managing workloads requires a great deal of flexibility in how those workloads are deployed. With all options, virtualization is one of the foundational technologies for the deployment of applications and important business services. How these services are deployed will depend on the nature of the workloads. Increasingly, companies are determining that a hybrid approach to delivery models provides the flexibility they require to support the business. For example, depending on the needed level of security and governance, a private cloud or traditional data center may be the right solution. In other situations where there is a high degree of standardization and automation, a public or private cloud may best support the business. In addition, IT management needs to put deployment models in perspective with how memory and I/O intensive the supported workloads are. The more complex the workloads, the more important it will be to abstract the complexity through the use of technologies that can be added to the overall infrastructure environment when needed.

While a traditional computing environment may need to support a set number of users conducting a well-defined set of tasks, there are other scale-out environments that require a different approach to workload management. This is becoming especially critical in situations where large volumes of data that require low latency are involved. Increasingly, organizations are turning to scale-out solutions, such as big data platforms, as a way to improve business performance. Scale-out solutions are ideal for managing compute-intensive workloads and infrastructure as well. There are two primary types of scale-out workloads:

• A single workload across multiple servers. The open source Apache Hadoop framework for big data analytics and many high performance computing (HPC) applications are architected for distributed processing. This enables almost limitless scaling to handle very large volumes of data and maximizes throughput and performance.

• Multiple workloads within a single server or set of servers. Many infrastructure applications, particularly in small to midsize businesses, have limited requirements for system resources. These applications can be combined in virtual machines on a single server or set of servers to maximize return on infrastructure investment. Some applications, like OpenAFS, used by the University of Hamburg, are designed to use only one or two processor cores on each scale-out server. These too can be combined in virtual machines on a single server or set of servers, creating a virtual scale-out environment.

Workloads that require scale-out performance are different from traditional data center applications, such as ERP systems. In a virtual scale-out environment, there is a need to support the ability to move system resources around based on virtual machine and application behavior. This type of resource movement has to be done without impacting active processes or the customers who use those
processes. In addition, the management software should be able to manage pools of memory to reduce latency and improve performance without having to continually add more physical memory.
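A toy sketch of the pooled-memory idea: a fixed pool is redivided among virtual machines in proportion to current demand, instead of adding physical memory. The workload names and sizes are hypothetical, and real hypervisor schedulers are far more sophisticated:

```python
# Sketch: divide a fixed memory pool among virtual machines in
# proportion to their demand. Names and figures are hypothetical;
# this is a toy model, not a hypervisor algorithm.

def rebalance(pool_gb, demands):
    """Allocate pool_gb across VMs proportionally to demanded GB."""
    total = sum(demands.values())
    return {vm: round(pool_gb * gb / total, 1) for vm, gb in demands.items()}

# 64 GB pool shared by three workloads with uneven demand.
allocation = rebalance(64, {"hadoop": 24, "mail": 16, "afs": 8})
print(allocation)
```

When one workload's demand drops, rerunning the same calculation hands its share to the others without taking anything offline, which is the behavior the paragraph above describes.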
Reliability and security must be a centerpiece for enterprise server environments

The speed of business change has become a well-accepted reality. Customers, partners, and suppliers expect that a company's web services will be fast, secure, and available when needed to conduct business. Therefore, IT needs to make the reliability of its compute infrastructure a mission-critical priority. Applications and services ranging from customer service web applications to access control directory services and network services all need to function in a consistent and reliable manner to support internal and external users. To meet these objectives, companies are looking to deploy servers that are designed to reduce both planned and unplanned downtime.

In addition to reliability, companies need to focus on the security of their IT environments. A security breach can quickly shatter all expectations of stability and reliability in an IT infrastructure. While most software has some level of security built in, it may not be sufficient to ward off threats. Therefore, companies are turning to servers with security that is integrated into the hardware. Solutions with security features built into each level allow for greater protection of resources such as memory, virtual disks, and the network. Security technologies that are implemented at higher levels of the infrastructure stack are more complicated to implement and may not offer as much protection. One of the biggest security threats facing companies is related to the broad use of virtual machines: often companies do not implement good controls over the virtual machines running in their environment, thereby increasing both operational risks and security holes that can be penetrated by an intruder looking for a weak link.

Compliance regulations have an impact on infrastructure security as well. For example, financial services companies are required to isolate information from different parts of the business. In healthcare organizations, privacy regulations require security around patient data. In retail organizations, customer data must be protected to comply with PCI DSS. No matter what industry a company participates in, security must be at the core of any best practices strategy.
Provide the right level of skills and manageability

Management and automation are essential for creating a virtualized infrastructure that supports increased flexibility and reliability without increasing costs. The overall infrastructure environment needs to support abstraction of complexity so that the IT team can be proactive and responsive with a minimum of human error. At the same time, IT needs to understand and optimize a diverse set of workloads in a way that they can be predictably controlled and managed, using flexible and at times automated configuration tools.


There are three approaches that will enable a centralized approach to control and management:

• Reduce the number of servers that must be purchased and managed to meet company throughput needs. As organizations add infrastructure to meet the demands of new workloads, administration and management challenges increase and system health issues can go unnoticed. The more streamlined an environment becomes, the better able organizations will be to manage and control workloads.

• Utilize and manage virtualization to create pools of resources to improve the economics of supporting constituents. Virtualization is one of the most powerful approaches to streamlining costs and resource utilization in a scale-out environment.

• Automate as many repeatable tasks as possible. Many newer workloads involve managing huge volumes of data and analyzing that information, often in real time. These workloads create both additional complexity and limited room for error. An automated and centralized approach to management allows administrators to anticipate changes and optimize their resources.

One of the most cost-effective and increasingly mature approaches to supporting these requirements for standardization and automation is the use of open source infrastructure services. One of the most widely used open source web standards is the LAMP stack. This stack includes the Linux operating system, the Apache HTTP server, the MySQL database, and scripting languages such as PHP, Perl, and Python. These technologies are typically distributed by Linux vendors such as Red Hat and SUSE and form the foundation for companies' interactive Web applications such as e-commerce and customer service. Open source software is equally accepted and popular for other infrastructure applications such as email, file and print, and network and storage management.

Why commercial open source and Linux are foundational


The prevailing IT management thinking has been to purchase commodity x86 platforms, on the theory that the company could count on a standardized platform that would support business goals. While many companies find this strategy effective in the short run, it has shortcomings. First, scaling an environment to support rapidly increasing computing requirements invariably leads to an overabundance of individual servers, along with an increase in operating and management costs. In addition, the costs and complexity of proprietary operating systems also detract from the expected benefits of this approach. Initially, businesses were concerned that open source was not safe in a commercial environment. However, over the past decade, open source has been transformed with the support of sophisticated enterprise vendors who have taken the time and effort to turn these collaborative projects into safe, scalable, and predictable offerings. For example, the Linux operating system has become a stable, predictable, and well-accepted platform for computing. Linux is supported across most important systems in the market. In addition, when
Linux and open source software are combined with virtualization, businesses are finding that they gain a sophisticated well-honed environment that is easy to manage. The other primary benefit of an open source approach is that businesses avoid the lock-in that is associated with proprietary operating systems. At the end of the day, companies need to achieve the level of service needed to support the expectations of employees, customers, and suppliers.

IBM PowerLinux Open Source Infrastructure Services solution


IBM's PowerLinux Open Source Infrastructure Services solution combines IBM PowerLinux servers with virtualization and open source infrastructure software, technical support, and a variety of service options. Specific elements of the solution include:

• IBM PowerLinux 7R2 rack server(s), or the IBM PureFlex System, a system expert at sensing and anticipating resource needs to optimize infrastructure systems
• IBM PowerVM for IBM PowerLinux
• Red Hat Enterprise Linux for POWER or SUSE Linux Enterprise Server for POWER
• Open source infrastructure software optimized for POWER7
• Choice of IBM, Red Hat, or SUSE support
• Services for implementing a new system; IBM offers multiple options to help its customers speed deployment
IBM PowerLinux 7R2

IBM's PowerLinux 7R2 is a two-socket, high-performance server that supports 16 POWER7 cores. The PowerLinux 7R2 was designed to be a highly scalable system. To achieve this goal, IBM standardized on the Linux operating system and implemented virtualization both in the hardware itself, via firmware, and in software through PowerVM. PowerVM, IBM's virtualization solution, enables organizations to optimize the hardware so that it can support a large number of images with scale-out performance.
Virtualization with PowerVM

The advanced virtualization capabilities of PowerVM are key to the power, flexibility, and efficiency of PowerLinux. For example, system resources such as CPUs, memory, and storage can be combined into shared pools, and PowerVM can use these shared pools to dynamically reallocate resources to the workloads that need them the most. Companies benefit from this resource pooling by consolidating multiple workloads onto fewer systems, which increases server utilization while increasing flexibility and reducing costs. In addition, PowerVM automation capabilities allow for faster deployment of virtual machines and storage. Capabilities like Live Partition
Mobility in a two-server environment can eliminate scheduled downtime. Resources can optionally be managed automatically through built-in virtual resource management. PowerVM is also designed with exceedingly low overhead, meaning that more system capacity is available for users to run workloads. The firmware-based hypervisor employed by POWER7 systems helps enable this advantage. Competing virtualization software, such as VMware's vSphere, can require upward of 10 percent overhead.

The PowerLinux Workload-Optimized Infrastructure


The PowerLinux Open Source Infrastructure Services solution provides the value of a workload-optimized system in five key ways:

• Optimized for Linux and open source
• Economically increasing throughput per server
• Centralizing management and control
• Managing pools of processors
• Improving reliability and security
Optimized for Linux and Open Source

Linux was designed to be a modular operating system that can support different hardware architectures. IBM is an active contributor to Linux and has supported the continued development and improvement of Linux for over a decade. Much of IBM's research and its enhancements to the operating system are given back to the Linux community. Linux distributions from Red Hat and SUSE are compiled for POWER and ready to use. Since the source code in use for x86 and POWER is the same, a user skilled in Red Hat or SUSE will receive all of the same source code and tools. When you purchase RHEL or SLES for POWER, you also receive hundreds of applications that are compiled for the PowerLinux platform. A service subscription from IBM or the selected Linux distribution vendor provides organizations with support not only for the operating system, but also for all of the included applications. The most popular open source infrastructure applications, including the entire LAMP stack (Linux, Apache, MySQL, Perl/PHP/Python), are included with the Linux distributions. Additionally, mail serving applications such as Postfix, Dovecot, and Cyrus are included with both distributions, along with Samba, a file and print server, and Tomcat, a Web application server. Other commonly used applications cover network infrastructure and file and print capabilities. The Installation Toolkit for PowerLinux provides a way for users to easily install all of these applications.
Economically Increasing Throughput per Server


The pricing structure and technical capabilities of IBM's PowerLinux Open Source Infrastructure Services solution are intended to help customers realize the same throughput for less cost than comparable x86 solutions with virtualization software. For example, IBM introduced micro-partitioning as part of its Power family. Through micro-partitioning, IT departments can create up to 10 virtual machines per processor core, an advantage that allows for greater consolidation. In addition, IBM offers a single version of PowerVM for IBM PowerLinux, enabling customers to increase the amount of memory per virtual machine without additional cost; the amount of memory that can be added is limited only by the physical memory available on the machine. By contrast, VMware offers three editions of VMware vSphere 5, with limitations on the amount of memory per socket per VM ranging from 32 to 96 gigabytes. Leveraging the capabilities of PowerVM, customers can economically create more virtual machines on a system, and the automated resource balancing of the system allows for greater flexibility and utilization of the infrastructure. Cloud providers, and organizations creating private clouds, have a competitive advantage if they are able to put more VMs on a server. The ability to create these micro-partitions allows IT management to granularly control the amount of resources required for differing workloads while at the same time reserving resources for other applications. Intelligent threading enables automatic switching between one, two, and four execution threads per processor core, so multi-threaded workloads can dynamically receive the number of threads needed to optimize performance. The resulting increased throughput helps companies improve response times as well as overall run times, sometimes from days to hours or minutes.
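The consolidation bound implied by these figures is easy to check. A quick sketch using the numbers cited above (16 cores for the PowerLinux 7R2, up to 10 micro-partitions per core, and the 0.1-core minimum entitlement that 10-per-core implies):

```python
# Consolidation arithmetic from the figures cited above. CORES and
# VMS_PER_CORE come from the paper; the 0.1-core entitlement is the
# minimum implied by 10 partitions per core.

CORES = 16            # POWER7 cores in a PowerLinux 7R2
VMS_PER_CORE = 10     # micro-partitioning upper bound

def cores_needed(vm_count, entitlement_per_vm=0.1):
    """Processor cores consumed by small, equally entitled partitions."""
    return vm_count * entitlement_per_vm

max_vms = CORES * VMS_PER_CORE        # partitions at the 10-per-core bound
spare = CORES - cores_needed(40)      # cores left after 40 small VMs
print(max_vms, spare)
```

So a single 7R2 could in principle host up to 160 partitions, and 40 lightweight service VMs would still leave roughly 12 cores' worth of entitlement for heavier workloads.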
Centralizing Management and Control

The PowerLinux capabilities for centralized management can have a significant positive impact on the end user experience. Using these capabilities, companies can easily and dynamically balance the distribution of power and processors across disparate workloads. For example, email and web serving workloads are likely to make varying demands on system resources at different times of the day: the email workload may demand the most processing power as the office workday begins, while a retail-shopping web serving workload may demand the most power in the early evening when people get home from work. An organization running these two workloads on a PowerLinux server can ensure that end user demands are met as PowerVM dynamically moves resources around to support the changing requirements while each workload hits its peaks and valleys during the day. In alternative systems, such as VMware, memory cannot be removed from a running virtual machine. As a result, if a workload no longer requires all the memory it once did, the virtual machine must be restarted in order to make the excess memory available to workloads running in other virtual machines. This means that the applications must be taken offline, which detrimentally affects end users.
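The email-versus-web example can be caricatured as a time-of-day policy. This is purely illustrative: the hours and shares below are invented, and PowerVM's actual balancing reacts to measured demand rather than a fixed schedule:

```python
# Toy time-of-day policy for the email/web example above. Hours and
# shares are invented; real hypervisor balancing is demand-driven,
# not scheduled.

def processor_shares(hour):
    """Fraction of the shared processor pool given to each workload."""
    if 8 <= hour < 12:                      # morning office email peak
        return {"email": 0.6, "web": 0.4}
    if 18 <= hour < 23:                     # evening retail web peak
        return {"email": 0.3, "web": 0.7}
    return {"email": 0.5, "web": 0.5}       # otherwise split evenly

print(processor_shares(9), processor_shares(20))
```

The point of the paragraph above is that this reallocation happens without restarting either virtual machine, unlike systems that cannot remove memory from a running VM.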


Managing Pools of Processors

PowerLinux in combination with PowerVM provides a sophisticated technique for managing pools of memory and CPU among virtual machines on a single server or across multiple servers. The environment allows memory or CPU resources to be shifted from one workload to another based on need; for example, memory can be dynamically allocated to a virtualized workload at runtime. In addition, shared processor pools can increase throughput by allowing for the automatic, non-disruptive balancing of processing power between partitions assigned to shared pools. Shared processor pools also make it possible to reduce processor-based software licensing costs by capping the processor core resources used by a group of partitions. All of this is set up using the single-system Integrated Virtualization Manager software that is included with PowerVM, or a separate Hardware Management Console (HMC) can be used. A comparable VMware environment requires a separate management server, which can add cost, something that is especially impactful in smaller environments.
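The licensing point is simple arithmetic: when per-core licenses are counted against a capped pool rather than the whole machine, the cap bounds the bill. A sketch with a hypothetical per-core price:

```python
# Sketch: a capped shared processor pool bounds per-core software
# licensing cost. The per-core price is a hypothetical placeholder.

def license_cost(machine_cores, pool_cap_cores, price_per_core):
    """License bill when charging is limited to a capped processor pool."""
    return min(machine_cores, pool_cap_cores) * price_per_core

uncapped = license_cost(16, 16, 2_000)   # licensed for the whole machine
capped = license_cost(16, 4, 2_000)      # database pool capped at 4 cores
print(uncapped, capped)
```

Capping the pool at 4 of 16 cores cuts the hypothetical bill to a quarter, while the partitions inside the pool still share those 4 cores flexibly.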
Reliability and Security

The PowerLinux architecture incorporates a broad set of features to help ensure the reliability, availability, and serviceability of the infrastructure environment. Capabilities such as First Failure Data Capture, Processor Instruction Retry, Alternate Processor Recovery, and Live Application and Partition Mobility are designed to help eliminate planned and unplanned outages. In addition to maintaining reliability, the PowerLinux architecture is also security-aware, based on the advanced dynamic logical partitioning (LPAR) capabilities of the PowerVM hypervisor. The partitioning occurs at the firmware level, which leads to major security improvements as compared with software-based virtualization technologies. The net result is that a single partition, or virtual machine, can act as a separate Linux operating environment. Virtual machines can have dedicated or shared processor resources, and application development and testing can be performed in secure, independent domains while production is isolated to its own domain on the same system. There are no known security vulnerabilities identified for PowerVM. In contrast, a Windows Server user may need to take their system down for security patching as often as every four weeks.

Conclusion: The Value of Following a Best Practices Approach


In a world that demands scalability, modularity, quality, and fast performance, a best practices approach is key to ensuring predictable results for the business. IBM's PowerLinux Open Source Infrastructure Services solution is a workload-optimized system that meets or exceeds best practices for a virtualized infrastructure at a reasonable price point. The PowerLinux solution can help turn today's IT infrastructure into a well-tuned utility, providing a more flexible, scalable, and valuable asset to the business.


About Hurwitz & Associates


Hurwitz & Associates is a consulting, market research and analyst firm that focuses on how technology solutions solve real world business problems. The firm's research concentrates on disruptive technologies such as Cloud Computing, Service Oriented Architecture and Web 2.0, Service Management, Information Management, and Social and Collaborative Computing. We help our customers understand how these technologies are reshaping the market and how they can apply them to meet business objectives. The team provides direct customer research, competitive analysis, actionable strategic advice, and thought leadership. Additional information on Hurwitz & Associates can be found at www.hurwitz.com.

This document was developed with IBM funding. Although the document may utilize publicly available material from various vendors, including IBM, it does not necessarily reflect the positions of such vendors on the issues addressed in this document. Copyright 2012, Hurwitz & Associates All rights reserved. No part of this publication may be reproduced or stored in a retrieval system or transmitted in any form or by any means, without the prior written permission of the copyright holder. Hurwitz & Associates is the sole copyright owner of this publication. All trademarks herein are the property of their respective owners. 175 Highland Avenue, 3rd Floor Needham, MA 02494 Tel: 617-597-1724 www.hurwitz.com

POL03111-USEN-00
