Storage Performance/Sizing
Important to scale performance to the total workload requirements of each VM
Spindles are still key: don't migrate 20 physical servers with 40 spindles each to a Hyper-V host with 10 spindles
Don't use leftover servers as a production SAN
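The spindle mismatch above can be made concrete with a back-of-the-envelope calculation. A minimal sketch, assuming rough rule-of-thumb per-spindle IOPS figures (hypothetical values, not vendor specifications):

```python
# Hypothetical sketch: size a Hyper-V host's storage for consolidated VMs.
# Per-spindle IOPS figures are rough rules of thumb, not measured specs.
IOPS_PER_SPINDLE = {"7.2k SATA": 80, "10k SAS": 130, "15k SAS": 180}

def spindles_required(total_iops, disk_type):
    """Minimum spindle count to serve an aggregate IOPS workload."""
    per_disk = IOPS_PER_SPINDLE[disk_type]
    return -(-total_iops // per_disk)  # ceiling division

# 20 physical servers averaging 400 IOPS each -> 8,000 IOPS aggregate.
total = 20 * 400
print(spindles_required(total, "15k SAS"))  # 45 spindles, not 10
```

The point of the arithmetic: consolidating hosts consolidates their I/O demand too, so the target host needs spindle count sized to the aggregate workload, not to its own footprint.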
Storage Connectivity
From parent partition
Direct Attached (SAS/SATA)
Fibre Channel
iSCSI
Not Supported In
Windows XP Professional x86 All other operating systems
May not be required on Core with no other roles
Run antivirus in virtual machines
Hyper-V Storage...
Performance-wise, from fastest to slowest:
Fixed Disk VHDs/Pass Through Disks
The same in terms of performance with R2
Guest Clustering
Via iSCSI only
Disk is offline in the parent partition
[Chart: 4K random read and 4K random write throughput (MBps, log scale) for Native Physical, Fixed VHD in Win7, Fixed VHD in Win2K8, and Dynamic VHD in Win7]
Windows Server 2008 R2: 85% of native
Differencing VHDs
Performance vs chain length
[Chart: throughput (MBps, log scale, 1-1000) vs. differencing-chain length (1-64) for 64K sequential reads (R2), 64K sequential reads (v1), and 4K random reads (R2)]
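One way to see why performance drops with chain length: a differencing VHD holds only the blocks written since it was created, so a read that misses in the child must walk up the parent chain. A toy in-memory model (a conceptual sketch, not the real VHD on-disk format or Microsoft's implementation):

```python
# Conceptual model: a differencing disk stores only blocks written to it;
# reads of other blocks fall through to the parent, so longer chains mean
# more lookups per read.
class DiffDisk:
    def __init__(self, parent=None):
        self.parent = parent
        self.blocks = {}   # block number -> data

    def write(self, block, data):
        self.blocks[block] = data

    def read(self, block):
        """Return (data, hops), counting how far up the chain we walked."""
        disk, hops = self, 0
        while disk is not None:
            if block in disk.blocks:
                return disk.blocks[block], hops
            disk, hops = disk.parent, hops + 1
        return b"\x00", hops  # unallocated block reads as zeros

base = DiffDisk()
base.write(0, b"base")
chain = base
for _ in range(8):            # build an 8-deep differencing chain
    chain = DiffDisk(chain)

data, hops = chain.read(0)    # block only present in the base disk
print(data, hops)             # b'base' after 8 parent hops
```

Writes always land in the newest child, but cold reads pay for every link in the chain, which matches the throughput falloff the chart shows as chain length grows.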
Passthrough Disks
When to use
Performance is not the only consideration
If you need support for storage management software
Backup & recovery applications which require direct access to disk (VSS/VDS providers)
Booting a child VM from SAN is supported using an iSCSI boot with PXE solution (e.g., emBoot/Doubletake)
Must use a legacy NIC
iSCSI Direct
Microsoft iSCSI Software Initiator runs transparently from within the VM
VM operates with full control of the LUN; LUN not visible to parent
iSCSI initiator communicates with the storage array over the TCP stack
Best for application transparency
LUNs can be hot-added and hot-removed without requiring a reboot of the VM (2008 and 2008 R2)
VSS hardware providers run transparently within the VM; backup/recovery runs in the context of the VM
Enables guest clustering scenarios
Fibre Channel at 8 Gb and iSCSI at 10 Gb will become more common
As throughput grows, the requirement to support higher I/O to disks also grows
The Microsoft iSCSI Software Initiator performs very well at 10 Gb wire speed
10 Gb Ethernet adoption is ramping up
Driven by increasing use of virtualization
Jumbo Frames
Offers significant performance gains for TCP connections, including iSCSI
Max frame size 9K
Reduces TCP/IP overhead by up to 84%
Must be enabled at all end points (switches, NICs, target devices)
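The "up to 84%" figure can be sanity-checked with simple arithmetic: fixed per-frame headers are paid once per frame, and jumbo frames carry roughly six times the payload. A rough sketch (header and payload sizes are simplified assumptions, ignoring TCP options and partial frames):

```python
# Why jumbo frames cut per-frame TCP/IP overhead: fewer frames per byte.
# Illustrative header sizes: Ethernet 18 B + IPv4 20 B + TCP 20 B.
HEADERS = 18 + 20 + 20

def frames_needed(data_bytes, payload_per_frame):
    """Number of frames to carry data_bytes at a given payload size."""
    return -(-data_bytes // payload_per_frame)  # ceiling division

def overhead_bytes(data_bytes, payload_per_frame):
    """Total header bytes spent moving data_bytes."""
    return frames_needed(data_bytes, payload_per_frame) * HEADERS

data = 100 * 1024 * 1024            # 100 MiB transfer
std = overhead_bytes(data, 1460)    # standard 1500 B MTU, ~1460 B payload
jumbo = overhead_bytes(data, 8960)  # 9000 B jumbo MTU, ~8960 B payload
reduction = 1 - jumbo / std
print(f"overhead reduction: {reduction:.0%}")  # ~84%
```

The reduction is essentially the payload ratio (1 - 1460/8960 ≈ 84%), which is where the slide's figure comes from.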
[Diagram: VM networking without and with VMQ. Without VMQ, the VM NICs in VM1 and VM2 send TCP/IP traffic over VMBus through a single software switch to the physical NIC. With VMQ, the NIC's miniport driver exposes per-VM hardware queues (Q1, Q2, plus a default queue) and a switch/routing unit, so traffic is sorted per port before crossing VMBus.]
Quad-core Intel server, Windows* Server 2008 R2 Beta, Intel 82598 10 Gigabit Ethernet Controller
Near line-rate throughput with VMDq for 4 VMs: throughput increased from 5.4 Gbps to 9.3 Gbps
*Other names and brands may be claimed as the property of others.
Manageability
iSCSI Quick Connect
Improved SAN configuration and usability
Storage Management support for SAS
Scalability
Storport support for >64 cores
Scale-up storage workloads
Improved scalability for iSCSI & Fibre Channel SANs
Improved solid-state disk performance (70% reduction in latency)
Automation
MPIO Datacenter Automation
MPIO: automated setting of the default load-balance policy
Reliability
Additional redundancy for boot from SAN: up to 32 paths
Diagnosability
Storport error log extensions
Multipath health & statistics reporting
Configuration reporting for MPIO
Configuration reporting for iSCSI
High Availability with Hyper-V using MPIO & Fibre Channel SAN
Protects against loss of data path during firmware upgrades on storage controller
Additional sessions to the target can be added through MPIO directly from the guest
Additional connections can be added through MCS with iSCSI using iSCSI direct
Hyper-V Networking
Two 1 Gb/E physical network adapters at a minimum
One for management
One (or more) for VM networking
Dedicated NIC(s) for iSCSI
Connect the parent to a backend management network
Expose only guests, never the parent, to internet traffic
[Diagram: Hyper-V architecture. The parent partition runs the VM Service, WMI Provider, and VSPs over the Windows kernel; child partitions (VM 1-3) run applications over VSCs on Windows or Linux kernels; all partitions communicate over VMBus above the Windows hypervisor. NIC 1 is reserved for management; NICs 2-4 back virtual switches 1-3 for VM traffic.]
[Diagram: the same Hyper-V architecture with NIC 1 for management, NIC 2 dedicated to iSCSI, and NICs 3-4 backing virtual switches 2-3.]
No Problem
Hyper-V R2 Manager includes option to set bindings per virtual switch
Avanade
[Diagram: production VMs on a 1 Gbit/s LAN connected to an iSCSI SAN]
"Hyper-V allows us to provision new servers quickly and more efficiently utilize hardware resources. Using Hyper-V with our existing NetApp infrastructure provided a cost-effective and flexible solution without sacrificing performance."
Andy Schneider, infrastructure architect, Avanade
Lionbridge Technologies
"Hyper-V has allowed us to consolidate 300+ servers to virtual machines. This configuration, when combined with Microsoft's iSCSI, Fibre Channel and SAN Gateway multipathing support, provides great flexibility in storage options. We chose FalconStor's SAN Gateway, which enables advanced storage features to be used with any SAN storage and our iSCSI-based virtual machines."
Frank Smith, Sr. Systems Engineer
Indiana University:
Auxiliary Information Technology
Terminal Services
SharePoint Server Farm
Pain Points
High growth and change
No disaster protection
Poor storage utilization
Complex storage management
Solution
Windows Server 2008 iSCSI hosts
30 TB iSCSI SAN with MPIO load balancing
Lefthand MPIO DSM
Two storage pools: SAS and SATA
Multi-site SAN between two sites
Benefits
High availability across sites
Reduced storage management costs
"When combining Hyper-V and native Server 2008 technologies such as Microsoft MPIO and the Microsoft iSCSI Software Initiator, our administration was greatly simplified."
Michael Johnston, VP of Information Technology
Virtualization Performance
www.virtualizationperformance.com
File Server VM
"An iSCSI SAN allowed us to control costs and deliver better services to our clients."
Stephen Ames, Virtualization Performance
Networking Enhancements
TCP/IP Offload support
VMQ & Jumbo Frame support
Manage Remotely
Free
For $500 add VMM 2008 R2 (Workgroup Edition) to manage MS Hyper-V Server R2:
Physical to Virtual Conversion (P2V); Quick Storage Migration; Library Management; Heterogeneous Management; PowerShell Automation; Self-Service Portal and more
Deployment Considerations
Minimize risk to the Parent Partition
Use Server Core
Don't run arbitrary apps; no web surfing
Run your apps and services in guests
Storage:
Cluster Shared Volumes
Storage with Windows Logo + FCCP
Multi-Path I/O (MPIO) is your friend
Networking:
Standardize the names of your virtual switches
Multiple interfaces
CSV uses a separate network
More
Mitigate Bottlenecks
Processors
Memory
Storage
Don't run everything off a single spindle
Networking
VHD Compaction/Expansion
Run it on a non-production system
Use .isos
Great performance
Can be mounted and unmounted remotely
Having them in the SCVMM Library is fast & convenient
Create virtual machine
Install guest operating system
Install integration components
Install anti-virus
Install management agents
SYSPREP
Add it to the VMM Library
Create VMs using 2-way (two virtual processors) to ensure an MP HAL
Conclusions
Significant performance gains between Server 2008 and Server 2008 R2 for enterprise storage workloads
Performance improvements in Hyper-V, MPIO, iSCSI, Core storage stack & Networking stack
For general workloads with multiple VMs, the performance delta between SCSI passthrough and VHD is minimal
iSCSI performance, especially in iSCSI direct scenarios, is vastly improved
Additional Resources
Microsoft MPIO: http://www.microsoft.com/mpio
MPIO DDK: MPIO DSM sample, interfaces, and libraries will be included in the Windows 7 DDK/SDK
Additional Resources
Hyper-V Planning & Deployment Guide
http://technet.microsoft.com/en-us/library/cc794762.aspx
Partner References
Intel: http://www.intel.com
Emulex: http://www.emulex.com
Alacritech: http://www.alacritech.com
NetApp: http://www.netapp.com
3Par: http://3par.com
iStor: http://istor.com
Lefthand Networks: http://www.lefthandnetworks.com
Doubletake: http://www.doubletake.com
Compellent: http://www.compellent.com
Dell/Equallogic: http://www.dell.com
Falconstor: http://www.falconstor.com
Resources
www.microsoft.com/teched
Sessions On-Demand & Community
www.microsoft.com/learning
Microsoft Certification & Training Resources
http://microsoft.com/technet
Resources for IT Professionals
http://microsoft.com/msdn
Resources for Developers
2009 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.