
Jeff Woolsey
Principal Group Program Manager
Windows Server, Hyper-V
WSV312

Session Objectives And Takeaways


Understand the storage options with Hyper-V, as well as use cases for DAS and SAN
Learn what's new in Windows Server 2008 R2 for storage and Hyper-V
Understand the different high-availability options for Hyper-V with SANs
Learn about the performance improvements in the VHD, pass-through, and iSCSI Direct scenarios

Storage Performance/Sizing
Important to scale performance to the total workload requirements of each VM
Spindles are still key
Don't migrate 20 physical servers with 40 spindles each to a Hyper-V host with 10 spindles
Don't use leftover servers as a production SAN

Windows Storage Stack


Bus: scan of up to 8 buses (Storport)
Targets: up to 255
LUNs: up to 255
Support for volumes up to 256 TB
>2 TB volumes supported since Windows Server 2003 SP1

Common question: What is the supported maximum transfer size?


Dependent on the adapter/miniport (e.g. QLogic/Emulex)

Hyper-V Storage Parameters


VHD maximum size: 2040 GB
Physical disk size is not limited by Hyper-V
Up to 4 IDE devices
Up to 4 SCSI controllers with up to 64 devices each
Optical devices only on IDE

Storage Connectivity
From parent partition
Direct attached (SAS/SATA), Fibre Channel, iSCSI

Network attached storage not supported


Except for ISOs

Hot add and remove


Virtual disks on the SCSI controller only

ISOs on network shares


Machine account access to the share
Constrained delegation
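For example, a minimal sketch of granting a Hyper-V host's computer account read access to an ISO share; the share name, path, and CONTOSO\HYPERV01$ account are placeholders, and constrained delegation is configured separately on the host's computer object in Active Directory:

    net share ISOs=D:\ISOs /GRANT:CONTOSO\HYPERV01$,READ
    icacls D:\ISOs /grant "CONTOSO\HYPERV01$:(OI)(CI)R"   (grants NTFS read on the folder and its contents)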

SCSI Support in VMs


Supported in:
Windows XP Professional x64
Windows Server 2003
Windows Server 2008 & 2008 R2
Windows Vista & Windows 7
SuSE Linux

Not supported in:
Windows XP Professional x86
All other operating systems

Requires integration services installed

Antivirus and Hyper-V


Exclude:
VHDs & AVHDs (or their directories)
The VM configuration directory
VMMS.exe and VMWP.exe

May not be required on Server Core with no other roles installed
Run antivirus inside the virtual machines

Encryption and Compression


BitLocker on the parent partition: supported
Encrypting File System (EFS):
Not supported on the parent partition
Supported in virtual machines

NTFS compression (parent partition):
Allowed in Windows Server 2008
Blocked in Windows Server 2008 R2

Step by Step Instructions

Hyper-V Storage...
Performance-wise, from fastest to slowest:
Fixed VHDs / pass-through disks
The same in terms of performance with R2

Dynamically Expanding VHDs


Grow as needed
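As an illustration (paths and sizes are placeholders), both fixed and dynamically expanding VHDs can be created from the command line with diskpart on Windows Server 2008 R2, in addition to the New Virtual Hard Disk wizard:

    diskpart
    DISKPART> create vdisk file=D:\VMs\data-fixed.vhd maximum=102400 type=fixed          (100 GB fixed VHD)
    DISKPART> create vdisk file=D:\VMs\data-dynamic.vhd maximum=102400 type=expandable   (100 GB dynamically expanding VHD)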

Pass Through Disks


Pro: the VM writes directly to a disk/LUN without encapsulation in a VHD
Cons:
You can't use VM snapshots
Dedicates an entire disk to one VM

More Hyper-V Storage


Hyper-V provides flexible storage options
DAS: SCSI, SATA, eSATA, USB, FireWire
SAN: iSCSI, Fibre Channel, SAS

High Availability/Live Migration


Requires block-based, shared storage

Guest Clustering
Via iSCSI only

VM Setting No Pass Through

Computer Management: Disk

Taking a disk offline

Disk is offline

Pass Through Configured
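As a command-line alternative to Computer Management, a hedged sketch of taking a disk offline in the parent before configuring it as a pass-through disk (disk number 2 is a placeholder; pick the correct disk from list disk):

    diskpart
    DISKPART> list disk
    DISKPART> select disk 2
    DISKPART> attributes disk clear readonly
    DISKPART> offline disk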

Disk type comparison (Read)


[Chart: throughput (MBps, log scale) for 64K sequential read and 4K random read, comparing native physical disk, fixed VHD, dynamic VHD, and pass-through on Win7 (R2) and Win2K8.]

Hyper-V R2 Fixed Disks


Fixed virtual hard disks (write):
Windows Server 2008 (R1): ~96% of native
Windows Server 2008 R2: equal to native

Fixed virtual hard disks vs. pass-through:
Windows Server 2008 (R1): ~96% of pass-through
Windows Server 2008 R2: equal to native

Hyper-V R2 Dynamic Disks


Massive performance boost
64K sequential write:
Windows Server 2008 R2: 94% of native (equal to Hyper-V R1 fixed disks)

4K random write:
Windows Server 2008 R2: 85% of native

Disk layout - FAQ


Assuming Integration Services are installed:
Do I use IDE or SCSI?
One IDE channel or two?
One VHD per SCSI controller, or multiple VHDs on a single SCSI controller?
R2: VHDs can be hot added to the virtual SCSI controller

Disk layout - results


[Chart: throughput (MBps, log scale) for 64K sequential read/write and 4K random read/write, comparing 2 physical disks in the parent, 2 fixed VHDs on 2 SCSI controllers, 2 fixed VHDs on 1 SCSI controller, 2 fixed VHDs on 2 IDE controllers, and 2 fixed VHDs on 1 IDE controller.]

Differencing VHDs
Performance vs chain length
[Chart: throughput (MBps, log scale) vs. differencing-VHD chain length (1 to 64) for 64K sequential reads and 4K random reads, on R2 and v1.]

Passthrough Disks
When to use
Performance is not the only consideration
Use them if you need support for storage management software:
Backup & recovery applications which require direct access to the disk
VSS/VDS providers

Allows the VM to communicate via in-band SCSI, unfiltered (application compatibility)

Storage Device Ecosystem


Storage device support maps to the same support that exists for physical servers
Advanced scenarios such as Live Migration require shared storage
Hyper-V supports both Fibre Channel and iSCSI SANs connected from the parent
Fibre Channel SANs still represent the largest installed base for SANs and see high usage with virtualization
Live Migration is supported with storage arrays which have obtained the Designed for Windows logo and which pass Cluster Validation

Storage Hardware & Hyper-V


Storage hardware that is qualified with Windows Server is qualified for Hyper-V
Applies to running devices from the Hyper-V parent
Storage devices qualified for Windows Server 2008 R2 are qualified with Server 2008 R2 Hyper-V
No additional storage device qualification is required for Hyper-V

SAN Boot and Hyper-V


Booting Hyper-V Host from SAN is supported
Fibre Channel or iSCSI from parent

Booting a child VM from SAN is supported using an iSCSI boot with PXE solution (e.g. emBoot/Doubletake)
Must use the legacy network adapter

Native VHD boot


Booting a physical system from a local VHD is a new feature in Windows Server 2008 R2
Booting a VHD located on a SAN (iSCSI or FC) is not currently supported (being considered for the future)
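For reference, a sketch of the bcdedit steps for native VHD boot on Windows Server 2008 R2 / Windows 7, assuming a prepared VHD at C:\vhd\boot.vhd; {guid} stands for the identifier that the copy command prints:

    bcdedit /copy {current} /d "Native VHD Boot"
    bcdedit /set {guid} device vhd=[C:]\vhd\boot.vhd
    bcdedit /set {guid} osdevice vhd=[C:]\vhd\boot.vhd
    bcdedit /set {guid} detecthal on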

iSCSI Direct
The Microsoft iSCSI Software Initiator runs transparently from within the VM
The VM operates with full control of the LUN
The LUN is not visible to the parent
The iSCSI initiator communicates with the storage array over the TCP stack
Best for application transparency
LUNs can be hot added and hot removed without requiring a reboot of the VM (2008 and 2008 R2)
VSS hardware providers run transparently within the VM
Backup/recovery runs in the context of the VM
Enables the guest clustering scenario
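For illustration, a minimal iSCSI Direct login from inside the guest using the inbox iscsicli.exe (the portal address and target IQN are placeholders); a production setup would typically add persistent logins plus MPIO or MCS for redundancy:

    iscsicli AddTargetPortal 192.168.10.20 3260
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.1992-08.com.example:storage.lun1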

High Speed Storage & Hyper-V


Larger virtualization workloads require higher throughput
True for all scenarios: VHD, pass-through, iSCSI Direct

8 Gb Fibre Channel & 10 Gb iSCSI will become more common
As throughput grows, the requirement to support higher IO to the disks also grows

High Speed Storage & Hyper-V


Customers concerned about performance should not use a single 1 Gb Ethernet NIC port to connect to iSCSI storage
Multiple NIC ports, with throughput aggregated using MPIO or MCS, are recommended

The Microsoft iSCSI Software Initiator performs very well at 10 Gb wire speed
10 Gb Ethernet adoption is ramping up, driven by the increasing use of virtualization

8 Gb Fibre Channel & 10 Gb iSCSI are becoming more common
As throughput grows, the requirement to support IO to the disks also grows

Jumbo Frames
Offers significant performance gains for TCP connections, including iSCSI
Maximum frame size of ~9 KB
Reduces TCP/IP overhead by up to 84%
Must be enabled at all endpoints (switches, NICs, target devices)

The virtual switch is defined as an endpoint


The virtual NIC is defined as an endpoint

Jumbo Frames in Hyper-V R2


Added support in the virtual switch
Added support in the virtual NIC
Integration components required
How to validate that jumbo frames are configured end to end:
ping -n 1 -l 8000 -f <hostname>   -l (length), -f (don't fragment the packet into multiple Ethernet frames), -n (count)
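As an additional check (not shown on the slide), the effective MTU per interface in the parent or guest can be listed with the inbox netsh tool:

    netsh interface ipv4 show subinterfaces   (the MTU column should show the jumbo value on iSCSI-facing interfaces)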

Windows* 2008 Hyper-V


Network I/O Path
Data packets get sorted and routed to respective VMs by the VM Switch
[Diagram: in the management OS, the virtual machine switch (routing, VLAN filtering, data copy) sits above the miniport driver and physical NIC; VM1 and VM2 each attach a VM NIC (TCP/IP) to a switch port over VMBus, with Ethernet below the NIC.]

Windows Server 2008 R2 VMQ


Data packets are sorted into multiple queues in the Ethernet controller based on MAC address and/or VLAN tags
The sorted and queued data packets are then routed to the VMs by the VM switch
Enables the data packets to DMA directly into the VMs
Removes the data copy between the memory of the management OS and the VMs' memory
[Diagram: with VMQ, the NIC's switch/routing unit sorts incoming Ethernet packets into queues (Q1, Q2, default queue); the virtual machine switch in the management OS (routing, VLAN filtering) delivers the queued packets to VM NIC1 and VM NIC2 in VM1 and VM2 over VMBus, with DMA directly into VM memory.]

Intel tests with Microsoft VMQ


[Chart: throughput in Mbps vs. number of VMs (1-8), with and without VMQ. Source: Microsoft Lab, March 2009.]

Quad-core Intel server, Windows* 2008 R2 Beta, Intel 82598 10 Gigabit Ethernet Controller
Near line-rate throughput with VMDq for 4 VMs

ntttcp benchmark, standard frame size (1500 bytes)

More than 25% throughput gain with VMDq/VMQ as VMs scale

Throughput increase from 5.4 Gbps to 9.3 Gbps
*Other names and brands may be claimed as the property of others.

Hyper-V Performance Improvements

Enterprise Storage Features


Performance:
iSCSI digest offload
iSCSI increased performance
MPIO: new load-balancing algorithm

Manageability:
iSCSI Quick Connect
Improved SAN configuration and usability
Storage management support for SAS

Scalability:
Storport support for >64 cores
Scale-up storage workloads
Improved scalability for iSCSI & Fibre Channel SANs
Improved solid-state disk performance (70% reduction in latency)

Automation:
MPIO datacenter automation
MPIO: automated setting of the default load-balance policy

Reliability:
Additional redundancy for boot from SAN: up to 32 paths

Diagnosability:
Storport error log extensions
Multipath health & statistics reporting
Configuration reporting for MPIO
Configuration reporting for iSCSI

iSCSI Quick Connect

High Availability with Hyper-V using MPIO & Fibre Channel SAN

[Diagram: VHDs mapped onto SAN LUNs over redundant MPIO paths to a Fibre Channel SAN.]

MCS & MPIO with Hyper-V


Provides high availability to storage arrays
Especially important in virtualized environments to reduce single points of failure
Load balancing & failover using redundant HBAs, NICs, switches, and fabric infrastructure
Aggregates bandwidth for maximum performance
MPIO is supported with Fibre Channel, iSCSI, and shared SAS
Two options for multipathing with iSCSI:
Multiple Connections per Session (MCS)
Microsoft MPIO (Multipathing Input/Output)

Protects against loss of the data path during firmware upgrades on the storage controller

Configuring MPIO with Hyper-V


MPIO:
Connect from the parent
Applies to: creating VHDs for each VM, and pass-through disks (see the sketch below)

Additional sessions to the target can also be added through MPIO directly from the guest
Additional connections can be added through MCS with iSCSI using iSCSI Direct
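A hedged sketch of claiming iSCSI-attached devices for Microsoft MPIO in the parent with the inbox mpclaim.exe; the bus-type string is the well-known iSCSI identifier, and policy 4 (Least Queue Depth) is just an example choice:

    mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"   (claims all iSCSI-attached MPIO-capable devices; -r reboots the host)
    mpclaim -s -d                                 (lists MPIO-managed disks and their policies)
    mpclaim -L -M 4                               (sets the default load-balance policy)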

iSCSI Perf Best Practices with Hyper-V


Standard networking & iSCSI best practices apply
Use jumbo frames
Use dedicated NIC ports for:
iSCSI traffic (server to SAN), multiple to scale
Client-server (LAN) traffic, multiple to scale
Cluster heartbeat (if using a cluster)
Hyper-V management

Hyper-V Enterprise Storage Testing Performance Configuration


Windows Server 2008 R2 Hyper-V
Microsoft MPIO: 4 sessions, 64K request size, 100% read
Microsoft iSCSI Software Initiator
Intel 10 Gb/E NIC:
RSS enabled (applicable to the parent only)
Jumbo frames (9000-byte MTU)
LSO v2 (offloads packets up to 256K)
LRO
Hyper-V Server 2008 R2
NetApp FAS 3070
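For reference (not on the slide), the parent's global TCP settings such as RSS can be verified and toggled with netsh:

    netsh int tcp show global
    netsh int tcp set global rss=enabled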

Hyper-V Networking
Two 1 Gb/E physical network adapters at a minimum:
One for management
One (or more) for VM networking
Dedicated NIC(s) for iSCSI
Connect the parent to a back-end management network
Only expose guests to Internet traffic

Hyper-V Network Configurations


Example 1:
The physical server has 4 network adapters
NIC 1: assigned to the parent partition for management
NICs 2/3/4: assigned to virtual switches for virtual machine networking
Storage is non-iSCSI, such as direct-attach SAS or Fibre Channel

Hyper-V Setup & Networking 1

Hyper-V Setup & Networking 2

Hyper-V Setup & Networking 3

Each VM on its own Switch


[Diagram: Hyper-V architecture with each VM on its own virtual switch. The parent partition (Windows Server 2008) runs the VM service, WMI provider, VM worker processes, and VSPs; child partitions (Windows and Linux VMs) run VSCs and communicate over VMBus above the Windows hypervisor. NIC 1 is reserved for management; NICs 2, 3, and 4 are bound to VSwitch 1, VSwitch 2, and VSwitch 3 respectively for VM networking, on Designed for Windows Server hardware.]

Hyper-V Network Configurations


Example 2:
The server has 4 physical network adapters
NIC 1: assigned to the parent partition for management
NIC 2: assigned to the parent partition for iSCSI
NICs 3/4: assigned to virtual switches for virtual machine networking

Hyper-V Setup, Networking & iSCSI

Now with iSCSI


[Diagram: the same Hyper-V architecture, now with NIC 2 dedicated to iSCSI in the parent partition. NIC 1 remains for management, and NICs 3 and 4 are bound to VSwitch 2 and VSwitch 3 for VM networking.]

Networking: Parent Partition

Networking: Virtual Switches

New in R2: Core Deployment


There's no GUI in a Core deployment, so how do I configure which NICs are bound to switches and which are kept separate for the parent partition?

No problem:
Hyper-V R2 Manager includes an option to set bindings per virtual switch


Avanade
[Diagram: a 4-node Hyper-V cluster hosting production VMs, connected over a 1 Gbit/s LAN to an iSCSI SAN on a NetApp Fabric-Attached Storage (FAS) system.]

"Hyper-V allows us to provision new servers quickly and more efficiently utilize hardware resources. Using Hyper-V with our existing NetApp infrastructure provided a cost-effective and flexible solution without sacrificing performance." - Andy Schneider, infrastructure architect, Avanade

Lionbridge Technologies

[Diagram: Lionbridge global IT running Windows Server 2008 Hyper-V with an iSCSI SAN and a Fibre Channel SAN behind a SAN gateway, hosting SQL Server, a Windows Server 2008 failover cluster, file shares, and Exchange 2007; 300+ Hyper-V virtual machines.]

"Hyper-V has allowed us to consolidate 300+ servers to virtual machines. This configuration, when combined with Microsoft's iSCSI, Fibre Channel and SAN gateway multipathing support, provides great flexibility in storage options. We chose FalconStor's SAN Gateway, which enables advanced storage features to be used with any SAN storage and our iSCSI-based virtual machines." - Frank Smith, Sr. Systems Engineer

Indiana University:
Auxiliary Information Technology

[Diagram: Hyper-V hosts running SQL Server on Windows Server 2008, plus Windows Server 2008 file servers, connected through a Fibre Channel switch with 4 Gb dual-path HBAs.]

Jackson Energy Authority


Applications used:
Exchange, SharePoint, Dynamics
Windows Server 2003 / 2008 / Hyper-V
Terminal Services, SharePoint server farm

Windows Server 2008 components:
Microsoft iSCSI Software Initiator
Microsoft MPIO

Pain points:
High growth and change
No disaster protection
Poor storage utilization
Complex storage management

Solution:
Windows Server 2008 iSCSI hosts
30 TB iSCSI SAN with MPIO load balancing
Lefthand MPIO DSM
Two storage pools: SAS and SATA
Multi-site SAN between two sites

Benefits:
High availability across sites
Reduced storage management costs
Increased flexibility in dealing with change and growth

[Diagram: highly available Terminal Server infrastructure spanning Site A and Site B, with terminal servers, Exchange mail servers, and Dynamics connected over switched Gb Ethernet to a multi-site iSCSI SAN.]

"When combining Hyper-V and native Server 2008 technologies such as Microsoft MPIO and the Microsoft iSCSI software initiator, our administration was greatly simplified." - Michael Johnston, VP of Information Technology

Virtualization Performance
www.virtualizationperformance.com

[Diagram: sales SQL database, file server, and Exchange mail server VMs on Windows Server 2008 + Hyper-V, connected over switched Gb Ethernet to an iSCSI SAN built on iStor iSCSI disk arrays.]

"An iSCSI SAN allowed us to control costs and deliver better services to our clients." - Stephen Ames, Virtualization Performance

Microsoft Hyper-V Server V2


New features:
Live Migration
High availability
New processor support: Second Level Address Translation, Core Parking
Networking enhancements: TCP/IP offload support, VMQ & jumbo frame support
Hot add/remove of virtual storage
Enhancements to SCONFIG
Enhanced scalability
Manage remotely

Hyper-V Server V1 vs. V2


Feature: Microsoft Hyper-V Server 2008 | Microsoft Hyper-V Server V2
Processor support: up to 4 processors | up to 8 processors
Physical memory support: up to 32 GB | up to 1 TB
Virtual machine memory support: up to 32 GB total (e.g. 31 x 1 GB VMs or 5 x 6 GB VMs) | 64 GB of memory per VM
Live Migration: No | Yes
High availability: No | Yes
Management options: free Hyper-V Manager MMC, SCVMM | free Hyper-V Manager MMC, SCVMM

Live Migration $$ Comparison


Configuration: Hyper-V Server R2 | VMware vSphere
3-node cluster, 2-socket servers: Free | $13,470
3-node cluster, 4-socket servers: Free | $26,940
5-node cluster, 2-socket servers: Free | $22,450
5-node cluster, 4-socket servers: Free | $44,900

For $500 add VMM 2008 R2 (Workgroup Edition) to manage MS Hyper-V Server R2:
Physical to Virtual Conversion (P2V); Quick Storage Migration; Library Management; Heterogeneous Management; PowerShell Automation; Self-Service Portal and more

Deployment Considerations
Minimize risk to the parent partition:
Use Server Core
Don't run arbitrary apps, no web surfing
Run your apps and services in guests

Moving VMs from Virtual Server to Hyper-V


FIRST: Uninstall the VM Additions

Two physical network adapters at a minimum:
One for management (use a VLAN too)
One (or more) for VM networking
Dedicated iSCSI NICs
Connect to a back-end management network
Only expose guests to Internet traffic

Don't forget the ICs!


Emulated devices vs. VSCs (synthetic devices)

Cluster Hyper-V Servers

Live Migration/HA Best Practices


Best practices:
Cluster nodes:
Hardware with the Windows logo + Failover Cluster Configuration Program (FCCP); pass cluster validation (see the sketch below)

Storage:
Cluster Shared Volumes
Storage with the Windows logo + FCCP
Multipath I/O (MPIO) is your friend

Networking:
Standardize the names of your virtual switches
Multiple interfaces
CSV uses a separate network
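For the cluster validation called out above, a quick check from any Windows Server 2008 R2 node using the inbox FailoverClusters PowerShell module (node names are placeholders):

    Import-Module FailoverClusters
    Test-Cluster -Node HV-NODE1,HV-NODE2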

Use ISOs, not physical CDs/DVDs
You can't Live Migrate a VM that has a physical DVD attached!

More
Mitigate bottlenecks:
Processors, memory, storage
Don't run everything off a single spindle
Networking

VHD compaction/expansion:
Run it on a non-production system

Use .ISOs:
Great performance
Can be mounted and unmounted remotely
Having them in the SCVMM library is fast & convenient

Creating Virtual Machines


Use the SCVMM library. Steps:
1. Create the virtual machine
2. Install the guest operating system
3. Install the integration components
4. Install anti-virus
5. Install management agents
6. SYSPREP (see the example below)
7. Add it to the VMM library

For Windows Server 2003 guests, create VMs with two virtual processors to ensure a multiprocessor (MP) HAL
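A minimal example of the SYSPREP step, assuming a Windows Server 2008-era guest where Sysprep ships in-box (for Windows Server 2003 guests it comes from deploy.cab instead):

    C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown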

Conclusions
Significant performance gains between Windows Server 2008 and Windows Server 2008 R2 for enterprise storage workloads
Performance improvements in Hyper-V, MPIO, iSCSI, the core storage stack, and the networking stack

For general workloads with multiple VMs, the performance delta is minimal between pass-through and VHD
iSCSI performance, especially in iSCSI Direct scenarios, is vastly improved

Additional Resources
Microsoft MPIO: http://www.microsoft.com/mpio
MPIO DDK: MPIO DSM sample, interfaces, and libraries will be included in the Windows 7 DDK/SDK

Microsoft iSCSI: http://www.microsoft.com/iSCSI


SCSI@microsoft.com
iSCSI WMI interfaces: http://msdn.microsoft.com/en-us/library/ms807120.aspx

Storport Website: http://www.microsoft.com/Storport


Storport Documentation
Windows Driver Kit MSDN: http://msdn.microsoft.com/en-us/library/bb870491.aspx

Microsoft Virtualization: http://www.microsoft.com/virtualization/default.mspx

Additional Resources
Hyper-V Planning & Deployment Guide
http://technet.microsoft.com/en-us/library/cc794762.aspx

Microsoft Virtualization Website


www.microsoft.com/virtualization
http://www.microsoft.com/virtualization/partners.mspx
http://blogs.technet.com/virtualization
http://blogs.technet.com/jhoward/default.aspx
http://blogs.msdn.com/taylorb/

Partner References
Intel: http://www.intel.com
Emulex: http://www.emulex.com
Alacritech: http://www.alacritech.com
NetApp: http://www.netapp.com
3Par: http://3par.com
iStor: http://istor.com
Lefthand Networks: http://www.lefthandnetworks.com
Doubletake: http://www.doubletake.com
Compellent: http://www.compellent.com
Dell/Equallogic: http://www.dell.com
Falconstor: http://www.falconstor.com

Resources
www.microsoft.com/teched
Sessions On-Demand & Community

www.microsoft.com/learning
Microsoft Certification & Training Resources

http://microsoft.com/technet
Resources for IT Professionals

http://microsoft.com/msdn
Resources for Developers


Complete an evaluation on CommNet and enter to win!

2009 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.
