Nick Rintalan
Lead Architect, Americas Consulting, Citrix Consulting
August 5, 2014
Myth Busting
The New PVS wC Option
Most people who say this don’t really know what they are talking about
And the few who do might quote “1.6x IOPS compared to PVS”
The 1.6x number was taken WAY out of context a few years back (it took into
account boot and logon IOPS, too)
Reality: MCS generates about 1.2x IOPS compared to PVS in the steady-state
• 8% more writes and 13% more reads, to be exact
• We have a million technologies to handle those additional reads!
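As a back-of-the-envelope illustration, the 13% read and 8% write deltas can be combined into an overall multiplier for a given read/write mix. This is a sketch, not the slide's methodology: the 2:8 read/write split and the function name below are assumed for illustration only, and the result will differ from the quoted 1.2x depending on the mix.

```python
def steady_state_multiplier(read_iops, write_iops,
                            read_factor=1.13, write_factor=1.08):
    """Overall MCS-vs-PVS IOPS ratio for a given steady-state mix.

    The 13%/8% read/write deltas come from the slide; everything
    else here (the mix, the model) is an illustrative assumption.
    """
    base = read_iops + write_iops
    mcs = read_iops * read_factor + write_iops * write_factor
    return mcs / base

# Assumed write-heavy steady state: 2 read IOPS, 8 write IOPS per user
print(round(steady_state_multiplier(read_iops=2, write_iops=8), 3))  # → 1.09
```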
But performance (and IOPS) are only one aspect you need to consider when
deciding between PVS and MCS…
MCS (or any linked-clone-style technology) still leaves a bit to
be desired from an operations and management perspective today
• Significant time required when updating a large number of VDIs (or rolling back)
• Controlled promotion model
• Support for things like vMotion
• Some scripting may be required to replicate parent disks efficiently, etc.
MCS is Simple/Easy
• I’d agree as long as it is somewhat small’ish (less than 1k VDIs or 5k XA users)
• But at real scale, MCS is arguably more complex than PVS
• How do you deploy MCS or Composer to thousands of desktops residing on hundreds of
LUNs, multiple datastores and instances of vCenter, for example?
• This is where PVS really shines today
© 2014 Citrix. Confidential.
Myth #4 – PVS is Complex
Make no mistake, the insane scalability that PVS provides doesn’t come
absolutely “free”, so there is some truth to this statement
BUT, have you noticed what we’ve done over the last few years to address this?
• vDisk Versioning
• Native TFTP Load Balancing via NS 10.1+
• We are big endorsers of virtualizing PVS (even on that vSphere thing)
• We have simplified sizing the wC file and we also endorse thin provisioning these days
- RAM Cache w/ overflow to disk (and thin provision the “overflow” disk = super easy)
So can humans!
And if architected correctly, using a pod architecture, PVS cannot and should not
take down your entire environment
Make sure every layer is resilient and fault tolerant
• Don’t forget about Offline Database Support and SQL HA technologies (mirroring)
We still recommend multiple PVS farms with isolated SQL infrastructure for our
largest customers – not really for scalability or technical reasons, but to minimize
the failure domain
Did you know that MCS already works and is supported with CSV Caching in
Hyper-V today?
Did you know that MCS also works with CBRC?
• We even have customers using it in production! (Just don’t ask for official support)
First and foremost, this RAM Caching is NOT the same as the old PVS RAM
Cache feature
• This one uses non-paged pool memory and we no longer manage internal cache lists, etc. (let
Windows do it – it is pretty good at this stuff as it turns out!)
• We actually compared the old vs. new RAM caching and found about a 5x improvement in
throughput
Pretty simple concept: leverage memory first, then gracefully spill over to disk
• VHDX-based as opposed to all other “legacy” wC modes, which are VHD-based
- vdiskdif.vhdx vs. .vdiskcache
• Requires PVS 7.1+ and Win7/2008R2+ targets
• Also supports TRIM operations (shrink/delete!)
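The "memory first, then spill to disk" idea can be sketched in a few lines. This is a toy user-space model only, with names of our own invention — the real PVS driver does this in the kernel against non-paged pool memory and a VHDX overflow file:

```python
import io

class RamCacheWithOverflow:
    """Toy sketch of "RAM first, spill to disk": writes land in a
    memory buffer until ram_limit bytes are used, then the remainder
    overflows to a disk file. Illustrative only -- not the PVS driver.
    """
    def __init__(self, ram_limit, overflow_path):
        self.ram_limit = ram_limit
        self.ram = io.BytesIO()
        self.overflow_path = overflow_path
        self.disk = None

    def write(self, data):
        room = self.ram_limit - self.ram.tell()
        self.ram.write(data[:room])      # fill the RAM buffer first
        if len(data) > room:             # gracefully spill the rest
            if self.disk is None:
                self.disk = open(self.overflow_path, "wb")
            self.disk.write(data[room:])

# 8-byte RAM buffer: first write fits, second partially spills to disk
cache = RamCacheWithOverflow(ram_limit=8, overflow_path="overflow.bin")
cache.write(b"x" * 6)   # fits entirely in RAM
cache.write(b"y" * 6)   # 2 bytes top off RAM, 4 bytes spill to disk
```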
The VHDX spec uses a 2 MB block size, so that is how you’ll see the
wC grow (in 2 MB chunks)
The wC file will initially be larger than the legacy wC file, but over time, it will not
be significantly larger as data will “backfill” into those 2 MB reserved blocks
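The growth pattern amounts to rounding up to whole 2 MB blocks, which is why the file looks larger at first and then "backfills". A minimal model of that rounding (the function name is ours, and this ignores VHDX metadata overhead):

```python
BLOCK = 2 * 1024 * 1024  # VHDX allocates space in 2 MB blocks

def vhdx_cache_size(bytes_written):
    """Apparent wC file size: bytes written, rounded up to whole
    2 MB blocks. A model of the growth pattern only."""
    blocks = -(-bytes_written // BLOCK)   # ceiling division
    return blocks * BLOCK

print(vhdx_cache_size(1) // BLOCK)                 # 1 byte reserves a full block
print(vhdx_cache_size(5 * 1024 * 1024) // BLOCK)   # 5 MB rounds up to 3 blocks
```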
This new wC option has nothing to do with “intermediate buffering” – totally
replaces it
This new wC option is where we want all our customers to move ASAP, for not
only performance reasons but stability reasons (ASLR)
Since Non-Paged Pool and VHDX are used, we support TRIM operations
• Non-Paged Pool memory can be reduced and the VHDX can shrink!
• This is very different from all our old/legacy VHD-based wC options
[Chart: Win7 on Hyper-V 2012 R2 with 256 MB buffer size (with bloated profile)]
[Chart: Win7 on vSphere 5.5 with 256 MB buffer size (with bloated profile)]
Variables
• Product version
• Hypervisor
• Image delivery
• Workload
• Policy
Hardware
• HP DL380p G8
• (2) Intel Xeon E5-2697
• 384 GB RAM
• (16) 300 GB 15,000 RPM spindles in RAID 10
[Chart: single-server scalability across test configurations — versions 6.5 and 7.5,
2008R2 and 2012R2, Light/Medium workloads, UX and Scale policies, PVS (Disk),
PVS (RAM) and MCS, on both Hyper-V and vSphere]
PVS vs MCS
Notable XenApp 7.5 Results
[Chart: MCS vs. PVS (Disk) vs. PVS (RAM with Overflow) on Hyper-V 2012 R2 and vSphere 5.5]
[Chart: server scalability — MCS vs. PVS (Disk) vs. PVS (RAM with Overflow) on Hyper-V 2012 R2 and vSphere 5.5]
[Chart: steady-state IOPS per user — MCS vs. PVS (Disk) vs. PVS (RAM with Overflow)
on Hyper-V and vSphere; callout: PVS (RAM with Overflow) drives less than 0.1 IOPS
per user with a 512 MB RAM Cache!]
[Chart: PVS (RAM with Overflow) — 512 MB vs. 256 MB buffer sizes]
[Chart: IOPS over time; Peak = 155 IOPS]
The new VHDX-based PVS 7.x write cache option is the best thing we have given
away for FREE since Secure Gateway (IMHO)
It doesn’t require a ton of extra memory/RAM – a small buffer will go a long way