Bert Dufrasne Wilhelm Gardt Jana Jamsek Peter Kimmel Jukka Myyrylainen
Markus Oscheka Gerhard Pieper Stephen West Axel Westphal Roland Wolf
ibm.com/redbooks
International Technical Support Organization
IBM System Storage DS8000 Series: Architecture and Implementation
November 2007
SG24-6786-03
Note: Before using this information and the product it supports, read the information in Notices on page xv.
Fourth Edition (November 2007)
This edition applies to the IBM System Storage DS8000 with Licensed Machine Code 5.30xx.xx, as announced on October 23, 2007. This document was created or updated on January 7, 2008.
© Copyright International Business Machines Corporation 2007. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Notices
Trademarks

Preface
The team that wrote this redbook
Special thanks to:
Become a published author
Comments welcome

Summary of changes
November 2007, Fourth Edition

Part 1. Concepts and Architecture

Chapter 1. Introduction to the DS8000 series
1.1 The DS8000: A member of the System Storage DS family
1.1.1 Optimizations to simplify and increase efficiency
1.1.2 Performance innovation
1.1.3 Business continuity
1.2 Introduction to the DS8000 series features and functions
1.2.1 Hardware overview
1.2.2 Storage capacity
1.2.3 Supported environments
1.2.4 Copy Services functions
1.2.5 Interoperability
1.2.6 Service and setup
1.2.7 Configuration flexibility
1.3 Positioning the DS8000 series
1.3.1 Common set of functions
1.3.2 Common management functions
1.3.3 DS8000 series compared to other storage disk subsystems
1.4 Performance features
1.4.1 Sequential Prefetching in Adaptive Replacement Cache (SARC)
1.4.2 Adaptive Multi-stream Prefetching (AMP)
1.4.3 Multipath Subsystem Device Driver (SDD)
1.4.4 Performance for System z
1.4.5 Performance enhancements for System p
1.4.6 Performance enhancements for z/OS Global Mirror

Chapter 2. Model overview
2.1 DS8000 series model overview
2.1.1 Model naming conventions
2.1.2 Machine types 2107 and 242x
2.1.3 DS8100 Turbo Model 931 overview
2.1.4 DS8300 Turbo Models 932 and 9B2 overview
2.2 DS8000 series model comparison
2.3 DS8000 design for scalability
2.3.1 Scalability for capacity
2.3.2 Scalability for performance: Linear scalable architecture
2.3.3 Model conversions

Chapter 3. Storage system logical partitions (LPARs)
3.1 Introduction to logical partitioning
3.1.1 Logical partitioning concepts
3.1.2 Why logically partition
3.2 DS8300 and LPARs
3.2.1 LPARs and SFIs
3.2.2 DS8300 LPAR implementation
3.2.3 Storage Facility image hardware components
3.2.4 DS8300 Model 9B2 configuration options
3.3 LPAR security through POWER Hypervisor (PHYP)
3.4 LPAR and Copy Services
3.5 LPAR benefits
3.6 Summary

Chapter 4. Hardware Components
4.1 Frames
4.1.1 Base frame
4.1.2 Expansion frame
4.1.3 Rack operator panel
4.2 Architecture
4.2.1 Server-based SMP design
4.2.2 Cache management
4.3 Processor complex
4.3.1 RIO-G
4.3.2 I/O enclosures
4.4 Disk subsystem
4.4.1 Device adapters
4.4.2 Disk enclosures
4.4.3 Disk drives
4.4.4 Fibre Channel ATA (FATA) disk drive overview
4.4.5 Positioning FATA with Fibre Channel disks
4.4.6 FATA as opposed to Fibre Channel drives on the DS8000
4.5 Host adapters
4.5.1 ESCON Host Adapters
4.5.2 Fibre Channel/FICON Host Adapters
4.6 Power and cooling
4.7 Management console network
4.8 System Storage Productivity Center (SSPC)
4.9 Ethernet adapter pair (for TPC-R support at R2+)

Chapter 5. Reliability, availability, and serviceability (RAS)
5.1 Naming
5.2 Processor complex RAS
5.3 Hypervisor: Storage image independence
5.3.1 RIO-G: A self-healing interconnect
5.3.2 I/O enclosure
5.4 Server RAS
5.4.1 Metadata checks
5.4.2 Server failover and failback
5.4.3 NVS recovery after complete power loss
5.5 Host connection availability
5.5.1 Open systems host connection: Subsystem Device Driver (SDD)
5.5.2 System z host connection
5.6 Disk subsystem
5.6.1 Disk path redundancy
5.6.2 RAID-5 overview
5.6.3 RAID-10 overview
5.6.4 Spare creation
5.6.5 Predictive Failure Analysis (PFA)
5.6.6 Disk scrubbing
5.7 Power and cooling
5.7.1 Building power loss
5.7.2 Power fluctuation protection
5.7.3 Power control of the DS8000
5.7.4 Emergency power off (EPO)
5.8 Microcode updates
5.9 Management console
5.10 Earthquake resistance kit (R2)

Chapter 6. Virtualization concepts
6.1 Virtualization definition
6.2 Storage system virtualization
6.3 The abstraction layers for disk virtualization
6.3.1 Array sites
6.3.2 Arrays
6.3.3 Ranks
6.3.4 Extent Pools
6.3.5 Logical volumes
6.3.6 Space Efficient volumes
6.3.7 Allocation, deletion, and modification of LUNs/CKD volumes
6.3.8 Logical subsystems (LSS)
6.3.9 Volume access
6.3.10 Summary of the virtualization hierarchy
6.4 Benefits of virtualization

Chapter 7. Copy Services
7.1 Copy Services
7.2 FlashCopy and IBM FlashCopy SE
7.2.1 Basic concepts
7.2.2 Benefits and use
7.2.3 Licensing requirements
7.2.4 FlashCopy options
7.2.5 IBM FlashCopy SE options
7.3 Remote Mirror and Copy
7.3.1 Metro Mirror
7.3.2 Global Copy
7.3.3 Global Mirror
7.3.4 Metro/Global Mirror
7.3.5 z/OS Global Mirror
7.3.6 z/OS Metro/Global Mirror
7.3.7 Summary of the Copy Services function characteristics
7.4 Interfaces for Copy Services
7.4.1 Hardware Management Console (HMC)
7.4.2 DS Storage Manager
7.4.3 DS Command-Line Interface (DS CLI)
7.4.4 TotalStorage Productivity Center for Replication (TPC for Replication)
7.4.5 DS Open application programming interface (DS Open API)
7.4.6 System z-based I/O interfaces
7.5 Interoperability

Part 2. Planning and Installation

Chapter 8. Physical planning and installation
8.1 Considerations prior to installation
8.1.1 Who should be involved
8.1.2 What information is required
8.2 Planning for the physical installation
8.2.1 Delivery and staging area
8.2.2 Floor type and loading
8.2.3 Room space and service clearance
8.2.4 Power requirements and operating environment
8.2.5 Host interface and cables
8.3 Network connectivity planning
8.3.1 Hardware management console network access
8.3.2 System Storage Productivity Center network access
8.3.3 DS CLI console
8.3.4 DSCIMCLI
8.3.5 Remote support connection
8.3.6 Remote power control
8.3.7 Storage area network connection
8.4 Remote mirror and copy connectivity
8.5 Disk capacity considerations
8.5.1 Disk sparing
8.5.2 Disk capacity
8.5.3 Fibre Channel ATA (FATA) disk considerations
8.6 Planning for growth

Chapter 9. DS HMC planning and setup
9.1 DS Hardware Management Console (DS HMC) overview
9.2 DS HMC software components and communication
9.2.1 Components of the DS Hardware Management Console (DS HMC)
9.2.2 Logical flow of communication
9.2.3 DS Storage Manager
9.2.4 Command-Line Interface
9.2.5 DS Open Application Programming Interface
9.2.6 Using the DS GUI on the HMC
9.3 Typical DS HMC environment setup
9.4 Planning and setup of the DS HMC
9.4.1 Using the DS Storage Manager front end
9.4.2 Using the DS CLI
9.4.3 Using the DS Open API
9.4.4 Hardware and software setup
9.4.5 Activation of Advanced Function licenses
9.4.6 Microcode upgrades
9.4.7 Time synchronization
9.4.8 Monitoring with the DS HMC
9.4.9 Call Home and Remote support
9.5 User management
9.5.1 User management using the DS CLI
9.5.2 User management using the DS GUI
9.6 External DS HMC
9.6.1 External DS HMC advantages
9.6.2 Configuring the DS CLI to use a second HMC
9.6.3 Configuring the DS Storage Manager to use a second HMC

Chapter 10. Performance
10.1 DS8000 hardware: Performance characteristics
10.1.1 Fibre Channel switched disk interconnection at the back end
10.1.2 Fibre Channel device adapter
10.1.3 Four-port host adapters
10.1.4 System p POWER5+ is the heart of the DS8000 dual cluster design
10.1.5 Vertical growth and scalability
10.2 Software performance enhancements: Synergy items
10.2.1 End to end I/O priority: Synergy with System p AIX and DB2
10.2.2 Cooperative caching: Synergy with System p AIX and DB2
10.2.3 Long busy wait host tolerance: Synergy with System p AIX
10.2.4 HACMP-extended distance extensions: Synergy with System p AIX
10.3 Performance considerations for disk drives
10.4 DS8000 superior caching algorithms
10.4.1 Sequential Adaptive Replacement Cache
10.4.2 Adaptive Multi-stream Prefetching (AMP)
10.5 Performance considerations for logical configuration
10.5.1 Workload characteristics
10.5.2 Data placement in the DS8000
10.5.3 Placement of data
10.5.4 Space Efficient volumes and repositories
10.6 Performance and sizing considerations for open systems
10.6.1 Cache size considerations for open systems
10.6.2 Determining the number of connections between the host and DS8000
10.6.3 Determining the number of paths to a LUN
10.6.4 Dynamic I/O load-balancing: Subsystem Device Driver (SDD)
10.6.5 Determining where to attach the host
10.7 Performance and sizing considerations for System z
10.7.1 Host connections to System z servers
10.7.2 Sizing the DS8000 to replace older storage subsystems
10.7.3 DS8000 processor memory size
10.7.4 Channels consolidation
10.7.5 Ranks and Extent Pool configuration
10.7.6 Parallel Access Volume (PAV)
10.7.7 z/OS Workload Manager: Dynamic PAV tuning
10.7.8 PAV in z/VM environments
10.7.9 Multiple Allegiance
10.7.10 HyperPAV
10.7.11 I/O priority queuing

Chapter 11. Features and license keys
11.1 DS8000 licensed functions
11.2 Activation of licensed functions
11.2.1 Obtaining DS8000 machine information
11.2.2 Obtaining activation codes
11.2.3 Applying activation codes using the GUI
11.2.4 Applying activation codes using the DS CLI
11.3 Licensed scope considerations
11.3.1 Why you get a choice
11.3.2 Using a feature for which you are not licensed
11.3.3 Changing the scope to All
11.3.4 Changing the scope from All to FB
11.3.5 Applying an insufficient license feature key
11.3.6 Calculating how much capacity is used for CKD or FB

Part 3. Storage Configuration

Chapter 12. Configuration flow
12.1 Configuration worksheets
12.2 Configuration flow

Chapter 13. System Storage Productivity Center
13.1 System Storage Productivity Center (SSPC) overview
13.1.1 System Storage Productivity Center (SSPC) Components
13.1.2 System Storage Productivity Center (SSPC) capabilities
13.1.3 System Storage Productivity Center (SSPC) upgrade options
13.2 Logical communication flow
13.3 SSPC setup and configuration
13.3.1 Installing the SSPC
13.3.2 Configuring DS8000 for TPC-BE access
13.3.3 Setup SSPC user management
13.3.4 Adding DS8000 to the TPC Element Manager
13.3.5 Adding DS8000 to the TPC Enterprise Manager
13.3.6 Adding external tools to TPC
13.3.7 Adding Out of Band Fabric agents
13.3.8 Accessing the DS8000 GUI by SSPC
13.4 Maintaining TPC-BE
13.4.1 Schedule and monitor TPC tasks
13.4.2 Auditing TPC actions against the DS8000
13.4.3 Manually recover CIM Agent connectivity after HMC shutdown
13.4.4 Upgrading TPC-BE software
13.5 Working with DS8000 and TPC-BE
13.5.1 Work with DS8000 Elements
13.5.2 Display and analyze the overall storage environment
13.5.3 Storage health management
13.5.4 Create and assign DS8000 volumes to a host

Chapter 14. Configuration with DS Storage Manager GUI
14.1 DS Storage Manager GUI overview
14.1.1 Accessing the DS GUI
14.1.2 DS GUI Welcome panel
14.2 Logical configuration process
14.3 Real-time manager
14.4 Simulated manager
14.4.1 Download and installation
14.4.2 Accessing the Information Center
14.4.3 Starting the Simulated manager
14.4.4 Enterprise configurations
14.4.5 Creating simulated hardware configurations
14.4.6 Entering hardware configuration manually
14.4.7 Importing hardware configuration from an eConfig file
14.4.8 Importing configuration from the storage HMC
14.4.9 Applying a configuration on a DS8000
14.5 Examples of configuring DS8000 storage
14.5.1 Create extent pools
14.5.2 Configure I/O ports
14.5.3 Configure logical host systems
14.5.4 Create FB volumes
14.5.5 Create volume groups
14.5.6 Create LCUs
14.5.7 Create CKD volumes
14.5.8 CKD volume actions

Chapter 15. Configuration with Command-Line Interface
15.1 DS Command-Line Interface overview
15.2 Configuring the I/O ports
15.3 Configuring the DS8000 storage for FB volumes
15.3.1 Create array
15.3.2 Create ranks
15.3.3 Create extent pools
15.3.4 Creating FB volumes
15.3.5 Creating volume groups
15.3.6 Creating host connections
15.3.7 Mapping open systems host disks to storage unit volumes
15.4 Configuring DS8000 storage for count key data volumes
15.4.1 Create array
15.4.2 Ranks and Extent Pool creation
15.4.3 Logical control unit creation
15.4.4 Create CKD volumes
15.5 Scripting the DS CLI
15.5.1 Single command mode
15.5.2 Script mode

Part 4. Host considerations

Chapter 16. Open systems considerations
16.1 General considerations
16.1.1 Getting up-to-date information
16.1.2 Boot support
16.1.3 Additional supported configurations (RPQ)
16.1.4 Multipathing support: Subsystem Device Driver (SDD)
16.2 Windows
16.2.1 HBA and operating system settings
16.2.2 SDD for Windows
16.2.3 Windows 2003 and MPIO
16.2.4 Subsystem Device Driver Device Specific Module for Windows 2003
16.2.5 Dynamic Volume Expansion of a Windows 2000/2003 volume
16.2.6 Boot support
16.2.7 Windows Server 2003 Virtual Disk Service (VDS) support
16.3 AIX
16.3.1 Finding the Worldwide Port Names
16.3.2 AIX multipath support
16.3.3 SDD for AIX
16.3.4 AIX Multipath I/O (MPIO)
16.3.5 LVM configuration
16.3.6 AIX access methods for I/O
16.3.7 Dynamic volume expansion
16.3.8 Boot device support
16.4 Linux
16.4.1 Support issues that distinguish Linux from other operating systems
16.4.2 Reference material
16.4.3 Important Linux issues
16.4.4 Troubleshooting and monitoring
16.5 OpenVMS
16.5.1 FC port configuration
16.5.2 Volume configuration
16.5.3 Command Console LUN
16.5.4 OpenVMS volume shadowing
16.6 VMware
16.6.1 What is new in VMware ESX Server 3
16.6.2 VMware disk architecture
16.6.3 VMware setup and configuration
16.7 Sun Solaris
16.7.1 Locating the WWPNs of your HBAs
16.7.2 Solaris attachment to DS8000
16.7.3 Multipathing in Solaris
16.7.4 Dynamic Volume Expansion with VxVM and DMP
16.8 Hewlett-Packard Unix (HP-UX)
16.8.1 Available documentation
16.8.2 DS8000 specific software
16.8.3 Locating the WWPNs of HBAs
16.8.4 Defining HP-UX host for the DS8000
16.8.5 Multipathing

Chapter 17. System z considerations
17.1 Connectivity considerations
17.2 Operating systems prerequisites and enhancements
17.3 z/OS considerations
17.3.1 z/OS program enhancements (SPEs)
17.3.2 Parallel Access Volume (PAV) definition
17.3.3 HyperPAV z/OS support and implementation
17.4 z/VM considerations
17.4.1 Connectivity
17.4.2 Supported DASD types and LUNs
17.4.3 PAV and HyperPAV z/VM support
17.4.4 Missing-interrupt handler (MIH)
17.5 VSE/ESA and z/VSE considerations

Chapter 18. System i considerations
18.1 Supported environment
18.1.1 Hardware
18.1.2 Software
18.2 Logical volume sizes
18.3 Protected as opposed to unprotected volumes
18.3.1 Changing LUN protection
18.4 Adding volumes to the System i configuration
18.4.1 Using the 5250 interface
18.4.2 Adding volumes to an Independent Auxiliary Storage Pool
18.5 Multipath
18.5.1 Avoiding single points of failure
18.5.2 Configuring multipath
18.5.3 Adding multipath volumes to System i using the 5250 interface
18.5.4 Adding multipath volumes to System i using System i Navigator
18.5.5 Managing multipath volumes using System i Navigator
18.5.6 Multipath rules for multiple System i hosts or partitions
18.5.7 Changing from single path to multipath
18.6 Sizing guidelines
18.6.1 Planning for arrays and DDMs
18.6.2 Cache
18.6.3 Number of System i Fibre Channel adapters
18.6.4 Size and number of LUNs
18.6.5 Recommended number of ranks
18.6.6 Sharing ranks between System i and other servers
18.6.7 Connecting using SAN switches
18.7 Migration
18.7.1 OS/400 mirroring
18.7.2 Metro Mirror and Global Copy
18.7.3 OS/400 data migration
18.8 Boot from SAN
18.8.1 Boot from SAN and cloning
18.8.2 Why consider cloning
18.9 AIX on IBM System i
18.10 Linux on IBM System i

Part 5. Maintenance and upgrades

Chapter 19. Licensed machine code
19.1 How new microcode is released
19.2 Installation process
19.3 DS8000 EFIXes
19.4 Concurrent and non-concurrent updates
19.5 HMC code updates
19.6 Host adapter firmware updates
19.7 Loading the code bundle
19.7.1 Code update schedule example
19.8 Post-installation activities
19.9 Planning and application

Chapter 20. Monitoring with Simple Network Management Protocol (SNMP)
20.1 Simple Network Management Protocol (SNMP) overview
20.1.1 SNMP agent
20.1.2 SNMP manager
20.1.3 SNMP trap
20.1.4 SNMP communication
20.1.5 Generic SNMP security
20.1.6 Message Information Base (MIB)
20.1.7 SNMP trap request
20.1.8 DS8000 SNMP configuration
20.2 SNMP notifications
20.2.1 Serviceable event using specific trap 3
20.2.2 Copy Services event traps
20.3 SNMP configuration

Chapter 21. Remote support
21.1 Call Home for service
21.2 Remote services
21.2.1 Connections
21.2.2 Establish a remote support connection
21.2.3 Terminate a remote support connection
21.2.4 Log remote support connections
21.3 Support data offload
21.3.1 Support data offload and Call Home using SSL
21.3.2 Support Data offload using File Transfer Protocol
21.4 Optional firewall setup guidelines

Chapter 22. Capacity upgrades and CoD
22.1 Installing capacity upgrades
22.1.1 Installation order of upgrades
22.1.2 Checking how much capacity is installed
22.2 Using Capacity on Demand (CoD)
22.2.1 What is Capacity on Demand
22.2.2 How you can tell if a DS8000 has CoD
22.2.3 Using the CoD storage

Appendix A. Data migration
Data migration in open systems environments
Migrating with basic copy commands
Migrating using volume management software
Migrating using backup and restore methods
Data migration in System z environments
Data migration based on physical volume migration
Data migration based on logical data set migration
Combination of physical and logical data migration
Copy Services-based migration
IBM Migration Services
Summary

Appendix B. Tools and service offerings
Capacity Magic
Disk Magic
PAV Analysis Tool
IBM TotalStorage Productivity Center for Disk
Disk Storage Configuration Migrator
IBM Global Technology Services: Service offerings

Appendix C. Project plan
Project plan skeleton

Related publications
IBM Redbooks
Other publications
Online resources
How to get IBM Redbooks
Help from IBM
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. 
You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
Redbooks (logo) eServer iSeries i5/OS pSeries z/OS z/VM z/VSE zSeries z9 AIX 5L AIX AS/400 BladeCenter Chipkill CICS DB2 DFSMS DFSMSdss DFSMShsm DFSORT DS4000 DS6000 DS8000 Enterprise Storage Server Enterprise Systems Connection Architecture ECKD ESCON FlashCopy FICON HACMP IBM IMS OS/2 OS/390 OS/400 Perform PowerPC Predictive Failure Analysis POWER POWER Hypervisor POWER5 POWER5+ Redbooks Resource Link RMF S/360 S/390 System i System i5 System p System p5 System x System z System z9 System Storage System Storage DS System Storage Proven Tivoli Enterprise Console Tivoli TotalStorage TDMF Virtualization Engine VSE/ESA WebSphere
The following terms are trademarks of other companies: SAP, and SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries. QLogic, and the QLogic logo are registered trademarks of QLogic Corporation. SANblade is a registered trademark in the United States. Oracle, JD Edwards, PeopleSoft, Siebel, and TopLink are registered trademarks of Oracle Corporation and/or its affiliates. Java, JNI, Solaris, Sun, Ultra, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Internet Explorer, Microsoft, Windows Server, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Intel Xeon, Pentium, Pentium 4, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.
Preface
This book describes the concepts, architecture, and implementation of the IBM System Storage DS8000 series of disk storage subsystems. It provides reference information to help you plan, install, and configure the DS8000, discusses the architecture and components, and includes hints and tips derived from user experiences for efficient installation and use. The DS8000 is a cabinet-mounted, self-contained disk storage subsystem. It is designed for the higher demands of data storage and data availability that most organizations face today. The DS8000 series benefits from IBM POWER5+ processor technology, with a dual two-way processor complex implementation in the DS8100 Turbo Model 931 and a dual four-way processor complex implementation in the DS8300 Turbo Models 932 and 9B2. This increased power and extended connectivity, with up to 128 Fibre Channel/FICON ports or 64 ESCON ports for host connections, make it suitable for multiple server environments in both the open systems and System z environments. Its switched Fibre Channel architecture, dual processor complex implementation, high availability design, and the advanced Point-in-Time Copy and Remote Mirror and Copy functions that it incorporates make the DS8000 suitable for mission-critical businesses. To read about DS8000 Point-in-Time Copy, FlashCopy or FlashCopy SE, and the set of Remote Mirror and Copy functions available with the DS8000 series (Metro Mirror, Global Copy, Global Mirror, z/OS Global Mirror, and Metro/Global Mirror), refer to the IBM Redbooks IBM System Storage DS8000 Series: Copy Services in Open Environments, SG24-6788, and IBM System Storage DS8000 Series: Copy Services with System z servers, SG24-6787, and to the Redpaper IBM System Storage DS8000 Series: Introducing IBM FlashCopy SE, REDP-4368.
Storage. She holds a Masters degree in computer science and a degree in mathematics from the University of Ljubljana, Slovenia. She has co-authored numerous IBM Redbooks publications for System i and IBM System Storage products, including the IBM System Storage DS8000, the IBM Virtualization Engine TS7510, and other tape offerings.

Peter Kimmel is an IT Specialist with the Enterprise Disk ATS Performance team at the European Storage Competence Center in Mainz, Germany. He joined IBM Storage in 1999 and since then has worked with SSA, VSS, the various ESS generations, and the DS8000/DS6000. He has been involved in all Early Shipment Programs (ESPs) and early installs for the Copy Services rollouts. Peter holds a Diploma (MSc) degree in Physics from the University of Kaiserslautern.

Jukka Myyrylainen is an Advisory IT Specialist with Integrated Technology Services, IBM Finland. He has several years of experience with storage product implementations on both System z and open systems platforms. He provides consultancy, technical support, and implementation services to customers for IBM's strategic storage hardware and software products. He has contributed to several storage-related IBM Redbooks publications in the past.

Markus Oscheka is an IT Specialist for Proof of Concepts and Benchmarks at the ATS Customer Solutions team in Mainz, Germany. His areas of expertise include the setup and demonstration of IBM System Storage products and solutions in various environments, including AIX, Linux, Windows, HP-UX, and Solaris. He has worked at IBM for five years. He has performed many proofs of concept with Copy Services on the DS6000/DS8000, as well as performance benchmarks with the DS4000/DS6000/DS8000.

Gerhard Pieper

Stephen West is a member of the IBM Americas Advanced Technical Support team for IBM disk storage products and related copy services. His primary focus has been the enterprise disk storage products, copy services, and customer performance issues with these storage and copy products. A significant part of this job is to educate the technical team in the field on the storage and related copy services solutions. Prior to new product or new release announcements, the ATS team is trained on the details of the new products and releases, and then develops education for field training sessions.

Axel Westphal is working as an IT Specialist for Workshops and Proof of Concepts at the European Storage Competence Center (ESCC) in Mainz, Germany. Axel joined IBM in 1996, working for Global Services as a Systems Engineer. His areas of expertise include the setup and demonstration of IBM System Storage products and solutions in various environments. Since 2004 he has been responsible for workshops and proofs of concept conducted at the ESCC with the DS8000, SAN Volume Controller, and IBM TotalStorage Productivity Center.

Roland Wolf is a Certified Consulting IT Specialist in Germany. He has 21 years of experience with S/390 and disk storage hardware and, in recent years, also in SAN and storage for open systems. He works in Field Technical Sales Support for storage systems. His areas of expertise include performance analysis and disaster recovery solutions in enterprises utilizing the unique capabilities and features of the IBM disk storage servers, ESS and DS6000/DS8000. He has contributed to various IBM Redbooks publications, including ESS, DS6000, and DS8000 Concepts and Architecture, and DS6000/DS8000 Copy Services. He holds a PhD in Theoretical Physics.
The team: Wilhelm, Markus, Jukka, Peter, Jana, Steve, Bertrand, Gerhard, Roland, and Axel.
Comments welcome
Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways: Use the online Contact us review IBM Redbook form found at:
ibm.com/redbooks
Mail your comments to: IBM Corporation, International Technical Support Organization Dept. HYTD Mail Station P099 2455 South Road Poughkeepsie, NY 12601-5400
Summary of changes
This section describes the technical changes made in this edition of the book and in previous editions. This edition can also include minor corrections and editorial changes that are not identified. Summary of Changes for SG24-6786-03 for IBM System Storage DS8000 Series: Architecture and Implementation as created or updated on January 7, 2008.
New information
- IBM TotalStorage Productivity Center (TPC) for Replication V3.1 support
- IBM System Storage Productivity Center (SSPC)
- New SM GUI panels and navigation
- Storage Pool Striping
- Dynamic volume expansion
- Enhanced caching algorithm, AMP (Adaptive Multistream Prefetching)
- Space Efficient Volumes and IBM FlashCopy SE
- Microsoft Windows MPIO based drivers and Device Mapper for Multipath I/O (DM-MPIO) in Linux
- z/OS Global Mirror Multiple Reader support
- Data offload and Call Home using SSL
- New licensing information
Changed information
- Information, some examples, and some screen captures presented in this book were updated to reflect the latest available microcode bundle
- Changes to DS GUI and DS CLI to reflect new features
- Changes to reflect availability of third and fourth expansion frames
- Removed most references to 10K RPM disks (no longer available)
- Support for VMware ESX Server 3
- HP-UX multipathing solutions
Part 1. Concepts and Architecture

Chapter 1. Introduction to the DS8000 series
enhanced even further. FlashCopy and FlashCopy SE allow production workloads to continue execution concurrently with data backups. Metro Mirror, Global Copy, Global Mirror, Metro/Global Mirror, z/OS Global Mirror, and z/OS Metro/Global Mirror business continuity solutions are designed to provide the advanced functionality and flexibility needed to tailor a business continuity environment for almost any recovery point or recovery time objective. The DS8000 also offers three-site solutions with Metro/Global Mirror and z/OS Metro/Global Mirror for additional high availability and disaster protection. Furthermore, the Copy Services can now be managed and automated with TotalStorage Productivity Center for Replication (TPC-R).
Internal fabric
The DS8000 comes with a high-bandwidth, fault-tolerant internal interconnection called RIO-2 (Remote I/O), which is also used in IBM System p servers. This technology can operate at speeds up to 1 GHz and offers a 2 GB per second sustained bandwidth per link.
FATA drives
With the introduction of 500 GB (7200 rpm) Fibre Channel ATA (FATA) drives, the DS8000 capacity now scales up to 512 TB. These drives offer a cost-effective option for lower priority data. See 4.4.4, Fibre Channel ATA (FATA) disk drive overview on page 59 for more information.
Host adapters
The DS8000 series offers enhanced connectivity with four-port Fibre Channel/FICON host adapters. The DS8000 Turbo models offer 4 Gbps Fibre Channel/FICON host support designed to offer up to a 50% improvement in single-port MB/s throughput, helping to enable cost savings through a potential reduction in the number of host ports needed (compared to 2 Gbps Fibre Channel and FICON support). The 2 Gbps Fibre Channel/FICON host adapters, offered in longwave and shortwave, auto-negotiate to either 2 Gbps or 1 Gbps link speeds. The 4 Gbps Fibre Channel/FICON host adapters, also offered in longwave and shortwave, auto-negotiate to 4 Gbps, 2 Gbps, or 1 Gbps link speeds. This flexibility makes it possible to exploit the benefits of higher performance 4 Gbps SAN-based solutions while maintaining compatibility with existing 2 Gbps infrastructures. Individual ports on the adapter can be configured with either Fibre Channel Protocol (FCP) or FICON, which can help protect your investment in Fibre Channel adapters and increase your ability to migrate to new servers. The DS8000 series also offers two-port ESCON adapters. A DS8100 Turbo Model 931 can support a maximum of 16 host adapters. A DS8300 Turbo Model 932 or 9B2 can support a maximum of 32 host adapters, which provide up to 128 Fibre Channel/FICON ports.
The SSPC is an IBM System x machine with preinstalled software, including IBM TotalStorage Productivity Center Basic Edition. Utilizing IBM TotalStorage Productivity Center Basic Edition software, SSPC extends the capabilities available through the IBM DS Storage Manager. SSPC offers the unique capability to manage a variety of storage devices connected across the storage area network (SAN). The rich, user-friendly graphical user interface provides a comprehensive view of the storage topology, from which the administrator can explore the health of the environment at an aggregate or in-depth view. Moreover, IBM TotalStorage Productivity Center Standard Edition, which is preinstalled on the SSPC, can be optionally licensed and used to enable more in-depth performance reporting, asset and capacity reporting, and automation for the DS8000, as well as to manage other resources, such as server file systems, tape drives, and libraries.
This rich support of heterogeneous environments and attachments, along with the flexibility to easily partition the DS8000 series storage capacity among the attached environments, can help support storage consolidation requirements and dynamic, changing environments.
FlashCopy
The primary objective of FlashCopy is to very quickly create a point-in-time copy of a source volume on a target volume. The benefits of FlashCopy are that the point-in-time target copy is immediately available for use for backups or testing and that the source volume is immediately released so that applications can continue processing with minimal application downtime. The target volume can be either a logical or physical copy of the data, with the latter copying the data as a background process. In a z/OS environment, FlashCopy can also operate at a data set level. Following is a summary of the options available with FlashCopy.
Incremental FlashCopy
Incremental FlashCopy provides the capability to refresh a LUN or volume involved in a FlashCopy relationship. When a subsequent FlashCopy is initiated, only the data required to bring the target current to the source's newly established point-in-time is copied.
Consistency Groups
Consistency Groups can be used to maintain a consistent point-in-time copy across multiple LUNs or volumes, or even multiple DS8000, ESS 800, and ESS 750 systems.
IBM FlashCopy SE
The IBM FlashCopy SE feature provides a Space Efficient copy capability that can greatly reduce the storage capacity needed for point-in-time copies. Only the capacity needed to
save pre-change images of the source data is allocated in a copy repository. This enables a more space-efficient utilization than is possible with the standard FlashCopy function. Furthermore, less capacity can mean fewer disk drives and lower power and cooling requirements, which can help reduce costs and complexity. FlashCopy SE may be especially useful for the creation of temporary copies for tape backup, online application checkpoints, or copies for disaster recovery testing. (For more information on FlashCopy SE, refer to the IBM Redpaper publication IBM System Storage DS8000 Series: Introducing IBM FlashCopy SE, REDP-4368.)
Metro Mirror
Metro Mirror, previously called Peer-to-Peer Remote Copy (PPRC), provides a synchronous mirror copy of LUNs or volumes at a remote site within 300 km.
Global Copy
Global Copy, previously called Extended Distance Peer-to-Peer Remote Copy (PPRC-XD), is a non-synchronous long distance copy option for data migration and backup.
Global Mirror
Global Mirror provides an asynchronous mirror copy of LUNs or volumes over virtually unlimited distances. The distance is typically limited only by the capabilities of the network and channel extension technology being used.
Metro/Global Mirror
Metro/Global Mirror is a three-site data replication solution for both the open systems and the System z environments. Replication from the local site (Site A) to the intermediate site (Site B) provides high availability using synchronous Metro Mirror, and replication from the intermediate site (Site B) to the remote site (Site C) provides long distance disaster recovery using asynchronous Global Mirror.
1.2.5 Interoperability
The DS8000 not only supports a broad range of server environments, but it can also perform Remote Mirror and Copy with the DS6000 and the ESS Models 750 and 800. This offers dramatically increased flexibility in developing mirroring and remote copy solutions and also the opportunity to deploy business continuity solutions at lower costs than have been previously available.
Large LUN and large count key data (CKD) volume support
You can configure LUNs and volumes to span arrays, allowing for larger LUN sizes up to 2 TB. The maximum CKD volume size is 65520 cylinders (about 55.6 GB), greatly reducing the number of volumes to be managed.
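As a quick check of the quoted volume size, the 65520-cylinder limit can be converted to bytes using standard 3390 track geometry (15 tracks per cylinder and 56,664 bytes per track). The following Python fragment is illustrative arithmetic only, not an official capacity statement:

cylinders = 65520              # maximum CKD volume size in cylinders
tracks_per_cylinder = 15       # 3390 geometry
bytes_per_track = 56664        # 3390 track capacity in bytes
volume_bytes = cylinders * tracks_per_cylinder * bytes_per_track
print(round(volume_bytes / 10**9, 1), "GB")   # prints 55.7, consistent with the quoted "about 55.6 GB"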
It is now possible to replace four ESS Model 800s with one DS8300. With the LPAR implementation, you get an additional consolidation opportunity, because you get two storage system logical partitions in one physical machine. Because the mirror solutions are compatible between the ESS and the DS8000 series, it is also possible to set up a disaster recovery solution with the high performance DS8000 at the primary site and the ESS at the secondary site, where the same performance is not required.
Self-learning algorithms to adaptively and dynamically learn what data should be stored in cache based upon the frequency needs of the hosts
Multiple allegiance expands the simultaneous logical volume access capability across multiple System z servers. This function, along with PAV, enables the DS8000 series to process more I/Os in parallel, helping to improve performance and enabling greater use of large volumes. I/O priority queuing allows the DS8000 series to use I/O priority information provided by the z/OS Workload Manager to manage the processing sequence of I/O operations. Chapter 10, Performance on page 185, gives you more information about the performance aspects of the DS8000 family.
Chapter 2. Model overview
This chapter provides an overview of the IBM System Storage DS8000 storage disk subsystem, the different models, and how well they scale regarding capacity and performance. The topics covered in this chapter include: DS8000 series model overview DS8000 series model comparison DS8000 design for scalability
For example: 931 = Non-LPAR / 2-way base frame; 9AE = LPAR / expansion frame.
Figure 2-1 Naming convention for Turbo models 931, 932, 9B2 and older 921, 922, and 9A2
In the following sections, we describe these models further:
1. DS8100 Turbo Model 931: This model features a dual two-way processor complex implementation and includes a Model 931 base frame and an optional Model 92E expansion frame.
2. DS8300 Turbo Model 932: This model features a dual four-way processor complex implementation and includes a Model 932 base frame and up to four optional Model 92E expansion frames.
3. DS8300 Turbo Model 9B2: This model features a dual four-way processor complex implementation and includes a Model 9B2 base frame and up to four optional Model 9AE expansion frames. In addition, the Model 9B2 supports the configuration of two storage system logical partitions (LPARs) in one machine.
Note: The 242x machine type numbers are used for ordering purposes to support the multiple warranty options, but the DS8000 Turbo models are otherwise the same systems. In addition, 239x machine types have been introduced to allow ordering of add-on licenses for the new FlashCopy SE. The x in 242x designates the machine type according to its warranty period, where x can be 1, 2, 3, or 4. For example, a 2424-9B2 machine type designates a DS8000 series Turbo Model 9B2 with a four year warranty period. The x in 239x can be 6, 7, 8, or 9, according to the associated 242x base unit model: 2396 function authorizations apply to 2421 base units, 2397 to 2422, and so on. For example, a 2399-LFA machine type designates a DS8000 Licensed Function Authorization for a 2424 machine with a four year warranty period. Throughout this book, we generally refer to the models rather than to the machine types, because identical models share identical characteristics regardless of the machine type designation.
Figure 2-2 DS8100 base frame with covers removed (left) and with Model 92E (right)
The DS8100 Turbo Model 931 has the following features:
- Two processor complexes, each with a System p5 POWER5+ 2.2 GHz two-way Computer Electronics Complex (CEC).
- A base frame with up to 128 DDMs, for a maximum base frame disk storage capacity of 38.4 TB with FC DDMs and 64 TB with 500 GB FATA DDMs.
- Up to 128 GB of processor memory, also referred to as the cache.
- Up to 16 four-port Fibre Channel/FICON host adapters (HAs) of 2 or 4 Gbps. Each port can be independently configured as either:
  - An FCP port for open systems host attachment.
  - An FCP port for Metro Mirror, Global Copy, Global Mirror, and Metro/Global Mirror connectivity.
  - A FICON port to connect to System z hosts.
  - A FICON port for z/OS Global Mirror connectivity.
  This totals up to 64 ports with any mix of FCP and FICON ports.
- ESCON host connection is also supported, but with ESCON, a host adapter contains only two ESCON ports and both must be ESCON ports. A DS8000 can have both ESCON adapters and Fibre Channel/FICON adapters at the same time.

The DS8100 Turbo Model 931 can connect to one expansion frame Model 92E. Figure 2-2 displays a front view of a DS8100 Model 931 with the covers off and a front view of a Model 931 with an expansion Model 92E attached to it with the covers on. The base and expansion frame together allow for a maximum capacity of 384 DDMs: 128 DDMs in the base frame and 256 DDMs in the expansion frame. With all of these being 300 GB enterprise DDMs, this results in a maximum disk storage subsystem capacity of 115.2 TB. With 500 GB FATA DDMs, this results in a maximum DS8100 storage capacity of 192 TB. Figure 2-3 on page 21 shows the maximum configuration of a DS8100 with the 931 base frame plus a 92E expansion frame and provides the front view of the basic structure and placement of the hardware components within both frames.

Note: A Model 931 can be upgraded to a 932 or to a 9B2 model.
Figure 2-3 Maximum DS8100 configuration: 931 base unit and 92E expansion
Figure 2-4 DS8300 Turbo Model 932/9B2 base frame rear views with and without covers
The DS8300 Turbo models can connect to one, two, three, or four expansion frames. This provides the following configuration alternatives:
- With one expansion frame, the storage capacity and number of adapters of the DS8300 models can expand to:
  - Up to 384 DDMs in total, as for the DS8100. This is a maximum disk storage capacity of 115.2 TB with 300 GB FC DDMs and 192 TB with 500 GB FATA DDMs.
  - Up to 32 host adapters (HAs). This can be an intermix of Fibre Channel/FICON (four-port) and ESCON (two-port) adapters.
- With two expansion frames, the disk capacity of the DS8300 models expands to up to 640 DDMs in total, for a maximum disk storage capacity of 192 TB when utilizing 300 GB FC DDMs and 320 TB with 500 GB FATA DDMs.
- With three expansion frames, the disk capacity of the DS8300 models expands to up to 896 DDMs in total, for a maximum disk storage capacity of 268.8 TB when utilizing 300 GB FC DDMs and 448 TB with 500 GB FATA DDMs.
- With four expansion frames, the disk capacity of the DS8300 models expands to up to 1024 DDMs in total, for a maximum disk storage capacity of 307.2 TB when utilizing 300 GB FC DDMs and 512 TB with 500 GB FATA DDMs.

Figure 2-5 on page 24 shows a configuration for a DS8300 Turbo model with two expansion frames: base Model 932 connects to Model 92E expansions and base Model 9B2 connects to Model 9AE expansions. This figure also shows the basic hardware components and how they are distributed across all three frames.
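The quoted maximum capacities are simply the number of DDMs multiplied by the raw drive size. The short Python sketch below reproduces the figures for the configurations above; it is illustrative arithmetic only, using the DDM counts and drive sizes quoted in this chapter:

configs = {                      # configuration: maximum number of DDMs
    "DS8100, base + 1 expansion frame": 384,
    "DS8300, base + 1 expansion frame": 384,
    "DS8300, base + 2 expansion frames": 640,
    "DS8300, base + 3 expansion frames": 896,
    "DS8300, base + 4 expansion frames": 1024,
}
for name, ddms in configs.items():
    fc_tb = ddms * 0.3           # 300 GB FC DDMs
    fata_tb = ddms * 0.5         # 500 GB FATA DDMs
    print(f"{name}: {fc_tb:.1f} TB (FC) / {fata_tb:.0f} TB (FATA)")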
Figure 2-5 DS8300 maximum configuration: base and two expansion frames
Note the additional I/O enclosures in the first expansion frame, which is the middle frame in Figure 2-5. Each expansion frame has twice as many DDMs as the base frame, so with 128 DDMs in the base frame and 2 x 256 DDMs in the expansion frames, a total of 640 DDMs is possible. Figure 2-6 shows a photo of a DS8300 Turbo Model 932 with the maximum configuration of 1024 disk drives; 2 x 128 DDMs (base frame and fourth expansion frame) + 3 x 256 (first, second, and third expansion frames).
Figure 2-6 DS8300 Turbo models 932/9B2 maximum configuration with 1024 disk drives
There are no additional DAs installed for the second, third, and fourth expansion frames. The result of installing all possible 1024 DDMs is that they will be distributed evenly over all the DA pairs; see Figure 2-7. The installation sequence for the third and fourth expansion frames mirrors the installation sequence of the first and second expansion frames with the exception of the last 128 DDMs in the fourth expansion frame.
Figure 2-7 932 Turbo Model: Cabling diagram for third and fourth expansion frames
Depending on the DDM sizes, which can be different within a 9x1 or 9x2, and the number of DDMs, the total capacity is calculated accordingly. Each Fibre Channel/FICON host adapter has four Fibre Channel ports, providing up to 128 Fibre Channel ports for a maximum configuration. Each ESCON host adapter has two ports; therefore, the maximum number of ESCON ports possible is 64. There can be an intermix of Fibre Channel/FICON and ESCON adapters, up to the maximum of 32 adapters.
Physical capacity
Adding DDMs
A significant benefit of the DS8000 series is the ability to add DDMs without disruption for maintenance. IBM offers capacity on demand solutions that are designed to meet the changing storage needs of rapidly growing e-business. The Standby Capacity on Demand (CoD) offering is designed to provide you with the ability to tap into additional storage and is particularly attractive if you have rapid or unpredictable storage growth. Up to four Standby CoD disk drive sets (64 disk drives) can be factory-installed or field-installed into your system. To activate, you simply logically configure the disk drives for use, which is a nondisruptive activity that does not require intervention from IBM. Upon activation of any portion of a Standby CoD disk drive set, you must place an order with IBM to initiate billing for the activated set. At that time, you can also order replacement Standby CoD disk drive sets. For more information about the Standby CoD offering, refer to the DS8000 series announcement letter. IBM announcement letters can be found at:
http://www.ibm.com/products
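As a simple illustration of the Standby CoD numbers quoted above (up to four disk drive sets, 64 drives in total, so 16 drives per set), the standby capacity works out as follows. The drive sizes are just the two examples used elsewhere in this chapter; this is illustrative Python arithmetic, not an offering statement:

cod_sets = 4
drives_per_set = 64 // cod_sets            # 16 drives per Standby CoD disk drive set
for drive_gb in (300, 500):                # 300 GB FC or 500 GB FATA DDMs
    standby_tb = cod_sets * drives_per_set * drive_gb / 1000
    print(f"{drive_gb} GB DDMs: up to {standby_tb:.1f} TB of standby capacity")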
When the first expansion frame is attached to a 9x2 base frame, a disruptive maintenance action is needed, because the first expansion frame for the 9x2 models has I/O enclosures
and these I/O enclosures must be connected into the existing RIO-G loops. If you install the base frame and the first expansion frame for the Model 9x2 at the beginning, you do not need a disruptive upgrade to add DDMs. The expansion frame for the DS8100 model and the second, third, and fourth expansion frames for the DS8300 models have no I/O enclosures; therefore, you can attach them to the existing frames without disruption.
Figure 2-9 4-way model components
Data migration or backup/restore is your responsibility. Fee-based data migration services are available from IBM Global Services. The IBM POWER5+ processor is an optional feature for the DS8100 Model 921 and the DS8300 Models 922 and 9A2. The installation of this feature is non-disruptive for model upgrades 921 to 931, 922 to 932, and 9A2 to 9B2. The following upgrade paths to Turbo Model 931, 932, and 9B2 equivalences are available for the older Models 921, 922, and 9A2:
- Model 921 upgrade to Turbo Model 931 equivalence: Add the Processor Upgrade feature and complete a Processor Memory feature conversion (non-disruptive).
- Model 921 upgrade to Turbo Model 932 equivalence: First a Model 921 to 922 model conversion, then add the Processor Upgrade feature and complete a Processor Memory feature conversion (disruptive).
- Model 921 upgrade to Turbo Model 9B2 equivalence: First a Model 921 to 9A2 model conversion, then add the Processor Memory feature (disruptive).
- Model 922 upgrade to Turbo Model 932 equivalence: Add the Processor Upgrade feature and complete a Processor Memory feature conversion (non-disruptive).
- Model 922 upgrade to Turbo Model 9B2 equivalence: First a Model 922 to 9A2 model conversion, then add the Processor Upgrade feature (disruptive).
- Model 9A2 upgrade to Turbo Model 9B2 equivalence: Add the Processor Upgrade feature and complete a Processor Memory feature conversion (non-disruptive).
- Model 9A2 upgrade to Turbo Model 932 equivalence: First a Model 9A2 to 922 model conversion, then add the Processor Upgrade feature (disruptive).

For more information about model conversions and upgrade paths, refer to the DS8000 series Turbo models announcement letter. IBM announcement letters can be found at:
http://www.ibm.com/products
Chapter 3.
Topics covered in this chapter include:
- LPAR security through POWER Hypervisor (PHYP)
- LPAR and Copy Services
- LPAR benefits
- Summary
Partitions
When a multi-processor computer is subdivided into multiple, independent operating system images, those independent operating environments are called partitions. The resources on the system are allocated to specific partitions.
Resources
Resources are defined as a system's processors, memory, and I/O slots. I/O slots can be populated by different adapters, such as Ethernet, SCSI, Fibre Channel, or other device controllers. A disk is allocated to a partition by assigning it the I/O slot that contains the disk's controller.
Figure 3-2 DS8300 Model 9B2: LPAR and storage facility image
The DS8300 series incorporates two four-way POWER5+ server processors; see Figure 3-2, DS8300 Model 9B2: LPAR and storage facility image on page 34. We call each of these a processor complex. Each processor complex on the DS8300 is divided into two processor LPARs (a set of resources on a processor complex that support the execution of an operating
system). The storage facility image (SFI) is built from a pair of processor LPARs, one on each processor complex. Figure 3-2, DS8300 Model 9B2: LPAR and storage facility image on page 34, shows that LPAR01 from processor complex 0 and LPAR11 from processor complex 1 form storage facility image 1 (SFI 1). LPAR02 from processor complex 0 and LPAR12 from processor complex 1 form the second storage facility image (SFI 2). Storage facility images (SFIs) are also referred to as storage system LPARs, and sometimes just as storage LPARs. Important: Understand the difference between processor LPARs and storage LPARs (storage facility images).
Figure 3-3 DS8300 LPAR resource allocation
Each SFI in the DS8300 has access to:
- 50 percent of the processors
- 50 percent of the processor memory
- 1 loop of the RIO-G interconnection
- Up to 16 host adapters (4 I/O drawers with up to 4 host adapters)
- Up to 512 disk drives (up to 154 TB with FC DDMs or 256 TB with FATA DDMs)

Note: The resource allocation for processors, memory, and I/O slots in the two storage system LPARs on the DS8300 is split in a fixed 50/50 ratio, as the arithmetic sketch below illustrates.
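Because the split is a fixed 50/50, the per-SFI figures in the list above follow directly from the DS8300 machine maximums quoted in Chapter 2 (256 GB of processor memory, 1024 DDMs, 32 host adapters). The following Python lines are just an illustrative verification of that arithmetic:

totals = {"processor memory (GB)": 256, "DDMs": 1024, "host adapters": 32}
per_sfi = {resource: value // 2 for resource, value in totals.items()}
print(per_sfi)   # {'processor memory (GB)': 128, 'DDMs': 512, 'host adapters': 16}
print("disk capacity per SFI:", per_sfi["DDMs"] * 0.3, "TB (FC) /",
      per_sfi["DDMs"] * 0.5, "TB (FATA)")   # 153.6 TB (~154 TB) / 256 TB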
Figure 3-4 shows the split of all available resources between the two SFIs; each SFI has 50% of all available resources.
I/O resources
For one SFI, the following hardware resources are required:
- 2 SCSI controllers with 2 disk drives each
- 2 Ethernet ports (to communicate with the HMC)
- 1 Thin Device Media Bay (for example, CD or DVD; can be shared between the LPARs)
Each SFI will have two physical disk drives in each processor complex. Each disk drive will contain three logical volumes: the boot volume and two logical volumes for the memory save dump function. These three logical volumes are then mirrored across the two physical disk drives for each LPAR. In Figure 3-4, for example, the disks A/A' are mirrors. For the DS8300 Model 9B2, there will be four drives total in one physical processor complex.
- An additional 256 DDMs

The second and third expansion frames (Model 9AE) have the following option, divided evenly between the SFIs:
- An additional 256 DDMs

The fourth expansion frame (Model 9AE) has the following option, divided evenly between the SFIs:
- An additional 128 DDMs

A fully configured DS8300 has one base frame and four expansion frames. The first expansion frame has additional I/O drawers and disk drive modules (DDMs), while the three additional expansion frames contain only additional DDMs. Figure 3-5 provides an example of how a DS8300 with two expansion frames might be configured. The disk enclosures are assigned to SFI 1 (yellow) or SFI 2 (green). When ordering additional disk capacity, it can be allocated to either SFI, but the cabling is predetermined, so in this example, disks added to the empty pair of disk enclosures would be allocated to SFI 2.
The POWER Hypervisor must always be installed and activated, regardless of the system configuration. It operates as a hidden partition, with no processor resources assigned to it. In a partitioned environment, the POWER Hypervisor is loaded into the first Physical Memory Block (PMB) at physical address zero and reserves the PMB. From then on, it is not possible for an LPAR to access the physical memory directly. Every memory access is controlled by the POWER Hypervisor, as shown in Figure 3-6 on page 39. Each partition has its own exclusive page table (also controlled by the POWER Hypervisor), which the processors use to transparently convert a program's virtual address into the physical address where that page has been mapped into physical memory. In a partitioned environment, the operating system uses hypervisor services to manage the translation control entry (TCE) tables. The operating system communicates the desired I/O bus address to logical mapping, and the hypervisor translates that into the I/O bus address to physical mapping within the specific TCE table. The hypervisor needs a dedicated memory region for the TCE tables to translate the I/O address to the partition memory address; the hypervisor can then perform direct memory access (DMA) transfers to the PCI adapters.
The hardware and the POWER Hypervisor manage the real-to-virtual memory mapping to provide robust isolation between partitions.
Figure 3-6 LPAR protection in IBM POWER Hypervisor
Remote mirroring is possible within a storage facility image or across storage facility images; FlashCopy is possible only within a storage facility image.
Figure 3-7 summarizes the basic considerations for Copy Services when used with a partitioned DS8300.
FlashCopy
The DS8000 series fully supports the FlashCopy and IBM FlashCopy SE capabilities, including cross LSS support. However, a source volume of a FlashCopy located in one SFI cannot have a target volume in the second SFI, as illustrated in Figure 3-7.
The following are examples of possible scenarios where SFIs can be useful:
- Separating workloads: An environment can be split by operating system, application, organizational boundaries, or production readiness. For example, you can separate z/OS hosts on one SFI from open hosts on the other SFI, or you can run production on one SFI and run a test or development environment on the other SFI.
- Dedicating resources: As a service provider, you can provide dedicated resources to each client, thereby satisfying security and service level agreements, while having the environment all contained on one physical DS8300.
- Production and data mining: For database purposes, imagine a scenario where the production database is running in the first SFI and a copy of the production database is running in the second SFI. Analysis and data mining can be performed on the copy without interfering with the production database.
- Business continuance (secondary) within the same physical array: You can use the two partitions to test Copy Services solutions, or you can use them for multiple copy scenarios in a production environment.
- Information Lifecycle Management (ILM) partition with fewer resources, slower DDMs: One SFI can utilize, for example, only fast disk drive modules to ensure high performance for the production environment, and the other SFI can use fewer and slower DDMs to ensure Information Lifecycle Management at a lower cost.

Figure 3-8 depicts one example of SFI use in the DS8300.
This example shows a DS8300 with a total physical capacity of 30 TB. In this case, a minimum Operating Environment License (OEL) is required to cover the 30 TB capacity. The DS8300 is split into two images. SFI 1 is used for an open systems environment and utilizes 20 TB of fixed block data (FB). SFI 2 is used for a System z environment and uses 10 TB of count key data (CKD). To utilize either FlashCopy or FlashCopy SE on the entire capacity would require a 30 TB FlashCopy or a 30 TB FlashCopy SE license. However, as in this example, it is possible to have either a FlashCopy or a FlashCopy SE license for SFI 1 for 20 TB only. In this example for the System z environment, no copy function is needed, so there is no need to purchase a Copy Services license for SFI 2. For more information about the licensed functions, see Chapter 11, Features and license keys on page 227.
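The licensing arithmetic in this example is simple addition per SFI; the following Python lines restate it (illustrative only, using the capacities from the example above):

sfi_capacity_tb = {"SFI 1 (open systems, FB)": 20, "SFI 2 (System z, CKD)": 10}
oel_tb = sum(sfi_capacity_tb.values())                        # OEL must cover the whole machine
flashcopy_tb = sfi_capacity_tb["SFI 1 (open systems, FB)"]    # FlashCopy licensed for SFI 1 only
print(f"Operating Environment License: {oel_tb} TB")          # 30 TB
print(f"FlashCopy (or FlashCopy SE) license: {flashcopy_tb} TB")   # 20 TB
print("Copy Services license for SFI 2: none needed in this example")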
Addressing capability            ESS 800    DS8300 with LPAR
Max Logical Subsystems           32         510
Max Logical Devices              8K         127.5K
Max Logical CKD Devices          4K         127.5K
Max Logical FB Devices           4K         127.5K
Max N-Port Logins/Port           128        509
Max N-Port Logins                512        16K
Max Logical Paths/FC Port        256        1280
Max Logical Paths/CU Image       256        512
Max Path Groups/CU Image         128        256
Figure 3-9 DS8300 with and without LPAR as opposed to ESS800: addressing capabilities
3.6 Summary
The DS8000 series first delivered the POWER5 processor IBM Virtualization Engine logical partitioning capability with the Model 9A2, and with Release 2 this was enhanced with the POWER5+ processor of the Turbo Model 9B2. This storage system LPAR technology is designed to enable the creation of two completely separate storage systems. The SFIs can be used for production, test, or other unique storage environments, and they operate within a single physical enclosure. Each SFI can be established to support the specific performance requirements of a different, heterogeneous workload. The DS8000 series robust partitioning implementation helps to isolate and protect the SFIs. These storage system LPAR capabilities are designed to help simplify systems by maximizing management efficiency, cost-effectiveness, and flexibility.
Chapter 4. Hardware Components
This chapter describes the hardware components of the DS8000 series. This chapter is intended for readers who want to get a clear picture of what the individual components look like and the architecture that holds them together. The following topics are covered in this chapter:
- Frames
- Architecture
- Processor complex
- Disk subsystem
- Host adapters
- Power and cooling
- Management console network
- System Storage Productivity Center (SSPC)
- Ethernet adapter pair (for TPC-R support at R2+)
4.1 Frames
The DS8000 is designed for modular expansion. From a high-level view, there appear to be three types of frames available for the DS8000. However, on closer inspection, the frames themselves are almost identical. The only variations are the combinations of processors, I/O enclosures, batteries, and disks that the frames contain. Figure 4-1 is an attempt to show some of the frame variations that are possible with the DS8000 series. The left frame is a base frame that contains the processors (System p POWER5+ servers). The center frame is an expansion frame that contains additional I/O enclosures but no additional processors. The right frame is an expansion frame that contains just disk (and no processors, I/O enclosures, or batteries). Each frame contains a frame power area with power supplies and other power-related hardware.
Between the disk enclosures and the processor complexes are two Ethernet switches, a Storage Hardware Management Console (HMC), and a keyboard/display module. The base frame contains two processor complexes. These System p POWER5+ servers contain the processor and memory that drive all functions within the DS8000. In the ESS, we referred to them as clusters, but this term is no longer relevant. We now have the ability to logically partition each processor complex into two LPARs, each of which is the equivalent of an ESS cluster. Finally, the base frame contains four I/O enclosures. These I/O enclosures provide connectivity between the adapters and the processors. The adapters contained in the I/O enclosures can be either device adapters (DAs) or host adapters (HAs). The communication path used for adapter to processor complex communication is the RIO-G loop. This loop not only joins the I/O enclosures to the processor complexes, it also allows the processor complexes to communicate with each other.
An emergency power off (EPO) switch allows the immediate removal of system power. A small cover must be lifted to operate it. Do not trip this switch unless the DS8000 is creating a safety hazard or is placing human life at risk.
Fault indicator
There is no power on/off switch on the operator panel, because power sequencing is managed through the HMC. This is to ensure that all data in nonvolatile storage (known as modified data) is destaged properly to disk prior to power down. It is thus not possible to shut down or power off the DS8000 from the operator panel (except in an emergency, with the EPO switch mentioned previously).
4.2 Architecture
Now that we have described the frames themselves, we use the rest of this chapter to explore the technical details of each of the components. The architecture that connects these components is pictured in Figure 4-3 on page 47. In effect, the DS8000 consists of two processor complexes. Each processor complex has access to multiple host adapters to connect to Fibre Channel, FICON, and ESCON hosts. A DS8300 can have up to 32 host adapters. To access the disk subsystem, each complex uses several four-port Fibre Channel arbitrated loop (FC-AL) device adapters. A DS8000 can have up to sixteen of these adapters arranged into eight pairs. Each adapter connects the complex to two separate switched Fibre Channel networks. Each switched network attaches disk enclosures that each contain up to 16 disks. Each enclosure contains two 20-port Fibre Channel switches. Of these 20 ports, 16 are used to attach to the 16 disks in the enclosure and the remaining four are used to either interconnect with other enclosures or to the device adapters. Each disk is attached to both switches. Whenever the device adapter connects to a disk, it uses a switched connection to transfer data. This means that all data travels through the shortest possible path. The attached hosts interact with software which is running on the complexes to access data on logical volumes. Each complex will host at least one instance of this software (which is
called a server), which runs in a logical partition (an LPAR). The servers manage all read and write requests to the logical volumes on the disk arrays. During write requests, the servers use fast-write, in which the data is written to volatile memory on one complex and persistent memory on the other complex. The server then reports the write as complete before it has been written to disk. This provides much faster write performance. Persistent memory is also called nonvolatile storage (NVS).
When a host performs a read operation, the servers fetch the data from the disk arrays using the high performance switched disk architecture. The data is then cached in volatile memory in case it is required again. The servers attempt to anticipate future reads by an algorithm known as Sequential prefetching in Adaptive Replacement Cache (SARC). Data is held in cache as long as possible using this smart algorithm. If a cache hit occurs where requested data is already in cache, then the host does not have to wait for it to be fetched from the disks. Starting with v5.2.400.327, the cache management was further enhanced by a breakthrough caching technology from IBM Research called Adaptive Multi-stream Prefetching (AMP).
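The internals of SARC and AMP are IBM algorithms and are not described in detail here. Purely to illustrate the general idea of sequential prefetch detection, the toy Python class below watches for consecutive track reads per stream and stages the next few tracks ahead of the host request; it is a minimal sketch under simplifying assumptions and not the DS8000 implementation:

class PrefetchDetector:
    # Toy sequential-read detector: NOT the DS8000 SARC/AMP code.
    def __init__(self, window=3, prefetch_depth=4):
        self.window = window                  # consecutive tracks needed to call a stream sequential
        self.prefetch_depth = prefetch_depth  # how many tracks to stage ahead
        self.history = {}                     # stream id -> (last track read, current run length)

    def access(self, stream_id, track):
        last, run = self.history.get(stream_id, (None, 0))
        run = run + 1 if last is not None and track == last + 1 else 1
        self.history[stream_id] = (track, run)
        if run >= self.window:
            # stream looks sequential: stage the next tracks into cache before the host asks
            return list(range(track + 1, track + 1 + self.prefetch_depth))
        return []

detector = PrefetchDetector()
for track in range(100, 106):
    print(track, "->", detector.access("host1-vol5", track))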
Both the device and host adapters operate on a high bandwidth fault-tolerant interconnect known as the RIO-G. The RIO-G design allows sharing of host adapters between servers and offers exceptional performance and reliability. Figure 4-3 on page 47 uses colors as indicators of how the DS8000 hardware is shared between the servers (the cross hatched color is green and the lighter color is yellow). On the left side, the green server is running on the left processor complex. The green server uses the N-way symmetric multiprocessor (SMP) of the complex to perform its operations. It records its write data and caches its read data in the volatile memory of the left complex. For fast-write data, it has a persistent memory area on the right processor complex. To access the disk arrays under its management (the disks also shown in green), it has its own device adapter (again in green). The yellow server on the right operates in an identical fashion. The host adapters (in dark red) are deliberately not colored green or yellow, because they are shared between both servers.
The SMP system features 2-way or 4-way, copper-based, SOI-based POWER5+ microprocessors running at 2.2 GHz with 36 MB off-chip Level 3 cache configurations. The system is based on a concept of system building blocks. The processor complexes use processor interconnect and system flex cables that enable as many as four 4-way System p processor complexes to be connected to achieve a true 16-way SMP combined system. How these features are implemented in the DS8000 might vary. Figure 4-4 shows a front view and a rear view of the DS8000 series processor complex.
One processor complex includes:
- Five hot-plug PCI-X slots with Enhanced Error Handling (EEH)
- An enhanced blind-swap mechanism that allows hot-swap replacement or installation of PCI-X adapters without sliding the enclosure into the service position
- Two Ultra320 SCSI controllers
- One 10/100/1000 Mbps integrated dual-port Ethernet controller
- Two serial ports
- Two USB 2.0 ports
- Two HMC Ethernet ports
- Four remote RIO-G ports
- Two System Power Control Network (SPCN) ports

The System p POWER5+ servers in the DS8000 include two 3-pack front-accessible, hot-swap-capable disk bays. The six disk bays of one System p POWER5+ processor complex can accommodate up to 880.8 GB of disk storage using the 146.8 GB Ultra320 SCSI disk drives. Two additional media bays are used to accept optional slim-line media devices, such as DVD-ROM or DVD-RAM drives. The System p5+ processor complex also has I/O
expansion capability using the RIO-G interconnect. How these features are implemented in the DS8000 can vary.
Processor memory
The DS8100 models 9x1 offer up to 128 GB of processor memory and the DS8300 models 9x2 offer up to 256 GB of processor memory. Half of this will be located in each processor complex. In addition, the Nonvolatile Storage (NVS) scales to the processor memory size selected, which can also help optimize performance.
4.3.1 RIO-G
The RIO-G ports are used for I/O expansion to external I/O drawers. RIO stands for remote I/O. The RIO-G evolved from earlier versions of the RIO interconnect. Each RIO-G port can operate at 1 GHz in bidirectional mode and is capable of passing data in each direction on each cycle of the port. It is designed as a high performance, self-healing interconnect. Each System p POWER5+ server in the DS8000 provides two external RIO-G ports, and an adapter card adds two more. Two ports on each processor complex form a loop. Figure 4-5 on page 52 illustrates how the RIO-G cabling is laid out in a DS8300 that has eight I/O drawers. This would only occur if an expansion frame were installed. The DS8000 RIO-G cabling will vary based on the model. A two-way DS8000 model will have one RIO-G loop. A four-way DS8000 model will have two RIO-G loops. Each loop supports four I/O enclosures.
Figure 4-5 DS8300 RIO-G port layout: two RIO-G loops (Loop 0 and Loop 1), each connecting four I/O enclosures to the RIO-G ports of Processor Complex 0 and Processor Complex 1
Front and rear views of an I/O enclosure (SPCN ports)
Each I/O enclosure has the following attributes:
- 4U rack-mountable enclosure
- Six PCI-X slots: 3.3 V, keyed, 133 MHz blind-swap hot-plug
- Default redundant hot-plug power and cooling devices
- Two RIO-G and two SPCN ports
The DAs are installed in pairs, because each storage partition requires its own adapter to connect to each disk enclosure for redundancy. This is why we refer to them as DA pairs.
Slots that do not contain a DDM are occupied by dummy carriers. A dummy carrier looks very similar to a DDM in appearance but contains no electronics. The enclosure is pictured in Figure 4-8. Note: If a DDM is not present, its slot must be occupied by a dummy carrier, because without a drive or a dummy, cooling air does not circulate correctly. Each DDM is an industry standard FC-AL disk. Each disk plugs into the disk enclosure backplane. The backplane is the electronic and physical backbone of the disk enclosure.
The main problems with standard FC-AL access to DDMs are:
- The full loop is required to participate in data transfer.
- Full discovery of the loop through loop initialization protocol (LIP) is required before any data transfer.
- Loop stability can be affected by DDM failures.
- In the event of a disk failure, it can be difficult to identify the cause of a loop breakage, leading to complex problem determination.
- There is a performance drop-off when the number of devices in the loop increases.
- To expand the loop, it is normally necessary to partially open it. If mistakes are made, a complete loop outage can result.
These problems are solved with the switched FC-AL implementation on the DS8000.
When a connection is made between the device adapter and a disk, the connection is a switched connection that uses arbitrated loop protocol. This means that a mini-loop is created between the device adapter and the disk. Figure 4-11 on page 56 depicts four simultaneous and independent connections, one from each device adapter port.
Figure 4-11 Switched connections: four simultaneous and independent connections from the device adapter ports, through the FC switches (four FC-AL ports each), to DDMs in the front and rear enclosures
Expansion
Storage enclosures are added in pairs and disks are added in groups of 16. It takes two orders of 16 DDMs to fully populate a disk enclosure pair (front and rear). For example, if a machine had six disk enclosures in total, it would have three at the front and three at the rear. If all the enclosures were fully populated with disks, and an additional order of 16 DDMs were purchased, then two new disk enclosures would be added, one at the front and one at the rear. The switched networks do not need to be broken to add these enclosures. They are simply added to the end of the loop; 8 DDMs go in the front enclosure and the remaining 8 DDMs go in the rear enclosure. If an additional 16 DDMs are ordered later, they are used to fill up that pair of disk enclosures. These additional DDMs have to be of the same type as the DDMs already residing in the two enclosures.
To better understand AAL, refer to Figure 4-13 on page 57 and Figure 4-14. To make the diagrams clearer, only 16 DDMs are shown, eight in each disk enclosure. When fully populated, there would be 16 DDMs in each enclosure. Figure 4-13 on page 57 depicts the DA pair layout. One DA pair creates two switched loops. The front enclosures populate one loop while the rear enclosures populate the other loop. Each enclosure places two switches onto each loop. Each enclosure can hold up to 16 DDMs. DDMs are purchased in groups of 16. Half of the new DDMs go into the front enclosure and half go into the rear enclosure. Having established the physical layout, the diagram is now changed to reflect the layout of the array sites, as shown in Figure 4-14. Array site 1, in green (the darker disks), uses the four left DDMs in each enclosure. Array site 2, in yellow (the lighter disks), uses the four right DDMs in each enclosure. When an array is created on each array site, half of the array is placed on each loop. A fully populated enclosure would have four array sites.
Figures 4-13 and 4-14: DA pair layout and array sites across loop 0 and loop 1 (array site 1 and array site 2 each span both loops; there are two separate switches in each enclosure)
AAL benefits
AAL is used to increase performance. When the device adapter writes a stripe of data to a RAID-5 array, it sends half of the write to each switched loop. By splitting the workload in this manner, each loop is worked evenly, which improves performance. If RAID-10 is used, two RAID-0 arrays are created. Each loop hosts one RAID-0 array. When servicing read I/O, half of the reads can be sent to each loop, again improving performance by balancing workload across loops.
Four different Fibre Channel DDM types:
- 73 GB, 15K RPM drive
- 146 GB, 15K RPM drive
- 300 GB, 15K RPM drive
And one Fibre Channel Advanced Technology Attachment (FATA) DDM drive:
- 500 GB, 7,200 RPM drive
Effective January 11, 2008, IBM has withdrawn the following products from marketing:
- IBM System Storage DS8000 series 146 GB 10,000 RPM Fibre Channel Disk Drives
- IBM System Storage DS8000 series 300 GB 10,000 RPM Fibre Channel Disk Drives
Drives replaced in the field as spare parts are not affected by this withdrawal.
SATA and FATA disk drives are a cost-efficient storage option for lower intensity storage workloads. By providing the same dual-port FC interface as Fibre Channel disks, FATA drives offer higher availability and ensure compatibility and investment protection for existing enterprise-class storage systems. Note: The FATA drives offer a cost-effective option for lower priority data such as various fixed content, data archival, reference data, and near-line applications that require large amounts of storage capacity for lighter workloads. These new drives are meant to complement, not compete with, existing Fibre Channel drives, because they are not intended for use in applications that require drive utilization duty cycles greater than 20 percent. Following is a summary of the key characteristics of the different disk drive types.
Fibre Channel
Fibre Channel is:
- Intended for heavy workloads in multi-user environments
- Highest performance, availability, reliability, and functionality
- Good capacity: 36 GB to 300 GB
- Very high activity
- Greater than 80% duty cycle
FATA
FATA is:
- Intended for lower workloads in multi-user environments
- High performance, availability, and functionality
- High reliability
- More robust technology: Extensive Command Queuing
- High capacity: 500 GB disk drives
- Moderate activity
- 20-30% duty cycle
SATA-1
SATA-1 characteristics are:
- Intended for lower workloads in multi-user environments
- Good performance
- Less availability and functionality than FATA or Fibre Channel disk drives: single port interface, no command queuing
- High reliability
- High capacity: 250 GB to 500 GB disk drives
- Moderate activity
- 20-30% duty cycle
SATA-2
SATA-2 characteristics are:
- Intended for lower workloads in multi-user environments
- High performance, availability, and functionality
- High reliability
- More robust technology: Extensive Command Queuing
- High capacity: 500 GB disk drives
- Moderate activity
- 20-30% duty cycle
Classes of storage
To better visualize where the benefits of FATA can best be obtained in a networked storage environment, it helps to position the types, or classes, of storage and the storage technology appropriate at each level. See Figure 4-15 on page 62.
Figure 4-15 Classes of storage in the networked storage hierarchy: servers attached through the network to online storage (disk), near-line storage (disk), and offline storage (tape)
Basically, storage data can reside at three different locations within the networked storage hierarchy. Particular data types are suitable for storage at the various levels:
- Online (primary) storage: Best suited for applications that require constant instantaneous access to data, such as databases and frequently accessed user data. Primary storage stores business-critical information, data with the highest value and importance. This data requires continuous availability and typically has high performance requirements. Business-critical data will be stored on Fibre Channel disk implemented in enterprise-class storage solutions.
- Near-line (secondary) storage: Used for applications that require quicker access compared with offline storage (such as tape), but do not require the continuous, instantaneous access provided by online storage. Secondary storage stores business-important information, but can often tolerate lower performance and potentially slightly less than 24/7 availability. It can also be used to cache online storage for quicker backups to tape. Secondary storage represents a large percentage of a company's data and is an ideal fit for FATA technology.
- Offline (archival) storage: Used for applications where infrequent serial access is required, such as backup for long-term storage. For this type of storage, tape remains the most economical solution.
Data storage implementations best suited to use FATA technology reside at the near-line or secondary location within the networked storage hierarchy and offer a cost-effective alternative to FC disks at that location. Positioned between online storage and offline storage, near-line or secondary storage is an optimal cost/performance solution for hosting cached backups and fixed data storage. Table 4-1 on page 63 summarizes the general characteristics for primary, secondary, and archival storage in traditional IT environments.
Table 4-1 Storage classes in traditional IT environments
Class of storage       Online        Near-line          Offline
Primary media          FC disk       FATA disk          Tape
Price                  Highest       Low cost-per-GB    Lowest
IOPS performance       Highest       Minimal            NA
MBps performance       Highest       High               Lowest
Time to data           Immediate     Almost immediate   Mount time
Media reliability      Highest       Good               Good - Lower
Uptime                 24/7          < 24/7             < 24/7
Typical applications   ERP/Oracle    Fixed content      Archive retrieval
Access frequency
In addition to random and sequential access patterns, another consideration is access frequency and its relationship with secondary storage. Several secondary storage implementations identified as ideal for FATA technology generate random data access, which on the surface does not fit the FATA performance profile. But these implementations, such as fixed content and reference data, will have sporadic access activity on large quantities of data and will therefore primarily be measured by cost per gigabyte and not performance. Many non-traditional IT environments, such as high-performance computing, rich media, and energy, will significantly benefit from enterprise-class FATA solutions. These businesses are
looking for high throughput performance at the lowest cost per gigabyte, which is exactly what FATA can deliver.
Backup application
The secondary storage implementation that fits the FATA performance profile exceptionally well is backup, which generates sequential I/O as it streams data to the backup target, a performance strength of FATA. The backup of secondary data can be achieved more efficiently when the near-line storage device acts as a caching device between Fibre Channel (FC) disks and tape, allowing the primary disk to remain online longer. Advantages of this backup method are that it is faster and consumes less server CPU than direct backup to tape. See Figure 4-16.
Figure 4-16 Near-line backup scenario (online disk storage backed up over the network to near-line disk, then to offline tape)
Near-line storage allows disk-to-disk backups to help achieve the following benefits:
- Shorter backup time and higher application availability: Any IT department will tell you that its backup windows are either shrinking or already nonexistent. As a result, IT personnel are always looking for ways to improve backup times and minimize the amount of time that a given application is affected by backup, either total downtime or time running in a degraded mode. By using disk as the backup target, the backup runs and completes faster. After the data is safely stored on disk, the application is free of the backup overhead. In addition, the data can then be moved to tape to provide the long-term benefits of the traditional backup process.
- Faster recovery time: In the past, tape was the only means of restoring data. This is a prolonged process, because the appropriate tape has to be located, loaded into the tape drive, and then sequentially read to locate and retrieve the desired data. Information has become increasingly vital to a company's success, and the lengthy restoration time from tape can now be avoided. Backing up data to disk, as a disk image, enables significantly faster restoration times, because data is stored online and can be located and retrieved immediately.
- Improved backup and restore reliability: Disk-to-disk backups create a new confidence in the ability to recover critical data by eliminating the mechanical concerns associated with tape; one bad tape can cause large restores to fail. Disk backups offer the same high level of RAID protection and redundancy as the original data.
- Easier backup and restore management: Storage management software functionality can be used to create volume-level copies, or clones, of data as a source for restoration. Disk-to-disk backup packages, however, provide more intelligence and file-level information that enable simplified administration and faster restores.
Figure 4-17 Reference data storage scenario (network-attached disk for reference data storage, with offline storage)
Data retention
Recent government regulations have made it necessary to store, identify, and characterize data. The majority of this data will be unchanging and accessed infrequently, if ever. As a result, the highest possible performance is not a requirement. These implementations require the largest amount of storage for the least cost in the least amount of space. The FATA cost-per-gigabyte advantage over Fibre Channel, together with its high capacity drives, makes it an attractive solution.
Temporary workspace
FATA is a great fit for project-based applications that need short-term, affordable capacity.
Conclusion
We have discussed when FATA is a good choice depending on the nature of an application or the type of storage required. Important: We recommend that FATA drives be employed strictly with applications such as those discussed in The right FATA application on page 64. Other types of applications, and in particular transaction processing, must be avoided.
Usage                                  Storage characteristics
Remote data protection                 Good performance, availability, and capacity
Scientific and geophysics              Performance, capacity
Surveillance data                      Capacity, availability
Temporary storage, spool, and paging   High performance and good availability
ESCON distances
For connections without repeaters, the ESCON distances are 2 km with 50 micron multimode fiber, and 3 km with 62.5 micron multimode fiber. The DS8000 supports all models of the IBM 9032 ESCON directors that can be used to extend the cabling distances.
This site should be consulted regularly, because it has the most up-to-date information about server attachment support.
Figure: host adapter architecture, showing the Fibre Channel protocol engine, protocol chipset, data mover, buffer, QDR memory, and the PPC 750GX 1 GHz processor
This document should be consulted regularly, because it has the most up-to-date information about server attachment support.
them (to support the 4-way processors). In the event of a complete loss of input AC power, the battery assemblies are used to allow the contents of NVS memory to be written to a number of DDMs internal to the processor complex, prior to power off. The FC-AL DDMs are not protected from power loss unless the extended power line disturbance feature has been purchased.
HMC
The HMC is the focal point for maintenance activities and allows configuration management as well as Copy Services management. It is possible to order two management consoles to act as a redundant pair. A typical configuration is to have one internal and one external management console. The internal HMC will contain a PCI modem for remote service.
Ethernet switches
In addition to the Fibre Channel switches installed in each disk enclosure, the DS8000 base frame contains two 16-port Ethernet switches. Two switches are supplied to allow the creation of a fully redundant management network. Each processor complex has multiple connections to each switch to allow each server to access each switch. These switches cannot be used for any equipment not associated with the DS8000. The switches get power from the internal power bus and thus do not require separate power outlets.
Without installing additional software, the customer has the option to upgrade licences for:
- TPC for Disk, to add performance monitoring capabilities
- TPC for Fabric, to add performance monitoring capabilities
- TPC for Data, to add storage management for open system hosts
- TPC Standard Edition (TPC SE), to add all of the above
SSPC can be ordered as a software (SW) package to be installed on customer hardware, or can be ordered as Model 2805, which has the software preinstalled on an xSeries 3550 server running Windows 2003 Enterprise R2. Important: Any new DS8000 shipped with Licence Machine Code 5.30xx.xx requires a minimum of one SSPC per datacenter to enable the launch of the DS8000 Storage Manager other than from the HMC. SSPC is described in detail in Chapter 13, System Storage Productivity Center, on page 255.
Chapter 5. Reliability, availability, and serviceability (RAS)
5.1 Naming
It is important to understand the naming conventions used to describe DS8000 components and constructs in order to fully appreciate the discussion of RAS concepts.
Storage complex
This term describes a group of DS8000s managed by a single management console. A storage complex can consist of just a single DS8000 storage unit.
Storage unit
A storage unit consists of a single DS8000 (including expansion frames). If your organization has one DS8000, then you have a single storage complex that contains a single storage unit.
Figure 5-1 Single image mode: server 0 and server 1 run as processor LPARs on Processor complex 0 and Processor complex 1
used to form a storage image. If there are four servers, there are effectively two separate storage subsystems existing inside one DS8300 model 9B2 storage unit. See Figure 5-2.
Figure 5-2 DS8300 Turbo Model 9B2: Dual image mode (Processor complex 0 and Processor complex 1 each host processor LPARs for a Server 0 and a Server 1 of Storage Facility Image 1 and Storage Facility Image 2)
In Figure 5-2, we have two storage facility images (SFIs). The upper server 0 and upper server 1 form SFI 1. The lower server 0 and lower server 1 form SFI 2. In each SFI, server 0 is the darker color (green) and server 1 is the lighter color (yellow). SFI 1 and SFI 2 can share common hardware (the processor complexes), but they are completely separate from an operational point of view. Note: You might think that the lower server 0 and lower server 1 should be called server 2 and server 3. While this might make sense from a numerical point of view (there are four servers, so why not number them from 0 to 3?), each SFI is not aware of the other's existence. Each SFI must have a server 0 and a server 1, regardless of how many SFIs or servers there are in a DS8000 storage unit.
For more information about DS8000 series storage system partitions, see Chapter 3, Storage system logical partitions (LPARs) on page 31.
Processor complex
A processor complex is one System p system unit. Two processor complexes form a redundant pair such that if either processor complex fails, the servers on the remaining processor complex can continue to run the storage image. In an ESS 800, we would have referred to a processor complex as a cluster.
Fault avoidance
POWER5 systems are built to keep errors from ever happening. This quality-based design includes such features as reduced power consumption and cooler operating temperatures for increased reliability, enabled by the use of copper chip circuitry, silicon on insulator (SOI), and dynamic clock-gating. It also uses mainframe-inspired components and technologies.
Permanent monitoring
The SP that is included in the System p5 provides a way to monitor the system even when the main processor is inoperable. The next subsection offers a more detailed description of the monitoring functions in the System p5.
Mutual surveillance
The SP can monitor the operation of the firmware during the boot process, and it can monitor the operating system for loss of control. This enables the service processor to take appropriate action when it detects that the firmware or the operating system has lost control. Mutual surveillance also enables the operating system to monitor for service processor activity and can request a service processor repair action if necessary.
Environmental monitoring
Environmental monitoring related to power, fans, and temperature is performed by the System Power Control Network (SPCN). Environmental critical and non-critical conditions generate Early Power-Off Warning (EPOW) events. Critical events (for example, a Class 5 AC power loss) trigger appropriate signals from hardware to the affected components to prevent any data loss without operating system or firmware involvement. Non-critical environmental events are logged and reported using Event Scan. The operating system cannot program or access the temperature threshold using the SP. Temperature monitoring is also performed. If the ambient temperature goes above a preset operating range, then the rotation speed of the cooling fans can be increased. Temperature monitoring also warns the internal microcode of potential environment-related problems. An orderly system shutdown will occur when the operating temperature exceeds a critical level. Voltage monitoring provides warning and an orderly system shutdown when the voltage is out of operational specification.
Self-healing
For a system to be self-healing, it must be able to recover from a failing component by first detecting and isolating the failed component. It should then be able to take it offline, fix or isolate it, and then reintroduce the fixed or replaced component into service without any application disruption. Examples include:
- Bit steering to redundant memory in the event of a failed memory module to keep the server operational
- Bit scattering, thus allowing for error correction and continued operation in the presence of a complete chip failure (Chipkill recovery)
- Single-bit error correction using Error Checking and Correcting (ECC) without reaching error thresholds for main, L2, and L3 cache memory
- L3 cache line deletes extended from 2 to 10 for additional self-healing
- ECC extended to inter-chip connections on fabric and processor bus
- Memory scrubbing to help prevent soft-error memory faults
- Dynamic processor deallocation
any single-bit errors that have accumulated by passing the data through the ECC logic. This function is a hardware function on the memory controller chip and does not influence normal system memory performance.
N+1 redundancy
The use of redundant parts, specifically the following ones, allows the System p5 to remain operational with full resources:
- Redundant spare memory bits in L1, L2, L3, and main memory
- Redundant fans
- Redundant power supplies
Fault masking
If corrections and retries succeed and do not exceed threshold limits, the system remains operational with full resources and no client or IBM service representative intervention is required.
Resource deallocation
If recoverable errors exceed threshold limits, resources can be deallocated with the system remaining operational, allowing deferred maintenance at a convenient time. Dynamic deallocation of potentially failing components is nondisruptive, allowing the system to continue to run. Persistent deallocation occurs when a failed component is detected; it is then deactivated at a subsequent reboot. Dynamic deallocation functions include:
- Processor
- L3 cache lines
- Partial L2 cache deallocation
- PCI-X bus and slots
Persistent deallocation functions include:
- Processor
- Memory
- Deconfigure or bypass failing I/O adapters
- L3 cache
Following a hardware error that has been flagged by the service processor, the subsequent reboot of the server invokes extended diagnostics. If a processor or L3 cache has been marked for deconfiguration by persistent processor deallocation, the boot process will attempt to proceed to completion with the faulty device automatically deconfigured. Failing I/O adapters will be deconfigured or bypassed during the boot process.
Concurrent Maintenance
Concurrent Maintenance provides replacement of the following parts while the processor complex remains running:
- Disk drives
- Cooling fans
- Power subsystems
- PCI-X adapter cards
LUNs and for System z, 3390 volumes. LUN stands for logical unit number, which is used
for SCSI addressing. Each logical volume belongs to a logical subsystem (LSS). For open systems, the LSS membership is not that important (unless you are using Copy Services), but for System z, the LSS is the logical control unit (LCU), which equates to a 3990 (a System z disk controller which the DS8000 emulates). It is important to remember that LSSs that have an even identifying number have an affinity with server 0, while LSSs that have an odd identifying number have an affinity with server 1. When a host operating system issues a write to a logical volume, the DS8000 host adapter directs that write to the server that owns the LSS of which that logical volume is a member. If the DS8000 is used to operate a single storage image, then the following examples refer to two servers, one running on each processor complex. If a processor complex were to fail, then one server would fail. Likewise, if a server itself were to fail, then it would have the same effect as the loss of the processor complex it runs on. If, however, the DS8000 is divided into two storage images, then each processor complex will host two servers. In this case, a processor complex failure would result in the loss of two servers. The effect on each server would be identical. The failover processes performed by each storage image would proceed independently.
Data flow
When a write is issued to a volume, this write normally gets directed to the server that owns this volume. The data flow is that the write is placed into the cache memory of the owning server. The write data is also placed into the NVS memory of the alternate server. See Figure 5-3.
Figure 5-3 Normal data flow (Server 0 and Server 1)
Figure 5-3 illustrates how the cache memory of server 0 is used for all logical volumes that are members of the even LSSs. Likewise, the cache memory of server 1 supports all logical volumes that are members of odd LSSs. But for every write that gets placed into cache, another copy gets placed into the NVS memory located in the alternate server. Thus, the normal flow of data for a write is:
1. Data is written to cache memory in the owning server.
2. Data is written to NVS memory of the alternate server.
3. The write is reported to the attached host as having been completed.
4. The write is destaged from the cache memory to disk.
5. The write is discarded from the NVS memory of the alternate server.
Under normal operation, both DS8000 servers are actively processing I/O requests. This section describes the failover and failback procedures that occur between the DS8000 servers when an abnormal condition has affected one of them.
Failover
In the example depicted in Figure 5-4 on page 82, server 0 has failed. The remaining server has to take over all of its functions. The RAID arrays, because they are connected to both servers, can be accessed from the device adapters used by server 1. From a data integrity point of view, the real issue is the un-destaged or modified data that belonged to server 1 (that was in the NVS of server 0). Since the DS8000 now has only one copy of that data (which is currently residing in the cache memory of server 1), it will now take the following steps:
1. It destages the contents of its NVS to the disk subsystem.
2. The NVS and cache of server 1 are divided in two, half for the odd LSSs and half for the even LSSs.
3. Server 1 now begins processing the writes (and reads) for all the LSSs.
Figure 5-4 Server 0 failing over its function to server 1
This entire process is known as a failover. After failover, the DS8000 now operates as depicted in Figure 5-4. Server 1 now owns all the LSSs, which means all reads and writes will be serviced by server 1. The NVS inside server 1 is now used for both odd and even LSSs. The entire failover process should be invisible to the attached hosts, apart from the possibility of some temporary disk errors.
Failback
When the failed server has been repaired and restarted, the failback process is activated. Server 1 starts using the NVS in server 0 again, and the ownership of the even LSSs is transferred back to server 0. Normal operations with both controllers active then resume. Just like the failover process, the failback process is invisible to the attached hosts. In general, recovery actions on the DS8000 do not impact I/O operation latency by more than 15 seconds. With certain limitations on configurations and advanced functions, this impact to latency can be limited to 8 seconds. On logical volumes that are not configured with RAID-10 storage, certain RAID-related recoveries can cause latency impacts in excess of 15 seconds. If you have real-time response requirements in this area, contact IBM to determine the latest information about how to manage your storage to meet your requirements.
Figure: host ports (HP) and ESCON adapters (slots 4 and 5) in the I/O enclosures, connected through RIO-G; one server owns all odd LSS logical volumes
It is always preferable that hosts that access the DS8000 have at least two connections to separate host ports in separate host adapters on separate I/O enclosures, as in Figure 5-6 on page 85.
Figure 5-6: a host attached through Fibre Channel to host ports (HP) in different host adapters in different I/O enclosures (ESCON adapters in slots 4 and 5; I/O enclosures interconnected through RIO-G)
In this example, the host is attached to different Fibre Channel host adapters in different I/O enclosures. This is also important because during a microcode update, an I/O enclosure might need to be taken offline. This configuration allows the host to survive a hardware failure on any component on either path.
SAN/FICON/ESCON switches
Because a large number of hosts can be connected to the DS8000, each using multiple paths, the number of host adapter ports that are available in the DS8000 might not be sufficient to accommodate all the connections. The solution to this problem is the use of SAN switches or directors to switch logical connections from multiple hosts. In a System z environment you will need to select a SAN switch or director that also supports FICON. ESCON-attached hosts might need an ESCON director. A logic or power failure in a switch or director can interrupt communication between hosts and the DS8000. We recommend that more than one switch or director be provided to ensure continued availability. Ports from two different host adapters in two different I/O enclosures should be configured to go through each of two directors. The complete failure of either director leaves half the paths still operating.
Multipathing software
Each attached host operating system requires a mechanism to allow it to manage multiple paths to the same device, and to preferably load balance these requests. Also, when a failure occurs on one redundant path, then the attached host must have a mechanism to allow it to detect that one path is gone and route all I/O requests for those logical devices to an alternative path. Finally, it should be able to detect when the path has been restored so that the I/O can again be load-balanced. The mechanism that will be used varies by attached host operating system and environment as detailed in the next two sections.
For more information about the SDD, see 16.1.4, Multipathing support: Subsystem Device Driver (SDD) on page 372.
software support will respond to such requests by varying off the affected paths, and either notifying the DS8000 subsystem that the paths are offline, or that it cannot take the paths offline. CUIR reduces manual operator intervention and the possibility of human error during maintenance actions, at the same time reducing the time required for the maintenance. This is particularly useful in environments where there are many systems attached to a DS8000.
Figure 5-7 also shows the connection paths for expansion on the far left and far right. The paths from the switches travel to the switches in the next disk enclosure. Because expansion is done in this linear fashion, the addition of more enclosures is completely nondisruptive.
RAID-5 theory
The DS8000 series supports RAID-5 arrays. RAID-5 is a method of spreading volume data plus parity data across multiple disk drives. RAID-5 provides faster performance by striping data across a defined set of DDMs. Data protection is provided by the generation of parity information for every stripe of data. If an array member fails, then its contents can be regenerated by using the parity data.
Drive failure
When a disk drive module fails in a RAID-5 array, the device adapter starts an operation to reconstruct the data that was on the failed drive onto one of the spare drives. The spare that is used will be chosen based on a smart algorithm that looks at the location of the spares and the size and location of the failed DDM. The rebuild is performed by reading the corresponding data and parity in each stripe from the remaining drives in the array, then performing an exclusive-OR operation to recreate the data, and then writing this data to the spare drive. While this data reconstruction is going on, the device adapter can still service read and write requests to the array from the hosts. There might be some degradation in performance while the sparing operation is in progress because some DA and switched network resources are used to do the reconstruction. Due to the switch-based architecture, this effect will be minimal. Additionally, any read requests for data on the failed drive require data to be read from the other drives in the array, and then the DA performs an operation to reconstruct the data. Performance of the RAID-5 array returns to normal when the data reconstruction onto the spare device completes. The time taken for sparing can vary, depending on the size of the failed DDM and the workload on the array, the switched network, and the DA. The use of arrays across loops (AAL) both speeds up rebuild time and decreases the impact of a rebuild.
RAID-10 theory
RAID-10 provides high availability by combining features of RAID-0 and RAID-1. RAID-0 optimizes performance by striping volume data across multiple disk drives at a time. RAID-1 provides disk mirroring, which duplicates data between two disk drives. By combining the features of RAID-0 and RAID-1, RAID-10 provides a second optimization for fault tolerance. Data is striped across half of the disk drives in the RAID-1 array. The same data is also striped across the other half of the array, creating a mirror. Access to data is preserved if one
disk in each mirrored pair remains available. RAID-10 offers faster data reads and writes than RAID-5 because it does not need to manage parity. However, with half of the DDMs in the group used for data and the other half to mirror that data, RAID-10 disk groups have less capacity than RAID-5 disk groups.
Drive failure
When a disk drive module (DDM) fails in a RAID-10 array, the controller starts an operation to reconstruct the data from the failed drive onto one of the hot spare drives. The spare that is used will be chosen based on a smart algorithm that looks at the location of the spares and the size and location of the failed DDM. Remember that a RAID-10 array is effectively a RAID-0 array that is mirrored. Thus, when a drive fails in one of the RAID-0 arrays, we can rebuild the failed drive by reading the data from the equivalent drive in the other RAID-0 array. While this data reconstruction is going on, the DA can still service read and write requests to the array from the hosts. There might be some degradation in performance while the sparing operation is in progress, because some DA and switched network resources are used to do the reconstruction. Due to the switch-based architecture of the DS8000, this effect will be minimal. Read requests for data on the failed drive should not be affected because they can all be directed to the good RAID-1 array. Write operations will not be affected. Performance of the RAID-10 array returns to normal when the data reconstruction onto the spare device completes. The time taken for sparing can vary, depending on the size of the failed DDM and the workload on the array and the DA. Compared to RAID-5, RAID-10 sparing completion time is a little faster. This is because rebuilding a RAID-5 6+P configuration requires six reads plus one parity operation for each write, whereas a RAID-10 3+3 configuration requires one read and one write (essentially a direct copy).
you chose to populate a loop with three different DDM sizes. With the DS8000, the intention is to not do this. A minimum of one spare is created for each array site defined until the following conditions are met:
- A minimum of four spares per DA pair
- A minimum of four spares of the largest capacity array site on the DA pair
- A minimum of two spares of capacity and RPM greater than or equal to the fastest array site of any given capacity on the DA pair
Floating spares
The DS8000 implements a smart floating technique for spare DDMs. On an ESS 800, the spare floats. This means that when a DDM fails and the data it contained is rebuilt onto a spare, then when the disk is replaced, the replacement disk becomes the spare. The data is not migrated to another DDM, such as the DDM in the original position the failed DDM occupied. In other words, on an ESS 800 there is no post-repair processing. The DS8000 microcode might choose to allow the hot spare to remain where it has been moved, but it can instead choose to migrate the spare to a more optimum position. This will be done to better balance the spares across the DA pairs, the loops, and the enclosures. It might be preferable that a DDM that is currently in use as an array member is converted to a spare. In this case, the data on that DDM will be migrated in the background onto an existing spare. This process does not fail the disk that is being migrated, though it does reduce the number of available spares in the DS8000 until the migration process is complete. A smart process will be used to ensure that the larger or higher RPM DDMs always act as spares. This is preferable because if we were to rebuild the contents of a 146 GB DDM onto a 300 GB DDM, then approximately half of the 300 GB DDM will be wasted, because that space is not needed. The problem here is that the failed 146 GB DDM will be replaced with a new 146 GB DDM. So the DS8000 microcode will most likely migrate the data back onto the recently replaced 146 GB DDM. When this process completes, the 146 GB DDM will rejoin the array and the 300 GB DDM will become the spare again. Another example would be if the contents of a 73 GB 15K RPM DDM were rebuilt onto a 146 GB 10K RPM spare. The data has now moved to a slower DDM, but the replacement DDM will be the same type as the failed DDM. This means the array will have a mix of RPMs, which is not desirable. Again, a smart migration of the data will be performed when suitable spares have become available.
Overconfiguration of spares
The DDM sparing policies support the overconfiguration of spares. This possibility might be of interest to some installations, because it allows the repair of some DDM failures to be deferred until a later repair action is required.
batteries. If building power is lost, the DS8000 can use its internal batteries to destage the data from NVS memory to a variably sized disk area to preserve that data until power is restored. However, the EPO switch does not allow this destage process to happen and all NVS data is lost. This will most likely result in data loss. If you need to power the DS8000 off for building maintenance, or to relocate it, you should always use the S-HMC to achieve this.
HMC
The HMC is used to perform configuration, management, and maintenance activities on the DS8000. It can be ordered to be located either physically inside the base frame or external for mounting in a client-supplied rack.
Important: The HMC described here is the Storage HMC, not to be confused with the SSPC console which is also required with any new DS8000 shipped with Licence Machine Code 5.30xx.xx. SSPC is described in 4.8, System Storage Productivity Center (SSPC) on page 70. If the HMC is not operational, then it is not possible to perform maintenance, power the DS8000 up or down, perform modifications to the logical configuration or perform Copy Services tasks such as the establishment of FlashCopies. We therefore recommend that you order two management consoles to act as a redundant pair. Alternatively, if TotalStorage Productivity Center for Replication (TPC-R) is used, Copy Services tasks can be managed by that tool if the HMC is unavailable.
Ethernet switches
Each DS8000 base frame contains two 16-port Ethernet switches. Two switches are supplied to allow the creation of a fully redundant management network. Each server in the DS8000 has a connection to each switch. Each HMC also has a connection to each switch. This means that if a single Ethernet switch fails, then all traffic can successfully travel from either S-HMC to any server in the storage unit using the alternate switch.
Chapter 6.
Virtualization concepts
This chapter describes virtualization concepts as they apply to the DS8000. This chapter covers the following topics:
- Virtualization definition
- Storage system virtualization
- The abstraction layers for disk virtualization: array sites, arrays, ranks, Extent Pools, logical volumes, Space Efficient volumes, logical subsystems (LSS), volume access, and a summary of the virtualization hierarchy
- Benefits of virtualization
Figure: logical view (virtual Storage Facility images) and physical view (physical storage unit); each image takes part of the processors, memory, and LIC under control of the LPAR Hypervisor
Figure: host adapters (HA) and device adapters (DA) in the I/O enclosures connect Server 0 and Server 1 through the RIO-G interconnect and the Fibre Channel switches to switched loop 1 and switched loop 2
Compare this with the ESS design, where there was a real loop and having an 8-pack close to a device adapter was an advantage. This is no longer relevant for the DS8000. Because of the switching design, each drive is in close reach of the device adapter, apart from a few
more hops through the Fibre Channel switches for some drives. So, it is not really a loop, but a switched FC-AL architecture that uses the FC-AL addressing schema: Arbitrated Loop Physical Addressing (AL-PA).
Figure 6-3 Array site: eight DDMs, four from Loop 1 and four from Loop 2, attached through the switches
As you can see from Figure 6-3, array sites span loops. Four DDMs are taken from loop 1 and another four DDMs from loop 2. Array sites are the building blocks used to define arrays.
6.3.2 Arrays
An array is created from one array site. Forming an array means defining it as a specific RAID type. The supported RAID types are RAID-5 and RAID-10 (see 5.6.2, RAID-5 overview on page 88 and 5.6.3, RAID-10 overview on page 88). For each array site, you can select a RAID type. The process of selecting the RAID type for an array is also called defining an array. Note: In the DS8000 implementation, one array is defined using one array site. According to the DS8000 sparing algorithm, from zero to two spares can be taken from the array site. This is discussed further in 5.6.4, Spare creation on page 89. Figure 6-4 on page 99 shows the creation of a RAID-5 array with one spare, also called a 6+P+S array (capacity of 6 DDMs for data, capacity of one DDM for parity, and a spare drive). According to the RAID-5 rules, parity is distributed across all seven drives in this example.
On the right side in Figure 6-4 on page 99, the terms D1, D2, D3, and so on stand for the set of data contained on one disk within a stripe on the array. If, for example, 1 GB of data is written, it is distributed across all the disks of the array.
Figure 6-4 Creation of an array: the array site DDMs form a RAID array (data D1, D2, D3, ... with distributed parity P) plus a spare
So, an array is formed using one array site, and while the array could be accessed by each adapter of the device adapter pair, it is managed by one device adapter. You define which adapter and which server manages this array later in the configuration path.
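As an illustration only (not taken from this book), the following DS CLI sketch shows how an array might be defined from an array site; the array site ID S1 and the resulting array ID A0 are hypothetical, and the exact parameters should be verified against the DS CLI reference for your Licence Machine Code level:
dscli> lsarraysite                       (list the array sites and their states)
dscli> mkarray -raidtype 5 -arsite S1    (define array site S1 as a RAID-5 array, for example A0)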
6.3.3 Ranks
In the DS8000 virtualization hierarchy, there is another logical construct, a rank. When defining a new rank, its name is chosen by the DS Storage Manager, for example, R1, R2, or R3, and so on. You have to add an array to a rank. Note: In the DS8000 implementation, a rank is built using just one array. The available space on each rank will be divided into extents. The extents are the building blocks of the logical volumes. An extent is striped across all disks of an array as shown in Figure 6-5 on page 100 and indicated by the small squares in Figure 6-6 on page 102. The process of forming a rank does two things:
- The array is formatted for either fixed block (FB) data (open systems) or count key data (CKD) (System z). This determines the size of the set of data contained on one disk within a stripe on the array.
- The capacity of the array is subdivided into equal-sized partitions, called extents. The extent size depends on the extent type, FB or CKD. An FB rank has an extent size of 1 GB (where 1 GB equals 2^30 bytes).
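Continuing the same illustrative sketch (the array ID A0 is hypothetical; verify the syntax for your code level), a fixed block rank could be created from that array as follows:
dscli> mkrank -array A0 -stgtype fb      (create a rank from array A0, formatted into fixed block extents)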
System z users or administrators typically do not deal with gigabytes; instead, they think of storage in metrics of the original 3390 volume sizes. A 3390 Model 3 is three times the size of a Model 1, and a Model 1 has 1113 cylinders, which is about 0.94 GB. The extent size of a CKD rank was therefore chosen to be one 3390 Model 1, or 1113 cylinders. One extent is the minimum physical allocation unit when a LUN or CKD volume is created, as we discuss later. It is still possible to define a CKD volume with a capacity that is an integral multiple of one cylinder or a fixed block LUN with a capacity that is an integral multiple of 128 logical blocks (64 KB). However, if the defined capacity is not an integral multiple of the capacity of one extent, the unused capacity in the last extent is wasted. For instance, you could define a 1 cylinder CKD volume, but 1113 cylinders (1 extent) are allocated and 1112 cylinders would be wasted. Figure 6-5 shows an example of an array that is formatted for FB data with 1 GB extents (the squares in the rank just indicate that the extent is composed of several blocks from different DDMs).
Figure 6-5 Creation of a rank: the RAID array (D1 through D6 plus parity P) is formatted and its capacity is divided into 1 GB extents
There is no predefined affinity of ranks or arrays to a storage server. The affinity of the rank (and its associated array) to a given server is determined at the point it is assigned to an Extent Pool. One or more ranks with the same extent type (FB or CKD) can be assigned to an Extent Pool. One rank can be assigned to only one Extent Pool. There can be as many Extent Pools as there are ranks.
Storage Pool Striping was made available with Licence Machine Code 5.30xx.xx and allows you to create logical volumes striped across multiple ranks. This will typically enhance performance. To benefit from Storage Pool Striping (see Storage Pool Striping - extent rotation on page 107) more than one rank in an Extent Pool is required.
Storage Pool Striping can enhance performance considerably, but on the other hand, if you lose one rank, not only is the data of this rank lost, but also all data in this Extent Pool, because data is striped across all ranks. Therefore, you should keep the number of ranks in an Extent Pool in the range of four to eight. Tip: Use four to eight ranks with the same characteristics in an Extent Pool. If you want to use more ranks in an Extent Pool, mirror your data to another DS8000. The DS Storage Manager GUI guides you to use the same RAID types in an extent pool. As such, when an extent pool is defined, it must be assigned with the following attributes:
- Server affinity
- Extent type
- RAID type
The minimum number of extent pools is one; however, normally there should be at least two, with one assigned to server 0 and the other to server 1, so that both servers are active. In an environment where FB and CKD are to go onto the DS8000 storage server, four extent pools would provide one FB pool for each server and one CKD pool for each server, to balance the capacity between the two servers. Figure 6-6 is an example of a mixed environment with CKD and FB extent pools. Additional extent pools might also be desirable to segregate ranks with different DDM types. Extent pools are expanded by adding more ranks to the pool. Ranks are organized in two rank groups; rank group 0 is controlled by server 0 and rank group 1 is controlled by server 1. Important: Capacity should be balanced between the two servers for best performance.
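Again purely as an illustration (the pool name ITSO_FB_0 and the IDs P0 and R0 are hypothetical; check the exact syntax for your code level), an extent pool with server affinity can be created and a rank assigned to it roughly like this:
dscli> mkextpool -rankgrp 0 -stgtype fb ITSO_FB_0    (create an FB extent pool, for example P0, with affinity to server 0)
dscli> chrank -extpool P0 R0                         (assign rank R0 to extent pool P0)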
Figure 6-6 Extent pools in a mixed environment: 1 GB FB extents (and CKD extents) assigned to extent pools with affinity to Server0 and Server1
CKD volumes
A System z CKD volume is composed of one or more extents from one CKD extent pool. CKD extents are of the size of 3390 Model 1, which has 1113 cylinders. However, when you define a System z CKD volume, you do not specify the number of 3390 Model 1 extents but the number of cylinders you want for the volume. You can define CKD volumes with up to 65520 cylinders, which is about 55.6 GB.
If the number of cylinders specified is not an integral multiple of 1113 cylinders, then some space in the last allocated extent is wasted. For example, if you define 1114 or 3340 cylinders, 1112 cylinders are wasted. For maximum storage efficiency, you should consider allocating volumes that are exact multiples of 1113 cylinders. In fact, integral multiples of 3339 cylinders should be considered for future compatibility. If you want to use the maximum number of cylinders (65520), you should consider that this is not a multiple of 1113. You could go with 65520 cylinders and waste 147 cylinders for each volume (the difference to the next multiple of 1113), or you might be better off with a volume size of 64554 cylinders, which is a multiple of 1113 (factor of 58), or even better, with 63441 cylinders, which is a multiple of 3339, a model 3 size. See Figure 6-7.
Figure 6-7 Allocation of a CKD logical volume (extents shown as used, 1113 cylinders free, and 1000 cylinders used)
A CKD volume cannot span multiple extent pools, but a volume can have extents from different ranks in the same extent pool or you can stripe a volume across the ranks (see Storage Pool Striping - extent rotation on page 107). Figure 6-7 shows how a logical volume is allocated with a CKD volume as an example. The allocation process for FB volumes is very similar and is shown in Figure 6-8 on page 104.
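To tie the sizing guidance above to a command, the following is an illustrative sketch only (the extent pool P2, volume ID 0100, and name are hypothetical, the owning LCU must already exist, and the parameters should be verified against the DS CLI reference for your code level):
dscli> mkckdvol -extpool P2 -cap 3339 -name ITSO_3390 0100    (request a 3339-cylinder volume, an exact multiple of the 1113-cylinder extent size, so no space in the last extent is wasted)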
Figure 6-8 Allocation of a 3 GB LUN: three 1 GB extents are taken from the ranks in the extent pool (extents shown as used or free before and after the allocation)
System i LUNs
System i LUNs are also composed of fixed block 1 GB extents. There are, however, some special aspects with System i LUNs. LUNs created on a DS8000 are always RAID-protected. LUNs are based on RAID-5 or RAID-10 arrays. However, you might want to deceive i5/OS and tell it that the LUN is not RAID-protected. This causes OS/400 to do its own mirroring. System i LUNs can have the attribute unprotected, in which case the DS8000 will lie to a System i host and tell it that the LUN is not RAID-protected. OS/400 only supports certain fixed volume sizes, for example, model sizes of 8.5 GB, 17.5 GB, and 35.1 GB. These sizes are not multiples of 1 GB, and hence, depending on the model chosen, some space is wasted. System i LUNs expose a 520-byte block to the host. The operating system uses 8 of these bytes, so the usable space is still 512 bytes like other SCSI LUNs. The capacities quoted for the System i LUNs are in terms of the 512-byte block capacity and are expressed in GB (10^9). These capacities should be converted to GB (2^30) when considering effective utilization of extents that are 1 GB (2^30). For more information about this topic, see Chapter 18, System i considerations on page 455.
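As a rough worked example of this conversion, added here for illustration and using the 35.1 GB model size mentioned above:
\[ \frac{35.1 \times 10^{9}\ \text{bytes}}{2^{30}\ \text{bytes per extent}} \approx 32.7\ \text{extents} \]
so such a LUN occupies 33 extents of 1 GB (2^30), and roughly a third of the last extent remains unused.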
Space Efficient volumes can be created when the DS8000 has the IBM FlashCopy SE feature (licensing is required). Note: In the current implementation (Licence Machine Code 5.30xx.xx), Space Efficient volumes are supported as FlashCopy target volumes only. The idea with Space Efficient volumes is to save storage when it is only temporarily needed. This is the case with FlashCopy when you use the nocopy option. This type of FlashCopy is typically used with the goal of taking a backup from the FlashCopy target volumes. Without the use of Space Efficient volumes, target volumes consume the same physical capacity as the source volumes. In practice, however, these target volumes are often nearly empty, because with the nocopy option data is only copied to the target on demand, when a write to the source volume occurs. Only changed data is copied to the target volume (space is also consumed when writing directly to the target, obviously). With Space Efficient volumes, we only use the space required for the updates to the source volume and writes to the target. Since the FlashCopy target volumes are normally kept only until the backup jobs have finished, the changes to the source volumes should be low, and hence the storage needed for the Space Efficient volumes should be low. IBM FlashCopy SE target volumes are also very cost-efficient when several copies of the volumes are required (for example, to protect data against logical errors or viruses, you might want to take Space Efficient FlashCopies several times a day).
Space for a Space Efficient volume is allocated when a write occurs, more precisely, when a destage from the cache occurs. The allocation unit is a track, that is, 64 KB for open systems LUNs or 57 KB for CKD volumes. This has to be considered when planning for the size of the repository. The amount of space that gets physically allocated might be larger than the amount of data that was written. If there are 100 random writes of, for example, 8 KB (800 KB in total), we probably will allocate 6.4 MB (100 x 64 KB). If there are other writes changing data within these 6.4 MB, there will be no new allocations at all. Since space is allocated in tracks and the system needs to maintain tables of where it places the physical track and how to map it to the logical volume, there is some overhead involved with Space Efficient volumes. The smaller the allocation unit, the larger the tables and the overhead. The DS8000 has a fixed allocation unit of a track, which is a good compromise between processing overhead and allocation overhead. Summary: Virtual space is created as part of the extent pool definition. This virtual space is mapped onto the repository (physical space) as needed. Virtual space would equal the total space of the intended FlashCopy source volumes, or the space to contain the size of SE volumes intended to be used for other purposes. No actual storage is allocated until write activity occurs to the SE volumes. Figure 6-9 on page 106 illustrates the concept of Space Efficient volumes.
Figure 6-9 Concept of Space Efficient volumes
The lifetime of data on Space Efficient volumes is expected to be short because they are used as FlashCopy targets only. Physical storage is allocated when data is written to Space Efficient volumes, so a mechanism is needed to free up physical space in the repository when the data is no longer needed.
The FlashCopy commands have options to release the space of Space Efficient volumes when the FlashCopy relationship is established or removed. There are also new CLI commands, initfbvol and initckdvol, that release space.
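As a hedged illustration of this lifecycle (the volume IDs are made up, volume 1100 is assumed to be a Space Efficient target, and the exact options should be verified in the DS CLI reference for your code level), the sequence might look like this:

dscli> mkflash -nocp 1000:1100
dscli> rmflash 1000:1100
dscli> initfbvol 1100

The mkflash command establishes the relationship to the Space Efficient target without a background copy; after the backup has completed, rmflash withdraws the relationship, and initfbvol (or initckdvol for CKD volumes) releases the space that was allocated in the repository. Additional parameters might be required for initfbvol; check the DS CLI Users Guide.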
Figure 6-10 shows an example of how volumes get allocated within the extent pool. When you create striped volumes and non-striped volumes in an Extent Pool, one rank could fill up before the others. A full rank is skipped when you create new striped volumes. There is no reorganization function for the extents in an Extent Pool, and if you add one or more ranks to an existing Extent Pool, the existing extents are not redistributed.

Tip: If you have to add capacity to an Extent Pool because it is nearly full, it is better to add several ranks at once instead of just one rank. This allows new volumes to be striped across the added ranks.
Figure 6-10 Extent allocation in an Extent Pool:
- Where to start with the first volume is determined at power on.
- A striped volume with two extents is created.
- The next striped volume starts at the next rank (five extents in this example).
- A non-striped volume is created; it starts at the next rank.
- Another striped volume is created; it starts at the next rank (extents 13 to 15).
By using striped volumes, you distribute the I/O load of a LUN or CKD volume across more than just one set of eight disk drives. The ability to distribute a workload over many physical drives can greatly enhance performance for a logical volume. In particular, operating systems that do not have a volume manager capable of striping will benefit most from this allocation method.

However, if you have Extent Pools with many ranks, all volumes are striped across the ranks, and you lose just one rank, for example because two disk drives in the same rank fail at the same time, you will lose a lot of your data. Therefore, it might be better to have Extent Pools with only about four to eight ranks. On the other hand, if you already do Physical Partition striping in AIX, for example, double striping will probably not improve performance any further. The same can be expected when the DS8000 LUNs are used by a SAN Volume Controller (SVC) that stripes data across LUNs. If you decide to use Storage Pool Striping, it is probably better to use this allocation method for all volumes in the extent pool to keep the ranks equally filled and utilized.
Tip: If you configure a new DS8000, do not mix striped volumes and non-striped volumes in an Extent Pool. For more information about how to configure Extent Pools and volumes for optimal performance, see Chapter 10, Performance on page 185.
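As an illustration only (the extent pool, volume ID, and name are made up, and the -eam parameter for the Storage Pool Striping allocation method is an assumption that should be verified against the DS CLI reference for your code level), a striped volume might be created like this:

dscli> mkfbvol -extpool P1 -cap 100 -name ITSO_1000 -eam rotateexts 1000

This would create a 100 GB LUN in Extent Pool P1 with its extents rotated across the ranks of the pool; a range of volume IDs can be given to create several such volumes at once. Omitting the extent allocation method would typically place each volume on a single rank, as described above.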
Before you can actually see the change in other tools like ISMF or before you can use it, you have to refresh the VTOC (see Example 6-2 and Example 6-3).
Example 6-2 Refresh VTOC
//INIT     EXEC PGM=ICKDSF,PARM='NOREPLYU'
//IN1      DD UNIT=3390,VOL=SER=RW9630,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  REFORMAT DDNAME(IN1) VERIFY(RW9630) REFVTOC
/*

Example 6-3 ISMF view of the volume size before and after the resize
LINE        VOLUME   FREE       %     ALLOC      FRAG   LARGEST    FREE
OPERATOR    SERIAL   SPACE      FREE  SPACE      INDEX  EXTENT     EXTENTS
---(1)----  -(2)--   ---(3)---  (4)-  ---(5)---  -(6)-  ---(7)---  --(8)--
            RW9630   23241      1     2748259    0      23241      1

**** Expand volume ****
Run ICKDSF: REFORMAT REFVTOC

LINE        VOLUME   FREE       %     ALLOC      FRAG   LARGEST    FREE
OPERATOR    SERIAL   SPACE      FREE  SPACE      INDEX  EXTENT     EXTENTS
---(1)----  -(2)--   ---(3)---  (4)-  ---(5)---  -(6)-  ---(7)---  --(8)--
            RW9630   5566242    67    2748259    0      5566242    1
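The expansion itself is done through the DS8000 configuration interfaces before the VTOC is refreshed. As a hedged sketch with the DS CLI (the volume ID is made up and the exact chckdvol parameters should be verified for your code level), expanding a CKD volume to 65520 cylinders might look like this:

dscli> chckdvol -cap 65520 0a00

For fixed block volumes, the corresponding chfbvol command with a new capacity value serves the same purpose.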
See Chapter 16, Open systems considerations on page 369 for more information on other specific environments.

A logical volume has the attribute of being striped across the ranks or not. If the volume was created as striped across the ranks of the extent pool, then the extents that are used to increase the size of the volume are also striped. If a volume was created without striping, the system tries to allocate the additional extents within the same rank that the volume was originally created on. Because most operating systems have no means to move data from the end of the physical disk to some unused space at the beginning of the disk, it makes no sense to reduce the size of
a volume. The DS8000 configuration interfaces, DS CLI and DS GUI, will not allow you to change a volume to a smaller size.

Attention: Before you can expand a volume, you have to delete any Copy Services relationship involving that volume.
you have a problem with one of the pairs, is done at the LSS level. With the option now to put all or most of the volumes of a certain application in just one LSS, the management of remote copy operations becomes easier; see Figure 6-11.
Of course, you could have put all volumes for one application in one LSS on an ESS, too, but then all volumes of that application would also be in one or a few arrays, and from a performance standpoint, this was not desirable. Now on the DS8000, you can group your volumes in one or a few LSSs but still have the volumes in many arrays or ranks. Fixed block LSSs are created automatically when the first fixed block logical volume on the LSS is created, and deleted automatically when the last fixed block logical volume on the LSS is deleted. CKD LSSs require user parameters to be specified and must be created before the first CKD logical volume can be created on the LSS; they must be deleted manually after the last CKD logical volume on the LSS is deleted.
Address groups
Address groups are created automatically when the first LSS associated with the address group is created, and deleted automatically when the last LSS in the address group is deleted. LSSs are either CKD LSSs or FB LSSs. All devices in an LSS must be either CKD or FB. This restriction goes even further. LSSs are grouped into address groups of 16 LSSs. LSSs are numbered X'ab', where a is the address group and b denotes an LSS within the address group. So, for example, X'10' to X'1F' are LSSs in address group 1. All LSSs within one address group have to be of the same type, CKD or FB. The first LSS defined in an address group fixes the type of that address group. System z users, who still want to use ESCON to attach hosts to the DS8000, should be aware that ESCON supports only the 16 LSSs of address group 0 (LSS X'00' to X'0F'). Therefore, this address group should be reserved for ESCON-attached CKD devices, in this case, and not used as FB LSSs.
Volume ID
The LUN identifications X'gabb' are composed of the address group X'g', the LSS number within the address group X'a', and the position of the LUN within the LSS X'bb'. For example, LUN X'2101' denotes the second (X'01') LUN in LSS X'21' of address group 2.
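For illustration only (the extent pool, capacity, and name are made up; check the parameters against the DS CLI reference), the volume ID chosen when a volume is created determines its LSS and address group:

dscli> mkfbvol -extpool P1 -cap 10 -name ITSO_2101 2101

Volume 2101 would be the second volume (X'01') in LSS X'21', which belongs to address group 2; because the type of an address group is fixed by the first LSS defined in it, this volume could only be created if address group 2 is an FB address group.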
Host attachment
Host bus adapters (HBAs) are identified to the DS8000 in a host attachment construct that specifies the HBAs' World Wide Port Names (WWPNs). A set of host ports can be associated through a port group attribute that allows a set of HBAs to be managed collectively. This port group is referred to as host attachment within the GUI. Each host attachment can be associated with a volume group to define which LUNs that HBA is allowed to access. Multiple host attachments can share the same volume group. The host attachment can also specify a port mask that controls which DS8000 I/O ports the HBA is allowed to log in to. Whichever ports the HBA logs in on, it sees the same volume group that is defined in the host attachment associated with this HBA. The maximum number of host attachments on a DS8000 is 8192.
Volume group
A volume group is a named construct that defines a set of logical volumes.

When used in conjunction with CKD hosts, there is a default volume group that contains all CKD volumes, and any CKD host that logs in to a FICON I/O port has access to the volumes in this volume group. CKD logical volumes are automatically added to this volume group when they are created and automatically removed from this volume group when they are deleted.

When used in conjunction with Open Systems hosts, a host attachment object that identifies the HBA is linked to a specific volume group. You must define the volume group by indicating which fixed block logical volumes are to be placed in the volume group. Logical volumes can be added to or removed from any volume group dynamically.

There are two types of volume groups used with Open Systems hosts, and the type determines how the logical volume number is converted to a host addressable LUN_ID on the Fibre Channel SCSI interface. A map volume group type is used in conjunction with FC SCSI host types that poll for LUNs by walking the address range on the SCSI interface. This type of volume group can map any FB logical volume numbers to 256 LUN_IDs that have zeroes in the last six Bytes and the first two Bytes in the range of X'0000' to X'00FF'. A mask volume group type is used in conjunction with FC SCSI host types that use the Report LUNs command to determine the LUN_IDs that are accessible. This type of volume group can allow any and all FB logical volume numbers to be accessed by the host, where the mask is a bitmap that specifies which LUNs are accessible. For this volume group type, the logical volume number X'abcd' is mapped to LUN_ID X'40ab40cd00000000'.

The volume group type also controls whether 512 Byte block LUNs or 520 Byte block LUNs can be configured in the volume group. When associating a host attachment with a volume group, the host attachment contains attributes that define the logical block size and the Address Discovery Method (LUN Polling or Report LUNs) that are used by the host HBA. These attributes must be consistent with the volume group type of the volume group that is assigned to the host attachment, so that HBAs that share a volume group have a consistent interpretation of the volume group definition and have access to a consistent set of logical volume types. The GUI typically sets these values appropriately for the HBA based on your specification of a host type. You must consider what volume group type to create when setting up a volume group for a particular HBA.

FB logical volumes can be defined in one or more volume groups. This allows a LUN to be shared by host HBAs configured to different volume groups. An FB logical volume is automatically removed from all volume groups when it is deleted. The maximum number of volume groups is 8320 for the DS8000. See Figure 6-13 on page 114.
Figure 6-13 shows the relationships between host attachments and volume groups. Host AIXprod1 has two HBAs, which are grouped together in one host attachment, and both are granted access to volume group DB2-1. Most of the volumes in volume group DB2-1 are also in volume group DB2-2, accessed by server AIXprod2. In our example, there is, however, one volume in each group that is not shared. The server in the lower left part has four HBAs and they are divided into two distinct host attachments. One can access some volumes shared with AIXprod1 and AIXprod2. The other HBAs have access to a volume group called docs.
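As a hedged DS CLI sketch of such a setup (volume group names and IDs, volume IDs, the WWPN, and the host type are made up; verify the exact parameters in the CLI reference), a volume group is created first and then linked to a host attachment:

dscli> mkvolgrp -type scsimask -volume 2100-2103 DB2_1
dscli> mkhostconnect -wwname 10000000C9262186 -hosttype pSeries -volgrp V11 AIXprod1_fcs0
dscli> chvolgrp -action add -volume 2104 V11

Here V11 is assumed to be the ID that the DS8000 assigned to the DB2_1 volume group; the chvolgrp command illustrates that volumes can be added to or removed from the group dynamically.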
The main benefits of the virtualization layers are:
- Increased number of logical volumes:
  - Up to 65280 (CKD)
  - Up to 65280 (FB)
  - 65280 total for CKD + FB
- Any mixture of CKD or FB addresses in 4096-address groups
- Increased logical volume size:
  - CKD: 55.6 GB (65520 cylinders), architected for 219 TB
  - FB: 2 TB, architected for 1 PB
- Flexible logical volume configuration:
  - Multiple RAID types (RAID-5, RAID-10)
  - Storage types (CKD and FB) aggregated into extent pools
  - Volumes allocated from extents of an extent pool
  - Storage Pool Striping
  - Dynamically add and remove volumes
  - Dynamic volume expansion
  - Space Efficient volumes for FlashCopy
Chapter 7. Copy Services
This chapter discusses the Copy Services functions available with the DS8000 series models, which include several Remote Mirror and Copy functions as well as the Point-in-Time Copy function (FlashCopy). These functions make the DS8000 series a key component for disaster recovery solutions, data migration activities, as well as data duplication and backup solutions.

This chapter covers the following topics:
- Copy Services
- FlashCopy and IBM FlashCopy SE
- Remote Mirror and Copy:
  - Metro Mirror
  - Global Copy
  - Global Mirror
  - Metro/Global Mirror
  - z/OS Global Mirror
  - z/OS Metro/Global Mirror
- Interfaces for Copy Services
- Interoperability

The information discussed in this chapter is covered to a greater extent and in more detail in the following IBM Redbooks:
- IBM System Storage DS8000 Series: Copy Services in Open Environments, SG24-6788
- IBM System Storage DS8000 Series: Copy Services with System z servers, SG24-6787
- IBM System Storage DS8000 Series: Introducing IBM FlashCopy SE, REDP-4368
Additionally, for System z users, the following are available:
- z/OS Global Mirror, previously known as Extended Remote Copy (XRC)
- z/OS Metro/Global Mirror, a 3-site solution that combines z/OS Global Mirror and Metro Mirror

Many design characteristics of the DS8000 and its data copy and mirror capabilities and features contribute to the protection of your data, 24 hours a day and seven days a week. We discuss these Copy Services functions in the following sections.
If you access the source or the target volumes during the background copy, standard FlashCopy manages these I/O requests as follows:

Read from the source volume: When a read request goes to the source volume, it is read from the source volume.

Read from the target volume: When a read request goes to the target volume, FlashCopy checks the bitmap and:
- If the point-in-time data was already copied to the target volume, it is read from the target volume.
- If the point-in-time data has not been copied yet, it is read from the source volume.

Write to the source volume: When a write request goes to the source volume, the data is first written to the cache and persistent memory (write cache). Then, when the update is destaged to the source volume, FlashCopy checks the bitmap and:
- If the point-in-time data was already copied to the target, the update is written to the source volume.
- If the point-in-time data has not been copied yet to the target, it is first copied to the target volume, and after that the update is written to the source volume.

Write to the target volume: Whenever data is written to the target volume while the FlashCopy relationship exists, the storage subsystem makes sure that the bitmap is updated. This way, the point-in-time data from the source volume never overwrites updates that were made directly to the target.

The background copy can have a slight impact on your application because the physical copy needs some storage resources, but the impact is minimal because the host I/O has priority over the
background copy. And if you want, you can issue standard FlashCopy with the no background copy option.
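As a hedged illustration (volume IDs are made up), the two variants might be issued with the DS CLI as follows:

dscli> mkflash 1000:1100
dscli> mkflash -nocp 1000:1100

The first command establishes the relationship with a background copy; the second uses the nocopy option, so that tracks are copied to the target only when they are about to be changed on the source or the target is written to.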
adequate feature number license in terms of physical capacity. For details about feature and function requirements, see 11.1, DS8000 licensed functions on page 228.

Note: For a detailed explanation of the features involved and considerations when ordering FlashCopy, we recommend that you refer to the announcement letters:
- IBM System Storage DS8000 Function Authorization for Machine type 2244 IBM FlashCopy SE features
- IBM System Storage DS8000 series (machine type 2107) delivers new functional capabilities

IBM announcement letters can be found at:
http://www.ibm.com/products
Incremental FlashCopy
When using the Incremental FlashCopy option, this is what happens:
1. At first, you issue full FlashCopy with the change recording option. This option is for creating change recording bitmaps in the storage unit. The change recording bitmaps are used for recording the tracks which are changed on the source and target volumes after the last FlashCopy.
2. After creating the change recording bitmaps, Copy Services records the information for the updated tracks to the bitmaps. The FlashCopy relationship persists even if all of the tracks have been copied from the source to the target.
3. The next time you issue Incremental FlashCopy, Copy Services checks the change recording bitmaps and copies only the changed tracks to the target volumes. If some tracks on the target volumes are updated, these tracks are overwritten by the corresponding tracks from the source volume.

You can also issue Incremental FlashCopy from the target volume to the source volumes with the reverse restore option. The reverse restore operation cannot be done unless the background copy in the original direction has finished.
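A hedged DS CLI sketch of this sequence (volume IDs are made up; verify the exact options for your code level):

dscli> mkflash -record -persist 1000:1100
dscli> resyncflash 1000:1100

The first command establishes the initial FlashCopy with change recording; the later resyncflash copies only the tracks that changed since the last synchronization. The reverse restore described above is available through the reverseflash command; take its parameters from the DS CLI Users Guide.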
In order to create this consistent copy, you issue a set of establish FlashCopy commands with a freeze option, which will hold off host I/O to the source volumes. In other words, Consistency Group FlashCopy provides the capability to temporarily queue (at the host I/O level, not the application level) subsequent write operations to the source volumes that are part of the Consistency Group. During the temporary queuing, Establish FlashCopy is completed. The temporary queuing continues until this condition is reset by the Consistency Group Created command or the timeout value expires (the default is two minutes). After all of the Establish FlashCopy requests have completed, a set of Consistency Group Created commands must be issued using the same set of DS network interface servers. The Consistency Group Created commands are directed to each logical subsystem (LSS) involved in the Consistency Group. The Consistency Group Created command allows the write operations to resume to the source volumes. This operation is illustrated in Figure 7-5. For a more detailed discussion of the concept of data consistency and how to manage the Consistency Group operation, you can refer to the IBM Redbooks IBM System Storage DS8000 Series: Copy Services in Open Environments, SG24-6788, and IBM System Storage DS8000 Series: Copy Services with System z servers, SG24-6787.
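A hedged sketch of the command sequence (LSS and volume IDs are made up; the options should be verified against the CLI reference):

dscli> mkflash -freeze 1000:1100 1001:1101
dscli> unfreezeflash 10

The -freeze option holds off write activity to the source volumes' LSS while the relationships are established; unfreezeflash acts as the Consistency Group Created command and allows writes to LSS 10 to resume (it would be issued for every LSS involved in the Consistency Group).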
Important: Consistency Group FlashCopy can create host-based consistent copies; they are not application-based consistent copies. The copies have power-fail or crash level consistency. This means that if you suddenly power off your server without stopping your applications and without destaging the data in the file cache, the data in the file cache can be lost and you might need recovery procedures to restart your applications. To start your system with Consistency Group FlashCopy target volumes, you might need the same operations as the crash recovery. For example, if the Consistency Group source volumes are used with a journaled file system (such as AIX JFS) and the source LUNs are not unmounted before running FlashCopy, it is likely that fsck will have to be run on the target volumes.
Figure 7-6 Establish FlashCopy on existing Metro Mirror or Global Copy primary
Initializing all the bitmaps needed for the scenario described above used to take some time. With DS8000 LIC Release 3, the time to initialize the bitmaps has been greatly improved.
Persistent FlashCopy
Persistent FlashCopy allows the FlashCopy relationship to remain even after the copy operation completes. You must explicitly delete the relationship.
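A minimal hedged sketch (volume IDs are made up):

dscli> mkflash -persist 1000:1100
dscli> rmflash 1000:1100

With -persist, the relationship remains after the background copy has completed, until it is explicitly withdrawn with rmflash.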
Incremental FlashCopy
As incremental FlashCopy implies a full volume copy and a full volume copy is not possible in an IBM FlashCopy SE relationship, incremental FlashCopy is not possible with IBM FlashCopy SE.
Licensing requirements
To use any of these Remote Mirror and Copy optional licensed functions, you must have the corresponding licensed function indicator feature in the DS8000, and you must acquire the corresponding DS8000 function authorization with the adequate feature number license in terms of physical capacity. For details about feature and function requirements, see 11.1, DS8000 licensed functions on page 228.

Also, consider that some of the remote mirror solutions, such as Global Mirror, Metro/Global Mirror, or z/OS Metro/Global Mirror, integrate more than one licensed function. In this case, you need to have all of the required licensed functions.

Note: For a detailed explanation of the features involved and considerations when ordering Copy Services licensed functions, refer to the announcement letters:
- IBM System Storage DS8000 Function Authorization for Machine type 2244 IBM FlashCopy SE features
- IBM System Storage DS8000 series (machine type 2107) delivers new functional capabilities

IBM announcement letters can be found at:
http://www.ibm.com/products
Efficient synchronization of the local and remote sites with support for failover and failback operations, helping to reduce the time that is required to switch back to the local site after a planned or unplanned outage. Figure 7-9 illustrates the basic operation characteristics of Global Mirror.
Figure 7-9 Global Mirror basic operation
Figure 7-10 How Global Mirror works:
1. Create a Consistency Group of volumes at the local site.
2. Send the increment of consistent data to the remote site.
3. FlashCopy at the remote site.
4. Resume Global Copy (copy out-of-sync data only).
5. Repeat all the steps according to the defined time period.
Once the Consistency Group is created, the application writes can continue updating the A volumes. The increment of the consistent data is sent to the B volumes using the existing Global Copy relationships. Once the data reaches the B volumes, it is FlashCopied to the C volumes. The C volumes now contain a consistent copy of the data. Because the B volumes, except at the moment of doing the FlashCopy, usually contain a fuzzy copy of the data, the C volumes are used to hold the last consistent point-in-time copy of the data while the B volumes are being updated by Global Copy. The data at the remote site is current within 3 to 5 seconds, but this recovery point depends on the workload and bandwidth available to the remote site. With its efficient and autonomic implementation, Global Mirror is a solution for disaster recovery implementations where a consistent copy of the data needs to be kept at all times at a remote location that can be separated by a very long distance from the production site.
Both Metro Mirror and Global Mirror are well established replication solutions. Metro/Global Mirror combines Metro Mirror and Global Mirror to incorporate the best features of the two solutions:

Metro Mirror:
- Synchronous operation supports zero data loss.
- The opportunity to locate the intermediate site disk subsystems close to the local site allows use of intermediate site disk subsystems in a high availability configuration.

Note: Metro Mirror can be used for distances of up to 300 km, but when used in a Metro/Global Mirror implementation, a shorter distance might be more appropriate in support of the high availability functionality.

Global Mirror:
- Asynchronous operation supports long distance replication for disaster recovery.
- Global Mirror methodology allows for no impact to applications at the local site.
- This solution provides a recoverable, restartable, consistent image at the remote site with an RPO typically in the 3-5 second range.
Global Mirror consistency group (CG) formation:
a. Write updates to the B volumes are paused (< 3 ms) to create the CG.
b. The CG updates to the B volumes are drained to the C volumes.
c. After all updates are drained, the changed data is FlashCopied from the C volumes to the D volumes.
The local site (site A) to intermediate site (site B) component is identical to Metro Mirror. Application writes are synchronously copied to the intermediate site before write complete is signaled to the application. All writes to the local site volumes in the mirror are treated in exactly the same way. The intermediate site (site B) to remote site (site C) component is identical to Global Mirror, except that: The writes to intermediate site volumes are Metro Mirror secondary writes and not application primary writes. The intermediate site volumes are both GM source and MM target at the same time. The intermediate site disk subsystems are collectively paused by the Global Mirror Master disk subsystem to create the Consistency Group (CG) set of updates. This pause would normally take 3 ms every 3 to 5 seconds. After the CG set is formed, the Metro Mirror writes from local site (site A) volumes to intermediate site (site B) volumes are allowed to continue. Also, the CG updates continue to drain to the remote site (site C) volumes. The intermediate site to remote site drain should take only a few seconds to complete. Once all updates are drained to the remote site, all changes since the last FlashCopy from the C volumes to the D volumes are logically (NOCOPY) FlashCopied to the D volumes. After the logical FlashCopy is complete, the intermediate site to remote site Global Copy data transfer is resumed until the next formation of a Global Mirror CG. The process described
above is repeated every 3 to 5 seconds if the interval for Consistency Group formation is set to zero. Otherwise, it will be repeated at the specified interval plus 3 to 5 seconds. The Global Mirror processes are discussed in greater detail in IBM System Storage DS8000 Series: Copy Services in Open Environments, SG24-6788, and IBM System Storage DS8000 Series: Copy Services with System z servers, SG24-6787.
Metro Mirror
Metro Mirror is a function for synchronous data copy at a distance. The following considerations apply: There is no data loss and it allows for rapid recovery for distances up to 300 km. There will be a slight performance impact for write operations.
Global Copy
Global Copy is a function for non-synchronous data copy at very long distances, which is only limited by the network implementation. The following considerations apply: It can copy your data at nearly an unlimited distance, making it suitable for data migration and daily backup to a remote distant site. The copy is normally fuzzy but can be made consistent through a synchronization procedure. To create a consistent copy for Global Copy, you need a go-to-sync operation; that is, synchronize the secondary volumes to the primary volumes. During the go-to-sync operation, the mode of remote copy changes from a non-synchronous copy to a synchronous copy. Therefore, the go-to-sync operation might cause a performance impact to your application system. If the data is heavily updated and the network bandwidth for remote copy is limited, the time for the go-to-sync operation becomes longer. An alternative method to acquire a consistent copy is to pause the applications until all changed data at the local site has drained to the remote site. When all consistent data is at the remote site, suspend Global Copy,
restart the applications, issue the FlashCopy and then return to the non-synchronous (Global Copy) operation.
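As a hedged sketch of how such pairs are set up with the DS CLI (the WWNN, I/O port IDs, remote storage image ID, and volume IDs are made up; verify the exact parameters for your environment), a remote copy path is defined first, and the pairs are then established as Metro Mirror (-type mmir) or Global Copy (-type gcp):

dscli> mkpprcpath -remotedev IBM.2107-75ABTV2 -remotewwnn 5005076303FFCE63 -srclss 10 -tgtlss 10 I0100:I0200
dscli> mkpprc -remotedev IBM.2107-75ABTV2 -type mmir 1000-1003:1000-1003
dscli> mkpprc -remotedev IBM.2107-75ABTV2 -type gcp 1100-1103:1100-1103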
Global Mirror
Global Mirror is an asynchronous copy technique; you can create a consistent copy in the secondary site with an adaptable Recovery Point Objective (RPO). RPO specifies how much data you can afford to recreate if the system needs to be recovered. The following considerations apply: Global Mirror can copy to nearly an unlimited distance. It is scalable across the storage units. It can realize a low RPO if there is enough link bandwidth; when the link bandwidth capability is exceeded with a heavy workload, the RPO might grow. Global Mirror causes only a slight impact to your application system.
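The DS CLI commands involved in setting up a Global Mirror environment typically include mksession, chsession, and mkgmir. The following is only a rough sketch with made-up LSS, volume, and session IDs; the exact syntax and required parameters must be taken from the DS CLI Users Guide:

dscli> mksession -lss 10 01
dscli> chsession -lss 10 -action add -volume 1000-1003 01
dscli> mkgmir -lss 10 -session 01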
commands are issued over Ethernet via the SSPC to the HMC. When the HMC receives command requests from these interfaces, including those for Copy Services, it communicates with each server in the storage units through the Ethernet network. Therefore, the HMC is a key component for configuring and managing the DS8000 Copy Services functions. The network components for Copy Services are illustrated in Figure 7-15.
Figure 7-15 Network components for Copy Services
Each DS8000 will have an internal HMC in the base frame, and you can have an external HMC for redundancy. For further information about the HMC, see Chapter 9, DS HMC planning and setup on page 161.
- IBM System Storage DS8000 Series: Copy Services in Open Environments, SG24-6788
- IBM System Storage DS8000 Series: Copy Services with System z servers, SG24-6787

For additional information about DS Storage Manager usage, see also Chapter 14, Configuration with DS Storage Manager GUI on page 287.
- IBM System Storage DS8000: Command-Line Interface Users Guide, SC26-7916
- IBM System Storage DS8000 Series: Copy Services in Open Environments, SG24-6788
- IBM System Storage DS8000 Series: Copy Services with System z servers, SG24-6787

For additional information about DS CLI usage, see also Chapter 15, Configuration with Command-Line Interface on page 339.
- IBM TotalStorage Productivity Center for Replication Command-Line Interface Users Guide, SC32-0104
- IBM TotalStorage Productivity Center for Replication on Windows 2003, SG24-7250
7.5 Interoperability
Remote mirror and copy pairs can only be established between disk subsystems of the same (or similar) type and features. For example, a DS8000 can have a remote mirror pair with another DS8000, a DS6000, an ESS 800, or an ESS 750. It cannot have a remote mirror pair with an RVA or an ESS F20. Note that all disk subsystems must have the appropriate features installed. If your DS8000 is mirrored to an ESS disk subsystem, the ESS must have PPRC Version 2 (which supports Fibre Channel links) with the appropriate licensed internal code level (LIC). DS8000 interoperability information can be found in the IBM System Storage Interoperability Center (SSIC) (start at the following web site):
http://www.ibm.com/systems/support/storage/config/ssic
Note: The DS8000 does not support ESCON links for Remote Mirror and Copy operations. If you want to establish a remote mirror relationship between a DS8000 and an ESS 800, you have to use FCP links.
Part 2
Chapter 8. Physical planning and installation
The following people should be briefed and engaged in the planning process for the physical installation:
- Systems and Storage Administrators
- Installation Planning Engineer
- Building Engineer for floor loading and air conditioning
- Location Electrical Engineer
- IBM or Business Partner Installation Engineer
Shipping container                                                 Packaged dimensions
Model 931 pallet or crate                                          Height 207.5 cm (81.7 in.), Width 101.5 cm (40 in.), Depth 137.5 cm (54.2 in.)
Model 932 / Model 9B2 pallet or crate                              Height 207.5 cm (81.7 in.), Width 101.5 cm (40 in.), Depth 137.5 cm (54.2 in.)
Model 92E / Model 9AE expansion unit pallet or crate (if ordered)  Height 207.5 cm (81.7 in.), Width 101.5 cm (40 in.), Depth 137.5 cm (54.2 in.)
External HMC container (if ordered)                                Height 69.0 cm (27.2 in.), Width 80.0 cm (31.5 in.), Depth 120.0 cm (47.3 in.)
System Storage Productivity Center (SSPC)                          Height 43 mm (1.69 in.), Width 449 mm (17.3 in.), Depth 711 mm (28 in.)
75 kg (165 lb)
Attention: A fully configured model in the packaging can weigh over 1406 kg (3100 lbs). Use of fewer than three persons to move it can result in injury.
Model 931                                 1189 kg (2620 lb)
Models 932/9B2                            1248 kg (2750 lb)
Models 92E/9AE (first expansion unit)     1089 kg (2400 lb)
Models 92E/9AE (second expansion unit)     867 kg (1910 lb)
Important: You need to check with the building engineer or another appropriate person to be sure that the floor loading is properly considered. Raised floors can better accommodate cabling layout. The power and interface cables enter the storage unit through the rear side. Figure 8-1 on page 149 shows the location of the cable cutouts. You may use the following measurements when you cut the floor tile: Width: 45.7 cm (18.0 in.) Depth: 16 cm (6.3 in.)
The storage unit location area should also cover the service clearance needed by IBM service representatives when accessing the front and rear of the storage unit. You can use the following minimum service clearances; the dimensions are also shown in Figure 8-2 on page 150: For the front of the unit, allow a minimum of 121.9 cm (48 in.) for the service clearance. For the rear of the unit, allow a minimum of 76.2 cm (30 in.) for the service clearance. For the sides of the unit, allow a minimum of 5.1 cm (2 in.) for the service clearance.
Power connectors
Each DS8000 base and expansion unit has redundant power supply systems. The two line cords to each frame should be supplied by separate AC power distribution systems. Table 8-4 lists the standard connectors and receptacles for the DS8000.
Table 8-4 Connectors and receptacles
Location    In-line connector    Wall receptacle    Manufacturer
                                                    Russell-Stoll
                                                    Russell-Stoll
Japan       460C9W               460R9W
Use a 60 Ampere rating for the low voltage feature and a 25 Ampere rating for the high voltage feature. For more details regarding power connectors and line cords, refer to the publication IBM System Storage DS8000 Introduction and Planning Guide, GC35-0515.
Input voltage
The DS8000 supports a three-phase input voltage source. Table 8-5 shows the power specifications for each feature code.
Table 8-5 DS8000 input voltages and frequencies
Characteristic                      Low voltage (Feature 9090)       High voltage (Feature 9091)
Nominal input voltage (3-phase)     200, 208, 220, or 240 RMS Vac    380, 400, 415, or 480 RMS Vac
Minimum input voltage (3-phase)     180 RMS Vac                      333 RMS Vac
Maximum input voltage (3-phase)     264 RMS Vac                      508 RMS Vac
Steady-state input frequency        50 ± 3 or 60 ± 3.0 Hz            50 ± 3 or 60 ± 3.0 Hz
Air circulation for the DS8000 is provided by the various fans installed throughout the frame. The power complex and most of the lower part of the machine take air from the front and exhaust air to the rear. The upper disk drive section takes air from the front and rear sides and exhausts air to the top of the machine. The recommended operating temperature for the DS8000 is between 20 and 25 °C (68 to 78 °F) at a relative humidity range of 40 to 50 percent.

Important: Make sure that air circulation for the DS8000 base unit and expansion units is maintained free from obstruction to keep the unit operating in the specified temperature range.
For more details regarding power control features, refer to the publication IBM System Storage DS8000 Introduction and Planning Guide, GC35-0515.
ESCON
The DS8000 ESCON adapter supports two ESCON links per card. Each ESCON port is a 64-bit, LED-type interface, which features an enhanced microprocessor, and supports 62.5 micron multimode fiber optic cable terminated with the industry standard MT-RJ connector. ESCON cables can be specified when ordering the ESCON host adapters. Table 8-7 shows the various fiber optic cable features available for the ESCON ports.
Table 8-7 ESCON cable feature Feature Length Connector Characteristic
Standard 62.5 micron Standard 62.5 micron Standard 62.5 micron Plenum-rated 62.5 micron Plenum-rated 62.5 micron
The 31-meter cable is the standard length provided with the DS8000. You can order custom length cables from IBM Global Services. Note: Feature 1432 is a conversion cable for use in the DS8000 when connecting the unit to a System z using existing cables. The 9672 processor and the IBM ESCON Director (9032) use duplex connectors.
Fibre Channel/FICON
The DS8000 Fibre Channel/FICON adapter has four ports per card. Each port supports FCP or FICON, but not simultaneously. Fabric components from IBM, CNT, McDATA, and Brocade are supported by both environments.
FCP is supported on point-to-point, fabric, and arbitrated loop topologies. FICON is supported on point-to-point and fabric topologies. The 31-meter fiber optic cable can be ordered with each Fibre Channel adapter. Using the 9-micron cable, a longwave adapter can extend the point-to-point distance to 10 km. A shortwave adapter using 50 micron cable supports point-to-point distances of up to 500 meters at 1 Gbps and up to 300 meters at 2 Gbps. Additional distance can be achieved with the use of appropriate SAN fabric components. Table 8-8 lists the various fiber optic cable features for the FCP/FICON adapters.
Table 8-8 FCP/FICON cable features
Feature    Length    Connector    Characteristic
           31 m                   50 micron, multimode
           31 m                   50 micron, multimode
           2 m                    50 micron, multimode
           31 m                   9 micron, single mode
           31 m                   9 micron, single mode
           2 m                    9 micron, single mode
Note: The Remote Mirror and Copy functions use FCP as the communication link between DS8000s, DS6000s, and ESS Models 800 and 750. For more details about IBM-supported attachments, refer to the publication IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917. For the most up-to-date details about host types, models, adapters, and operating systems supported by the DS8000 unit, see the Interoperability Matrix at:
http://www.ibm.com/servers/storage/disk/ds8000/interop.html
8.3.4 DSCIMCLI
The dscimcli has to be used to configure the CIM agent running on the HMC. The DS8000 can be managed either by the CIM agent that is bundled with the HMC or with a separately installed CIM agent. The dscimcli utility, which configures the CIM agent, is available from the DS CIM agent Web site as part of the DS CIM agent installation bundle, and also as a separate installation bundle. For details about the configuration of the dscimcli, refer to the IBM DS Open Application Programming Interface Reference, GC35-0516.
Take note of the following guidelines to assist in the preparation for attaching the DS8000 to the client's LAN:
1. Assign a TCP/IP address and host name to the DS HMC in the DS8000.
2. If e-mail notification of service alerts is allowed, enable the support on the mail server for the TCP/IP addresses assigned to the DS8000.
3. Use the information that was entered on the installation worksheets during your planning.

We recommend service connection through the high-speed VPN network utilizing a secure Internet connection. You need to provide the network parameters for your DS HMC through the installation worksheet prior to actual configuration of the console. See Chapter 9, DS HMC planning and setup on page 161.
Your IBM System Support Representative (SSR) will need the configuration worksheet during the configuration of your DS HMC. A worksheet is available in the IBM System Storage DS8000 Introduction and Planning Guide, GC35-0515. See also Chapter 21, Remote support on page 505 for further discussion about remote support connection.
For detailed information, refer to the IBM Redbooks IBM System Storage DS8000 Series: Copy Services in Open Environments, SG24-6788, and IBM System Storage DS8000 Series: Copy Services with System z servers, SG24-6787.
Table 8-9 helps you plan for the capacity of your DS8000 system.
Table 8-9 Disk drive set capacity for open systems and System z environments

Disk size   Physical capacity per          Rank    Effective capacity of one rank in decimal GB (number of extents)
(GB)        disk drive set in decimal GB   type    RAID-10 3+3        RAID-10 4+4        RAID-5 6+P         RAID-5 7+P
73          1168                           FB      206.16 (192)       274.88 (256)       414.46 (386)       483.18 (450)
                                           CKD     204.34 (216)       272.45 (288)       410.57 (434)       479.62 (507)
146         2336                           FB      414.46 (386)       557.27 (519)       836.44 (779)       976.03 (909)
                                           CKD     411.51 (435)       549.63 (581)       825.86 (873)       963.03 (1018)
300         4800                           FB      842.96 (785)       1125.28 (1048)     1698.66 (1582)     1979.98 (1844)
                                           CKD     835.32 (883)       1114.39 (1178)     1675.38 (1771)     1954.45 (2066)
500         8000                           FB      1404.93 (1308)     1875.47 (1747)     2831.10 (2637)     3299.97 (3073)
                                           CKD     1392.20 (1472)     1857.32 (1963)     2792.30 (2952)     3257.42 (3443)
Note:
1. Physical capacities are in decimal gigabytes (GB). One GB is one billion bytes.
2. Keep in mind the lower recommended usage for the 500 GB FATA drives, as detailed in 4.4.6, FATA as opposed to Fibre Channel drives on the DS8000 on page 66.
Table 8-9 shows the effective capacity of one rank in the different possible configurations. A disk drive set contains 16 drives, which form two array sites. Capacity on the DS8000 is added in increments of one disk drive set. The effective capacities in the table are expressed in decimal gigabytes and as the number of extents.
page 59, 4.4.5, Positioning FATA with Fibre Channel disks on page 61, and 4.4.6, FATA as opposed to Fibre Channel drives on the DS8000 on page 66.
Chapter 9. DS HMC planning and setup
Figure 9-1 Rear view of the DS HMC and a pair of Ethernet switches
The DS HMC has two built-in Ethernet ports, one dual-port Ethernet PCI adapter, and one PCI modem for asynchronous Call Home support. The DS HMC's private Ethernet ports shown are configured into port 1 of each Ethernet switch to form the private DS8000 network. The client Ethernet port indicated is the primary port to use to connect to the client network. The empty Ethernet port is normally not used. The corresponding private Ethernet ports of the external DS HMC (FC1110) plug into port 2 of the switches, as shown. To interconnect two DS8000 base frames, FC1190 provides a pair of 31 m Ethernet cables to connect from port 16 of each switch in the second base frame into port 15 of the first frame. If the second DS HMC is installed in the second DS8000, it remains plugged into port 1 of its Ethernet switches. Each LPAR of a Storage Facility Image (SFI) is connected through a redundant Ethernet connection to the internal network.
A processor LPAR is part of a Storage Facility Image. Unlike on System p5 or System i5 servers, there is no possibility for the client to manage a processor LPAR.
The GUI and the CLI are comprehensive, easy-to-use interfaces for a storage administrator to perform DS8000 management tasks. They can be used interchangeably, depending on the particular task.

The DS Open API provides an interface for storage management application programs to the DS8000. The DS Open API communicates with the DS8000 through the IBM System Storage Common Information Model (CIM) agent, a middleware application that provides a CIM-compliant interface. The DS Open API uses the CIM technology to manage proprietary storage units like the DS8000 through storage management applications. The DS Open API allows storage management applications like TotalStorage Productivity Center (TPC) to communicate with the DS8000.

Installable code for the interface programs is available on CDs that are delivered with the DS8000 unit. Subsequent versions are made available with DS8000 Licensed Internal Code updates. All front ends for the DS8000 can be installed on any of the supported workstations.

Tip: We recommend that you have a directory structure in place where all software components to be installed for the DS environment are stored, including the latest levels from the Internet used for installation.

The DS Storage Management server runs in a WebSphere environment installed on the DS HMC. The DS Storage Management server provides the communication interface to the front-end DS Storage Manager, which runs in a Web browser. The DS Storage Management server also communicates with the DS Network Interface Server (DS NW IFS), which is responsible for communication with the two controllers of the DS8000.
A DS8000 shipped with pre-Release 3 code (earlier than Licence Machine Code 5.30xx.xx) can also establish communication with the DS Storage Manager graphical user interface (GUI) from any supported network-connected workstation by entering into a Web browser the IP address of the DS HMC and the port that the DS Storage Management server is listening on, as shown in Example 9-2.
Example 9-2 Connecting to the DS Storage Management server from within a Web browser To connect to the DS Storage Manager from within a Web Browser, just type the address http://<ip-address>:8451/DS8000 or https://<ip-address>:8452/DS8000
Enter the secondary management console IP address:
Enter your username: its4me
Enter your password:
Date/Time: November 9, 2005 9:47:13 AM EET IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 IBM.2107-75ABTV2
dscli> lssi
Date/Time: November 9, 2005 9:47:23 AM EET IBM DSCLI Version: 5.1.0.204
Name ID               Storage Unit     Model WWNN             State  ESSNet
============================================================================
     IBM.2107-75ABTV1 IBM.2107-75ABTV0 9B2   5005076303FFC663 Online Enabled
     IBM.2107-75ABTV2 IBM.2107-75ABTV0 9B2   5005076303FFCE63 Online Enabled
dscli> exit
The access information required to connect to a DS8000 by DSCLI can be predefined in the file /.../DSCLI/profile/dscli.profile by editing the lines for hmc1 and devid. Example 9-4 shows the modifications to the dscli.profile for HMC IP address 9.155.62.102 and for SFI #1 of the DS8000 with the machine serial number 7520280.
Example 9-4 Modifications of the file dscli.profile to DSCLI to a dedicated DS8000
# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options.
hmc1:9.155.62.102
# Default target Storage Image ID
devid: IBM.2107-7520281
To prepare a CLI profile that contains the IP address and user ID information for the DS HMC, the file dscli.profile can be modified as shown in Example 9-4 and saved to the directory /... /DSCLI/ with the format *.profile. The -cfg flag is used to call this profile. Example 9-5 shows how to connect DSCLI to the DS8000 as defined in the file /.../DSCLI/75ABTV1.profile
Example 9-5 DSCLI command to call the profile for DS8000 machine serial number 75ABTV0
C:\Program Files\ibm\dscli>dscli -cfg 75abtv1.profile
Date/Time: October 14, 2007 2:47:26 PM CEST IBM DSCLI Version: 5.3.0.939 DS: IBM.2107-75ABTV1 IBM.2107-75ABTV2
dscli>
To use the script command mode, multiple DS CLI commands can be integrated into a script that is executed by using the dscli command with the -script parameter. To call a script with DS CLI commands, the following syntax can be used in a command prompt window of a Windows workstation:
dscli -script <script_filename> -hmc1 <ip-address> -user <userid> -passwd <password>
In Example 9-6, script file lssi.cli contains just one CLI command, the lssi command.
Example 9-6 CLI script mode
C:\IBM\DSCLI>dscli -script c:\ds8000\lssi.cli -hmc1 10.0.0.1 -user its4me -passwd its4pw
Date/Time: November 9, 2005 9:33:25 AM EET IBM DSCLI Version: 5.1.0.204 IBM.2107-75ABTV1 IBM.2107-75ABTV2
Name ID               Storage Unit     Model WWNN             State  ESSNet
============================================================================
     IBM.2107-75ABTV1 IBM.2107-75ABTV0 9B2   5005076303FFC663 Online Enabled
     IBM.2107-75ABTV2 IBM.2107-75ABTV0 9B2   5005076303FFCE63 Online Enabled
Example 9-7 shows how to run a single-shot command from a workstation prompt.
Example 9-7 CLI single-shot mode
C:\Program Files\ibm\dscli>dscli -cfg 75abtv1.profile lssi
Date/Time: October 14, 2007 3:31:02 PM CEST IBM DSCLI Version: 5.3.0.939
Name             ID               Storage Unit     Model WWNN             State  ESSNet
=========================================================================================
DS8k_TIC06v1_ATS IBM.2107-75ABTV1 IBM.2107-75ABTV0 9A2   5005076303FFC663 Online Enabled
DS8k_TIC06v2_SLE IBM.2107-75ABTV2 IBM.2107-75ABTV0 9A2   5005076303FFCE63 Online Enabled
C:\Program Files\ibm\dscli>
3. Click Browser. The Web browser is started with no address bar and a Web page titled WELCOME TO THE DS8000 MANAGEMENT CONSOLE displays; see Figure 9-4 on page 168.
4. On the Welcome panel, click IBM TOTALSTORAGE DS STORAGE MANAGER. 5. A certificate panel opens. Click Accept. 6. The IBM System Storage DS8000 SignOn panel opens. Proceed by entering a user ID and password. The predefined user ID and password are:
   User ID: admin
   Password: admin
The user will be forced to change the password at first login. If someone has already logged on, check with that person to obtain the new password.
7. A Wand (password manager) panel opens. Select OK.
The typical setup for a DS HMC environment assumes that the following servers and workstations exist (see Figure 9-5):
- DS Hardware Management Console (online configuration mode only): One management console, with all the required software preinstalled, is always located within the first DS8000 unit ordered. This can be used for a real-time (online) configuration. Optionally, a second management console can be ordered and installed in a client-provided rack or in a second DS8000.
- DS Storage Manager workstations with DS Storage Manager code installed (online/offline configuration modes): Client-provided workstations where the DS Storage Manager has been installed. These can be used for both real-time (online) and simulated (offline) configurations. The online configurator does not support a new DS8000 shipped with Licence Machine Code 5.30xx.xx.
- Workstations with Web browser only (online configuration mode only): User-provided workstations with a Web browser to access the DS Storage Manager on the DS HMC. These can be used for a real-time (online) configuration. Starting with Licence Machine Code 5.30xx.xx (shipped with a newly ordered DS8000), remote access to the DS8000 GUI has to go through the System Storage Productivity Center (SSPC).
Other supported HMC configurations include:
- 1:2 - 1 HMC driving two DS8000s
- 2:1 - 1 internal (FC1100) HMC and 1 external (FC1110) HMC attached to one DS8000
- 2:2 - 2 HMCs supporting 2 DS8000s, most likely an internal HMC in each DS8000
- 2:3 - 2 HMCs supporting 3 DS8000s, most likely an HMC in the first two DS8000s
- 2:4 - 2 HMCs supporting 4 DS8000s
Code for DS Storage Manager workstations: The CD that comes with every DS8000 contains DS Storage Manager code that can be installed on a workstation to support simulated (offline) configuration. This includes WebSphere as well as DB2, the Java Virtual Machine, and the DS-specific server code for the DS Storage Manager Server and DS NW IFS. A new version of DS Storage Manager code might become available with DS8000 microcode update packages. The level of code for the DS Storage Manager workstations needs to be planned by IBM and the client's organization.

For initial configuration of the DS8000, it is possible to configure the DS8000 offline on a DS Storage Manager workstation and apply the changes afterwards to the DS8000. To configure a DS8000 in simulation mode, you must import the hardware configuration from an eConfig order file or from an already configured DS8000, or manually enter the configuration details into the Storage Manager. You can then modify the configuration while disconnected from the network. The resultant configuration can then be exported and applied to a new or unconfigured DS8000 Storage Unit.

Plan for installation activities of DS Storage Manager workstations: The installation activities for all workstations that need to communicate with the DS HMC need to be identified as part of the overall project plan. This will include time and responsibility information for the physical setup of the DS Storage Manager workstations (if any additional workstations are needed).

Code for supported Web browser: The Web browser to be used on any administration workstation should be a supported one, as mentioned in the installation guide or in the Information Center for the DS8000. A decision should be made as to which Web browser should be used, and the code should then be made available. The Web browser is the only software that is needed on workstations that will do configuration tasks online using the DS Storage Manager GUI.
To prepare for the activation of the license keys, the Disk storage feature activation (DSFA) Internet page can be used to create activation codes and to download keyfiles, which then need to be applied to the DS8000. The following information is needed for creating license activation codes:

- Serial number of the DS8000: The serial number of a DS8000 can be taken from the front of the base frame (lower right corner). If several machines have been delivered, this is the only way to obtain the serial number of a machine located in a specific place in the computer center.
- Machine signature: The machine signature can only be obtained using the DS Storage Manager or the DS CLI after installation of the DS8000 and DS HMC.
- License Function Authorization document: In most situations, the DSFA application can locate your 239x/2244 function authorization record when you enter the DS8000 (242x/2107) serial number and signature. However, if the function authorization record is not attached to the 242x/2107 record, you must assign it to the 242x/2107 record in the DSFA application. In this situation, you will need the 239x/2244 serial number (which you can find on the License Function Authorization document). If you are activating codes for a new storage unit, the License Function Authorization documents are included in the shipment of the storage unit. If you are activating codes for an existing storage unit, IBM sends these documents to you in an envelope.

For more information about required DS8000 features and function authorizations, as well as activation tasks, see Chapter 11, Features and license keys on page 227.

The Operating Environment license must be for a capacity greater than or equal to the total physical capacity of the system. If it is not, you will not be able to configure any storage for a new box or the new capacity for an upgrade. For each of the other features, you need to order a license for a capacity greater than or equal to the capacity of the storage format with which it will be used. For example, assume you have a 10 TB box with 4 TB of storage for count key data (CKD) and 6 TB for fixed block data (FB). If you only wanted to use Metro Mirror for the CKD storage, you would need to order the Metro Mirror license for 4 TB.

Note: Applying increased feature activation codes is a concurrent action, but a license reduction or deactivation is a disruptive action.

During planning, the capacity ordered for each of the Copy Services functions should be reviewed. After activation of the features, verify that they match the capacity assigned in the DS8000 for Copy Services functions.
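Once the activation codes have been generated on the DSFA site, they can be applied through the DS Storage Manager or the DS CLI. A hedged sketch (the keyfile name and storage image ID are made up; check the exact applykey parameters for your code level):

dscli> applykey -file keys.xml IBM.2107-7520281
dscli> lskey IBM.2107-7520281

The lskey command lists the licensed functions that are currently activated on the storage image, which helps with the capacity review mentioned above.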
Microcode install
The IBM service representative will install the changes that IBM does not make available for you to download. If the machine does not function as warranted and your problem can be resolved through your application of downloadable Licensed Machine Code, you are responsible for downloading and installing these designated Licensed Machine Code changes as IBM specifies. Check whether the new microcode requires new levels of DS Storage Manager, DS CLI, and DS Open API, and plan on upgrading them on the relevant workstations if necessary.

Host prerequisites
When planning for the initial installation or for microcode updates, make sure that all prerequisites for the hosts are identified correctly. Sometimes, a new level is required for the SDD as well. The Interoperability Matrix should be the primary source to identify supported operating systems, HBAs, and host hardware. To prepare for the download of drivers, refer to the HBA Support Matrix referenced in the Interoperability Matrix and make sure that drivers are downloaded from the IBM Internet pages. This ensures that the drivers are used with the settings that correspond to the DS8000, and not with settings that might work with another storage subsystem but would not work, or would not be optimal, with the DS8000.

Important: The Interoperability Matrix always reflects information regarding the latest supported code levels. This does not necessarily mean that former levels of HBA firmware or drivers are no longer supported. If in doubt about any supported levels, contact your IBM representative.

The Interoperability Matrix and the HBA Support Matrix can respectively be found at the IBM System Storage technical support Web site at:
http://www-03.ibm.com/servers/storage/disk/ds8000/pdf/ds8000-matrix.pdf
http://www-03.ibm.com/servers/storage/support/config/hba/index.wss
Alternatively, DS8000 interoperability information can now also be found at the IBM System Storage Interoperability Center (SSIC), accessible from the following Web site:
http://www.ibm.com/systems/support/storage/config/ssic
Maintenance windows
Even though the microcode update of the DS8000 is a nondisruptive action, any prerequisites identified for the hosts (for example, patches, new maintenance levels, or new drivers) could make it necessary to schedule a maintenance window. The host environments can then be upgraded to the needed level in parallel with the microcode update of the DS8000. For more information about microcode upgrades, see Chapter 19, Licensed machine code on page 487.
For additional information, see also Chapter 21, Remote support on page 505. When the VPN connection is used, if there is a firewall in place to shield your network from the open Internet, the firewall must be configured to allow the HMC to connect to the IBM VPN servers. The HMC establishes connection to the following TCP/IP addresses:
207.25.252.196  IBM Boulder VPN Server
129.42.160.16   IBM Rochester VPN Server
You must also enable the following ports and protocols:
Protocol ESP
Protocol UDP Port 500
Protocol UDP Port 4500
The setup of the remote support environment is done by the IBM service representative during initial installation.
Important: The password of the admin user ID must be changed before it can be used. The GUI will force you to change the password when you first log in. The DS CLI will allow you to log in but will not allow you to issue any other commands until you have changed the password. As an example, to change the admin user's password to passw0rd, use the following command: chuser -pw passw0rd admin. Once you have issued that command, you can then issue any other command.

During the planning phase of the project, a worksheet or a script file was established with a list of all people who need access to the DS GUI or DS CLI. The supported roles are:
Administrator has access to all available commands.
Physical operator has access to maintain the physical configuration (storage complex, storage image, array, rank, and so on).
Logical operator has access to maintain the logical configuration (logical volume, host, host ports, and so on).
Copy Services operator has access to all Copy Services functions and the same access as the monitor group.
Monitor group has access to all read-only list and show commands.
No access could be used by the administrator to temporarily deactivate a user ID.

General password settings include the time period in days after which passwords expire and a number that identifies how many failed logins are allowed. Whenever a user is added, a password is entered by the administrator. During the first sign-on, you need to change this password. The user ID is deactivated if an invalid password is entered more times than the limit defined by the administrator in the password settings. Only a user with administrator rights can then reset the user ID with a new initial password. If access is denied for the administrator due to the number of invalid tries, a procedure can be obtained from your IBM representative to reset the administrator's password.

Tip: User names and passwords are both case sensitive. If you create a user name called Anthony, you cannot log on using the user name anthony. DS CLI commands, however, are not case sensitive, so the commands LSUSER or LSuser or lsuser will all work.

The password for each user account is forced to adhere to the following rules:
The length of the password must be between six and 16 characters.
It must begin and end with a letter.
It must have at least five letters.
It must contain at least one number.
It cannot be identical to the user ID.
It cannot be a previous password.
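As an illustration only, the rules above can be encoded in a short check. The following Python sketch is not part of the DS CLI or the DS Storage Manager; it simply restates the listed rules, and the function name and parameters are our own.

# Illustrative restatement of the password rules above (not a DS8000 function).
def password_ok(password, user_id, previous_passwords=()):
    return (6 <= len(password) <= 16                                 # length between 6 and 16
            and password[:1].isalpha() and password[-1:].isalpha()   # begins and ends with a letter
            and sum(c.isalpha() for c in password) >= 5              # at least five letters
            and any(c.isdigit() for c in password)                   # at least one number
            and password != user_id                                  # not identical to the user ID
            and password not in previous_passwords)                  # not a previous password

print(password_ok("passw0rd", "admin"))                                       # True
print(password_ok("passw0rd", "admin", previous_passwords=("passw0rd",)))     # False: reused password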
rmuser
This command removes an existing user ID. In Example 9-9, we remove a user called Enzio.
Example 9-9 Removing a user
dscli> rmuser Enzio
Date/Time: 10 November 2005 21:21:33 IBM DSCLI Version: 5.1.0.204
CMUC00135W rmuser: Are you sure you want to delete user Enzio? [y/n]:y
CMUC00136I rmuser: User Enzio successfully deleted.
chuser
This command changes the password and/or group of an existing user ID. It is also used to unlock a user ID that has been locked by exceeding the allowable login retry count. You could also lock a user ID if you want. In Example 9-10 on page 177, we unlock the user, change the password, and change the group membership for a user called Sharon. A user has to use the chpass command when they use that user ID for the first time.
Example 9-10 Changing a user with chuser
dscli> chuser -unlock -pw passw0rd -group monitor Sharon
Date/Time: 10 November 2005 22:55:43 IBM DSCLI Version: 5.1.0.204
CMUC00134I chuser: User Sharon successfully modified.
lsuser
With this command, a list of all user IDs can be generated. In Example 9-11, we can see three users.
Example 9-11 Using the lsuser command to list users
dscli> lsuser
Date/Time: 10 November 2005 21:14:18 IBM DSCLI Version: 5.1.0.204
Name Group State
==========================
Pierre op_storage active
showuser
The account details of user IDs can be displayed with this command. In Example 9-12, we list the details of Arielle's user ID.
Example 9-12 Using the showuser command to list user information
dscli> showuser Arielle
Date/Time: 10 November 2005 21:25:34 IBM DSCLI Version: 5.1.0.204
Name Arielle
Group op_volume
State active
FailedLogin 0
managepwfile
This command creates or adds to an encrypted password file that will be placed onto the local machine. This file can be referred to in a DS CLI profile. This allows you to run scripts without specifying a DS CLI user password in clear text. If manually starting DS CLI, you can also refer to a password file with the -pwfile parameter. By default, the file is placed in:
c:\Documents and settings\<Windows user name>\DSCLI\security.dat, or in
$HOME/dscli/security.dat (for non-Windows-based operating systems)
In Example 9-13, we manage our password file by adding the user ID called BenColeman. The password is now saved in an encrypted file called security.dat.
Example 9-13 Using the managepwfile command
dscli> managepwfile -action add -name BenColeman -pw passw0rd
Date/Time: 10 November 2005 23:40:56 IBM DSCLI Version: 5.1.0.204
CMUC00206I managepwfile: Record 10.0.0.1/BenColeman successfully added to password file C:\Documents and Settings\AnthonyV\dscli\security.dat.
chpass
This command lets you change two password rules: password expiration (days) and failed logins allowed. In Example 9-14 on page 178, we change the expiration to 365 days and allow 5 failed logon attempts. If you set both values to zero, then passwords never expire and unlimited logon attempts are allowed, which we do not recommend.
Example 9-14 Changing rules using the chpass command
dscli> chpass -expire 365 -fail 5
Date/Time: 10 November 2005 21:44:33 IBM DSCLI Version: 5.1.0.204
CMUC00195I chpass: Security properties successfully set.
showpass
This command lists the properties for passwords (Password Expiration days and Failed Logins Allowed). In Example 9-15, we can see that passwords have been set to expire in 365 days and that 5 login attempts are allowed before a user ID is locked.
Example 9-15 Using the showpass command
dscli> showpass
Date/Time: 10 November 2005 21:44:45 IBM DSCLI Version: 5.1.0.204
Password Expiration 365 days
Failed Logins Allowed 5
The exact syntax for any DS CLI command can be found in the IBM System Storage DS8000: Command-Line Interface User's Guide, SC26-7916. You can also use the DS CLI help command to get further assistance.
The administrator can perform several tasks from the Select Action pull-down:
Add User (DS CLI equivalent is mkuser)
Modify User (DS CLI equivalent is chuser)
Lock or Unlock User: Choice will toggle based on user state (DS CLI equivalent is chuser)
Delete User (DS CLI equivalent is rmuser)
Password Settings (DS CLI equivalent is chpass)
If you click a user name, it will bring up the Modify User panel.

Note: If a user who is not in the Administrator group logs on to the DS GUI and goes to the User Administration panel, the user will only be able to see their own user ID in the list. The only action they will be able to perform is to change their password.

Selecting Add User displays a window in which a user can be added by entering the user ID, the temporary password, and the role (see Figure 9-8). The role will decide what type of activities can be performed by this user. In this window, the user ID can also be temporarily deactivated by selecting No access (only).
In Figure 9-8, we add a user with the user name Frontdesk. This user is being placed into the Monitor group.
Improved remote support
In many environments, the DS8000 and internal HMC will be secured behind a firewall in a user's internal LAN. In this case, it can be very difficult for IBM to provide remote support. An external DS HMC can be configured in such a way that it is able to communicate with both the DS8000 and IBM. Thus, the dual HMC configuration can greatly enhance remote support capabilities.

High availability for configuration operations
In open systems environments, all configuration commands must go through the HMC. This is true regardless of whether you use the DS CLI, the DS Storage Manager, or the DS Open API. An external DS HMC will allow these operations to continue to work despite a failure of the internal DS HMC.

High availability for Advanced Copy Services operations
In open systems environments, all Advanced Copy Services commands must also go through the HMC. This is true regardless of whether you use the DS CLI, the DS Storage Manager, or the DS Open API. An external DS HMC will allow these operations to continue to work despite a failure of the internal DS HMC.

To take advantage of the high availability features of the second HMC (high availability for configuration operations and Advanced Copy Services), you must configure the DS CLI or the DS Storage Manager to use the second HMC. When you issue a configuration or Copy Services command, the DS CLI or DS Storage Manager will send the command to the first HMC. If the first HMC is not available, it will automatically send the command to the second HMC instead. Typically, you do not have to reissue the command. Any changes made by using the first HMC are instantly reflected in the second HMC. There is no caching done within the HMC, so there are no cache coherency issues. By first HMC, we mean the HMC defined as HMC1. It is even possible to define the external HMC as the first HMC, and vice versa, but this is not typical.
Alternatively, you can modify the following lines in the dscli.profile (or other profile) file:
# Management Console/Node IP Address(es)
# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options.
hmc1:mitmuzik.ibm.com
hmc2:mitgasse.ibm.com
After you make these changes, the DS CLI will use the second HMC in the unlikely event that the first HMC fails. This change will allow you to perform both configuration and Copy Services commands with full redundancy.
2. On the next panel, check the Define a second Management console check box, and add the IP addresses of both HMC machines. Then click OK. Figure 9-10 on page 183 shows adding two HMCs, 10.10.10.10 and 10.10.10.11.
3. At this point, the DS Storage Manager is configured to use the second HMC if the first HMC should fail.
Chapter 10. Performance
This chapter discusses the performance characteristics of the DS8000 regarding physical and logical configuration. The considerations discussed in the present chapter will help you when you plan the physical and logical setup. This chapter covers the following topics:
DS8000 hardware: Performance characteristics
Software performance enhancements: Synergy items
Performance and sizing considerations for open systems
Performance and sizing considerations for System z
Figure 10-1 Switched FC-AL disk subsystem
These switches use FC-AL protocol and attach FC-AL drives through a point-to-point connection. The arbitration message of a drive is captured in the switch, processed and propagated back to the drive, without routing it through all the other drives in the loop. Performance is enhanced, because both device adapters (DAs) connect to the switched Fibre Channel disk subsystem back end as displayed in Figure 10-2. Note that each DA port can concurrently send and receive data.
Figure 10-2 High availability and increased bandwidth connect both DAs to two logical loops
These two switched point-to-point connections to each drive, plus connecting both DAs to each switch, account for the following:
There is no arbitration competition and interference between one drive and all the other drives, because there is no hardware in common for all the drives in the FC-AL loop. This leads to an increased bandwidth, which utilizes the full speed of a Fibre Channel for each individual drive.
This architecture doubles the bandwidth over conventional FC-AL implementations due to two simultaneous operations from each DA to allow for two concurrent read operations and two concurrent write operations at the same time.
In addition to the superior performance, we must not forget the improved reliability, availability, and serviceability (RAS) over conventional FC-AL. The failure of a drive is detected and reported by the switch. The switch ports distinguish between intermittent failures and permanent failures. The ports understand intermittent failures, which are recoverable, and collect data for predictive failure statistics. If one of the switches itself fails, a disk enclosure service processor detects the failing switch and reports the failure using the other loop. All drives can still connect through the remaining switch.
This discussion has just outlined the physical structure. A virtualization approach built on top of the high performance architectural design contributes even further to enhanced performance. See Chapter 6, Virtualization concepts on page 95.
The RAID device adapter is built on PowerPC technology with four Fibre Channel ports and high function, high performance ASICs. Note that each DA performs the RAID logic and frees up the processors from this task. The actual throughput and performance of a DA is determined not only by the port speed and the hardware used, but also by the firmware efficiency.
With FC adapters that are configured for FICON, the DS8000 series provides the following configuration capabilities:
Either fabric or point-to-point topologies
A maximum of 64 host adapter ports on the DS8100 Model 931, and a maximum of 128 host adapter ports on DS8300 Model 932 and Model 9B2
A maximum of 509 logins per Fibre Channel port
A maximum of 8,192 logins per storage unit
A maximum of 1,280 logical paths on each Fibre Channel port
Access to all control-unit images over each FICON port
A maximum of 512 logical paths per control unit image
FICON host channels limit the number of devices per channel to 16,384. To fully access 65,280 devices on a storage unit, it is necessary to connect a minimum of four FICON host channels to the storage unit; a short calculation of this minimum follows below. This way, by using a switched configuration, you can expose 64 control-unit images (16,384 devices) to each host channel. The front end with the 2 Gbps or 4 Gbps ports scales up to 128 ports for a DS8300. This results in a theoretical aggregated host I/O bandwidth of 128 times 4 Gbps for the 4 Gbps ports and outperforms an ESS by a factor of sixteen. The DS8100 still provides eight times more bandwidth at the front end than an ESS.
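The four-channel minimum follows directly from the numbers above. The following Python sketch simply restates that arithmetic (64 control-unit images of 256 devices per FICON host channel, 65,280 devices per storage unit); it is an illustration, not a configuration tool.

import math

# Figures quoted above: 64 control-unit images x 256 devices per FICON host channel,
# and 65,280 devices on a fully configured storage unit.
devices_per_channel = 64 * 256        # 16,384 devices addressable per host channel
devices_per_storage_unit = 65280

min_channels = math.ceil(devices_per_storage_unit / devices_per_channel)
print(min_channels)                   # 4 FICON host channels are needed for full device access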
10.1.4 System p POWER5+ is the heart of the DS8000 dual cluster design
The DS8000 series Turbo models incorporate the System p POWER5+ processor technology. System p5 servers are capable of scaling from a 1-way to a 16-way SMP using standard 4U building blocks. The DS8000 series Turbo Model 931 utilizes two-way processor complexes, and the Turbo Models 932 and 9B2 utilize four-way processor complexes, in a dual implementation for all three models. The following sections discuss configuration and performance aspects based on the two-way processor complexes used in the DS8100.
Among the most exciting capabilities that the System p inherited from System z are the dynamic LPAR mode and the micropartitioning capability. This System p-based functionality has the potential to be exploited also in future disk storage server enhancements. For details about LPAR implementation in the DS8000, see Chapter 3, Storage system logical partitions (LPARs) on page 31. Besides the self-healing features and advanced RAS attributes, the RIO-G structures provide a very high I/O bandwidth interconnect with DAs and host adapters (HAs) to provide system-wide balanced aggregated throughput from top to bottom. A simplified view is in Figure 10-5.
Figure 10-5 Standard System p5 2-way SMP processor complexes for DS8100 Model 931
The smallest processor complex within a DS8100 is the POWER5+ two-way SMP processor complex. The dual-processor complex approach allows for concurrent microcode loads, transparent I/O failover and failback support, and redundant, hot-swappable components. See Figure 10-6 on page 191.
Figure 10-6 DS8100-931 with four I/O enclosures
Figure 10-6 provides a less abstract view and outlines some details of the dual 2-way processor complex of a DS8100 Model 931, its gates to host servers through HAs, and its connections to the disk storage back end through the DAs. Each of the two processor complexes is interconnected through the System p-based RIO-G interconnect technology and includes up to four I/O enclosures, which equally communicate to either processor complex. Note that there is some affinity between the disk subsystem and its individual ranks to either the left processor complex, server 0, or to the right processor complex, server 1. This affinity is established at the creation of an extent pool.

Each single I/O enclosure itself contains six Fibre Channel adapters:
Two DAs, which install in pairs
Four HAs, which install as required
Each adapter itself contains four Fibre Channel ports.

Although each HA can communicate with each server, there is some potential to optimize traffic on the RIO-G interconnect structure. RIO-G provides a full duplex communication with 1 GBps in either direction, which means 2 GBps throughput per RIO-G interconnect. There is no such thing as arbitration. Figure 10-6 shows that the two left I/O enclosures might communicate with server 0, each in full duplex. The two right I/O enclosures communicate with server 1, also in full duplex mode. Basically, there is no affinity between HA and server. As we see later, the server, which owns certain volumes through its DA, communicates with its respective HA when connecting to the host.
Figure 10-7 Fibre Channel switched back-end connect to processor complexes: Partial view
All I/O enclosures within the RIO interconnect fabric are equally served from either processor complex. Each I/O enclosure contains two DAs. Each DA with its four ports connects to four switches to reach out to two sets of 16 drives or disk drive modules (DDMs) each. Note that each 20-port switch has two ports to connect to the next switch pair with 16 DDMs when vertically growing within a DS8000. As outlined before, this dual two-logical loop approach allows for multiple concurrent I/O operations to individual DDMs or sets of DDMs and minimizes arbitration through the DDM/switch port mini-loop communication.
Figure 10-8 DS8100 to DS8300 scale performance linearly: View without disk subsystems
Although Figure 10-8 does not display the back-end part, the number of I/O enclosures suggests that the disk subsystem also doubles, as does everything else, when switching from a DS8100 to a DS8300. Doubling the number of processors and I/O enclosures also doubles the performance, or even more. Again, note here that a virtualization layer on top of this physical layout contributes additional performance potential.
10.2.1 End to end I/O priority: Synergy with System p AIX and DB2
End to end I/O priority is a new addition, requested by IBM, to the SCSI T10 standard. This feature allows trusted applications to override the priority given to each I/O by the operating system. This is only applicable for raw volumes (no file system) and with the 64-bit kernel. Currently, AIX supports this feature in conjunction with DB2. The priority is delivered to the storage subsystem in the FCP Transport Header.
The priority of an AIX process can be 0 (no assigned priority) or any integer value from 1 (highest priority) to 15 (lowest priority). All I/O requests associated with a given process inherit its priority value, but with end to end I/O priority, DB2 can change this value for critical data transfers. At the DS8000, the host adapter will give preferential treatment to higher priority I/O, improving performance for specific requests deemed important by the application, such as requests that might be prerequisites for others, for example, DB2 logs.
10.2.3 Long busy wait host tolerance: Synergy with System p AIX
Another new addition to the SCSI T10 standard is SCSI long busy wait, which provides the target system a method by which to specify not only that it is busy but also how long the initiator should wait before retrying an I/O. This information, provided in the Fibre Channel Protocol (FCP) status response, prevents the initiator from retrying too soon only to fail again. This in turn reduces unnecessary requests and potential I/O failures due to exceeding a set threshold for the number of retries. IBM System p AIX supports SCSI long busy wait with MPIO, and it is also supported by the DS8000.
the 8 pack. This should be reduced by the same factor when using a RAID-5 configuration over the 8 DDM pack. Back at the host side, consider an example with 1,000 IOPS from the host, a read-to-write ratio of 3 to 1, and 50% read cache hits. This leads to the following IOPS numbers:
750 read IOPS.
375 read I/Os must be read from disk (based on the 50% read cache hit ratio).
250 writes with RAID-5 result in 1,000 disk operations due to the RAID-5 write penalty (read old data and parity, write new data and parity).
This totals 1,375 disk I/Os. With 15K RPM DDMs, doing 1,000 random IOPS from the server, we actually do 1,375 I/O operations on disk, compared to a maximum of 1,440 operations for 7+P configurations or 1,260 operations for 6+P+S configurations. Hence, 1,000 random I/Os from a server with a standard read-to-write ratio and a standard cache hit ratio saturate the disk drives. We made the assumption that server I/O is purely random. When there are sequential I/Os, track-to-track seek times are much lower and higher I/O rates are possible. We also assumed that reads have a hit ratio of only 50%. With higher hit ratios, higher workloads are possible. This shows the importance of intelligent caching algorithms as used in the DS8000. A sketch of this calculation follows below.

Important: When sizing a storage subsystem, you should not only consider the capacity but also the number of disk drives that is needed to satisfy the performance needs.

For a single disk drive, various disk vendors provide the disk specifications on their Internet product sites. Since the access times for the Fibre Channel disks (not FATA) are the same, but they have different capacities, the I/O density is different. 146 GB 15K RPM disk drives can be used for access densities up to 1 I/O per GB per second. For 73 GB drives, the access density can be 2 I/Os per GB per second, and for 300 GB drives it is 0.5 I/O per GB per second. While this discussion is theoretical in approach, it provides a first estimate. Once the speed of the disk has been decided, the capacity can be calculated based on your storage capacity needs and the effective capacity of the RAID configuration you will use. For this, refer to Table 8-9 on page 158.
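The following Python sketch generalizes the back-of-the-envelope calculation above. It is a first-order estimate only: it assumes purely random I/O, ignores sequential optimizations, and uses the write penalties discussed in this chapter (four back-end operations per random write for RAID-5, two for RAID-10); the function name and parameters are ours, not a DS8000 tool.

def backend_disk_ops(host_iops, read_fraction, read_hit_ratio, write_penalty):
    """First-order estimate of back-end disk operations per second."""
    reads = host_iops * read_fraction
    writes = host_iops * (1 - read_fraction)
    read_misses = reads * (1 - read_hit_ratio)       # only read misses go to disk
    return read_misses + writes * write_penalty      # each random write costs write_penalty ops

# Example from the text: 1,000 host IOPS, 3:1 read-to-write ratio, 50% read hits, RAID-5.
print(backend_disk_ops(1000, 0.75, 0.5, write_penalty=4))   # 1375.0 back-end ops/s
# The same workload against a RAID-10 array (write penalty of 2):
print(backend_disk_ops(1000, 0.75, 0.5, write_penalty=2))   # 875.0 back-end ops/s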
RAID level
The DS8000 offers RAID-5 and RAID-10.
RAID-5
Normally, RAID-5 is used because it provides very good performance for random and sequential workloads while not needing much additional storage for redundancy (one parity drive). The DS8000 can detect sequential workload. When a complete stripe is in cache for destage, the DS8000 switches to a RAID-3-like algorithm. Because a complete stripe has to be destaged, the old data and parity need not be read; instead, the new parity is calculated across the stripe, and data and parity are destaged to disk. This provides very good sequential performance. A random write, as any write, causes a cache hit, but the I/O is not complete until a copy of the write data is put in NVS. When data is destaged to disk, a write in
RAID-5 causes four disk operations, the so-called write penalty. Old data must be read, as well as the old parity information. New parity is calculated in the device adapter, and data and parity are written to disk. Most of this activity is hidden from the server or host, because the I/O is complete when the data has entered cache and NVS. However, in a system with high random write activity, the disk drives might not keep up and can become a bottleneck.
RAID-10
A workload that is dominated by random writes will benefit from RAID-10. Here, data is striped across several disks and at the same time mirrored to another set of disks. A write causes only two disk operations, compared to RAID-5's four operations. On the other hand, however, you need nearly twice as many disk drives for the same capacity when compared to RAID-5. Hence, for twice the number of drives (and probably cost), we can do four times more random writes, so it is worth considering RAID-10 for high-performance random write workloads.
The DS8000 cache is organized in 4 KB pages called cache pages or slots. This unit of allocation (which is smaller than the values used in other storage systems) ensures that small I/Os do not waste cache memory. The decision to copy some amount of data into the DS8000 cache can be triggered from two policies: demand paging and prefetching.

Demand paging means that eight disk blocks (a 4K cache page) are brought in only on a cache miss. Demand paging is always active for all volumes and ensures that I/O patterns with some locality find at least some recently used data in the cache.

Prefetching means that data is copied into the cache speculatively even before it is requested. To prefetch, a prediction of likely future data accesses is needed. Because effective, sophisticated prediction schemes need extensive history of page accesses (which is not feasible in real systems), SARC uses prefetching for sequential workloads. Sequential access patterns naturally arise in video-on-demand, database scans, copy, backup, and recovery. The goal of sequential prefetching is to detect sequential access and effectively preload the cache with data so as to minimize cache misses. Today, prefetching is ubiquitously applied in Web servers and clients, databases, file servers, on-disk caches, and multimedia servers.

For prefetching, the cache management uses tracks. A track is a set of 128 disk blocks (16 cache pages). To detect a sequential access pattern, counters are maintained with every track to record if a track has been accessed together with its predecessor. Sequential prefetching becomes active only when these counters suggest a sequential access pattern. In this manner, the DS8000 monitors application read-I/O patterns and dynamically determines whether it is optimal to stage into cache:
Just the page requested
That page requested plus remaining data on the disk track
An entire disk track (or a set of disk tracks), which has not yet been requested

The decision of when and what to prefetch is made in accordance with the Adaptive Multi-stream Prefetching (AMP) algorithm, which dynamically adapts the amount and timing of prefetches optimally on a per-application basis (rather than a system-wide basis). AMP is described further in the next section. To decide which pages are evicted when the cache is full, sequential and random (non-sequential) data is separated into different lists; see Figure 10-9.
Figure 10-9 Lists of the SARC algorithm for random and sequential data

A page that has been brought into the cache by simple demand paging is added to the Most Recently Used (MRU) head of the RANDOM list. Without further I/O access, it goes down to the Least Recently Used (LRU) bottom. A page that has been brought into the cache by a sequential access or by sequential prefetching is added to the MRU head of the SEQ list and then moves down that list. Additional rules control the migration of pages between the lists so as to not keep the same pages in memory twice.

To follow workload changes, the algorithm trades cache space between the RANDOM and SEQ lists dynamically and adaptively. This makes SARC scan-resistant, so that one-time sequential requests do not pollute the whole cache. SARC maintains a desired size parameter for the sequential list. The desired size is continually adapted in response to the workload. Specifically, if the bottom portion of the SEQ list is found to be more valuable than the bottom portion of the RANDOM list, then the desired size is increased; otherwise, the desired size is decreased. The constant adaptation strives to make optimal use of limited cache space and delivers greater throughput and faster response times for a given cache size.

Additionally, the algorithm dynamically modifies not only the sizes of the two lists, but also the rate at which the sizes are adapted. In a steady state, pages are evicted from the cache at the rate of cache misses. A larger (respectively, a smaller) rate of misses effects a faster (respectively, a slower) rate of adaptation. Other implementation details take into account the relationship of read and write (NVS) cache, efficient destaging, and the cooperation with Copy Services. In this manner, the DS8000 cache management goes far beyond the usual variants of the Least Recently Used/Least Frequently Used (LRU/LFU) approaches.
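The sequential-detection idea described above can be illustrated with a deliberately simplified toy model. The Python class below, its counters, and its threshold value are invented for illustration and do not represent the actual SARC implementation; it only shows the principle of recording whether a track was accessed together with its predecessor and enabling prefetch once a sequential run is suspected.

from collections import defaultdict

class ToySequentialDetector:
    """Toy model of per-track sequential detection (not the DS8000 implementation)."""
    def __init__(self, threshold=2):
        self.counters = defaultdict(int)   # per-track sequentiality counters
        self.accessed = set()              # tracks seen so far
        self.threshold = threshold         # invented value, for illustration only

    def access(self, track):
        self.accessed.add(track)
        if (track - 1) in self.accessed:                     # accessed together with its predecessor
            self.counters[track] = self.counters[track - 1] + 1
        return self.counters[track] >= self.threshold        # True means: start prefetching

d = ToySequentialDetector()
print([d.access(t) for t in (10, 11, 12, 13)])   # [False, False, True, True] - sequential run detected
print(d.access(500))                             # False - an isolated random access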
managing the contents of the SEQ list to maximize the throughput obtained for the sequential workloads. While SARC impacts cases that involve both random and sequential workloads, AMP helps any workload that has a sequential read component, including pure sequential read workloads. AMP dramatically improves performance for common sequential and batch processing workloads. It also provides excellent performance synergy with DB2 by preventing table scans from becoming I/O bound and by improving the performance of index scans and DB2 utilities such as Copy and Recover. Furthermore, AMP reduces the potential for array hot spots that result from extreme sequential workload demands.

For a detailed description of AMP and the theoretical analysis for its optimality, see AMP: Adaptive Multi-stream Prefetching in a Shared Cache by Binny Gill and Luis Angel D. Bathen, USENIX File and Storage Technologies (FAST), February 13-16, 2007, San Jose, CA. For a more detailed description, see Optimal Multistream Sequential Prefetching in a Shared Cache, by Binny Gill and Luis Angel D. Bathen, in the ACM Journal of Transactions on Storage, October 2007.
Use multi-rank Extent Pools.
Stripe your logical volume across several ranks.
Consider placing specific database objects (such as logs) on different ranks.
For an application, use volumes from even and odd numbered Extent Pools (even numbered pools are managed by server 0, odd numbered pools are managed by server 1).
For large applications, consider two dedicated Extent Pools (one managed by server 0, the other managed by server 1).
Consider different Extent Pools for 6+P+S arrays and 7+P arrays. If you use Storage Pool Striping, this will ensure that your ranks are equally filled.

Important: It is very important that you balance your ranks and Extent Pools between the two DS8000 servers. Half of the ranks should be managed by each server (see Figure 10-10).
Figure 10-10 Ranks in a multi-rank Extent Pool configuration balanced across DS8000 servers
Note: Database logging usually consists of sequences of synchronous sequential writes. Log archiving functions (copying an active log to an archived space) also tend to consist of simple sequential read and write sequences. You should consider isolating log files on separate arrays. All disks in the storage disk subsystem should have roughly equivalent utilization. Any disk that is used more than the other disks will become a bottleneck to performance. A practical method is to use Storage Pool Striping or make extensive use of volume level striping across disk drives.
For optimal performance, your data should be spread across as many hardware resources as possible. RAID-5 and RAID-10 already spread the data across the drives of an array, but usually this is not enough. There are two approaches to spreading your data across even more disk drives.
In Performance considerations for disk drives on page 194, we discussed how many random I/Os can be performed for a standard workload on a rank. If a volume resides on just one rank, this rank's I/O capability also applies to the volume. However, if this volume is striped across several ranks, the I/O rate to this volume can be much higher. Figure 10-12 shows the potential benefit of Storage Pool Striping. This is an extreme example that compares 32 LUNs in one rank to 32 LUNs striped across 32 ranks. You might not observe the same improvement in your environment, but this demonstrates the potential of Storage Pool Striping. Of course, the total number of I/Os that can be performed on a given set of ranks does not change with Storage Pool Striping.
Figure 10-12 Potential benefit of Storage Pool Striping: 32 volumes without SPS versus 32 volumes with SPS (Random Read and DBO 70/30/50 workloads, IO/sec in thousands)
On the other hand, if you stripe all your data across all ranks and you lose just one rank, for example because you lose two drives at the same time in a RAID-5 array, all your data is gone.

Tip: Use Storage Pool Striping and Extent Pools with four to eight ranks of the same characteristics (RAID type, disk RPM) to avoid hot spots on the disk drives.

Figure 10-13 shows a good configuration. The ranks are attached to DS8000 server 0 and server 1 half and half, ranks on different device adapters are used in a multi-rank Extent Pool, and there are separate Extent Pools for 6+P+S and 7+P ranks.
There is no reorg function for Storage Pool Striping. If you have to expand an Extent Pool, the extents are not rearranged.

Tip: If you have to expand a nearly full Extent Pool, it is better to add several ranks at once instead of just one rank, to benefit from striping across the newly added ranks. If you add just one rank to a full Extent Pool, new volumes created afterwards cannot be striped.
If you use a logical volume manager (such as LVM on AIX) on your host, you can create a host logical volume from several DS8000 logical volumes (LUNs). You can select LUNs from different DS8000 servers, device adapter pairs, and loops, as shown in Figure 10-14 on page 203. By striping your host logical volume across the LUNs, you will get the best performance for this LVM volume. Figure 10-14 shows an optimal distribution of eight logical volumes within a DS8000. Of course, you could have more extent pools and ranks, but when you want to distribute your data for optimal performance, you should make sure that you spread it across the two servers, across different device adapter pairs, across the loops, and across several ranks. To be able to create very large logical volumes or to use extent pool striping, you must consider having extent pools with more than one rank. If you use multi-rank extent pools and you do not use Storage Pool Striping, you have to be careful where to put your data; otherwise, you can easily end up with an imbalanced system (see Figure 10-15 on the right side).
Combining extent pools made up of one rank and then LVM striping over LUNs created on each extent pool will offer a balanced method to evenly spread data across the DS8000 without using extent pool striping as shown in Figure 10-15 on the left side.
We recommend that you define stripe sizes using your host's logical volume manager in the range of 4 MB to 64 MB. You should choose a stripe size close to 4 MB if you have a large number of applications sharing the arrays, and a larger size when you have very few servers or applications sharing the arrays.
A FlashCopy onto a Space Efficient volume is established with the nocopy option. In normal operation (when we do not run a backup job or other activity on the Space Efficient target volume), only writes go to Space Efficient volumes. Usually, a repository will hold more than just one volume, and writes will come from different volumes. Hence, the workload to a repository will be purely random writes. This stresses the disk drives, given that a random write triggers four disk operations on a RAID-5 array (see RAID-5 on page 195).

The size and placement of repositories for Space Efficient volumes must be carefully planned. There are several options. We recommend using RAID-10 ranks for repositories due to the expected higher performance (keep in mind that there can only be one repository in an extent pool). RAID-10 provides a higher throughput for random writes compared to RAID-5 (see RAID-10 on page 196). You should also use fast 15K RPM drives (10K RPM drives are now obsolete anyway) and small capacity drives.

Storage Pool Striping has good synergy with the repository volume function. With Storage Pool Striping, the repository space is striped across multiple RAID arrays in an Extent Pool, and this helps balance the volume skew that may appear on the sources. It is generally best to use four RAID arrays in the multi-rank extent pool intended to hold the repository, and no more than eight. On a heavily loaded system, the disks in the back end, particularly where the repository resides, might be overloaded. In this case, data will stay in NVS for a longer time than usual. You might consider increasing your cache/NVS when introducing IBM FlashCopy SE. Finally, try to use at least the same number of disk spindles on the repository as the source volumes. Avoid severe fan-in configurations, such as 32 ranks of source disk being mapped
to an 8-rank repository. This type of configuration will likely have performance problems unless the update rate to the source is very modest. Also, although it is possible to share the repository with production volumes on the same extent pool, use caution when doing this, as contention between the two could impact performance. To summarize: we can expect a very high random write workload for the repository. To prevent the repository from becoming overloaded, you can do the following:
Have the repository in an extent pool with several ranks (a repository is always striped).
Use at least four ranks but not more than eight.
Use fast 15K RPM and small capacity disk drives for the repository ranks.
Use RAID-10 instead of RAID-5, as it can sustain a higher random write workload.
Avoid placing repository and standard volumes in the same extent pool.
10.6.2 Determining the number of connections between the host and DS8000
When you have determined your workload requirements in terms of throughput, you have to choose the appropriate number of connections to put between your open systems servers and the DS8000 to sustain this throughput. A 4 Gbps Fibre Channel host port can sustain a data transfer of about 400 MB/s. As a general recommendation, you should have at least two FC connections between your hosts and your DS8000.
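A rough connection count can be derived directly from these figures. The following Python sketch is a simple sizing illustration only; it uses the 400 MB/s planning figure quoted above and the two-connection minimum, and it ignores multipathing and protocol overhead considerations. The function name and parameters are ours.

import math

def fc_connections_needed(required_mb_per_s, port_mb_per_s=400, minimum=2):
    """Rough number of host-to-DS8000 FC connections for a throughput target."""
    return max(minimum, math.ceil(required_mb_per_s / port_mb_per_s))

print(fc_connections_needed(300))    # 2 - the two-connection minimum still applies
print(fc_connections_needed(1500))   # 4 - 1,500 MB/s needs four 4 Gbps host ports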
Increasing the number of paths increases the amount of CPU used because the multipathing software must choose among all available paths each time an I/O is issued. A good compromise is between two and four paths per LUN.
This section discusses some System z specific topics including SMS storage pools. Note the independence of logical subsystems (LSSs) from ranks in the DS8000. Because an LSS is congruent with a System z logical control unit (LCU), we need to understand the implications. It is now possible to have volumes within the very same LCU, which is the very same LSS, but these volumes might reside in different ranks. A horizontal pooling approach assumes that volumes within a logical pool of volumes, such as all DB2 volumes, are evenly spread across all ranks. This is independent of how these volumes are represented in LCUs. The following sections assume horizontal volume pooling across ranks, which might be congruent with LCUs when mapping ranks accordingly to LSSs.
Figure 10-19 Extent pool affinity to processor complex with one extent pool for each rank
All volumes that are made up of extents from a given extent pool therefore also have a server affinity when I/Os are scheduled to these volumes.
This allows you to place certain volumes in specific ranks to avoid potential clustering of many high activity volumes within the same rank. You can create system-managed (SMS) storage groups, which are congruent to these extent pools to ease the management effort of such a configuration. But you can still assign multiple storage groups when you are not concerned about the placement of less active volumes. Figure 10-19 also indicates that there is no affinity nor a certain preference between HA and processor complexes or servers in the DS8000. In this example, either one of the two HAs can address any volume in any of the ranks, which range here from rank number 1 to 12. Note there is an affinity of DAs to the processor complex. A DA pair connects to a pair of switches; see Figure 10-2 on page 187. The first DA of this DA pair connects to the left processor complex or server 0. The second DA of this DA pair connects to the other processor complex or server 1.
Figure 10-20 Extent pool affinity to processor complex with pooled ranks in two extent pools
Again, what is obvious here is the affinity of all volumes residing in extent pool 0 to the left processor complex, server 0; the opposite is true for the volumes residing in extent pool 1, which have an affinity to the right processor complex, server 1.
This has been discussed in Chapter 10.5, Performance considerations for logical configuration on page 199.
Consider grouping the two larger extent pools into a single SMS storage group. SMS will eventually spread the workload evenly across both extent pools. This allows a system-managed approach to place data sets automatically in the right extent pools. With more than one DS8000, you might consider configuring each DS8000 in a uniform fashion. We recommend grouping all volumes from all the large extent pools into one large SMS storage group, SGPRIM. Cover the smaller, high performance extent pools through discrete SMS storage groups for each DS8000. With two of the configurations displayed in Figure 10-21, this ends up with one storage group, SGPRIM, and six smaller storage groups. SGLOG1 contains Extent pool0 in the first DS8100 and the same extent pool in the second DS8100. Similar considerations are true for
SGLOG2. For example, in a dual logging database environment, this allows you to assign SGLOG1 to the first logging volume and SGLOG2 for the second logging volume. For very demanding I/O rates and to satisfy a small set of volumes, you might consider keeping Extent pool 4 and Extent pool 5 in both DS8100s separate, through four distinct storage groups, SGHPC1-4. Figure 10-21 shows, again, that there is no affinity between HA and processor complex or server. Each I/O enclosure connects to either processor complex. But there is an affinity between extent pool and processor complex and, therefore, an affinity between volumes and processor complex. This requires some attention, as outlined previously, when you define your volumes.
Figure 10-22 Traditional z/OS behavior
From a performance standpoint, it did not make sense to send more than one I/O at a time to the storage disk subsystem, because the hardware could process only one I/O at a time. Knowing this, the z/OS systems did not try to issue another I/O to a volume, in z/OS represented by a Unit Control Block (UCB), while an I/O was already active for that volume, as indicated by a UCB busy flag; see Figure 10-22. Not only were the z/OS systems limited to processing only one I/O at a time, but also the storage subsystems accepted only one I/O at a time from different system images to a shared volume, for the same reasons mentioned above. See Figure 10-23 on page 217.
Figure 10-23 z/OS behavior with PAV
The DS8000 has the capability to do more than one I/O to a CKD volume. Using the alias address in addition to the conventional base address, a z/OS host can use several UCBs for the same logical volume instead of one UCB per logical volume. For example, base address 100 might have alias addresses 1FF and 1FE, which allow for three parallel I/O operations to the same volume; see Figure 10-23. This feature that allows parallel I/Os to a volume from one host is called Parallel Access Volume (PAV).
The two concepts that are basic to the PAV functionality are:
Base address: The base device address is the conventional unit address of a logical volume. There is only one base address associated with any volume.
Alias address: An alias device address is mapped to a base address. I/O operations to an alias run against the associated base address storage space. There is no physical space associated with an alias address. You can define more than one alias per base.
Alias addresses have to be defined to the DS8000 and to the I/O definition file (IODF). This association is predefined, and you can add new aliases nondisruptively. Still, the association between base and alias is not fixed; the alias address can be assigned to a different base address by the z/OS Workload Manager. For guidelines about PAV definition and support, see 17.3.2, Parallel Access Volume (PAV) definition on page 446.
z/OS Workload Manager in Goal mode tracks the system workload and checks if the workloads are meeting their goals established by the installation; see Figure 10-25.
Figure 10-25 Dynamic PAVs in a Sysplex
WLM also keeps track of the devices utilized by the different workloads, accumulates this information over time, and broadcasts it to the other systems in the same Sysplex. If WLM determines that any workload is not meeting its goal due to IOS queue (IOSQ) time, WLM will attempt to find an alias device that can be reallocated to help this workload achieve its goal; see Figure 10-25 on page 218. There are two mechanisms to tune the alias assignment:
1. The first mechanism is goal-based. This logic attempts to give additional aliases to a PAV-enabled device that is experiencing I/O supervisor (IOS) queue delays and is impacting a service class period that is missing its goal.
2. The second mechanism moves aliases to high-contention PAV-enabled devices from low-contention PAV devices. High-contention devices are identified by having a significant amount of IOSQ time. This tuning is based on efficiency rather than directly helping a workload to meet its goal.
As mentioned before, the WLM must be in Goal mode to cause PAVs to be shifted from one logical device to another. The movement of an alias from one base to another is serialized within the Sysplex. IOS tracks a token for each PAV-enabled device. This token is updated each time an alias change is made for a device. IOS and WLM exchange the token information. When the WLM instructs IOS to move an alias, WLM also presents the token. When IOS has started a move and updated the token, all affected systems are notified of the change through an interrupt.
Figure 10-26 z/VM support of PAV volumes dedicated to a single guest virtual machine
In this way, PAV provides to the z/VM environments the benefits of a greater I/O performance (throughput) by reducing I/O queuing. With the small programming enhancement (SPE) introduced with z/VM 5.2.0 and APAR VM63952, additional enhancements are available when using PAV with z/VM. For more information, see 17.4, z/VM considerations on page 451.
Figure 10-28 Parallel I/O capability with Multiple Allegiance
The DS8000 accepts multiple I/O requests from different hosts to the same device address, increasing parallelism and reducing channel overhead. In older storage disk subsystems, a device had an implicit allegiance, that is, a relationship created in the control unit between the device and a channel path group when an I/O operation is accepted by the device. The allegiance causes the control unit to guarantee access (no busy status presented) to the device for the remainder of the channel program over the set of paths associated with the allegiance. With Multiple Allegiance, the requests are accepted by the DS8000 and all requests are processed in parallel, unless there is a conflict when writing data to a particular extent of the CKD logical volume; see Figure 10-28. Still, good application software access patterns can improve the global parallelism by avoiding reserves, limiting the extent scope to a minimum, and setting an appropriate file mask, for example, if no write is intended. In systems without Multiple Allegiance, all except the first I/O request to a shared volume were rejected, and the I/Os were queued in the System z channel subsystem, showing up as PEND time in the RMF reports. Multiple Allegiance provides significant benefits for environments running a Sysplex, or for System z systems sharing access to data volumes. Multiple Allegiance and PAV can operate together to handle multiple requests from multiple hosts.
10.7.10 HyperPAV
The DS8000 series offers enhancements to Parallel Access Volumes (PAV) with support for HyperPAV, which is designed to enable applications to achieve equal or better performance than PAV alone, while also using the same or fewer operating system resources.
Multiple Allegiance and PAV allow multiple I/Os to be executed concurrently against the same volume:
- With Multiple Allegiance, the I/Os are coming from different system images.
- With PAV, the I/Os are coming from the same system image:
  - Static PAV: Aliases are always associated with the same base addresses.
  - Dynamic PAV: Aliases are assigned up front, but can be reassigned to any base address as need dictates, by means of the Dynamic Alias Assignment function of the Workload Manager (reactive alias assignment).
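On z/OS, a quick way to see which alias addresses are currently bound to a PAV base device is the DEVSERV QPAVS operator command. A minimal sketch, assuming a base device number of 0801 (the output format varies by z/OS release):

DEVSERV QPAVS,0801,VOLUME
(short form: DS QP,0801,VOLUME)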
HyperPAV
With HyperPAV, an on demand, proactive assignment of aliases is possible (see Figure 10-30 on page 223). Dynamic PAV required the WLM to monitor the workload and goals. It took some time until WLM detected an I/O bottleneck. Then WLM had to coordinate the reassignment of alias addresses within the Sysplex and the DS8000. All of this took time, and if the workload was fluctuating or had a burst character, the job that caused the overload of one volume could have ended before WLM had reacted. In these cases, the IOSQ time was not eliminated completely. With HyperPAV, WLM is no longer involved in managing alias addresses. For each I/O, an alias address can be picked from a pool of alias addresses within the same LCU.
This capability also allows different HyperPAV hosts to use one alias to access different bases, which reduces the number of alias addresses required to support a set of bases in a System z environment with no latency in targeting an alias to a base. This functionality is also designed to enable applications to achieve better performance than possible with the original PAV feature alone while also using the same or fewer operating system resources.
Benefits of HyperPAV
HyperPAV has been designed to:
- Provide an even more efficient Parallel Access Volumes (PAV) function.
- Help clients who implement larger volumes to scale I/O rates without the need for additional PAV alias definitions.
- Exploit the FICON architecture to reduce overhead, improve addressing efficiencies, and provide storage capacity and performance improvements:
  - More dynamic assignment of PAV aliases improves efficiency.
  - The number of PAV aliases needed might be reduced, taking fewer from the 64 K device limitation and leaving more for storage capacity use.
- Enable a more dynamic response to changing workloads.
- Simplify the management of aliases.
- Enable users to stave off migration to larger volume sizes.
2107 machine types: HyperPAV is an optional feature, available with the HyperPAV indicator FC0782 and corresponding DS8000 series function authorization (2244-PAV HyperPAV FC7899). HyperPAV also requires the purchase of one or more PAV licensed features and the FICON/ESCON Attachment licensed feature; the FICON/ESCON Attachment licensed feature applies only to the DS8000 Turbo Models 931, 932, and 9B2.
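Once the licensed function is activated on the DS8000, HyperPAV must also be enabled on the z/OS side. The following is a hedged sketch only; see 17.4 and the z/OS documentation for the exact prerequisites and syntax on your release. The mode is controlled through the IECIOSxx parmlib member or the SETIOS operator command:

HYPERPAV=YES           (in SYS1.PARMLIB member IECIOSxx)
SETIOS HYPERPAV=YES    (dynamically, from the operator console)
D IOS,HYPERPAV         (displays the current HyperPAV mode)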
Priority queuing
I/Os from different z/OS system images can be queued in priority order. The z/OS Workload Manager uses this priority to prioritize I/Os from one system over the others. You can activate I/O priority queuing in the WLM Service Definition settings. WLM has to run in Goal mode.
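On the WLM side, this corresponds to a single option in the service definition. A minimal sketch of the relevant field in the WLM administrative application (the panel wording can differ slightly by z/OS release):

Service Coefficient/Service Definition Options
  I/O priority management . . . . . . YES   (Yes or No)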
When a channel program with a higher priority comes in and is put in front of the queue of channel programs with lower priority, the priority of the low-priority programs is also increased; see Figure 10-31. This prevents high-priority channel programs from dominating lower priority ones and gives each system a fair share.
Chapter 11. Features and license keys
Table 11-1 DS8000 series Turbo Models 93x/9B2 with Enterprise Choice length of warranty
Licensed function (Turbo Models 93x/9B2    Indicator feature    239x function authorization
with Enterprise Choice warranty)           numbers              model and feature numbers
Operating Environment                      0700 and 70xx        239x Model LFA, 70xx
FICON/ESCON Attachment                     0702 and 7090        239x Model LFA, 7090
Database Protection                        0708 and 7080        239x Model LFA, 7080
Point-in-Time Copy                         0720 and 72xx        239x Model LFA, 72xx
Point-in-Time Copy Add on                  0723 and 72xx        239x Model LFA, 72xx
FlashCopy SE                               0730 and 73xx        239x Model LFA, 73xx
FlashCopy SE Add on                        0733 and 73xx        239x Model LFA, 73xx
Metro/Global Mirror                        0742 and 74xx        239x Model LFA, 74xx
Metro Mirror                               0744 and 74xx        239x Model LFA, 74xx
Global Mirror                              0746 and 74xx        239x Model LFA, 74xx
Metro Mirror Add on                        0754 and 75xx        239x Model LFA, 75xx
Global Mirror Add on                       0756 and 75xx        239x Model LFA, 75xx
Remote Mirror for z/OS                     0760 and 76xx        239x Model LFA, 76xx
Parallel Access Volumes                    0780 and 78xx        239x Model LFA, 78xx
HyperPAV                                   0782 and 7899        239x Model LFA, 7899
Table 11-2 DS8000 series Turbo Models 93x/9B2 without Enterprise Choice length of warranty
Licensed function for Turbo Models 93x/9B2   IBM 2107 indicator   IBM 2244 function authorization
without Enterprise Choice warranty           feature number       model and feature numbers
Operating Environment                        0700                 2244-OEL, 70xx
FICON/ESCON Attachment                       0702                 2244-OEL, 7090
Database Protection                          0708                 2244-OEL, 7080
Point-in-Time Copy                           0720                 2244-PTC, 72xx
Point-in-Time Copy Add on                    0723                 2244-PTC, 72xx
FlashCopy SE                                 0730                 2244-PTC, 73xx
FlashCopy SE Add on                          0733                 2244-PTC, 73xx
Metro/Global Mirror                          0742                 2244-RMC, 74xx
Metro Mirror                                 0744                 2244-RMC, 74xx
Global Mirror                                0746                 2244-RMC, 74xx
Metro Mirror Add on                          0754                 2244-RMC, 75xx
Global Mirror Add on                         0756                 2244-RMC, 75xx
Remote Mirror for z/OS                       0760                 2244-RMZ, 76xx
Parallel Access Volumes                      0780                 2244-PAV, 78xx
HyperPAV                                     0782                 2244-PAV, 7899
Table 11-3 DS8000 series Models 92x and 9A2 licensed functions
Licensed function for Models 92x/9A2   IBM 2244 function authorization model and feature numbers
Operating Environment                  2244-OEL, 70xx
Database Protection                    2244-OEL, 7080
Point-in-Time Copy                     2244-PTC, 72xx
Point-in-Time Copy Add on              2244-PTC, 72xx
FlashCopy SE                           2244-PTC, 73xx
FlashCopy SE Add on                    2244-PTC, 73xx
Remote Mirror and Copy                 2244-RMC, 74xx
Metro/Global Mirror
Remote Mirror for z/OS
Parallel Access Volumes
HyperPAV
The HyperPAV licence is a flat-fee add-on licence that requires the Parallel Access Volumes (PAV) licence to be installed.
The Copy Services Add-on licence features are cheaper than the corresponding basic licence features. An Add-on can only be specified when the corresponding basic feature exists. The condition for this is that the capacity licensed by Add-on features must not exceed the capacity licensed by the corresponding basic feature.
The licence for Space Efficient FlashCopy (FlashCopy SE) does not require the ordinary FlashCopy (PTC) licence. As with ordinary FlashCopy, FlashCopy SE is licensed in tiers by the gross amount of TB installed. FlashCopy (PTC) and FlashCopy SE can be complementary licences: a client who wants to add a 20 TB FlashCopy SE licence to a DS8000 that already has a 20 TB FlashCopy licence can use the 20 TB FlashCopy SE Add on licence (2x #7333) for this.
The Remote Mirror and Copy (RMC) licence on the older Models 92x/9A2 was replaced by the Metro Mirror (MM) and Global Mirror (GM) licences for the newer models. Models with the older type of licence can replicate to models with the newer type, and vice versa. Metro Mirror (MM) and Global Mirror (GM) can be complementary features as well.
Note: For a detailed explanation of the features involved and the considerations you must have when ordering DS8000 licensed functions, refer to the announcement letters:
- IBM System Storage DS8000 Series (IBM 2107 and IBM 242x)
- IBM System Storage DS8000 Function Authorizations (IBM 2244 or IBM 239x)
IBM announcement letters can be found at:
http://www.ibm.com/products
You can activate the license keys all at the same time (for example, on initial activation of the storage unit) or activate them individually (for example, additional ordered keys).
Before connecting to the IBM DSFA Web site to obtain your feature activation codes, ensure that you have the following items:
- The IBM License Function Authorization documents. If you are activating codes for a new storage unit, these documents are included in the shipment of the storage unit. If you are
activating codes for an existing storage unit, IBM sends these documents to you in an envelope.
- A diskette or USB memory device for downloading your activation codes into a file if you cannot access the DS Storage Manager from the system that you are using to access the DSFA Web site. Instead of downloading the activation codes in softcopy format, you can also print the activation codes and manually enter them using the DS Storage Manager GUI. However, this is slow and error prone, because the activation keys are 32-character long strings.
For a discussion of the activities in preparation for the activation of the licensed functions, see also 9.4.5, Activation of Advanced Function licenses on page 172.
Use the following procedure to obtain the required information:
1. Start the DS Storage Manager application. Log in using a user ID with administrator access. If this is the first time you are accessing the machine, contact your IBM service representative for the user ID and password. After successful login, the DS8000 Storage Manager Welcome panel opens. Select, in order, Real-time manager → Manage hardware, as shown in Figure 11-1 on page 231.
2. In the My Work navigation panel on the left side, select Storage units. The Storage units panel opens (Figure 11-2).
3. On the Storage units panel, select the storage unit by clicking the box to the left of it, and then select Properties in the Select Action pull-down list. The Storage Unit Properties panel opens (Figure 11-3 on page 232).
4. On the Storage Unit Properties panel, click the General tab. Gather the following information about your storage unit: From the MTMS field, note the machine's serial number. The Machine Type - Model Number - Serial Number (MTMS) is a string that contains the machine type, model number, and serial number. The last seven characters of the string are the machine's serial number. From the Machine signature field, note the machine signature. You can use Table 11-4 to document the information. Later, you enter the information on the IBM DSFA Web site.
Table 11-4 DS8000 machine information table
Property                          Your storage unit's information
2. Click IBM System Storage DS8000 series. This brings you to the Select DS8000 series machine panel (Figure 11-5). Select the appropriate Machine Type.
Note: The examples we discuss in this section of the book illustrate the activation of the licensed functions for 2107-922 and 9A2 models. For this reason, the machine type and function authorizations that you see in the panels correspond to Table 11-3 on page 229. For the DS8000 series Turbo Models 93x and 9B2, the machine types and function authorizations correspond to Table 11-1 on page 228 and Table 11-2 on page 229.
3. Enter the machine information and click Submit. The View machine summary panel opens (Figure 11-6).
The View machine summary panel shows the total purchased licenses and how many of them are currently assigned. The example in Figure 11-6 shows a storage unit where all licenses have already been assigned. When assigning licenses for the first time, the Assigned field shows 0.0 TB.
4. Click Manage activations. The Manage activations panel opens. Figure 11-7 on page 236 shows the Manage activations panel for a 2107 Model 9A2 with two storage images. For each license type and storage image, enter the license scope (fixed block data (FB), count key data (CKD), or All) and a capacity value (in TB) to assign to the storage image. The capacity values are expressed in decimal terabytes with 0.1 TB increments. The sum of the storage image capacity values for a license cannot exceed the total license value.
5. When you have entered the values, click Submit. The View activation codes panel opens, showing the license activation codes for the storage images (Figure 11-8 on page 237). Print the activation codes or click Download to save the activation codes in a file that you can later import in the DS8000. The file contains the activation codes for both storage images.
Note: In most situations, the DSFA application can locate your 2244/239x licensed function authorization record when you enter the DS8000 (2107 or 242x) serial number and signature. However, if the 2244/239x licensed function authorization record is not attached to the 2107/242x record, you must assign it to the 2107/242x record using the Assign function authorization link on the DSFA application. In this case, you need the 2244/239x serial number (which you can find on the License Function Authorization document).
Important: Before you begin this task, you must resolve any current DS8000 problems. Contact IBM support for assistance in resolving these problems.
The easiest way to apply the feature activation codes is to download the activation codes from the IBM Disk Storage Feature Activation (DSFA) Web site to your local computer and import the file into the DS Storage Manager. If you can access the DS Storage Manager from the same computer that you use to access the DSFA Web site, you can copy the activation codes from the DSFA window and paste them into the DS Storage Manager window. The third option is to manually enter the activation codes in the DS Storage Manager from a printed copy of the codes.
1. In the My Work navigation panel on the DS Storage Manager Welcome panel, select Real-time manager → Manage hardware → Storage images. The Storage images panel opens (Figure 11-9).
2. On the Storage images panel, select a storage image whose activation codes you want to apply. Select Apply activation codes in the Select Action pull-down list. The Apply Activation codes panel displays (Figure 11-10 on page 239). If this is the first time that you apply activation codes, the fields in the panel are empty; otherwise, the current license codes and values are displayed in the fields and you can modify or overwrite them, as appropriate.
Figure 11-10 DS8000 Apply Activation codes input panel (92x/9A2 models)
Note: As we have already mentioned, the example we present corresponds to a 2107-9A2 machine; for this reason, the Apply Activation codes panel looks like Figure 11-10. For a 93x/9B2 model, the Apply Activation codes panel looks like Figure 11-11.
Figure 11-11 DS8000 Apply Activation codes input panel: Turbo models
3. If you are importing your activation codes from a file that you downloaded from the DSFA Web site, click Import key file. The Import panel displays (Figure 11-12).
Enter the name of your key file, then click OK to complete the import process. If you did not download your activation codes into a file, manually enter the codes into the appropriate fields on the Apply Activation codes panel. 4. After you have entered the activation codes, either manually or by importing a key file, click Apply. The Capacity and Storage type fields now reflect the license information contained in the activation codes, as in Figure 11-13.
Click OK to complete the activation code apply process.
Note: For the 9A2 and 9B2 models, you need to perform the code activation process for both storage images, one image at a time.
2. Obtain your license activation codes from the IBM DSFA Web site. See 11.2.2, Obtaining activation codes on page 233.
3. Use the applykey command to activate the codes and the lskey command to verify which type of licensed features are activated for your storage unit.
a. Enter an applykey command at the dscli command prompt as follows. The -file parameter specifies the key file. The second parameter specifies the storage image.
dscli> applykey -file c:\2107_7520780.xml IBM.2107-7520781
b. Verify that the keys have been activated for your storage unit by issuing the DS CLI lskey command as shown in Example 11-2.
Example 11-2 Using lskey to list installed licenses
dscli> lskey ibm.2107-7520781
Date/Time: 05 November 2007 10:50:23 CET IBM DSCLI Version: 5.3.0.991 DS: ibm.2107-7520781
Activation Key                Authorization Level (TB)  Scope
============================================================
Operating environment (OEL)   105                       All
Parallel access volumes (PAV) 100                       CKD
Point in time copy (PTC)      105                       All
IBM FlashCopy SE              105                       All
Remote mirror for z/OS (RMZ)  60                        CKD
Remote mirror and copy (RMC)  55                        FB
Metro/Global mirror (MGM)     55                        FB
Note: For 9A2 and 9B2 models, you need to perform the code activation process for both storage images. For example, using the serial number from Example 11-2, you use IBM.2107-7520781 and IBM.2107-7520782. For more details about the DS CLI, refer to IBM System Storage DS8000: Command-Line Interface Users Guide, SC26-7916.
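For a 9A2 or 9B2, the sequence looks roughly like the following sketch, which simply reuses the key file name and serial numbers from the examples above (your file name and serial numbers will differ); the downloaded key file contains the activation codes for both storage images:

dscli> applykey -file c:\2107_7520780.xml IBM.2107-7520781
dscli> applykey -file c:\2107_7520780.xml IBM.2107-7520782
dscli> lskey IBM.2107-7520782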
1. This function is only used by open systems hosts: select FB.
2. This function is only used by System z hosts: select CKD.
3. This function is used by both open systems and System z hosts: select All.
4. This function is currently only needed by open systems hosts, but we might use it for System z at some point in the future: select FB and change the scope to All if and when the System z requirement occurs.
5. This function is currently only needed by System z hosts, but we might use it for open systems hosts at some point in the future: select CKD and change the scope to All if and when the open systems requirement occurs.
6. This function has already been set to All: leave the scope set to All. Changing the scope to CKD or FB at this point requires a disruptive outage.
Any scenario that changes from FB or CKD to All does not require an outage. If you choose to change from All to either CKD or FB, then you take a disruptive outage. If you are absolutely certain that your machine will only ever be used for one storage type (for example, only CKD or only FB), then you can also quite safely just use the All scope.
Example 11-3 Trying to use a feature for which we are not licensed
dscli> lskey IBM.2107-7520391
Date/Time: 05 November 2007 14:45:28 CET IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7520391
Activation Key                Authorization Level (TB)  Scope
============================================================
Operating environment (OEL)   5                         All
Remote mirror and copy (RMC)  5                         All
Point in time copy (PTC)      5                         FB
The FlashCopy scope is currently set to FB
dscli> lsckdvol
Date/Time: 05 November 2007 14:51:53 CET IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7520391
Name ID   accstate datastate configstate deviceMTM voltype  orgbvols extpool cap (cyl)
=========================================================================================
     0000 Online   Normal    Normal      3390-3    CKD Base          P2      3339
     0001 Online   Normal    Normal      3390-3    CKD Base          P2      3339
dscli> mkflash 0000:0001
We are not able to create CKD FlashCopies
Date/Time: 05 November 2007 14:53:49 CET IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7520391
CMUN03035E mkflash: 0000:0001: Copy Services operation failure: feature not installed
Date/Time: 05 November 2007 15:52:46 CET IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7520391
CMUC00137I mkflash: FlashCopy pair 0000:0001 successfully created.
In this scenario, we have made a downward license feature key change. We must schedule an outage of the storage image. We should in fact only make the downward license key change immediately before taking this outage.
Restriction: Making a downward license change and then not immediately performing a reboot of the storage image is not supported. Do not allow your machine to be in a position where the applied key is different than the reported key.
At this point, this is still a valid configuration, because the configured ranks on the machine total less than 5 TB of storage. In Example 11-7, we then try to create a new rank that brings the total rank capacity above 5 TB. This command fails.
Example 11-7 Creating a rank when we are exceeding a license key
dscli> mkrank -array A1 -stgtype CKD
Date/Time: 05 November 2007 19:43:10 CET IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7520391
CMUN02403E mkrank: Unable to create rank: licensed storage amount has been exceeded
To configure the additional ranks, we must first increase the license key capacity of every installed license. In this example, that is the FlashCopy license.
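In practice, that means obtaining a larger activation code from the DSFA Web site and applying it before retrying the rank creation. A minimal sketch, reusing the commands from the examples in this section; the key file name here is hypothetical:

dscli> applykey -file c:\2107_7520391_10TB.xml IBM.2107-7520391
dscli> mkrank -array A1 -stgtype CKD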
Example 11-8 Displaying array site and rank usage
dscli> lsrank
Date/Time: 05 November 2007 20:31:43 CET IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-75ABTV1
ID Group State  datastate Array RAIDtype extpoolID stgtype
==========================================================
R0 0     Normal Normal    A0    5        P0        ckd
R4 0     Normal Normal    A6    5        P4        fb
dscli> lsarray
Date/Time: 05 November 2007 21:02:49 CET IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-75ABTV1
Array State      Data   RAIDtype  arsite Rank DA Pair DDMcap (10^9B)
====================================================================
A0    Assigned   Normal 5 (6+P+S) S1     R0   0       300.0
A1    Unassigned Normal 5 (6+P+S) S2          0       300.0
A2    Unassigned Normal 5 (6+P+S) S3          0       300.0
A3    Unassigned Normal 5 (6+P+S) S4          0       300.0
A4    Unassigned Normal 5 (7+P)   S5          0       146.0
A5    Unassigned Normal 5 (7+P)   S6          0       146.0
A6    Assigned   Normal 5 (7+P)   S7     R4   0       146.0
A7    Assigned   Normal 5 (7+P)   S8     R5   0       146.0
So for CKD scope licenses, we currently use 2,400 GB. For FB scope licenses, we currently use 1,168 GB. For licenses with a scope of All, we currently use 3,568 GB. Using the limits shown in Example 11-6 on page 246, we are within scope for all licenses.
If we combine Example 11-6 on page 246, Example 11-7 on page 246, and Example 11-8, we can also see why the mkrank command in Example 11-7 on page 246 failed. In Example 11-7 on page 246, we tried to create a rank using array A1. Now, array A1 uses 300 GB DDMs. This means that for CKD scope and All scope licenses, we would use 300 x 8 = 2,400 GB more license key. Now, from Example 11-6 on page 246, we had only 5 TB of FlashCopy license with a scope of All. This means that we cannot have a total configured capacity that exceeds 5,000 GB. Because we already use 3,568 GB, the attempt to use 2,400 GB more will fail, because 3,568 plus 2,400 equals 5,968 GB, which is clearly more than 5,000 GB. If we increase the size of the FlashCopy license to 10 TB, then we can have 10,000 GB of total configured capacity, so the rank creation will then succeed.
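To recap the arithmetic behind the failed mkrank:

CKD scope usage:    8 x 300 GB (rank R0 on array A0)   = 2,400 GB
FB scope usage:     8 x 146 GB (rank R4 on array A6)   = 1,168 GB
All scope usage:    2,400 GB + 1,168 GB                = 3,568 GB
Proposed rank (A1): 8 x 300 GB                         = 2,400 GB
New All scope:      3,568 GB + 2,400 GB                = 5,968 GB > 5,000 GB (5 TB FlashCopy license)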
Part 3. Storage Configuration
In this part, we discuss the configuration tasks required on your DS8000. We cover the following topics:
- System Storage Productivity Center (SSPC)
- Configuration with DS Storage Manager GUI
- Configuration with Command-Line Interface
Chapter 12. Configuration flow
This chapter gives a brief overview of the tasks required to configure the storage in a DS8000 system.
8. Create volume groups. Create volume groups where FB volumes will be assigned and select the host attachments for the volume groups.
9. Create open systems volumes. Create striped open systems FB volumes and assign them to one or more volume groups.
10. Create System z logical control units (LCUs). Define their type and other attributes, such as subsystem identifiers (SSIDs).
11. Create striped System z volumes. Create System z CKD base volumes and Parallel Access Volume (PAV) aliases for them.
The actual configuration can be done using either the DS Storage Manager GUI or the DS Command-Line Interface, or a mixture of both; a DS CLI sketch of steps 8 to 11 follows this list. A novice user might prefer to use the GUI, while a more experienced user might use the CLI, particularly for some of the more repetitive tasks, such as creating large numbers of volumes.
For a more detailed discussion of how to perform the specific tasks, refer to:
- Chapter 11, Features and license keys on page 227
- Chapter 14, Configuration with DS Storage Manager GUI on page 287
- Chapter 15, Configuration with Command-Line Interface on page 339
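As an illustration only, a minimal DS CLI sketch of steps 8 to 11 might look like the following. The extent pool IDs, volume IDs, capacities, names, and LCU values are hypothetical, and the exact flags should be checked against Chapter 15 and the DS CLI User's Guide:

dscli> mkvolgrp -type scsimask ITSO_VG1
dscli> mkfbvol -extpool P1 -cap 12 -name itso_fb 1000-1003
dscli> mklcu -qty 1 -id 12 -ss 0012
dscli> mkckdvol -extpool P2 -cap 3339 -name itso_ckd 1200-1203

Volumes are added to the volume group either at creation time or afterwards with chvolgrp, and PAV aliases for the CKD base volumes are created with the mkaliasvol command.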
Chapter 13. System Storage Productivity Center (SSPC)
SSPC Hardware
The SSPC (IBM model 2805) server contains the following hardware components:
- x86 server, rack installed, 1U
- One Quad-Core Intel Xeon E5310 (1.60 GHz, 8 MB L2)
- 4 GB memory, PC2-5300 ECC DDR2 667 MHz
- 2 x 146 GB 15K disk drive modules (RAID 1)
- One CD-RW/DVD combo optical drive
- Dual integrated 10/100/1000 Mbps network interface
Optional components are:
- Display monitor and track pointer unit
- Feature code 1800 hardware upgrade (required to run TPC-R on SSPC):
  - One additional Quad-Core processor
  - 4 additional GB of memory
SSPC Software
The IBM System Storage Productivity Center (SSPC) includes the following pre-installed (separately purchased) software, running under a licensed Microsoft Windows 2003 Enterprise Edition R2 (included):
- IBM TotalStorage Productivity Center
- IBM System Storage SAN Volume Controller CIM Agent and GUI
Optionally, the following components can be installed on the SSPC:
- IBM TotalStorage Productivity Center for Replication (TPC-R)
- DS CIM Agent Command Line Interface (DSCIMCLI)
- Antivirus software
Customers have the option to purchase and install the individual software components to create their own SSPC server.
Provides an advanced topology viewer with a linked graphical and detailed view of the overall SAN, including device relationships and visual notifications. Provides a status dashboard.
in-depth performance monitoring and analysis of SAN fabric performance. This provides you with a central storage management application to monitor, plan, configure, report, and perform problem determination on a heterogeneous storage infrastructure. TPC-SE offers the following capabilities:
- Device configuration and management of SAN-attached devices from a single console. It also allows users to gather and analyze historical and real-time performance metrics.
- Management of file systems and databases, designed to allow enterprise-wide reports, monitoring and alerts, policy-based action, and file system capacity automation in heterogeneous environments.
- Management, monitoring, and control of the SAN fabric, designed to help automate device discovery, topology rendering, error detection, fault isolation, SAN error prediction, zone control, real-time monitoring and alerts, and event management for heterogeneous enterprise SAN environments. In addition, it allows you to collect performance statistics from IBM TotalStorage, Brocade, Cisco, and McDATA fabric switches and directors that implement the SNIA SMI-S specification.
Figure 13-1 Components and logical flow of communication (lines) between SSPC and DS8000
You can also consult the IBM TotalStorage Productivity Center V3.3 Hints and Tips at:
http://www-1.ibm.com/support/docview.wss?rs=2198&context=SS8JB5&dc=DA400&uid=swg27008254&loc=en_US&cs=UTF-8&lang=en&rss=ct2198tivoli
Install DSCIMCLI
To enable TPC to access DS8000, the CIM agent must be configured first. The utility to configure the CIM agent is the DSCIMCLI. This command line interface utility for the CIM agent must be downloaded and installed on the SSPC or another workstation. There is no installer for this utility. It must be set up manually. To launch DSCIMCLI after the zip file has been unpacked, open a command prompt in Windows and enter the full path to the file \...\pegasus\bin\dscimcli.exe or execute the command dscimcli in the directory where the file dscimcli.exe is located.
Example 13-1 Launching DSCIMCLI
C:\Program Files\IBM\DSCIMCLI\pegasus\bin>dscimcli <argument string>
For information about download, installation and usage of the CIM agent management command line interface see:
http://www.ibm.com/servers/storage/support/software/cimdsoapi/installing.html.
The steps to configure DS8000 by DSCIMCLI are described in Configuring DS8000 for TPC-BE access.
In addition, to enable the CIM agent on the HMC, a firewall change for the HMC outgoing traffic is required.
Create a new CIM agent user
Each SSPC should use a dedicated CIM agent user. If more than one SSPC accesses one DS8000, it is recommended to create a new user at the CIM level. The CIM agent user will later be used by the SSPC to access the DS8000. To create a new CIM agent user, see Example 13-3.
Example 13-3 DSCIMCLI commands to create a new user and verify the list of all users
C:\>dscimcli mkuser CIMuser1 -password XXXXXXX -s <HMCIPaddress>:6989
User created.
C:\>dscimcli lsuser -l -s <HMCIPaddress>:6989
Username
=========
CIMuser1
superuser
Access DS8000 by CIM agent
To enable the CIM agent to access the DS8000, a new device representing the DS8000 needs to be created in the CIM agent. The CIM agent will access the DS8000 as described in Define a new DS8000 User on page 263.
Example 13-4 DSCIMCLI commands to enable CIM agent on DS8000 HMC and verify its success
C:\>dscimcli mkdev <HMCIPaddr.> -type ds -user <DSGUI user> -password <DS GUI password> -s <HMCIPaddr.>:6989
Device successfully added.
C:\>dscimcli lsdev -l -s 9.155.62.101:6989
Type  IP              IP2             Username  Storage Image    Status     Code Level   Min Codelevel
===== =============== =============== ========= ================ ========== ============ =============
DS    9.155.62.101    -               SSPC_1    IBM.2107-75P6151 successful 5.2.420.690  5.1.0.309
Select OK multiple times to exit user management.
Tip: To simplify user administration, name the Windows user group identically to the user group's role in TPC. Example: create the Windows user group Disk Administrator and assign this group to the TPC role Disk Administrator.
Table 13-1 TPC administration levels depending on the TPC user role defined
TPC Role
- Has full access to all TPC functions
- Has full access to all operations in the TPC GUI
- Has full access to TPC GUI disk functions, including tape devices. Can launch the DS8000 GUI by using stored passwords in the TPC element manager. Can add / delete volumes by TPC
Disk Operator
- Has access to reports of disk functions and tape devices
- Has to enter username/password to launch the DS8000 GUI
- Cannot start CIMOM discoveries or probes
- Cannot take actions in TPC, for example, delete / add volumes
Figure 13-3 Assigning the Windows user group Disk Administrator to the TPC Role Disk Administrator
Tip: Set the SSPC IP address in the TPC GUI entry screen. At the workstation where you launched the browser to the TPC GUI, go to:
C:\Documents and Settings\<UserID>\Application Data\IBM\Java\Deployment\javaws\cache\http\<SSPC_IP_address>\P9550\DMITSRM\DMapp
Open the file 'AMtpcgui.jnlp' and change the setting
<argument>SSPC_Name:9549</argument>
to
<argument><SSPC ipaddress>:9549</argument>
For DS8000 on R1 and R2, Start Internet Explorer and launch an Element Manager by opening the URL: https://<element_manager_ip>:8452/DS8000/Login. A security alert dialog is displayed. Click View Certificate. A window that contains information about the certificate appears. Click Install Certificate. Follow the prompts in the wizard and continue to the next step after the certificate is installed. Close and reopen TotalStorage Productivity Center.
Once DS8000 CIMOMs have been discovered, CIMOM login authentication to these subsystems is required. How long the CIMOM discovery takes depends on the number of CIMOMs and the number of subsystems added to the discovery. The CIMOM discovery can be run on a schedule; how often you run it depends on how dynamic your environment is. It must be run to detect a new subsystem. The CIMOM discovery also performs basic health checks of the CIMOM and subsystem.
To run the CIMOM discovery from TPC, go to Administrative Services → Data Sources → CIMOM Agents and select Add CIMOM. Enter the required information and modify the Port from 5989 to 6989. Then select Save to add the CIMOM to the list and test the availability of the connection. In a dual DS8000 HMC setup, the same procedure should be done for the second HMC as well.
Once the CIMOM has been added to the list of devices, the initial CIMOM discovery can be executed:
- Go to Administrative Services → Discovery.
- Deselect the field Scan Local Subnet in the folder Options.
- Select When to Run → Run Now.
- Save this setting to start the CIMOM discovery.
The initial CIMOM discovery will be listed in the navigation tree. Selecting this entry allows you to verify the progress of the discovery and the details about actions done while probing the systems. Once the discovery has completed, the entry in the navigation tree will change from blue to green or red, depending on the success (or not) of the discovery.
After the initial setup action, future discoveries should be scheduled. As displayed in Figure 13-7, this can be set up by the following actions:
- Specify the start time and frequency in When to Run.
- Select Run Repeatable.
- Save the configuration.
The CIMOM discoveries will now run in the time increments configured.
Probe DS8000
After TPC has been made aware of a DS8000's CIMOM, the storage subsystem needs to be probed to collect detailed information. Probes use agents to collect statistics, which include hard disks, LSS/LCU, extent pools, and volumes. Probe jobs can also discover information about new or removed disks. The results of probe jobs are stored in the repository and are used in TPC Standard Edition to supply the data necessary for generating a number of reports, including Asset, Capacity, and Storage Subsystem reports.
To configure a probe from TPC, go to IBM TotalStorage Productivity Center → Monitoring, right-click Probes, and select Create Probe. In the next window, specify under What to Run which systems will be probed. To add a system to a probe, double-click the subsystem to have it added to the Current Selection list. Then select when to probe the system, assign a name to the probe, and save the session.
Tip: Configure individual probes for every DS8000, but run these probes at different times from each other.
The Removed Resource Retention setting can be set to a low value, such as 0 or 1 day. This removes the missing entities from the display the next time the Removed Resource Retention process runs. As long as the replaced DDM is not removed from the history database, it will be displayed in the topology view as Missing.
The value in the Audit Logging Configuration field also applies to the other services for which tracing is enabled. If the Enable Trace checkbox is unchecked, tracing will not be performed. The Audittrace.log files documenting the user actions done by the GUI will be written to the directory <TPC_HOME>/data/log/. When the user-defined maximum number of files has been reached, tracing will roll over and start writing to the first file.
Accept the Terms and Conditions. To upgrade TPC-BE, select the Typical Installation and make sure that the fields Agents and Register with the agent manager are not selected. If TPC-BE should be upgraded to additional licences during the upgrade, select Custom installation and select the functions that should be upgraded. Once the selection has been made, as displayed in Figure 13-13, click Next.
Click Next at the user and password dialog window, without making changes. Click Install in the next window to start the TPC firmware upgrade.
When Disk 1 installation has been completed, TPC will come up with a query to locate Disk 2, as shown in Figure 13-14. Select Browse and navigate to the location where Disk 2 is stored.
Figure 13-14 Navigate the TPC installer to the location where Disk 2 is stored
Once Disk 2 installation is completed, the TPC-BE firmware upgrade is finished. To verify the new firmware level of TPC, check the file Version.txt in the directory where TPC is installed.
access the GUI front end. Any DS8000 LIC Release level GUI can be accessed by working with its HMCs. The Element Manager can be launched from inside TPC by clicking the Element Manager button in the upper left corner of the TPC GUI. It can also be launched from the navigation tree entry Element Manager. To launch an Element, select the Element in the view or double-click the element in the list, as displayed in Figure 13-15.
Figure 13-15 TPC Element Manager view: Options to add and launch Elements
To add an Element, select Add Element Manager and enter the required information as displayed in Figure 13-16. The TPC user has the option to save the DS Storage Manager password in this view. If a DS8000 is equipped with two HMCs, both HMCs should be added to the Element Manager list. Once the Elements have been added to the Element Manager list, the CIMOM connectivity can be set up by Select Actions → Add CIMOM Connection. The required information to configure the CIMOM is the HMC IP address, the CIM agent username, and the CIM agent password. The setup of these parameters for the DS8000 is described in Configuring DS8000 for TPC-BE access on page 262. For further details, see the System Storage Productivity Center Software Installation and User's Guide, SC23-8823, shipped with the SSPC. The document is also available at:
http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp?topic=/com.ibm.com.sspc_v11.doc/welcome.html
Figure 13-16 Screen to configure a new DS8000 Element in the TPC Element Manager view
The lower view is a tabular view which displays details about the selected graphical view. This view displays basic capacity information about the environment, such as:
- Environment-wide view of DS8000 Model, Type, Serial number, and Firmware Version
- Allocated and available storage capacity by storage subsystem
- Size of disks, volumes, and extent pools by storage subsystem
- Available capacity by extent pool (note that Space Efficient volumes are not reported by TPC)
Figure 13-17 The TPC status of several systems in the topology view.
Navigating the Topology view of a DS8000, a storage system flagged as being in warning or error status, as displayed in Figure 13-17, can be investigated for further details down to the failing device. In the table of the topology viewer, as displayed in the disk view in Figure 13-19 on page 281, the operational status of Disk Drives is displayed. Failed Disk Drive Modules (DDMs) can have the following status:
- Error: Further investigation on the DS8000 may be indicated.
- Predictive Failure: A DDM failed and was set to deferred maintenance.
- Starting: The DDM is initializing. This status should be temporary.
- In Service: DDM service actions are in progress. This status should be temporary.
- Dormant: The DDM is a member of the DS8000, but not yet configured.
- Unknown: Further investigation on the DS8000 may be indicated.
Figure 13-18 Categorize a known computer from the category Other to the category Computer
Figure 13-19 Drill down of the topology viewer for a DS8000 extent pool.
Details about the displayed health status are discussed at Storage health management on page 283.
Figure 13-20 Topology view of physical paths between one Host and one DS8000.
In Figure 13-20, the display of the topology view points out non redundant physical paths between the host and its volumes located on the DS8000. Upgrading TPC-BE with additional TPC licences will enable TPC to assess and warn you about this lack of redundancy.
Figure 13-21 Topology view: detailed view of the DS8000 Host Ports assigned to one of its two switches.
In Figure 13-21, the display of the topology viewer points out that the switch connectivity does not match one of the recommendations given by the DS8000 Infocenter on host attachment path considerations for a storage image. In this example, we have two I/O enclosures in each I/O enclosure pair (I1/I2 or I3/I4) located on different RIO loop halves (the DS8000 Infocenter mentions that you can place two host attachments, one in each of the two I/O enclosures of any I/O enclosure pair). In the example also, all switch connections are assigned to one DS8000 RIO loop only (R1-I1 and R1-I2).
2. Element Manager navigation tree > Disk Manager > Storage Subsystems
3. Select a DS8000 and then select Create Volume. This brings up a window where you define the attributes of the new volume, as displayed in Figure 13-22. Once the selection is done, click Next.
4. In the next window, displayed in Figure 13-23, the host adapter connected to the SAN can be selected. By entering the WWPNs gathered in step one, a search for the desired ports can be done, and the ports to be assigned to the volume can be set.
5. Once the host adapter has been defined, the next Volume Wizard panel allows you to select specific DS8000 subsystem host ports connected to the same fabric as the host port.
6. The Volume Wizard completes the volume configuration with a review panel.
7. For Brocade out-of-band switches, the Wizard allows you to add a new zone to the existing SAN setup after the volume configuration has been completed. This is displayed in Figure 13-24.
8. Select Next; the new zone will be configured and the volume made available to the host.
Chapter 14. Configuration with DS Storage Manager GUI
When you upgrade an existing DS8000 to the R3 microcode (Licence Machine Code 5.30xx.xx), the back-end part of DS GUI resides in DS Storage Manager at the HMC, while
the front-end part of the DS GUI can run in a web browser on any workstation having a TCP/IP connection to the HMC. It also can run in a browser started directly at the HMC. The DS Storage manager communicates with the DS Network Interface Server which is responsible for communication with the two controllers of the DS8000. Refer to Figure 14-2 for an illustration.
2. Type in TotalStorage Productivity Center userid and password as is shown in Figure 14-4
3. In the TotalStorage Productivity Center window seen in Figure 14-5, click the Element Management button in the task bar to launch the Element Manager.
Note: Here we assume that the Disk Systems are already configured in TPC, as is described in SSPC setup and configuration on page 261. 4. Once the Element Manager is launched, expand the Select a view pull-down and select the Disk System you want to work with, as shown in Figure 14-6.
5. Next, you are presented with the DS GUI Welcome screen for the selected Disk System, as shown in Figure 14-7.
information about how to connect to the SSPC via remote desktop, refer to 13.3.8, Accessing the DS8000 GUI by SSPC on page 272.
3. Click Browser. The Web browser is started with no address bar and a Web page titled WELCOME TO THE DS8000 MANAGEMENT CONSOLE displays; see Figure 14-9 on page 294.
4. In the Welcome panel, click IBM TOTALSTORAGE DS STORAGE MANAGER.
5. A certificate panel opens. Click Accept.
6. The IBM System Storage DS8000 SignOn panel opens. Proceed by entering a user ID and password. The predefined user ID and password are:
   User ID: admin
   Password: admin
   The password must be changed on first login. If someone has already logged on, check with that person to obtain the new password.
7. A Wand (password manager) panel opens. Select OK.
8. This launches the DS GUI in the browser on the HMC.
In the Welcome panel of the DS8000 Storage Manager GUI, you see these options:
- Show all tasks: Opens the Task Manager panel, from where you can end a task or switch to another task.
- Hide Task List: Hides the Task list and expands your work area.
- Toggle Banner: Removes the banner with the IBM System Storage name and expands the working space.
- Information Center: Launches the Information Center. The Information Center is the online help for the DS8000. It provides contextual help for each panel, but is also independently accessible from the Internet.
- Close Task: Closes the active task.
On the left side of the panel is the navigation panel.
- Select / deselect all items in the table below, or in all tables.
- Select Action drop-down menu: from this menu you can select a specific task you want to perform.
- Download: downloads the information from the table below in csv (comma-separated values) format. You can open this file with a spreadsheet program.
- Filters: set or unset filters for the table below, to list only specific items.
- Print report: starts the printer dialog from your PC to print the table below.
The DS GUI displays the configuration of your DS8000 in tables. To make this more convenient, there are several options you can use: To download the information from the table, click Download. This can be useful if you want to document your configuration. The file is in comma-separated value (csv) format and you can open the file with a spreadsheet program. This function is also useful if the table on the DS8000 Manager consists of several pages; the csv file includes all pages. Print report opens a new window with the table in HTML format and starts the printer dialog of your PC if you want to print the table. The Select Action pull-down menu provides you with specific actions you can perform (for example, Create). There are also buttons to set and clear filters so that only specific items are displayed in the table (for example, show only FB ranks in the table). This can be useful if you have tables with large numbers of items.
The Add Storage Complex panel displays. Specify the IP address of the storage complex. Click Ok. The storage complex that you added is available for selection on the Storage Complexes main panel. Alternatively, you can define a storage complex, and then add the storage unit from the Storage Unit panel.
Configurations created with the Simulated manager are not executed on a real DS8000 system, but on a simulated DS8000 configuration which is maintained by the Simulated manager. The simulated configurations are saved in a configuration file. The main applications of the Simulated manager are:
- To create a logical configuration for a DS8000 using the Simulated manager, and apply it on an unconfigured DS8000 Storage Image.
- To model different logical configurations before configuring a real DS8000.
- As an educational tool to learn how to use the DS GUI interface.
The functions of the Simulated manager are for the most part identical to the Real-time manager, with a few exceptions relating to the fact that the Simulated manager does not directly interact with a real DS8000 system. The main differences include:
- The Simulated manager includes panels for managing simulated configuration files.
- The Simulated manager includes panels for defining the hardware configuration of a simulated DS8000 Storage Unit.
- The Simulated manager allows you to import the hardware configuration from a DS8000, and to apply a logical configuration on a DS8000. These actions require a network connection to the DS8000.
- The Simulated manager does not support Copy Services functions.
In this section we discuss how to use the Simulated manager. In particular, we cover the aspects of the DS GUI interface that are unique to the Simulated manager.
For detailed installation instructions and system requirements for both Linux and Windows environments, see the IBM System Storage DS8000 Users Guide, SC26-7915 manual.
The Simulated manager panel structure is identical to the Real-time manager. The panel components are explained in 14.1, DS Storage Manager GUI overview on page 288.
The new enterprise configuration is named Enterprise N. It automatically becomes the current open configuration. See Figure 14-15.
Delete - Delete an enterprise configuration. If you delete the current configuration, the Default configuration will be opened. You cannot delete the Default configuration.
Export - Export the contents of an enterprise configuration in a .xml file.
Import - Import a .xml file that contains an exported enterprise configuration.
When you perform an action that forces the current configuration to be closed, such as Create New or Open, the action message shown in Figure 14-16 is issued as a warning.
The message allows you to select what you want to do with the current open configuration. Select OK if you have made changes in the current configuration and want to save them. Select Continue if you have not made any changes or do not want to save the changes. Select Cancel to cancel the action. The current configuration is said to be temporary if it has never been saved. If you close a temporary configuration without saving it, the configuration will be deleted. When you perform an action which forces a temporary configuration to be closed, the action message shown in Figure 14-17 is issued as a warning.
The message box allows you to select what you want to do with the temporary configuration. Select OK if you want to save it. If you select Continue, the temporary configuration gets deleted. Select Cancel to cancel the action.
configuration on the simulated Storage Unit. The logical configuration tasks are identical to the Real-time manager, and are covered in other sections in this chapter. There are three ways to create a simulated DS8000 hardware configuration:
- Manually enter the hardware details of the DS8000 system.
- Import the hardware configuration from an eConfig purchase order file (.cfr file). eConfig is an IBM administrative system that is used to configure systems for ordering from manufacturing. The eConfig file contains hardware details of orderable systems.
- Import the configuration of an installed and operational DS8000 system. You can import just the hardware configuration or optionally also the logical configuration of the DS8000. You need TCP/IP connectivity to the DS8000 in order to import its configuration.
We describe these three tasks in the following sections in more detail.
To create a new simulated Storage Complex, select Create from the Select Action pull-down menu. This launches a wizard that leads you through the Storage Complex creation steps. 1. Define properties Provide the following information on the Define properties panel: Give a nickname for the Storage Complex and enter an optional description. If you already have defined Storage Units which have not been assigned a Storage Complex, you can assign them at this time. You also have the option to create a new Storage Unit from this panel.
Click Next to proceed to the next panel. 2. Verification Check the information on the Verification panel, then click Finish to complete the Storage Complex creation task. The Storage complexes: Simulated panel is redisplayed showing an updated list of Storage Complexes in the configuration file.
To create a new simulated Storage Unit, select Create from the Select Action pull-down menu. This launches a wizard that leads you step by step through the Storage Unit creation process. On each panel, provide the requested information, then click Next to proceed to the next panel. You can return to the previous panel by clicking Back.
1. General storage unit information
Provide the following information on the General storage unit information panel:
- Select the Machine Type-Model of the DS8000.
- Give a nickname and optional description for the Storage Unit.
- Select the Storage Complex where this Storage Unit should reside. If you don't select a Storage Complex, the Storage Unit becomes unassigned. You also have the option to launch the Storage Complex create wizard from this panel.
2. Specify DDM packs
Provide the following information on the Specify DDM packs panel: Specify the type and quantity of DDM packs, then click Add to add the DDM packs to the DDM list. You can add multiple sets of DDM packs to the list. Note the following restrictions:
- A minimum of two DDM packs (drive sets) are required for a DS8300 LPAR model 9A2 or 9B2.
- Both DDM packs in a disk enclosure pair have to be identical (same size and rpm). This restriction is not enforced by the Simulated manager.
3. Define licensed function
Provide the following information on the Define licensed function panel:
- Specify the quantity of Storage Images in the DS8000. Select quantity 2 if the system is an LPAR model 9A2 or 9B2; otherwise, use the default of 1. If you select 2, the panel changes to display license fields for both Storage Images of the Storage Unit.
- Enter the Operating Environment (OEL) authorization value. The field is primed with the correct value calculated based on the selected DDM packs. You only need to enter the OEL license value on the panel. The other optional licenses are not relevant; they are not used by the Simulated manager. Note that the decimal symbol in the primed value is based on your system's regional settings, but the DS Storage Manager only accepts a dot (.) as a decimal separator.
4. Specify I/O Adapter configuration
Provide the following information on the Specify I/O Adapter configuration panel: Specify the number of different host adapters (cards) on the DS8000. Note the following restrictions:
- A minimum of two host adapters are required. The two adapters have to be of the same type.
- A minimum of four host adapters are required in a DS8300 LPAR model 9A2/9B2, two identical adapters for each Storage Image. This requirement is not enforced by the Simulated manager.
5. Verification Check the information on the Verification panel, then click Finish to complete the Storage Unit creation task. The Storage units panel is redisplayed showing an updated list of simulated Storage Units in the current enterprise configuration.
Select Import from eConfig file from the Select Action pull-down menu. The Import Storage Unit(s) From eConfig File panel is displayed. Enter the eConfig file (.cfg file) name and path on the panel or locate the eConfig file using the Browse button, then click OK to start the import. The Storage Unit configuration is imported from the eConfig file to the current enterprise configuration. When complete, the Storage units panel is redisplayed showing an updated list of simulated Storage Units (Figure 14-21).
To give the imported Storage Unit a more meaningful nickname, and to assign it to a Storage Complex, checkmark it, then select Modify from the Select Action pull-down menu.
You can import either a whole Storage Complex including all Storage Units in the complex, or just a single Storage Unit. When you import a Storage Complex, a simulated Storage Complex entry and one or more simulated Storage Unit entries are created in the current enterprise configuration.
Select Import from the Select Action pull-down menu. This launches a wizard that leads you through the import process. On each panel, provide the requested information, then click Next to proceed to the next panel. You can return to the previous panel by clicking Back. 1. Specify management console IP Provide the following information on the Specify management console IP panel: Enter the IP address of the primary DS8000 HMC console. Optionally, you can enter the IP address of the secondary HMC. A valid HMC user name and password is required for the import task to access the HMC console. By default, the user name and password that you used to sign on the Simulated manager are used. If they are not valid on the HMC, you can specify the sign-on information for the HMC from which you are importing data. 2. Storage unit On the Storage unit panel, select the Storage Unit which you want to import. 3. Import data On the Import data panel, specify the amount of data to import. The options are: Physical configuration only Physical and logical configuration Physical and logical configuration plus host attachments 4. Properties Provide the following information on the Properties panel: Give a nickname to the simulated Storage Unit that is created. The field is primed with the name of the real Storage Unit you are importing. Enter an optional description for the Storage Unit.
5. Verification
Check the information on the Verification panel, then click Finish to complete the Storage Unit import task. The Storage units panel is redisplayed, showing an updated list of simulated Storage Units in the enterprise configuration. The imported Storage Unit is initially unassigned. To assign it to a simulated Storage Complex, select Modify from the Select Action pull-down menu.
Checkmark the simulated Storage Image whose logical configuration you want to apply, then select Apply Configuration from the Select Action pull-down menu. This launches a wizard that leads you through the apply process. On each panel, provide the requested information, then click Next to proceed to the next panel. You can return to the previous panel by clicking Back.
1. Specify management console IP
Provide the following information on the Specify management console IP panel: Enter the IP address of the primary DS8000 HMC console where you want to apply the configuration. Optionally, enter the IP address of the secondary HMC.
A valid HMC user name and password are required for the apply task to access the HMC console. By default, the user name and password that you used to sign on to the Simulated manager are used. If they are not valid on the HMC, you can specify the sign-on information for the HMC that you are accessing.
2. Storage unit
On the Storage unit panel, select the DS8000 Storage Unit on which you want to apply the configuration.
3. Storage image
On the Storage image panel, select the DS8000 Storage Image on which you want to apply the configuration. The panel is only displayed if the target system is a DS8300 LPAR model (9A2 or 9B2).
4. Verification
Check the information on the Verification panel, then click Finish to complete the task.
From the pull-down in the Manage Extent Pools section, select Create Extent Pool. This brings up the Create New Extent Pools panel, which is divided into the following sections:
Define Storage Characteristics - where you specify the type of storage (FB or CKD) and the RAID protection used for the extent pool being created.
Select Available Capacity - where you specify the type of configuration (Automatic or Manual), the type of physical disks, the capacity, and the usage of device adapter pairs for the new extent pool. If you specify the Automatic configuration type, the system automatically chooses which ranks to use based on your selections of capacity, disk type, and adapter usage. If you specify the Manual configuration type, you choose which ranks to use for the new extent pool.
Define Extent Pool Characteristics - where you enter the number of extent pools you want to create, the prefix for the extent pool names, the maximum percentage of used storage (threshold) at which to generate an alert, and the percentage of reserved storage. You can also specify to which processor a specific extent pool is assigned, or let the system assign it automatically.
After making the desired selections, click OK. An example of creating an extent pool is shown in Figure 14-25. In this example, we use the Automatic configuration: we want to use 73 GB physical disks and 1680 GB of capacity, and we want the ranks that are used to be spread across DA pairs. As soon as we make our selections, the newly assigned ranks are shown under the DA Pair usage scheme. We create two extent pools, specify ITSO as the prefix for the extent pool names, leave the threshold and reserved storage at their defaults, and use Automatic assignment of extent pools to servers. This can be observed in Figure 14-25.
Next, a confirmation panel is shown, where you check the names of the extent pools that are going to be created, their capacity, server assignments, RAID protection, and so on. If you want to add capacity to the extent pools or add another extent pool, use the Select action pull-down. Once you are satisfied with the specified values, click Create all to create the extent pools. The Create Extent Pool Verification panel is shown in Figure 14-26.
During the creation of extent pools, the DS GUI displays information about the progress of the task in a progress window, as can be seen in Figure 14-27.
Once the extent pools are successfully created, the progress window indicates successful completion of the task, as shown in Figure 14-28.
You can click View Details to observe the details of the finished task in the Long Running Task Summary panel, which is shown in Figure 14-29.
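For comparison, the same extent pool setup can also be done with the DS CLI, which is covered in Chapter 15. The following is only a rough sketch under assumed values; the pool names, storage type, and rank IDs are illustrative and do not come from the GUI example above:
dscli> mkextpool -rankgrp 0 -stgtype fb ITSO_0
dscli> mkextpool -rankgrp 1 -stgtype fb ITSO_1
dscli> chrank -extpool P0 R0
dscli> chrank -extpool P1 R1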
3. Select Configure I/O Ports from the drop-down menu, as shown in Figure 14-30.
4. The Configure I/O Ports panel displays; see Figure 14-31. Here you select the ports that you want to format as FcSf (Fibre Channel switched fabric), FC-AL (Fibre Channel arbitrated loop), or FICON, and then select the port format from the drop-down menu.
A message warns you that if hosts are currently accessing these ports and you reformat them, the hosts can lose access to the ports; this is because it is at this point in the configuration that you select whether a port is to be FICON or FCP.
5. You can repeat this step to format all ports to their required function.
Use this panel to debug host access and switch configuration issues. To create a new host system, do the following:
1. Select Manage hardware.
2. Click Host Systems. The Host systems panel displays; see Figure 14-33.
Figure 14-33 gives an overview of the host connections. To create a new host, select the link Create new host connection. The following process guides you through the host configuration; see Figure 14-34.
In the General host information panel, you have to enter the following information:
Type: The host type; in our example, we create a Windows host. The pull-down menu gives you a list of types you can select.
Nickname: Name of the host.
Enter the WWPNs of the host, or select them from the drop-down menu.
Attachment Port Type: Specify whether the host is attached to the DS8000 over a Fibre Channel switched fabric (P-P) or a direct FC arbitrated loop.
When you have entered the necessary information, click Next. The window shown in Figure 14-35 appears.
The Map Host Ports to a Volume Group panel displays; see Figure 14-35. In this panel, you can choose the following options:
Select the option Map at a later time to create a host connection without mapping host ports to a volume group.
Select the option Map to a new volume group to create a new volume group to use in this host connection.
Select the option Map to an existing volume group to map to a volume group that is already defined. Choose an existing volume group from the menu. Only volume groups that are compatible with the host type that you selected on the previous page are displayed.
After you enter the information, click Next. The Define I/O Ports panel displays; see Figure 14-36.
From the Define I/O ports panel, you can choose whether to automatically assign your I/O ports or manually select them from the table. Defining I/O ports determines which I/O ports can be used by the host ports in this host connection. If specific I/O ports are chosen, the host ports are only able to access the volume group on those specific I/O ports. After defining I/O ports, click Next to go to the verification panel, where you can approve your choices before you commit them. The Verification panel is displayed; see Figure 14-37.
In the Verification panel, you check the information that you entered during the process. If you want to make modifications, select Back or you can Cancel the process. After you have verified the information, click Finish to create the host system. If you need to make changes to a host system definition, you can go to the Host system panel, select the host system, and then select Modify from the drop-down menu.
4. Choose Create from the Select Action pull-down menu. The Select extent pool panel displays; see Figure 14-39.
5. In the Select extent pool panel, choose the previously created extent pool from which you want to create the volume (you can choose only one extent pool). When finished with the previous task, you then have to define the volume characteristics. See Figure 14-40.
6. In the Define volume characteristics panel (see Figure 14-40), the fields are:
Volume type: You can choose Standard Open Systems volumes in either binary (DS sizes) or decimal (ESS sizes) for compatibility with the ESS. In general, the DS sizes are fine. You can also select either protected or unprotected iSeries volumes; select unprotected iSeries volumes if you are going to use iSeries mirroring.
Select volume groups: You can select one or more volume groups to which you want to assign the volumes. If you choose no volume group, you can assign the volumes to a volume group later by using the Modify option.
Storage allocation method:
Standard: Designates that the system fully allocates the volume with real extents at volume creation time. This is the default value.
Track Space Efficient (TSE): Designates that a TSE logical volume is provisioned with a set of virtual extents and is associated with a repository.
Extent allocation method:
Rotate volumes: Allocates extents from a single rank that has the most available space.
Rotate extents: Allocates extents from all ranks within an extent pool. This allocation can improve performance for random workloads because more ranks are operating simultaneously to send or receive data. However, the volume then depends on multiple ranks to provide its data.
When complete, click Next to define the volume properties; see Figure 14-41.
7. In the Define volume properties panel, enter the following information:
Quantity: The number of volumes you want to create. The calculator tells you how many volumes of your chosen size can fit in the available space in the extent pool.
Size: Size of the volumes in GB (binary or decimal). The minimum allocation is 0.1 GB; however, this consumes an entire 1 GB extent. The maximum logical unit number (LUN) size is 2 TB. iSeries volumes can only be created in the sizes supported by the operating system; these are selectable from a drop-down menu.
Select logical subsystems (LSSs) for volumes: If you select this check box, you can specify the LSS for the volumes. Only the available LSSs display. In this example, only even LSSs display, because we selected an extent pool associated with server 0. You can assign the volumes to a specific LSS. This can be important if you want to use Copy Services. You can have a maximum of 256 volumes in each LSS.
In this example, we create two volumes with 5 GB each, assigned to LSS 46; see Figure 14-41.
8. In this example, the two volumes get the names MetroM0001 and MetroM0002; see Figure 14-42. You can see this also in the verification panel; see Figure 14-43 on page 322.
9. In the Verification panel, you check the information you entered during the process. If you want to make modifications, select Back or you can Cancel the process. After you verify the information, click Finish to create the volume.
3. Click Volume groups. The Volume groups panel displays; see Figure 14-44.
4. To create a new volume group, select Create from the Select Action pull-down menu; see Figure 14-44.
5. In the Define volume group properties panel (see Figure 14-45), enter the nickname for the volume group and select the host type from which you want to access the volume group. If you select one host (for example, pSeries), all other host types with the same addressing method are automatically selected. This does not affect the functionality of the volume group; it supports the host type selected. 6. Select the volumes to include in the volume group.
7. If you have to select a large number of volumes, you can specify the LSS, so that only these volumes display in the list, and then you can select all. Click Next to get the Verification panel; see Figure 14-46.
8. In the Verification panel, check the information you entered during the process. If you want to make modifications, select Back, or you can Cancel the process. After you verify the information, click Finish to create the volume group.
Select a Storage Image from the Select storage image pull-down menu. The panel is refreshed to show the LCUs on the Storage Image. To create new LCUs on the selected Storage Image, select Create from the Select Action pull-down menu. 2. The Select from available LCUs panel is displayed showing a list of available LCUs on the Storage Image (Figure 14-48).
Checkmark the LCUs you want to create. The table usually contains several pages. Scroll down the pages to find the right LCUs. The LCU IDs you select here must be defined in the host system I/O configuration (IOCP/HCD). When finished, click Next.
Provide the following information in the Define LCU properties panel:
SSID: Enter a Subsystem ID (SSID) for the LCU. The SSID is a four-character hexadecimal number. If you create multiple LCUs at one time, the SSID number is incremented by one for each LCU. The LCUs attached to the same operating system image must have different SSIDs. We recommend that you use unique SSID numbers across your whole environment.
LCU type: Select the LCU type you want to create. Select 3990 Mod 6 unless your operating system does not support Mod 6. The options are:
3990 Mod 3
3990 Mod 3 for TPF
3990 Mod 6
The following parameters affect the operation of certain Copy Services functions:
Concurrent copy session timeout: The time in seconds that any logical device on this LCU in a concurrent copy session stays in a long busy state before suspending a concurrent copy session.
z/OS Global Mirror Session timeout: The time in seconds that any logical device in a z/OS Global Mirror session (XRC session) stays in long busy before suspending the XRC session. The long busy occurs because the data mover has not offloaded data when the logical device (or XRC session) is no longer able to accept additional data. With recent enhancements to z/OS Global Mirror, there is now an option to suspend the z/OS Global Mirror session instead of presenting the long busy status to the applications.
Consistency group timeout: The time in seconds that remote mirror and copy consistency group volumes on this LCU stay extended long busy after an error that causes a consistency group volume to suspend. While in the extended long busy state, I/O is withheld from updating the volume.
Consistency group timeout enabled: Check the box to enable the remote mirror and copy consistency group timeout option on the LCU.
Critical mode enabled: Check the box to enable critical heavy mode. Critical heavy mode controls the behavior of the remote copy and mirror pairs that have a primary logical volume on this LCU. Click Next to proceed to the next panel. 4. The Verification panel displays (Figure 14-50).
Check the information you entered during the process. If you want to make changes, click Back, or you can Cancel the process. After you have verified the information, click Finish to create the LCU. The LCUs panel redisplays showing an updated list of LCUs on the Storage Image (Figure 14-51).
Select a Storage image from the Select storage image pull-down. Select an LCU from the Select LCU pull-down. The table is refreshed to show the volumes in the selected LCU. This allows you to check which LCUs and volumes currently exist. The actual LCUs where the new volumes are created are selected later. In our example, there are currently no volumes in LCU 02. The Select Action pull-down menu contains an action called Define Address Allocation Policy. It allows you to set a default for how addresses are allocated to the base volumes you create. See Section 14.5.8, CKD volume actions on page 336. To create new CKD volumes, select Create from the Select Action pull-down menu.
2. The Select extent pool panel is displayed showing a list of Extent Pools on the Storage Image (Figure 14-53).
Select the Extent Pool where you want to allocate the volumes. You can select only one Extent Pool at a time, although you can create volumes across multiple LCUs in one create task. To create volumes in multiple Extent Pools, you must repeat this procedure. The table on the panel has two columns that show how much capacity is available in each Extent Pool:
Available Physical GB shows how much space is available for allocating standard volumes.
Available Virtual GB shows how much virtual space is available for allocating Space Efficient (TSE) volumes. The total size of Space Efficient volumes in the Extent Pool cannot exceed this limit. This field is new in the DS8000 October 2007 release.
Click Next to proceed.
3. The Define base volume characteristics panel displays (Figure 14-54).
Provide the following information in the panel:
Volume type: The options are:
3380 Mod 2
3380 Mod 3
3390 Custom
3390 Standard Mod 3
3390 Standard Mod 9
Storage allocation method: This field is new in the DS8000 October 2007 release. The options are:
Standard - Allocate standard volumes.
Track Space Efficient (TSE) - Allocate Space Efficient volumes to be used as FlashCopy SE target volumes.
Extent allocation method: Defines how volume extents are allocated on the ranks in the Extent Pool. This field is not applicable for TSE volumes. This field is new in the DS8000 October 2007 release. The options are:
Rotate volumes - All extents of a volume are allocated on the rank that contains the most free extents. If the volume does not fit on any one rank, it can span multiple ranks in the Extent Pool. This method was previously used by default.
Rotate extents - The extents of a volume are allocated on all ranks in the Extent Pool in a round-robin fashion. This function is called Storage Pool Striping. This allocation method can improve performance because the volume is allocated on multiple ranks. It also helps to avoid hotspots by spreading the workload more evenly across the ranks. This is the preferred allocation method.
LCU: Select the LCU in which you want to create the volumes. You can allow volumes to be created on multiple LCUs by selecting more than one LCU in the list. To select multiple LCUs, keep the Ctrl or Shift key pressed while clicking an LCU. If you select Work with all available, the volumes are created in all the listed LCUs. Note that the list
contains only even or odd LCUs, depending on whether the Extent Pool you selected in the previous panel belongs to server 0 (even LCUs) or server 1 (odd LCUs). Click Next to continue.
4. The Define base volume properties panel displays. The appearance of the panel depends on the selections made on the previous panel.
Allocation on a single LCU
When you create 3390 Custom volumes and select only one LCU, the panel looks as shown in Figure 14-55.
Figure 14-55 Define base volume properties - 3390 Custom volume, one LCU
Provide the following information on the panel:
Quantity: The number of base volumes you want to create.
Size: Volume size in cylinders. This field is displayed only when you create 3390 Custom volumes; 3380 and 3390 Standard volumes have a fixed size.
Base start address: The starting address of the volumes you are about to create. Specify a decimal number in the range of 0 - 255. This defaults to the value specified in the Address Allocation Policy definition.
Ascending/Descending: Select the address allocation order for the base volumes. This defaults to the value specified in the Address Allocation Policy definition.
The volume addresses are allocated sequentially, starting from the base start address in the selected order. If an address is already allocated, the next free address is used. The addressing you select here must match the base device definitions in the host I/O configuration (HCD/IOCP).
The panel contains buttons for two calculator functions. The buttons are available only when you create 3390 Custom volumes.
Calculate max size: Fill in the Quantity field, then click this button to calculate the maximum volume size for the volumes to fit in the Extent Pool.
Calculate max quantity: Fill in the Size field, then click this button to calculate how many volumes of the specified size fit in the Extent Pool.
The Available storage section provides information on the current Extent Pool and LCU resource usage. In the example, we define six 3390 Custom volumes of the maximum supported size of 65520 cylinders (sometimes referred to as a 3390-54 volume, as its size is roughly six times the size of a 3390-9 volume).
Allocation on multiple LCUs
If you selected multiple LCUs in the previous panel, the Define base volume properties panel looks as shown in Figure 14-56. In addition to the fields described above, it allows you to select an Addressing policy. The Addressing policy defines how the new volumes are allocated on the selected LCUs. You can specify whether to spread the volumes equally across the LCUs, or whether to fill and spill the volumes across the first LCU, then the second LCU, and so on.
Define the volume properties, then click Next to proceed. 5. The Create volume nicknames panel displays (Figure 14-57).
In this panel, you can optionally give nicknames to the volumes. The nicknames serve documentation purposes only. Uncheck the Generate a sequence of nicknames based on the following box if you do not want to generate nicknames. In this example, the base volumes we create get the nicknames VOL0200, VOL0201, and so on. You can specify prefixes and suffixes up to a total length of 16 characters. You have the option to use a hexadecimal addressing sequence, typical of System z addressing, if you checkmark the Use hexadecimal sequence box.
Note: The nickname is not the System z volser of the volume. The volser is created later, when the volume is initialized using the ICKDSF INIT command.
Click Next to proceed.
6. The Define alias assignments panel displays (Figure 14-58). It allows you to define alias devices in the LCU.
Select the base volumes for which you want to create aliases by checkmarking the volumes, and provide the following information:
Starting address: Enter the first alias address as a decimal number between 0 - 255. The addressing you select here must match the alias device definitions in the host I/O configuration (HCD/IOCP).
Ascending/Descending: Select the address allocation order of the aliases.
Aliases/Per Volumes: In these two fields, enter how many aliases you want to create for each selected base volume. This creates a ratio between base and alias addresses that is applied to all the volumes selected. The ratio can be multiple aliases for each base address, or multiple base addresses to each alias; however, only whole numbers and evenly divisible ratios are acceptable, for example:
One alias : Two base addresses
Two aliases : One base address
Three aliases : Six base addresses
Three aliases : Two base addresses (not valid)
Click Add aliases to add aliases to the selected volumes. You can repeat this step with different selections to add more aliases. To remove aliases from a volume, checkmark the volume and click Remove aliases. In this example, we have selected only one volume and create 32 aliases for it. The aliases are allocated from the top address 255 (xFF) downwards, that is, to addresses 224 - 255 (xE0 - xFF). You can use this technique of assigning all aliases in the LCU to just one base volume if you have
implemented HyperPAV or Dynamic alias management. With HyperPAV, the alias devices are not permanently assigned to any base volume even though you initially assign each to a certain base volume. Rather, they reside in a common pool and are assigned to base volumes as needed on a per I/O basis. With Dynamic alias management, WLM will eventually move the aliases from the initial base volume to other volumes as needed. If your host system is using Static alias management, you need to assign aliases to all base volumes on this panel, because the alias assignments made here are permanent in nature. To change the assignments later, you have to delete and recreate aliases. When you have added all required aliases, click Next to proceed. 7. The Verification panel displays (Figure 14-59).
Verify that you entered the correct information, then click Finish to create the volumes and aliases. The Volumes - zSeries panel redisplays showing the new volumes (Figure 14-60).
After the operation completes, you have to initialize the volumes using the ICKDSF INIT command. The VOLSERs will be shown on the panel later after the volumes are initialized.
To perform an action, checkmark the volumes you want to operate on, then select the action from the Select Action pull-down menu. The following actions are available:
Create: See Section 14.5.7, Create CKD volumes on page 328.
Delete: Deletes base volumes and alias devices. Deleting a base volume destroys all data on the volume and makes the volume unavailable to the host system. The extents allocated to the volume are released to the Extent Pool. All alias devices assigned to the volume are also deleted. Deleting just an alias does not delete any data and can be done concurrently.
Initialize TSE Volume (a new option in the DS8000 October 2007 release, LMC 5.3.x.x): This action releases all extents of a Space Efficient volume from the repository volume, but the volume itself is not deleted. This causes the volume to become uninitialized (all tracks become empty), preventing it from being used by applications until it has been reinitialized using the ICKDSF INIT command.
Modify: Allows you to change the volume nickname.
Add Aliases: Use this action when you want to define additional aliases without creating new base volumes. See the instructions for Figure 14-58 on page 334 on how to use the panel.
Status: Provides volume status information, including the Access State, Data State, and Configuration State of the selected volumes.
Advanced Operations: Allows you to clear a volume or resume the configuration.
Increase capacity (a new option in the DS8000 October 2007 release, LMC 5.3.x.x): Use this action to increase the size of a volume. The capacity of a 3380 volume cannot be increased. After the operation completes, you need to use the ICKDSF REFORMAT REFVTOC command to adjust the volume VTOC to reflect the additional cylinders. Note that the capacity of a volume cannot be decreased.
Define Address Allocation Policy: This action allows you to set a default for how addresses are allocated to base volumes when they are created. It is a global setting that applies to all LCUs in the Storage Image. Specify a base start address as a decimal number (range 0 - 255) and specify whether addresses should be allocated in ascending or descending order. You can override the defaults on the Define base volume properties panel (Figure 14-55 on page 331) when you create volumes. Specifying a policy is useful if you create identical volumes in multiple LCUs. The modified policy setting is effective during one DS Storage Manager sign-on session.
Properties: Displays volume properties, such as capacity and allocation methods.
Chapter 15. Configuration with the DS CLI
When configuring a DS8000 with the DS CLI, you are required to include the machine's ID in nearly every command that is issued. If you do not want to type this ID with each command, you need to change the DS CLI profile. If you enter the serial number of the machine and the HMC's network address into this profile, you will not have to include these parameters in each command. The profile is usually found at C:\Program Files\IBM\dscli\Profile\dscli.profile. A simple way to edit the profile is to do the following:
1. From the Windows desktop, double-click the DS CLI icon.
2. From the command window that opens, enter the command: cd profile
3. From the profile directory, enter the command notepad dscli.profile, as shown in Example 15-2.
Example 15-2 Command prompt operation
C:\Program Files\ibm\dscli>cd profile
C:\Program Files\IBM\dscli\profile>notepad dscli.profile
4. Notepad now opens with the DS CLI profile. There are four lines you can consider adding. Examples of these are shown in bold in Example 15-3.
Example 15-3 DS CLI profile example
# DS CLI Profile
#
# Management Console/Node IP Address(es)
# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options.
#hmc1:127.0.0.1
#hmc2:127.0.0.1
# Default target Storage Image ID
# "devid" and "remotedevid" are equivalent to
# "-dev storage_image_ID" and "-remotedev storage_image_ID" command options, respectively.
#devid:IBM.2107-AZ12341
#remotedevid:IBM.2107-AZ12341
devid: IBM.2107-75ABCDE
hmc1: 10.0.0.250
5. If you save the file to C:\Documents and Settings\Administrator\dscli\profile\ on a Windows system, you can then reference the new profile using the -cfg parameter when starting the DS CLI, without having to specify the path.
Important: The default profile file created when you install the DS CLI will potentially be replaced every time you install a new version of the DS CLI. It is a better practice to open the default profile and then save it as a new file. You can then create multiple profiles and reference the relevant profile file using the -cfg parameter.
Adding the serial number using the devid parameter and the DS HMC IP address using the hmc1 parameter is highly recommended. Adding the username and password parameters will
certainly simplify the DS CLI startup, but is not recommended because a password is saved in clear text in the profile file. It can thus be read by anyone with access to that file. It is better to create an encrypted password file with the managepwfile CLI command. Important: Take care if adding multiple devid and HMC entries. Only one should be uncommented (or more literally, unhashed) at any one time. If you have multiple hmc1 or devid entries, the DS CLI uses the one closest to the bottom of the profile.
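A minimal sketch of that approach follows; the user name, password, and profile file name are illustrative only, and the managepwfile parameters shown are assumptions that should be verified with the DS CLI help:
dscli> managepwfile -action add -name admin -pw passw0rd
C:\Program Files\IBM\dscli>dscli -cfg myds8000.profile
The first command creates or updates an encrypted password file entry for the given user; the second shows how a saved profile is referenced with the -cfg parameter at startup.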
There are three possible topologies for each I/O port:
SCSI-FCP: Fibre Channel switched fabric (also called point to point)
FC-AL: Fibre Channel arbitrated loop
FICON: for System z hosts only
In Example 15-5 we set two I/O ports to the FICON topology and then check the results.
Example 15-5 Changing topology using setioport
dscli> setioport -topology ficon I0001
Date/Time: 27 October 2005 23:04:43 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00011I setioport: I/O Port I0001 successfully configured.
dscli> setioport -topology ficon I0101
Date/Time: 27 October 2005 23:06:13 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00011I setioport: I/O Port I0101 successfully configured.
dscli> lsioport
Date/Time: 27 October 2005 23:06:32 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
ID WWPN State Type topo portgrp
===============================================================
I0000 500507630300008F Online Fibre Channel-SW SCSI-FCP 0
I0001 500507630300408F Online Fibre Channel-SW FICON 0
I0002 500507630300808F Online Fibre Channel-SW SCSI-FCP 0
I0003 500507630300C08F Online Fibre Channel-SW SCSI-FCP 0
I0100 500507630308008F Online Fibre Channel-LW FICON 0
I0101 500507630308408F Online Fibre Channel-LW FICON 0
I0102 500507630308808F Online Fibre Channel-LW FICON 0
I0103 500507630308C08F Online Fibre Channel-LW FICON 0
The first task, set the I/O ports, has already been discussed in 15.2, Configuring the I/O ports on page 342. The second task, install the license keys, has already been discussed in 11.2.4, Applying activation codes using the DS CLI on page 241.
In Example 15-6, we can see that there are four array sites and that we can therefore create four arrays. We can now issue the mkarray command to create arrays, as in Example 15-7. You will notice that in this case we have used one array site (in the first array, S1) to create a single RAID-5 array. If we wished to create a RAID-10 array, we would have to change the -raidtype parameter to 10 (instead of 5).
Example 15-7 Creating arrays with mkarray
dscli> mkarray -raidtype 5 -arsite S1
Date/Time: 27 October 2005 21:57:59 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00004I mkarray: Array A0 successfully created.
dscli> mkarray -raidtype 5 -arsite S2
Date/Time: 27 October 2005 21:58:24 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00004I mkarray: Array A1 successfully created.
dscli>
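If RAID-10 were wanted instead, only the -raidtype value changes; a brief sketch using another array site (S3 here is purely illustrative):
dscli> mkarray -raidtype 10 -arsite S3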
We can now see what arrays have been created by using the lsarray command; see Example 15-8.
Example 15-8 Listing the arrays with lsarray
dscli> lsarray
Date/Time: 27 October 2005 21:58:27 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Array State Data RAIDtype arsite Rank DA Pair DDMcap (10^9B)
=====================================================================
A0 Unassigned Normal 5 (6+P+S) S1 0 146.0
A1 Unassigned Normal 5 (6+P+S) S2 0 146.0
Example 15-8 shows the result of the lsarray command. We can see the type of RAID array and the number of disks that are allocated to the array (in this example 6+P+S, which means the usable space of the array is 6 times the DDM size), as well as the capacity of the DDMs that are used and which array sites were used to create the arrays.
For ease of management, we create empty extent pools relating to the type of storage that is in this pool. For example, create an extent pool for high capacity disk, create another for high performance, and, if needed, extent pools for the CKD environment. For high capacity, you would consider using 300 GB DDMs, while for high performance you might consider 73 GB DDMs. It is also a good idea to note to which server the extent pool has an affinity.
Example 15-10 An extent pool layout plan
FB Extent Pool high capacity 300 GB disks assigned to server 0 (FB_LOW_0)
FB Extent Pool high capacity 300 GB disks assigned to server 1 (FB_LOW_1)
FB Extent Pool high performance 146 GB disks assigned to server 0 (FB_High_0)
FB Extent Pool high performance 146 GB disks assigned to server 1 (FB_High_1)
CKD Extent Pool high performance 146 GB disks assigned to server 0 (CKD_High_0)
CKD Extent Pool high performance 146 GB disks assigned to server 1 (CKD_High_1)
Example 15-10 shows an example of how you could divide your machine. Now, in Example 15-6 on page 343, we only had four array sites, so clearly we would need more DDMs to support this many extent pools. Note that the mkextpool command forces you to name the extent pools. In Example 15-11, we first create empty extent pools using mkextpool. We then list the extent pools to get their IDs. Then we attach a rank to an empty extent pool using the chrank command. Finally, we list the extent pools again using lsextpool and note the change in capacity of the extent pool.
Example 15-11 Extent pool creation using mkextpool, lsextpool, and chrank
dscli> mkextpool -rankgrp 0 -stgtype fb FB_high_0
Date/Time: 27 October 2005 21:42:04 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00000I mkextpool: Extent Pool P0 successfully created.
dscli> mkextpool -rankgrp 1 -stgtype fb FB_high_1
Date/Time: 27 October 2005 21:42:12 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00000I mkextpool: Extent Pool P1 successfully created.
dscli> lsextpool
Date/Time: 27 October 2005 21:49:33 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb 0 below 0 0 0 0 0
FB_high_1 P1 fb 1 below 0 0 0 0 0
dscli> chrank -extpool P0 R0
Date/Time: 27 October 2005 21:43:23 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00008I chrank: Rank R0 successfully modified.
dscli> chrank -extpool P1 R1
Date/Time: 27 October 2005 21:43:23 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00008I chrank: Rank R1 successfully modified.
dscli> lsextpool
Date/Time: 27 October 2005 21:50:10 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb 0 below 773 0 773 0 0
FB_high_1 P1 fb 1 below 773 0 773 0 0
After having assigned a rank to an extent pool, we should be able to see this when we display the ranks. In Example 15-12 on page 346 we can see that rank R0 is assigned to extpool P0.
Example 15-12 Displaying the ranks after assigning a rank to an extent pool
dscli> lsrank -l
Date/Time: 27 October 2005 22:08:42 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
ID Group State datastate Array RAIDtype extpoolID extpoolnam stgtype exts usedexts
===================================================================================
R0 0 Normal Normal A0 5 P0 FB_high_0 fb 773 0
R1 1 Normal Normal A1 5 P1 FB_high_1 fb 773 0
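Space Efficient volumes require a repository in the extent pool. The repository creation itself is not reproduced on this page; the following is only a sketch that would match the 100 GB real and 200 GB virtual repository queried in Example 15-14, and the parameter names are assumptions to be verified with the DS CLI help:
dscli> mksestg -repcap 100 -vircap 200 -extpool p53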
A repository can be deleted with the rmsestg command, and you can get information about the repository with the showsestg command. Example 15-14 shows the output of the showsestg command. You might particularly be interested in how much capacity is used within the repository by checking the repcapalloc value.
Example 15-14 Getting information about a Space Efficient repository
dscli> showsestg p53
Date/Time: October 17, 2007 1:30:53 IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
extentpoolID P53
stgtype fb
datastate Normal
configstate Normal
repcapstatus below
%repcapthreshold 0
repcap (2^30B) 100.0
repcapblocks 209715200
repcapcyls
repcapalloc 0.0
%repcapalloc 0
vircap 200.0
vircapblocks 419430400
vircapcyls
vircapalloc 0.0
%vircapalloc 0
overhead 3.0
dscli>
Note that some more storage is allocated for the repository in addition to repcap size. In Example 15-14 the line that starts with overhead indicates that 3 GB had been allocated in addition to the repcap size. Attention: In the current implementation of Space Efficient volumes it is not possible to expand a repository. Neither the physical size nor the virtual size of the repository can be changed. Therefore careful planning is required. If you have to expand a repository you must delete all Space Efficient volumes and the repository itself before creating a new one.
Looking closely at the mkfbvol command used in Example 15-15, we see that volumes 1000-1003 are in extent pool P0. That extent pool is attached to rank group 0, which means server 0. Now rank group 0 can only contain even-numbered LSSs, so that means volumes in that extent pool must belong to an even-numbered LSS. The first two digits of the volume serial number are the LSS number, so in this case, volumes 1000-1003 are in LSS 10. For volumes 1100-1103 in Example 15-15, the first two digits of the volume serial number are 11, which is an odd number, which signifies they belong to rank group 1. Also note that the -cap parameter determines the size, but because the -type parameter was not used, the default is a binary size. So these volumes are 10 GB binary, which equates to 10,737,418,240 bytes. If we had used the parameter -type ess, the volumes would be decimally sized and would be a minimum of 10,000,000,000 bytes in size.
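Example 15-15 itself is not reproduced on this page. A hedged sketch of the commands it describes, reconstructed from the discussion above (output omitted), would be:
dscli> mkfbvol -extpool p0 -cap 10 -name high_fb_0_#h 1000-1003
dscli> mkfbvol -extpool p1 -cap 10 -name high_fb_1_#h 1100-1103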
Finally, in Example 15-15 we named the volumes using the naming base: high_fb_0_#h. The #h means to use the hexadecimal volume number as part of the volume name. This can be seen in Example 15-16, where we list the volumes that we have created using the lsfbvol command. We then list the extent pools to see how much space we have left after the volume creation.
Example 15-16 Checking the machine after creating volumes, by using lsextpool and lsfbvol
dscli> lsfbvol
Date/Time: 27 October 2005 22:28:01 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name ID accstate datastate configstate deviceMTM datatype extpool cap (2^30B) cap (10^9B)
===========================================================================================================
high_fb_0_1000 1000 Online Normal Normal 2107-922 FB 512 P0 10.0
high_fb_0_1001 1001 Online Normal Normal 2107-922 FB 512 P0 10.0
high_fb_0_1002 1002 Online Normal Normal 2107-922 FB 512 P0 10.0
high_fb_0_1003 1003 Online Normal Normal 2107-922 FB 512 P0 10.0
high_fb_1_1100 1100 Online Normal Normal 2107-922 FB 512 P1 10.0
high_fb_1_1101 1101 Online Normal Normal 2107-922 FB 512 P1 10.0
high_fb_1_1102 1102 Online Normal Normal 2107-922 FB 512 P1 10.0
high_fb_1_1103 1103 Online Normal Normal 2107-922 FB 512 P1 10.0
dscli> lsextpool
Date/Time: 27 October 2005 22:27:50 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
FB_high_0 P0 fb 0 below 733 5 733 0 4
FB_high_1 P1 fb 1 below 733 5 733 0 4
Important: For the DS8000, the LSSs can be ID 00 to ID FE. The LSSs are in address groups. Address group 0 is LSS 00 to 0F, address group 1 is LSS 10 to 1F, and so on. The moment you create an FB volume in an address group, then that entire address group can only be used for FB volumes. Be aware of this when planning your volume layout in a mixed FB/CKD DS8000.
The showfbvol command with the -rank option (see Example 15-18) shows that the volume we created is distributed across 12 ranks and how many extents on each rank were allocated for this volume.
Example 15-18 Getting information about a striped volume
dscli> showfbvol -rank 1720
Date/Time: October 17, 2007 1:56:52 IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
Name ITSO-XPSTR
ID 1720
accstate Online
datastate Normal
configstate Normal
deviceMTM 2107-900
datatype FB 512
addrgrp 1
extpool P53
exts 15
captype DS
cap (2^30B) 15.0
cap (10^9B)
cap (blocks) 31457280
volgrp
ranks 12
dbexts 0
sam Standard
repcapalloc
eam rotateexts
reqcap (blocks) 31457280
==============Rank extents==============
rank extents
============
R24 2
R25 1
R28 1
R29 1
R32 1
R33 1
R34 1
R36 1
R37 1
R38 1
R40 2
R41 2
dscli>
When listing Space Efficient repositories with the lssestg command (see Example 15-20) we can see that in extent pool P53 we have a virtual allocation of 40 extents (GB), but that the allocated (used) capacity repcapalloc is still zero.
Example 15-20 Getting information about Space Efficient repositories
dscli> lssestg -l
Date/Time: October 17, 2007 3:12:11 PM CEST IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
extentpoolID stgtype datastate configstate repcapstatus %repcapthreshold repcap (2^30B) vircap repcapalloc vircapalloc
======================================================================================================================
P4 ckd Normal Normal below 0 64.0 1.0 0.0 0.0
P47 fb Normal Normal below 0 70.0 282.0 0.0 264.0
P53 fb Normal Normal below 0 100.0 200.0 0.0 40.0
dscli>
This allocation comes from the volume just created. To see the allocated space in the repository for just this volume, we can use the showfbvol command (see Example 15-21).
Example 15-21 Checking the repository usage for a volume
dscli> showfbvol 1721
Date/Time: October 17, 2007 3:29:30 PM CEST IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
Name ITSO-1721-SE
ID 1721
accstate Online
datastate Normal
configstate Normal
deviceMTM 2107-900
datatype FB 512
addrgrp 1
extpool P53
exts 40
captype DS
cap (2^30B) 40.0
cap (10^9B)
cap (blocks) 83886080
volgrp
ranks 0
dbexts 0
sam TSE
repcapalloc 0
eam
reqcap (blocks) 83886080
dscli>
As the original volume had the rotateexts attribute, the additional extents are also striped (see Example 15-23).
Example 15-23 Checking the status of an expanded volume
dscli> showfbvol -rank 1720
Date/Time: October 17, 2007 2:52:04 IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
Name ITSO-XPSTR
ID 1720
accstate Online
datastate Normal
configstate Normal
deviceMTM 2107-900
datatype FB 512
addrgrp 1
extpool P53
exts 20
captype DS
cap (2^30B) 20.0
cap (10^9B)
cap (blocks) 41943040
volgrp
ranks 12
dbexts 0
sam Standard
repcapalloc
eam rotateexts
reqcap (blocks) 41943040
==============Rank extents==============
rank extents
============
R24 2
R25 2
R28 2
R29 2
R32 2
R33 2
R34 1
R36 1
R37 1
R38 1
R40 2
R41 2
dscli>
Important: Before you can expand a volume you first have to delete all copy services relationships for that volume.
Having determined the host type, we can now make a volume group. In Example 15-25 the example host type we chose is AIX, and checking Example 15-24 on page 352, we can see the address discovery method for AIX is scsimask.
Example 15-25 Creating a volume group with mkvolgrp and displaying it
dscli> mkvolgrp -type scsimask -volume 1000-1002,1100-1102 AIX_VG_01
Date/Time: 27 October 2005 23:18:07 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00030I mkvolgrp: Volume group V11 successfully created.
dscli> lsvolgrp
Date/Time: 27 October 2005 23:18:21 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name ID Type
=======================================
ALL CKD V10 FICON/ESCON All
AIX_VG_01 V11 SCSI Mask
ALL Fixed Block-512 V20 SCSI All
ALL Fixed Block-520 V30 OS400 All
dscli> showvolgrp V11
Date/Time: 27 October 2005 23:18:15 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name AIX_VG_01
ID V11
Type SCSI Mask
Vols 1000 1001 1002 1100 1101 1102
In this example, we added volumes 1000 to 1002 and 1100 to 1102 to the new volume group. We did this to spread the workload evenly across the two rank groups. We then listed all available volume groups using lsvolgrp. Finally, we listed the contents of volume group V11, since this was the volume group we created. Clearly, we might also want to add volumes to or remove volumes from this volume group at a later time. To achieve this, we use chvolgrp with the -action parameter. In Example 15-26 on page 353, we add volume 1003 to volume group V11, display the results, and then remove the volume.
Example 15-26 Changing a volume group with chvolgrp
dscli> chvolgrp -action add -volume 1003 V11
Date/Time: 27 October 2005 23:22:50 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00031I chvolgrp: Volume group V11 successfully modified.
dscli> showvolgrp V11
Date/Time: 27 October 2005 23:22:58 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name AIX_VG_01
ID V11
Type SCSI Mask
Vols 1000 1001 1002 1003 1100 1101 1102
dscli> chvolgrp -action remove -volume 1003 V11
Date/Time: 27 October 2005 23:23:08 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00031I chvolgrp: Volume group V11 successfully modified.
dscli> showvolgrp V11
Date/Time: 27 October 2005 23:23:13 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name AIX_VG_01
ID V11
Type SCSI Mask
Vols 1000 1001 1002 1100 1101 1102
Important: Not all operating systems can deal with the removal of a volume. Consult your operating system documentation to determine the safest way to remove a volume from a host.
All operations with volumes and volume groups described above can also be used with Space Efficient volumes.
Note that you can also use just -profile instead of -hosttype. However, we do not recommend that you do this. If you use the -hosttype parameter, it actually invokes both parameters (-profile and -hosttype), whereas using just -profile leaves the -hosttype column unpopulated. There is also the option in the mkhostconnect command to restrict access to only certain I/O ports. This is done with the -ioport parameter. Restricting access in this way is usually unnecessary. If you want to restrict access for certain hosts to certain I/O ports on the DS8000, do this through zoning on your SAN switch.
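As a hedged illustration of these points, a host connection that relies on -hosttype and does not restrict I/O ports might be created as follows; the WWPN, nickname, and volume group ID are illustrative values only:
dscli> mkhostconnect -wwname 10000000C912345F -hosttype pSeries -volgrp V11 AIX_Server_01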
Example 15-28 Using the portgrp number to separate attached hosts
dscli> lshostconnect
Date/Time: 14 November 2005 4:27:15 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
Name ID WWPN HostType Profile portgrp volgrpID
===========================================================================================
bench_tic17_fc0 0008 210000E08B1234B1 LinuxSuse Intel - Linux Suse 8 V1 all
bench_tic17_fc1 0009 210000E08B12A3A2 LinuxSuse Intel - Linux Suse 8 V1 all
p630_fcs0 000E 10000000C9318C7A pSeries IBM pSeries - AIX 9 V2 all
p630_fcs1 000F 10000000C9359D36 pSeries IBM pSeries - AIX 9 V2 all
p615_7 0010 10000000C93E007C pSeries IBM pSeries - AIX 10 V3 all
p615_7 0011 10000000C93E0059 pSeries IBM pSeries - AIX 10 V3 all
Example 15-30 lshostvol on an AIX host using SDD
dscli> lshostvol
Date/Time: November 10, 2005 3:06:26 PM CET IBM DSCLI Version: 5.0.6.142
Disk Name Volume Id Vpath Name
============================================================
hdisk1,hdisk3,hdisk5,hdisk7 IBM.1750-1300247/1000 vpath0
hdisk2,hdisk4,hdisk6,hdisk8 IBM.1750-1300247/1100 vpath1
You do not have to create volume groups or host connects for CKD volumes. Provided there are I/O ports in Fibre Channel connection (FICON) mode, access to CKD volumes by FICON hosts is granted automatically.
A repository can be deleted with the rmsestg command, and you can get information about the repository with the showsestg command. Example 15-38 shows the output of the showsestg command. You might particularly be interested in how much capacity within the repository is used by checking the repcapalloc value.
Example 15-38 Getting information about a Space Efficient CKD repository
dscli> showsestg p4
Date/Time: October 17, 2007 4:17:04 IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
extentpoolID P4
stgtype ckd
datastate Normal
configstate Normal
repcapstatus below
%repcapthreshold 0
repcap (2^30B) 100.0
repcapblocks
repcapcyls 126329
repcapalloc 0.0
%repcapalloc 0
vircap 200.0
vircapblocks
vircapcyls 252658
vircapalloc 0.0
%vircapalloc 0
overhead 4.0
dscli>
Note that some more storage is allocated for the repository in addition to the repcap size. In Example 15-38, the line that starts with overhead indicates that 4 GB had been allocated in addition to the repcap size.
Important: In the current implementation of Space Efficient volumes, it is not possible to expand a repository. Neither the physical size nor the virtual size of the repository can be changed. Therefore, careful planning is required. If you have to expand a repository, you must delete all Space Efficient volumes and the repository itself before creating a new one.
So first we must use the mklcu command. The format of the command is:
mklcu -qty XX -id XX -ss XX
Then, to display the LCUs that we have created, we can use the lslcu command. In Example 15-40 on page 359, we create two LCUs using mklcu, and then list the created LCUs using lslcu. Note that, by default, the LCUs that are created are 3990-6.
Example 15-40 Creating a logical control unit with mklcu
dscli> mklcu -qty 2 -id 00 -ss FF00
Date/Time: 28 October 2005 16:53:17 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00017I mklcu: LCU 00 successfully created.
CMUC00017I mklcu: LCU 01 successfully created.
dscli> lslcu
Date/Time: 28 October 2005 16:53:26 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
ID Group addrgrp confgvols subsys conbasetype
=============================================
00 0 0 0 0xFF00 3990-6
01 1 0 0 0xFF01 3990-6
Also note that because we created two LCUs (using the parameter -qty 2), the first LCU, ID 00 (an even number), belongs to rank group 0, while the second LCU, ID 01 (an odd number), belongs to rank group 1. By placing LCUs into both rank groups, we maximize performance by spreading the workload across both servers of the DS8000.
Note: For the DS8000, the CKD LCUs can be ID 00 to ID FE. The LCUs fit into one of 16 address groups. Address group 0 is LCUs 00 to 0F, address group 1 is LCUs 10 to 1F, and so on. If you create a CKD LCU in an address group, then that address group cannot be used for FB volumes. Likewise, if there were, for instance, FB volumes in LSS 40 to 4F (address group 4), then that address group cannot be used for CKD. Be aware of this when planning the volume layout in a mixed FB/CKD DS8000.
Remember, we can only create CKD volumes using LCUs that we have already created. From our examples, trying, for instance, to make volume 0200 fails with the same message seen in Example 15-39 on page 359, because we only created LCU IDs 00 and 01, meaning all CKD volumes must be in the address range 00xx (LCU ID 00) and 01xx (LCU ID 01). You also need to be aware that volumes in even numbered LCUs must be created from an extent pool that belongs to rank group 0, while volumes in odd numbered LCUs must be created from an extent pool in rank group 1.
there is enough free space on that rank). The next rank is used when the next volume is created. This allocation method is called rotate volumes. You can also specify that you want the extents of the volume you are creating to be evenly distributed across all ranks within the extent pool. The extent allocation method is specified with the -eam rotateexts or -eam rotatevols option of the mkckdvol command (see Example 15-42).
Example 15-42 Creating a CKD volume with extent pool striping
dscli> mkckdvol -extpool p4 -cap 10017 -name ITSO-CKD-STRP -eam rotateexts 0080
Date/Time: October 17, 2007 4:26:29 IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
CMUC00021I mkckdvol: CKD Volume 0080 successfully created.
dscli>
The showckdvol command with the -rank option (see Example 15-43) shows that the volume we created is distributed across 2 ranks and how many extents on each rank were allocated for this volume.
Example 15-43 Getting information about a striped CKD volume
dscli> showckdvol -rank 0080
Date/Time: October 17, 2007 4:28:47 PM CEST IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
Name ITSO-CKD-STRP
ID 0080
accstate Online
datastate Normal
configstate Normal
deviceMTM 3390-9
volser
datatype 3390
voltype CKD Base
orgbvols
addrgrp 0
extpool P4
exts 9
cap (cyl) 10017
cap (10^9B) 8.5
cap (2^30B) 7.9
ranks 2
sam Standard
repcapalloc
eam rotateexts
reqcap (cyl) 10017
==============Rank extents==============
rank extents
============
R4 4
R30 5
dscli>
A Space Efficient volume is created by specifying the -sam tse (track space efficient) parameter on the mkckdvol command (see Example 15-44).
Example 15-44 Creating a Space Efficient CKD volume
dscli> mkckdvol -extpool p4 -cap 10017 -name ITSO-CKD-SE -sam tse 0081
Date/Time: October 17, 2007 4:34:10 IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
CMUC00021I mkckdvol: CKD Volume 0081 successfully created.
When listing Space Efficient repositories with the lssestg command (see Example 15-45) we can see that in extent pool P4 we have a virtual allocation of 7.9 GB, but that the allocated (used) capacity repcapalloc is still zero.
Example 15-45 Getting information about Space Efficient CKD repositories
dscli> lssestg -l
Date/Time: October 17, 2007 4:37:34 IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
extentpoolID stgtype datastate configstate repcapstatus %repcapthreshold repcap (2^30B) vircap repcapalloc vircapalloc
======================================================================================================================
P4 ckd Normal Normal below 0 100.0 200.0 0.0 7.9
dscli>
This allocation comes from the volume just created. To see the allocated space in the repository for just this volume, we can use the showckdvol command (see Example 15-46).
Example 15-46 Checking the repository usage for a CKD volume
dscli> showckdvol 0081
Date/Time: October 17, 2007 4:49:18 IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
Name ITSO-CKD-SE
ID 0081
accstate Online
datastate Normal
configstate Normal
deviceMTM 3390-9
volser
datatype 3390
voltype CKD Base
orgbvols
addrgrp 0
extpool P4
exts 9
cap (cyl) 10017
cap (10^9B) 8.5
cap (2^30B) 7.9
ranks 0
sam TSE
repcapalloc 0
eam
reqcap (cyl) 10017
dscli>
Example 15-47 Expanding a striped CKD volume
dscli> chckdvol -cap 30051 0080
Date/Time: October 17, 2007 4:54:09 IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
CMUC00332W chckdvol: Some host operating systems do not support changing the volume size. Are you sure that you want to resize the volume? [y/n]: y
CMUC00022I chckdvol: CKD Volume 0080 successfully modified.
dscli>
As the original volume had the rotateexts attribute, the additional extents are also striped (see Example 15-48).
Example 15-48 Checking the status of an expanded CKD volume
dscli> showckdvol -rank 0080
Date/Time: October 17, 2007 4:56:01 IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
Name          ITSO-CKD-STRP
ID            0080
accstate      Online
datastate     Normal
configstate   Normal
deviceMTM     3390-9
volser
datatype      3390
voltype       CKD Base
orgbvols
addrgrp       0
extpool       P4
exts          27
cap (cyl)     30051
cap (10^9B)   25.5
cap (2^30B)   23.8
ranks         2
sam           Standard
repcapalloc
eam           rotateexts
reqcap (cyl)  30051
==============Rank extents==============
rank extents
============
R4   13
R30  14
dscli>
Attention: Before you can expand a volume, you first have to delete all Copy Services relationships for that volume.
Having created the BAT file, we can run it and display the output, as illustrated in Example 15-50: we run the batch file samplebat.bat and the command output is displayed. A sketch of such a BAT file follows the example.
Example 15-50 Executing a BAT file with DS CLI commands in it
D:\>samplebat.bat
Date/Time: 28 October 2005 23:02:32 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
arsite DA Pair dkcap (10^9B) State      Array
=============================================
S1     0       73.0          Unassigned
S2     0       73.0          Unassigned
S3     0       73.0          Unassigned
S4     0       73.0          Unassigned
S5     0       73.0          Unassigned
S6     0       73.0          Unassigned
Date/Time: 28 October 2005 23:02:39 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00234I lsarray: No Array found.
Date/Time: 28 October 2005 23:02:47 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00234I lsrank: No Rank found.
Date/Time: 28 October 2005 23:02:53 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00234I lsextpool: No Extent Pool found.
D:\>
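For reference, such a BAT file simply contains one DS CLI invocation per line, each running in single-shot command mode. The following is only a sketch: the profile name ds8000.profile is hypothetical, and the commands are the ones whose output appears in Example 15-50.

@ECHO OFF
REM Sketch of a BAT file that runs DS CLI commands in single-shot mode
REM (ds8000.profile is a hypothetical profile holding the HMC address and credentials)
dscli -cfg ds8000.profile lsarraysite
dscli -cfg ds8000.profile lsarray
dscli -cfg ds8000.profile lsrank
dscli -cfg ds8000.profile lsextpool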
In Example 15-51, we have the contents of a DS CLI script file. Note that it only contains DS CLI commands, though comments can be placed in the file using a hash symbol (#). One advantage of using this method is that scripts written in this format can be used by the DS CLI on any operating system on which the DS CLI can be installed.
Example 15-51 Example of a DS CLI script file
# Sample ds cli script file
# Comments can appear if hashed
lsarraysite
lsarray
lsrank
In Example 15-52, we start the DS CLI using the -script parameter and specifying the name of the script that contains the commands from Example 15-51.
Example 15-52 Executing DS CLI with a script file
C:\Program Files\ibm\dscli>dscli -script sample.script
Date/Time: 28 October 2005 23:06:47 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
arsite DA Pair dkcap (10^9B) State      Array
=============================================
S1     0       73.0          Unassigned
S2     0       73.0          Unassigned
S3     0       73.0          Unassigned
S4     0       73.0          Unassigned
S5     0       73.0          Unassigned
S6     0       73.0          Unassigned
Date/Time: 28 October 2005 23:06:52 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00234I lsarray: No Array found.
Date/Time: 28 October 2005 23:06:53 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00234I lsrank: No Rank found.
C:\Program Files\ibm\dscli>
Part 4. Host considerations
In this part, we discuss the specific host considerations that you might need for implementing the DS8000 with your chosen platform. We present the following host platforms:
Open systems considerations
System z considerations
System i considerations
Chapter 16. Open systems considerations
http://www.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes
For each query, select a storage system, a server model, an operating system and an HBA type. Each query shows a list of all supported HBAs together with the required firmware and device driver levels for your combination. Furthermore, a list of supported SAN switches and directors is displayed.
The System Storage Proven Web site provides more detail about the program, as well as the list of pretested configurations:
http://www.ibm.com/servers/storage/proven/index.html
QLogic Corporation
The QLogic Web site can be found at:
http://www.qlogic.com
QLogic maintains a page that lists all the HBAs, drivers, and firmware versions that are supported for attachment to IBM storage systems:
http://support.qlogic.com/support/oem_ibm.asp
Emulex Corporation
The Emulex home page is:
http://www.emulex.com
They also have a page with content specific to IBM storage systems:
http://www.emulex.com/ts/docoem/framibm.htm
JNI/AMCC
AMCC took over the former JNI, but still markets Fibre Channel (FC) HBAs under the JNI brand name. JNI HBAs are supported for DS8000 attachment to Sun systems. The home page is:
http://www.amcc.com
Atto
Atto supplies HBAs, which IBM supports for Apple Macintosh attachment to the DS8000. Their home page is:
http://www.attotech.com
They have no IBM storage specific page. Their support page is:
http://www.attotech.com/support.html
You must register in order to download drivers and utilities for their HBAs.
When you click the Subsystem Device Driver downloads link, you are presented with a list of all operating systems for which SDD is available. Selecting an operating system leads you to the download packages, the user's guide, and additional support information. The user's guide, the IBM System Storage Multipath Subsystem Device Driver Users Guide, SC30-4131, contains all the information that is needed to install, configure, and use SDD for all supported operating systems.

Note: SDD and RDAC, the multipathing solution for the IBM System Storage DS4000 series, can coexist on most operating systems, as long as they manage separate HBA pairs. Refer to the DS4000 series documentation for detailed information.
16.2 Windows
DS8000 supports Fibre Channel attachment to Microsoft Windows 2000 Server and Windows Server 2003 servers. For details regarding operating system versions and HBA types, see the DS8000 Interoperability Matrix or the System Storage Interoperation Center (SSIC), available at:
http://www.ibm.com/servers/storage/disk/DS8000/interop.html
http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes
The support includes cluster service, and the DS8000 can act as a boot device. Booting is currently supported with host adapters QLA23xx (32-bit or 64-bit) and LP9xxx (32-bit only). For a detailed discussion about SAN booting (advantages, disadvantages, potential difficulties, and troubleshooting), we highly recommend the Microsoft document Boot from SAN in Windows Server 2003 and Windows 2000 Server, available at:
http://www.microsoft.com/windowsserversystem/wss2003/techinfo/plandeploy/BootfromSANinWindows.mspx
Set the value of the Time Out Value associated with the host adapters to 60 seconds. The operating system uses the Time Out Value parameter to bind its recovery actions and responses to the disk subsystem. The value is stored in the Windows registry at:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk\TimeOutValue
The value has the data type REG_DWORD and should be set to 0x0000003c hexadecimal (60 decimal).
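As a sketch, the same value can be set from a command prompt with the reg command, rather than editing the registry manually (the key, value name, and data are those given above):

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f
reg query HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue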
Note: Newly assigned disks are discovered automatically; if not, go to Disk Management and rescan the disks, or go to Device Manager and scan for hardware changes.
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
   0     Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL        6       0
   1     Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL        4       0
   2     Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL        4       0
   3     Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL        2       0
Another helpful command is datapath query wwpn, shown in Example 16-2. It helps you to get the Worldwide Port Name (WWPN) of your Fibre Channel adapter.
Example 16-2 datapath query wwpn
C:\Program Files\IBM\Subsystem Device Driver>datapath query wwpn
         Adapter Name     PortWWN
Scsi Port2:               210000E08B037575
Scsi Port3:               210000E08B033D76
The commands datapath query essmap and datapath query portmap are not available.
Support for Windows 2000 Server and Windows Server 2003 clustering
SDD 1.6.0.0 (or later) is required to support load balancing in Windows clustering. When running Windows clustering, clustering failover might not occur when the last path is being removed from the shared resources. See Microsoft article Q294173 for additional information at:
http://support.microsoft.com/default.aspx?scid=kb;en-us;Q294173
Windows does not support dynamic disks in the Microsoft Cluster Server (MSCS) environment.
condition changes back to open and the adapter condition changes back to normal, even if the path has not been made operational again.

Note: The adapter goes to DEGRAD state when there are active paths left on the adapter. It goes to FAILED state when there are no active paths.

The datapath set adapter # offline command operates differently in a clustering environment as compared to a non-clustering environment. In a clustering environment, the datapath set adapter offline command does not change the condition of the path if the path is active or being reserved. If you issue the command, the following message displays: to preserve access some paths left online.
16.2.4 Subsystem Device Driver Device Specific Module for Windows 2003
Subsystem Device Driver Device Specific Module (SDDDSM) installation is a package for DS8000 devices on the Windows Server 2003 operating system. SDDDSM is the IBM multipath I/O solution based on Microsoft MPIO technology. Together with MPIO, it is designed to support the multipath configuration environments in the IBM System Storage DS8000. It resides in a host system with the native disk device driver and provides the following functions:
Enhanced data availability
Dynamic I/O load-balancing across multiple paths
Automatic path failover protection
Concurrent download of licensed internal code
Path-selection policies for the host system
Note that SDDDSM does not support Windows 2000, and for the HBA driver it requires the StorPort version of the HBA miniport driver. To download SDDDSM, go to the Web site:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=D430&uid=ssg1S4000350&lo=en_US&cs=utf-8&lang=en
Note: Newly assigned disks are discovered automatically; if not, go to Disk Management and rescan the disks, or go to Device Manager and scan for hardware changes.
DEV#:   0  DEVICE NAME: Disk2 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 75207814703
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
   0     Scsi Port1 Bus0/Disk2 Part0  OPEN    NORMAL      203       4
   1     Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL      173       1

DEV#:   1  DEVICE NAME: Disk3 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 75ABTV14703
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
   0     Scsi Port1 Bus0/Disk3 Part0  OPEN    NORMAL      180       0
   1     Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL      158       0

DEV#:   2  DEVICE NAME: Disk4 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 75034614703
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
   0     Scsi Port1 Bus0/Disk4 Part0  OPEN    NORMAL      221       0
   1     Scsi Port2 Bus0/Disk4 Part0  OPEN    NORMAL      159       0
Another helpful command is datapath query wwpn, shown in Example 16-4. It helps you to get the Worldwide Port Name (WWPN) of your Fibre Channel adapter.
Example 16-4 datapath query wwpn with SDDDSM
C:\Program Files\IBM\SDDDSM>datapath query wwpn
         Adapter Name     PortWWN
Scsi Port1:               210000E08B1EAE9B
Scsi Port2:               210000E08B0B8836
The command datapath query portmap is shown in Example 16-5. It shows you a map of the DS8000 I/O ports and to which I/O ports your HBAs are connected.
Note: 2105 devices' essid has 5 digits, while 1750/2107 device's essid has 7 digits.
Another very helpful command is datapath query essmap, shown in Example 16-6. It gives you additional information about your LUNs and also the I/O port numbers.
Example 16-6 datapath query essmap
C:\Program Files\IBM\SDDDSM>datapath query essmap
Disk    Path   P  Location    LUN SN       Type         Size    LSS  Vol  Rank  C/A  S  Connection   Port  RaidMode
------  -----  -  ----------  -----------  -----------  ------  ---  ---  ----  ---  -  -----------  ----  --------
Disk2   Path0     Port1 Bus0  75207814703  IBM 2107900  12.0GB   47   03  0000   2c  Y  R1-B2-H4-ZD   143  RAID5
Disk2   Path1     Port2 Bus0  75207814703  IBM 2107900  12.0GB   47   03  0000   2c  Y  R1-B4-H4-ZD   343  RAID5
Disk3   Path0     Port1 Bus0  75ABTV14703  IBM 2107900  12.0GB   47   03  0000   0b  Y  R1-B1-H1-ZA     0  RAID5
Disk3   Path1     Port2 Bus0  75ABTV14703  IBM 2107900  12.0GB   47   03  0000   0b  Y  R1-B2-H3-ZA   130  RAID5
Disk4   Path0     Port1 Bus0  75034614703  IBM 2107900  12.0GB   47   03  0000   0e  Y  R1-B2-H4-ZD   143  RAID5
Disk4   Path1     Port2 Bus0  75034614703  IBM 2107900  12.0GB   47   03  0000   0e  Y  R1-B4-H2-ZD   313  RAID5
or
http://support.microsoft.com/default.aspx?scid=kb;en-us;304736&sd=tech
An example of how to expand a (DS8000) volume on a Windows 2003 host is shown in the following discussion. To list the volume size, use the command lsfbvol as shown in Example 16-7 on page 381.
Here we can see that the capacity is 12 GB, and also what the volume ID is. To find out which disk this volume is on the Windows 2003 host, we use the SDDDSM command datapath query device on the Windows host, shown in Example 16-8. To open a command window for SDDDSM, from your desktop, click Start -> Programs -> Subsystem Device Driver DSM -> Subsystem Device Driver DSM.
Example 16-8 datapath query device before expansion
C:\Program Files\IBM\SDDDSM>datapath query device
Total Devices : 3

DEV#:   0  DEVICE NAME: Disk2 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 75034614703
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
   0     Scsi Port1 Bus0/Disk2 Part0  OPEN    NORMAL       42       0
   1     Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL       40       0

DEV#:   1  DEVICE NAME: Disk3 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 75207814703
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
   0     Scsi Port1 Bus0/Disk3 Part0  OPEN    NORMAL      259       0
   1     Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL      243       0

DEV#:   2  DEVICE NAME: Disk4 Part0   TYPE: 2107900   POLICY: OPTIMIZED
SERIAL: 75ABTV14703
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
   0     Scsi Port1 Bus0/Disk4 Part0  OPEN    NORMAL       48       0
   1     Scsi Port2 Bus0/Disk4 Part0  OPEN    NORMAL       34       0
Here we can see that the volume with id 75207814703 is Disk3 on the Windows host, because the volume ID matches the SERIAL on the Windows host. To see the size of the
volume on the Windows host, we use Disk Manager as shown in Figure 16-5 and Figure 16-6.
Figure 16-5 Volume size before expansion on Windows 2003, Disk Manager view
Figure 16-6 Volume size before expansion on Windows 2003, disk properties view
This shows that the volume size is 11.99 GB, which corresponds to 12 GB. To expand the volume on the DS8000, we use the command chfbvol (see Example 16-9). The new capacity must be larger than the previous one; you cannot shrink the volume.
Example 16-9 Expanding a volume
dscli> chfbvol -cap 18 4703
Date/Time: October 18, 2007 1:10:52 PM CEST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7520781
CMUC00332W chfbvol: Some host operating systems do not support changing the volume size. Are you sure that you want to resize the volume? [y/n]: y
CMUC00026I chfbvol: FB volume 4703 successfully modified.
To check that the volume has been expanded, we use the lsfbvol command as shown in Example 16-10. Here you can see that the volume 4703 has been expanded to 18 GB in capacity.
Example 16-10 lsfbvol after expansion
dscli> lsfbvol 4703
Date/Time: October 18, 2007 1:18:38 PM CEST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7520781
Name             ID   accstate datastate configstate deviceMTM datatype extpool cap (2^30B) cap (10^9B) cap (blocks)
=====================================================================================================================
ITSO_x346_3_4703 4703 Online   Normal    Normal      2107-900  FB 512   P53     18.0                    37748736
In Disk Management on the Windows host, we have to perform a rescan of the disks, after which the new capacity is shown for the disk, as shown in Figure 16-7.
This shows that Disk3 now has 6 GB of unallocated new capacity. To make this capacity available to the file system, use the diskpart command at a DOS prompt. In diskpart, list the volumes, select the right volume, check the size of the volume, extend the volume, and check the size again, as shown in Example 16-11.
Example 16-11 diskpart
C:\Documents and Settings\Administrator>diskpart
Microsoft DiskPart version 5.2.3790.3959
Copyright (C) 1999-2001 Microsoft Corporation.
On computer: X346-TIC-3

DISKPART> list volume

  Volume ###  Ltr  Label        Fs     Type        Size    Status    Info
  ----------  ---  -----------  -----  ----------  ------  --------  --------
  Volume 0     Z   DS8000_EXP   NTFS   Partition    12 GB  Healthy
  Volume 1     E                NTFS   Partition    34 GB  Healthy   System
  Volume 2     D                       DVD-ROM       0 B   Healthy
  Volume 3     C                NTFS   Partition    68 GB  Healthy   Boot

DISKPART> select volume 0
Volume 0 is the selected volume.

DISKPART> detail volume

  Disk ###  Status      Size     Free     Dyn  Gpt
  --------  ----------  -------  -------  ---  ---
* Disk 3    Online        18 GB  6142 MB

  Readonly                : No
  Hidden                  : No
  No Default Drive Letter : No
  Shadow Copy             : No

DISKPART> extend
DiskPart successfully extended the volume.

DISKPART> detail volume

  Disk ###  Status      Size     Free     Dyn  Gpt
  --------  ----------  -------  -------  ---  ---
* Disk 3    Online        18 GB     0 B
The result of the expansion at the disk manager is shown in Figure 16-8.
The example here refers to a Windows Basic Disk. Dynamic Disks can also be expanded by expanding the underlying DS8000 volume. The new space appears as unallocated space at the end of the disk. In this case, you do not need to use the diskpart tool, but just the Windows Disk Management functions, to allocate the new space. Expansion works irrespective of the volume type (simple, spanned, mirrored, and so on) on the disk. Dynamic Disks can be expanded without stopping I/O in most cases. The Windows 2000 operating system might require a hotfix, as documented in Microsoft Knowledge Base article Q327020:
http://support.microsoft.com/default.aspx?scid=kb;en-us;Q327020
Important: Never try to upgrade your Basic Disk to Dynamic Disk, or vice versa, without backing up your data, because this operation is disruptive to the data due to a different position of the LBA in the disks.
Windows Server 2003 allows the boot disk and the cluster server disks to be hosted on the same bus. However, you need to use StorPort miniport HBA drivers for this functionality to work. This is not a supported configuration in combination with drivers of other types (for example, SCSI port miniport or full port drivers). If you reboot a system with adapters while the primary path is in a failed state, you must manually disable the BIOS on the first adapter and manually enable the BIOS on the second adapter. You cannot enable the BIOS for both adapters at the same time. If the BIOS for both adapters is enabled at the same time and there is a path failure on the primary adapter, the system stops with an INACCESSIBLE_BOOT_DEVICE error upon reboot.
VDS software providers enable you to manage disks and volumes at the operating system level. VDS hardware providers supplied by the hardware vendor enable you to manage
hardware RAID Arrays. Windows Server 2003 components that work with VDS include the Disk Management Microsoft management console (MMC) snap-in; the DiskPart command-line tool; and the DiskRAID command-line tool, which is available in the Windows Server 2003 Deployment Kit. Figure 16-9 on page 386 shows the VDS architecture.
Figure 16-9 VDS architecture (hardware providers, Microsoft and non-Microsoft functionality, HDDs and LUNs on the DS8000/DS6000 and other disk subsystems)
For a detailed description of VDS, refer to the Microsoft Windows Server 2003 Virtual Disk Service Technical Reference at:
http://www.microsoft.com/Resources/Documentation/windowsserv/2003/all/techref/en-us/W2K3TR_vds_intro.asp
The DS8000 can act as a VDS hardware provider. The implementation is based on the DS Common Information Model (CIM) agent, a middleware application that provides a CIM-compliant interface. The Microsoft Virtual Disk Service uses the CIM technology to list information and manage LUNs. See the IBM System Storage DS Open Application Programming Interface Reference, GC35-0516, for information about how to install and configure VDS support. The following sections present examples of VDS integration with advanced functions of the DS8000 storage systems that became possible with the implementation of the DS CIM agent.
(Figure: IBM API support for Microsoft Volume Shadow Copy Service - Windows 2003 servers running VSS, a management server with a CIM client, DS CLI and ESS CLI, SMC, S-HMC, and DS6000, DS8000, and ESS storage units)
After the installation of these components, which is described in IBM System Storage DS Open Application Programming Interface Reference, GC35-0516, you have to:
Define a VSS_FREE volume group and virtual server.
Define a VSS_RESERVED volume group and virtual server.
Assign volumes to the VSS_FREE volume group.
The WWPN default for the VSS_FREE virtual server is 50000000000000; the WWPN default for the VSS_RESERVED virtual server is 50000000000001. These disks are available to the server as a pool of free disks. If you want to have different pools of free disks, you can define your own WWPN for another pool; see Example 16-12.
Example 16-12 ESS Provider Configuration Tool Commands Help
C:\Program Files\IBM\ESS Hardware Provider for VSS>ibmvssconfig.exe /?
ESS Provider Configuration Tool Commands
----------------------------------------
ibmvssconfig.exe <command> <command arguments>
Commands:
  /h | /help | -? | /?
  showcfg
  listvols <all|free|vss|unassigned>
  add <volumeID list> (separated by spaces)
  rem <volumeID list> (separated by spaces)
Configuration:
  set targetESS <5-digit ESS Id>
  set user <CIMOM user name>
  set password <CIMOM password>
  set trace [0-7]
  set trustpassword <trustpassword>
  set truststore <truststore location>
  set usingSSL <YES | NO>
  set vssFreeInitiator <WWPN>
  set vssReservedInitiator <WWPN>
  set FlashCopyVer <1 | 2>
  set cimomPort <PORTNUM>
  set cimomHost <Hostname>
  set namespace <Namespace>
With the ibmvssconfig.exe listvols command, you can also verify what volumes are available for VSS in the VSS_FREE pool; see Example 16-13.
Example 16-13 VSS list volumes at free pool
C:\Program Files\IBM\ESS Hardware Provider for VSS>ibmvssconfig.exe listvols free
Listing Volumes...
LSS Volume    Size                   Assigned to
---------------------------------------------------
10  003AAGXA  5.3687091E9 Bytes      5000000000000000
11  103AAGXA  2.14748365E10 Bytes    5000000000000000
Also, disks that are unassigned in your disk subsystem can be assigned with the add command to the VSS_FREE pool. In Example 16-14, we verify the volumes available for VSS.
Example 16-14 VSS list volumes available for vss C:\Program Files\IBM\ESS Hardware Provider for VSS>ibmvssconfig.exe listvols vss Listing Volumes...
LSS Volume    Size                   Assigned to
---------------------------------------------------
10  001AAGXA  1.00000072E10 Bytes    Unassigned
10  003AAGXA  5.3687091E9 Bytes      5000000000000000
11  103AAGXA  2.14748365E10 Bytes    5000000000000000
(Figure: Volume Shadow Copy Service overview - a backup application acting as requestor, application writers, and a FlashCopy from source volumes to target volumes in the VSS_FREE pool)
16.3 AIX
This section covers items specific to the IBM AIX operating system. It is not intended to repeat the information that is contained in other publications. We focus on topics that are not covered in the well-known literature or are important enough to be repeated here. For complete information, refer to IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917.
Part Number.................00P4494
EC Level....................A
Serial Number...............1A31005059
Manufacturer................001A
Feature Code/Marketing ID...2765
FRU Number..................00P4495
Network Address.............10000000C93318D6
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C93318D6
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U1.13-P1-I1/Q1
You can also print the WWPN of an HBA directly by running:
lscfg -vl <fcs#> | grep Network
The # stands for the instance of each FC HBA you want to query.
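For example, for the first Fibre Channel adapter instance (fcs0 is assumed here; the address shown is the one from the listing above):

lscfg -vl fcs0 | grep Network
        Network Address.............10000000C93318D6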
DEV#:   0  DEVICE NAME: vpath0  TYPE: 2107900  POLICY: Optimized
SERIAL: 75065711002
LUN IDENTIFIER: 6005076303FFC0B60000000000001002
==========================================================================
Path#    Adapter/Hard Disk   State    Mode     Select  Errors
   0     fscsi0/hdisk1       OPEN     NORMAL      843       0
   1     fscsi0/hdisk3       OPEN     NORMAL      906       0
   2     fscsi1/hdisk5       OPEN     NORMAL      900       0
   3     fscsi1/hdisk8       OPEN     NORMAL      867       0

DEV#:   1  DEVICE NAME: vpath1  TYPE: 2107900  POLICY: Optimized
SERIAL: 75065711003
LUN IDENTIFIER: 6005076303FFC0B60000000000001003
==========================================================================
Path#    Adapter/Hard Disk   State    Mode     Select  Errors
   0     fscsi0/hdisk2       CLOSE    NORMAL        0       0
   1     fscsi0/hdisk4       CLOSE    NORMAL        0       0
   2     fscsi1/hdisk6       CLOSE    NORMAL        0       0
   3     fscsi1/hdisk9       CLOSE    NORMAL        0       0

DEV#:   2  DEVICE NAME: vpath2  TYPE: 1750500  POLICY: Optimized
SERIAL: 13AAGXA1000
LUN IDENTIFIER: 600507630EFFFC6F0000000000001000
==========================================================================
Path#    Adapter/Hard Disk   State    Mode     Select  Errors
   0     fscsi0/hdisk10      OPEN     NORMAL     2686       0
   1*    fscsi0/hdisk12      OPEN     NORMAL        0       0
   2     fscsi1/hdisk14      OPEN     NORMAL     2677       0
   3*    fscsi1/hdisk16      OPEN     NORMAL        0       0

DEV#:   3  DEVICE NAME: vpath3  TYPE: 1750500  POLICY: Optimized
SERIAL: 13AAGXA1100
LUN IDENTIFIER: 600507630EFFFC6F0000000000001100
==========================================================================
Path#    Select  Errors
   0*         0       0
   1          0       0
   2*         0       0
   3          0       0
The datapath query portmap command shows the usage of the ports. In Example 16-17 on page 392, you see a mixed DS8000 and DS6000 disk configuration seen by the server. For the DS6000, the datapath query portmap command uses capital letters for the preferred paths and lower case letters for non preferred paths; this does not apply to the DS8000.
Example 16-17 Datapath query portmap on AIX
root@sanh70:/ > datapath query portmap
                  BAY-1(B1)            BAY-2(B2)            BAY-3(B3)            BAY-4(B4)
ESSID    DISK     H1   H2   H3   H4    H1   H2   H3   H4    H1   H2   H3   H4    H1   H2   H3   H4
                  ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD
                  BAY-5(B5)            BAY-6(B6)            BAY-7(B7)            BAY-8(B8)
                  H1   H2   H3   H4    H1   H2   H3   H4    H1   H2   H3   H4    H1   H2   H3   H4
                  ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD
13AAGXA  vpath2   Y--- ---- ---- ----  y--- ---- ---- ----  ---- ---- ---- ----  ---- ---- ---- ----
13AAGXA  vpath3   o--- ---- ---- ----  O--- ---- ---- ----  ---- ---- ---- ----  ---- ---- ---- ----
7506571  vpath0   -Y-- ---- ---- ----  ---- ---- -Y-- ----  ---- ---- ---- ----  ---- ---- ---- ----
7506571  vpath1   -O-- ---- ---- ----  ---- ---- -O-- ----  ---- ---- ---- ----  ---- ---- ---- ----

Y  = online/open
O  = online/closed
N  = offline
-  = path not configured
PD = path down
Note: 2105 devices' essid has 5 digits, while 1750/2107 device's essid has 7 digits.
Sometimes the lsvpcfg command helps you get an overview of your configuration. You can easily count how many physical disks there are, with which serial number, and how many paths. See Example 16-18.
Example 16-18 lsvpcfg command
root@sanh70:/ > lsvpcfg
vpath0 (Avail pv sdd_testvg) 75065711002 = hdisk1 (Avail ) hdisk3 (Avail ) hdisk5 (Avail ) hdisk8 (Avail )
vpath1 (Avail ) 75065711003 = hdisk2 (Avail ) hdisk4 (Avail ) hdisk6 (Avail ) hdisk9 (Avail )
There are also some other valuable features in SDD for AIX:
Enhanced SDD configuration methods and migration. SDD has a feature in the configuration method to read the pvid from the physical disks and convert the pvid from hdisks to vpaths during the SDD vpath configuration. With this feature, you can skip the process of converting the pvid from hdisks to vpaths after configuring SDD devices. Furthermore, SDD migration can skip the pvid conversion process. This tremendously reduces the SDD migration time, especially with a large number of SDD devices and an LVM configuration environment.
Allow mixed volume groups with non-SDD devices in hd2vp, vp2hd, and dpovgfix. Mixed volume groups are supported by three SDD LVM conversion scripts: hd2vp, vp2hd, and dpovgfix. These three SDD LVM conversion script files allow pvid conversion even if the volume group consists of SDD-supported devices and non-SDD-supported devices. The non-SDD-supported devices allowed are IBM RDAC, EMC PowerPath, NEC MPO, and Hitachi Dynamic Link Manager devices.
Migration option for large device configuration. SDD offers an environment variable, SKIP_SDD_MIGRATION, for you to customize the SDD migration or upgrade to maximize performance. The SKIP_SDD_MIGRATION environment variable is an option available to permit the bypass of the SDD automated migration process backup, restoration, and recovery of LVM configurations and SDD device configurations. This variable can help decrease the SDD upgrade time if you choose to reboot the system after upgrading SDD. For details about these features, see the IBM System Storage Multipath Subsystem Device Driver Users Guide, SC30-4131.
The management of MPIO devices is described in the online guide System Management Guide: Operating System and Devices for AIX 5L from the AIX documentation Web site at:
http://publib16.boulder.ibm.com/pseries/en_US/aixbman/baseadmn/manage_mpio.htm
For information about SDDPCM commands, refer to the IBM System Storage Multipath Subsystem Device Driver Users Guide, SC30-4131. The SDDPCM Web site is located at:
http://www-1.ibm.com/servers/storage/support/software/sdd/index.html
Benefits of MPIO
There are several reasons to prefer MPIO with SDDPCM to traditional SDD:
Performance improvements due to direct integration with AIX
Better integration if different storage systems are attached
Easier administration through native AIX commands
DS8000 volumes to the operating system as MPIO manageable. Of course, you cannot have SDD and MPIO/SDDPCM on a given server at the same time.
Like SDD, MPIO with PCM supports the preferred paths of the DS6000. In the DS8000, there are no preferred paths. The load-leveling algorithm can be changed, as with SDD. Example 16-20 shows a pcmpath query device command for a mixed environment, with two DS8000 volumes and one DS6000 disk.
Example 16-20 MPIO pcmpath query device
root@san5198b:/ > pcmpath query device

DEV#:   2  DEVICE NAME: hdisk2  TYPE: 2107900  ALGORITHM: Load Balance
SERIAL: 75065711100
==========================================================================
Path#    Adapter/Path Name   State    Mode     Select  Errors
   0     fscsi0/path0        OPEN     NORMAL     1240       0
   1     fscsi0/path1        OPEN     NORMAL     1313       0
   2     fscsi0/path2        OPEN     NORMAL     1297       0
   3     fscsi0/path3        OPEN     NORMAL     1294       0

DEV#:   3  DEVICE NAME: hdisk3  TYPE: 2107900  ALGORITHM: Load Balance
SERIAL: 75065711101
==========================================================================
Path#    Adapter/Path Name   State    Mode     Select  Errors
   0     fscsi0/path0        CLOSE    NORMAL        0       0
   1     fscsi0/path1        CLOSE    NORMAL        0       0
   2     fscsi0/path2        CLOSE    NORMAL        0       0
   3     fscsi0/path3        CLOSE    NORMAL        0       0

DEV#:   4  DEVICE NAME: hdisk4  TYPE: 1750500  ALGORITHM: Load Balance
SERIAL: 13AAGXA1101
==========================================================================
Path#    Adapter/Path Name   State    Mode     Select  Errors
   0*    fscsi0/path0        OPEN     NORMAL       12       0
   1                                             3787       0
   2*                                              17       0
   3                                             3822       0
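The path-selection algorithm can also be changed for each MPIO device with pcmpath. The following is only a sketch, assuming device number 2 from the listing above and the round-robin option; check the IBM System Storage Multipath Subsystem Device Driver Users Guide, SC30-4131, for the algorithms available at your code level:

pcmpath set device 2 algorithm rr
pcmpath query device 2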
All other commands are similar to SDD, such as pcmpath query essmap or pcmpath query portmap. In Example 16-21, you see these commands in a mixed environment with two DS8000 disks and one DS6000 disk.
Example 16-21 MPIO pcmpath queries in a mixed DS8000 and DS6000 environment
root@san5198b:/ > pcmpath query essmap
Disk    Path   P  Location      adapter  LUN SN       Type          Size  LSS  Vol  Rank  C/A  S  Connection   port  RaidMode
------  -----  -  ------------  -------  -----------  ------------  ----  ---  ---  ----  ---  -  -----------  ----  --------
hdisk2  path0     1p-20-02[FC]  fscsi0   75065711100  IBM 2107-900   5.0   17    0  0000   17  Y  R1-B1-H1-ZB     1  RAID5
hdisk2  path1     1p-20-02[FC]  fscsi0   75065711100  IBM 2107-900   5.0   17    0  0000   17  Y  R1-B2-H3-ZB   131  RAID5
hdisk2  path2     1p-20-02[FC]  fscsi0   75065711100  IBM 2107-900   5.0   17    0  0000   17  Y  R1-B3-H4-ZB   241  RAID5
hdisk2  path3     1p-20-02[FC]  fscsi0   75065711100  IBM 2107-900   5.0   17    0  0000   17  Y  R1-B4-H2-ZB   311  RAID5
hdisk3  path0     1p-20-02[FC]  fscsi0   75065711101  IBM 2107-900   5.0   17    1  0000   17  Y  R1-B1-H1-ZB     1  RAID5
hdisk3  path1     1p-20-02[FC]  fscsi0   75065711101  IBM 2107-900   5.0   17    1  0000   17  Y  R1-B2-H3-ZB   131  RAID5
hdisk3  path2     1p-20-02[FC]  fscsi0   75065711101  IBM 2107-900   5.0   17    1  0000   17  Y  R1-B3-H4-ZB   241  RAID5
hdisk3  path3     1p-20-02[FC]  fscsi0   75065711101  IBM 2107-900   5.0   17    1  0000   17  Y  R1-B4-H2-ZB   311  RAID5
hdisk4  path0  *  1p-20-02[FC]  fscsi0   13AAGXA1101  IBM 1750-500  10.0   17    1  0000   07  Y  R1-B1-H1-ZA     0  RAID5
hdisk4  path1     1p-20-02[FC]  fscsi0   13AAGXA1101  IBM 1750-500  10.0   17    1  0000   07  Y  R1-B2-H1-ZA   100  RAID5
hdisk4  path2  *  1p-28-02[FC]  fscsi1   13AAGXA1101  IBM 1750-500  10.0   17    1  0000   07  Y  R1-B1-H1-ZA     0  RAID5
hdisk4  path3     1p-28-02[FC]  fscsi1   13AAGXA1101  IBM 1750-500  10.0   17    1  0000   07  Y  R1-B2-H1-ZA   100  RAID5
root@san5198b:/ > pcmpath query portmap
                   BAY-1(B1)            BAY-2(B2)            BAY-3(B3)            BAY-4(B4)
ESSID     DISK     H1   H2   H3   H4    H1   H2   H3   H4    H1   H2   H3   H4    H1   H2   H3   H4
                   ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD
                   BAY-5(B5)            BAY-6(B6)            BAY-7(B7)            BAY-8(B8)
                   H1   H2   H3   H4    H1   H2   H3   H4    H1   H2   H3   H4    H1   H2   H3   H4
                   ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD  ABCD ABCD ABCD ABCD
7506571   hdisk2   -Y-- ---- ---- ----  ---- ---- -Y-- ----  ---- ---- ---- -Y--  ---- -Y-- ---- ----
7506571   hdisk3   -O-- ---- ---- ----  ---- ---- -O-- ----  ---- ---- ---- -O--  ---- -O-- ---- ----
13AAGXA   hdisk4   y--- ---- ---- ----  Y--- ---- ---- ----  ---- ---- ---- ----  ---- ---- ---- ----

Y  = online/open               y = (alternate path) online/open
O  = online/closed             o = (alternate path) online/closed
N  = offline                   n = (alternate path) offline
-  = path not configured
?  = path information not available
PD = path down
Note: 2105 devices' essid has 5 digits, while 1750/2107 device's essid has 7 digits.
Note that the non-preferred path asterisk is only for the DS6000.
Enabled  hdisk0    scsi0
Enabled  hdisk1    scsi0
Enabled  hdisk2    scsi0
Enabled  hdisk3    scsi7
Enabled  hdisk4    scsi7
...
Missing  hdisk9    fscsi0
Missing  hdisk10   fscsi0
Missing  hdisk11   fscsi0
Missing  hdisk12   fscsi0
Missing  hdisk13   fscsi0
...
Enabled  hdisk96   fscsi2
Enabled  hdisk97   fscsi6
Enabled  hdisk98   fscsi6
Enabled  hdisk99   fscsi6
Enabled  hdisk100  fscsi6
The chpath command is used to perform change operations on a specific path. It can either change the operational status or tunable attributes associated with a path. It cannot perform both types of operations in a single invocation. The rmpath command unconfigures or undefines, or both, one or more paths to a target device. It is not possible to unconfigure (undefine) the last path to a target device using the rmpath command. The only way to do this is to unconfigure the device itself (for example, use the rmdev command). Refer to the man pages of the MPIO commands for more information.
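As a brief, hypothetical illustration of these commands (hdisk2 and fscsi0 stand for one of your MPIO disks and its parent adapter): the first two commands disable and re-enable all paths to hdisk2 that go through fscsi0, the third removes those path definitions, and the last removes the device itself, including its last path.

chpath -l hdisk2 -p fscsi0 -s disable
chpath -l hdisk2 -p fscsi0 -s enable
rmpath -l hdisk2 -p fscsi0 -d
rmdev -dl hdisk2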
LVM striping
Striping is a technique for spreading the data in a logical volume across several physical disks in such a way that all disks are used in parallel to access data on one logical volume. The primary objective of striping is to increase the performance of a logical volume beyond that of a single physical disk. In the case of a DS8000, LVM striping can be used to distribute data across more than one array (rank).
physical volumes have the same size, optimal I/O load distribution among the available physical volumes will be achieved.
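As a sketch only (the volume group, logical volume name, strip size, and hdisk names are hypothetical; each hdisk is assumed to be a DS8000 LUN from a different rank), an LVM-striped logical volume and file system could be created like this:

mklv -y stripedlv -S 64K itsovg 64 hdisk2 hdisk3 hdisk4 hdisk5
crfs -v jfs2 -d stripedlv -m /stripedfs -A yes
mount /stripedfs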
LVM mirroring
LVM has the capability to mirror logical volumes across several physical disks. This improves availability, because in case a disk fails, there is another disk with the same data. When creating mirrored copies of logical volumes, make sure that the copies are indeed distributed across separate disks. With the introduction of SAN technology, LVM mirroring can even provide protection against a site failure. Using longwave Fibre Channel connections, a mirror can be stretched up to a 10 km distance.
Synchronous I/O
Synchronous I/O occurs while you wait. An application's processing cannot continue until the I/O operation is complete. This is a very secure and traditional way to handle data. It ensures consistency at all times, but can be a major performance inhibitor. It also does not allow the operating system to take full advantage of functions of modern storage devices, such as queuing, command reordering, and so on.
Asynchronous I/O
Asynchronous I/O operations run in the background and do not block user applications. This improves performance, because I/O and application processing run simultaneously. Many applications, such as databases and file servers, take advantage of the ability to overlap processing and I/O. They have to take measures to ensure data consistency, though. You can configure, remove, and change asynchronous I/O for each device using the chdev command or SMIT.
Tip: If the number of asynchronous I/O (AIO) requests is high, then we recommend that you increase maxservers to approximately the number of simultaneous I/Os that there might be. In most cases, it is better to leave the minservers parameter to the default value, because the AIO kernel extension will generate additional servers if needed. By looking at the CPU utilization of the AIO servers, if the utilization is even across all of them, that means that they are all being used; you might want to try increasing their number in this case. Running pstat -a allows you to see the AIO servers by name, and running ps -k shows them to you as the name kproc.
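On AIX 5.x, the legacy AIO subsystem is represented by the aio0 device, so a sketch of checking and raising maxservers looks like the following (the value 200 is only an example; the -P flag defers the change to the next restart):

lsattr -El aio0
chdev -l aio0 -a maxservers=200 -P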
Direct I/O
An alternative I/O technique called Direct I/O bypasses the Virtual Memory Manager (VMM) altogether and transfers data directly from the user's buffer to the disk and from the disk to the user's buffer. The concept behind this is similar to raw I/O in the sense that they both bypass caching at the file system level. This reduces CPU overhead and makes more memory available to the database instance, which can make more efficient use of it for its own purposes. Direct I/O is provided as a file system option in JFS2. It can be used either by mounting the corresponding file system with the mount -o dio option, or by opening a file with the O_DIRECT flag specified in the open() system call. When a file system is mounted with the -o dio option, all files in the file system use Direct I/O by default. Direct I/O benefits applications that have their own caching algorithms by eliminating the overhead of copying data twice, first between the disk and the OS buffer cache, and then from the buffer cache to the application's memory. For applications that benefit from the operating system cache, do not use Direct I/O, because all I/O operations are synchronous. Direct I/O also bypasses the JFS2 read-ahead. Read-ahead can provide a significant performance boost for sequentially accessed files.
Concurrent I/O
In 2003, IBM introduced a new file system feature called Concurrent I/O (CIO) for JFS2. It includes all the advantages of Direct I/O and also relieves the serialization of write accesses. It improves performance for many environments, particularly commercial relational databases. In many cases, the database performance achieved using Concurrent I/O with JFS2 is comparable to that obtained by using raw logical volumes. A method for enabling the concurrent I/O mode is to use the mount -o cio option when mounting a file system.
Example 16-23 on page 399 shows an AIX file system that was created on a single DS8000 logical volume. The DSCLI is used to display the characteristics of the DS8000 logical volumes; AIX LVM commands show the definitions of volume group, logical volume and file systems. Note that the available space for the file system is almost used up.
Example 16-23 DS8000 Logical Volume and AIX File System before Dynamic Volume Expansion
dscli> lsfbvol 4700
Date/Time: November 6, 2007 9:13:37 AM CET IBM DSCLI Version: 5.3.0.794 DS: IBM.2107-7520781
Name             ID   accstate datastate configstate deviceMTM datatype extpool cap (2^30B) cap (10^9B) cap (blocks)
==========================================================================================
ITSO_p550_1_4700 4700 Online   Normal    Normal      2107-900  FB 512   P53     18.0                    37748736

# lsvg -p dvevg
dvevg:
PV_NAME   PV STATE   FREE PPs
hdisk0    active     5

# lsvg -l dvevg
dvevg:
LV NAME   LPs   PPs   PVs
dvelv     280   280   1
loglv00   1     1     1

# lsfs /dvefs
Name         Nodename   Mount Pt   VFS
/dev/dvelv   --         /dvefs     jfs2
If more space is required in this file system, two options are available with the AIX operating system: either add another DS8000 logical volume (which is a physical volume on AIX) to the AIX volume group or extend the DS8000 logical volume and subsequently adjust the AIX LVM definitions. The second option is demonstrated in Example 16-24 on page 399. The DSCLI is used to extend the DS8000 logical volume. On the attached AIX host, the configuration change is read out with the AIX commands cfgmgr and chvg. Afterwards, the file system is expanded online and the results are displayed.
Example 16-24 Dynamic Volume Expansion of DS8000 Logical Volume and AIX File System
dscli> chfbvol -cap 24 4700
Date/Time: November 6, 2007 9:15:33 AM CET IBM DSCLI Version: 5.3.0.794 DS: IBM.2107-7520781
CMUC00332W chfbvol: Some host operating systems do not support changing the volume size. Are you sure that you want to resize the volume? [y/n]: y
CMUC00026I chfbvol: FB volume 4700 successfully modified.

# cfgmgr
# chvg -g dvevg
# lsvg -p dvevg
dvevg:
PV_NAME   PV STATE   TOTAL PPs   FREE PPs
hdisk0    active     382         37

# lsvg -l dvevg
dvevg:
LV NAME   TYPE      LPs   PPs   PVs
dvelv     jfs2      344   344   1
loglv00   jfs2log   1     1     1

# lsfs /dvefs
Name         Nodename   Mount Pt
/dev/dvelv   --         /dvefs
There are some limitations regarding the online size extension of the AIX volume group. It might be required to deactivate and then reactivate the AIX volume group for LVM to see the size change on the disks. If necessary, please check the appropriate AIX documentation.
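A sketch of the deactivate/reactivate sequence, using the volume group and file system from the example above (file systems in the volume group must be unmounted first):

umount /dvefs
varyoffvg dvevg
varyonvg dvevg
chvg -g dvevg
mount /dvefs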
16.4 Linux
Linux is an open source UNIX-like kernel, originally created by Linus Torvalds. The term Linux is often used to mean the whole operating system of GNU/Linux. The Linux kernel, along with the tools and software needed to run an operating system, are maintained by a loosely organized community of thousands of (mostly) volunteer programmers. There are several organizations (distributors) that bundle the Linux kernel, tools, and applications to form a distribution, a package that can be downloaded or purchased and installed on a computer. Some of these distributions are commercial; others are not.
16.4.1 Support issues that distinguish Linux from other operating systems
Linux differs from proprietary operating systems in many ways:
There is no single person or organization that can be held responsible or called for support.
Depending on the target group, the distributions differ largely in the kind of support that is available.
Linux is available for almost all computer architectures.
Linux is rapidly changing.
All these factors make it difficult to promise and provide generic support for Linux. As a consequence, IBM has decided on a support strategy that limits the uncertainty and the amount of testing. IBM only supports the major Linux distributions that are targeted at enterprise clients:
Red Hat Enterprise Linux
SUSE Linux Enterprise Server
Asianux (Red Flag Linux)
These distributions have release cycles of about one year, are maintained for five years, and require you to sign a support contract with the distributor. They also have a schedule for regular updates. These factors mitigate the issues listed previously. The limited number of supported distributions also allows IBM to work closely with the vendors to ensure interoperability and support. Details about the supported Linux distributions can be found in the DS8000 Interoperability Matrix and the System Storage Interoperation Center (SSIC):
http://www-1.ibm.com/servers/storage/disk/DS8000/interop.html
http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes
There are exceptions to this strategy when the market demand justifies the test and support effort.
topologies and terminology, and instructions to attach open systems (Fixed Block) storage devices using FCP to an IBM System z running Linux. It can be found at:
http://www.redbooks.ibm.com/redpapers/pdfs/redp0205.pdf
IBM System z dedicates its own Web page to storage attachment using FCP:
http://www-03.ibm.com/systems/z/connectivity/products/
The IBM System z Connectivity Handbook discusses the connectivity options available for use within and beyond the data center for IBM System z9 and zSeries servers. There is an extra section for FC attachment:
http://www.redbooks.ibm.com/redbooks.nsf/RedbookAbstracts/sg245444.html?Open
The white paper ESS Attachment to United Linux 1 (IA-32) is available at:
http://www.ibm.com/support/docview.wss?uid=tss1td101235
It is intended to help users to attach a server running an enterprise-level Linux distribution based on United Linux 1 (IA-32) to the IBM 2105 Enterprise Storage Server. It provides very detailed step-by-step instructions and background information about Linux and SAN storage attachment. Another white paper, Linux on IBM eServer pSeries SAN - Overview for Customers, describes in detail how to attach SAN storage (ESS 2105 and DS4000) to a System p server running Linux:
http://www.ibm.com/servers/eserver/pseries/linux/whitepapers/linux_san.pdf
Most of the information provided in these publications is valid for DS8000 attachment, although much of it was originally written for the ESS 2105.
Table 16-1 Major numbers and special device files
Major number   First special device file   Last special device file
Each SCSI device can have up to 15 partitions, which are represented by the special device files /dev/sda1, /dev/sda2, and so on. Mapping partitions to special device files and major and minor numbers is shown in Table 16-2 on page 403.
Table 16-2 Minor numbers, partitions, and special device files
Major number   Minor number   Special device file   Partition
8              0              /dev/sda              All of 1st disk
8              1              /dev/sda1             First partition of 1st disk
...
8              15             /dev/sda15            15th partition of 1st disk
8              16             /dev/sdb              All of 2nd disk
8              17             /dev/sdb1             First partition of 2nd disk
...
8              31             /dev/sdb15            15th partition of 2nd disk
8              32             /dev/sdc              All of 3rd disk
...
8              255            /dev/sdp15            15th partition of 16th disk
65             0              /dev/sdq              All of 17th disk
65             1              /dev/sdq1             First partition of 17th disk
...
The mknod command requires the following parameters:
The name of the special device file to create
The type of the device: b stands for a block device, c for a character device
The major number of the device
The minor number of the device
Refer to the man page of the mknod command for more details. Example 16-25 on page 404 shows the creation of special device files for the seventeenth SCSI disk and its first three partitions.
Example 16-25 Create new special device files for SCSI disks
mknod /dev/sdq  b 65 0
mknod /dev/sdq1 b 65 1
mknod /dev/sdq2 b 65 2
mknod /dev/sdq3 b 65 3
After creating the device files, you might need to change their owner, group, and file permission settings to be able to use them. Often, the easiest way to do this is by duplicating the settings of existing device files, as shown in Example 16-26. Be aware that after this sequence of commands, all special device files for SCSI disks have the same permissions. If an application requires different settings for certain disks, you have to correct them afterwards.
Example 16-26 Duplicating the permissions of special device files
knox:~ # ls -l /dev/sda /dev/sda1
brw-rw----    1 root     disk       8,   0 2003-03-14 14:07 /dev/sda
brw-rw----    1 root     disk       8,   1 2003-03-14 14:07 /dev/sda1
knox:~ # chmod 660 /dev/sd*
knox:~ # chown root:disk /dev/sda*
done
You can display the DM-MPIO devices by issuing ls -l /dev/disk/by-name/, or by listing the partitions; the partitions with the name dm-<x> are DM-MPIO partitions (see Example 16-28).
Example 16-28 Display the DM-MPIO devices
x346-tic-4:/ # ls -l /dev/disk/by-name/
total 0
lrwxrwxrwx 1 root root 10 Nov  8 14:14 mpath0 -> ../../dm-0
lrwxrwxrwx 1 root root 10 Nov  8 14:14 mpath1 -> ../../dm-1
lrwxrwxrwx 1 root root 10 Nov  8 14:14 mpath2 -> ../../dm-2
x346-tic-4:/ # cat /proc/partitions
major minor  #blocks  name
  8     0   35548160  sda
  8     1   35543781  sda1
  8    16   71686144  sdb
  8    17    2104483  sdb1
  8    18   69577515  sdb2
  8    32   12582912  sdc
  8    48   12582912  sdd
  8    64   12582912  sde
  8    80   12582912  sdf
  8    96   12582912  sdg
  8   112   12582912  sdh
  8   128   12582912  sdi
  8   144   12582912  sdj
  8   160   12582912  sdk
253     0   12582912  dm-0
253     1   12582912  dm-1
253     2   12582912  dm-2
To find out the mapping between a DM-MPIO device and the volume on the DS8000, use the multipath command, as shown in Example 16-29. The DM-MPIO device dm-1 is the volume 4707 on the DS8000 with the WWNN 5005076303FFC1A5.
Example 16-29 DM-MPIO device to DS8000 volume mapping
x346-tic-4:/cd_image # multipath -l
mpath2 (36005076303ffc6630000000000004707) dm-2 IBM,2107900
[size=12G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:5:0 sdh 8:112 [active][undef]
 \_ 2:0:5:0 sdk 8:160 [active][undef]
mpath1 (36005076303ffc1a50000000000004707) dm-1 IBM,2107900
[size=12G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:4:0 sdg 8:96 [active][undef]
 \_ 2:0:4:0 sdj 8:144 [active][undef]
mpath0 (36005076303ffc08f0000000000004707) dm-0 IBM,2107900
[size=12G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:0:0 sdc 8:32 [active][undef]
 \_ 1:0:1:0 sdd 8:48 [active][undef]
 \_ 1:0:2:0 sde 8:64 [active][undef]
 \_ 1:0:3:0 sdf 8:80 [active][undef]
 \_ 2:0:3:0 sdi 8:128 [active][undef]
For the Linux 2.6 kernels, the number of major and minor bits has been increased to 12 and 20 bits, respectively; thus, Linux 2.6 kernels can support thousands of disks. There is still a limitation of only up to 15 partitions per disk.
Issue mkinitrd -h for more help information. If you reboot now, the SCSI and FC HBA drivers are loaded in the correct order. Example 16-30 shows how the /etc/modules.conf file should look with two Adaptec SCSI controllers and two QLogic 2340 FC HBAs installed. It also contains the line that enables multiple LUN support. Note that the module names are different with different SCSI and Fibre Channel adapters.
Example 16-30 Sample /etc/modules.conf
scsi_hostadapter aic7xxx
scsi_hostadapter1 aic7xxx
scsi_hostadapter2 qla2300
scsi_hostadapter3 qla2300
options scsi_mod max_scsi_luns=128
Example 16-32 shows the SCSI disk assignment after one more DS8000 volume is added.
Example 16-32 SCSI disks after dynamic addition of another DS8000 volume
/dev/sda  internal SCSI disk
/dev/sdb  1st DS8000 volume, seen by HBA 0
/dev/sdc  2nd DS8000 volume, seen by HBA 0
/dev/sdd  1st DS8000 volume, seen by HBA 1
/dev/sde  2nd DS8000 volume, seen by HBA 1
/dev/sdf  new DS8000 volume, seen by HBA 0
/dev/sdg  new DS8000 volume, seen by HBA 1
The mapping of special device files is now different from what it would have been if all three DS8000 volumes had already been present when the HBA driver was loaded. In other words, if the system is now restarted, the device ordering changes to what is shown in Example 16-33.
Example 16-33 SCSI disks after dynamic addition of another DS8000 volume and reboot
/dev/sda  internal SCSI disk
/dev/sdb  1st DS8000 volume, seen by HBA 0
/dev/sdc  2nd DS8000 volume, seen by HBA 0
/dev/sdd  new DS8000 volume, seen by HBA 0
/dev/sde  1st DS8000 volume, seen by HBA 1
/dev/sdf  2nd DS8000 volume, seen by HBA 1
/dev/sdg  new DS8000 volume, seen by HBA 1
  Vendor: IBM-ESXS Model: DTN036C1UCDY10F   Rev: S25J
  Type:   Direct-Access                     ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 08 Lun: 00
  Vendor: IBM      Model: 32P0032a S320  1  Rev: 1
  Type:   Processor                         ANSI SCSI revision: 02
Host: scsi2 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2107900           Rev: .545
  Type:   Direct-Access                     ANSI SCSI revision: 03
Host: scsi2 Channel: 00 Id: 00 Lun: 01
  Vendor: IBM      Model: 2107900           Rev: .545
  Type:   Direct-Access                     ANSI SCSI revision: 03
Host: scsi2 Channel: 00 Id: 00 Lun: 02
  Vendor: IBM      Model: 2107900           Rev: .545
  Type:   Direct-Access                     ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2107900           Rev: .545
  Type:   Direct-Access                     ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 01
  Vendor: IBM      Model: 2107900           Rev: .545
  Type:   Direct-Access                     ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 02
  Vendor: IBM      Model: 2107900           Rev: .545
  Type:   Direct-Access                     ANSI SCSI revision: 03
There is also an entry in /proc for each HBA, with driver and firmware levels, error counters, and information about the attached devices. Example 16-35 shows the condensed content of the entry for a QLogic Fibre Channel HBA.
Example 16-35 Sample /proc/scsi/qla2300/x
knox:~ # cat /proc/scsi/qla2300/2
QLogic PCI to Fibre Channel Host Adapter for ISP23xx:
        Firmware version: 3.01.18, Driver version 6.05.00b9
Entry address = c1e00060
HBA: QLA2312 , Serial# H28468
Request Queue = 0x21f8000, Response Queue = 0x21e0000
Request Queue count= 128, Response Queue count= 512
.
.
Login retry count = 012
Commands retried with dropped frame(s) = 0

SCSI Device Information:
scsi-qla0-adapter-node=200000e08b0b941d;
scsi-qla0-adapter-port=210000e08b0b941d;
scsi-qla0-target-0=5005076300c39103;

SCSI LUN Information:
(Id:Lun)
( 0: 0): Total reqs 99545, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 1): Total reqs 9673, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 2): Total reqs 100914, Pending reqs 0, flags 0x0, 0:0:81,
Useful sg tools are:
sg_inq /dev/sgx prints SCSI Inquiry data, such as the volume serial number.
sg_scan prints the /dev/sg scsihost, channel, target, LUN mapping.
sg_map prints the /dev/sd to /dev/sg mapping.
sg_readcap prints the block size and capacity (in blocks) of the device.
sginfo prints SCSI inquiry and mode page data; it also allows you to manipulate the mode pages.
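As a quick, hypothetical illustration (the sg3_utils package must be installed, and /dev/sg2 stands for one of the DS8000 devices): sg_map lists the sd-to-sg mapping, sg_scan -i adds the inquiry strings, and sg_inq and sg_readcap query a single device.

sg_map
sg_scan -i
sg_inq /dev/sg2
sg_readcap /dev/sg2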
16.5 OpenVMS
DS8000 supports FC attachment of OpenVMS Alpha systems with operating system Version 7.3 or later. For details regarding operating system versions and HBA types, see the DS8000 Interoperability Matrix or the System Storage Interoperation Center (SSIC), available at:
http://www.ibm.com/servers/storage/disk/ds8000/interop.html
http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes
The support includes clustering and multiple paths (exploiting the OpenVMS built-in multipathing). Boot support is available through Request for Price Quotations (RPQ).
Important: The DS8000 FC ports used by OpenVMS hosts must not be accessed by any other operating system, not even accidentally. The OpenVMS hosts have to be defined for access to these ports only, and you must ensure that no foreign HBA (without definition as an OpenVMS host) is seen by these ports. Conversely, an OpenVMS host must have access only to the DS8000 ports configured for OpenVMS compatibility. You must dedicate storage ports for only the OpenVMS host type. Multiple OpenVMS systems can access the same port. Appropriate zoning must be enforced from the beginning. The wrong access to storage ports used by OpenVMS hosts can clear the OpenVMS-specific settings for these ports. This might remain undetected for a long time, until a failure happens, and by then, I/Os might be lost. It is worth mentioning that OpenVMS is the only platform with this type of restriction (usually, different open systems platforms can share the same DS8000 FC adapters).
http://h71000.www7.hp.com/doc/82FINAL/6318/6318PRO.HTML
The rules are:
- Every FC volume must have a UDID that is unique throughout the OpenVMS cluster that accesses the volume. You can use the same UDID in a different cluster or for a different stand-alone host.
- If the volume is planned for MSCP serving, the UDID range is limited to 0-9999 (by operating system restrictions in the MSCP code).
OpenVMS system administrators tend to use elaborate schemes for assigning UDIDs, coding several hints about the physical configuration into this logical ID, for instance, odd/even values or reserved ranges to distinguish between multiple data centers, storage systems, or disk groups. Thus, they must be able to provide these numbers without additional restrictions imposed by the storage system. In the DS8000, the UDID is implemented with full flexibility, which leaves the responsibility for such restrictions with the user.
In Example 16-37, we configured a DS8000 volume with the UDID 8275 for OpenVMS attachment. This gives us the OpenVMS Fibre Channel disk device $1$DGA8275. You see the output from the OpenVMS command show device/full $1$DGA8275. The OpenVMS host has two Fibre Channel HBAs with names PGA0 and PGB0. Because each HBA accesses two DS8000 ports, we have four I/O paths.
Example 16-37 OpenVMS volume configuration $ show device/full $1$DGA8275: Disk $1$DGA8275: (NFTE18), device type IBM 2107900, is online, file-oriented device, shareable, device has multiple I/O paths, served to cluster via MSCP Server, error logging is enabled. Error count 0 Owner process "" Owner process ID 00000000 Reference count 0 Current preferred CPU Id 9 Host name "NFTE18" Alternate host name "NFTE17" Allocation class 1 Operations completed 2 Owner UIC [SYSTEM] Dev Prot S:RWPL,O:RWPL,G:R,W Default buffer size 512 Fastpath 1 Host type, avail Compaq AlphaServer GS60 6/525, yes Alt. type, avail Compaq AlphaServer GS60 6/525, yes
I/O paths to device             5
Path MSCP (NFTE17), primary path.
  Error count 0      Operations completed 0
Path PGA0.5005-0763-0319-8324 (NFTE18), current path.
  Error count 0      Operations completed 1
Path PGA0.5005-0763-031B-C324 (NFTE18).
  Error count 0      Operations completed 1
Path PGB0.5005-0763-0310-8324 (NFTE18).
  Error count 0      Operations completed 0
Path PGB0.5005-0763-0314-C324 (NFTE18).
  Error count 0      Operations completed 0
The DS CLI command lshostvol displays the mapping of DS8000 volumes to host system device names. You can find more details regarding this command in the IBM System Storage DS8000: Command-Line Interface User's Guide, SC26-7916. More details about the attachment of an OpenVMS host can be found in the IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917.
The DS CLI command chvolgrp provides the flag -lun, which you can use to control which volume becomes LUN 0.
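As an illustration only, an invocation might look like the following sketch. The volume group V11, the volume ID 4700, and the exact combination of parameters are assumptions for this example; verify the syntax in the DS CLI documentation before using it.

   dscli> chvolgrp -action add -volume 4700 -lun 00 V11   (assumed syntax: add volume 4700 to volume group V11 so that it becomes LUN 0)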
The DS8000 provides volumes as SCSI-3 devices and thus does not implement a forced error indicator. For data integrity reasons, it also does not support the READL and WRITEL command set. Usually, the OpenVMS SCSI Port Driver recognizes whether a device supports READL/WRITEL, and the driver sets the no forced error (NOFE) bit in the Unit Control Block. You can verify this setting with the SDA utility: after starting the utility with the analyze/system command, enter the show device command at the SDA prompt. The NOFE flag should then be shown in the device's characteristics. The OpenVMS command for mounting shadow sets provides a qualifier /override=no_forced_error to support non-DSA devices. To avoid possible problems (performance loss, unexpected error counts, or even removal of members from the shadow set), we recommend that you apply this qualifier.
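The following OpenVMS command sequence is a sketch of this check and of the recommended mount qualifier; the device name $1$DGA8275, the shadow set DSA12:, and the volume label DATA are examples only.

   $ ANALYZE/SYSTEM
   SDA> SHOW DEVICE $1$DGA8275        ! the NOFE flag should appear in the device characteristics
   SDA> EXIT
   $ MOUNT/SYSTEM DSA12: /SHADOW=($1$DGA8275:) /OVERRIDE=NO_FORCED_ERROR DATA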
16.6 VMware
The DS8000 currently supports the VMware high-end virtualization solution Virtual Infrastructure 3 and the included VMware ESX Server starting with version 2.5. A great deal of useful information is available in the IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917. This section is not intended to duplicate that publication, but rather provide more information about optimizing your VMware environment as well as a step-by-step guide to setting up ESX Server with the DS8000.
Other VMware products, such as VMware Server and Workstation, are not intended for the data center class environments where the DS8000 is typically used. The supported guest operating systems are Windows 2000 Server, Windows Server 2003, SUSE Linux SLES 8, 9, and 10, and Red Hat Enterprise Linux 2.1, 3.0, and 4.0. The VMotion feature is supported starting with version 2.5.1. This information is likely to change, so check the Interoperability Matrix and the System Storage Interoperation Center for complete, up-to-date information:
http://www.ibm.com/servers/storage/disk/ds8000/interop.html
http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes
In Figure 16-12, LUNs are assigned to the ESX host through two HBAs (vmhba1 and vmhba2). LUN vmhba1:0:0 is used by the service console OS, while the other LUNs are used for the virtual machines. One or more LUNs build a VMFS datastore. Datastores can be expanded dynamically by adding additional LUNs. In this example, VM1, VM2, and VM3 have stored their virtual disks on three different datastores. VM1 and VM2 share one virtual disk, which is located on datastore 2 and is accessible by both VMs (for a cluster solution, for example). The VMFS distributed lock manager manages the shared access to the datastores. VM3 uses both a virtual disk and an RDM to vmhba2:0:3. The .vmdk file acts as a proxy and contains all the information VM3 needs to access the LUN. While the .vmdk file is accessed when starting I/O, the I/O itself goes directly to the LUN.
As with other operating systems, you should have multiple paths from your server to the DS8000 to improve availability and reliability. Normally, the LUNs show up as multiple separate devices, but VMware contains native multipathing software that automatically conceals the redundant paths. Therefore, multipathing software is not needed on your guest operating systems. As with other operating systems, you also need to use persistent binding. See the IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917, for reasons why persistent binding is important and how to configure it for VMware.
After the LUNs are assigned properly, you can see them in Virtual Center by selecting the ESX host to which they have been assigned and then choosing Storage Adapters from the Configuration tab. If you choose an adapter, the connected LUNs are shown in the details section. You might have to tell VMware to refresh its disks by selecting Rescan in the upper right corner. Figure 16-13 shows the LUNs assigned to vmhba1 of the selected ESX server.
Note: At least one VMFS datastore is required to store the virtual machines' configuration files and the proxy files for RDMs. This is normally done automatically during the initial installation of ESX Server.

Option 1: Using virtual disks
To store virtual disk files on this LUN, it must be formatted with the VMFS. In the Configuration tab, select Storage (SCSI, SAN and NFS). On the right, you are presented with a list of all configured datastores on this ESX host. To add a new one, click Add Storage in the upper right corner and follow the steps through the process:
1. Choose the storage type (choose LUN/disk for FC SAN storage).
2. Select the LUN you want to use from the list.
3. Look over the current disk layout.
4. Enter a datastore name.
5. Select the maximum file size (depends on the block size) and enter the desired capacity of your new datastore.
6. On the "Ready to complete" page, click Finish to create the datastore.
After these steps, you need to perform a rescan to update the view.
To create a new virtual disk on this datastore, select the virtual machine you want to add it to and click Edit Settings. In the lower left corner of the pop-up window, click Add and follow the steps through the process:
1. Choose device type "hard disk".
2. Select "Create a new disk".
3. Enter the size and select the datastore where you want to store this virtual disk.
4. Select the virtual SCSI node and the mode of the disk.
5. On the summary tab, click Finish to provide this new virtual disk for the VM.
Note: With Virtual Infrastructure 3, it is now possible to add new disks to virtual machines while they are running. However, the guest OS must support this.

Option 2: Using raw device mapping (RDM)
To use a physical LUN inside a virtual machine, you need to create a raw device mapping. The LUN you want must already be known by the ESX host. If the LUN is not yet visible to the ESX host, check whether it is assigned correctly and perform a rescan. To create an RDM, select the virtual machine you want to add it to and click Edit Settings. In the lower left corner of the pop-up window, click Add and follow the steps through the process:
1. Choose device type "hard disk".
2. Select "Raw Device Mappings".
3. From the Select a disk window, select "Mapped SAN LUN".
4. Choose a raw LUN from the list.
5. Select a datastore onto which the raw LUN will be mapped (the proxy file will be stored here).
6. Choose the compatibility mode.
7. Select the virtual SCSI node.
8. On the summary tab, click Finish to assign this LUN to the VM.

Note: RDM is a requirement for VMotion.
Compatibility modes VMware offers two different modes for RDMs: physical compatibility mode and virtual compatibility mode. In physical compatibility mode all SCSI commands to the LUN are passed through the virtualization layer with minimal modification. As a result, system administrators can use the DS CLI command lshostvol to map the virtual machine disks to DS8000 disks. This option also generates the least overhead. Virtual compatibility mode on the other hand lets you take advantage of disk modes and other features like snapshots and redo logs that are normally only available for virtual disks.
Both RDM and virtual disks appear to the VM as regular SCSI disks and can be handled as such. The only difference is that virtual disks appear as "VMware Virtual Disk SCSI Disk Device" in the device tree, while RDMs appear as "IBM 2107900 SCSI Disk Device" (see Figure 16-14).
16.7 Sun Solaris

A great deal of useful information is available in the IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917. This section is not intended to duplicate that publication, but instead provides more information about optimizing your Sun Solaris environment as well as a step-by-step guide to using Solaris with the DS8000.
Several multipathing options, including IBM SDD, Sun's MPxIO, and VERITAS Volume Manager Dynamic Multipathing (DMP), are available, depending on your operating system version, your host bus adapters, and whether you use clustering. Details are available in the IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917. One difference between the multipathing technologies is whether they suppress the redundant paths to the storage. MPxIO and DMP both suppress all paths to the storage except for one, and the device appears to the application as a single-path device. SDD, however, allows the original paths to be seen, but creates its own virtual device (called a vpath) for applications to use. If you assign LUNs to your server before you install multipathing software, you can see each LUN show up as two or more devices, depending on how many paths you have. In Example 16-40, the iostat -nE command shows that the volume 75207814206 appears twice: once as c2t1d1 on the first HBA and once as c3t1d1 on the second HBA.
Example 16-40 Device listing without multipath software
# iostat -nE
c2t1d1 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: IBM Product: 2107900 Revision: .212 Serial No: 75207814206
Size: 10.74GB <10737418240 Bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c2t1d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: IBM Product: 2107900 Revision: .212 Serial No: 75207814205
Size: 10.74GB <10737418240 Bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c3t1d1 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: IBM Product: 2107900 Revision: .212 Serial No: 75207814206
Size: 10.74GB <10737418240 Bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c3t1d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: IBM Product: 2107900 Revision: .212 Serial No: 75207814205
Size: 10.74GB <10737418240 Bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
After you install the SDD software, you can see that the paths have been grouped into virtual vpath devices. Example 16-41 shows the output of the showvpath command.
Example 16-41 Output of the showvpath command
# /opt/IBMsdd/bin/showvpath
vpath1: Serial Number : 75207814206
   c2t1d1s0   /devices/pci@6,4000/fibre-channel@2/sd@1,1:a,raw
   c3t1d1s0   /devices/pci@6,2000/fibre-channel@1/sd@1,1:a,raw

vpath2: Serial Number : 75207814205
   c2t1d0s0   /devices/pci@6,4000/fibre-channel@2/sd@1,0:a,raw
   c3t1d0s0   /devices/pci@6,2000/fibre-channel@1/sd@1,0:a,raw
For each device, the operating system creates a node in the /dev/dsk and /dev/rdsk directories. After SDD is installed, you can see these new vpaths by listing the contents of those directories. Note that with SDD, the old paths are not suppressed. Instead, new vpath devices show up as /dev/rdsk/vpath1a, for example. When creating your volumes and file systems, be sure to use the vpath device instead of the original device.
SDD also offers parameters that you can tune for your environment. Specifically, SDD offers three different load balancing schemes:
- Failover: No load balancing. The second path is used only if the preferred path fails.
- Round robin: The paths to use are chosen at random (but different paths than the most recent I/O). If there are only two paths, they alternate.
- Load balancing: The path is chosen based on the estimated path load. This is the default policy.
The policy can be set with the datapath set device policy command.
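As an illustrative sketch (device number 0 is an example), the current policy of a vpath device can be displayed and changed with the datapath command:

   datapath query device 0            (shows the policy and the state of each path of vpath0)
   datapath set device 0 policy rr    (switches vpath0 to the round robin policy)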
Note that the command reports that both adapters are unconfigured. To configure the adapters, issue the cfgadm -c configure cx (where x is the adapter number, in this case, 3 and 4). Now, both adapters should show up as configured. Note: The cfgadm -c configure command is unnecessary in Solaris 10. To configure your MPxIO, you need to first enable it by editing the /kernel/drv/scsi_vhci.conf file. Find and change the mpxio-disable parameter to no (mpxio-disable="no";). For Solaris 10, you need to execute the stmsboot -e command to enable MPxIO. Next, add the following stanza to supply the vendor identification (VID) and product identification (PID) information to MPxIO in the /kernel/drv/scsi_vhci.conf file:
device-type-scsi-options-list =
        "IBM     2107900", "symmetric-option";
symmetric-option = 0x1000000;
The vendor string must be exactly eight bytes long, so you must type IBM followed by five spaces. Finally, the system must be rebooted. After the reboot, MPxIO is ready to be used. For more information about MPxIO, including all the MPxIO commands and tuning parameters, see the Sun Web site:
http://www.sun.com/storage/software/
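The complete enablement sequence described above can be summarized in the following sketch; the adapter numbers c3 and c4 are examples, and a reconfiguration reboot is required before MPxIO takes effect.

   cfgadm -c configure c3
   cfgadm -c configure c4
   vi /kernel/drv/scsi_vhci.conf      (set mpxio-disable="no"; and add the device-type-scsi-options-list stanza shown above)
   stmsboot -e                        (Solaris 10 only)
   reboot -- -r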
During device discovery, the vxconfigd daemon compares the serial numbers of the different devices. If two devices have the same serial number, they are the same LUN, and DMP combines the paths. Listing the contents of the /dev/vx/rdmp directory shows only one set of devices. The vxdisk path command also demonstrates DMP's path suppression capabilities. In Example 16-43, you can see that device c7t2d0s2 is suppressed and is only shown as a subpath of c6t1d0s2.
Example 16-43 vxdisk path command output
# vxdisk path
SUBPATH    DANAME     DMNAME   GROUP  STATE
c6t1d0s2   c6t1d0s2   Ethan01  Ethan  ENABLED
c7t2d0s2   c6t1d0s2   Ethan01  Ethan  ENABLED
c6t1d1s2   c7t2d1s2   Ethan02  Ethan  ENABLED
c7t2d1s2   c7t2d1s2   Ethan02  Ethan  ENABLED
c6t1d2s2   c7t2d2s2   Ethan03  Ethan  ENABLED
c7t2d2s2   c7t2d2s2   Ethan03  Ethan  ENABLED
c6t1d3s2   c7t2d3s2   Ethan04  Ethan  ENABLED
c7t2d3s2   c7t2d3s2   Ethan04  Ethan  ENABLED
Now, you create volumes using the device name listed under the DANAME column. In Figure 16-15, a volume is created using four disks, even though there are actually eight paths.
As with other multipathing software, DMP provides a number of parameters that you can tune in order to maximize the performance and availability in your environment. For example, it is possible to set a load balancing policy to dictate how the I/O should be shared between the different paths. It is also possible to select which paths get used in which order in case of a failure. You can find complete details about the features and capabilities of DMP on the VERITAS Web site:
http://www.symantec.com/business/theme.jsp?themeid=datacenter
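For example, the I/O policy for all LUNs behind a DS8000 enclosure can be displayed and changed with vxdmpadm. The enclosure name IBM_DS8x000 is taken from the earlier examples, and the set of available policy names can vary with the Storage Foundation release.

   vxdmpadm getattr enclosure IBM_DS8x000 iopolicy
   vxdmpadm setattr enclosure IBM_DS8x000 iopolicy=round-robin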
Here we can see that the capacity is 12 GB and that the volume ID is 4704. To find the corresponding disk on the Solaris host, you have to install the DS CLI on this host and execute the lshostvol command. The output is shown in Example 16-45.
Example 16-45 lshostvol output bash-3.00# /opt/ibm/dscli/bin/lshostvol.sh Device Name Volume ID -----------------------------------------c2t50050763030CC1A5d0 IBM.2107-7520781/4704 c2t50050763030CC08Fd0 IBM.2107-7503461/4704 c2t5005076303000663d0 IBM.2107-75ABTV1/4704 c3t50050763030B0663d0 IBM.2107-75ABTV1/4704 c3t500507630319C08Fd0 IBM.2107-7503461/4704 c3t50050763031CC1A5d0 IBM.2107-7520781/4704
Here we can see that the volume with ID 75207814704 is c2t50050763030CC1A5d0 or c3t50050763031CC1A5d0 on the Solaris host. To see the size of the volume on the Solaris host, we use the luxadm command, as shown in Example 16-46.
Example 16-46 luxadm output before volume expansion bash-3.00# luxadm display /dev/rdsk/c2t50050763030CC1A5d0s2 DEVICE PROPERTIES for disk: /dev/rdsk/c2t50050763030CC1A5d0s2 Status(Port A): O.K. Vendor: IBM Product ID: 2107900 WWN(Node): 5005076303ffc1a5 WWN(Port A): 50050763030cc1a5 Revision: .991 Serial Num: 75207814704 Unformatted capacity: 12288.000 MBytes Write Cache: Enabled Read Cache: Enabled Minimum prefetch: 0x0 Maximum prefetch: 0x16 Device Type: Disk device Path(s): /dev/rdsk/c2t50050763030CC1A5d0s2 /devices/pci@9,600000/SUNW,qlc@1/fp@0,0/ssd@w50050763030cc1a5,0:c,raw /dev/rdsk/c3t50050763031CC1A5d0s2 /devices/pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w50050763031cc1a5,0:c,raw
This indicates that the volume size is 12288 MB, equal to 12 GB. To find out the dmpnodename of this disk in VxVM, we have to use the vxdmpadm command (see Example 16-47 on page 427). The capacity of this disk, as shown in VxVM, can be found on the output line labeled public: after issuing a vxdisk list <dmpnodename> command. You have to multiply the value of len by 512 bytes, which is approximately equal to 12 GB (25095808 x 512).
Example 16-47 VxVM commands before volume expansion bash-3.00# vxdmpadm getsubpaths ctlr=c2 NAME STATE[A] PATH-TYPE[M] DMPNODENAME ENCLR-TYPE ENCLR-NAME ATTRS ================================================================================ NONAME DISABLED IBM_DS8x002_1 IBM_DS8x00 IBM_DS8x002 c2t50050763030CC08Fd0s2 ENABLED(A) IBM_DS8x002_0 IBM_DS8x00 IBM_DS8x002 c2t50050763030CC1A5d0s2 ENABLED(A) IBM_DS8x001_0 IBM_DS8x00 IBM_DS8x001 c2t5005076303000663d0s2 ENABLED(A) IBM_DS8x000_0 IBM_DS8x00 IBM_DS8x000 bash-3.00# vxdisk list IBM_DS8x001_0 Device: IBM_DS8x001_0 devicetag: IBM_DS8x001_0 type: auto hostid: v880 disk: name=IBM_DS8x001_0 id=1194446100.17.v880 group: name=20781_dg id=1194447491.20.v880 info: format=cdsdisk,privoffset=256,pubslice=2,privslice=2 flags: online ready private autoconfig autoimport imported pubpaths: block=/dev/vx/dmp/IBM_DS8x001_0s2 char=/dev/vx/rdmp/IBM_DS8x001_0s2 guid: {9ecb6cb6-1dd1-11b2-af7a-0003ba43fdc1} udid: IBM%5F2107%5F7520781%5F6005076303FFC1A50000000000004704 site: version: 3.1 iosize: min=512 (Bytes) max=2048 (blocks) public: slice=2 offset=65792 len=25095808 disk_offset=0 private: slice=2 offset=256 len=65536 disk_offset=0 update: time=1194447493 seqno=0.15 ssb: actual_seqno=0.0 headers: 0 240 configs: count=1 len=48144 logs: count=1 len=7296 Defined regions: config priv 000048-000239[000192]: copy=01 offset=000000 enabled config priv 000256-048207[047952]: copy=01 offset=000192 enabled log priv 048208-055503[007296]: copy=01 offset=000000 enabled lockrgn priv 055504-055647[000144]: part=00 offset=000000 Multipathing information: numpaths: 2 c2t50050763030CC1A5d0s2 state=enabled c3t50050763031CC1A5d0s2 state=enabled
We already created a file system on the logical volume of the VxVM disk group. The size of the file system (11 GB), mounted on /20781, is displayed by the df command, as shown in Example 16-48.
Example 16-48 df command before volume expansion
bash-3.00# df -k
Filesystem                        kBytes      used     avail  capacity  Mounted on
/dev/dsk/c1t0d0s0               14112721   4456706   9514888       32%  /
/devices                               0         0         0        0%  /devices
ctfs                                   0         0         0        0%  /system/contract
proc                                   0         0         0        0%  /proc
mnttab                                 0         0         0        0%  /etc/mnttab
swap                             7225480      1160   7224320        1%  /etc/svc/volatile
objfs                                  0         0         0        0%  /system/object
fd                                     0         0         0        0%  /dev/fd
swap                             7224320         0   7224320        0%  /tmp
swap                             7224368        48   7224320        1%  /var/run
swap                             7224320         0   7224320            /dev/vx/dmp
swap                             7224320         0   7224320            /dev/vx/rdmp
/dev/dsk/c1t0d0s7               20160418   2061226  17897588            /export/home
/dev/vx/dsk/03461_dg/03461_vol  10485760     20062   9811599            /03461
/dev/vx/dsk/20781_dg/20781_vol  11534336     20319  10794398            /20781
To expand the volume on the DS8000, we use the chfbvol command (see Example 16-49). The new capacity must be larger than the previous one; you cannot shrink the volume.
Example 16-49 Expanding a volume dscli> chfbvol -cap 18 4704 Date/Time: October 18, 2007 1:10:52 PM CEST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7520781 CMUC00332W chfbvol: Some host operating systems do not support changing the volume size. Are you sure that you want to resize the volume? [y/n]: y CMUC00026I chfbvol: FB volume 4704 successfully modified.
To check that the volume has been expanded, we use the lsfbvol command, as shown in Example 16-50. Here you can see that volume 4704 has been expanded to 18 GB in capacity.
Example 16-50 lsfbvol after expansion
dscli> lsfbvol 4704
Date/Time: November 7, 2007 11:05:15 PM CET IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7520781
Name           ID   accstate datastate configstate deviceMTM datatype extpool cap (2^30B) cap (10^9B) cap (blocks)
===================================================================================================================
ITSO_v880_4704 4704 Online   Normal    Normal      2107-900  FB 512   P53     18.0        -           37748736
To see the changed size of the volume on the Solaris host after the expansion, we use the luxadm command, as shown in Example 16-51.
Example 16-51 luxadm output after volume expansion bash-3.00# luxadm display /dev/rdsk/c2t50050763030CC1A5d0s2 DEVICE PROPERTIES for disk: /dev/rdsk/c2t50050763030CC1A5d0s2 Status(Port A): O.K. Vendor: IBM Product ID: 2107900 WWN(Node): 5005076303ffc1a5 WWN(Port A): 50050763030cc1a5 Revision: .991 Serial Num: 75207814704 Unformatted capacity: 18432.000 MBytes Write Cache: Enabled Read Cache: Enabled Minimum prefetch: 0x0 Maximum prefetch: 0x16 Device Type: Disk device Path(s): /dev/rdsk/c2t50050763030CC1A5d0s2 /devices/pci@9,600000/SUNW,qlc@1/fp@0,0/ssd@w50050763030cc1a5,0:c,raw /dev/rdsk/c3t50050763031CC1A5d0s2 /devices/pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w50050763031cc1a5,0:c,raw
The disk now has a capacity of 18 GB. To use the additional capacity, we have to issue the vxdisk resize command, as shown in Example 16-52. After the volume expansion, the disk size is 37677568 x 512 bytes, which is equal to 18 GB.
Note: You need at least two disks in the disk group where you want to resize a disk; otherwise, the vxdisk resize command will fail. In addition, Sun has found some potential issues with the vxdisk resize command in VERITAS Storage Foundation 4.0 and 4.1. More details about this issue can be found at:
http://sunsolve.sun.com/search/document.do?assetkey=1-26-102625-1&searchclause=102625
Example 16-52 VxVM commands after volume expansion bash-3.00# vxdisk resize IBM_DS8x001_0 bash-3.00# vxdisk list IBM_DS8x001_0 Device: IBM_DS8x001_0 devicetag: IBM_DS8x001_0 type: auto hostid: v880 disk: name=IBM_DS8x001_0 id=1194446100.17.v880 group: name=20781_dg id=1194447491.20.v880 info: format=cdsdisk,privoffset=256,pubslice=2,privslice=2 flags: online ready private autoconfig autoimport imported pubpaths: block=/dev/vx/dmp/IBM_DS8x001_0s2 char=/dev/vx/rdmp/IBM_DS8x001_0s2 guid: {fbdbfe12-1dd1-11b2-af7c-0003ba43fdc1} udid: IBM%5F2107%5F7520781%5F6005076303FFC1A50000000000004704 site: version: 3.1 iosize: min=512 (Bytes) max=2048 (blocks) public: slice=2 offset=65792 len=37677568 disk_offset=0 private: slice=2 offset=256 len=65536 disk_offset=0 update: time=1194473744 seqno=0.16 ssb: actual_seqno=0.0 headers: 0 240 configs: count=1 len=48144 logs: count=1 len=7296 Defined regions: config priv 000048-000239[000192]: copy=01 offset=000000 enabled config priv 000256-048207[047952]: copy=01 offset=000192 enabled log priv 048208-055503[007296]: copy=01 offset=000000 enabled lockrgn priv 055504-055647[000144]: part=00 offset=000000 Multipathing information: numpaths: 2 c2t50050763030CC1A5d0s2 state=enabled c3t50050763031CC1A5d0s2 state=enabled
Now we have to expand the logical volume and the file system in VxVM. First, we determine the maximum size to which the volume can grow; then we expand the logical volume and the file system (see Example 16-53).
Example 16-53 VxVM logical volume expansion bash-3.00# vxvoladm -g 20781_dg maxgrow 20781_vol Volume can be extended to: 37677056(17.97g) bash-3.00# vxvoladm -g 20781_dg growto 20781_vol 37677056
bash-3.00# /opt/VRTS/bin/fsadm -b 17g /20781 UX:vxfs fsadm: INFO: V-3-25942: /dev/vx/rdsk/20781_dg/20781_vol size increased from 23068672 sectors to 35651584 sectors
After the file system expansion, the df command shows a size of 17825792 KB, equal to 17 GB, on file system /dev/vx/dsk/20781_dg/20781_vol, as shown in Example 16-54.
Example 16-54 df command after file system expansion
bash-3.00# df -k
Filesystem                        kBytes      used     avail  capacity  Mounted on
/dev/dsk/c1t0d0s0               14112721   4456749   9514845       32%  /
/devices                               0         0         0        0%  /devices
ctfs                                   0         0         0        0%  /system/contract
proc                                   0         0         0        0%  /proc
mnttab                                 0         0         0        0%  /etc/mnttab
swap                             7222640      1160   7221480        1%  /etc/svc/volatile
objfs                                  0         0         0        0%  /system/object
fd                                     0         0         0        0%  /dev/fd
swap                             7221480         0   7221480        0%  /tmp
swap                             7221528        48   7221480        1%  /var/run
swap                             7221480         0   7221480        0%  /dev/vx/dmp
swap                             7221480         0   7221480        0%  /dev/vx/rdmp
/dev/dsk/c1t0d0s7               20160418   2061226  17897588       11%  /export/home
/dev/vx/dsk/03461_dg/03461_vol  10485760     20062   9811599        1%  /03461
/dev/vx/dsk/20781_dg/20781_vol  17825792     21861  16691193        1%  /20781
16.8 HP-UX

To prepare the host to attach the DS8000, refer to the DS8000 Information Center and select Configuring → Attaching Hosts → Hewlett-Packard Server (HP-UX) host attachment:
http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp
For SDD installation, refer to the IBM System Storage Multipath Subsystem Device Driver User's Guide, SC30-4131. The User's Guide is available at the download page for each individual SDD operating system version at:
http://www.ibm.com/support/dlsearch.wss?rs=540&tc=ST52G7&dc=D430
You can download the latest available ISO-image for the DS CLI CD from:
ftp://ftp.software.ibm.com/storage/ds8000/updates/DS8K_Customer_Download_Files/CLI/
Example 16-56 Discovered DS8000 devices without a special device file # ioscan -fnC disk Class I H/W Path Driver S/W State H/W Type Description ======================================================================= disk 2 0/0/3/0.0.0.0 sdisk CLAIMED DEVICE TEAC DV-28E-C /dev/dsk/c0t0d0 /dev/rdsk/c0t0d0 disk 0 0/1/1/0.0.0 sdisk CLAIMED DEVICE HP 146 GMAT3147NC /dev/dsk/c2t0d0 /dev/rdsk/c2t0d0 ... disk 7 0/2/1/0.71.10.0.46.0.1 sdisk CLAIMED DEVICE IBM 2107900 disk 8 0/2/1/0.71.42.0.46.0.1 sdisk CLAIMED DEVICE IBM 2107900 disk 6 0/2/1/0.71.51.0.46.0.1 sdisk CLAIMED DEVICE IBM 2107900 disk 11 0/5/1/0.73.10.0.46.0.1 sdisk CLAIMED DEVICE IBM 2107900 disk 9 0/5/1/0.73.42.0.46.0.1 sdisk CLAIMED DEVICE IBM 2107900 disk 10 0/5/1/0.73.51.0.46.0.1 sdisk CLAIMED DEVICE IBM 2107900
To create the missing special device file, there are two options. The first one is a reboot of the host, which is disruptive. The alternative to the reboot is to run the command insf -eC disk, which will reinstall the special device files for all devices of the Class disk. After creating the special device files, the ioscan output should look like Example 16-57.
Example 16-57 Discovered DS8000 devices with a special device file # ioscan -fnkC disk Class I H/W Path Driver S/W State H/W Type Description ======================================================================= disk 2 0/0/3/0.0.0.0 sdisk CLAIMED DEVICE TEAC DV-28E-C /dev/dsk/c0t0d0 /dev/rdsk/c0t0d0 disk 0 0/1/1/0.0.0 sdisk CLAIMED DEVICE HP 146 GMAT3147NC /dev/dsk/c2t0d0 /dev/rdsk/c2t0d0 ... disk 7 0/2/1/0.71.10.0.46.0.1 sdisk CLAIMED DEVICE IBM 2107900 /dev/dsk/c11t0d1 /dev/rdsk/c11t0d1 disk 8 0/2/1/0.71.42.0.46.0.1 sdisk CLAIMED DEVICE IBM 2107900 /dev/dsk/c12t0d1 /dev/rdsk/c12t0d1 disk 6 0/2/1/0.71.51.0.46.0.1 sdisk CLAIMED DEVICE IBM 2107900 /dev/dsk/c10t0d1 /dev/rdsk/c10t0d1 disk 11 0/5/1/0.73.10.0.46.0.1 sdisk CLAIMED DEVICE IBM 2107900 /dev/dsk/c15t0d1 /dev/rdsk/c15t0d1 disk 9 0/5/1/0.73.42.0.46.0.1 sdisk CLAIMED DEVICE IBM 2107900 /dev/dsk/c13t0d1 /dev/rdsk/c13t0d1 disk 10 0/5/1/0.73.51.0.46.0.1 sdisk CLAIMED DEVICE IBM 2107900 /dev/dsk/c14t0d1 /dev/rdsk/c14t0d1
The ioscan command also shows the relationship between agile and legacy representations.
Example 16-58 Relationship between Persistent DSFs and Legacy DSFs # ioscan -m dsf Persistent DSF Legacy DSF(s) ======================================== ... /dev/rdisk/disk12 /dev/rdsk/c10t0d1 /dev/rdsk/c14t0d1 /dev/rdisk/disk13 /dev/rdsk/c11t0d1 /dev/rdsk/c15t0d1 /dev/rdisk/disk14 /dev/rdsk/c12t0d1 /dev/rdsk/c13t0d1
Once the volumes are visible, such as in Example 16-57, you can then create volume groups (VGs), logical volumes, and file systems.
16.8.5 Multipathing
You can use the IBM SDD multipath driver or other multipathing solutions from HP and Veritas.
SDD troubleshooting
When all DS8000 volumes are visible after claiming them with ioscan, but are not configured by the SDD, you can run the command cfgvpath -r to perform a dynamic reconfiguration of all SDD devices.
..... Nov 10 17:56:12 dwarf vmunix: NOTICE: VPATH_EVENT: device = vpath9 path = 0 online
Nov 10 17:56:12 dwarf vmunix: NOTICE: VPATH_EVENT: device = vpath8 path = 0 online Nov 10 17:56:12 dwarf vmunix: NOTICE: VPATH_EVENT: device = vpath10 path = 0 online Nov 10 17:56:12 dwarf vmunix: NOTICE: VPATH_EVENT: device = vpath11 path = 0 online
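A short sketch of such a dynamic reconfiguration and its verification follows; the vpath numbers correspond to the messages above, and the query output is abbreviated.

   cfgvpath -r                 (adds the newly claimed DS8000 volumes to the SDD configuration)
   datapath query device       (each vpath device should now list all of its paths as available)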
--- Physical volumes ---
PV Name                     /dev/dsk/c11t0d1
PV Name                     /dev/dsk/c15t0d1  Alternate Link
PV Status                   available
Total PE                    767
Free PE                     767
Autoswitch                  On

PV Name                     /dev/dsk/c12t0d1
PV Name                     /dev/dsk/c13t0d1  Alternate Link
PV Status                   available
Total PE                    767
Free PE                     767
Autoswitch                  On
With HP-UX 11iv3 and the new agile addressing, a native multipathing solution is available outside of HP Logical Volume Manager (LVM). It offers load balancing over the available I/O paths. Starting with HP-UX 11iv3, LVM's Alternate Link functionality (PVLINKS) is redundant, but it is still supported with legacy DSFs. To use the new agile addressing with a volume group, just specify the persistent DSFs in the vgcreate command (see Example 16-62 on page 436).
Example 16-62 Volume group creation with Persistent DSFs # vgcreate -A y -x y -l 255 -p 100 -s 16 /dev/vgagile /dev/disk/disk12 /dev/disk/disk13 /dev/disk/disk14 Volume group "/dev/vgagile" has been successfully created. Volume Group configuration for /dev/vgagile has been saved in /etc/lvmconf/vgagile.conf # vgdisplay -v vgagile --- Volume groups --VG Name VG Write Access VG Status Max LV Cur LV Open LV Max PV Cur PV Act PV Max PE per PV VGDA PE Size (MBytes) Total PE Alloc PE Free PE Total PVG Total Spare PVs Total Spare PVs in use
--- Physical volumes ---
PV Name                     /dev/disk/disk12
PV Status                   available
Total PE                    767
Free PE                     767
Autoswitch                  On

PV Name                     /dev/disk/disk13
PV Status                   available
Total PE                    767
Free PE                     767
Autoswitch                  On

PV Name                     /dev/disk/disk14
PV Status                   available
Total PE                    767
Free PE                     767
Autoswitch                  On
# vxdiskadm
Volume Manager Support Operations
Menu: VolumeManager/Disk

 1      Add or initialize one or more disks
 2      Remove a disk
 3      Remove a disk for replacement
 4      Replace a failed or removed disk
 5      Mirror volumes on a disk
 6      Move volumes from a disk
 7      Enable access to (import) a disk group
 8      Remove access to (deport) a disk group
 9      Enable (online) a disk device
 10     Disable (offline) a disk device
 11     Mark a disk as a spare for a disk group
 12     Turn off the spare flag on a disk
 13     Remove (deport) and destroy a disk group
 14     Unrelocate subdisks back to a disk
 15     Exclude a disk from hot-relocation use
 16     Make a disk available for hot-relocation use
 17     Prevent multipathing/Suppress devices from VxVM's view
 18     Allow multipathing/Unsuppress devices from VxVM's view
 19     List currently suppressed/non-multipathed devices
 20     Change the disk naming scheme
 21     Change/Display the default disk layouts
 list   List disk information

 ?      Display help about menu
 ??     Display help about the menuing system
 q      Exit from menus
Select an operation to perform: 1

Add or initialize disks
Menu: VolumeManager/Disk/AddDisks

Use this operation to add one or more disks to a disk group. You can add the selected disks to an existing disk group or to a new disk group that will be created as a part of the operation. The selected disks may also be added to a disk group as spares. Or they may be added as nohotuses to be excluded from hot-relocation use. The selected disks may also be initialized without adding them to a disk group leaving the disks available for use as replacement disks.

More than one disk or pattern may be entered at the prompt. Here are some disk selection examples:
  all:       all disks
  c3 c4t2:   all disks on both controller 3 and controller 4, target 2
  c3t4d2:    a single disk (in the c#t#d# naming scheme)
  xyz_0:     a single disk (in the enclosure based naming scheme)
  xyz_:      all disks on the enclosure whose name is xyz

Select disk devices to add: [<pattern-list>,all,list,q,?] c10t6d0 c10t6d1
Here are the disks selected.  c10t6d0 c10t6d1
Continue operation? [y,n,q,?] (default: y) y

You can choose to add these disks to an existing disk group, a new disk group, or you can leave these disks available for use by future add or replacement operations. To create a new disk group, select a disk group name that does not yet exist. To leave the disks available for future use, specify a disk group name of "none".

Which disk group [<group>,none,list,q,?] (default: none) dg01
There is no active disk group named dg01.
Create a new group named dg01? [y,n,q,?] (default: y)
Create the disk group as a CDS disk group? [y,n,q,?] (default: y) Use default disk names for these disks? [y,n,q,?] (default: y) Add disks as spare disks for dg01? [y,n,q,?] (default: n) n
Exclude disks from hot-relocation use? [y,n,q,?] (default: n)

A new disk group will be created named dg01 and the selected disks will be added to the disk group with default disk names.
  c10t6d0 c10t6d1
Continue with operation? [y,n,q,?] (default: y)
Do you want to use the default layout for all disks being initialized? [y,n,q,?] (default: y) n
Do you want to use the same layout for all disks being initialized? [y,n,q,?] (default: y)
Enter the desired format [cdsdisk,hpdisk,q,?] (default: cdsdisk) hpdisk
Enter the desired format [cdsdisk,hpdisk,q,?] (default: cdsdisk) hpdisk
Enter desired private region length [<privlen>,q,?] (default: 1024)
  Initializing device c10t6d0.
  Initializing device c10t6d1.
VxVM NOTICE V-5-2-120 Creating a new disk group named dg01 containing the disk device c10t6d0 with the name dg0101.
VxVM NOTICE V-5-2-88 Adding disk device c10t6d1 to disk group dg01 with disk name dg0102.
Add or initialize other disks? [y,n,q,?] (default: n) n

# vxdisk list
DEVICE       TYPE
c2t0d0       auto:none
c2t1d0       auto:none
c10t0d1      auto:none
c10t6d0      auto:hpdisk
c10t6d1      auto:hpdisk
c10t6d2      auto:none
The graphical equivalent for the vxdiskadm utility is the VERITAS Enterprise Administrator (vea). Figure 16-16 on page 440 shows the presentation of disks by this graphical user interface.
Chapter 17. System z considerations
This chapter discusses the specifics for attaching the DS8000 to System z hosts. This chapter covers the following topics:
- Connectivity considerations
- Operating systems prerequisites and enhancements
- z/OS considerations
- z/VM considerations
- VSE/ESA and z/VSE considerations
ESCON
For optimum availability, make ESCON host adapters available through all I/O enclosures. For good performance, and depending on your workload characteristics, use at least eight ESCON ports installed on four ESCON host adapters in the storage unit. Note: When using ESCON channels only, volumes in address group 0 can be accessed. For this reason, if you have a mixture of count key data (CKD) and fixed block (FB) volumes in the storage image, you might want to reserve the first 16 LSSs (00-0F) for the ESCON-accessed CKD volumes.
FICON
You also need to check for dependencies in the host hardware driver level and the supported feature codes. Your IBM service representative can help you determine your current hardware driver level on your mainframe processor complex. Examples of limited host server feature support are (FC3319) FICON Express2 LX and (FC3320) FICON Express2 SX, which are available only for the z890 and z990 host server models.
Or refer to the new IBM System Storage Interoperation Center (SSIC) at:
http://www-03.ibm.com/systems/support/storage/config/ssic/esssearchwithoutjs.wss
Important: In addition to the Interoperability Matrix or the SSIC, always review the Preventive Service Planning (PSP) bucket of the 2107 for software updates.
The PSP information can be found on the Resource Link Web site at:
http://www-1.ibm.com/servers/resourcelink/svc03100.nsf?OpenDatabase
You need to register for an IBM Registration ID (IBM ID) before you can sign in to the Web site.
Scalability support
The IOS recovery was designed to support a small number of devices per control unit, and a unit check was presented on all devices at failover. This did not scale well with a DS8000 that has the capability to scale up to 65,280 devices. Under these circumstances, you can have CPU or spin lock contention, or exhausted storage below the 16M line at device failover, or both. Starting with z/OS 1.4 and higher with the DS8000 software support, the IOS recovery has been improved by consolidating unit checks at an LSS level instead of each disconnected device. This consolidation shortens the recovery time as a result of I/O errors. This enhancement is particularly important, because the DS8000 has a much higher number of devices compared to the predecessor (IBM ESS 2105). In the IBM ESS 2105, we had 4 K devices, and in the DS8000, we have up to 65,280 devices in a storage facility.
Benefits
With enhanced scalability support, the following benefits are possible:
- Common storage area (CSA) usage (above and below the 16M line) is reduced.
- The I/O supervisor (IOS) large block pool for error recovery processing and attention and state change interrupt processing is located above the 16M line, thus reducing the storage demand below the 16M line.
- Unit control blocks (UCB) are pinned during event notification facility (ENF) signalling during channel path recovery.
These scalability enhancements provide additional performance improvements by:
- Bypassing dynamic pathing validation in channel recovery for reduced recovery I/Os
- Reducing elapsed time by reducing the wait time in channel path recovery
You can also define the DS8000 as UNIT=2105; in this case, only the 16 logical control units (LCUs) of Address Group 0 can be used. Starting with z9-109 processors, you can define an additional subchannel set with ID 1 (SS 1) on top of the existing subchannel set (SS 0) in a channel subsystem. With this additional subchannel set, you can configure more than 2 x 63K devices for a channel subsystem. With z/OS V1R7, you can define Parallel Access Volume (PAV) alias devices (device types 3380A and 3390A) of the 2105, 2107, and 1750 DASD control units to SS 1. Device numbers can be duplicated across channel subsystems and subchannel sets.
Performance statistics
Two new sets of performance statistics that are reported by the DS8000 were introduced. Because a logical volume is no longer allocated on a single RAID rank or single device adapter pair, the performance data is now provided with a set of rank performance statistics and extent pool statistics. The RAID RANK reports are no longer reported by RMF and IDCAMS LISTDATA batch reports. RMF and IDCAMS LISTDATA are enhanced to report the logical volume statistics that are provided on the DS8000. These reports consist of back-end counters that capture the activity between the cache and the ranks in the DS8000 for each individual logical volume. These rank and extent pool statistics are disk system-wide instead of volume-wide only.
These counters distinguish between transfer activity between host and cache (called host adapter activity) and transfer activity from hard disk to cache and conversely (called disk activity).
New reports have been designed for reporting FICON channel utilization. RMF also provides support for Remote Mirror and Copy link utilization statistics. This support is delivered by APAR OA04877; PTFs are available for z/OS V1R4. Note: RMF cache reporting and the results of a LISTDATA STATUS command report a cache size that is half the actual size, because the information returned represents only the cluster to which the logical control unit is attached. Each LSS on the cluster reflects the cache and nonvolatile storage (NVS) size of that cluster. z/OS users will find that only the SETCACHE CFW ON | OFF command is supported while other SETCACHE command options (for example, DEVICE, SUBSYSTEM, DFW, NVS) are not accepted. Note also that the cache and NVS size reported by the LISTDATA command is somewhat less than the installed processor memory size. The DS8000 licensed internal code uses part of the processor memory and this is not reported by LISTDATA.
Migration considerations
The DS8000 is supported as an IBM 2105 for z/OS systems without the DFSMS and z/OS small program enhancements (SPEs) installed. This allows clients to roll the SPE to each system in a Sysplex without having to take a Sysplex-wide outage. An IPL is required to activate the DFSMS and z/OS portions of this support.
Coexistence considerations
IBM provides support for the DS8000 running in 2105 mode on systems that do not have this SPE installed. The support consists of the recognition of the DS8000 real control unit type and device codes when it runs in 2105 emulation on these down-level systems. Input/Output definition files (IODF) created by HCD can be shared on systems that do not have this SPE installed. Additionally, you should be able to use existing IODF files that define IBM 2105 control unit records for a 2107 subsystem as long as 16 or fewer logical subsystems are configured in the DS8000.
In the absence of workload data to model, consider the following rule of thumb: define as many aliases in an LSS as the number of FICON channels to the LSS multiplied by six. For example, with four FICON channels to an LSS, define 24 aliases in that LSS. Also, use Table 17-1 on page 448 as a conservative recommendation for base to alias ratios.
Table 17-1 Base to alias ratios guideline
Size of base device (number of cylinders)    Number of aliases for dynamic PAV
1 - 3339                                      1
3340 - 6678                                   2
6679 - 10,017                                 3
10,018 - 16,695                               4
16,696 - 23,373                               5
23,374 - 30,051                               6
30,052 - 40,068                               7
40,069 - 50,085                               8
50,086 - 60,102                               9
> 60,103                                     10
You can find more information regarding dynamic PAV on the Internet at:
http://www.ibm.com/s390/wlm/
RMF considerations
RMF reports all I/O activity against the Base PAV address, not by the Base and associated Aliases. The performance information for the Base includes all Base and Alias activity.
Tip: When setting MIH times in the IECIOSxx member of SYS1.PARMLIB, do not use device ranges that include alias device numbers.
HyperPAV options
The following SYS1.PARMLIB(IECIOSxx) options allow enablement of HyperPAV at the LPAR level:
HYPERPAV= YES | NO | BASEONLY
- YES: Attempt to initialize LSSs in HyperPAV mode.
- NO: Do not initialize LSSs in HyperPAV mode.
- BASEONLY: Attempt to initialize LSSs in HyperPAV mode, but only start I/Os on base volumes. The BASEONLY option returns the LSSs with enabled HyperPAV capability to a pre-PAV behavior for this LPAR.
HyperPAV migration
You can enable HyperPAV dynamically. Because it can take some time to initialize all needed LSSs in a DS8000 into HyperPAV mode, planning is prudent. If many LSSs are involved, pick a quiet time to perform the SETIOS HYPERPAV=YES command and do not schedule concurrent DS8000 microcode changes or IODF activation together with this change. If you are currently using PAV and FICON, no HCD or DS8000 logical configuration changes are needed on the existing LSSs. HyperPAV deployment can be staged:
1. Load/Authorize the HyperPAV feature on the DS8000.
2. If necessary, you can run without exploiting this feature by using the z/OS PARMLIB option.
3. Enable the HyperPAV feature on the z/OS images in which you want to utilize HyperPAV, using the PARMLIB option or the SETIOS command.
4. Eventually, enable the HyperPAV feature on all z/OS images in the Sysplex and authorize the licensed function on all attached DS8000s.
5. Optionally, reduce the number of aliases defined.
Full coexistence with traditional PAVs (static or dynamic), as well as sharing with z/OS images without HyperPAV enabled, allows migration to HyperPAV to be a flexible procedure.
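The following operator commands illustrate steps 3 and 4. They are a sketch only; the device number 0710 is taken from the examples below, and the exact command output depends on the z/OS release.

   SETIOS HYPERPAV=YES         (enable HyperPAV dynamically, equivalent to the IECIOSxx setting)
   D IOS,HYPERPAV              (display the current HyperPAV setting of this z/OS image)
   D M=DEV(0710)               (verify that the base device reports HyperPAV aliases in the pool)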
HyperPAV definition
The correct number of aliases for your workload can be determined from analysis of RMF data. The PAV Tool, which can be used to analyze PAV usage, is available at:
http://www-03.ibm.com/servers/eserver/zseries/zos/unix/bpxa1ty2.html#pavanalysis
Example 17-1 Display information for a base address in an LSS with enabled HyperPAV
SY1 d m=dev(0710)
SY1 IEE174I 23.35.49 DISPLAY M 835
DEVICE 0710   STATUS=ONLINE
CHP                   10   20   30   40
DEST LINK ADDRESS     10   20   30   40
PATH ONLINE           Y    Y    Y    Y
CHP PHYSICALLY ONLINE Y    Y    Y    Y
PATH OPERATIONAL      Y    Y    Y    Y
MANAGED               N    N    N    N
CU NUMBER             0700 0700 0700 0700
MAXIMUM MANAGED CHPID(S) ALLOWED: 0
DESTINATION CU LOGICAL ADDRESS = 07
SCP CU ND     = 002107.000.IBM.TC.03069A000007.00FF
SCP TOKEN NED = 002107.900.IBM.TC.03069A000007.0700
SCP DEVICE NED = 002107.900.IBM.TC.03069A000007.0710
HYPERPAV ALIASES IN POOL 4
In Example 17-2, address 0718 is an alias address belonging to a HyperPAV LSS. If you happen to catch a HyperPAV alias in use (bound), it shows up as bound.
Example 17-2 Display information for an alias address belonging to a HyperPAV LSS SY1 D M=DEV(0718) SY1 IEE174I 23.39.07 DISPLAY M 838 DEVICE 0718 STATUS=POOLED HYPERPAV ALIAS
The D M=DEV command in Example 17-3 shows HA for the HyperPAV aliases.
Example 17-3 The system configuration information shows the HyperPAV aliases SY1 d m=dev SY1 IEE174I 23.42.09 DISPLAY M 844 DEVICE STATUS: NUMBER OF ONLINE CHANNEL PATHS 0 1 2 3 4 5 6 7 8 9 A B C D E F 000 DN 4 DN DN DN DN DN DN DN . DN DN 1 1 1 1 018 DN DN DN DN 4 DN DN DN DN DN DN DN DN DN DN DN 02E 4 DN 4 DN 4 8 4 4 4 4 4 4 4 DN 4 DN 02F DN 4 4 4 4 4 4 DN 4 4 4 4 4 DN DN 4 030 8 . . . . . . . . . . . . . . . 033 4 . . . . . . . . . . . . . . . 034 4 4 4 4 DN DN DN DN DN DN DN DN DN DN DN DN 03E 1 DN DN DN DN DN DN DN DN DN DN DN DN DN DN DN 041 4 4 4 4 4 4 4 4 AL AL AL AL AL AL AL AL 048 4 4 DN DN DN DN DN DN DN DN DN DN DN DN DN 4 051 4 4 4 4 4 4 4 4 UL UL UL UL UL UL UL UL 061 4 4 4 4 4 4 4 4 AL AL AL AL AL AL AL AL 071 4 4 4 4 DN DN DN DN HA HA DN DN . . . . 073 DN DN DN . DN . DN . DN . DN . HA . HA . 098 4 4 4 4 DN 8 4 4 4 4 4 DN 4 4 4 4 0E0 DN DN 1 DN DN DN DN DN DN DN DN DN DN DN DN DN 0F1 1 DN DN DN DN DN DN DN DN DN DN DN DN DN DN DN FFF . . . . . . . . . . . . HA HA HA HA ************************ SYMBOL EXPLANATIONS ************************ @ ONLINE, PHYSICALLY ONLINE, AND OPERATIONAL INDICATORS ARE NOT EQUAL + ONLINE # DEVICE OFFLINE . DOES NOT EXIST BX DEVICE IS BOXED SN SUBCHANNEL NOT AVAILABLE
DN DEVICE NOT AVAILABLE PE SUBCHANNEL IN PERMANENT ERROR AL DEVICE IS AN ALIAS UL DEVICE IS AN UNBOUND ALIAS HA DEVICE IS A HYPERPAV ALIAS
17.4.1 Connectivity
z/VM provides the following connectivity:
- z/VM supports FICON attachment as a 3990 Model 3 or 6 controller.
- Native controller modes 2105 and 2107 are supported on z/VM 5.2.0 with APAR VM63952; this brings support up to the same level as z/OS.
- z/VM simulates controller mode support by each guest.
- z/VM supports FCP attachment for Linux systems running as a guest.
- z/VM itself supports FCP-attached SCSI disks starting with z/VM 5.1.0.
For further discussion of PAV, see 10.7.8, PAV in z/VM environments on page 219.
HyperPAV support
z/VM supports HyperPAV for dedicated DASD and minidisks starting in z/VM version 5.3.0.
Chapter 18. System i considerations
This chapter discusses the specifics for the DS8000 attachment to System i. This chapter covers the following topics:
- Supported environment
- Logical volume sizes
- Protected as opposed to unprotected volumes
- Adding volumes to the System i configuration
- Multipath
- Sizing guidelines
- Migration
- Boot from SAN
- AIX on IBM System i
- Linux on IBM System i
For further information about these topics, refer to the IBM Redbook iSeries and IBM TotalStorage: A Guide to Implementing External Disk on eServer i5, SG24-7120.
18.1.1 Hardware
The DS8000 is supported on all System i models that support Fibre Channel attachment for external storage. Fibre Channel is supported on all models from the 8xx onward. AS/400 models 7xx and earlier only supported SCSI attachment for external storage, so they cannot support the DS8000.
There are three Fibre Channel adapters for System i. All support the DS8000:
- 2766 2 Gigabit Fibre Channel Disk Controller PCI
- 2787 2 Gigabit Fibre Channel Disk Controller PCI-X
- 5760 4 Gigabit Fibre Channel Disk Controller PCI-X
Each adapter requires its own dedicated I/O processor. The System i Storage Web page provides information about current hardware requirements, including support for switches. You can find this at:
http://www.ibm.com/servers/eserver/iseries/storage/storage_hw.html
18.1.2 Software
The System i must be running V5R2 or V5R3 (i5/OS) or a later level of OS/400. In addition, at the time of writing, the following PTFs were required: V5R2: MF33327, MF33301, MF33469, MF33302, SI14711, and SI14754 V5R3: MF33328, MF33845, MF33437, MF33303, SI14690, SI14755, and SI14550 Prior to attaching the DS8000 to System i, you should check for the latest PTFs, which might have superseded those shown here.
Table 18-1 OS/400 logical volume sizes
For each supported OS/400 logical volume, the table lists the unprotected and protected model type, the OS/400 device size (GB), the number of logical block addresses (LBAs), the number of extents (8, 17, 33, 66, 132, or 263), the unusable space (GiB1), and the usable space (%).
1. GiB represents binary gigabytes (2^30 bytes), and GB represents decimal gigabytes (10^9 bytes).
Note: Logical volumes of size 8.59 and 282.2 are not supported as System i Load Source Unit (boot disk) where the Load Source Unit is to be located in the external storage server. When creating the logical volumes for use with OS/400, you will see that in almost every case, the OS/400 device size does not match a whole number of extents, and so some space is wasted. You should also note that the #2766 and #2787 Fibre Channel Disk Adapters used by System i can only address 32 logical unit numbers (LUNs), so creating more, smaller LUNs requires more Input Output Adapters (IOAs) and their associated Input Output Processors (IOPs). For more sizing guidelines for OS/400, refer to 18.6, Sizing guidelines on page 475.
When you delete a logical volume on the DS8000, its extents become available for reuse after a short time (depending on the number of extents returned to the extent pool). This is unlike the ESS E20, F20, and 800, where the entire array containing the logical volume had to be reformatted. However, before deleting the logical volume on the DS8000, you must first remove it from the OS/400 configuration (assuming it was still configured). This is an OS/400 task that is disruptive if the disk is in the System ASP or User ASPs 2-32, because it requires an IPL of OS/400 to completely remove the volume from the OS/400 configuration. This is no different from removing an internal disk from an OS/400 configuration. Indeed, deleting a logical volume on the DS8000 is similar to physically removing a disk drive from a System i. Disks can be removed from an Independent ASP with the IASP varied off without requiring you to IPL the system.
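After the volume has been removed from the OS/400 configuration, it can be deleted with the DS CLI. The following sketch uses volume ID 4704 purely as an example; rmfbvol prompts for confirmation before the volume is deleted and its extents are returned to the extent pool.

   dscli> rmfbvol 4704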
3. Select Option 2, Work with disk configuration, as shown in Figure 18-2 on page 459.
Work with Disk Units
Select one of the following:
  1. Display disk configuration
  2. Work with disk configuration
  3. Work with disk unit recovery
4. When adding disk units to a configuration, you can add them as empty units by selecting Option 2 or you can choose to allow OS/400 to balance the data across all the disk units. Typically, we recommend balancing the data. Select Option 8, Add units to ASPs and balance data, as shown in Figure 18-3.
Work with Disk Configuration
Select one of the following:
  1. Display disk configuration
  2. Add units to ASPs
  3. Work with ASP threshold
  4. Include unit in device parity protection
  5. Enable remote load source mirroring
  6. Disable remote load source mirroring
  7. Start compression on non-configured units
  8. Add units to ASPs and balance data
  9. Start device parity protection
5. Figure 18-4 on page 460 shows the Specify ASPs to Add Units to panel. Specify the ASP number to the left of the desired units. Here we have specified ASP1, the System ASP. Press Enter.
Specify ASPs to Add Units to
Specify the ASP to add each unit to.

Specify  Serial                          Resource
ASP      Number       Model  Capacity    Name
         21-662C5     050    35165       DD124
         21-54782     050    35165       DD136
         75-1118707   A85    35165       DD006

F3=Exit   F5=Refresh   F12=Cancel
6. The Confirm Add Units panel appears for review as shown in Figure 18-5. If everything is correct, press Enter to continue.
Confirm Add Units
Add will take several minutes for each unit. The system will have the displayed protection after the unit(s) are added.
Press Enter to confirm your choice for Add units.
Press F9=Capacity Information to display the resulting capacity.
Press F12=Cancel to return and change your choice.

ASP  Unit  Serial Number  Type  Model  Resource Name  Protection
 1                                                    Unprotected
       1   02-89058       6717  074    DD004          Device Parity
       2   68-0CA4E32     6717  074    DD003          Device Parity
       3   68-0C9F8CA     6717  074    DD002          Device Parity
       4   68-0CA5D96     6717  074    DD001          Device Parity
       5   75-1118707     2107  A85    DD006          Unprotected

F12=Cancel
7. Depending on the number of units you are adding, this step can take some time. When it completes, display your disk configuration to verify the capacity and data protection.
2. Expand the System i to which you want to add the logical volume and sign on to that server as shown in Figure 18-7.
3. Expand Configuration and Service → Hardware → Disk Units, as shown in Figure 18-8 on page 462.
4. The system prompts you to sign on to SST as shown in Figure 18-9. Enter your Service tools ID and password and press OK.
5. Right-click Disk Pools and select New Disk Pool.
6. The New Disk Pool wizard appears. Click Next.
7. On the New Disk Pool dialog shown in Figure 18-10 on page 463, select Primary from the pull-down for the type of disk pool, give the new disk pool a name, and leave Database set to Generated by the system. Ensure that the disk protection method matches the type of logical volume you are adding. If you leave it unchecked, you see all available disks. Select OK to continue.
8. A confirmation panel such as that shown in Figure 18-11 appears to summarize the disk pool configuration. Select Next to continue.
9. Now you need to add disks to the new disk pool. On the Add to Disk Pool panel, click Add Disks as shown in Figure 18-12 on page 464.
10. A list of non-configured units similar to that shown in Figure 18-13 appears. Highlight the disks you want to add to the disk pool, and click Add.
11. A confirmation panel appears as shown in Figure 18-14 on page 465. Click Next to continue.
12. A summary of the Disk Pool configuration similar to Figure 18-15 appears. Click Finish to add the disks to the Disk Pool.
13. Take note of and respond to any message dialogs that appear. After you take action on any messages, the New Disk Pool Status panel shown in Figure 18-16 on page 466 displays and shows progress. This step might take some time, depending on the number and size of the logical units you add.
15. The new Disk Pool can be seen on the System i Navigator Disk Pools panel in Figure 18-18.
16. To see the logical volume, as shown in Figure 18-19 on page 467, expand Configuration and Service → Hardware → Disk Pools, and click the disk pool that you have just created.
18.5 Multipath
Multipath support was added for external disks in V5R3 of i5/OS (also known as OS/400 V5R3). Unlike other platforms, which use a specific software component such as the Subsystem Device Driver (SDD), multipath is part of the base operating system. At V5R3 and V5R4, you can define up to eight connections from multiple I/O adapters on a System i server to a single logical volume in the DS8000. Each connection for a multipath disk unit functions independently. Several connections provide availability by allowing disk storage to be used even if a single path fails.

Multipath is important for System i because it provides greater resilience to SAN failures, which can be critical to OS/400 due to its single-level storage architecture. Multipath is not available for System i internal disk units, but the likelihood of path failure is much lower with internal drives because there are fewer interference points. With external storage, problems can occur in long fiber cables and SAN switches, there is an increased possibility of human error when configuring switches and external storage, and concurrent maintenance on the DS8000 can make some paths temporarily unavailable.

Many System i clients still have their entire environment in the System ASP, and loss of access to any disk causes the system to fail. Even with User ASPs, loss of a UASP disk eventually causes the system to stop. Independent ASPs provide isolation, so that loss of disks in an IASP only affects users accessing that IASP while the rest of the system is unaffected. However, with multipath, even the loss of a path to a disk in an IASP does not cause an outage.
Prior to the availability of multipath, some clients used OS/400 mirroring to two sets of disks, either in the same or different external disk subsystems. This provided implicit dual-path capability as long as the mirrored copy was connected to a different IOP/IOA, BUS, or I/O tower. However, this also required two copies of data. Because disk level protection is already provided by RAID-5 or RAID-10 in the external disk subsystem, this was sometimes seen as unnecessary. With the combination of multipath and RAID-5 or RAID-10 protection in the DS8000, we can provide full protection of the data paths and the data itself without the requirement for additional disks.
(Figure: elements in the I/O path between the System i and the DS8000 — I/O frame, BUS, IOP, IOA, cable, ISL, switch, and ports.)
When implementing multipath, provide as much redundancy as possible. As a minimum, multipath requires two IOAs connecting to the same logical volumes. Ideally, place these on different buses and in different I/O racks in the System i. If a SAN is included, also use separate switches for each path. You should also use host adapters in different I/O drawer pairs in the DS8000. Figure 18-21 on page 469 shows this.
Unlike other systems, which might only support two paths (dual-path), OS/400 V5R3 supports up to eight paths to the same logical volumes. As a minimum, you should use two, although some small performance benefits might be experienced with more. However, because OS/400 multipath spreads I/O across all available paths in a round-robin manner, there is no load balancing, only load sharing.
(Figure 18-21: two iSeries I/O towers/racks, each with Fibre Channel IOAs on separate buses (BUS a/b and BUS x/y), with logical connections to DS8000 host adapters.)
To implement multipath, the first group of 24 logical volumes is also assigned to a Fibre Channel I/O adapter in the second System i I/O tower or rack through a host adapter in the lower right I/O drawer in the DS8000. The second group of 24 logical volumes is also assigned to a Fibre Channel I/O adapter on a different BUS in the second System i I/O tower or rack through a host adapter in the upper right I/O drawer.
Specify ASPs to Add Units to

Specify the ASP to add each unit to.

Specify  Serial                          Resource
ASP      Number        Model  Capacity   Name
         21-662C5      050    35165      DD124
         21-54782      050    35165      DD136
         75-1118707    A85    35165      DMP135

F3=Exit   F5=Refresh   F12=Cancel
Note: For multipath volumes, only one path is shown. In order to see the additional paths, see 18.5.5, Managing multipath volumes using System i Navigator on page 472.

5. A confirmation panel appears as shown in Figure 18-24. Check the configuration details, and if correct, press Enter to accept.
Confirm Add Units

Add will take several minutes for each unit. The system will have the
displayed protection after the unit(s) are added.

Press Enter to confirm your choice for Add units.
Press F9=Capacity Information to display the resulting capacity.
Press F12=Cancel to return and change your choice.

ASP  Unit  Serial Number  Resource Name  Type  Model  Protection
  1                                                   Unprotected
        1  02-89058       DD004          6717  074    Device Parity
        2  68-0CA4E32     DD003          6717  074    Device Parity
        3  68-0C9F8CA     DD002          6717  074    Device Parity
        4  68-0CA5D96     DD001          6717  074    Device Parity
        5  75-1118707     DMP135         2107  A85    Unprotected

F12=Cancel
When you get to the point where you select the volumes to add, you see a panel similar to that shown in Figure 18-25. Multipath volumes appear as DMPxxx. Highlight the disks you want to add to the disk pool, and click Add.
Note: For multipath volumes, only one path is shown. In order to see the additional paths, see 18.5.5, Managing multipath volumes using System i Navigator on page 472.

The remaining steps are identical to those in 18.4.2, Adding volumes to an Independent Auxiliary Storage Pool on page 460.
To see the other connections to a logical unit, right-click the unit and select Properties, as shown in Figure 18-26.
You now get the General properties tab for the selected unit, as shown in Figure 18-27. The first path is shown as Device 1 in the box labelled Storage.
To see the other paths to this unit, click the Connections tab, as shown in Figure 18-28, where you can see the other seven connections for this logical unit.
General rules
18.6.2 Cache
In general, System i workloads do not benefit from large cache. Still, depending on the workload (as shown in the OS/400 Performance Tools System, Component, and Resource Interval reports), you might see some benefit from larger cache sizes. However, in general, with large System i main memory sizes, OS/400 Expert Cache can reduce the benefit of external cache.
Note: Size the same capacity per 5760 adapter as for a 2787 adapter on a 2844 IOP. For transfer sizes larger than 16 KB, size about 50% more capacity than for the 2787 adapter.
For most System i workloads, Access Density is usually below 2, so if you do not know it, the General rule column is a typical value to use.
(Table 18-3 compares the following rank types: RAID-5 15K RPM (7+P), RAID-5 10K RPM (7+P), RAID-5 15K RPM (6+P+S), RAID-5 10K RPM (6+P+S), RAID-10 15K RPM (3+3+2S), RAID-10 10K RPM (3+3+2S), RAID-10 15K RPM (4+4), and RAID-10 10K RPM (4+4).)
As you can see in Table 18-3, RAID-10 can support higher host I/O rates than RAID-5. However, you must balance this against the reduced effective capacity of a RAID-10 rank when compared to RAID-5.
18.7 Migration
For many System i clients, migrating to the DS8000 is best achieved using traditional Save/Restore techniques. However, there are some alternatives you might want to consider.
You can also use the same setup if the ESS LUNs are in an IASP. Although the System i does not require a complete shutdown, varying off the IASP in the ESS, unassigning the ESS LUNs, assigning the DS8000 LUNs, and varying on the IASP have the same effect. Clearly, you must also take into account the licensing implications for Metro Mirror and Global Copy.

Note: This is a special case of using Metro Mirror or Global Copy, and it only works if the same System i, along with the same load source unit (LSU), is used to attach to both the original ESS and the new DS8000. It is not possible to use this technique to migrate to a different System i.
Start ASP Balance (STRASPBAL)

Type choices, press Enter.

Balance type . . . . . . . . . . > *ENDALC        *CAPACITY, *USAGE, *HSM...
Storage unit . . . . . . . . . .                  1-4094
             + for more values

F5=Refresh   F12=Cancel
When you subsequently run the OS/400 command STRASPBAL TYPE(*MOVDTA), all data is moved from the marked units to other units in the same ASP, as shown in Figure 18-32. Obviously, you must have sufficient new capacity to allow the data to be migrated.
Start ASP Balance (STRASPBAL)

Type choices, press Enter.

Balance type . . . . . . . . . . > *MOVDTA        *CAPACITY, *USAGE, *HSM...
Time limit . . . . . . . . . . .                  1-9999 minutes, *NOMAX

F5=Refresh   F12=Cancel
You can specify a time limit that the function is to run for each ASP being balanced, or the balance can be set to run to completion. If you need to end the balance function prior to this, use the End ASP Balance (ENDASPBAL) command. A message is sent to the system history (QHST) log when the balancing function is started for each ASP. A message is also sent to the QHST log when the balancing function completes or is ended. If the balance function is run for a few hours and then stopped, it will continue from where it left off when the balance function restarts. This allows the balancing to be run during off-hours over several days.

In order to finally remove the old units from the configuration, you need to use Dedicated Service Tools (DST) and re-IPL the system (or partition).

Using this method allows you to remove the existing storage units over a period of time. However, it requires that both the old and new units are attached to the system at the same time, so it might require additional IOPs and IOAs if migrating from an ESS to a DS8000. It might be possible in your environment to reallocate logical volumes to other IOAs, but careful planning and implementation are required.
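For reference, the following is a minimal sketch of this migration sequence entered from an OS/400 command line. It is an illustration only: the unit numbers are hypothetical examples taken from a disk configuration display, and the ENDASPBAL parameter shown is assumed from the prompted panel.

STRASPBAL TYPE(*ENDALC) UNIT(31 32)     /* Mark the old units; no new allocations are made on them */
STRASPBAL TYPE(*MOVDTA) TIMLMT(*NOMAX)  /* Move data off the marked units; run to completion       */
ENDASPBAL ASP(1)                        /* Optional: end the balance function early for ASP 1      */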
2787 2 Gigabit Fibre Channel Disk Controller PCI-X

For more information about running AIX in an i5 partition, refer to the i5 Information Center at:
http://publib.boulder.ibm.com/infocenter/iseries/v1r2s/en_US/index.htm?info/iphat/iphatlparkickoff.htm
Note: AIX will not run in a partition on 8xx and earlier System i hosts.
More information is available in the iSeries Information Center for V5R3 at:
http://publib.boulder.ibm.com/infocenter/iseries/v5r3/ic2924/index.htm
Part 5
Chapter 19.
6. Updates to the host adapters are performed. For FICON/FCP adapters, the impact of these updates on each adapter is less than 2.5 seconds and should not affect connectivity. If an update were to take longer than this, multipathing software on the host, or Control Unit Initiated Reconfiguration (CUIR) for ESCON and FICON, directs I/O to a different host adapter.

While the installation process described above might seem complex, it does not require a great deal of user intervention. The code installer normally just starts the process and then monitors its progress using the HMC.

Important: An upgrade of your DS8000 microcode might require that you upgrade your DS CLI version, and might also require that you upgrade your HMC version. Check with your IBM representative on the description and contents of the release bundle.
Important: There are a few limitations for the levels of microcode that can coexist among the storage images; see Different code versions across storage images on page 93.

You also have the ability to install microcode update bundles non-concurrently, with all attached hosts shut down. However, this should not be necessary. This method is usually only employed at DS8000 installation time.
CUIR is available for the DS8000 when it operates in z/OS and z/VM environments. The CUIR function automates channel path vary on and vary off actions to minimize manual operator intervention during selected DS8000 service actions. CUIR allows the DS8000 to request that all attached system images set all paths required for a particular service action to the offline state. System images with the appropriate level of software support respond to these requests by varying off the affected paths, and either notifying the DS8000 subsystem that the paths are offline, or that they cannot be taken offline. CUIR reduces manual operator intervention and the possibility of human error during maintenance actions, and at the same time reduces the time required for the maintenance. This is particularly useful in environments where there are many systems attached to a DS8000.
When it comes time to copy the new code bundle onto the DS8000, there are two ways to achieve this:
- Load the new code bundle onto the HMC using CDs.
- Download the new code bundle directly from IBM using FTP.

The ability to download the code from IBM eliminates the need to order or burn CDs. However, it can require firewall changes to allow the HMC to connect to IBM using FTP.
Note that the actual time required for the concurrent code load varies based on the bundle that you are currently running and the bundle to which you are going. It is not possible to state here how long updates can take. Always consult with your IBM service representative.
Chapter 20.
SNMP defines six generic types of traps and allows the definition of enterprise-specific traps. The trap structure conveys the following information to the SNMP manager:
- The agent's object that was affected
- IP address of the agent that sent the trap
- Event description (either a generic trap or an enterprise-specific trap, including the trap number)
- Time stamp
- Optional enterprise-specific trap identification
- List of variables describing the trap
Note that you can configure an SNMP agent to send SNMP trap requests to multiple SNMP managers. Figure 20-1 illustrates the characteristics of SNMP architecture and communication.
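As a simple illustration of the receiving side, the following minimal sketch configures the net-snmp snmptrapd daemon on a Linux management station to accept and log traps. It is an assumption-laden example rather than a DS8000 procedure: the community string "public" and the file path are placeholders, and the same community name must be configured as the trap destination setting on the HMC.

# Accept and log traps sent with the community string "public" (placeholder value)
cat > /etc/snmp/snmptrapd.conf <<'EOF'
authCommunity log public
EOF
# Run the trap daemon in the foreground and log received traps to stdout
snmptrapd -f -Lo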
0  coldStart              Restart after a crash.
1  warmStart              Planned restart.
2  linkDown               Communication link is down.
3  linkUp                 Communication link is up.
4  authenticationFailure  Invalid SNMP community string was used.
5  egpNeighborLoss        EGP neighbor is down.
6  enterpriseSpecific     Vendor-specific event happened.
A trap message contains pairs of an OID and a value shown in Table 20-1 to notify the cause of the trap message. You can also use type 6, the enterpriseSpecific trap type, when you have to send messages that do not fit other predefined trap types, for example, DISK I/O error and application down. You can also set an integer value field called Specific Trap on your trap message.
For open events in the event log, a trap is sent every eight hours until the event is closed.
If all links are interrupted, a trap 101, as shown in Example 20-3, is posted. This event indicates that no communication between the primary and the secondary system is possible anymore.
Example 20-3 Trap 101: Remote mirror and copy links are inoperable
PPRC Links Down
UNIT: Mnf Type-Mod SerialNm LS
PRI:  IBM 2107-922 75-20781 10
SEC:  IBM 2107-9A2 75-ABTV1 20
Path: Type  PP   PLink  SP   SLink  RC
1:    FIBRE 0143 XXXXXX 0010 XXXXXX 17
2:    FIBRE 0213 XXXXXX 0140 XXXXXX 17
Trap 102, as shown in Example 20-4, is sent when one or more of the interrupted links become available again and the DS8000 can communicate over them.
Example 20-4 Trap 102: Remote mirror and copy links are operational
PPRC Links Up
UNIT: Mnf Type-Mod SerialNm LS
PRI:  IBM 2107-9A2 75-ABTV1 21
SEC:  IBM 2107-000 75-20781 11
Path: Type  PP   PLink  SP   SLink  RC
1:    FIBRE 0010 XXXXXX 0143 XXXXXX OK
2:    FIBRE 0140 XXXXXX 0213 XXXXXX OK
Table 20-2 shows the Remote mirror and copy return codes.
Table 20-2 Remote mirror and copy return codes

Return Code  Description
02   Initialization failed. ESCON link reject threshold exceeded when attempting to send ELP or RID frames.
03   Time out. No reason available.
04   There are no resources available in the primary storage unit for establishing logical paths because the maximum number of logical paths have already been established.
05   There are no resources available in the secondary storage unit for establishing logical paths because the maximum number of logical paths have already been established.
06   There is a secondary storage unit sequence number, or logical subsystem number, mismatch.
07   There is a secondary LSS subsystem identifier (SSID) mismatch, or failure of the I/O that collects the secondary information for validation.
08   The ESCON link is offline. This is caused by the lack of light detection coming from a host, peer, or switch.
09   The establish failed. It is retried until the command succeeds or a remove paths command is run for the path.
     Note: The attempt-to-establish state persists until the establish path operation succeeds or the remove remote mirror and copy paths command is run for the path.
0A   The primary storage unit port or link cannot be converted to channel mode if a logical path is already established on the port or link. The establish paths operation is not retried within the storage unit.
10   Configuration error. The source of the error is one of the following:
     - The specification of the SA ID does not match the installed ESCON adapter cards in the primary controller.
     - For ESCON paths, the secondary storage unit destination address is zero and an ESCON Director (switch) was found in the path.
     - For ESCON paths, the secondary storage unit destination address is not zero and an ESCON Director does not exist in the path. The path is a direct connection.
14   The fibre-channel path link is down.
15   The maximum number of fibre-channel path retry operations has been exceeded.
16   The fibre-channel path secondary adapter is not remote mirror and copy capable. This could be caused by one of the following conditions:
     - The secondary adapter is not configured properly or does not have the current firmware installed.
     - The secondary adapter is already a target of 32 different logical subsystems (LSSs).
17   The secondary adapter fibre-channel path is not available.
18   The maximum number of fibre-channel path primary login attempts has been exceeded.
19   The maximum number of fibre-channel path secondary login attempts has been exceeded.
1A   The primary fibre-channel adapter is not configured properly or does not have the correct firmware level installed.
1B   The fibre-channel path was established but is degraded due to a high failure rate.
1C   The fibre-channel path was removed due to a high failure rate.
Trap 202, as shown in Example 20-6, is sent if a remote copy pair goes into a suspended state. The trap contains the serial number (SerialNm) of the primary and secondary machine, the logical subsystem or LSS (LS), and the logical device (LD). To avoid SNMP trap flooding, the number of SNMP traps for the LSS is throttled, and the complete suspended pair information is represented in the summary. The last row of the trap represents the suspend state for all pairs in the reporting LSS. The suspended pair information contains a hexadecimal string with a length of 64 characters. By converting this hex string into binary, each bit represents a single device: if the bit is 1, the device is suspended; otherwise, the device is still in full duplex mode.
Example 20-6 Trap 202: Primary remote mirror and copy devices on the LSS were suspended because of an error
Primary PPRC Devices on LSS Suspended Due to Error
UNIT: Mnf Type-Mod SerialNm LS LD SR
PRI:  IBM 2107-922 75-20781 11 00 03
SEC:  IBM 2107-9A2 75-ABTV1 21 00
Start: 2005/11/14 09:48:05 CST
PRI Dev Flags (1 bit/Dev, 1=Suspended):
C000000000000000000000000000000000000000000000000000000000000000
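If trap handling is scripted, the suspended-device bitmap can be expanded bit by bit. The following is a minimal sketch, assuming a bash shell on the management host; the bitmap value is the one from Example 20-6, and the device numbers simply follow the bit positions within the reporting LSS.

#!/bin/bash
# Expand the 64-character suspended-pair bitmap from a trap 202 message.
# Each hexadecimal digit covers four devices; a bit value of 1 means suspended.
bitmap=C000000000000000000000000000000000000000000000000000000000000000
dev=0
for (( i=0; i<${#bitmap}; i++ )); do
  nibble=$(( 16#${bitmap:i:1} ))
  for bit in 8 4 2 1; do
    (( nibble & bit )) && printf 'Device 0x%02X is suspended\n' "$dev"
    (( dev++ ))
  done
done

For the bitmap shown above (leading C0), the sketch reports devices 0x00 and 0x01 as suspended; all other devices in the LSS are still in full duplex mode.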
Trap 210, as shown in Example 20-7, is sent when a Consistency Group in a Global Mirror environment was successfully formed.
Example 20-7 Trap210: Global Mirror initial Consistency Group successfully formed 2005/11/14 15:30:55 CET Asynchronous PPRC Initial Consistency Group Successfully Formed UNIT: Mnf Type-Mod SerialNm IBM 2107-922 75-20781 Session ID: 4002
Trap 211, as shown in Example 20-8, is sent if the Global Mirror setup has entered a severe error state, where no attempts are made to form a Consistency Group.
Example 20-8 Trap 211: Global Mirror Session is in a fatal state Asynchronous PPRC Session is in a Fatal State UNIT: Mnf Type-Mod SerialNm IBM 2107-922 75-20781 Session ID: 4002
Trap 212, as shown in Example 20-9 on page 501, is sent when a Consistency Group cannot be created in a Global Mirror relationship. Some of the reasons might be:
- Volumes have been taken out of a copy session.
- The remote copy link bandwidth might not be sufficient.
- The FC link between the primary and secondary system is not available.
Example 20-9 Trap 212: Global Mirror Consistency Group failure - Retry will be attempted Asynchronous PPRC Consistency Group Failure - Retry will be attempted UNIT: Mnf Type-Mod SerialNm IBM 2107-922 75-20781 Session ID: 4002
Trap 213, as shown in Example 20-10, is sent when a Consistency Group in a Global Mirror environment can be formed after a previous Consistency Group formation failure.
Example 20-10 Trap 213: Global Mirror Consistency Group successful recovery Asynchronous PPRC Consistency Group Successful Recovery UNIT: Mnf Type-Mod SerialNm IBM 2107-9A2 75-ABTV1 Session ID: 4002
Trap 214, as shown in Example 20-11, is sent if a Global Mirror Session is terminated using the DS CLI command rmgmir or the corresponding GUI function.
Example 20-11 Trap 214: Global Mirror Master terminated 2005/11/14 15:30:14 CET Asynchronous PPRC Master Terminated UNIT: Mnf Type-Mod SerialNm IBM 2107-922 75-20781 Session ID: 4002
Trap 215, as shown in Example 20-12, is sent if, in the Global Mirror Environment, the Master detects a failure to complete the FlashCopy commit. The trap is sent after a number of commit retries have failed.
Example 20-12 Trap 215: Global Mirror FlashCopy at Remote Site unsuccessful Asynchronous PPRC FlashCopy at Remote Site Unsuccessful A UNIT: Mnf Type-Mod SerialNm IBM 2107-9A2 75-ABTV1 Session ID: 4002
Trap 216, as shown in Example 20-13, is sent if a Global Mirror Master cannot terminate the Global Copy relationship at one of its Subordinates (slaves). This might occur if the Master is terminated with rmgmir, but the Master cannot terminate the copy relationship on the Subordinate. You might need to run rmgmir against the Subordinate to prevent any interference with other Global Mirror sessions.
Example 20-13 Trap 216: Global Mirror slave termination unsuccessful Asynchronous PPRC Slave Termination Unsuccessful UNIT: Mnf Type-Mod SerialNm Master: IBM 2107-922 75-20781 Slave: IBM 2107-921 75-03641 Session ID: 4002
Trap 217, as shown in Example 20-14 on page 502, is sent if a Global Mirror environment was suspended by the DS CLI command pausegmir or the corresponding GUI function.
Example 20-14 Trap 217: Global Mirror paused Asynchronous PPRC Paused UNIT: Mnf Type-Mod SerialNm IBM 2107-9A2 75-ABTV1 Session ID: 4002
Trap 218, as shown in Example 20-15, is sent if a Global Mirror has exceeded the allowed threshold for failed consistency group formation attempts.
Example 20-15 Trap 218: Global Mirror number of consistency group failures exceed threshold Global Mirror number of consistency group failures exceed threshold UNIT: Mnf Type-Mod SerialNm IBM 2107-9A2 75-ABTV1 Session ID: 4002
Trap 219, as shown in Example 20-16, is sent if a Global Mirror has successfully formed a consistency group after one or more formation attempts had previously failed.
Example 20-16 Trap 219: Global Mirror first successful consistency group after prior failures Global Mirror first successful consistency group after prior failures UNIT: Mnf Type-Mod SerialNm IBM 2107-9A2 75-ABTV1 Session ID: 4002
Trap 220, as shown in Example 20-17, is sent if a Global Mirror has exceeded the allowed threshold of failed FlashCopy commit attempts.
Example 20-17 Trap 220: Global Mirror number of FlashCopy commit failures exceed threshold Global Mirror number of FlashCopy commit failures exceed threshold UNIT: Mnf Type-Mod SerialNm IBM 2107-9A2 75-ABTV1 Session ID: 4002
Trap 221, as shown in Example 20-18, is sent when the Repository has reached the user-defined warning watermark or when physical space is completely exhausted.
Example 20-18 Trap 221: Space Efficient Repository or Over-provisioned Volume has reached a warning watermark Space Efficient Repository or Over-provisioned Volume has reached a warning watermark UNIT: Mnf Type-Mod SerialNm IBM 2107-9A2 75-ABTV1 Session ID: 4002
Suspension reason code (SR)  Description
03   The host system sent a command to the primary volume of a remote mirror and copy volume pair to suspend copy operations. The host system might have specified either an immediate suspension or a suspension after the copy completed and the volume pair reached a full duplex state.
04   The host system sent a command to suspend the copy operations on the secondary volume. During the suspension, the primary volume of the volume pair can still accept updates, but updates are not copied to the secondary volume. The out-of-sync tracks that are created between the volume pair are recorded in the change recording feature of the primary volume.
05   Copy operations between the remote mirror and copy volume pair were suspended by a primary storage unit secondary device status command. This system resource code can only be returned by the secondary volume.
06   Copy operations between the remote mirror and copy volume pair were suspended because of internal conditions in the storage unit. This system resource code can be returned by the control unit of either the primary volume or the secondary volume.
07   Copy operations between the remote mirror and copy volume pair were suspended when the secondary storage unit notified the primary storage unit of a state change transition to simplex state. The specified volume pair between the storage units is no longer in a copy relationship.
08   Copy operations were suspended because the secondary volume became suspended as a result of internal conditions or errors. This system resource code can only be returned by the primary storage unit.
09   The remote mirror and copy volume pair was suspended when the primary or secondary storage unit was rebooted or when the power was restored. The paths to the secondary storage unit might not be disabled if the primary storage unit was turned off. If the secondary storage unit was turned off, the paths between the storage units are restored automatically, if possible. After the paths have been restored, issue the mkpprc command to resynchronize the specified volume pairs. Depending on the state of the volume pairs, you might have to issue the rmpprc command to delete the volume pairs and reissue a mkpprc command to reestablish the volume pairs.
0A   The remote mirror and copy pair was suspended because the host issued a command to freeze the remote mirror and copy group. This system resource code can only be returned if a primary volume was queried.
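For reason code 09, the resynchronization described above can be performed with the DS CLI. The following is a minimal sketch only; the storage image IDs and the volume pair 1000:1000 are hypothetical placeholders, and the remote mirror and copy paths are assumed to be operational again.

dscli> mkpprc -dev IBM.2107-7520781 -remotedev IBM.2107-75ABTV1 -type mmir 1000:1000
dscli> lspprc -dev IBM.2107-7520781 1000-1000

If the state of the pair does not allow resynchronization, remove the pair and establish it again:

dscli> rmpprc -dev IBM.2107-7520781 -remotedev IBM.2107-75ABTV1 1000:1000
dscli> mkpprc -dev IBM.2107-7520781 -remotedev IBM.2107-75ABTV1 -type mmir 1000:1000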
personnel. This information must be applied by the IBM service personnel during the installation. Also, the IBM service personnel can configure the HMC to either send notification for every serviceable event, or send notification for only those events that Call Home to IBM. The network management server that is configured on the HMC receives all the generic trap 6 specific trap 3 messages, which are sent in parallel with any events that Call Home to IBM.
Chapter 21.
Remote support
In this chapter, we discuss the support for outbound (Call Home and support data offload) and inbound (remote services) connections for the DS8000. This chapter covers the following topics:
- Call Home for service
- Remote services
- Support data offload
- Optional firewall setup guidelines
21.2.1 Connections
You can configure remote support to use either a dial-up modem connection or an Internet VPN connection.
Dial-up connection
This is a low-speed asynchronous modem connection to a telephone line. This connection typically favors transferring small amounts of data. When configuring for a dial-up connection, have the following information available:
- Which dialing mode to use: either tone or pulse
- Whether a dialing prefix is required when dialing an outside line
(Figure: remote support connection topology — the HMC in the client network reaches the IBM network either over a phone line or over the Internet using a VPN, passing through DMZs on both sides.)
must be "op_volume" or above. Example 21-1 shows the DS CLI command setvpn, which provides a client with the ability to establish a VPN tunnel to connect to IBM, so that the IBM remote support personnel can connect to the DS8000.
Example 21-1 DS CLI command setvpn to start the VPN connection
dscli> setvpn -vpnaddr smc1 -action connect
Date/Time: October 13, 2007 6:06:18 PM CEST IBM DSCLI Version: 5.3.0.939
CMUC00232I setvpn: Secure connection is started successfully through the network
Remote:    Establish a VPN session back to IBM. Work with the standard support functions using the WebSM GUI.
PE:        Establish a VPN session back to IBM. Work with advanced support functions using the WebSM GUI.
Developer: Allowed root access, using ssh, but only if a PE user is logged in using WebSM.
The DS8000 implements a challenge-response authentication scheme for these three authorization levels. The goal of the three levels is that only specially trained support personnel gain the privileges appropriate to their expertise.
Figure 21-3 shows the process flow for establishing a VPN connection. When the IBM support representative needs to connect to the DS8000, support calls in using a modem connection, and an ASCII terminal screen is presented. This screen requests a one-time password, which is generated from a presented challenge key by an IBM system that calculates the one-time password. The one-time password is written to the audit logs, as shown for the password ZyM1NGMs in Example 21-4 on page 510. Once access is granted, the support user establishes a VPN tunnel back to IBM.
Depending on the outbound configuration made during the setup of the HMC, the HMC uses the modem or the network connection to establish the VPN tunnel. The support person then needs to log on to an IBM system, which acts as the other end of the VPN tunnel. Only the workstation from which the VPN tunnel was opened can connect to this VPN tunnel. The tunnel stops automatically if the support person logs out or if the VPN tunnel was not opened by IBM support within ten minutes.
Note: If remote service actions are in progress, it is recommended to inform IBM service before terminating a remote support connection.
The downloaded audit log file provides information about when the remote access started and ended, which DS8000 was accessed, and what remote authority level was applied. The authority levels are described in Table 21-1 on page 508.
Example 21-4 Audit log entries related to a remote support event via modem
U,2007/10/05 18:20:49:000,,1,IBM.2107-7520780,N,8000,Phone_started,Phone_connection_started
U,2007/10/05 18:21:13:000,,1,IBM.2107-7520780,N,8036,Authority_to_root,Challenge Key = 'ZyM1NGMs'; Authority_upgrade_to_root,,,
U,2007/10/05 18:26:02:000,,1,IBM.2107-7520780,N,8002,Phone_ended,Phone_connection_ended
For a detailed description about how auditing is used to record "who did what and when" in the audited system, please see:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD103019
Client and server authentication ensures that the appropriate machines talk to each other. Starting with LMC v5.3.x.x, the DS8000 supports Call Home and offload of support data through SSL. In this configuration, the HMC uses a client-provided Internet connection to connect to the IBM servers. All communications are handled through TCP sockets (which always originate from the HMC) and use SSL to encrypt the data that is being sent back and forth. Optionally, the HMC can also be enabled to connect to the Internet through a client-configured proxy server.

To enable the SSL connection, IBM service credentials are required; the IBM service representative customizes the Remote Support Outbound Connectivity function by checking "Allow an Existing Internet Connection for Service" on the Internet tab. If desired, the client can enter a proxy server and proxy authentication information. To forward SSL sockets, the proxy server must support the basic proxy header functions (as described in RFC 2616) and the CONNECT method. Optionally, basic proxy authentication (RFC 2617) can be configured so that the HMC authenticates before attempting to forward sockets through the proxy server.

The firewall settings between the HMC and the Internet for the SSL setup require four IP addresses to be open on port 443, two for authentication and two for access to IBM Service by geography:
- 129.42.160.48 and 207.25.252.200 (allow HMC access to the IBM System Authentication Server)
- 129.42.160.49 and 207.25.252.204 (allow HMC access to IBM Service for North and South America)
- 129.42.160.50 and 207.25.252.205 (allow HMC access to IBM Service for all other regions)

Note: When configuring a firewall to allow the HMC to connect to these servers, only the IP addresses specific to the client's region are needed.

After the configuration is complete, the service representative performs a test to verify the successful setup. During the test, detailed status information shows that sockets have been successfully opened on the remote IBM server.
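As an illustration only, the following minimal sketch shows matching rules on a Linux (iptables) firewall placed between the HMC and the Internet. The HMC address 10.0.0.50 is an example, and, as noted above, only the service addresses for the client's region are required.

# Allow the HMC (example address 10.0.0.50) to reach the IBM authentication servers on port 443
iptables -A FORWARD -s 10.0.0.50 -d 129.42.160.48 -p tcp --dport 443 -j ACCEPT
iptables -A FORWARD -s 10.0.0.50 -d 207.25.252.200 -p tcp --dport 443 -j ACCEPT
# Allow access to IBM Service for North and South America
# (use 129.42.160.50 and 207.25.252.205 instead for all other regions)
iptables -A FORWARD -s 10.0.0.50 -d 129.42.160.49 -p tcp --dport 443 -j ACCEPT
iptables -A FORWARD -s 10.0.0.50 -d 207.25.252.204 -p tcp --dport 443 -j ACCEPT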
(Figure: the DS HMC, with Ethernet and modem connections, reaches the IBM network from the client network either over a phone line or over the Internet, passing through DMZs.)
Chapter 22.
The storage enclosures are always installed in pairs, one enclosure in the front of the unit and one enclosure in the rear. A storage enclosure pair can be populated with one or two disk drive sets (16 or 32 DDMs). All DDMs in a disk enclosure pair must be of the same type (capacity and speed). If a disk enclosure pair is populated with only 16 DDMs, disk drive filler modules are installed in the vacant DDM slots to maintain the correct cooling airflow throughout the enclosure.

Each storage enclosure attaches to two device adapters (DAs). The DAs are the RAID adapter cards that connect the processor complexes to the DDMs. The DS8000 DA cards always come as a redundant pair, so we refer to them as DA pairs.

Physical installation and testing of the device adapters, storage enclosure pairs, and DDMs are performed by your IBM service representative. After the additional capacity is added successfully, the new storage appears as additional unconfigured array sites.

You might need to obtain new license keys and apply them to the storage image before you start configuring the new capacity; see Chapter 11, Features and license keys on page 227. You cannot create ranks using the new capacity if this causes your machine to exceed its license key limits.
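If new license keys are required, they can be applied with the DS CLI before the new capacity is configured. The following is a minimal sketch; the storage image ID and the key string are placeholders for the values obtained from the DSFA Web site.

dscli> applykey -key 1234-5678-9ABC-DEF0-1234-5678-9ABC-DEF0 IBM.2107-7503461
dscli> lskey IBM.2107-7503461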
Example 22-1 Commands to display storage in a DS8000

First we list the device adapters:

dscli> lsda -l IBM.2107-7503461
Date/Time: 29 October 2007 17:05:28 CET IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461
ID                            State  loc                      FC Server DA pair interfs
=========================================================================================================
IBM.1300-001-00886/R-1-P1-C3  Online U1300.001.1300886-P1-C3  -  0      2       0x0230,0x0231,0x0232,0x0233
IBM.1300-001-00887/R-1-P1-C6  Online U1300.001.1300887-P1-C6  -  1      2       0x0360,0x0361,0x0362,0x0363

Now we list the storage enclosures:

dscli> lsstgencl IBM.2107-7503461
Date/Time: 29 October 2007 17:07:31 CET IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461
ID                         Interfaces                  interadd stordev cap (GB) RPM
=====================================================================================
IBM.2107-D01-00106/R1-S22  0x0231,0x0361,0x0230,0x0360 0x1      16      146.0    10000
IBM.2107-D01-00110/R1-S11  0x0233,0x0363,0x0232,0x0362 0x0      16      146.0    10000
IBM.2107-D01-00125/R1-S21  0x0231,0x0361,0x0230,0x0360 0x0      16      146.0    10000
IBM.2107-D01-00196/R1-S12  0x0233,0x0363,0x0232,0x0362 0x1      16      146.0    10000

Now we list the DDMs:

dscli> lsddm IBM.2107-7503461
Date/Time: 29 October 2007 17:09:28 CET IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461
ID                           DA Pair dkcap (10^9B) dkuse        arsite State
================================================================================================
IBM.2107-D01-00106/R1-P1-D1  2       146.0         array member S5     Normal
IBM.2107-D01-00106/R1-P1-D2  2       146.0         array member S4     Normal
IBM.2107-D01-00106/R1-P1-D3  2       146.0         array member S1     Normal
IBM.2107-D01-00106/R1-P1-D4  2       146.0         array member S6     Normal
....

Now we list the array sites:

dscli> lsarraysite -l
Date/Time: 29 October 2007 17:11:46 CET IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461
arsite DA Pair dkcap (10^9B) diskRPM State      Array diskclass
===============================================================
S1     2       146.0         10000   Assigned   A4    ENT
S2     2       146.0         10000   Assigned   A5    ENT
S3     2       146.0         10000   Assigned   A6    ENT
S4     2       146.0         10000   Assigned   A7    ENT
S5     2       146.0         10000   Assigned   A0    ENT
S6     2       146.0         10000   Assigned   A1    ENT
S7     2       146.0         10000   Unassigned       ENT
S8     2       146.0         10000   Unassigned       ENT
Now using the signature, log on to the DSFA Web site at:
http://www.ibm.com/storage/dsfa
On the View Machine Summary tab, you can see whether the CoD indicator is on or off. In Figure 22-2, you can see an example of a 2107-921 machine that has the CoD indicator.
If instead you see 0900 Non-Standby CoD, then the CoD feature has not been ordered for your machine.
How a user can tell how many CoD array sites have been ordered
From the machine itself, there is no way to tell how many of the array sites in a machine are CoD array sites as opposed to array sites you can start using right away. During the machine order process, this must be clearly understood and documented.
Appendix A.
Data migration
This appendix provides useful information for planning the methods and tools that you use when migrating data from other disk subsystems into the DS8000 storage disk subsystem. It includes the following topics:
- Data migration in open systems environments
- Data migration in System z environments
- IBM Migration Services
- Summary
hdisk0  000007cac12a0429  rootvg  active
hdisk1  000007ca7b577c70  None
3. Add the new volume to the same volume group as the original volume:

root:/ > extendvg rootvg hdisk1
root:/ > lspv
hdisk0  000007cac12a0429  rootvg  active
hdisk1  000007ca7b577c70  rootvg  active
The content of the volume group is shown with the lsvg -l command. Note that in the LPs and PPs columns the proportion is 1 to 1, which means that there is only one physical copy of each logical partition:

root:/ > lsvg -l rootvg
rootvg:
LV NAME    TYPE     LPs  PPs  PVs  LV STATE      MOUNT POINT
hd5        boot     1    1    1    closed/syncd  N/A
hd6        paging   8    8    1    open/syncd    N/A
hd8        jfs2log  1    1    1    open/syncd    N/A
hd4        jfs2     1    1    1    open/syncd    /
hd2        jfs2     21   21   1    open/syncd    /usr
hd9var     jfs2     1    1    1    open/syncd    /var
hd3        jfs2     1    1    1    open/syncd    /tmp
hd1        jfs2     1    1    1    open/syncd    /home
hd10opt    jfs2     1    1    1    open/syncd    /opt
lg_dumplv  sysdump  80   80   1    open/syncd    N/A
download   jfs2     200  200  1    open/syncd    /downloads
4. Run the mirrorvg command to create the relationship and start the copy of the data:
root:/ > mirrorvg rootvg hdisk1
After the mirroring process ends, we have the following output from the lsvg -l command. Now the proportion between the LPs and PPs columns is 1 to 2, which means that each logical partition has two physical copies:

root:/ > lsvg -l rootvg
rootvg:
LV NAME    TYPE     LPs  PPs  PVs  LV STATE      MOUNT POINT
hd5        boot     1    2    2    closed/syncd  N/A
hd6        paging   8    16   2    open/syncd    N/A
hd8        jfs2log  1    2    2    open/syncd    N/A
hd4        jfs2     1    2    2    open/syncd    /
hd2        jfs2     21   42   2    open/syncd    /usr
hd9var     jfs2     1    2    2    open/syncd    /var
hd3        jfs2     1    2    2    open/syncd    /tmp
hd1        jfs2     1    2    2    open/syncd    /home
hd10opt    jfs2     1    2    2    open/syncd    /opt
lg_dumplv  sysdump  80   80   1    open/syncd    N/A
download   jfs2     200  400  2    open/syncd    /downloads
Now, the volume group is mirrored and the data is consistent in the two volumes as shown in the columns LV STATE, which indicate the status syncd for all logical volumes. If you want to remove the mirror, you can use the following command:
#unmirrorvg <vg_name> <hdisk#>
If you want to remove the hdisk1 and keep the hdisk0 active, run the following command:
#unmirrorvg rootvg hdisk1
If you want to remove the hdisk0 and keep the hdisk1 active, run the following command:
#unmirrorvg rootvg hdisk0
Note: You can use the smit utility to perform these procedures using the fast paths: smit mirrorvg to create a mirror or smit unmirrorvg to remove a mirror.
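When this mirroring technique is used to migrate a volume group onto a DS8000 LUN, the overall flow can be sketched as follows. This is a minimal sketch only: it assumes the existing disk is hdisk0 and the new DS8000 volume is hdisk1, and the boot-related steps apply to rootvg only; verify the device names and commands for your AIX level before use.

extendvg rootvg hdisk1       # add the new DS8000 LUN to the volume group
mirrorvg rootvg hdisk1       # mirror all logical volumes onto hdisk1
syncvg -v rootvg             # make sure all partitions are synchronized
bosboot -ad /dev/hdisk1      # rootvg only: create a boot image on the new disk
bootlist -m normal hdisk1    # rootvg only: boot from the new disk from now on
unmirrorvg rootvg hdisk0     # remove the copies that are on the old disk
reducevg rootvg hdisk0       # remove the old disk from the volume group
rmdev -dl hdisk0             # delete the old hdisk device definition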
We show in this section how to create a mirror using the Logical Disk Manager with dynamic disks. In this example, we have two volumes, Disk 8 and Disk 9. The drive letter S: is associated with Disk 8, which is the current volume running on the system. Disk 9 is the new disk that will be part of the mirror. See Figure A-1. The steps are:
Note: To configure new disks on the system, after connecting it to the Windows server, run the Rescan Disks function in Disk Management.
Figure A-2 shows how to convert the new volume to dynamic disk.
1. Now, with Disk 9 as a dynamic disk, the system is ready to initiate the mirroring process. As Figure A-3 on page 527 illustrates, right-click the source volume (S:) and choose the Add Mirror option.
2. The Add Mirror window displays with a list of available disks. Mark the chosen disk and then click Add Mirror; see Figure A-4.
3. The synchronization process starts automatically. At this time, you see that both volumes, Disk 8 and Disk 9, are assigned to the same drive letter, S: (see Figure A-5 on page 528).
4. Figure A-6 shows the volumes after the synchronization process has finished.
The next panels show you how to remove a mirror. We can access this option by right-clicking the selected volume. You have two options:

- Break Mirrored Volume: The selected volume keeps the original drive letter, and the other volume is automatically assigned another letter. From then on, the synchronization process no longer occurs; both drives have different drive letters, but the data is still there.

- Remove Mirror: If you choose to remove the mirror, a window displays asking you which volume you want to remove. The selected volume, after completing the process, becomes a free disk to the operating system, with no drive letter and no data on it.

The steps to follow are:

1. In Figure A-7, we select the option Break Mirrored Volume from Disk 8.
2. After you confirm the operation, you see that Disk 8 changes to drive letter E: (see Figure A-8 on page 530). The data is still available, but the disks will not be fault-tolerant.
3. Figure A-9 shows you the disks with the mirror, and this time, we selected the Remove Mirror option. A window opens in which you select which disk to remove. We select Disk 8.
4. After selecting Remove Mirror, the selected volume becomes available to the operating system, with no drive letter and no data on it. See Figure A-10.
Software-based and hardware-based:
- z/OS Global Mirror (XRC)

Hardware-based:
- Global Mirror
- Global Copy
- FlashCopy in combination with either Global Mirror or Global Copy, or both
The following software products and components can be used for logical data migrations:

- DFSMS allocation management
- Allocation management by CA-ALLOC
- DFSMSdss
- DFSMShsm
- FDRPAS
- LDMF: Logical Data Migration Facility (LDMF) for z/OS is a host-based software solution for data migration in a z/OS environment. LDMF provides the solution needed to migrate and consolidate data sets while at the same time minimizing, and in most cases eliminating, application downtime. LDMF makes it easy to combine smaller capacity volumes to better utilize storage, helps to increase available system resources, supports implementation of tiered storage, and can be used to improve application performance for service level compliance. For more information, refer to the following Web site:
  http://www.softek.com/en/products/ldmf/
- TDMF: Transparent Data Migration Facility (TDMF) for z/OS is a host-based software solution for data migration in a z/OS environment. TDMF is user-initiated and controlled, allows for full system sharing throughout the data center, guarantees full access to the data at any point during a migration operation, and supports dynamic takeover on the part of the target device. For more information, refer to the following Web site:
  http://www.softek.com/en/products/tdmf/
- System utilities, such as:
  - IDCAMS with the REPRO and EXPORT/IMPORT commands
  - IEBCOPY for Partitioned Data Sets (PDS) or Partitioned Data Sets Extended (PDSE)
  - ICEGENER as part of DFSORT, which can handle sequential data but not VSAM data sets, which also applies to IEBGENER
- CA-Favor
- CA-DISK or ASM2
- Database utilities for data that is managed by certain database managers, such as DB2 or IMS. CICS as a transaction manager usually uses VSAM data sets.
Then, perform the logical data set copy operation to the larger volumes. This allows you to use either DFSMSdss logical copy operations or the system-managed data approach. When a level is reached where no data moves anymore because the remaining data sets are in use all the time, some downtime has to be scheduled to perform the movement of the remaining data. This might require you to run DFSMSdss jobs from a system that has no active allocations on the volumes that need to be emptied.
Summary
This appendix shows that there are many ways to accomplish data migration. Thorough analysis of the current environment, evaluation of the requirements, and planning are necessary. Once you decide on one or more migration methods, refer to the documentation of the tools that you want to use to define the exact sequence of steps to take. Special care must be exercised when data is shared between more than one host. For additional information about data migration to DS8000 see Migrating to IBM System Storage DS8000, SG24-7432. The migration might be used as an opportunity to consolidate smaller volumes to larger volumes as the data is migrated to the DS8000.
Appendix B.
Capacity Magic
Because of the additional flexibility and configuration options they provide, it becomes a challenge to calculate the raw and net storage capacity of disk subsystems such as the DS8000 and the DS6000. You have to invest considerable time, and you need an in-depth technical understanding of how spare and parity disks are assigned. You also need to take into consideration the simultaneous use of disks with different capacities and configurations that deploy both RAID-5 and RAID-10. Capacity Magic is there to do the physical (raw) to effective (net) capacity calculations automatically, taking into consideration all applicable rules and the provided hardware configuration (number and type of disk drive sets). Capacity Magic is designed as an easy-to-use tool with a single main dialog. It offers a graphical interface that allows you to enter the disk drive configuration of a DS8000, DS6000, or ESS 800; the number and type of disk drive sets; and the RAID type. With this input, Capacity Magic calculates the raw and net storage capacities; also, new functionality has been introduced into the tool to display the number of extents that are produced per rank. See Figure B-1.
Figure B-1 shows the configuration panel that Capacity Magic provides for you to specify the desired number and type of disk drive sets. Figure B-2 on page 537 shows the resulting output report that Capacity Magic produces. This report is also helpful to plan and prepare the configuration of the storage in the DS8000, because it also displays extent count information.
Note: Capacity Magic is a tool used by IBM and IBM Business Partners to model disk storage subsystem effective capacity as a function of physical disk capacity to be installed. Contact your IBM Representative or IBM Business Partner to discuss a Capacity Magic study.
Disk Magic
Disk Magic is a Windows-based disk subsystem performance modeling tool. It supports disk subsystems from multiple vendors, but it offers the most detailed support for IBM subsystems. The first release was issued as an OS/2 application in 1994, and since then, Disk Magic has evolved from supporting storage control units, such as the IBM 3880 and 3990, to supporting modern, integrated, advanced-function disk subsystems, such as the DS8000, DS6000, ESS, DS4000, and the SAN Volume Controller.

A critical design objective for Disk Magic is to minimize the amount of input that you must enter, while offering a rich and meaningful modeling capability. The following list provides several examples of what Disk Magic can model, but it is by no means complete:
- Move the current I/O load to a different disk subsystem model.
- Merge the current I/O load of multiple disk subsystems into a single DS8000.
- Insert a SAN Volume Controller in an existing disk configuration.
- Increase the current I/O load.
- Implement a storage consolidation.
- Increase the disk subsystem cache size.
- Change to larger capacity disk drives.
- Change to higher disk rotational speed.
- Upgrade from ESCON to FICON host adapters.
- Upgrade from SCSI to Fibre Channel host adapters.
- Increase the number of host adapters.
- Use fewer or more Logical Unit Numbers (LUNs).
- Activate Metro Mirror.
- Activate z/OS Global Mirror.
- Activate Global Mirror.

Figure B-3 shows some of the panels that Disk Magic provides for you to input data. The example shows the panels for input of host adapter information and disk configuration in a hypothetical case of a DS8000 attaching open systems. Modeling results are presented through tabular reports.
Figure B-3 Disk Magic Interfaces and Open Disk input panels
Note: Disk Magic is a tool used by IBM and IBM Business Partners to model disk storage subsystem performance. Contact your IBM Representative or IBM Business Partner to discuss a Disk Magic study.
6786ax_tools.fm
Data Analysis: This is a SAS application, which analyzes the measurements collected for each PAV alias. The results of this analysis are presented in a 3-D graphical representation of the PAV-alias usage. The output data set from the Data Collection phase is used as the input for the Data Analysis phase. A SAS program is provided in the documentation of this tool. If you do not have SAS, there is a description of the data layout in the documentation, so you can write your own reduction program to do the analysis.

The PAV Analysis Tool can run on a z/OS system that is at release level 1.4.1 or higher. Note that the HyperPAV function only runs on a z/OS system that is at release level 1.6 or higher. You can find this tool at:
http://www.ibm.com/servers/eserver/zseries/zos/unix/bpxa1ty2.html
When you are on the Web site, look under PAV Analysis Tool and also download the documentation, which is called pav_analysis.doc. You need to download the three object code files and link-edit them into an authorized Linklib. You can run this tool against IBM and non-IBM disk subsystems.

Note: Contact your IBM Representative or IBM Business Partner to discuss a HyperPAV study.
Create logical configuration on DS8000 and generate copy services scripts from the available config data.
Linux or Windows ThinkPad with Disk Storage Configuration Migrator. Capture current configurations to generate new DS8000 configurations.
The standard CLI interfaces of the ESS and the DS8000 are used to read, modify, and write the logical and Copy Services configuration. All information is saved in a data set in the provided database on a workstation. Through the Graphical User Interface (GUI), the user information gets merged with the hardware information, and it is then applied to the DS8000 subsystem. See Figure B-4. Note: You can use this approach to convert DS4000, HSG80, EMC, and Hitachi to DS8000. For additional information see the following Web site:
http://web.mainz.de.ibm.com/ATSservices
For details about available IBM Business Continuity and Recovery Services, contact your IBM Representative or visit:
http://www.ibm.com/services/continuity
Select your country, and then select the product as the category.
Appendix C.
Project plan
This appendix shows part of a skeleton for a project plan. This skeleton only includes the main topics. You can establish further details within each individual project.
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.
IBM Redbooks
For information about ordering these publications, see How to get IBM Redbooks on page 549. Note that some of the documents referenced here may be available in softcopy only.

- IBM System Storage DS8000 Series: Copy Services with System z servers, SG24-6787
- IBM System Storage DS8000 Series: Copy Services in Open Environments, SG24-6788
- IBM TotalStorage Business Continuity Solutions Guide, SG24-6547
- The IBM TotalStorage Solutions Handbook, SG24-5250
- IBM TotalStorage Productivity Center V2.3: Getting Started, SG24-6490
- Managing Disk Subsystems using IBM TotalStorage Productivity Center, SG24-7097
- IBM TotalStorage: Integration of the SAN Volume Controller, SAN Integration Server, and SAN File System, SG24-6097
- IBM TotalStorage: Introducing the SAN File System, SG24-7057

If you are implementing Copy Services in a mixed technology environment, you might be interested in referring to the following IBM Redbooks about the ESS and the DS6000:

- IBM TotalStorage Enterprise Storage Server Implementing ESS Copy Services in Open Environments, SG24-5757
- IBM TotalStorage Enterprise Storage Server Implementing ESS Copy Services with IBM Eserver zSeries, SG24-5680
- IBM System Storage DS6000 Series: Copy Services with System z Servers, SG24-6782
- IBM System Storage DS6000 Series: Copy Services in Open Environments, SG24-6783
- DFSMShsm ABARS and Mainstar Solutions, SG24-5089
- Practical Guide for SAN with pSeries, SG24-6050
- Fault Tolerant Storage - Multipathing and Clustering Solutions for Open Systems for the IBM ESS, SG24-6295
- Implementing Linux with IBM Disk Storage, SG24-6261
- Linux with zSeries and ESS: Essentials, SG24-7025
Other publications
These publications are also relevant as further information sources. Note that some of the documents referenced here may be available in softcopy only.
IBM System Storage DS8000: Introduction and Planning Guide, GC35-0515
IBM System Storage DS8000: Command-Line Interface User's Guide, SC26-7916
IBM System Storage DS8000: Host Systems Attachment Guide, SC26-7917
IBM System Storage Multipath Subsystem Device Driver User's Guide, SC30-4131
IBM System Storage DS8000: User's Guide, SC26-7915
IBM System Storage DS Open Application Programming Interface Reference, GC35-0516
IBM System Storage DS8000 Messages Reference, GC26-7914
z/OS DFSMS Advanced Copy Services, SC35-0248
Device Support Facilities: User's Guide and Reference, GC35-0033
Outperforming LRU with an adaptive replacement cache algorithm, by N. Megiddo and D. S. Modha, in IEEE Computer, volume 37, number 4, pages 58-65, 2004
System Storage Productivity Center Software Installation and User's Guide, SC23-8823
Online resources
These Web sites and URLs are also relevant as further information sources:
IBM System Storage DS8000 Information Center
http://publib.boulder.ibm.com/infocenter/ds8000ic/index.isp
Fibre Channel host bus adapter firmware and driver level matrix
http://knowledge.storage.ibm.com/servers/storage/support/hbasearch/interop/hbaSearch.do
ATTO
http://www.attotech.com/
Emulex
http://www.emulex.com/ts/dds.html
JNI
http://www.jni.com/OEM/oem.cfm?ID=4
QLogic
http://www.qlogic.com/support/ibm_page.html
IBM
http://www.ibm.com/storage/ibmsan/products/sanfabric.html
McDATA
http://www.mcdata.com/ibm/
Cisco
http://www.cisco.com/go/ibm/storage
CIENA
http://www.ciena.com/products/transport/shorthaul/cn2000/index.asp
CNT
http://www.cnt.com/ibm/
Nortel
http://www.nortelnetworks.com/
ADVA
http://www.advaoptical.com/
Index
Numerics
2805 71
C
cables 152 cache 20, 22, 196 cache management 49 cache pollution 198 caching algorithm 196 Call Home 11, 94, 252, 497, 506 Capacity Magic 196, 536 CCL Command Console LUN (CCL) see CCL CEC 20 cfgmgr 399 chfbvol 350 chpass 179 chuser 179 chvg 399 CIM 164 CIM agent 155, 262 CIMOM 268, 270, 278 CIMOM discovery 273 CKD volume 328 CKD volumes 102 allocation and deletion 107 cluster 45 community name 496 components 43 used in the DS HMC environment 163 Computer Electronics Complex (CEC) 20 concurrent copy session timeout 326 configuration FC port 411 volume 412 configuration flow 252 configuring the DS8000 343, 357 Configuring using DS CLI configuring the DS8000 343, 357 consistency group 127 Consistency Group FlashCopy 124 Consistency group timeout 326 Control Unit Initiated Reconfiguration see CUIR cooling 91 disk enclosure 69 rack cooling fans 91 Copy Services event traps 498 interfaces 137 CUIR 86
A
AAL 5758 benefits 58 activate licenses applying activation codes using the DS CLI 241 functions 228 Adaptive Multi-stream Prefetching 14 Adaptive Multi-stream Prefetching (AMP) 47, 198 address groups 111 addressing policy 332 Advanced Function Licenses 172 activation 172 agile view 432 AIX 390 boot support 400 I/O access methods 397 LVM configuration 396 on iSeries 482 WWPN 390 AIX MPIO 393 alias devices 335 allocation 107 allocation unit 106 AMP 14, 47, 198 AMP (Adaptive Multistream Prefetching) 4 applying activation codes using the DS CLI 241 architecture 46 array sites 98 arrays 57, 98 arrays across loops (AAL) 188 arrays across loops see AAL audit logging 274 Audittrace 274 Audittrace.log 274 authentication scheme 508
B
backup and restore 531 base frame 44 Basic Disk 385 battery backup assemblies 69 battery backup unit 44 battery backup unit see BBU BBU 44, 91 BM FlashCopy SE 105 boot support 372 Business Continuity 4 business continuity 9
D
DA 53 Fibre Channel 188 daemon 494 data migration 532 backup and restore 531
Disk Magic 537 Disk Storage Configuration Migrator 540 IBM Migration Services 534 logical migration 532 open systems environments 522 physical and logical data migration 533 physical migration 532 summary 534 volume management software 523 zSeries environments 532 data placement 200 Data Set FlashCopy 123 data set FlashCopy 127 datapath query 379, 381, 391 DB2 199 DDM 87, 188 DDMs 58 hot plugable 90 destage 46, 106 device adapter see DA Device Mapper for multipath I/O (DM-MPIO) 404 Device Special Files 432 device specific modules 377 DFSA 230 disk bay 50 disk drive set 8 disk drives capacity 157 FATA 59 disk enclosure 53 power and cooling 69 Disk Magic 196, 537 disk manager 380 Disk Storage Configuration Migrator 540 Disk Storage Feature Activation (DSFA) 230 disk subsystem 53 disk virtualization 97 Diskpart 380 DM-MPIO 404 DS API 118 DS CIM Command Line Interface (DSCIMCLI) 167 DS CLI 118, 139, 154 applying activaton codes 241 configuring second HMC 181 console user management 177 DS Command-Line Interface see DS CLI DS HMC external 180 DS HMC planning 161 activation of Advanced Function Licenses 172 components used in the DS HMC environment 163 environment setup 169 hardware and software setup 171 host prerequisites 174 latest DS8000 microcode 173 logical flow of communication 163 machine signature 173 maintenance windows 174 order confirmation code 173
serial number 173 setup 170 technical environment 163 time synchronization 174 using DS SM frontend using the DS CLI 171 using the DS Open API 171 DS Open API 140, 171 DS Open Application Interface see DS Open API DS SM 118, 170 configuring second HMC 182 user management 179 using frontend DS Storage Manager 138 DS Storage Manager GUI 45 DS Storage Manager see DS SM DS6000 business continuity 9 Capacity Magic 536 compared to DS8000 13 comparison with other DS family members 12 dynamic LUN/volume creation and deletion 11 large LUN and CKD volume support 11 simplified LUN masking 12 SNMP configuration 497 usermanagement using DS CLI 177 DS8000 AAL activate license functions 228 activation of Advanced Function Licenses 172 AIX 390 AIX MPIO 393 applying activation codes using the DS CLI 241 architecture 46 arrays 57 backup and restore 531 base frame 44 battery backup assemblies 69 boot support 372 Business Continuity 4 cache management 49 Capacity Magic 536 Command Console LUN common set of functions 12 compared to DS6000 13 compared to ESS 12 components 43 components used in the DS HMC environment 163 configuration flow 252 configuring 343, 357 considerations prior to installation 146 DA data migration 522 data migration in zSeries environments 532 data placement 199 DDMs 58 disk drive set 8 disk enclosure 53 Disk Magic 537 Disk Storage Configuration Migrator 540
disk subsystem 53 distinguish Linux from other operating systems 400 DS CLI console DS HMC planning 161 DS HMC setup 170 DS8100 Model 921 20 environment setup 169 EPO 45, 92 ESCON 152 existing reference materials 401 expansion enclosure 57 expansion frame 45 external DS HMC 180 FC port configuration 411 Fibre Channel disk drives 7 Fibre Channel/FICON 152 FICON 14 floor type and loading 148 frames 44 general considerations 370 HA hardware and software setup 171 hardware overview 6 HBA and operating system settings 373 host adapter 7 host interface and cables 152 host prerequisites 174 I/O enclosure 52 I/O priority queuing 15 IBM Migration Services 534 IBM Redbooks 547 input voltage 151 interoperability 11 Linux 400 Linux issues 402 logical flow of communication 163 logical migration 532 machine signature 173 maintenance windows 174 microcode 173 model comparison 26 model naming conventions 18 model upgrades 29 modular expansion 44 multipathing support 372 multiple allegiance 15 network connectivity planning 153 online resources 548 OpenVMS 411 openVMS volume shadowing 415 order confirmation code 173 other publications 547 PAV performance 13, 185 physical and logical data migration 533 physical migration 532 placement of data 200 planning for growth 159 positioning 12 power and cooling 69
power connectors 150 power consumption 151 power control 92 power control features 151 Power Line Disturbance (PLD) 152 power requirements 150 POWER5 6, 49, 189 PPS 69 prerequisites and enhancements 442 processor complex 49 processor memory 51 project plan skeleton 544 project planning 255, 287 rack operator panel 45 RAS Remote Mirror and Copy connectivity 156 remote power control 156 remote support 155 RIO-G 51 room space and service clearance 149 RPC SAN connection 156 scalability 11, 27 benefits 2829 for capacity 27 for performance 28 SDD 391 SDD for Windows 374 serial number 173 series overview 5 server-based 48 service 11 service processor 51 setup 11 S-HMC 70 SMP 48 spares 57 sparing considerations 157 SPCN storage capacity 8 stripe size 204 summary 534 supported environment 8 switched FC-AL 56 technical environment 163 time synchronization 174 troubleshooting and monitoring 409 updated and detailed information 370 using DS SM frontend using the DS CLI 171 using the DS Open API 171 volume configuration 412 volume management software 523 VSE/ESA 453 Windows 373 Windows Server 2003 VDS support 386 z/OS considerations 443 z/OS Metro/Global Mirror 10 z/VM considerations 451 zSeries performance 14
DS8000 Interoperability Matrix 370 DS8100 Model 921 20 processor memory 51 DS8300 Copy Services 39 FlashCopy 40 LPAR 34 LPAR benefits 40 LPAR implementation 35 LPAR security 38 Model 9A2 configuration options 37 processor memory 51 Remote Mirror and Copy 40 dscimcli 155 DSFA 232 DSMs 377 Dynamic alias 335 Dynamic Disk 385 dynamic LUN/volume creation and deletion 11 Dynamic Volume Expansion 350, 362, 380 Dynamic volume expansion 109, 116, 398
E
eam rotateexts 348 eam rotatevols 348 eConfig 305 Element Manager 165, 267 Element Manager list 278 Enhanced Error Handling (EEH) 50 enterprise configuration 300 EPO 45, 92 EPO switch 46 Error Checking and Correcting (ECC) 77 ESCON 7, 46, 85, 141, 152 architecture 67 distances 67 Remote Mirror and Copy 67 supported servers 67 ESS compared to DS8000 12 ESS 800 Capacity Magic 536 ESS Network Interface (ESSNI) 260 Ethernet switches 70, 94 expansion frame 45 Extended Remote Copy (XRC) 10, 135 extent pool 105 extent pools 100 extent rotation 107 extent type 101 external DS HMC 180 External Tools 271
differences with FC 60 evolution of ATA 59 performance 195 positioning vs FC 61 the right application 64 vs. Fiber Channel 66 FC port configuration 411 FC-AL non-switched 54 overcoming shortcomings 186 switched 7 fcmsutil 431 FCP 20 Fibre Attached Technology Adapted 59 Fibre Channel distances 69 host adapters 68 Fibre Channel ATA (FATA) 7 Fibre Channel/FICON 152 FICON 7, 14, 21, 46, 85 host adapters 68 Firefox 267 firewal 512 fixed block LUNs 102 FlashCopy 40, 118, 482 benefits 121 Consistency Group 124 data set 123 establish on existing RMC primary 124, 126 inband commands 9, 127 Incremental 122 incremental 9 Multiple Relationship 124 multiple relationship 9 no background copy 121 options 122 persistent 126 Refresh Target Volume 122 FlashCopy SE 9, 119 floor type and loading 148 Fluxbox 293 frames 44 base 44 expansion 45 frontend functions activate license 228
G
GDS for MSCS 389 general considerations 370 Geographically Dispersed Sites for MSCS see GDS for MSCS Global Copy 10, 118, 129, 136, 479 Global Mirror 10, 118, 130, 137 how works 131 go-to-sync 129
F
FATA 27, 59 FATA disk drives capacity 157
H
HA 7, 67 HACMP-Extended Distance (HACMP/XD) 194 hardware setup 171 Hardware management console 154 HCD/IOCP 328 Historic Data Retention 273 hit ratio 49 HMC 45, 93, 137, 154 host interface 152 prerequisites microcode 174 host adapter four port 188 host adapter see HA host adapters Fibre Channel 68 FICON 68 host attachment 112 host connection zSeries 86 host considerations AIX 390 AIX MPIO 393 boot support 372 Command Console LUN 414 distinguish Linux from other operating systems 400 existing reference materials 401 FC port configuration 411 general considerations 370 HBA and operating system settings 373 Linux 400 Linux issues 402 multipathing support 372 OpenVMS 411 openVMS volume shadowing 415 prerequisites and enhancements 442 SDD 391 SDD for Windows 374 support issues 400 supported configurations (RPQ) 372 troubleshooting and monitoring 409 updated and detailed information 370 VDS support 386 volume configuration 412 VSE/ESA 453 Windows 373 z/OS considerations 443 z/VM considerations 451 zSeries 442 Hot pluggable 90 HyperPAV 14, 172, 223224, 335 HyperPAV licence 230 Hypervisor 79
I
I/O enclosure 52, 69 I/O latency 49
I/O priority queuing 15, 224 i5 / OS 14 IASP 460 IBM FlashCopy SE 9, 40, 346 IBM Migration Services 534 IBM Redbooks 547 IBM Subsystem Device Driver (SDD) 372 IBM System Storage Interoperability Center (SSIC) 141, 174 IBM System Storage Interoperation Center (SSIC) 370 IBM System Storage N Series 203 IBM TotalStorage Multipath Subsystem Device Driver see SDD IBM TotalStorage Productivity Center 8 IBM TotalStorage Productivity Center for Data 257 IBM TotalStorage Productivity Center for Disk 257 inband commands 9, 127 Incremental FlashCopy 9, 122 Independent Auxiliary Storage Pool 460 Independent Auxiliary Storage Pools see IASP index scan 199 initckdvol 107 initfbvol 107 Input Output Adapter (IOA) 457 Input Output Processor (IOP) 457 input voltage 151 installation DS8000 checklist 146 inter-disk allocation 396 Internet Explorer 267 Interoperability Matrix 370 IOPS 194 ioscan 432 iSeries 482 adding multipath volumes using 5250 interface 470 adding volumes 458 adding volumes to an IASP adding volumes using iSeries Navigator 471 AIX on 482 avoiding single points of failure 468 cache 476 changing from single path to multipath 475 changing LUN protection 457 configuring multipath 469 connecting via SAN switches 479 Global Copy 479 hardware 456 Linux 483 logical volume sizes 456 LUNs 104 managing multipath volumes using iSeries Navigator 472 Metro Mirror 479 migration to DS8000 479 multipath 467 multipath rules for multiple iSeries systems or partitions 475 number of fibre channel adapters 477 OS/400 data migration 480 OS/400 mirroring 479
planning for arrays and DDMs 476 protected versus unprotected volumes 457 recommended number of ranks 478 sharing ranks with other servers 478 size and number of LUNs 477 sizing guidelines 475 software 456 using 5250 interface 458
M
machine reported product data (MPRD) 506 machine signature 173 maintenance windows 174 Management Information Base (MIB) 494 managepwfile 178 Metro Mirror 10, 118, 128, 136, 479 MIB 494, 496 microcode updates installation process 488 Microsoft Cluster 380 Microsoft Multi Path Input Output 377 migrating using volume management software 523 mirroring 397 mkckdvol 361 mkfbvol 347 mkrank 247 mkuser 179 modular expansion 44 Most Recently Used (MRU) 198 MPIO 377 MRPD 506 MSCS 380 multipath storage solution 377 multipathing support 372 multiple allegiance 15 Multiple Reader 4, 10, 15 Multiple Relationship FlashCopy 9, 124 multi-rank 206
J
Java webstart 266
L
large LUN and CKD volume support 11 LCU 324 LCU type 326 Least Recently Used (LRU) 198 Legacy DSF 432 Level 3 cache 49 Linux 400 on iSeries 483 Linux issues 402 localhost 179 Logical configuration 146 logical configuration 308 logical control unit (LCU) 110 logical migration 532 logical partition see LPAR logical size 105 logical subsystem see LSS logical volumes 102 long busy state 326 LPAR 32, 47, 74 application isolation 33 benefits 40 Copy Services 39 DS8300 34 DS8300 implementation 35 Hypervisor 79 increased flexibility 34 production and test environments 33 security through Power Hypervisor 38 storage facility image 34 why? 33 lsfbvol 381 lshostvol 420 LSS 110 lsuser 177 LUN 482 LUN polling 421 LUNs allocation and deletion 107 fixed block 102 iSeries 104 masking 12 LVM configuration 396 mirroring 397 striping 396
N
network connectivity planning 153 Network Interface Server 299 nickname 333 NMS 494 nocopy 105, 206 nonvolatile storage (NVS) 47 non-volatile storage see NVS Nucleus Initialization Program (NIP) 218 NVS 47, 51, 83 NVS recovery
O
offline configurator 297 offloadauditlog 510 online resources 548 open systems cache size 207 performance 207 sizing 207 OpenVMS 411 openVMS volume shadowing 415 order confirmation code 173 OS/400 data migration 480 OS/400 mirroring 479 other publications 547
Out of Band Fabric agent 282 Out-of-Band Fabric Agent 271 over provisioning 104 overhead 106
P
panel rack operator 45 Parallel Access Volumes see PAV partitioning concepts 32 PAV 14 performance data placement 199 FATA disk drives 66, 195 FATA positioning 61 FATA the right application 64 open systems 207 determing the connections 207 determining the number of paths to a LUN 207 where to attach the host 208 workload characteristics 199 z/OS 209 channel consolidation 211 configuration recommendations 211 connect to zSeries hosts 209 disk array sizing 194 processor memory size 211 Persistent FlashCopy 126 PFA 90 physical and logical data migration 533 physical migration 532 physical partition (PP) 396 physical paths 282 physical planning 145 delivery and staging area 147 floor type and loading 148 host interface and cables 152 input voltage 151 network connectivity planning 153 planning for growth 159 power connectors 150 power consumption 151 power control features 151 Power Line Disturbance (PLD) 152 power requirements 150 Remote Mirror and Copy connectivity 156 remote power control 156 remote support 155 room space and service clearance 149 sparing considerations 157 storage area network connection 156 physical size 105 placement of data 200 planning DS Hardware Management Console 143 logical 143 physical 143 project 143 planning for growth 159
positioning 12 power 69 BBU building power lost 92 disk enclosure 69 fluctuation protection 92 I/O enclosure 69 PPS processor enclosure 69 RPC 91 power and cooling 69, 91 BBU PPS rack cooling fans 91 RPC 91 power connectors 150 power consumption 151 power control features 151 Power Hypervisor 38 Power Line Disturbance (PLD) 152 power requirements 150 POWER5 6, 49, 189 POWER5+ 29 PPRC-XD 129 PPS 44, 91 Predictive Failure Analysis see PFA prefetch wastage 198 primary power supplies 44 primary power supply see PPS priority queuing 224 probe job 270 processor complex 49, 75 processor enclosure power 69 project plan considerations prior to installation 146 physical planning 145 roles 146 skeleton 544 project plan skeleton 544 project planning 255, 287 information required 147 PTC 121 PVLINKS 435
R
rack operator panel 45 rack power control 44 rack power control cards see RPC RAID-10 AAL 89 drive failure 89 implementation 89 theory 88 RAID-5 drive failure 88 implementation 88 theory 88 RANDOM 14 random write 196 Index
ranks 99, 105 RAS 73 CUIR disk scrubbing 91 disk subsystem 87 disk path redundancy 87 EPO 92 fault avoidance 76 first failure data capture 76 host connection availability 84 Hypervisor 79 I/O enclosure 79 metadata 80 microcode updates 93 installation process 488 naming 74 NVS recovery permanent monitoring 76 PFA power and cooling 91 processor complex 76 RAID-10 88 RAID-5 88 RIO-G 79 server 80 server failover and failback 80 S-HMC 93 spare creation 89 Recovery Point Objective see RPO Redbooks Web site 549 Contact us xx reference materials 401 related publications 547 help from IBM 549 how to get IBM Redboks 549 online resources 548 other publications 547 reliability, availability, serviceability see RAS Remote Desktop, 261 Remote Mirror and Copy 40, 118, 156 ESCON 67 Remote Mirror and Copy function see RMC Remote Mirror and Copy see RMC remote power control 156 remote support 155 reorg 108 repcapalloc 346 report 270 repository 105, 346 repository size 105 Requests for Price Quotation see RPQ RIO-G 5051 RMC 10, 128 Global Copy 129 Global Mirror 130 Metro Mirror 128 rmsestg 346 rmuser 179 role 265 room space 149
rotate extents 107, 330 rotate volumes 320, 330, 348, 361 Rotated volume 107 rotated volume 107 rotateexts 351, 363 RPC 44, 69 RPO 137 RPQ 372
S
SAN 85 SAN LUNs 482 SAN Volume Controller (SVC) 203 SARC 1314, 49, 196, 198 SATA 59 scalability 11 DS8000 scalability 193 scripts 380 SDD 14, 208, 391 for Windows 374 SDDDSM 377 SEQ 14 SEQ list 198 Sequential Adaptive Replacement Cache 196 Sequential Prefetching in Adaptive Replacement Cache see SARC Sequential-prefetching in Adaptive Replacement Cache (SARC) 49 serial number 173 server RAS 80 server-based SMP 48 service clearance 149 service processor 51 session timeout 326 settings HBA and operating system 373 SFI 74 S-HMC 7, 70, 93 showckdvol 361 showfbvol 362 showpass 178 showsestg 346 showuser 178 shutdown 380 simplified LUN masking 12 Simulated manager 297, 299 simultaneous multi-threading (SMT) 48 sizing open systems 207 z/OS 209 SMIS 140 SMS alert 274 SMT 48 SMUX 494 SNIA 140 SNMP 494 configuration 497, 503 Copy Services event traps 498
notifications 497 preparation for the management software 504 preparation on the DS HMC 503 preparation with DS CLI 504 trap 101 498 trap 202 500 trap 210 500 trap 211 500 trap 212 501 trap 213 501 trap 214 501 trap 215 501 trap 216 501 trap 217 502 SNMP agent 494495 SNMP event 274 SNMP manage 494 SNMP trap 494, 496 SNMP trap request 494 software setup 171 Space Efficient 343 space efficient volume 104 Space efficient volumes 116 spares 57, 89 floating 90 sparing 157 sparing considerations 157 SPCN 51 SSIC 370, 411 SSID 326 SSL connection 11 SSPC 8, 256 install 261 SSPC user management 264 Standby Capacity on Demand see Standby CoD Standby CoD 8 storage area network connection 156 storage capacity 8 storage complex 74 storage facility image 34, 74 addressing capabilities 42 hardware components 36 I/O resources 36 processor and memory allocations 37 RIO-G interconnect separation 37 Storage Hardware Management Console see S-HMC Storage image 328 storage image 74 storage LPAR 74 Storage Management Initiative Specification (SMIS) 140 Storage Networking Industry Association (SNIA) 140 Storage Pool Striping 200, 206, 330, 397 Storage pool striping 101, 107 storage pool striping 348 storage unit 74 storport 377 stripe 105 size 204 striped volume 108
Subsystem Device Driver (SDD) 372 Subsystem Device Driver DSM 377 summary 534 switched FC-AL 7 advantages 55 DS8000 implementation 56 System i protected volume 457 system power control network see SPCN System Storage Productivity Center (SSPC) 7, 154, 169
T
target volume 105 thin provisioning 104 Three Site BC 258 time synchronization 174 Tivoli Enterprise Console 274 tools Capacity Magic 536 topology 8 TotalStorage Productivity Center (TPC) 164 TotalStorage Productivity Center for Fabric 257 TotalStorage Productivity Center for Replication (TPC-R) 5, 94, 258 TotalStorage Productivity Standard Edition (TPC-SE) 257 TPC Enterprise Manager 268 TPC for Replication 139 TPC topology viewer 279 TPC-R 139 track 106, 119 Track Space Efficient (TSE) 330 Track Space Efficient Method (TSE) 320 trap 494, 496 troubleshooting and monitoring 409 TSRMGUI.jar 266
U
UDID 412 Unit Device Identifier (UDID) 412 user managemen using DS SM 179 user management using DS CLI 177 using DS SM 179 user role 265
V
VDS support Windows Server 2003 386 virtual space 105 virtualization abstraction layers for disk 97 address groups 111 array sites 98 arrays 98 benefits 115 concepts 95 definition 96
extent pools 100 hierarchy 114 host attachment 112 logical volumes 102 ranks 99 storage system 96 volume group 113 volume space efficient 104 volume group 317 volume groups 113 volume manager 108 volumes CKD 102 VPN 11 VSE/ESA 453 vxdiskadm 437
W
WebSM 293 Windows 373 SDD 374 Windows 2003 377 WLM 218 workload 207 Workload Manager 218 write penalty 196 WWPN 390
X
XRC 10, 135 XRC session 326
Z
z/OS considerations 443 VSE/ESA 453 z/OS Global Mirror 10, 15, 21, 118, 135, 137 z/OS Global Mirror session timeout 326 z/OS Metro/Global Mirror 10, 132, 135 z/OS Workload Manager 218 z/VM considerations 451 zSeries 532 host connection 86 host considerations 442 performance 14 prerequisites and enhancements 442