Rafael Noas
Technical Consultant
Agenda
Hitachi Data Systems Portfolio
Hitachi Adaptable Modular Storage 2000 family Product Overview
Key Features
Performance
Positioning (cache / host ports / disks / performance):
- AMS2100: 8GB cache, 4 x FC / iSCSI, 120 disks, 30,000 IOPS
- AMS2300: 16GB cache, 8 x FC / iSCSI, 240 disks, 50,000 IOPS
- AMS2500: 32GB cache, 16 x FC / iSCSI mix, 480 disks, 90,000 IOPS
- Enterprise class (for comparison): 1TB cache, 192 x FC, 2048 disks, 400,000 IOPS
Capacity
Throughput
Enterprise design
Host multi-pathing
Hardware load balancing
Dynamic Optimized Performance
Dense storage, SAS and SATA
99.999+% reliability
Adaptable Modular Storage 2100
- 4-8GB cache
- 4 FC, or 8 FC, or 4 FC and 4 iSCSI ports
- Mix of up to 159 SSD, SAS, and SATA-II disks
- Up to 313 TB
- Up to 2048 LUNs
- Up to 1024 hosts
Adaptable Modular Storage 2300
- 8-16GB cache
- 8 FC, or 16 FC, or 8 FC and 4 iSCSI ports
- Mix of up to 240 SSD, SAS, and SATA-II disks
- Up to 472 TB
- Up to 4096 LUNs
- Up to 2048 hosts

Scalability
Adaptable Modular Storage 2500
- 16-32GB cache
- 16 FC, or 8 iSCSI, or 8 FC and 4 iSCSI ports
- Mix of up to 480 SSD, SAS, and SATA-II disks
- Up to 945 TB
- Up to 4096 LUNs
- Up to 2048 hosts
Hitachi Adaptable Modular Storage 2100
Protection
- RAID Groups/System: 50
- RAID levels: -6, -5, -1+0, -1, -0*
Trays
- 3U trays (max of 7): 2 to 15 disk drives/tray
- 4U trays (max of 3): 4 to 48 disk drives/tray
- 4U controller: 4 to 15 disk drives
Hitachi Adaptable Modular Storage 2300
Protection
- RAID Groups/System: 75
- RAID levels: -6, -5, -1+0, -1, -0*
- SSD: 200GB
Hitachi Adaptable Modular Storage 2500
Protection
- RAID Groups/System: 100
- RAID levels: -6, -5, -1+0, -1, -0*
- SSD: 200GB
The Active-Active Symmetric design allows a host I/O arriving on any front-end port to be processed by either controller. Every LUN is associated with one of the two controllers through the LUN Management Table. If a host request for a LUN arrives on a port of the controller that currently manages that LUN, this is called Direct Mode. If the request is for a LUN managed by the other controller, it is known as Cross Mode. In Cross Mode, the local Intel processor forwards the request over the inter-DCTL communications bus to the Intel CPU on the other controller for execution.
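The Direct/Cross Mode decision above can be sketched as a simple lookup. This is an illustration of the idea, not the controller firmware; the table contents and controller names are assumptions.

```python
# Sketch (not Hitachi's implementation): classify incoming I/O as
# Direct Mode or Cross Mode using a LUN Management Table that records
# which controller currently manages each LUN.

LUN_MANAGEMENT_TABLE = {0: "CTL0", 1: "CTL0", 2: "CTL1", 3: "CTL1"}

def classify_io(receiving_controller: str, lun: int) -> str:
    """Return 'direct' if the receiving port's controller manages the
    LUN, else 'cross' (forwarded over the inter-DCTL bus)."""
    owner = LUN_MANAGEMENT_TABLE[lun]
    return "direct" if owner == receiving_controller else "cross"

print(classify_io("CTL0", 1))  # direct: CTL0 manages LUN 1
print(classify_io("CTL0", 3))  # cross: forwarded to CTL1
```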
[Diagram: back-end SAS architecture. Controller 0's internal enclosure (15 HDDs) connects through its SAS controller and expander over 4-link wide SAS ports. Expansion enclosures of 15 disks each (SAS and SATA intermixed by slot) contain their own expanders and are daisy-chained with 2 SAS wide cables (8 links x 3Gbit/s), scaling to 240 disks in 1 + 15 enclosures.]
This is the 15-disk shelf, which may be freely populated with SSD, SAS, and SATA-II disks. The two SAS expanders in each enclosure attach each disk port to any of the four SAS links in that expander. The SATA-II canisters provide the dual-port logic. Additional enclosures are daisy-chained from the outbound SAS link port.
This is a representation of the high-density shelf, used with 38 SAS disks or 48 SATA disks (no intermix within a shelf). The two pairs of SAS expanders in each enclosure attach each disk port to any of the four SAS links in that expander. The SATA-II canisters provide the dual-port logic that is native to SAS disks. Additional enclosures are daisy-chained from the outbound SAS link port.
[Table: maximum SAS and SATA HDD counts for AMS2100 and AMS2300 by tray mix, comparing standard trays (3U, 15 disks) with dense trays (4U, 38 SAS / 48 SATA). With only standard trays: 120 HDDs (AMS2100) and 240 HDDs (AMS2300); with dense trays intermixed, maximums range up to 159 SATA HDDs on the AMS2100 and up to 234 SATA HDDs on the AMS2300.]
* SAS and SATA dense trays can be intermixed in the same system. The maximum HDD count depends on the number of SAS and SATA trays.
[Table: maximum SAS and SATA HDD counts for the AMS2500 by tray mix, across a system tray (4U, 0 disks), standard trays (3U, 15 disks), and dense trays (4U, 38 SAS / 48 SATA). With only standard trays the maximum is 480 HDDs; with dense trays intermixed, maximums range from 372 up to 480 depending on the tray combination.]
* SAS and SATA dense trays can be intermixed in the same system. The maximum HDD count depends on the number of SAS and SATA trays.
[Diagram: dense tray, top and rear views: 4 power supplies, 4 ENC boards, up to 48 HDDs, cable holder, and lever.]
Sustainable: 300GB and 600GB 2.5-inch SFF drives save up to half the power and cooling compared with 3.5-inch HDDs.
[Diagram: enclosure LEDs and connectors. Unit LEDs: POWER (green), READY (green), LOCATE (yellow). Each 2.5-inch drive canister has ACTIVE (green) and ALARM (red) LEDs. Each ENC board (ENC#0, ENC#1) carries mini-SAS connectors for the up link (IN) and down link (OUT) with LINK LEDs, plus a console port. Each power supply (PS#0, PS#1) has RDY, AC_IN, and ALM indicators and a power connection point.]
[Table: enclosure specifications, 2U tray versus 3U tray (DF800/DF800E).]
Item                       2U tray                           3U tray
# of HDU                   24 (SAS 2.5-inch, 10K rpm)        15 (SAS/SATA/SSD/NL-SAS, 3.5-inch)
# of ENC                                                     same as 2U
# of PSU                                                     same as 2U
# of fans (self-powered)   12 (6 per PSU)                    same as 2U
Back-end path              input 1 path (4 WideLink),        same as 2U
                           output 1 path (4 WideLink)
Expander                   PM8005 (6G, 36-port)              SXP-24x3G
Flash                      8MB                               2MB
SRAM                       4MB x 2
SEEPROM                    32KB (I2C)                        2KB
Power supply type          12V/5V/3.3V/1.2V/1.0V             same as 2U
Mini-SAS cable             IN: key 4,6; OUT: key 2,4;        same as 2U
                           lengths 1m, 3m, 5m
[Diagram: chassis configurations. AMS 2300 (Monterey M): 4U controller chassis (M CTLR1/M CTLR2) with dual power supplies. AMS 2500 (Monterey H): 4U controller chassis (H CTRL1/H CTRL2) with dual power supplies. Monterey S HDD chassis (S CTLR1/S CTLR2). 3U add shelf with ENC 1/ENC 2 and dual power supplies; the add shelf is usable with the S/M/H models, is 3Gb/sec capable, and its ENC differs from the upgrade ENC.]
Batteries

Standard configurations:
- AMS2100: 2 internal, 72 hours / 48 hours
- AMS2300: 2 internal, 36 hours / 24 hours
- AMS2500: 4 internal, 48 hours / 24 hours

Optional configurations:
- AMS2500: 1 external + 4 internal, 96 hours / 48 hours
- AMS2500: 2 external + 4 internal, 168 hours / 96 hours
Hardware Details (3 models; host interface options: dedicated FC or iSCSI)

Adaptable Modular Storage 2100:
- Symmetric active/active controllers
- 4 or 8 GB cache
- 1.67GHz Celeron, single-core processor
- PCIe internal bus
- 15 internal disk slots
- Dual redundant power supplies
- Dual batteries
- 1024 hosts

Adaptable Modular Storage 2300:
- Symmetric active/active controllers
- 8 or 16 GB cache
- 1.67GHz Xeon, single-core processor
- PCIe internal bus
- 15 internal disk slots
- Dual redundant power supplies
- Dual batteries
- 2048 hosts

Adaptable Modular Storage 2500:
- Symmetric active/active controllers
- 16 or 32 GB cache
- 2GHz Xeon, dual-core processor
- PCIe internal bus
- 0 internal disk slots
- Dual redundant power supplies
- Dual batteries
- 2048 hosts

Data-in-place upgrades: N/A
Hardware Details

Item                       AMS2100     AMS2300     AMS2500
Maximum cache partitions   16          32          32
Drive interface            SSD, SAS, and SATA-II drives (all models)
RAID levels                -6, -5, -1+0, -1, -0 (all models)
Max # of RAID Groups       50          75          100
Max # of LUs               2048        4096        4096
Max LU size                60TB        60TB        60TB
Hardware Details

Item                AMS2100     AMS2300     AMS2500
Maximum capacity    313TB       472TB       944TB

Supported drives and trays: see the tray configuration tables above.

Supported operating systems: Microsoft Windows 2000, Windows Server 2003, and Windows Server 2008 (including Hyper-V); Sun Solaris (SPARC and x64); VMware; Red Hat Enterprise Linux; SuSE Linux; RedFlag Linux; Oracle Enterprise Linux; Asianux; MiracleLinux; HP-UX; IBM AIX; Mac OS X; Novell NetWare; HP Tru64 UNIX; HP OpenVMS.
Performance Monitor
Capabilities
ShadowImage (Clone)
Copy-on-Write (Snapshot)
Device Manager
Tuning Manager
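The Copy-on-Write (Snapshot) capability listed above rests on a standard technique: before a block of the primary volume is overwritten, the old data is preserved, so the snapshot can still present the point-in-time image. A minimal generic sketch of that idea (not the product's internals; the class and structures are illustrative):

```python
# Generic copy-on-write snapshot sketch: preserve a block's old data
# the first time it is overwritten after the snapshot is taken.

class CowSnapshot:
    def __init__(self, volume):
        self.volume = volume          # live primary volume (list of blocks)
        self.saved = {}               # block index -> pre-overwrite data

    def write(self, index, data):
        if index not in self.saved:   # first overwrite since the snapshot:
            self.saved[index] = self.volume[index]  # save the old block
        self.volume[index] = data     # then update the live volume

    def read_snapshot(self, index):
        # Snapshot view: saved copy if the block changed, else live data.
        return self.saved.get(index, self.volume[index])

vol = ["a", "b", "c"]
snap = CowSnapshot(vol)
snap.write(1, "B")
print(vol[1], snap.read_snapshot(1))  # B b
```

Only changed blocks consume extra space, which is why snapshots are far cheaper than full clones (ShadowImage) for short-lived copies.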
Capabilities
Replication Manager
Protection Manager
Capabilities

Benefits
[Diagram: traditional arrays versus AMS 2000 path management. Traditional: LUs are created on a specific controller (CTL0 or CTL1) and the user is required to set primary and alternate paths in the path manager on each HBA. AMS 2000: HBAs can connect to any controller, the path manager uses its standard load-balancing setting, and the user is not required to set paths.]
Benefits
[Diagram: automatic load balancing. (1) The controllers send information to the performance monitor; (2) analysis; (3) planning; (4) the load balance function changes the setting automatically. Example: CTL0 at 70% CPU usage and CTL1 at 10% (LUN 0-2 on CTL0, LUN 3 on CTL1) are rebalanced to 40% each by moving LUN ownership.]
Unique for midrange:

Symmetric Active-Active Controllers: during normal operation, I/O can travel down any available path to any front-end port using the native host OS multipathing capabilities. This allows I/O to be balanced over all available paths without impacting performance.

Automatic Controller Load Balancing: if the load on the controllers becomes unbalanced, it is automatically rebalanced without the host having to adjust its primary path.
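The rebalancing idea can be sketched as moving ownership of a LUN from the busier controller to the idler one. This is an illustration only; the threshold, the per-LUN load figures, and the selection rule are assumptions, not the product's algorithm.

```python
# Illustrative controller load-balancing sketch: if the CPU-load gap
# between the two controllers exceeds a threshold, move ownership of
# the LUN that best closes the gap. All numbers are made up.

def rebalance(loads, lun_owner, lun_load, threshold=30):
    """loads: {'CTL0': pct, 'CTL1': pct}; move one LUN if skewed."""
    busy = max(loads, key=loads.get)
    idle = min(loads, key=loads.get)
    if loads[busy] - loads[idle] <= threshold:
        return None  # balanced enough, do nothing
    candidates = [l for l, o in lun_owner.items() if o == busy]
    # Pick the LUN whose move leaves the smallest remaining gap.
    lun = min(candidates,
              key=lambda l: abs((loads[busy] - lun_load[l])
                                - (loads[idle] + lun_load[l])))
    lun_owner[lun] = idle
    loads[busy] -= lun_load[lun]
    loads[idle] += lun_load[lun]
    return lun

loads = {"CTL0": 70, "CTL1": 10}
owner = {0: "CTL0", 1: "CTL0", 2: "CTL0", 3: "CTL1"}
per_lun = {0: 10, 1: 30, 2: 30, 3: 10}
moved = rebalance(loads, owner, per_lun)
print(moved, loads)  # 1 {'CTL0': 40, 'CTL1': 40}
```

This reproduces the slide's 70%/10% to 40%/40% example by moving a single 30%-load LUN; the host's primary path is untouched because either controller can service any port.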
[Diagram: AMS 2500 connectivity: hosts reach front-end controllers FE CTRL-0 and FE CTRL-1 through SWITCH 1 and SWITCH 2; high-speed buses link them to back-end controllers BE CTRL-0 and BE CTRL-1.]
[Chart: IOPS (0 to 60,000) versus disks in use for 2+2, 4+4, 4+1, and 8+1 RAID group configurations.]
SPC-1 benchmark comparison (Hitachi versus IBM and NetApp):

- AMS 2100 (symmetric controllers): 31,498.58 SPC-1 IOPS; $5.85/IOP; 258MB/s sustained throughput; 2.67ms average response at the 50% load point; 8.15ms at the 100% load point.
- AMS 2300 (symmetric): 42,502.61 IOPS; $6.96/IOP; 350MB/s; 2.70ms; 6.33ms.
- AMS 2500 (symmetric): 89,491.81 IOPS; $6.71/IOP; 735MB/s; 2.88ms; 8.98ms.
- IBM DS5020 (asymmetric): 26,090.03 IOPS; $7.49/IOP; 214MB/s; 3.91ms; 10.49ms.
- IBM DS5300 (asymmetric): 62,243.63 IOPS; $11.76/IOP; 511MB/s; 2.95ms; 14.37ms.
- IBM V7000 (asymmetric): 56,210.85 IOPS; $7.24/IOP; 463MB/s; 4.19ms; 10.8ms.
- NetApp FAS3170 (asymmetric): 60,515.34 IOPS; $10.01/IOP; 497MB/s; 4.16ms; 20.8ms.
[Diagram: CTL1 back-end connections to SAS and SATA expansion units (RKAs).]
[Diagram: LU layout within a RAID group: LU0 and LU1 occupy part of the group, a new LU is added into the unused space, and a unified logical LU0 can be composed of a main LU plus sub-LUs spread across multiple physical RAID groups.]
[Diagram: RAID-6 (6D+2P) layouts: LU0-LU3 striped across HDD0-HDD7.]
Mega-LUNs

- Simple to create: create a LUN and pick your size; a LUN can grow up to 60 TB and can span every disk in the system.
- Simple to connect: select a LUN and choose a host; there is no need to choose a preferred path or an owner controller.
- Simple to manage: expand LUNs and RAID groups nondisruptively; works with all Hitachi Adaptable Modular Storage software.

[Diagram: a single LUN A spanning multiple trays.]
AMS 2000 with Hitachi Dynamic Provisioning

- Challenges: high cost of storage; cumbersome provisioning; expensive optimization.
- Solution capabilities: simplifies provisioning; provisions only what is used; automates performance optimization; replication savings.
- Business benefits: reduced storage expense; reduced operational expense; IT agility.

[Diagram: servers write to an HDP volume (virtual LUN) backed by an HDP pool built from RAID groups on physical drives.]
A Virtual LU ShadowImage PVOL paired with a Virtual LU ShadowImage SVOL (a ShadowImage pair) gives maximum savings. The storage manager still has the flexibility to use ShadowImage for copying between a Virtual LU and a normal volume as needed: a Virtual LU PVOL can pair with a normal-volume SVOL, and a normal-volume PVOL can pair with a Virtual LU SVOL.

Note: ShadowImage is used in conjunction with the virtual volume. Mirroring the underlying DP pool with ShadowImage is not supported.
[Diagram: pool optimization. A server volume's VVOL pages (1-4) initially map to pool volumes #0-#2; after pool volume #3 is added, the Optimize Pool operation redistributes the pages across all four pool volumes.]
Hitachi Dynamic Provisioning provides a host with virtual-capacity volumes; storage is assigned to a volume from a pool of real storage just in time for a write request from the host. It is managed from Storage Navigator Modular 2 (GUI, CLI), which also provides capacity monitoring.

- Virtual-LU: a virtual representation of capacity presented to the host. The total capacity of the Virtual LUs may exceed the actual capacity of the DP pool.
- DP Mapping Table: the mapping information between Virtual-LUs and the DP pool, plus page space information.
- DP Pool: the actual storage capacity containing the data written by the host/application, comprised of multiple RAID groups.
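The just-in-time assignment can be sketched as a page map that is filled in only when a write first touches a page. This is a generic thin-provisioning illustration under assumed structures, not the AMS mapping-table format; the 32MB page size follows the deck's later example.

```python
# Minimal thin-provisioning sketch: physical pages are assigned from
# the pool to a virtual LU only when a write first touches them.

PAGE = 32 * 1024 * 1024  # 32MB pages, as in the HDP examples

class DpPool:
    def __init__(self, real_pages):
        self.free = list(range(real_pages))   # unallocated physical pages

class VirtualLu:
    def __init__(self, pool, virtual_bytes):
        self.pool = pool
        self.virtual_bytes = virtual_bytes    # size advertised to the host
        self.mapping = {}                     # virtual page -> physical page

    def write(self, offset, length):
        first, last = offset // PAGE, (offset + length - 1) // PAGE
        for vpage in range(first, last + 1):
            if vpage not in self.mapping:     # first touch: allocate now
                self.mapping[vpage] = self.pool.free.pop(0)

    def consumed(self):
        return len(self.mapping) * PAGE       # real capacity in use

pool = DpPool(real_pages=64)
lu = VirtualLu(pool, virtual_bytes=2 * 1024**3)  # 2GB virtual LU
lu.write(0, 4096)                                # one small host write
print(lu.consumed() // (1024 * 1024), "MB")      # 32 MB
```

A 2GB virtual LU that has received one small write consumes only a single 32MB page of real pool capacity.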
- Virtual-LU viewpoint: over-provisioning ratio alert; trend information.
- DP pool viewpoint: DP pool consumed capacity alert.
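The two alert viewpoints can be sketched as simple ratio checks. The threshold values here are illustrative assumptions, not the product's defaults.

```python
# Sketch of the two monitoring viewpoints: an over-provisioning ratio
# alert (virtual capacity vs. real pool capacity) and a DP pool
# consumed-capacity alert. Thresholds are made-up examples.

def over_provisioning_ratio(virtual_lu_sizes, pool_capacity):
    """Total advertised virtual capacity divided by real pool capacity."""
    return sum(virtual_lu_sizes) / pool_capacity

def alerts(virtual_lu_sizes, pool_capacity, pool_consumed,
           ratio_limit=3.0, consumed_limit=0.8):
    out = []
    if over_provisioning_ratio(virtual_lu_sizes, pool_capacity) > ratio_limit:
        out.append("over-provisioning ratio alert")
    if pool_consumed / pool_capacity > consumed_limit:
        out.append("DP pool consumed capacity alert")
    return out

# 4 x 10TB virtual LUs on a 10TB pool that is 85% full: both alerts fire.
print(alerts([10, 10, 10, 10], pool_capacity=10, pool_consumed=8.5))
```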
[Diagram: normal LUs carved directly from RAID Group #1 and #2 versus Virtual LUs served through the DP pool.]
[Diagram: Virtual-LU #0-#2 with chunks (for example, chunk #3 of Virtual-LU #0) allocated from a DP pool on RAID Group #2.]
[Table: thin-provisioning friendliness by OS and file system, with default parameters (legend: thin friendly / low advantage / no advantage). Windows Server 2003 and 2008 NTFS: under investigation. Linux: XFS, Ext2/Ext3. Solaris: UFS, VxFS, ZFS. AIX: JFS, JFS2. HP-UX: JFS (VxFS), HFS. VMware. UFS, JFS, and HFS are flagged as writing metadata across the full size of the Virtual-LU. Note: if you change the Allocation Group Size settings when you create the file system, the metadata can be written at up to 64MB intervals, and approximately 50% of the DP pool is consumed.]

[Diagram: a 2GB Virtual-LU whose file system writes metadata into only one 32MB page consumes just that page from the DP pool: 32MB / 2GB = 1.6% of the capacity.]
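The 1.6% figure follows directly from the page size and the LU size:

```python
# The slide's arithmetic: a thin-friendly file system that dirties only
# one 32MB page of a 2GB Virtual-LU consumes 32MB / 2GB of pool
# capacity, whereas one that writes metadata across the whole LU
# consumes all of it.

page_mb = 32
lu_mb = 2 * 1024  # 2GB expressed in MB
print(f"{100 * page_mb / lu_mb:.1f}%")  # 1.6%
```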
Best practices by viewpoint:

- Capacity utilization / performance: RG configuration in the DP pool. Recommendation: 2, 4, 8, or 16 RGs. For a site where high I/O performance is required, configure the DP pool with many HDDs and as many RGs as possible. (By using the Performance Monitor, you can see the host I/O frequency for each Virtual-LU and the utilization of each RG in the DP pool.)
- Reliability: RG RAID level; number of DP pools.
Item: capacity. Best practice: set the Virtual-LU size on 1GB boundaries (if you cannot, some DP pool space will be wasted).
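The waste can be quantified with a rounding calculation. The 1GB management unit is taken from the recommendation above; everything else in the sketch is illustrative.

```python
# Illustrates the 1GB-boundary best practice: if pool space is managed
# in 1GB units (per the recommendation above), a Virtual-LU that is not
# a multiple of 1GB wastes the remainder of its last unit.

GB = 1024**3

def wasted_bytes(lu_bytes, boundary=GB):
    """Pool space lost when the LU size is rounded up to the boundary."""
    return (-lu_bytes) % boundary

print(wasted_bytes(10 * GB))                      # 0: aligned, no waste
print(wasted_bytes(10 * GB + GB // 2) // 2**20)   # 512 (MB wasted)
```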
[Diagram: normal operation of a 6D+2P RAID-6 group holding DATA 1-6 and PARITY 1-2; on a drive failure, the rebuild operation reconstructs DATA 3 onto a hot spare.]
Conditions: number of partitions. LUNs defined in Adaptable Modular Storage 2000 family systems can be assigned to a partition, and the customer can specify the size of each partition.
[Diagram: cache partitioning on Controller #0, Dir #0. Partition #0: 16KB segments (fixed), multipurpose (default). Partition #2: 4KB segments, for databases (small-size access). Partition #3: 256KB segments, for streaming (exclusive partition).]
Adaptable Modular Storage 2000: selectable segment size, up to 256KB.

[Diagram: with a 256KB stripe segment, one 128KB write to the LUN lands on a single HDD; with a 64KB stripe segment, the same write spans two HDDs.]
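The effect of segment size on back-end I/O count can be sketched as counting how many stripe units a host write crosses. This is the generic striping arithmetic, not array-specific code.

```python
# Sketch of why a larger segment (stripe unit) size can cut disk I/Os:
# count how many drives a single host write touches for a given
# stripe unit size. Sizes are in KB; round-robin striping is assumed.

def drives_touched(offset_kb, length_kb, stripe_unit_kb):
    first = offset_kb // stripe_unit_kb
    last = (offset_kb + length_kb - 1) // stripe_unit_kb
    return last - first + 1  # consecutive units land on distinct drives*

# A 128KB write starting at offset 0:
print(drives_touched(0, 128, 64))    # 2 drives with a 64KB stripe unit
print(drives_touched(0, 128, 256))   # 1 drive with a 256KB segment
# *assuming the write spans fewer units than there are data drives
```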
[Diagram: Adaptable Modular Storage 2100 Cache Residency: a LUN extent is kept in mirrored cache (the cache residency area) for enhanced read/write performance.]
Host port configurations:
- AMS 2500: Fibre Channel system; iSCSI system; FC and iSCSI multiprotocol.
- AMS 2300: Fibre Channel system (base); Fibre Channel system (max); FC and iSCSI multiprotocol.
[Diagram: controller interface options showing FC ports, iSCSI ports, the user LAN, and the maintenance LAN. The option boards (B, D, F, H) combine embedded 2x or 4x 8Gb/s FC with user and maintenance LAN ports.]
[Diagram: 8Gb FC connectivity extended across a WAN through iSR6250-HDS routers.]
[Diagram: online migration between tiers: LU1 and LU2 are migrated online between SAS and SATA RAID groups; when the migration finishes, each LU resides on the appropriate tier.]
[Diagram: online migration between RAID groups: LU2 is moved from a high-load RG to a low-load RG, leaving both RGs at medium load. The performance imbalance is addressed and the performance bottleneck is removed.]
TrueCopy Synchronous (for example, between an AMS 1000 and an AMS 2500):
- Bi-directional
- Differential re-synchronization
- Reverse re-synchronization
- R/W source and read target (Continuous Access)
- Copy between AMS/WMS and AMS2000 models
[Diagram: cyclic remote replication update cycle (available 2H CY06). Host updates accumulate against the PVOL; at each cycle a snapshot (T1, T2, T3, ...) is taken and only the differentials are sent to the SVOL, which keeps a matching snapshot, so both sides hold the same data at each completed cycle. On a disaster, the SVOL can be restored from the last completed snapshot (T2).]
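The cycle-based differential transfer shown above can be sketched with a dirty-block set: writes since the last cycle are tracked, and each cycle ships only the changed blocks. This is a generic illustration of the technique, not the product's code; structures and names are assumptions.

```python
# Generic sketch of cycle-based differential replication: track blocks
# changed since the last cycle and send only those to the remote copy.

class CyclicReplicator:
    def __init__(self, blocks):
        self.primary = ["" for _ in range(blocks)]
        self.secondary = ["" for _ in range(blocks)]
        self.dirty = set()            # blocks changed since last cycle

    def host_write(self, index, data):
        self.primary[index] = data
        self.dirty.add(index)         # remember the differential

    def run_cycle(self):
        # Apply one cycle's differentials to the secondary; the
        # secondary advances one consistent point in time per cycle.
        for index in sorted(self.dirty):
            self.secondary[index] = self.primary[index]
        sent = len(self.dirty)
        self.dirty.clear()
        return sent

rep = CyclicReplicator(blocks=8)
rep.host_write(1, "x"); rep.host_write(5, "y"); rep.host_write(1, "z")
print(rep.run_cycle())   # 2 blocks sent, even though 3 writes occurred
```

Repeated writes to the same block collapse into one transfer per cycle, which is why differential cycles use far less WAN bandwidth than forwarding every write.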
Power Savings
Lower Total Cost of Ownership
Hitachi Data Systems HDD spin down solutions
Spinning down of
Questions/Discussion
Thank You