
Hitachi Adaptable Modular Storage 2000 Family
Technical Presentation

20.07.2011
Rafael Noas, Technical Consultant

© 2008 Hitachi Data Systems

Agenda

Hitachi Data Systems Portfolio
Hitachi Adaptable Modular Storage 2000 Family Product Overview
Key Features
  Symmetric active/active controllers with dynamic load balancing
  SAS architecture
  Online RAID group expansion
  LUN grow/shrink
  High-density expansion trays
  RAID-6
  Cache partitioning
  Multiprotocol: FC SAN/iSCSI
  Volume Migration Modular software
  Hitachi TrueCopy Extended Distance software
  Green Storage Initiative
  Improved security

Hitachi Adaptable Modular Storage 2000 Family
In the Spotlight

"The Hitachi Adaptable Modular Storage 2500 has won the Eco-Responsibility Award in this year's Information Age Innovation Awards, beating out stiff competition."

"Gartner positioned Hitachi and the Hitachi AMS2000 family in its Magic Quadrant... this quadrant reflects highest scores for their ability to execute and completeness of vision... they are innovators... with a clear understanding of market needs." Nov 17, 2008

"Hitachi achieves overall best-in-class Storage Performance Council (SPC-1) benchmark results for its midrange storage system (Hitachi AMS2000)." March 24, 2009

"Hitachi Data Systems has consolidated its claim as the top-rated high-end array vendor by repeating its win..." Rich Castagna, Editor

"The AMS2000 series' combination of enterprise-class features with easy-to-manage midrange usability and reduced operational costs is powerful and worthy of serious consideration by an IT organization."

Hitachi Storage Lineup
(Chart: performance vs. capacity - scale performance and capacity independently)

Full redundancy for 99.999% availability
Data-in-place upgrades to new controllers
High-performance native SAS backend
Built-in encryption
Full Hyper-V and VMware integration
Common management
Price/performance leaders

Virtual Storage Platform: 1TB cache, 192 x FC, 2048 disks, 400,000 IOPS
AMS2500: 32GB cache, 16 x FC/iSCSI mix, 480 disks, 90,000 IOPS
AMS2300: 16GB cache, 8 x FC/iSCSI, 240 disks, 50,000 IOPS
AMS2100: 8GB cache, 4 x FC/iSCSI, 120 disks, 30,000 IOPS

* 60/40 workload with <10% cache hit

Adaptable Modular Storage 2000 Family
(Chart: throughput vs. scalability)

Enterprise design
Host multi-pathing
Hardware load balancing
Dynamic optimized performance
Dense storage, SAS and SATA
99.999+% reliability

Adaptable Modular Storage 2100
  4-8GB cache
  4 FC, or 8 FC, or 4 FC and 4 iSCSI ports
  Mix of up to 159 SSD, SAS, and SATA-II disks
  Up to 313TB
  Up to 2048 LUNs
  Up to 1024 hosts

Adaptable Modular Storage 2300
  8-16GB cache
  8 FC, or 16 FC, or 8 FC and 4 iSCSI ports
  Mix of up to 240 SSD, SAS, and SATA-II disks
  Up to 472TB
  Up to 4096 LUNs
  Up to 2048 hosts

Adaptable Modular Storage 2500
  16-32GB cache
  16 FC, or 8 iSCSI, or 8 FC and 4 iSCSI ports
  Mix of up to 480 SSD, SAS, and SATA-II disks
  Up to 945TB
  Up to 4096 LUNs
  Up to 2048 hosts

Hitachi Adaptable Modular Storage 2100

Hitachi Dynamic Load Balancing controller
  Symmetrical active/active design
  Host port options: 4 x 8Gb/sec FC; 8 x 8Gb/sec FC; or 4 x 8Gb/sec FC and 4 x 1Gb/sec iSCSI
  Cache size: 4GB or 8GB
  Maximum number of LUNs: 2048
  Maximum attached hosts through virtual ports: 1024
  3Gb/sec SAS links: 16

SATA-II and SAS intermix
  Max drives supported: 159 (7200 RPM), 136 (15K RPM)
  Drive options:
    SAS: 300GB 15K RPM, 450GB 15K RPM, 600GB 15K RPM, 2TB 7200 RPM
    SATA: 1TB 7200 RPM, 2TB 7200 RPM
    SSD: 200GB

Protection
  RAID groups/system: 50
  RAID levels: -6, -5, -1+0, -1, -0*
  * RAID-0 available on SAS drives only

Trays
  4U controller: 4 to 15 disk drives
  3U trays (max of 7): 2 to 15 disk drives/tray
  4U dense trays (max of 3): 4 to 48 disk drives/tray

Hitachi Adaptable Modular Storage 2300

Hitachi Dynamic Load Balancing controller
  Symmetrical active/active design
  Host port options: 8 x 8Gb/sec FC; 16 x 8Gb/sec FC; or 8 x 8Gb/sec FC and 4 x 1Gb/sec iSCSI
  Cache size: 8GB or 16GB
  Maximum number of LUNs: 4096
  Maximum attached hosts through virtual ports: 2048
  3Gb/sec SAS links: 16

SATA-II and SAS intermix
  Min/max drives supported: 4/240
  Drive options:
    SAS: 300GB 15K RPM, 450GB 15K RPM, 600GB 15K RPM, 2TB 7200 RPM
    SATA: 1TB 7200 RPM, 2TB 7200 RPM
    SSD: 200GB

Protection
  RAID groups/system: 75
  RAID levels: -6, -5, -1+0, -1, -0*
  * RAID-0 available on SAS drives only

Trays
  4U controller: 4 to 15 disk drives
  3U trays (max of 15): 2 to 15 disk drives/tray
  4U dense trays (max of 4): 4 to 48 disk drives/tray

Hitachi Adaptable Modular Storage 2500

Hitachi Dynamic Load Balancing controller
  Symmetrical active/active design
  Host port options: 16 x 8Gb/sec FC; 8 x 1Gb/sec iSCSI; or 8 x 8Gb/sec FC and 4 x 1Gb/sec iSCSI
  Cache size: 16GB or 32GB
  Maximum number of LUNs: 4096
  Maximum attached hosts through virtual ports: 2048
  3Gb/sec SAS links: 32

SATA-II and SAS intermix
  Min/max drives supported: 4/480
  Drive options:
    SAS: 300GB 15K RPM, 450GB 15K RPM, 600GB 15K RPM, 2TB 7200 RPM
    SATA: 1TB 7200 RPM, 2TB 7200 RPM
    SSD: 200GB

Protection
  RAID groups/system: 100
  RAID levels: -6, -5, -1+0, -1, -0*
  * RAID-0 available on SAS drives only

Trays
  4U controller: 0 disk drives
  3U trays (max of 32): 4 to 15 drives in the first tray, 2 to 15 drives in additional trays
  4U dense trays (max of 10): 4 to 48 disk drives/tray

Hitachi Adaptable Modular Storage 2100 Architecture
(architecture diagram)

Hitachi Adaptable Modular Storage 2300 Architecture
(architecture diagram)

Hitachi Adaptable Modular Storage 2500 Architecture
(architecture diagram)

Hitachi Adaptable Modular Storage 2000 Family


Host Port Symmetric Active-Active Feature

The symmetric active/active design allows a host I/O arriving on any front-end port to be processed by either controller. Every LUN is associated with one of the two controllers through the LUN Management Table. If a host request for a LUN arrives on a port of the controller that currently manages that LUN, it is handled in Direct Mode. If the request is for a LUN managed by the other controller, it is handled in Cross Mode: the local Intel processor forwards the request over the inter-DCTL communications bus to the Intel CPU on the other controller for execution.
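
As an illustration only, here is a minimal Python sketch of that routing decision. The names (LUN_OWNER, forward_to_peer, and so on) are hypothetical stand-ins, not Hitachi firmware APIs; the sketch simply mirrors the Direct/Cross Mode logic described above.

    # Minimal sketch of Direct Mode vs. Cross Mode request routing.
    # All names are illustrative; they are not Hitachi firmware APIs.

    LUN_OWNER = {0: "CTL0", 1: "CTL0", 2: "CTL1", 3: "CTL1"}  # LUN Management Table

    def execute_locally(io):
        return f"executed {io}"

    def forward_to_peer(owner, io):
        # Cross Mode: hand the request over the inter-DCTL bus to the owner's CPU.
        return f"forwarded {io} to {owner}"

    def handle_request(local_controller, lun, io):
        owner = LUN_OWNER[lun]
        if owner == local_controller:
            # Direct Mode: the receiving controller also owns the LUN.
            return execute_locally(io)
        return forward_to_peer(owner, io)

    print(handle_request("CTL0", 2, "read"))  # Cross Mode -> forwarded to CTL1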


Hitachi Adaptable Modular Storage 2000 Family Disk Enclosure
8 Links for Maximum Throughput and Reliability

(Diagram: Controller 0 and Controller 1 each connect through their SAS CTL and expander to the internal enclosure and to each external enclosure; 2 SAS wide cables carry 8 links at 3Gbit/s per enclosure, and each enclosure holds 15 HDDs.)

RKAK (SAS and SATA-II Enclosure)

(Diagram: each 15-disk enclosure contains two expanders, one per controller path, each with 4 SAS links in and 4 SAS links out; SAS and SATA HDDs can be intermixed slot by slot. A system scales to 240 disks across the internal enclosure plus 15 expansion enclosures, SAS and SATA.)

SAS Engine Overview

The SAS Engine is the set of IOC chips, SXP chips, and SMX chips. The AMS 2100 and 2300 models have one SAS Engine per controller; the AMS 2500 has two.

The DCTL processor directs disk I/O requests to the SAS IOC, which generates either SAS or SATA-II commands depending on the target disk.

The SAS Multiplexor chip (SMX) is the interface from the IOC to the external cables. It manages the 4 SAS links in a single wide SAS cable.

On the AMS 2100/2300 controller there is an internal 24-port SAS Expander unit (SXP), which controls one port on each of the 15 internal disks. This is the same SXP used in the external enclosures.

External Disk Enclosure - 15 Disk

This is the 15-disk shelf, which may be freely populated with SSD, SAS, and SATA-II disks. The two SAS expanders in each enclosure attach each disk port to any of the four SAS links in that expander. The SATA-II canisters provide the dual-port logic. Additional enclosures are daisy-chained from the outbound SAS link ports.

External Disk Enclosure - High Density

This is a representation of the high-density shelf for use with 38 SAS disks or 48 SATA disks (no intermix within a tray). The two pairs of SAS expanders in each enclosure attach each disk port to any of the four SAS links in that expander. The SATA-II canisters provide the dual-port logic that is native to SAS disks. Additional enclosures are daisy-chained from the outbound SAS link ports.

Dense Expansion Disk Tray (4U)

Up to 48 high-capacity disks
  2TB SAS 7200 RPM
  2TB SATA 7200 RPM
Up to 38 high-performance disks
  600GB 15K RPM
  450GB 15K RPM
Nearly doubles the density of high-performance SAS storage compared to standard disk trays
Intermix high-density SAS and SATA storage in the same system
Reduces data center floor space consumption

Dense Expansion Tray (2100 and 2300)

Tray types:
  2100/2300 system tray: 15 HDDs, 4U
  Standard tray: 15 HDDs, 3U
  High-density tray: 38 SAS / 48 SATA HDDs, 4U

Max system configurations (tray counts in parentheses inferred from the per-tray capacities above):

AMS2100                                   Max SAS HDDs *   Max SATA HDDs *
  Only high-density trays (3 dense)            129              159
  Intermix (2 dense + 3 standard)              136              156
  Intermix (1 dense + 5 standard)              128              138
  Only standard trays (7 standard)             120              120

AMS2300                                   Max SAS HDDs *   Max SATA HDDs *
  Only high-density trays (4 dense)            167              207
  Intermix (3 dense + 5 standard)              204              234
  Intermix (2 dense + 7 standard)              196              216
  Intermix (1 dense + 11 standard)             218              228
  Only standard trays (15 standard)            240              240

* SAS and SATA dense trays can be intermixed in the same system. Max HDD count will depend on the number of SAS and SATA trays.

Dense Expansion Tray (2500)

Tray types:
  2500 system tray: 0 HDDs, 4U
  Standard tray: 15 HDDs, 3U
  High-density tray: 38 SAS / 48 SATA HDDs, 4U

Max system configurations:

AMS2500                      Dense trays   Standard trays   Max SAS HDDs *   Max SATA HDDs *
  Only high-density trays        10              0               380              480
  Intermix                        9              2               372              462
  Intermix                        8              6               394              474
  Intermix                        7              8               386              456
  Intermix                        6             12               408              468
  Intermix                        5             14               400              450
  Intermix                        4             16               392              432
  Intermix                        3             20               414              444
  Intermix                        2             24               436              456
  Intermix                        1             28               458              468
  Only standard trays             0             32               480              480

* SAS and SATA dense trays can be intermixed in the same system. Max HDD count will depend on the number of SAS and SATA trays.

Dense Expansion Tray - FRU Placement

(Diagram: the top view shows up to 48 HDDs; the rear view shows 4 power supplies, 4 ENC boards, and 8 SAS cable holders, with inputs on the left side and outputs on the right. The enlarged detail shows the ENC cable, cable holder, and lever.)

Save Space and Power with Smaller Drives and Trays

Cost-effective and scalable
  Saves up to 60% more space - among the best density in the industry
  High density + performance (7.2TB per U of rack)
  2U trays support 24 x 2.5-inch SAS HDDs
  AMS2100 up to 6, AMS2300 up to 9, AMS2500 up to 20 trays
Sustainable
  Saves up to half the power and cooling compared with 3.5-inch HDDs
  300GB and 600GB 2.5-inch SFF drives

Hardware image for the 2.5-inch Expansion Unit

(Diagrams, front and rear: unit LEDs for POWER (green), READY (green), and LOCATE (yellow); per-drive 2.5-inch canister LEDs for ACTIVE (green) and ALARM (red); two ENC boards (ENC#0, ENC#1), each with POWER/LOCATE/ALARM LEDs, LINK LEDs for the IN (up-link) and OUT (down-link) mini-SAS connectors, and a console port; two power supplies (PS#0, PS#1) with RDY, AC_IN, and ALM indicators and power inlets.)

Comparison between the 2.5-inch Expansion Unit and the DF800 Expansion Unit

Item                         2.5-inch Expansion Unit           DF800 Expansion Unit (RKAK)

Basic specification
  Unit size                  2U                                3U
  Connectable basic unit     DF800/DF800E                      Same as left
  # of HDUs                  24 (SAS 2.5-inch, 10K rpm)        15 (SAS/SATA/SSD/NL-SAS, 3.5-inch)
  # of ENCs                  2                                 Same as left
  # of PSUs                  2                                 Same as left
  # of fans (self-powered)   12 (6 per PSU)

Module specification - back-end composition
  Path                       Input: 1 path (4 wide links);     Same as left
                             Output: 1 path (4 wide links)
  Expander device            PM8005 (6G, 36-port version)      SXP-24 x 3G
  Flash                      8MB                               2MB
  SRAM                       Unused (SRAM is mounted)          4MB x 2
  SEEPROM                    32KB (I2C)                        2KB

Power supply
  Power type                 12V/5V/3.3V/1.2V/1.0V             Same as left

Mini-SAS cable
  Path                       IN: key slot 3 & keys 4,6;        IN: keys 4,6;
                             OUT: key slot 3 & keys 2,4        OUT: keys 2,4
  SAS cable length           1m, 3m, 5m                        Same as left
  Debug                      Serial connector (RJ-45)          Same as left

Other

Hitachi Adaptable Modular Storage 2000 Family - Field Controller Upgrade Process

Easy upgrades: upgrade by changing the two controller cards only; data remains in place, undisturbed.

(Diagram: AMS 2100 with S controllers, AMS 2300 with M controllers, and AMS 2500 with H controllers, each 4U with dual power supplies. Upgrading from the 2100 (Monterey S) to the 2300 (Monterey M) swaps the controller cards; upgrading to the 2500 also adds a shelf, since the 2500 controller chassis holds no disks.)

Note: the battery is at the front side in each model of the 2100.

HDD chassis (3U): 15 x 3.5-inch SAS/SATA-II HDDs, dual ENCs, dual power supplies
  - Usable with S/M/H controllers
  - 3Gb/sec capable
  - ENC differs from the upgrade ENC

Hitachi Adaptable Modular Storage 2000 Family Battery Backup Times

Model      Batteries                 Small Cache (2GB DIMMs)   Large Cache (4GB DIMMs)

Standard configurations
AMS2100    2 internal                72 hours                  48 hours
AMS2300    2 internal                36 hours                  24 hours
AMS2500    4 internal                48 hours                  24 hours

Optional configurations
AMS2500    1 external + 4 internal   96 hours                  48 hours
AMS2500    2 external + 4 internal   168 hours                 96 hours

Hardware Details

Models (3 models)
                       AMS 2100                      AMS 2300                    AMS 2500
Controllers            Symmetric active/active       Symmetric active/active     Symmetric active/active
Cache                  4 or 8 GB                     8 or 16 GB                  16 or 32 GB
Processor              1.67GHz Celeron, single core  1.67GHz Xeon, single core   2GHz Xeon, dual core
Internal bus           PCIe                          PCIe                        PCIe
Internal disk slots    15                            15                          0
Power supplies         Dual redundant                Dual redundant              Dual redundant
Batteries              Dual                          Dual                        Dual

Host interface options (dedicated FC or iSCSI)
  AMS 2100: 4 x 8Gbps FC; 8 x 8Gbps FC; or 4 x 8Gbps FC + 4 x 1Gbps iSCSI
  AMS 2300: 8 x 8Gbps FC; 16 x 8Gbps FC; or 8 x 8Gbps FC + 4 x 1Gbps iSCSI
  AMS 2500: 16 x 8Gbps FC; 8 x 1Gbps iSCSI; or 8 x 8Gbps FC + 4 x 1Gbps iSCSI

Max attached host ports through virtual ports: 1024 (2100), 2048 (2300), 2048 (2500)

Data-in-place upgrades: 2100 to 2300 or 2500; 2300 to 2500; 2500 N/A

Hardware Details

                          AMS 2100            AMS 2300            AMS 2500
Maximum cache partitions  16                  32                  32
Drive interface           16 SAS wide links   16 SAS wide links   32 SAS wide links
RAID levels               RAID-1, RAID-1+0, RAID-5, RAID-6, and RAID-0 (SAS drives only)
Max # of RAID groups      50                  75                  100
Max # of LUs              2048                4096                4096
Max LU size               60TB                60TB                60TB

Hardware Details

Supported drives (all models)
  SAS: 300GB/15K, 450GB/15K, 600GB/15K, 2TB/7200
  SATA-II: 1TB/7200, 2TB/7200
  SSD: 200GB (SAS interface)

Trays
  AMS 2100: controller tray, 4 (min) to 15 (max) HDDs; expansion trays (up to 7), 2 to 15 HDDs; dense trays (up to 3), 4 to 48 HDDs; max of 159 total HDDs
  AMS 2300: controller tray, 4 to 15 HDDs; expansion trays (up to 15), 2 to 15 HDDs; dense trays (up to 4), 4 to 48 HDDs; max of 240 total HDDs
  AMS 2500: controller tray, 0 HDDs; first expansion tray, 4 to 15 HDDs; up to 31 additional expansion trays, each 2 to 15 HDDs; dense trays (up to 10), 4 to 48 HDDs; max of 480 total HDDs

Maximum capacity: 313TB (2100), 472TB (2300), 944TB (2500)

Supported operating systems
  Microsoft Windows 2000, Windows Server 2003, Windows Server 2008 (including Hyper-V)
  Sun Solaris (SPARC and x64)
  VMware
  Red Hat Enterprise Linux
  SuSE Linux
  Red Flag Linux
  Oracle Enterprise Linux
  Asianux
  MIRACLE LINUX
  HP-UX
  IBM AIX
  Mac OS X
  Novell NetWare
  HP Tru64 UNIX
  HP OpenVMS

Full Software and Firmware Offering - Standard Firmware Features

Storage management software            Capabilities
  Storage Navigator Modular 2          System management via GUI and CLI

Bundled storage functions              Capabilities
  Account Authentication               Provides access control to management functions
  Audit Logging                        Records all system changes
  Dynamic Provisioning                 Enables creation of logical, virtual volumes
  LUN Manager                          Manages LUN configuration operations
  LUN Grow/LUN Shrink                  Dynamically adds or reduces LUN capacity
  Online RAID Group Expansion          Dynamically adds HDDs to a RAID group
  Cache Residency Manager              Loads data for a selected LUN into cache
  Cache Partition Manager              Customizes cache utilization for applications
  Modular Volume Migration             Moves data between RAID groups
  SNMP Agent Support Function          Reports failures and status to an SNMP server
  Performance Monitor                  Monitors and reports performance data

Full Software and Firmware Offering - Optional Firmware Features

Optional storage features              Capabilities
  ShadowImage (Clone)                  1 primary : 8 secondary, 2047 max, GUI and CLI mgmt
  Copy-on-Write (Snapshot)             1 primary : 32 snaps, 2047 max, GUI and CLI mgmt
  TrueCopy (Sync Remote Mirroring)     1 primary : 1 secondary, GUI and CLI mgmt
  TrueCopy Extended Distance
  (Asynchronous Remote Mirroring)      1 primary : 1 secondary, GUI and CLI mgmt
  Data Retention Utility (DRU)         Protects LUNs from I/O activity
  Device Manager                       Configuration GUI and device management for open systems
  Tuning Manager                       Performance troubleshooting; deep storage system tuning
  Storage Capacity Reporter            End-to-end storage capacity reporting
  Power Savings Service (Spin Down)    Spins down unused disk drives

Full Software and Firmware Offering - Optional Firmware Features (continued)

Optional storage features              Capabilities
  Replication Manager                  Simplifies configuration and management of replication services
  Protection Manager                   Performs rapid backup and recovery management
  Storage Services Manager             Standards-based platform for heterogeneous storage resource
                                       management and active SAN management
  Dynamic Link Manager                 Path failover and load balancing capabilities
  Global Link Manager                  Aggregates the management of Hitachi Dynamic Link Manager
                                       instances into a single management station
  Virtual Tape Library                 Provides the benefits of backing up to disk without changing
                                       established backup policies or procedures
  Data Protection Suite                Archiving, data migration services, and backup/recovery
  Content Archive Platform             Digital content archiving

Hitachi Dynamic Load Balancing Controller Architecture - Simplified Installation

Quick and easy setup at installation:
1. No need to set controller ownership for each LU.
2. Set the host connection port without regard to controller ownership.

Benefits
(Diagram: Traditional array - paths must be set to match the controller ownership of LUs, so the user is required to set paths after LUs are created. Adaptable Modular Storage 2000 family - HBAs connect to any controller, and the user is not required to set paths; either CTL0 or CTL1 can service LUN 0 through LUN 3 regardless of the owning controller.)

Hitachi Dynamic Load Balancing Controller Architecture - Coordination with Path Management Software

When used with the front-end load balancing function of path management software, Hitachi Adaptable Modular Storage 2000 family systems can respond with balanced performance over multiple data paths.

Benefits
(Diagram: Traditional array - the path manager on each host must use a primary path to the owning controller plus an alternate path, and the user is required to set paths. Adaptable Modular Storage 2000 family - with standard load balancing settings, the path managers spread I/O across all HBAs to both controllers, and the user is not required to set paths.)

Back-End Load Balancing - Automatic Internal Load Balancing

Optimal performance is achieved with minimal input from the storage administrator. Load balancing occurs automatically and evens out the utilization rates of both controllers.

Benefits
(Diagram: Traditional array - rebalancing must be done manually: (1) send information to a performance monitor, (2) analysis, (3) planning, (4) change settings, (5) change paths. Adaptable Modular Storage 2000 family - load balancing is executed automatically: when CTL0 runs at 70% CPU usage and CTL1 at 10%, the load balance function changes LUN ownership so that both controllers settle at roughly 40%.)

Unique for midrange
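
A minimal sketch of the rebalancing idea shown in the diagram, assuming a simple utilization-gap trigger; the 20% threshold and the "move the busiest LUN" policy are illustrative assumptions, not the actual firmware algorithm.

    # Illustrative sketch of automatic back-end load balancing (not Hitachi firmware).
    # If controller utilizations diverge too far, move ownership of the busiest
    # LUNs from the hot controller to the cool one until the gap closes.

    THRESHOLD = 0.20  # assumed rebalance trigger: 20 percentage points of imbalance

    def rebalance(cpu, lun_owner, lun_load):
        """cpu: {'CTL0': 0.70, 'CTL1': 0.10}; lun_load: per-LUN CPU cost."""
        while True:
            hot, cool = sorted(cpu, key=cpu.get, reverse=True)
            if cpu[hot] - cpu[cool] <= THRESHOLD:
                return lun_owner
            movable = [l for l, o in lun_owner.items() if o == hot]
            if not movable:
                return lun_owner
            lun = max(movable, key=lambda l: lun_load[l])   # busiest LUN on hot CTL
            lun_owner[lun] = cool                           # hosts keep their paths;
            cpu[hot] -= lun_load[lun]                       # only internal ownership moves
            cpu[cool] += lun_load[lun]

    owners = rebalance({"CTL0": 0.70, "CTL1": 0.10},
                       {0: "CTL0", 1: "CTL0", 2: "CTL0", 3: "CTL1"},
                       {0: 0.30, 1: 0.25, 2: 0.15, 3: 0.10})
    print(owners)  # LUN 0 is now owned by CTL1; utilizations converge near 40%/40%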

Hitachi Dynamic Load Balancing Controller

Symmetric active/active controllers: during normal operation, I/O can travel down any available path to any front-end port using the native host OS multipathing capabilities. This allows I/O to be balanced over all available paths without impacting performance.

Automatic controller load balancing: if the load on the controllers becomes unbalanced, it is automatically rebalanced without the host having to adjust its primary path.

(Diagram: hosts reach the AMS 2500 through SWITCH 1 and SWITCH 2 to front-end controllers FE CTRL-0 and FE CTRL-1, which connect over high-speed buses to back-end controllers BE CTRL-0 and BE CTRL-1; I/O is redirected to the less busy controller.)

Symmetric active/active and load balancing controllers: a powerful combination! With features traditionally found only in enterprise-class storage arrays, the 2000 family can dynamically balance I/O over any available path through any front-end port on either controller. All this is done without having to load any proprietary path management software.

Point-to-Point SAS Back-End Architecture

Up to 32 high-performance 3Gb/sec point-to-point links for 9600MB/sec of system bandwidth
Each link is available for concurrent I/O
No loop arbitration bottlenecks
8 SAS links to every disk tray for redundancy
Common disk tray for SAS and SATA drives
Any failed component is identified by its unique address

SAS Delivers Scalable Performance for OLTP Workloads

(Chart: transactions per second, 0 to 60,000, versus disks in use for 2+2, 4+4, 4+1, and 8+1 RAID groups. Workload: 100% random read, 146GB 15K rpm drives, internal load balancing only, 8KB blocks, 8 threads per LUN.)

SPC-1 Benchmark Results for Midrange Storage Systems

                           Hitachi Data Systems                 IBM                                NetApp
Model                      AMS 2100   AMS 2300   AMS 2500       DS5020     DS5300     V7000       FAS3170
Controller type            Symmetric  Symmetric  Symmetric      Asymmetric Asymmetric Asymmetric  Asymmetric
SPC-1 IOPS                 31,498.58  42,502.61  89,491.81      26,090.03  62,243.63  56,210.85   60,515.34
Price/performance          $5.85/IOP  $6.96/IOP  $6.71/IOP      $7.49/IOP  $11.76/IOP $7.24/IOP   $10.01/IOP
Sustainability rate (MB/s) 258MB/s    350MB/s    735MB/s        214MB/s    511MB/s    463MB/s     497MB/s
Avg response (50% load)    2.67ms     2.70ms     2.88ms         3.91ms     2.95ms     4.19ms      4.16ms
Avg response (100% load)   8.15ms     6.33ms     8.98ms         10.49ms    14.37ms    10.8ms      20.8ms

Leading values shown in red in the original slide.

Simplified Backend Cabling

SAS hard drive connection (system backend), AMS 2100/2300:
(Diagram: CTL0 and CTL1 each cable into a chain of RKA trays holding SAS or SATA drives.)

Simple and easy backend cabling
Single keyed cable type for all connections
Common HDD tray for SAS and SATA drives
All SAS backend paths active

LUN Grow/LUN Shrink

The user adds a new sub-LUN from the unused capacity in an existing RAID group
Any existing LUN and the sub-LUN are then unified
LUN grow is an online process

(Diagram: within a RAID group holding LU0, LU1, and unused space, a sub-LU is added from the unused space and unified with LU0; logically the host sees one larger LU0, while physically it remains LU0 plus the sub-LU.)

LUN Grow/LUN Shrink

Users can release space from an existing LUN
The released space is no longer provisioned
Released space can be used for new LUNs or to expand an existing LUN

(Diagram: LU1 is shrunk, returning capacity to unused space in the RAID group; the freed space is then used to create a new LU2.)
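
As a rough illustration of the grow path described above, here is a sketch of unifying a LU with a sub-LU carved from free RAID-group capacity; the RaidGroup class and its extent bookkeeping are hypothetical stand-ins for the array's internal structures.

    # Illustrative sketch of LUN grow by sub-LUN unification (hypothetical bookkeeping).

    class RaidGroup:
        def __init__(self, capacity_gb):
            self.capacity_gb = capacity_gb
            self.luns = {}                      # name -> list of (start, size) extents
            self.used_gb = 0

        def create_lun(self, name, size_gb):
            assert self.used_gb + size_gb <= self.capacity_gb, "not enough free space"
            self.luns[name] = [(self.used_gb, size_gb)]
            self.used_gb += size_gb

        def grow_lun(self, name, extra_gb):
            # Carve a sub-LU from unused capacity, then unify it with the existing LU:
            # the host sees one larger logical LU; physically it is LU + sub-LU extents.
            assert self.used_gb + extra_gb <= self.capacity_gb, "not enough free space"
            self.luns[name].append((self.used_gb, extra_gb))
            self.used_gb += extra_gb

        def lun_size(self, name):
            return sum(size for _, size in self.luns[name])

    rg = RaidGroup(capacity_gb=1000)
    rg.create_lun("LU0", 300)
    rg.grow_lun("LU0", 200)          # online: no host-side remount needed
    print(rg.lun_size("LU0"))        # 500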

Online RAID Group Expansion

Use with LUN grow to increase LUN and RAID group capacity. Tunable to give preference either to host I/O or to the expansion. The powerful controller minimizes disruptions during RAID group expansion.

(Diagram: a 3D+2P RAID group on HDD0-HDD4 is expanded to 6D+2P across HDD0-HDD7.)

Online RAID Group Expansion Specifications

LUNs are automatically restriped across expanded RAID groups.

(Diagram: LU0 through LU3 are restriped across all eight drives, HDD0-HDD7, after expansion.)

Mega-LUNs

Simple to create
  Create a LUN, pick your size
  A LUN can grow up to 60TB
  Can span every disk in the system
Simple to connect
  Select a LUN
  Choose a host
  No need to choose a preferred path
  No need to choose an owner controller
Simple to manage
  Expand LUNs non-disruptively
  Expand RAID groups non-disruptively
  Works with all Hitachi Adaptable Modular Storage software

(Diagram: a single LUN A spans multiple RAID groups across the system.)

Dynamic Provisioning = Efficient Storage Allocation

Use only what you need, where you need it, when it's needed.

(Diagram: servers consume HDP virtual volumes on the AMS 2000, which draw physical capacity from Hitachi Dynamic Provisioning pool volumes.)

Challenges
  High cost of storage
  Cumbersome provisioning
  Expensive optimization
Solution capabilities
  Simplify provisioning
  Provision only what is used
  Automates performance optimization
  Replication savings
Business benefits
  Reduced storage expense
  Reduced operational expense
  IT agility

Hitachi Dynamic Provisioning

Capability: automates performance optimization. Dynamic Provisioning software effectively combines many applications' I/O patterns and evenly spreads the I/O activity across the available physical resources. This automatically levels workloads and avoids particular parity groups becoming performance bottlenecks. Before Dynamic Provisioning, this had to be done manually by a storage expert.

(Diagram: servers write to an HDP volume (virtual LUN), which maps onto an HDP pool built from multiple RAID groups of physical drives.)

Hitachi Dynamic Provisioning - Reduces Replication Resource Requirements and Overhead

Using ShadowImage to copy virtual LU to virtual LU requires replicating only the mapped physical storage, which can be significantly less capacity than with a normal volume (maximum savings: a virtual-LU ShadowImage PVOL paired with a virtual-LU SVOL).

The storage manager still has the flexibility to use ShadowImage for copying between a virtual LU and a normal volume as needed: a virtual-LU PVOL may pair with a normal-volume SVOL, and a normal-volume PVOL may pair with a virtual-LU SVOL.

Note: ShadowImage is used in conjunction with the virtual volume. Mirroring of the underlying DP pool with ShadowImage is not supported.

Get More from Storage Assets with the Zero Page Reclaim Facility

Hitachi Dynamic Provisioning for the AMS2000 supports Zero Page Reclaim (ZPR) to reduce the DP pool's consumed capacity and improve storage utilization.

The Zero Page Reclaim facility examines a volume's physical capacity; where the firmware determines that nothing but zeros is stored on a Dynamic Provisioning pool page, the physical storage is unmapped and "returned" to the pool's free capacity.

Zero Page Reclaim is intended to be used after an initial migration or restore:
  Migrate from the physical volume to the virtual volume (or restore the volume from tape)
  Zero Page Reclaim then unmaps unused pages, returning physical storage to the dynamically provisioned pool

Note that prior to v8 microcode, the AMS2000 did not zero storage as it was allocated, so for existing volumes less gain will be seen with ZPR than is seen on a USP.

(Diagram: a physical server volume is migrated to a virtual volume; after ZPR, only the non-zero pages remain mapped.)
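
A toy sketch of the reclaim test described above, assuming the 32MB HDP page size given later in this deck; the in-memory page store is a stand-in, not the array's data path.

    # Toy model of Zero Page Reclaim: unmap any mapped page that contains only zeros.
    PAGE_SIZE = 32 * 1024 * 1024  # 32MB HDP page (per the component overview)

    def zero_page_reclaim(mapping, read_page):
        """mapping: {virtual_page_index: physical_page_id}; read_page: id -> bytes."""
        reclaimed = []
        for vpage, ppage in list(mapping.items()):
            data = read_page(ppage)
            if not any(data):                # page holds nothing but zeros
                del mapping[vpage]           # unmap; physical page returns to the pool
                reclaimed.append(ppage)
        return reclaimed

    # Example with tiny stand-in "pages":
    store = {1: bytes(8), 2: b"\x00\x07\x00\x00\x00\x00\x00\x00"}
    mapping = {0: 1, 1: 2}
    print(zero_page_reclaim(mapping, store.get))  # [1]; page 2 keeps real data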

Improved Performance Optimization: Dynamic Provisioning on the AMS2000 Supports Automatic Pool Rebalancing

Automatic rebalancing after virtual storage pool expansion:
  When a new RAID group is added to a DP pool, pages are actively redistributed among all of the RAID groups in the pool
  Rebalances the pool, including rebalancing at the individual virtual volume level
  All transparently online, with no effect on application I/O

(Diagram: a VVOL's pages 1-4 sit on pool volumes #0-#2; after pool volume #3 is added and the pool is optimized, the pages are spread across all four pool volumes.)

Further simplifies storage provisioning
Improved performance optimization

Hitachi Dynamic Provisioning - Components Overview

Hitachi Dynamic Provisioning presents a host with virtual-capacity volumes. Storage is assigned to a volume from a pool of real storage volumes just in time to satisfy a write request from the host.

Virtual-LU: a virtual representation of capacity presented to the host. The total capacity of the virtual LUs may exceed the actual capacity of the DP pool.

DP Mapping Table (held in cache memory / local memory): mapping information between the Virtual-LUs and the DP pool, plus page space information. A backup area for the DP Mapping Table (1GB) resides in the DP pool.

DP Pool: the actual storage capacity that contains data written by hosts and applications; comprised of multiple RAID groups.

Capacity monitoring (via Storage Navigator Modular 2, GUI and CLI):
  DP pool consumed capacity alert
  Over-provisioning ratio alert
  DP pool trend information

Hitachi Dynamic Provisioning Components - Dynamic Provisioning Pool

Dynamic Provisioning Pool (DP pool)
  Real installed capacity
  Cannot be directly referenced from any host
  Composed of 1GB chunks (32 x 32MB pages) randomly assigned throughout the available DP pool volumes
  A 1GB chunk is assigned to a Virtual-LU; DP pool pages are assigned to Virtual-LUs just in time
  DP pool storage is shared by all applications whose Virtual-LUs are associated with the DP pool
  The DP pool itself cannot be replicated, and it cannot be migrated using the volume migration function
  The DP pool is comprised of RAID groups built from physical HDDs

Hitachi Dynamic Provisioning Components - Virtual Logical Unit

Virtual Logical Unit (Virtual-LU)
  The virtual volume (capacity) that the host discovers
  Must be associated with a DP pool
  Immediately after definition, the Virtual-LU has no pages assigned from the DP pool
  Target of host reads and writes
  Real storage capacity is assigned to the Virtual-LU when required for a write
  Supported as P-VOL or S-VOL for ShadowImage

Hitachi Dynamic Provisioning Components - Dynamic Provisioning Mapping Table

Dynamic Provisioning Mapping Table (DP Mapping Table)
  Control information for Hitachi Dynamic Provisioning
  Stored in both cache memory and local memory
  Maps the Logical Block Addresses (LBAs) of a Virtual-LU to DP pages
  Provides the logical view of the DP pool for each Virtual-LU
  Mirrored into the DP pool system area for real-time recovery after a complete power outage
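
A compact sketch of the kind of lookup the mapping table enables, combined with the just-in-time page assignment from the preceding component slides; the class and structure names are illustrative assumptions.

    # Illustrative DP mapping: Virtual-LU LBA -> (pool page, offset), with pages
    # allocated from the pool only on first write (just in time).
    PAGE_SIZE = 32 * 1024 * 1024      # 32MB page
    SECTOR = 512                      # bytes per LBA

    class VirtualLU:
        def __init__(self, pool_free_pages):
            self.map = {}             # virtual page index -> physical page id
            self.pool = pool_free_pages

        def locate(self, lba, allocate=False):
            vpage, offset = divmod(lba * SECTOR, PAGE_SIZE)
            if vpage not in self.map:
                if not allocate:
                    return None, offset            # read of never-written area: zeros
                self.map[vpage] = self.pool.pop()  # just-in-time page assignment
            return self.map[vpage], offset

    vlu = VirtualLU(pool_free_pages=[102, 101, 100])
    print(vlu.locate(0, allocate=True))   # first write: page 100 assigned
    print(vlu.locate(70000))              # LBA ~34MB in: second 32MB page, unallocated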

Hitachi Dynamic Provisioning - Improved Storage Management Agility via Monitoring

Capacity monitoring:
  Monitors DP pool utilization; two customer-definable utilization levels may be set per pool to generate alerts (DP pool consumed capacity alert)
  Monitors the over-provisioning ratio (total virtual volume capacity / total pool capacity); two customer-definable over-provisioning ratio thresholds may be set to generate alerts (over-provisioning ratio alert)
  Alert options: blinking LED on the system; email message; SNMP trap
  Returns an error (WRITE PROTECT) to write commands after DP pool depletion
  Stores DP pool trend information for the past year and can export it as a CSV file
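
A small sketch of these two checks, assuming made-up threshold values; alert delivery (LED, email, SNMP trap) is reduced here to a printed message.

    # Sketch of DP pool capacity monitoring: consumed-capacity and over-provisioning
    # ratio alerts with two thresholds each (threshold values here are examples).

    def check_pool(consumed_gb, pool_gb, total_virtual_gb,
                   use_thresholds=(0.70, 0.90), ratio_thresholds=(3.0, 5.0)):
        alerts = []
        utilization = consumed_gb / pool_gb
        for level in use_thresholds:
            if utilization >= level:
                alerts.append(f"DP Pool Consumed Capacity Alert: {utilization:.0%} >= {level:.0%}")
        over_provisioning = total_virtual_gb / pool_gb
        for level in ratio_thresholds:
            if over_provisioning >= level:
                alerts.append(f"Over Provisioning Ratio Alert: {over_provisioning:.1f}x >= {level}x")
        if utilization >= 1.0:
            alerts.append("DP Pool depleted: writes now fail with WRITE PROTECT")
        return alerts

    for a in check_pool(consumed_gb=7500, pool_gb=10000, total_virtual_gb=42000):
        print(a)   # 75% trips the 70% capacity alert; 4.2x trips the 3.0x ratio alert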

Hitachi Dynamic Provisioning - Example of Trend Information Reported
(Chart: DP pool trend information over time.)

Hitachi Dynamic Provisioning - Delivering Performance Improvements

When many virtual LUs are allocated to a Dynamic Provisioning pool that consists of multiple RAID groups, the total host I/O load is evenly distributed across the pool's RAID groups. This wide-striping implementation automatically levels the I/O load and avoids over-utilizing any one RAID group, which would otherwise become a performance bottleneck.

(Diagram: without Dynamic Provisioning, LUs sit on individual RAID groups and RAID group #1 becomes the performance bottleneck; with Dynamic Provisioning, the virtual LUs draw from the actual capacity pool (DP pool), so the host I/O load is distributed across both RAID group #1 and RAID group #2.)

Hitachi Dynamic Provisioning Operations - Allocating Chunks and Load Balancing

Virtual-LU load balancing:
  When receiving new write data, assign one chunk (1GB) in the first DP pool RAID group, then continue writing data to the (32) 32MB pages that make up the chunk
  When receiving new write data for another Virtual-LU, assign the next chunk in the same first DP pool RAID group
  When all (32) 32MB pages of a chunk are allocated, create a new chunk in the second DP pool RAID group

(Diagram: chunks #0-#2 for Virtual-LUs #0-#2 are created in DP pool RAID group #0; once chunk #0 is used up, chunk #3 is created in DP pool RAID group #1.)
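
The following sketch mimics that chunk-placement order under the stated 1GB-chunk/32-page geometry; it is an illustration of the allocation pattern, not the firmware's actual allocator.

    # Sketch of HDP chunk allocation: 1GB chunks of 32 x 32MB pages; new chunks
    # open in the current RAID group, and the group advances when a chunk fills.
    PAGES_PER_CHUNK = 32

    class ChunkAllocator:
        def __init__(self, raid_groups):
            self.raid_groups = raid_groups
            self.rg_index = 0                # new chunks start in the 1st RAID group
            self.chunks = []                 # list of {"rg": ..., "pages_used": n}

        def new_chunk(self):
            self.chunks.append({"rg": self.raid_groups[self.rg_index], "pages_used": 0})
            return len(self.chunks) - 1

        def allocate_page(self, chunk_id):
            chunk = self.chunks[chunk_id]
            if chunk["pages_used"] == PAGES_PER_CHUNK:
                # Chunk used up: advance to the next RAID group and open a chunk there.
                self.rg_index = (self.rg_index + 1) % len(self.raid_groups)
                chunk_id = self.new_chunk()
                chunk = self.chunks[chunk_id]
            chunk["pages_used"] += 1
            return chunk_id, chunk["rg"]

    alloc = ChunkAllocator(["RG0", "RG1"])
    c0 = alloc.new_chunk()          # chunk #0 for Virtual-LU #0, in RG0
    c1 = alloc.new_chunk()          # chunk #1 for Virtual-LU #1, also in RG0
    last = None
    for _ in range(PAGES_PER_CHUNK + 1):     # the 33rd page overflows chunk #0...
        last = alloc.allocate_page(c0)
    print(last)                              # ...so a new chunk opens in RG1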

File System Initial Capacity Requirements

File system behavior vs. DP pool capacity utilization (thin friendly / low advantage / no advantage):

OS: Windows Server 2003; FS: NTFS
  Metadata writing (default parameters): writes metadata to the first block
  DP pool capacity consumed: small (one page) - thin friendly

OS: Windows Server 2008; FS: NTFS
  Under investigation

OS: Linux; FS: XFS
  Writes metadata at allocation-group-size intervals
  Consumed: depends on the allocation group size; approximately [Virtual-LU size] x [32MB / allocation group size]

OS: Linux; FS: Ext2/Ext3
  Writes metadata at 128MB intervals
  Consumed: about 25% of the Virtual-LU size
  Note: the default block size for these file systems is 4KB, which results in 25% of the Virtual-LU acquiring 32MB pages. If the file system block size is changed to 2KB or less, the metadata is written at 32MB or smaller intervals, so DP pool capacity consumption becomes 100%.

OS: Solaris; FS: UFS
  Writes metadata at 52MB intervals
  Consumed: the full size of the Virtual-LU - no advantage

OS: Solaris; FS: VxFS
  Writes metadata to the first block
  Consumed: small (one page) - thin friendly

OS: Solaris; FS: ZFS
  Writes metadata at virtual-storage-pool-size intervals
  Consumed: small (when the Virtual-LU is larger than 1GB)

OS: AIX; FS: JFS
  Writes metadata at 8MB intervals
  Consumed: the full size of the Virtual-LU
  Note: if the allocation group size is changed when the file system is created, the metadata can be written at up to 64MB intervals; approximately 50% of the DP pool is then consumed.

OS: AIX; FS: JFS2
  Writes metadata to the first block
  Consumed: small (one page) - thin friendly

OS: AIX; FS: VxFS
  Writes metadata to the first block
  Consumed: small (one page) - thin friendly
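
The consumption figures above follow from simple interval arithmetic: a 32MB pool page is allocated whenever a metadata write lands in it. A rough sketch of that estimate (an approximation that ignores alignment effects):

    # Rough estimate of DP pool consumption from periodic metadata writes:
    # every 32MB page that receives at least one metadata write gets allocated.
    PAGE_MB = 32

    def consumed_fraction(metadata_interval_mb, lu_size_mb):
        pages = set()
        offset = 0
        while offset < lu_size_mb:
            pages.add(offset // PAGE_MB)     # the page this metadata write lands in
            offset += metadata_interval_mb
        total_pages = -(-lu_size_mb // PAGE_MB)   # ceiling division
        return len(pages) / total_pages

    print(f"Ext2/Ext3 (128MB): {consumed_fraction(128, 102400):.0%}")  # ~25%
    print(f"JFS tuned (64MB):  {consumed_fraction(64, 102400):.0%}")   # ~50%
    print(f"JFS default (8MB): {consumed_fraction(8, 102400):.0%}")    # 100%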

File System Initial Capacity Requirements (continued)

OS: HP-UX; FS: JFS (VxFS)
  Writes metadata to the first block
  Consumed: small (one page) - thin friendly

OS: HP-UX; FS: HFS
  Writes metadata at 10MB intervals
  Consumed: the full size of the Virtual-LU - no advantage

OS: Windows Server 2003; FS: NTFS
  Writes metadata to the first block
  Consumed: small (one page)

OS: Windows Server 2008; FS: NTFS
  Under investigation

OS: Linux; FS: Ext2/Ext3
  Consumed: about 25% of the Virtual-LU size (see the block-size note above)

OS: VMware
  Case 1: writes metadata at 2GB intervals; DP pool capacity consumed: 32MB/2GB = 1.6%
  Case 2: writes metadata at 64MB intervals; DP pool capacity consumed: 32MB/64MB = 50%

Best Practice for Creating a DP Pool
8D+2P, many RAID groups in one DP pool, plural DP pools

Capacity utilization
  Number of data disks per RAID group in the DP pool - recommendation: 2, 4, 8, or 16 (e.g. 4D+1P, 8D+2P, 16D+2P)

Performance
  RAID group configuration in the DP pool: for sites requiring high I/O performance, configure the DP pool with many HDDs and as many RAID groups as possible. Default combination = 8D+2P.
  (Using the Performance Monitor, you can see the host I/O frequency for each Virtual-LU and the utilization of each RAID group in the DP pool.)

Reliability
  RAID group RAID level: from performance and reliability viewpoints, the recommendation is 8D+2P.
  DP pool count - recommendation: create plural DP pools. The backup of the DP Mapping Table is written to one of the DP pools; if that DP pool fails, the backup of the DMT is automatically taken to another DP pool. With only one DP pool this backup process cannot be executed, so creating two or more DP pools is recommended.

Best Practice for Creating a Virtual-LU
Multiples of 1GB in size

Capacity: set the Virtual-LU size on 1GB boundaries. (If you cannot, some DP pool space will be wasted, because pool capacity is assigned in 1GB chunks.)
Global Sparing with HDD Roaming

Improved reliability and flexibility of recovery
  Ideal for mission-critical data
  Mitigates the risk of data loss due to double-disk failures
No need to copy back from the spare HDD after a rebuild
  The spare becomes part of the new RAID group
  Swap the bad drive for a new spare

(Diagram: during normal operation, DATA 1-6 and PARITY 1-2 occupy the RAID group with a hot spare standing by; during a rebuild, DATA 3 is reconstructed directly onto the hot spare, which then remains in the RAID group.)

Hitachi Cache Partition Manager Feature

Modular storage systems typically have a simple, single block-size caching algorithm, resulting in inefficient use of cache and/or I/O. The Hitachi Cache Partition Manager feature delivers a cache environment that can be optimized to specific customer application requirements:
  Less cache required for specific workloads
  Better hit rate for the same cache size
  Better optimization of I/O throughput for mixed workloads
  No need to mirror all writes where this does not benefit the application

(Diagram: typical 4KB database blocks stored in 16KB cache pages leave 75% of the cache wasted!)

Cache Partition Manager Feature

With Cache Partition Manager, each workload can use cache pages matched to its block size:
  4KB database blocks in 4KB cache pages
  16KB file system blocks in 16KB cache pages
  512KB video data blocks in 64KB cache pages
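
The "75% wasted" figure on the previous slide follows directly from the page/block geometry; a quick sketch of that arithmetic (illustrative only):

    # Cache waste when the cache page is larger than the application block:
    # each cached block occupies a whole page, so the unused remainder is wasted.
    def wasted_fraction(block_kb, page_kb):
        return max(0, page_kb - block_kb) / page_kb

    print(f"4KB blocks in 16KB pages: {wasted_fraction(4, 16):.0%} wasted")   # 75%
    print(f"4KB blocks in 4KB pages:  {wasted_fraction(4, 4):.0%} wasted")    # 0%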

Cache Partition Manager Feature Specifications

Item                          Conditions
Devices and cache supported   Adaptable Modular Storage 2100: 2GB or 4GB/controller
                              Adaptable Modular Storage 2300: 4GB or 8GB/controller
                              Adaptable Modular Storage 2500: 8GB or 16GB/controller
Memory segment sizes          Master partition: fixed 16KB
supported                     Sub partition: 4KB, 8KB, 16KB, 64KB, 256KB, or 512KB
RAID stripe sizes supported   64KB, 256KB, or 512KB
Number of partitions          Adaptable Modular Storage 2100: 2 to 16
                              Adaptable Modular Storage 2300: 2 to 32
                              Adaptable Modular Storage 2500: 2 to 32
Cache Partition Manager for Services Oriented Storage Solutions

Cache Partition Manager enables cache memory to be subdivided and managed for more efficient use:
  Each of the divided portions of cache memory is called a partition
  LUNs defined in the Adaptable Modular Storage 2000 family systems can be assigned to a partition
  The customer can specify the size of each partition

Optimize data transmission to and from a host by assigning the most suitable partition to a LUN according to the application (data) received from the host. The same Adaptable Modular Storage 2000 family storage system can thus be optimized in many different ways depending upon the characteristics of the application.

Overview of the Cache Partition Manager

Cache is divided into partitions that can be exclusively used by assigned LUNs:
  Adaptable Modular Storage 2100: 2-16 partitions per system
  Adaptable Modular Storage 2300: 2-32 partitions per system
  Adaptable Modular Storage 2500: 2-32 partitions per system
  Partitions 0/1 are the master partitions (16KB only)
  Partitions 2 to n have selectable segment sizes: 4, 8, 16, 64, 256, and 512KB
  Partition sizes are flexible (each partition has a certain minimum)

(Diagram: on controller #0, partition #0 is the 16KB (fixed) multipurpose default, partition #2 uses 4KB segments for databases with small-size accesses, and partition #3 uses 256KB segments as an exclusive partition for streaming.)

Cache Partition Manager

Cache Partition Manager allows performance tuning, including the following:
  Selectable segment size: customize the cache segment size to a user application
  Partitioned cache memory: decrease the performance interference between applications in a storage-consolidated system by dividing cache memory into multiple partitions, each used individually by an application's LUs
  Selectable stripe size: increase performance by customizing the disk access size

Cache Utilization is Key to Application Performance - Without Cache Partition Manager

Data in cache memory is processed uniformly in units of 16KB regardless of the lengths of data sent from a host, so data with a length of 4KB or 8KB is scattered across the cache memory. The cache-hit rate when responding to read commands is effectively lowered.

(Diagram: 4KB data items scattered through 16KB cache segments.)

Cache Utilization is Key to Application Performance - With Cache Partition Manager

The cache memory is divided into two areas (in this example), making the two portions fit two data lengths: partition 2 (a sub partition) holds the 4KB and 8KB data, while partition 0 (the master partition) keeps its 16KB segments.

When handling a lot of small data with lengths shorter than 16KB, you can raise the cache hit rate and lower read response time by directing that data to partition 2. The contents of the cache memory are controlled more efficiently.

Advantage of Selectable Segment Size

With fixed-size 16KB cache segments, an application with short host accesses (e.g. 4KB) sees a low cache hit rate.

With the Adaptable Modular Storage 2000's selectable segment size (e.g. 8KB), choosing a smaller segment size increases the cache hit rate, and performance increases.

Advantage of Selectable Segment Size

With fixed-size 16KB cache segments, an application with long host accesses (e.g. 128KB) incurs overhead handling many segments for each I/O.

With the Adaptable Modular Storage 2000's selectable segment size (e.g. 256KB), choosing a larger segment size decreases that overhead, and performance increases.

Advantage of Selectable Stripe Size

With the Adaptable Modular Storage 2000's selectable stripe size, a 128KB write to a LUN striped at 256KB touches only 1-2 HDDs: lower overhead because of fewer HDD I/Os, which is good for applications with transaction I/Os.

With a fixed 64KB stripe size, the same 128KB write becomes 2-3 writes to HDDs: higher throughput with concurrent I/Os to the HDDs, which is good for applications with sustained I/Os.
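
A back-of-the-envelope sketch of why the stripe size changes the HDD I/O count for a given write; alignment determines whether the low or high count in the ranges above applies.

    # Number of HDD writes needed for one host write, as a function of stripe size:
    # the write touches every stripe unit it spans, so alignment matters.
    def hdd_writes(write_kb, stripe_kb, offset_kb=0):
        first = offset_kb // stripe_kb
        last = (offset_kb + write_kb - 1) // stripe_kb
        return last - first + 1

    print(hdd_writes(128, 256, offset_kb=0))    # 1 write  (aligned, 256KB stripe)
    print(hdd_writes(128, 256, offset_kb=192))  # 2 writes (straddles a boundary)
    print(hdd_writes(128, 64, offset_kb=0))     # 2 writes (64KB stripe)
    print(hdd_writes(128, 64, offset_kb=32))    # 3 writes (unaligned, 64KB stripe)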

Cache Residency Manager

The storage administrator defines a portion of cache for use by Cache Residency Manager. The storage administrator, or an application (through an API), dynamically maps LUN extents into the cache residency area. All read/write activity for those extents occurs in cache, for enhanced read/write performance; write data is mirrored in cache and protected on disk asynchronously.

(Diagram: extents of several LUNs mapped into a mirrored cache residency area.)

Unsurpassed Heterogeneous Connectivity - Simplify: Connectivity

Facilitates consolidation with the broadest range of open-systems OS support and easily accommodates dynamic, heterogeneous environments, using native path management and failover:
  Hitachi Universal Storage Platform V and VM
  Microsoft Windows 2000, Windows Server 2003, Windows 2008
  Hyper-V
  Sun Solaris (SPARC and x64)
  VMware
  Red Hat Enterprise Linux, SuSE Linux, Red Flag Linux, Oracle Enterprise Linux, Asianux, MIRACLE LINUX, others
  HP-UX
  IBM AIX
  Mac OS X
  Novell NetWare
  HP Tru64 UNIX
  HP OpenVMS

Host Storage Domains allow multiple heterogeneous servers to share a single port.

Host Protocol Options

AMS 2100: Fibre Channel system (base); Fibre Channel system (max); FC and iSCSI multiprotocol
AMS 2300: Fibre Channel system (base); Fibre Channel system (max); FC and iSCSI multiprotocol
AMS 2500: Fibre Channel system; iSCSI system; FC and iSCSI multiprotocol

(Diagram: FC port and iSCSI port layouts for each configuration.)

Host Port Intermix - New AMS2000 Controller Designs

AMS2100 (embedded FC + FC option): embedded 2 x 8Gb/s FC plus optional 2 x 8Gb/s FC, with user LAN and maintenance LAN ports
AMS2100 (FC onboard + iSCSI option): embedded 2 x 8Gb/s FC plus optional 2 x 1Gb/s iSCSI
AMS2300 (FC onboard + FC option): embedded 4 x 8Gb/s FC plus optional 4 x 8Gb/s FC
AMS2300 (FC onboard + iSCSI option): embedded 4 x 8Gb/s FC plus optional 2 x 1Gb/s iSCSI

10GbE / 1GbE FCIP with the iSR6250-HDS

Remote replication with AMS2500 and USP
  Cost-effective remote BC and DR connections
AMS with TrueCopy SAN-over-WAN replication
  Storage subsystem drag; software + array revenue

(Diagram: 8Gb FC from each site into an iSR6250-HDS router, with 1GbE FCIP across the WAN.)

Hitachi Volume Migration Modular Software for Data Lifecycle Management

Example: SAS to SATA. Migrate a low-I/O LU online from SAS HDDs to SATA HDDs, or move a heavy-I/O LU from 10K RPM to 15K RPM HDDs.

Volume Migration does not require host resources, and it does not require changes to the host configuration, mounts, or boot.

(Diagram: LU1 and LU2 migrate online from SAS to SATA; after the migration finishes, the LUs reside on SATA.)

Hitachi Volume Migration Software for Performance Improvement

Example: performance tuning. Use the Hitachi Performance Monitor feature to find a performance bottleneck, then migrate online.

(Diagram: LU1 with high-load I/O sits on a high-load RAID group while LU2 sits on a low-load RAID group; migrating LU1 to the low-load group leaves both RAID groups at medium load. The performance imbalance is addressed and the bottleneck removed.)

ShadowImage In-System Replication / Copy-on-Write Snapshot Software Cloning

Hitachi ShadowImage In-System Replication software - full volume clone / Copy-on-Write point-in-time (PiT) snapshot software:
  ShadowImage full volume clones per source: 8 maximum
  ShadowImage command devices: 128 maximum
  ShadowImage copy requests per AMS2500: 2047 maximum
  Copy-on-Write snapshots per source: 32 maximum
  Copy-on-Write command devices: 128 maximum
  Copy-on-Write PiT snapshots per AMS2500: 2046 maximum
  Reverse resynchronization: supported
  Instant snap restore: supported
  Auto I/O switch on double/triple drive failure: supported

TrueCopy Remote Replication Software for Business Continuity

TrueCopy Synchronous
  Bi-directional
  Differential re-synchronization
  Reverse re-synchronization
  R/W source and read target (Continuous Access)
  Copy between AMS/WMS and AMS2000 models

TrueCopy Extended Distance (Asynchronous)
  Automated source and remote synchronization
  Automated local and remote RPO snapshots
  Consistency group support
  Write-order fidelity preservation
  Bi-directional
  Local or remote site back-up capability
  Copy between AMS2000 and AMS1000/500 models

(Diagram: replication between an AMS 1000 and an AMS 2500.)


TrueCopy Extended Distance Software - TrueCopy Update Sequence

(Diagram, update cycle over time: host updates accumulate on the PVOL; at T1 a snapshot of the PVOL is taken and its differentials are sent to the SVOL, while new updates accumulate for T2. When the T2 differentials have been sent, the SVOL and its snapshot hold the same data as the PVOL's T2 image. If a disaster strikes mid-cycle, the SVOL is restored from its last complete snapshot.)
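
A schematic sketch of that cycle, assuming a simple dict-based volume model; real TrueCopy Extended Distance tracks differentials internally, so this only mirrors the ordering and restore-point behavior described above.

    # Schematic TrueCopy Extended Distance cycle: snapshot the PVOL, ship only the
    # differences since the previous snapshot, then commit a matching SVOL snapshot.
    pvol, svol = {}, {}
    svol_snapshot = {}          # last consistent, restorable image at the remote site
    prev_snap = {}

    def update_cycle():
        global prev_snap, svol_snapshot
        snap = dict(pvol)                                   # point-in-time PVOL snapshot
        diffs = {k: v for k, v in snap.items() if prev_snap.get(k) != v}
        svol.update(diffs)                                  # send differentials only
        svol_snapshot = dict(svol)                          # remote consistent point
        prev_snap = snap

    pvol.update({"blk0": "A", "blk1": "B"}); update_cycle()     # cycle T1
    pvol.update({"blk1": "B2", "blk2": "C"}); update_cycle()    # cycle T2
    pvol.update({"blk0": "A3"})                                 # in flight when disaster hits
    print(svol_snapshot)  # restore point = T2 image: {'blk0': 'A', 'blk1': 'B2', 'blk2': 'C'}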

Power Savings - Lower Total Cost of Ownership

Hitachi Data Systems HDD spin-down solutions support spinning down:
  ShadowImage drive groups involved in backup to tape
  VTL drive groups involved in backups
  Local internal backup copies of data
  Drive groups within archive storage
  Unused drive groups

(Diagram: spun-up RAID group vs. spun-down RAID group.)

Delivers power consumption savings by:
  Reducing the electrical power consumption of idled hard drives
  Reducing the cooling costs related to the heat generated by the hard drives

The Hitachi Green Storage Initiative

Power savings (power-down) option available
  Higher-density drives utilizing less power = fewer BTUs
  Saves money, saves the environment
SATA-II drives take advantage of power modes
  Heads unload whenever an HDD is idle for 2 hours
Variable fan speeds
  The internal temperature of the subsystem controls the speed of the fans

System and Data Security

Role-based access control
  System management access based on one of three roles (account administrator, storage administrator, or auditor)
  Unique login and password for each user
  Active Directory integration in conjunction with Command Suite 7.0

System audit logging
  Records all system changes and all power-ups and power-downs
  Sends all changes to a syslog server
  Enables IT managers to meet compliance requirements, including auditable trails

Data and system security
  Self-encrypting drive option
  Communication between management software and systems is SSL/TLS encrypted
  Maintenance ports support IPv6 security specs
  All models meet Common Criteria EAL2 certification

Data Retention Utility (optional)
  Tamperproof Write Once Read Many (WORM) makes selected data non-erasable and non-rewriteable
  Read access can be limited

Questions/Discussion


Thank You
