
Centera

Version 3.0

Online Help (printable version)


P/N 300-002-547
REV A01

EMC Corporation
Corporate Headquarters:
Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com

Copyright 2005 EMC Corporation. All rights reserved.


EMC believes the information in this publication is accurate as of its publication date. The information is
subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN
THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable
software license.
Copyright 1991-2, RSA Data Security, Inc. Created 1991. All rights reserved.
License to copy and use this software is granted provided that it is identified as the "RSA Data Security,
Inc. MD5 Message-Digest Algorithm" in all material mentioning or referencing this software or this
function. RSA Data Security, Inc. makes no representations concerning either the merchantability of this
software or the suitability of this software for any particular purpose. It is provided "as is" without express
or implied warranty of any kind.
These notices must be retained in any copies of any part of this documentation and/or software.
Copyright (c) 1995-2002 International Business Machines Corporation and others. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the Software without restriction, including without
limitation the rights to use, copy, modify, merge, publish, distribute, and/or sell copies of the Software,
and to permit persons to whom the Software is furnished to do so, provided that the above copyright
notice(s) and this permission notice appear in all copies of the Software and that both the above copyright
notice(s) and this permission notice appear in supporting documentation.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF THIRD PARTY RIGHTS. IN NO
EVENT SHALL THE COPYRIGHT HOLDER OR HOLDERS INCLUDED IN THIS NOTICE BE LIABLE
FOR ANY CLAIM, OR ANY SPECIAL INDIRECT OR CONSEQUENTIAL DAMAGES, OR ANY
DAMAGES WHATSOEVER
RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE
USE OR PERFORMANCE OF THIS SOFTWARE
Trademark Information
The EMC version of Linux, used as the operating system on the Centera server, is a derivative of Red
Hat and SuSE Linux. The operating system is copyrighted and licensed pursuant to the GNU General
Public License (GPL), a copy of which can be found in the accompanying documentation. Please read the
GPL carefully, because by using the Linux operating system on the Centera server, you agree to the terms
and conditions listed therein.
Sun, the Sun Logo, Solaris, the Solaris logo, and the Java compatible logo are trademarks or registered
trademarks of Sun Microsystems, Inc.

All SPARC trademarks are trademarks or registered trademarks of SPARC International, Inc. Linux is a
registered trademark of Linus Torvalds.
ReiserFS is a trademark of Hans Reiser and the Naming System.
Microsoft, Windows, and Windows NT are registered trademarks of Microsoft Corporation. Red Hat is a
registered trademark of Red Hat Software, Inc.
UNIX is a registered trademark in the United States and other countries and is licensed exclusively
through X/Open Company Ltd.
HP-UX is a trademark of Hewlett-Packard Company.
AIX is a registered trademark of IBM Corporation.
IRIX and SGI are registered trademarks of Silicon Graphics, Inc. in the United States and other countries
worldwide.
PostScript and Adobe are trademarks of Adobe Systems Incorporated.
This product includes software developed by the Apache Software Foundation
(http://www.apache.org/).
This product includes software developed by L2FProd.com (http://www.L2FProd.com/).
The Bouncy Castle Crypto package is Copyright 2000 of The Legion Of The Bouncy Castle
(http://www.bouncycastle.org).
All other trademarks used herein are the property of their respective owners.
ReiserFS is hereby licensed under the GNU General Public License version 2.
Source code files that contain the phrase "licensing governed by reiserfs/README" are "governed files"
throughout this file. Governed files are licensed under the GPL. The portions of them owned by Hans
Reiser, or authorized to be licensed by him, have been in the past, and likely will be in the future, licensed
to other parties under other licenses. If you add your code to governed files, and don't want it to be owned
by Hans Reiser, put your copyright label on that code so the poor blight and his customers can keep things
straight.
All portions of governed files not labeled otherwise are owned by Hans Reiser, and by adding your code to
it, widely distributing it to others or sending us a patch, and leaving the sentence in stating that licensing is
governed by the statement in this file, you accept this. It will be a kindness if you identify whether Hans
Reiser is allowed to license code labeled as owned by you on your behalf other than under the GPL,
because he wants to know if it is okay to do so and put a check in the mail to you (for non-trivial
improvements) when he makes his next sale. He makes no guarantees as to the amount if any, though he
feels motivated to
motivate contributors, and you can surely discuss this with him before or after contributing. You have the
right to decline to allow him to license your code contribution other than under the GPL.
Further licensing options are available for commercial and/or other interests directly from Hans Reiser:
hans@reiser.to. If you interpret the GPL as not allowing those additional licensing options, you read it
wrongly, and Richard Stallman agrees with me, when carefully read you can see that those restrictions on
additional terms do not apply to the owner of the copyright, and my interpretation of this shall govern for
this license.
Finally, nothing in this license shall be interpreted to allow you to fail to fairly credit me, or to remove my
credits, without my permission, unless you are an end user not redistributing to others. If you have doubts
about how to properly do that, or about what is fair, ask. (Last I spoke with him Richard was
contemplating how best to address the fair crediting issue in the next GPL version.)

Table of Contents
Table of Contents ........................................................ 4
Welcome to Centera ....................................................... 15
About Centera Online Help ................................................ 17
Architecture and Performance.............................................................................................................................19
Nodes with Access Roles ..................................................................................................................................19
Nodes with Storage Roles ................................................................................................................................20
Storage on Nodes with Access Role..............................................................................................................20
Fixed Overhead and Data Size .......................................................................................................................21
Data Availability .................................................................................................................................................21
Cluster Topology.................................................................................................................................................21
Centera Interfaces ...............................................................................................................................................23
Storage Strategy...................................................................................................................................................25
Threads...................................................................................................................................................................25
Application Server ..............................................................................................................................................26
Replication and Performance..........................................................................................................................26
Capacity Reporting.............................................................................................................................................27
Capacity Values ...................................................................................................................................................27

Capacity Definitions...........................................................................................................................................28
Capacity Requirements .....................................................................................................................................29
Storage Efficiency................................................................................................................................................30
Centera Features .......................................................................................................................................................31
Content Addressed Storage.............................................................................................................................31
Centera....................................................................................................................................................................31
Self Managing.......................................................................................................................................................31
Garbage Collection .............................................................................................................................................32
Governance/Compliance.................................................................................................................................33
Content Protection ..............................................................................................................................................34
Centera Alerts ......................................................................................................................................................36
Access Control...........................................................................................................................................................40
Centera Security ..................................................................................................................................................40
Access Security Model.......................................................................................................................................40
Pool Membership................................................................................................................................................44
Disaster Recovery.....................................................................................................................................................46
Replication and Restore ....................................................................................................................................46
Replication.............................................................................................................................................................47
Replication and other Product Features ......................................................................................................49

Replication Topologies ......................................................................................................................................50


Restore ....................................................................................................................................................................51
Centera Maintenance ..............................................................................................................................................52
ConnectEMC.........................................................................................................................................................52
Centera Service Tiers .........................................................................................................................................53
Hardware Service Models................................................................................................................................54
Configuring Centera................................................................................................................................................55
Configuration Steps............................................................................................................................................55
Copy all pools and profiles to another cluster ..........................................................................................57
Copy a Profile Definition to Another Cluster............................................................................................58
Copy a Pool to Another Cluster .....................................................................................................................59
Create a .pea File .................................................................................................................................................60
Create a Profile Secret........................................................................................................................................61
Enable Replication of Delete ...........................................................................................................................61
Host Application Data .......................................................................................................................................63
Host Application Data Example ....................................................................................................................63
Merge Two or More .pea Files ........................................................................................................................64
Migrating Legacy Data ......................................................................................................................................65
Migrating Legacy Data Example ...................................................................................................................66

Provide Legacy Protection ...............................................................................................................................67


Provide Legacy Protection Example.............................................................................................................68
Segregate Application Data and Selective Replication...........................................................................69
Segregate Application Data Example...........................................................................................................70
Replicate Specific Pools.....................................................................................................................................72
Replicate Specific Pools Example ..................................................................................................................73
Restore Specific Pools ........................................................................................................................................74
Restore Specific Pools Example......................................................................................................................75
Set Up Bidirectional Replication of One or More Pools .........................................................................76
Set Up Chain Replication of One or More Pools ......................................................................................78
Set Up Star Replication on One or More Pools .........................................................................................80
Set up Unidirectional Replication of one or more pools........................................................................82
Support Application Failover .........................................................................................................................83
CLI Reference Guide ...............................................................................................................................................85
Overview ...............................................................................................................................................................85
Launch the CLI ....................................................................................................................................................85
CLI Commands....................................................................................................................................................86
CLI Conventions .................................................................................................................................................87
CLI Settings ...........................................................................................................................................................89

CLI Command Example ...................................................................................................................................90


Scripting the CLI .................................................................................................................................................90
Scripting CLI Example ......................................................................................................................................91
Enter CLI Commands........................................................................................................................................92
Create a batch file................................................................................................................................................92
Create......................................................................................................................................................................93
Delete ......................................................................................................................................................................99
Export................................................................................................................................................................... 103
Help ...................................................................................................................................................................... 106
Import .................................................................................................................................................................. 107
Migrate ................................................................................................................................................................ 110
Notify ................................................................................................................................................................... 113
Quit ....................................................................................................................................................................... 115
Replication.......................................................................................................................................................... 116
Restore ................................................................................................................................................................. 119
Set .......................................................................................................................................................................... 124
Show Commands............................................................................................................................................. 148
Update ................................................................................................................................................................. 204
How Do I..?..........................................................................................................................................................- 213 -

Overview ................................................................. 213
Change the Administrator's Password ..................................... 213
Change the Administrator's Details ...................................... 215
View Administrator's Details ............................................ 216
Configure the Regeneration Buffer ....................................... 217
View the Capacity Details of a Node ..................................... 218
View the Capacity of All Pools .......................................... 220
View Cluster Capacity ................................................... 221
View Node Capacity ...................................................... 223
View the Capacity of All Nodes .......................................... 225
View the Capacity of the Regeneration Buffer ............................ 227
Configure Centera ....................................................... 228
Learn More About Centera ................................................ 229
Monitor Centera ......................................................... 230
Add a Cluster to a Domain ............................................... 231
Define a Cluster Mask ................................................... 232
Remove a Cluster from a Domain .......................................... 234
View the Health of a Cluster ............................................ 235
Define and Manage Retention Periods ..................................... 236

Modify a Retention Period ............................................... 237
Set and Change the Default Retention Period ............................. 239
View ConnectEMC Settings ................................................ 240
Verify Email Connectivity to EMC ........................................ 241
Change ConnectEMC Parameters ............................................ 243
Add a Cluster to a Domain ............................................... 244
Create a Domain ......................................................... 245
Delete a Domain ......................................................... 246
Remove a Cluster from a Domain .......................................... 247
Set Up ICMP ............................................................. 248
View ICMP Settings ...................................................... 248
Learn About Content Protection Schemes .................................. 249
Create a Pool Mapping ................................................... 250
Delete a Pool Mapping ................................................... 251
Migrate Legacy Data ..................................................... 252
Migrating Legacy Data Example ........................................... 253
Provide Legacy Protection ............................................... 253
Provide Legacy Protection Example ....................................... 254
Revert Pool Mapping ..................................................... 255


Start Migration Process ................................................. 255
View Pool Mapping ....................................................... 257
View Pool Migration ..................................................... 258
Change the Settings of an Access Node ................................... 259
View Node Network Configurations ........................................ 260
View Detailed Network Switch Information ................................ 262
View the Status of Network Switches ..................................... 263
Change the Network Settings of an Access Node ........................... 264
Lock Nodes for Remote Service ........................................... 266
Set the Speed of a Network Controller ................................... 266
View the Number of Nodes on a Cluster ................................... 268
Unlock Nodes for Remote Service ......................................... 269
View the Details of all Nodes ........................................... 270
View Node Network Configurations ........................................ 271
Create a .pea File ...................................................... 273
Merge Two or More .pea Files ............................................ 274
Copy a Pool to Another Cluster .......................................... 275
Copy all pools and profiles to another cluster .......................... 276
Create a Pool ........................................................... 277


Create a Pool Mapping ................................................... 278
Define a Pool Mask ...................................................... 279
Export Pools and Profiles to Another Cluster ............................ 281
Remove a Pool ........................................................... 282
Segregate Application Data and Selective Replication .................... 283
Segregate Application Data .............................................. 285
Start Pool Migration .................................................... 286
Update Pool Details ..................................................... 288
View Pool Capacity ...................................................... 290
View Pool Migration ..................................................... 291
View Relationship between a Pool and Profile ............................ 292
Assign a Profile Access Rights to a Pool ................................ 293
Copy a Profile Definition to Another Cluster ............................ 295
Copy all pools and profiles to another cluster .......................... 296
Create a Profile ........................................................ 297
Create a Profile Secret ................................................. 298
Delete a Profile ........................................................ 299
Disable the Anonymous Profile ........................................... 300
View the Relationship between a Pool and a Profile ...................... 302


Enable Replication of Delete ............................................ 303
Learn More About Replication Topologies ................................. 304
Monitor Replication ..................................................... 305
Pause Replication ....................................................... 307
Prepare for Disaster Recovery ........................................... 307
Report Replication Performance .......................................... 309
Resume Replication ...................................................... 311
Set Up Bidirectional Replication of One or More Pools ................... 311
Set Up Chain Replication of One or More Pools ........................... 313
Set Up Replication ...................................................... 315
Set Up Star Replication on One or More Pools ............................ 317
Set up Unidirectional Replication of one or more pools .................. 320
View Detailed Replication Statistics .................................... 321
Support Application Failover ............................................ 323
Monitor Restore ......................................................... 324
Pause Restore ........................................................... 326
Restore Specific Pools .................................................. 326
Restore Specific Pools Example .......................................... 327
Resume Restore .......................................................... 328


Perform a Restore ....................................................... 329
Change the Password of the Administrator ................................ 331
Change the Administrator's Details ...................................... 332
Change the Password to the Default ...................................... 333
Lock All Cluster Nodes .................................................. 334
Manage Access Control ................................................... 335
Manage Centera .......................................................... 336
Unlock a Specific Node .................................................. 337
View Security Status of a Node .......................................... 338
Configure SNMP .......................................................... 340
Set Up SNMP ............................................................. 341
View the Current Storage Strategy ....................................... 343
Glossary ................................................................ 344


Welcome to Centera
Centera is a networked storage system specifically designed to store and provide fast, easy
access to fixed content (information in its final form). It is the premier solution to offer online
availability with long term retention and assured integrity for this fastest-growing category
of information.
Centera provides a simple, scalable, secure storage solution for cost-effective retention,
protection, and disposition of a wide range of fixed content, including X-rays, voice
archives, electronic documents, e-mail archives, check images, and CAD/CAM designs.
Exceptional performance, seamless integration, and proven reliability make Centera the
online enterprise archiving standard for virtually any application and data type.
Centera is the first magnetic disk-based WORM (Write Once Read Many) device and helps
facilitate compliance with externally driven regulations and internal governance
requirements while delivering online access at a total cost of ownership superior to tape.
Centera ensures secure, cost-effective storage and management of your fixed content across
its lifecycle.
Centera ensures applications no longer have to track the physical location of stored
information. Instead, Centera creates a unique identifier, based on the attributes of the
content, which applications can use for retrieval.


About Centera Online Help


Centera Online Help provides support and information about Centera. It is
designed to be a useful and practical guide where users can quickly find
information.
The following types of Help are available:
Help button: Page-level Help. Click Help in a dialog or window to open the
page-level help for that tab.
How Do I?: Questions related to the context of the screen currently being
viewed.
Glossary: Detailed definitions of commonly used terms.
Index: Keywords and cross-references.
Table of Contents: Navigation aid displayed in the left pane of Online Help as a
tree structure. Topics are logically organized into books. Click a book name to
open the book and display its topic list.
The Centera Online Help is separated into the following topics:
• Welcome to Centera
• About Centera Online Help
• Centera Overview
• Configuring Centera
• CLI Reference Guide
• How Do I..?

Architecture and Performance


Nodes with Access Roles
Nodes with the access role provide the interface between the applications on the client
network and the Centera cluster. The recommended ratio is two nodes with the access role per
eight nodes.
In Centera, each node can have the access role, the storage role, or both. Combined node roles
are only supported on Gen3 and Gen4 hardware and require a license.
Nodes with the access role communicate with the customer's environment and with the other
nodes in the cluster. Nodes with the access role are essentially the gateway to the cluster for the
customer's data. In certain cases, an increase in the number of access nodes can increase
read/write performance on a cluster.
Note: All nodes can have the storage role assigned, meaning that all nodes can store data.
EMC recommends that for every 8 nodes, 2 of the nodes should have the access role.
A cluster with 32 nodes would thus have 8 nodes with the access and storage roles and 24
nodes with the storage role only.
Theoretically there is no limit to the number of nodes to which the access role can be
assigned. In certain read/write use cases, performance can be improved by assigning the access
role to more nodes.

Object Size          Cluster   Operation   Nodes with access   Nodes with access
                                           role and CPM        role and CPP

large (>10MB Avg)    32        read        8                   8
                               write       6                   8
                     16        read        8                   4
                               write       4                   4
                     8         read        6                   NA*
                               write       2                   NA*

small (<100K Avg)    32        read        8                   NA**
                               write       6                   NA**
                     16        read        8                   NA**
                               write       4                   NA**
                     8         read        6                   NA**
                               write       2                   NA**

*  CPP is not supported on 8 node clusters.
** CPP is not recommended for small files.

Nodes with Storage Roles


Data can only be stored on nodes with the storage role. By default, an EMC engineer assigns
the storage role to all nodes.
There is a 30 million object count limit per node in Centera.
Data is stored reliably on the nodes with the storage role when the node with the access role
acknowledges request completion to the client.

Storage on Nodes with Access Role


With the introduction of CentraStar 2.4, it is now possible to store data on Access Nodes as
well as Storage Nodes. From this release on it is possible to assign both access and storage
roles to each node in a Centera cluster. This change offers the possibility to increase capacity
or bandwidth and performance without adding extra nodes.
A separate license is required to assign the access role to additional sets of nodes. Contact
your EMC representative for more details.
A CE+ cluster should have at least two nodes without the access role to support
manageability connections. To check if the cluster supports storage on access nodes, use the
show features command.
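
For example, a quick check from the CLI might look as follows. This is an illustrative sketch
only: the exact prompt, feature names, and output layout depend on the CentraStar release.

    show features

    Storage on Access Nodes:   enabled
    ...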


Fixed Overhead and Data Size


An important characteristic of the CentraStar architecture is that there is a fixed overhead for
any given transaction, regardless of the size of the data object that is being transferred. Due to
this overhead, throughput is highly dependent on the size of the data written to a Centera.

Data Availability
To ensure continuous data availability and allow self healing, Centera mirrors (CPM) or
fragments (CPP) all data on the cluster. Data copies or fragments are stored on different
nodes thus ensuring data redundancy in the event of disk or node failure. Taking load
balancing into account, a node with the access role elects two nodes to store data using CPM
or seven nodes using CPP.
If the requirement is to store many small files (<10K), EMC recommends that the application
embed the file directly in the CDF or combine multiple small files in one C-Clip. Two data
objects, for example a CDF and a blob, have twice the overhead of a single object (a CDF with
embedded blobs).
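
As an illustration with hypothetical numbers: storing 1,000 separate 5 KB files as individual
CDF-plus-blob pairs creates 2,000 cluster objects, whereas embedding each file's content in its
CDF creates only 1,000 objects, and combining several small files into one C-Clip reduces the
object count further.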

Cluster Topology
This section describes Centera's basic hardware elements, which are grouped into structures
that we call cubes and clusters.
A cube is a Centera unit containing between 4 and 32 nodes, including a set of switches and
an ATS (Automatic Transfer Switch) that provide redundancy.

Generation   ATS   Max. Cube Size   Increment Size   Max. No. Cubes in Cluster

Gen 1        No    32               16
Gen 2/3      Yes   32
Gen 4        No    16

The maximum number of nodes in a cluster is 128.


A cube is a single Centera unit containing 4, 8, 16, 24, or 32 nodes. A cluster consists of one or
more Centera cubes interconnected to present a single storage area. We currently support up to
4 cubes in a cluster.


All Centera clusters should be installed by qualified EMC personnel only. Attempts by
unqualified personnel to set up or power up a Centera may void product warranties. Do
NOT change any hardware configuration or internal cabling, or your Centera may not
function or may be damaged.
A single cabinet consists of the following hardware elements:
• One 40U NEMA rack
• 4, 8, 16, 24, or 32 nodes, of which:
  • Each node is connected to both cube switches.
  • Depending on the hardware version, a node can either have the access role (Access
    Nodes) or the storage role (Storage Nodes), or it can have both roles assigned.
    Combined node roles are only supported on Gen3 hardware or higher and require a
    separate license.
  • 2, 4, or more nodes are connected to the client LAN infrastructure (Access
    Nodes/nodes with the access role assigned). The maximum number of nodes with
    the access role in a 32-node cube is 8. Nodes with the access role communicate with
    the customer's environment and with the other nodes in the cluster. Each node with
    the access role has an external IP address. Nodes with the access role are essentially
    the gateway to the cluster for the customer's data.
  • All nodes can have the storage role assigned, meaning that all nodes can store data.
    The recommended ratio is 2 nodes with the access role per 8 nodes. A cluster with 32
    nodes would thus have 8 nodes with the access and storage roles and 24 nodes with
    the storage role only.
  • The ETH2 ports on all nodes without the access role can be used for manageability
    connections by EMC service personnel or customers. This implies that on a CE+
    cluster at least two nodes should not have the access role assigned.

                      Gen 1       Gen 2       Gen 3                        Gen 4

Access connectivity   100BASE-T   100BASE-T   1000BASE-T (100BASE-T        1000BASE-T (internal and
                                              internal connectivity)      external connectivity)

Hard drive            160GB       250GB       320GB                        320GB

• Two or more internal LAN-24 or -48 port cube switches that connect to all of the
  nodes to enable the CentraStar network and provide full redundancy in case one fails.
• Two internal root switches for multi-cube clusters (these switches are optional).
• Two power distribution units (PDU).
• An optional Automatic (AC) Transfer Switch (ATS) to provide power failover so that
  the cluster continues to operate in case one of the two AC power feeds fails.
• An optional external modem (though two are recommended to maintain
  redundancy) to allow the remote support capability.
The above cube information can differ, depending on the generation of hardware being used.
The above node information can differ, depending on the generation of hardware being used.

Cube Power Field       Generation 1    Generation 2        Generation 3        Generation 4

Power Status           Not Available   Single Source or    Single Source or    Not Available
                                       Redundant           Redundant

Current Power Source   Not Available   Rail A or Rail B    Rail A or Rail B    Not Available

Redundancy Provider    None            ATS                 ATS                 None

WARNING: Do not touch the cabling within the Centera cabinet.

Centera Interfaces
Applications and system operators can access a Centera in different ways. The following
sections describe the main Centera interfaces.


API
Applications interface with Centera via the Access Application Program Interface (API).
The API connects the Application Server(s) to the nodes with the access role using IP
addressing. Refer to the Centera API Reference Guide, P/N 0069001185, for a detailed
overview of the Access API.

CLI
The system operator can administer the cluster and monitor its performance using the
Centera Command Line Interface (CLI), a set of predefined commands that you enter via a
command line. Refer to the Centera Online Help, P/N 300-002-547, for more information on
the use of the CLI and specific CLI commands.

Centera Viewer
The system operator can use Centera Viewer to discover, manage, and report the health of a
Centera. It provides status on all aspects of the system and its components. Refer to Centera
Viewer Online Help for detailed information on how to use this application.


Storage Strategy
Data can be stored on a Centera using one of two Storage Strategies: Storage Strategy
Capacity or Storage Strategy Performance. Storage Strategy Capacity ensures single
instancing and thus optimizes capacity. Storage Strategy Performance improves the speed of
write operations at the cost of single instance storage. When Storage Strategy Performance is
enabled, identical content may be stored multiple times.
Setting the Storage Strategy to Performance does not improve performance substantially for
large files (> 1 MB). When setting the Storage Strategy to Performance the default threshold
(maximum file size) is 250 KB. Objects larger than this threshold will revert to Storage
Strategy Capacity in order to benefit from single instance storage. The threshold can be
adjusted to optimize the balance between performance and capacity.
For CPM the threshold of 250 KB equals the file size. For CPP the threshold corresponds to a
file size of 6*250 KB = 1.5 MB.
The Storage Strategy is set to Performance by default. Only qualified Centera service
personnel can change the Storage Strategy and the threshold for that strategy. The system
operator can view which Storage Strategy is enabled by using the CLI command show
features. This command also shows the threshold. Refer to the Centera Online Help, P/N
300-002-547, for more information.
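
For example, the command might report the strategy and its threshold as shown below. The
listing is an illustrative sketch only; the actual wording and layout of the output vary by
CentraStar release.

    show features

    Storage Strategy:             Performance
    Storage Strategy Threshold:   250 KB
    ...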

Threads
EMC recommends using multi-threaded applications to increase the maximum transfer rate.
Multi-threading operating systems enable different parts of a program, or threads, to execute
simultaneously.
The Centera architecture is highly parallel and supports multiple parallel activities. A single
thread cannot take advantage of this parallelism.


Threads are distributed evenly over the available nodes with the access role.
The number of nodes influences the number of threads that can be supported. Clearly a 32
node cluster will perform better than a 16 node cluster.

Generation   ATS   Max. Cube Size   Increment Size   Max. No. Cubes in Cluster

Gen 1        No    32               16
Gen 2/3      Yes   32
Gen 4        No    16

The maximum number of nodes in a cluster is 128.


Centera architecture is not the only factor when evaluating performance as this also depends
on the application and the application server configuration.

Application Server
Centera architecture is not the only factor when evaluating performance as this also depends
on the application and the application server configuration. The processing power of an
application server is very important, especially if a large quantity of C-Clips or even a few
very large C-Clips (containing tens of thousands of tags) are in use.
Dedicated application servers that operate with 2, 4 or 8 processors also deliver a better
performance.
The number of application servers writing to a single Centera cluster influences the overall
number of files stored and the bandwidth utilization of the cluster.

Replication and Performance


The impact of replication on the Centera configuration depends on whether unidirectional or
bidirectional replication has been configured and the read/write rates on the involved
clusters.
By tuning the number of threads assigned to replication, replication can keep up with write
activity with minimal impact on write performance.


A thread is a part of a program that can execute independently of other parts. The CentraStar
operating system supports multi-threading, which enables threads to run at the same time
without interfering with each other.
Threads are distributed evenly over the available nodes and the number of nodes influences
the number of threads that can be supported.
EMC recommends using multi-threaded applications to increase the maximum transfer rate.
Contact your EMC representative to assist you in sizing your configuration.

Capacity Reporting
Capacity on a Centera cluster is reported through the CLI, the MoPI interface in the SDK and
ConnectEMC.

Capacity Values
In the storage industry, capacity is reported using either a Decimal or Binary notation.
Centera uses the decimal system.
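In decimal notation, 1 GB equals 10^9 bytes and 1 TB equals 1,000 GB (10^12 bytes); under the
binary convention, 1 TB would instead correspond to 2^40 bytes.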
Centera reports capacity values by rounding them to the nearest whole number. If the value
is too long to display in the CLI then the unit of measurement increases. For example, 999.999
GB would be displayed as 1TB.


Capacity Definitions
Centera uses the following set of capacity definitions in all reporting channels.

Total Raw Capacity: The total physical capacity of the cluster/cube/node or disk.

System Resources: The capacity that is used by the CentraStar software and is never available for
storing data.

Spare Capacity: The capacity that is available on nodes that do not have the storage role assigned.

Offline Capacity: The capacity that is temporarily unavailable due to reboots, offline nodes, or
hardware faults. This capacity will be available as soon as the cause has been solved.

Free Raw Capacity: The capacity that is free and available for storing data, for self healing
operations in case of disk or node failures, or for database growth and failover.

Used Raw Capacity: The capacity that is used or otherwise not available to store data; this includes
the capacity reserved as system resources, not assigned for storage or offline, and capacity actually
used to store user data and associated audit and metadata.

Protected User Data: The capacity taken by user data, including CDFs, reflections, and protected
copies of user files.

Audit and Metadata: The overhead capacity required to manage the stored data. This includes
indexes, databases, and internal queues.

Available Capacity: The amount of capacity available to write. If the Regeneration Buffer Policy is
set to Alert Only, this equals Free Raw Capacity - System Buffer. If the Regeneration Buffer Policy
is set to Hard Stop, this equals Free Raw Capacity - System Buffer - Regeneration Buffer.

Available Capacity Until Alert: The amount of capacity available until the regeneration buffer is
reached and an alert is raised. Irrespective of the Regeneration Buffer Policy, this equals Free Raw
Capacity - System Buffer - Regeneration Buffer.

System Buffer: Allocated space that allows internal databases and indexes to safely grow and fail
over. As the system is filled with user data and the Audit and Metadata capacity increases, the
capacity allocated to the System Buffer decreases.

Regeneration Buffer: Space that is allocated for regenerating data after disk and/or node failure.
Depending on the Regeneration Buffer Policy, the reservation can be a hard reservation (Hard Stop)
preventing write activity from using the space, or a soft reservation (Alert Only) used for alerting
only.
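
A worked example with hypothetical round numbers: if Free Raw Capacity is 10 TB, the System
Buffer is 1 TB, and the Regeneration Buffer is 2 TB, then Available Capacity Until Alert is
10 - 1 - 2 = 7 TB. Available Capacity is 10 - 1 = 9 TB if the Regeneration Buffer Policy is Alert
Only, or 10 - 1 - 2 = 7 TB if it is Hard Stop.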

Capacity Requirements
The capacity requirements of a Centera Cluster depend on several criteria including the
protection scheme used, which can be either Content Protection Parity (CPP) or Content
Protection Mirrored (CPM).
The number of nodes needed to accommodate a customer's content is driven both by the
storage requirements of the customer and the customer's performance requirements.
The following table displays the available capacity (TB) per storage node (Gen3 and
Gen4).

Avg File Size (MB)   CPM (TB)   CPP (TB)

0.05                 0.6        0.3
0.10                 0.6        0.6
0.25                 0.6
0.50                 0.6
1.00                 0.6
10.00                0.6
100.00               0.6
250.00               0.6
500.00               0.6

The total available capacity can be calculated by multiplying the available capacity per node
that has the storage role assigned by the total number of nodes in the cluster that have the
storage role assigned.
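
For example, with 0.6 TB of available capacity per node with the storage role, a cluster in which
24 nodes have the storage role assigned provides approximately 24 x 0.6 TB = 14.4 TB of
available capacity.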

Storage Efficiency
The classic trade-off for increased storage efficiency is increased processing time to read and
write the data. Centera is no different in this regard. For the benefit of increasing capacity
utilization from 50% using CPM to 75% using CPP, read and write performance suffers (50%).
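To illustrate the capacity figures: with CPM every object is stored as two complete copies, so at
most half of the raw capacity (50%) can hold unique user data, whereas CPP protects data with
parity fragments rather than full copies, which is how it reaches the higher 75% utilization.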
In CPP there are 9 cluster objects created for every user file. Each object creates an overhead
in audit and metadata such as file system entries, database entries and internal indexes.
This means CPP will be significantly slower for garbage collection, regeneration and other
self healing tasks.
Space allocated for storage capacity can be configured to be used for regenerating data after
disk or node failure.
The regeneration buffer provides space that cannot be used for storage because it is allocated
for potential regeneration. The regeneration buffer is expressed in the number of disks per
Cube. For clusters with mixed hardware, CentraStar calculates the actual buffer per Cube
using the largest disk/node size to determine the appropriate buffer size.
EMC has recommended buffer settings for the regeneration buffer per Cube size.

Cube Size (# of nodes)    Recommended Regeneration Buffer Size    Recommended Mode
                          1 node                                  Alert
16                        1 node                                  Alert
24                        1 node                                  Alert
32                        1 node                                  Stop

Centera Features
Content Addressed Storage
With Centera, applications no longer have to track the physical location of stored
information. Instead, Centera creates a unique identifier based on the content, which
applications can use for retrieval.

Centera
EMC Centera is the world's first solution designed to store and provide simple, scalable and
secure access to fixed content. Centera uses the CentraStar software operating environment,
which employs an innovative content addressing system to simplify management and ensure
content uniqueness.
More information will be created in the next two years, most of it in fixed content form, than
in the entire history of humanity. EMC Centera is ideally suited for simple, scalable and
secure storage and retrieval of this tidal wave of fixed content information.

Self Managing
Failure Resistant
Centera is a no single point of failure system. Node level data protection and failover and full
redundancy for power and network connectivity ensure continuous data availability.

Self Healing
The system detects and excludes from the cluster any drives that fail and regenerates their
data to ensure that a fully redundant copy of the data is always available.
No system outage or restore is required.

Dynamic Expansion
Centera is highly scalable and non-disruptively serviceable over the course of its life.
Centera's data management technology automatically detects new nodes when they are
connected and powered on. It allows dynamic expansion in 4 TB increments, expanding to
petabyte level capacity - easily, non-disruptively and cost effectively.

Data Integrity
A permanent checking task runs in the background to ensure the integrity of the data. This
task continuously recalculates the Content Addresses for all objects and compares them to
the original calculations.
It also verifies that mirror copies or segmented fragments exist for every data object. It
regenerates missing copies or segmented fragments and it manages overprotection by
cleaning up redundant data.

Garbage Collection
Garbage Collection (GC) is part of the Centera deletion process and has two distinct phases:
Incremental GC and Full GC.
Incremental GC only runs after a delete or purge operation to remove the following:
• References of blobs that refer to C-Clips that have been deleted or purged since the previous run.
• Unreferenced blobs and C-Clips.
Incremental GC will not always be able to release all deleted capacity, due to nodes being
offline.
Full GC continually runs as a background process on the cluster and processes any
remaining blobs and C-Clips.

Governance/Compliance
Centera supports three models: Basic, Governance Edition (GE) and Compliance Edition Plus
(CE+).
Governance Edition (GE) has replaced Compliance Edition (CE); however, in the CLI this is
still displayed as CE.

Retention Periods
Classification of record types based on a number of features (notably, retention requirements)
is integral to any large-scale records management program. Data stored in Centera has a
configurable retention period which is enforced and enables organizations to impose policies
around records retention.
The retention period is the time that a data object has to be stored before an application is
allowed to delete it on a compliant Centera.
Each data object stored on a Centera has a retention period. The value of the retention period
is either given to the data object by the SDK before it is stored, or is the default value set on
the cluster.
A Centera Basic model does not enforce retention periods and thus allows applications to
delete data regardless of its retention period.
When the retention period of the stored data expires, applications will be allowed to delete
the data on CE and CE+ models. (A CE model allows a privileged delete if the data has not
yet expired, refer to the Centera API Reference Guide, P/N 069001185 for more information.)
The data will not be deleted automatically. It is the responsibility of the application or end
user to delete it.
If the C-Clip is stored with a specific retention period, this period cannot be changed without
changing the C-Clip. The Centera Programmer's Guide, P/N 069001127, and Centera API
Reference Guide, P/N 069001185, contain more information on how to set retention periods
using the SDK.

Retention Classes
Retention classes simplify management of complex retention schedules by allowing a logical
reference, rather than a discrete retention interval, to be assigned to each electronic record.
This capability makes managing and updating complex record series easier and more
efficient.

Audited Delete
Audited delete allows enterprises to comply with strict European Union and US privacy
laws. With an audited delete, administrators can initiate a highly controlled and audited
removal of information which is still under retention. The audit information can be retrieved
by an application using the Query functionality.

Data Shredding
Dispose of electronic records no longer required by the organization for legal or regulatory
reasons, in accordance with government regulations.
Once an object's retention period has expired and the application has deleted or purged the
C-Clip, the Centera data shredding process overwrites each object to ensure it is
irrecoverable, in accordance with government regulations.
Retention periods enable an audited delete of data, which helps enterprises comply with strict
European Union and US laws. With an audited delete, administrators can initiate a highly
controlled and audited removal of information protected under a retention period.

Content Protection
An important characteristic of the CentraStar architecture is that there is a fixed overhead for
any given transaction regardless of the size of the data object that is being transferred. Due to
this overhead, throughput is highly dependent on the size of the data written to a Centera.
To assure continuous data availability and allow self healing, Centera uses Content
Protection Mirrored (CPM) or Content Protection Parity (CPP) to protect all data on the cluster.
Data copies or fragments are stored on different nodes, thus ensuring data redundancy in the
event of disk or node failure.

Content Protection Mirrored (CPM)


Mirroring (CPM) is the process whereby each stored object is copied to an additional node in
a Centera cluster. Each node is connected to a separate power rail ensuring that at least one
copy of the data will always be available in the event of a disk, node or power failure.
Mirroring also provides the best performance in a multi-user system in that the least-loaded
node can be selected for data retrieval. CPM provides faster data regeneration and improves
performance during normal operations.
For a typical user object stored on a cluster with CPM, 1 user object = 4 cluster objects (2
copies of the user object being stored (mirrored) and 2 copies of the CDF).
CPP should not be used for average file sizes below 50 KB. CPP provides a maximum
capacity gain from 100 KB onwards.

Content Protection Parity (CPP)


Parity (CPP) is a more space efficient way to store data. Whereas mirroring requires a 100%
overhead of disk space for each stored object, the additional disk space for CPP is only 1/6 of
the object size. An access node splits data that needs to be stored into 6 segments and exports
these segments to different storage nodes in the same cluster.
It calculates a parity fragment from the stored fragments and stores that as the 7th data
segment on yet another node. This provides the ability to reconstruct the object in the event
of data loss of any 1 of the 7 fragments.
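As a rough illustration with hypothetical figures: a 6 MB object stored with CPP is split into six
1 MB fragments plus one 1 MB parity fragment, so about 7 MB of raw capacity is consumed (roughly
17% overhead), whereas the same object stored with CPM would consume about 12 MB (100% overhead).
The CDF copies add a small further overhead in both cases.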
The overhead for CPP is larger in terms of the number of objects. The performance of CPP is
50% of the performance when CPM is used. EMC recommends using CPP only for files > 250
KB.
For a typical user object stored on a cluster with CPP, 1 user object = 9 cluster objects (7
fragments for the 6+1 parity protection and 2 copies of the CDF).
When CPP is enabled, not all files will be using CPP. CDFs are always written in CPM and
small files are also written in CPM.
If the protection scheme is set to 'Parity' and there are not enough nodes available to write
the 7 fragments to 7 different storage nodes, an alternative scheme can be selected: a file will
either be written in CPM or an error is returned.
The default alternate scheme is no fallback; this can be configured.

Regeneration
Regeneration prevents data loss caused by disk and node failures, and provides self-healing
functionality. It relies on the existence of mirrored copies (Content Protection Mirrored, CPM)
or fragmented segments (Content Protection Parity, CPP) of the data on different nodes in the
cluster.

Regeneration Levels
There are two levels of regeneration and two mechanisms to trigger regeneration tasks (node
and disk). Disk regeneration, for instance, occurs when a node detects a disk failure. The
node informs the other nodes in the cluster. This triggers a regeneration task on every node
that has a copy of the objects stored on the failed disk.
Node level regeneration is triggered during a periodic check, when the system cannot reach a
node for more than two hours (except on a 4 node system). This triggers a regeneration task
on every node that has a copy of the objects stored on the failed node.

Centera Alerts
CentraStar actively monitors the health, capacity and performance of a Centera using sensors.
Sensors monitor individual nodes and clusters on a Centera. For every sensor, rules exist
which decide when an alert should be generated.


Each sensor has a value and a threshold level. The value is the data that it records by
continuously monitoring Centera components such as nodes, disks, etc. When the sensor
records a value that is greater than the defined threshold for the component, a degradation
alert is sent.
When the value falls back into the normal range, an improvement alert is sent. An alert is a
message in XML format with information on the cluster's state to indicate a warning, error or
critical situation.
Each alert has a unique symptom code associated with it.

Severity Levels
There are several severity levels based on the value recorded. For example, the CPU
Temperature could rise to 86 degrees Celsius or higher. At this temperature, the node is in
serious danger of going offline. An alert is generated with a higher severity level.
Severity Level    Meaning

Normal      The value of the sensor is within the normal range of operations. This is used for improvement messages only, which are displayed in the Alert History table.

Fatal       There is a serious problem and immediate action is required. Depending on the alert, the user may be required to take action or EMC Customer Support will fix the problem.

Critical    There is a problem with a Centera component. Action is required. Depending on the alert, the user may be required to take action or EMC Customer Support will fix the problem.

Major       There is a potential problem with a Centera component. Refer to the note below.

Minor       There is a problem with a Centera component. Refer to the note below.

Harmless    Notifies the user that there is a problem with a component but that this is not serious. Refer to the note below.

In many cases, due to Centera's self healing and regeneration abilities, no action is required on
the part of the user. If ConnectEMC is enabled, alerts are automatically sent to the EMC
Customer Support center where it is determined whether intervention by an EMC engineer is
necessary.
Severity levels indicate the seriousness of the problem.

Sensor Levels
Sensors can exist on the node level and the cluster level.
• Node-level sensors: Exist on the individual nodes, and record information about the particular node.
• Cluster-level sensors: Exist on the Sensor Principal node, and record information about the cluster and aggregate information obtained from the node-level sensors (such as taking the sum of the values of all the node-level sensors). Alert events triggered by a cluster level sensor can be monitored via different channels (ConnectEMC, SNMP, and MoPI) and can be displayed in EMC ControlCenter.

Alert Message
An alert is a message in XML format with information on the cluster's state to indicate a
warning, error or critical situation.
The following alert messages can be received.
Each new alert triggered by Centera is available via several monitoring channels in real time,
provided they are enabled and configured on the cluster. There are no monitoring restrictions
on any Compliance model.
A sensor has a state and a value. The value is the information it records by querying the
Centera. A sensor can be in one of the following states.
State           Description

Initializing    The sensor has not recorded any values yet.

OK              The value of the sensor is within the normal range of operations. It is used for improvement messages only.

Warning         The value of the sensor indicates that something might be wrong or could go wrong in the near future.

Error           Something is wrong and some intervention is needed to correct the situation.

Critical        Something is seriously wrong and urgent action is required to correct the situation.

Sensors can live at two levels:
• Node-level sensors exist on the individual nodes, and record information about the node they live on.
• Cluster-level sensors exist on the Sensor Principal node, and record information about the cluster, or aggregate information obtained from the node-level sensors (such as taking the sum of the values of all the node-level sensors).
For every sensor, rules exist which define the sensor state, based on the value of the sensor.
For example, the rule for the CPUTemperature sensor at cluster level states that the state of
the sensor goes to WARNING if the value of the sensor exceeds 66 degrees Celsius. Every
sensor alert has a unique identifying symptom code.
The Symptom Code identifies the type of alert and level.
For example, if the CPU temperature exceeds 66 degrees Celsius then an alert is received
with a unique identifying symptom code.
If the temperature continues to rise to above 86 degrees Celsius, the same sensor sends an
alert but with a unique symptom code to indicate that the severity level has changed.


Access Control
Centera Security
Connecting a Centera to a customer's network requires the following security measures:
• Network Security: Centera should be considered as a storage device accessible by a limited number of application servers and system administrators. It is not a generic device on the network, visible and accessible by any other network device.
• Access Security: The Centera Access Security model provides a way to authenticate applications and authorize their access to a Centera when they request a connection. This prevents unauthorized applications from storing data on or retrieving data from a Centera. The security model operates on the application level, not on the level of individual end users. Nodes can be locked by system administrators, which means only they can make connections to the cluster for manageability purposes.
Refer to Access Security Model for more information.

Access Security Model


The Centera access security model prevents unauthorized applications from storing data on
or retrieving data from a Centera.
The security model operates on the application level, not on the level of individual end users.
A Centera authenticates applications before giving them access with application specific
permissions.

Pools and Profiles


The Centera security model is based on the concept of pools and application profiles.


Pools
Pools are used to segregate data into logical groups. Data is clearly segregated and the
administrator can decide the type of operations that applications can perform on the data.
Pools provide the following benefits:
• Data Segregation: C-Clips from one application can be kept apart from C-Clips from other applications. It is possible to prevent one application from accessing C-Clips written by another application.
• Access Control: The administrator can determine the operations an application can perform on a set of specific pools.
• Selective Replication/Restore: The administrator can choose a subset of pools that must be replicated and/or restored.
• Selective Query: Only the C-Clips within a specific pool can be queried.
• Enhanced Topologies: The number of topologies supported by replication and restore has been enhanced.
• Capacity Reporting: Capacity usage can be more easily analyzed and reported.
There are three different types of pools:
• Cluster Pool: This is a cluster level pool that contains every single C-Clip in the cluster. The cluster pool allows operations to work across the boundaries of pools.
• Application Pool: This is an application level pool. At most 98 custom application pools can be created to implement pool-bound data and functions. A custom application pool is identifiable by a unique ID generated by CentraStar. A display name can also be given to the pool and can be changed; the pool ID cannot be modified.
• Default Pool: This is an application level pool that contains every C-Clip that is not contained in a custom application pool.
Every C-Clip in the cluster belongs to exactly two pools: the cluster pool and one application
level pool (either the default pool or a custom application pool). All C-Clips are members of
the cluster pool by definition. The same pool, identified by a unique ID, cannot exist on
different clusters.
Every pool can contain information or be empty. If the cluster level pool is empty, the other
pools will also be empty.

Application Profiles
Application profiles are a means to enforce authentication and authorization. The
administrator can determine which applications have access to the cluster and what
operations they can perform. There are different types of application profile:
• Access Profiles represent the application that is accessing the pool. Pools grant capabilities to the profile. Each profile has a default home pool assigned to it when it is created. An application can be given some or all of the following rights: read, write, delete, privileged-delete, c-clip copy, purge, query, exist, monitor and profile-driven metadata.
• Anonymous Profiles: The Anonymous profile is a special kind of application profile and differs from other profiles in that it always exists and cannot be deleted. It can however be disabled. It does not have a profile secret. An application can connect to Centera without specifying an Access Profile; it then automatically connects using the anonymous profile. The anonymous profile is enabled by default and can be disabled by the system operator. EMC recommends disabling the anonymous profile to enforce the use of Access Profiles.
• Cluster Profiles differ from access profiles in the following ways:
  - The home pool of a cluster profile is the cluster pool.
  - An application that is using a cluster profile cannot perform normal writes. This means that new C-Clips cannot be created, but existing C-Clips on an external medium can be put on the cluster using the 'raw write' operations. If an application using a cluster profile wants to create new C-Clips on the cluster, it must simultaneously use a cluster profile and an access profile.
If the cluster pool grants the delete capability to a cluster profile, the application using this
profile will be able to delete any C-Clip regardless of the pool in which it resides and the
capabilities that are set on it.
Cluster profiles are mainly intended to be used by backup/restore style operation. If an
application wants to write C-Clips to a Centera, it must use an access profile at the same time.
Pools grant access rights (capabilities) to applications using application profiles. Access
profiles are assigned to applications by the system administrator.


Capabilities               Definition

Write (w)                  Write to a C-Clip. WriteClip access must be enabled to write. 'Enabled' or 'Disabled'.

Read (r)                   Read a C-Clip. 'Enabled' or 'Disabled'.

Delete (d)                 Delete C-Clips. 'Enabled' or 'Disabled'.

Exist (e)                  Check for the existence of a specified C-Clip. 'Enabled' or 'Disabled'.

Privileged Delete (D)      Delete all copies of a C-Clip; can overrule retention periods. 'Enabled' or 'Disabled'.

Query (q)                  Query the contents of a pool. When set to 'Enabled', C-Clips can be searched for in the pool using a time based query. 'Enabled' or 'Disabled'.

Clip-Copy (c)              Copy a C-Clip. 'Enabled' or 'Disabled'. This capability is needed for replication and restore operations.

Purge (p)                  Remove all traces of a C-Clip from the cluster. 'Enabled' or 'Disabled'. Purge is only available for cluster level profiles.

Monitor (m)                Retrieve statistics concerning Centera.

Profile-Driven Metadata    Supports storing/retrieving per-profile metadata, automatically added to the CDF. 'Enabled' or 'Disabled'.

Authenticated operations are performed on all C-Clips in the cluster. Enter the capabilities in
a single string to assign them to a profile; for example, rw enables read and write, and rwd
enables read, write and delete.
Access to a pool can be controlled at different levels: Cluster Level (Cluster Mask), Pool Level
(Pool Mask) and Profile Level (ACL or Access Control List).
Access Control Levels    Use

Cluster Level (Cluster Mask)    The Cluster Authorization Mask represents cluster level access control. For example, if the cluster mask denies read access, no application will be able to perform read operations on any pool. An application using a cluster profile cannot perform write operations. The settings of the root profile in earlier Centera releases are migrated to the cluster mask.

Pool Level (Pool Mask)    The Pool Authorization Mask controls access on the level of the pool. There are two types of pools (Application and Cluster), so there are two types of pool level masks. The Application Pool Mask enables specific operations on the application pool to be blocked. If the application pool mask denies read access, no application can perform read operations on that application pool.

Profile Level (ACL or Access Control List)    The ACL grants specific access rights to applications. For example, an application will not be able to perform an operation on a pool unless the pool has explicitly granted access to the corresponding profile.

To understand the concepts of controlling access rights at different levels, the analogy of a file
system is used below.

Level    File System Level    Use Case Example

Cluster Level (Cluster Mask)    File System    The delete operation can be denied by making the file system as a whole read only. No user has delete access to any file within any directory.

Pool Level (Pool Mask)    Directory    A particular directory can be made read only. No user will be able to delete any file within this directory, even if the ACL would allow it. However, the user would be able to delete files in other directories.

Profile Level (ACL or Access Control List)    ACLs on Directory    By default, no user has access to any directory, even if the two previous settings did not specifically disallow any operation. The user has delete access on files within a directory if the ACL explicitly allows it.

Pool Membership
Pool membership is partly based on the value of fields in the CDF (C-Clip Descriptor File).
The CDF is immutable, which means that a single C-Clip belonging to a pool cannot be
explicitly moved. All C-Clips created with a particular access profile belong to that profile's
default home pool.

C-Clip
A C-Clip is a package containing the user's data and associated metadata. When a user saves
a file to Centera, the system calculates a unique Content Address (CA) for the data.
It then stores this address in a new XML file, the C-Clip Descriptor File (CDF), together
with application-specific metadata.
The system then calculates another CA for the CDF and stores the CDF and the user's file in
the complete C-Clip package on the cluster. The CA for the CDF is a handle for the C-Clip
that the system uses to retrieve the user's data.
When a C-Clip is deleted at the same time as it is replicated, it is possible that the blobs
associated with the C-Clip are replicated without the C-Clip. This only happens when the
C-Clip is deleted immediately after replication has built the list of blobs to replicate. The
replicated orphan blobs will be cleaned up by Full GC on the replica cluster.

Legacy Data Migration


After a cluster has been upgraded, all C-Clips will be members of the default pool. The
system administrator must configure the pools and profiles using CLI commands (Mapping
Commands) in Centera Viewer.

Legacy Profiles
Profiles that were created in earlier releases will automatically be associated with the default
pool. The settings of the root profile are migrated to the cluster mask. This enables a
transparent upgrade.
An application using a certain access profile will have the same access rights after the
upgrade.
During the upgrade, no changes should be made to the pool or profile configuration.
Changes can cause inconsistent behavior.

Immutable Pools
The CDF is immutable; therefore the membership of a single C-Clip cannot be changed
explicitly. CentraStar generates a unique ID for each pool created, which means that no other
pool can have the same ID.
To work around this and enable replication, there are CLI commands that enable pools and
their associated profiles to be exported to and imported on different clusters.


Disaster Recovery
Replication and Restore
This section explains how to prepare for disaster recovery by replicating and when necessary
restoring data to other clusters, ensuring redundant copies of data are always available.

Replication
Replication complements Content Protection Mirrored (CPM) and Content Protection Parity
(CPP) by putting copies of data in geographically separated sites. If a problem renders an
entire cluster inoperable, the target cluster can support the application server until the
problem is fixed.
To copy data from one cluster to another, Centera can perform a restore operation. Restores
can be either full (all data) or partial (data stored between a specified start and end date).

Restore
Restore is a single operation that restores or copies data from a source cluster to
a target cluster and is only performed as needed by the system operator.
Restores can be either Full (all data) or Partial (data stored between a specified
start and end date).
EMC recommends setting up replication before starting the application. Centera will only
replicate data that is written from the moment replication is enabled. To copy data that was
written before replication was enabled, use Restore.
To enable replication:
• Port 3218 must be available through a firewall or proxy for UDP and TCP for all replication paths between the source and target cluster. Port 3682 can be enabled as well to allow remote manageability connections (CV/CLI). This does not apply to CE+ models.
• A valid EMC replication license is required to enable replication. Replication can only be set up by qualified EMC service personnel.
• To guarantee authorized access to the target cluster, EMC recommends using an access profile in a replication setup.

Replication
To support multi-cluster failover, applications must be able to access both the source and the
target cluster. To support unidirectional replication the source cluster must be able to access
the target cluster.
To support bidirectional replication the source cluster must be able to access the target cluster
and the target cluster must be able to access the source cluster. Correct working of replication
cannot be guaranteed if there are obstacles in the network infrastructure.
EMC recommends using third-party network traffic shaper devices to control the network
consumption of the different applications that are using the network.
Replicating a 2.4 cluster to a 1.2 cluster is not supported. Replicating from a 2.4 cluster to a
2.0.1 or earlier version cluster is not supported if storage strategy performance is set. The
blobs with naming scheme GUID-MD5 (if storage strategy performance is set) or MD5-GUID
(if storage strategy capacity and Content Address Collision Avoidance is set) will block the
replication.

Connection Pooling
When setting up replication the IP address(es) or Fully Qualified Domain Name(s) of the
target cluster have to be given. A node with the access role of the source cluster tries to make
a connection to one of the nodes with the access role of the target cluster using TCP/IP.
Once a connection has been established this connection is kept open for future replication
transactions. If the connection is not used for a period of 2 minutes, it is closed automatically.

Replication of Delete
To support global delete in a replication setup, the following requirements have to be met:
• The replication of deletes must be enabled when setting up replication.
• The delete capability has to be enabled in the application profile used for replication.
• If applicable, the privileged-delete capability has to be enabled in the application profile used for replication (only for GE).
• The retention classes on both clusters have to be identical.
• CentraStar release 2.3 or higher has to be installed on both clusters.
Replicated Compliance clusters need to be time synchronized when using the Global Delete
option.

Selective Replication of Pools


To support selective data replication, pools can be replicated to other clusters. It is
possible to replicate only a selective set of pools from one cluster to another using the set
replication command in the CLI. The corresponding pool must exist on the target cluster;
replication/restore is automatically paused if a C-Clip is replicated for which the pool does
not exist on the target cluster.
Newly created C-Clips can be added to the running replication session. C-Clips that already
exist in this pool will not automatically be replicated.

Replication of Profiles
When replicating pools to another cluster, it is also possible to replicate their associated
profiles. The export/import poolprofilesetup command enables the user to export pools
and/or profiles to another cluster.

Problem Detection
Centera can detect problems and display an alert message to the user. For example,
replication is paused if the replication rate is greater than the ingest rate (rate at which data
queuing to be replicated is processed). An alert message is sent to the user and also to EMC
Customer Support. In many cases due to Centera self healing and regeneration abilities, no
action is required by the user.
It is the responsibility of the system operator to make sure that the target cluster has enough
disk space. If the target cluster does not have enough disk space to continue the replication
process, replication is paused automatically.


Once there is enough disk space, the system operator has to resume the replication process
using the CLI.

Replication and other Product Features


Protection Schemes
The source and the target cluster may have different configurations that affect the protection
scheme they use for data storage. Data stored with a particular protection scheme (Content
Protection Mirrored or Content Protection Parity) will therefore not necessarily be replicated
with the same protection scheme on the target cluster.
The target cluster does not even have to support the same protection scheme as was used to
store the data on the source cluster.

Authorization and Authentication


To authenticate access to a target cluster in a replication setup, EMC recommends creating an
access profile on the target cluster to be used for replication only. The Clip-Copy capability
for this dedicated replication profile must be enabled for each pool that is being replicated. To
support global deletes, the (privileged) delete capability must be enabled as well.

Replication and Restore Security


Replication transactions are not encrypted. Secure network topologies such as a Virtual
Private Network (VPN) should be implemented to guarantee a secure connection between
the source and target cluster.

Compliance/Retention
The interpretation of retention periods of source and replicated data can differ if the source
cluster and the target cluster have a different Centera Compliance model. To guarantee
Compliance on both clusters, the two clusters must have the same Compliance model, and the
retention classes and privileged-delete settings must be defined identically on both clusters.

Backwards Compatibility


Replication is supported between clusters with CentraStar version 2.0 SP1 or higher.

Statistics
System operators can view replication details such as: replication start time, address of the
target cluster, replication state, performance, and progress.

Replication Topologies
Centera provides application failover access through replication. The following replication
topologies are supported:
• Unidirectional: Data written to a source cluster is automatically replicated to a target cluster. In case of disaster, the application server will fail over to the target cluster. Failover means that no data is lost, due to Centera's self healing abilities.
• Unidirectional (Hot Standby): Data written to a source cluster is automatically replicated to a target cluster. In case of disaster, there are two application servers available. Data written to cluster A will automatically be replicated to cluster B. In case of a disaster, application server 2 will fail over to cluster B. When cluster A and application server 1 are available again, data from cluster B has to be restored to cluster A before application server 1 starts reading and writing data to cluster A. The restore operation only has to write the data that is not stored on the source cluster. The restore guarantees that both clusters contain the same data.
• Bidirectional: Data written to a source cluster is automatically replicated to a target cluster. In case of disaster, there are two application servers available. Data written to cluster A will automatically be replicated to cluster B and data written to cluster B will automatically be replicated to cluster A. In case of a disaster, cluster A will fail over to cluster B for read operations and cluster B will fail over to cluster A. There is no need for a restore after a disaster.
• Chain: Data written to a source cluster is automatically replicated to a target cluster. The target cluster then replicates this data to a third cluster.
• Incoming Star: A cluster is a destination for multiple replication processes. A maximum of three clusters can replicate to a source cluster.


Restore
To copy data from one cluster to another, Centera offers restore in addition to replication.
Replication is an ongoing process that starts after replication has been set up and continues
unless replication is paused by the system operator or by the system. Restore is a single
operation that copies data from a source cluster to a target cluster and is only performed as
needed by the system operator.

Use Case
A use case of restore is to copy data to the target cluster that was stored on the source cluster
before replication was set up. After the restore, the target cluster will contain all data that has
been stored on the source cluster, not only the data that was stored after replication was
set up.
A restore operation can be performed from any source cluster to any target cluster. There is
no need for a replication setup between the two clusters.
Only one restore operation can run at a time on a cluster. This operation can be
paused/resumed or cancelled. The restore ends automatically when all data has been copied
to the target cluster.

Functionality
The restore functionality is basically the same as the replication functionality. The same
details apply for restore as for replication in terms of application profiles, authorization,
connection pooling, detection of a full cluster, protection schemes and monitoring.

Full and Partial Restore


The restore function has two modes: full and partial. A full restore copies all data from the
source cluster to a target cluster up to the time the restore started. A partial restore only
copies data that was written to the source cluster from a given start date until a given end
date. The given start and end dates refer to the dates on which the data arrived at the source
cluster, not to the creation dates of the data.


Centera Maintenance
ConnectEMC
ConnectEMC allows Centera to communicate with the EMC Customer Support Center via
email. ConnectEMC sends email messages to the EMC Customer Support Center via the
customer SMTP infrastructure or via a customer workstation with EMC OnAlert™ installed.
EMC OnAlert is an application that provides remote support functionality to networked
EMC devices and includes Automatic Error Reporting and Remote Technical Support. Refer
to the Centera Online Help, P/N 300-002-547, for more information on configuring SMTP
and to the EMC OnAlert Product Guide, P/N 300-999-378, for more information on EMC
OnAlert and how to install it. If an OnAlert station is used, it must run an SMTP server to
accept the health reports and alert messages.
An on-site EMC engineer can enable or disable ConnectEMC. In order to provide optimal
customer service, EMC strongly recommends that all clusters are configured with
ConnectEMC.
ConnectEMC States

Off    ConnectEMC has been disabled. The Centera will not send a message.

On     Centera will send a message on a daily basis and also if an alert event is detected. EMC recommends this setting. This is the default setting.

Upon receipt of the email message, the EMC Customer Support Center decides if it is
necessary to send a Customer Support Engineer to the customer's site. If EMC determines
that the cluster can be accessed remotely, a Customer Support Engineer dials into the Centera
Alert Station or directly into the Centera through a modem connection. Once logged in, the
engineer uses Centera tools to analyze the error and implement recovery procedures.
The health report can also be downloaded by the system operator using the CLI.
Note: On a Compliance Edition Plus Centera, remote service requires written customer
authorization for each instance of modem connection. Upon completion of servicing, EMC
technical support will send an email to the customer confirming that the remote connection
has been terminated.

Centera Service Tiers


Centera hardware maintenance is available on two tiers: Grooming and Premium.

                               Warranty Period    Type of Service    During Warranty Period
Hardware                       2 Years            Grooming           Included
                                                  Premium            Optional*
CentraStar Operating System    2 Years            Maintenance        Included
Optional Software              90 Days            Maintenance        Included

* Maintenance that has to be purchased separately. Contact an EMC sales representative for
pricing. This also applies to post-warranty maintenance.
Note: All Centera software has a single service tier.
For all Centera cluster models, ConnectEMC and a modem setup for remote dial-in are
mandatory. For a Centera Compliance Edition Plus (CE+) model, ConnectEMC is mandatory,
as is a modem setup.
The modem will be connected for service and disconnected when not used for service. If
these requirements are not met, customers have to accept a reduced level of service.


Hardware Service Models


Centera hardware maintenance is available on two tiers: Grooming and Premium.
Hardware Service Grooming Model

Disk                               Email alert to EMC or user contacts EMC
Node                               Email alert to EMC or user contacts EMC
Switch (Performance Impact)        Email alert to EMC or user contacts EMC
Loss of Power Rail (v1 HW only)    Email alerts indicate half the nodes in "off" status or customer contacts EMC reporting "read only" operation.
Loss of ConnectEMC Heartbeat       Not receiving ConnectEMC within 1 hour of expectations or user contacts EMC with complaint.

FRU Repair Process

Disk                                         Replace at grooming visit
Critical Fault (node, switch, power rail)    Next business day response, 5x7 availability
Loss of ConnectEMC Heartbeat                 Dispatch CE to remediate. Next business day response, 5x7 availability

Note: All Centera software has a single service tier.


Configuring Centera
Configuration Steps
Once the tools for the system operator have been installed, the system operator should follow
these procedures to ensure proper access controls to the Centera:
• Disable the anonymous profile.
• Change the password for the admin account using the CLI.
• Grant the cluster mask the maximum capabilities any application should have on the Centera.
• Disable the Purge capability for the cluster mask.
• Lock the cluster when done.
For each new application that is using Centera, the following steps must be followed (a CLI
sketch follows this list):
• Create a pool. Best practice is to set a quota and grant the pool mask the maximum capabilities the user wants any application to have on this pool.
• Disable all capabilities for the default pool using the pool mask.
• Create an access profile. Select the Generate option to generate a strong password. Assign the application pool created in the previous step as the home pool. Assign the capabilities you want the application to have on this pool.
• Choose to download the .pea file and save it to your workstation.
• Have the application use the access profile for authentication to Centera.
• Communicate the access profile to the application administrator by sending the PEA file and save it on the application server.
• Enforce the use of the profile by adding the PEA file location in the connection string of the application.
• Enter the PEA file location in a system variable called PEA_FILE_LOCATION on the application server.
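The per-application steps can be carried out with the create pool and create profile commands shown
in the examples later in this section. The following minimal sketch illustrates that flow; the pool
name, profile name, granted rights and pathname are illustrative only, and the prompts shown are a
subset that may differ from the actual CLI dialog (refer to the CLI Reference Guide for the exact
syntax).

Example
Config# create pool ArchiveApp
Config# create profile ArchiveApp_Profile
Profile Secret [generate]:
Home Pool [default]: ArchiveApp
Granted Rights for the Profile in the Home Pool [rdqeDcw]: rwqe
Establish a Pool Entry Authorization for application use? (yes, no) [n]: Yes
Enter Pool Entry Authorization creation information: C:\temp\ArchiveApp.pea

The generated .pea file is then copied to the application server and referenced in the application's
connection string.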


Add-on 1 Replication Target


The customer wants to set up replication for one or more of these applications to another
cluster.
Refer to the procedure in Centera Online Help on how to set up replication for one or more
pools.

Add-on 2 Replication Target


If the customer wants the new cluster to be a replication target for another cluster, the
following steps must be followed:
• Import the pool definition from the source cluster.
• Define a profile to be used for replication with Clip-Copy as the only capability for this pool.
• Import the profile definition from the source cluster for failover.
• Download the PEA file and follow the replication/failover procedure.

Upgrade with Data in the Default Pool


If the customer has one or more applications using the cluster and already has data in the
default pool, the following steps should be followed:
• If the application already uses a profile, update the profile instead of creating a new one (there is no need to re-send the .pea file).
• If not, execute the entire procedure.
• Enable the default pool or set the pool mask to allow access.
• For each application, grant the Read capability to the default pool (and other capabilities if needed).
• If the customer does not want to migrate existing data, or the data cannot be migrated (pre-2.3 SDK data), the procedure is finished; if not, continue.
• Create a mapping to migrate default pool data written by a certain profile to the new pool for each different application.

• Start the migration task.
• When the migration task is completed, if all data is migrated (no C-Clips written with a pre-2.3 SDK) you can disable the default pool. If some data remains, then you still need to allow access to it.

Copy all pools and profiles to another cluster


The purpose of this procedure is to explain how to copy all pools and their associated profiles
to another cluster.
A pool is regarded as identical only if its ID is the same. Manually re-creating a pool does not
generate the same pool ID, which is unique for each pool; hence replication/restore will fail.
Use the Export/Import commands to copy a pool to another cluster.
It is assumed that the necessary pools and profiles have already been created on the source
cluster.
1. Launch the CLI and connect to the source cluster.
2. Determine which pools and profiles to copy using the show pool list command.
3. Export pool and profile information of the pools to copy to a file using the export
poolprofilesetup command.
4. Export the complete setup when prompted by entering Yes.
5. Enter the pathname where the exported pools and profile information should be
saved.
6. Launch another CLI session and connect to the target cluster.
7. Import the pool and profile information of the pools that will be replicated using the
import poolprofilesetup command. Enter the location and name of the file (local
machine) as given in step 5.
Refer to the CLI Reference Guide for the complete version of each CLI command.
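The following transcript is a minimal sketch of this procedure. The export file name and pathname
are illustrative, and the prompt wording is indicative only; the export poolprofilesetup and import
poolprofilesetup commands prompt interactively, so refer to the CLI Reference Guide for the exact
dialog.

Example
(on the source cluster)
Config# show pool list
Config# export poolprofilesetup
Export the complete setup? (yes, no) [no]: Yes
Enter the pathname for the export file: C:\temp\PoolProfileSetup.xml

(on the target cluster, in a second CLI session)
Config# import poolprofilesetup
Enter the location and name of the file: C:\temp\PoolProfileSetup.xml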


Copy a Profile Definition to Another Cluster


The purpose of this procedure is to explain how to copy one or more profiles to another
cluster.
It is assumed that the necessary pools and profiles have already been created on the source
cluster.
1. Launch the CLI and connect to the source cluster.
2. Determine which profiles to copy using the show profile list command.
3. Export pool and profile information of the pools to copy to a file using the export poolprofilesetup command.
4. Do not export the full setup when prompted. Export based on profiles.
5. Do not export all profiles. Enter the name(s) of the profiles to copy and a location and name for the generated file to be saved (local machine).
6. Launch another CLI session and connect to the target cluster.
7. Import the profile information of the profiles using the import poolprofilesetup command. Enter the location and name of the file (local machine) as given in step 5.

Refer to the CLI Reference Guide for the complete version of each CLI command.


Copy a Pool to Another Cluster


The purpose of this procedure is to explain how to copy one or more pools to another cluster.
A pool is regarded as identical only if its ID is the same. Manually re-creating a pool does not
generate the same pool ID, which is unique for each pool; hence replication/restore will fail.
Use the Export/Import commands to copy a pool to another cluster.
It is assumed that the necessary pools and profiles have already been created on the source
cluster.
1. Launch the CLI and connect to the source cluster.
2. Determine which pools to copy using the show pool list command.
3. Export pool and profile information of the pools to copy to a file using the export poolprofilesetup command.
4. Do not export the full setup when prompted. Export based on pools.
5. Do not export all pools. Enter the names of the pools to copy and a location and name for the generated file to be saved (local machine).
6. Launch another CLI session and connect to the target cluster.
7. Import the profile information of the pools using the import poolprofilesetup command. Enter the location and name of the file (local machine) as given in step 5.

Refer to the CLI Reference Guide for the complete version of each CLI command.


Create a .pea File


The purpose of this example is to explain how to create a .pea (Pool Entry Authorization) file.
Application authentication is the process whereby the application has to provide
authentication information to a Centera before access is granted. This information is created
in a .pea file. The .pea file contains the following:
? Username: Name identifying the application that wants to access the Centera.
? Secret: Password. Each Username has a secret or password that is used to
authenticate the application.
When replicating pools and profiles, the pools and profiles created on the source cluster must
also exist on the target cluster to support failover.
In the example below, a .pea file is created and saved as C:\temp\Finance.pea.

Example
Config# create profile Finance
Profile Secret [generate]: C:\temp\Secret.txt
Granted Rights for the Profile in the Home Pool [rdqeDcw]:
Establish a Pool Entry Authorization for application use? (yes, no) [n]: Yes
Enter Pool Entry Authorization creation information: C:\temp\Finance.pea

1. Enter Pool Entry Authorization creation information. Enter the pathname for a file
where the specified file is accessible on the local system and where the Pool Entry
Authorization file will be saved. If the file exists, the user is prompted to overwrite the file.
The update profile command can also be used to update the profile secret. This command
can also be used to update a profile's access rights while leaving the profile secret unchanged.


Create a Profile Secret


The purpose of this example is to explain how to create a profile secret or password by
entering it into a .TXT file and saving it to the local drive.
There are two ways to create a profile secret:
1. Generate Secret: When prompted to enter a profile secret, press Enter. This automatically creates a strong secret, so the user does not have to create it manually.
2. Create a File: When prompted to enter a profile secret, enter the name of the file that holds the profile secret. This file holds a human readable password. Enter File first, for example: File C:\DirectoryName\FileName.txt.

Example
Config# create profile Centera
Profile Secret [generate]: File C:\temp\Secret.txt
Enable Profile? (yes, no) [no]: Y
Monitor Capability? (yes, no) [no]: Y
Profile-Metadata Capability? (yes, no) [no]: Y
Profile Type (access, cluster) [access]: access
Home Pool [default]:
Granted Rights for the Profile in the Home Pool [rdqeDcw]: rwd
Issue the command?
(yes, no) [no]: n
The update profile command can also be used to update the profile secret. This command
can also be used to update a profile's access rights while leaving the profile secret unchanged.

Enable Replication of Delete


The purpose of this example is to explain how to enable replication of delete. Replication of
delete means that if a file is deleted on the source cluster, then this delete operation will also
be performed on the target cluster.
Enable replication using the set cluster replication command.
Delete has to be enabled on the source and target cluster to delete C-Clips. If the
configuration settings are different between the source and target cluster, then delete will fail
and replication will be paused.

Example
Config# set cluster replication
Replication Enabled? (yes, no) [no]: Y
Replication Address : 10.69.136.126:3218,10.69.136.127:3218
Replicate Delete? (yes, no) [no]: Y
Replicate incoming replicated Objects? (yes, no) [yes]: N
Replicate all Pools? (yes, no) [yes]: N
Pools to Replicate: action
Profile Name: New_York
Location of .pea file: C:\Console1
Issue the command?
(yes, no) [no]: Y


Host Application Data


The purpose of this procedure is to explain how data from multiple applications can be
logically segregated on the same Centera.
1. Launch the CLI and connect to the source cluster.
2. Create separate pools for each application using the create pool command. Assign rights (capabilities) to the pool and set the quota.
3. Create access profiles for each application using the create profile command. An access profile is bound to its home pool. All C-Clips created with this profile belong to the same pool as the profile. Select the home pool for the profile and assign relevant access rights to the profile.
4. Enter Pool Entry Authorization creation information. Enter the pathname for a file where the specified file is accessible on the local system and where the Pool Entry Authorization file will be saved. If the file exists, the user is prompted to overwrite the file.
5. View the pools and profiles previously created using the show pool detail command.

The following example is used to explain the above procedure.

Host Application Data Example


This example explains how the system administrator can use pools to ensure data from
multiple applications can be logically segregated on a Centera.
In this example, three customers host their data on the same Centera. Their data is logically
separated using pools. Three pools are created, each of which contains individual customer
data.
Centera 1
Config# create pool Customer1
Config# create profile Customer1_App
Establish a pool entry authorization for application use? (yes, no) [no]: Y
Enter pool authorization creation information: C:\PEA1
Config# set grants Customer1 Customer1_App
Config# create pool Customer2
Config# create profile Customer2_App
Establish a pool entry authorization for application use? (yes, no) [no]: Y
Enter pool authorization creation information: C:\PEA2
Config# set grants Customer2 Customer2_App
Config# create pool Customer3
Config# create profile Customer3_App
Establish a pool entry authorization for application use? (yes, no) [no]: Y
Enter pool authorization creation information: C:\PEA3
Config# set grants Customer3 Customer3_App
Config# show pool detail

Refer to the CLI Reference Guide for the complete version of each CLI command.

Merge Two or More .pea Files


The purpose of this procedure is to explain how to merge two or more .pea files generated on
different clusters into one .pea file to support replication and application failover:
1.

Launch a text editor on your local machine and open the .pea file of the profile that
was generated on the source cluster. The content of this file should be similar to:

------------------------------------------------------------------------
<.pea version=1.0.0>
<defaultkey name=App1>
<credential id=csp1.secret enc=base64>
MyApplicationSecret
</credential>
</defaultkey>
<key type=cluster id=12345-12345-12345-12345 name=App1>
<credential id=csp1.secret enc=base64>
MySpecialApplicationSecretForClusterA
</credential>
</key>
</.pea>

2. Open the .pea file that was generated on the target cluster for the same profile and
copy the <key>-section with the profile-cluster information from this file into the first
one:
------------------------------------------------------------------------
<key type=cluster id=56789-56789-56789-56789 name=App1>
<credential id=csp2.secret enc=base64>
MySpecialApplicationSecretForClusterB
</credential>
</key>

3. Repeat step 2 for each .pea file that has been created for the same profile on a different cluster.
4. Close all .pea files and save the concatenated .pea file. Quit the text editor.
5. Copy the concatenated .pea file to the application server and set the environment variable CENTERA_PEA_LOCATION to point to this file. For more information on initializing PAI modules and parsing of .pea files refer to the Centera Programmer's Guide, P/N 069001127.

Migrating Legacy Data


The purpose of this procedure is to explain how to migrate legacy data from the default pool
to a custom application pool. In some cases, applications need access to legacy data held in
the default pool to function correctly.
C-Clips written using version 2.2 SP2 or earlier will always belong to the default pool. It is
not possible to migrate these C-Clips to an application pool.
A migration task does however exist for C-Clips written using the SDK 2.3 or higher. These
C-Clips can be moved into an application pool, provided that an appropriate access profile
was used when writing the C-Clip.
1. Launch the CLI and connect to the source cluster.
2. Create a mapping between the profile used to write C-Clips to the default pool and a pool using the create poolmapping command. The mappings are placed on the scratchpad, which is used to prepare mappings until the user has the correct mappings to start migration.
3. View the pool mapping to check that it is correct using the show poolmapping command.
4. Once the mappings are in place, run a migration task using the migrate poolmapping start command. This command copies all scratchpad mappings to the active mappings and re-starts the migration process.
5. Display the migration task using the show pool migration command.

For each profile with legacy data in the default pool, perform the above procedure.
The following example is used to explain the above procedure.

Migrating Legacy Data Example


In this example, a mapping is made between a pool and a profile to migrate legacy data from
the default pool into a custom application pool.
Any C-Clips held in the default pool that use the FinanceProfile are migrated to the defined
custom application pool which in this case is called FinancePool.
Centera 1
Config# create poolmapping FinanceProfile FinancePool
Config# migrate poolmapping start
View Migration Process
Config# show pool migration
Migration Status:             finished
Average Migration Progress:   100%
Completed Online Nodes:       5/5
ETA of slowest Node:          0 hours

Refer to the CLI Reference Guide for the complete version of each CLI command.


Provide Legacy Protection


The purpose of this procedure is to explain how to provide legacy protection to applications
using Centera. In some cases, applications need access to legacy data held in the default pool
to function correctly.
The default pool contains every C-Clip that is not contained in a custom application pool. The
main purpose of this pool is backwards compatibility and protection of historical data.
C-Clips written using version 2.2 SP2 or earlier will always belong to the default pool. It is not
possible to migrate these C-Clips to an application pool. A migration task does however exist
for C-Clips written using the SDK 2.3 or higher. These C-Clips can be moved into pools,
provided that an appropriate access profile was used when writing the C-Clip.
1. Launch the CLI and connect to the source cluster.
2. Create a pool using the create pool command. Assign rights (capabilities) to the pool
and set the quota.
3. Create an access profile using the create profile command.
4. Create or generate a profile secret and assign rights for the profile in its home pool.
Set the capabilities the access profile should have in its home pool.
5. Enter Pool Entry Authorization creation information. Enter the pathname for a file
where the specified file is accessible on the local system and where the Pool Entry
Authorization file will be saved. If the file exists, then the user is prompted to
overwrite the file.
6. Grant the profile minimum access rights to the home pool to enable the application
to function properly. These may be: read, exist, query and/or delete.
7. Grant the profile read only access to the default pool. No data can be added because
the application does not have write access.

The following example is used to explain the above procedure.


Provide Legacy Protection Example


In this example two pools are created, each of which has an access profile associated with it.
Legacy data is always held in the default pool, which is created automatically. Any objects
that exist on the cluster before replication is enabled are held in the default pool. To copy
data that existed before replication was set up to a target cluster, use the restore start
command.
Centera 1
Config# create pool App2.0
Config# create profile Application
Granted rights for the profile in the home pool [rdqeDcw]: rw
Config# create pool Media
Config# create profile MediaApplication
Granted rights for the profile in the home pool [rdqeDcw]: rwd
Config# set grants Default Application
Granted pool rights for profile [rdqeDcw]: r

Refer to the CLI Reference Guide for the complete version of each CLI command.


Segregate Application Data and Selective Replication


The purpose of this example is to explain how the system administrator can use pools to
assign applications access rights and logically segregate data.
Pools assign access rights to profiles enabling the system administrator to control the
capabilities of an application.
A pool is regarded as identical only if the pool ID is the same. Manually re-creating a
pool does not generate the same pool ID, which is unique for each pool; hence
replication/restore will fail. Use the Export/Import commands to copy a pool to another cluster.
1. Launch the CLI and connect to the source cluster.
2. Create separate pools for each application using the create pool command. Assign
rights (capabilities) to the pool and set the quota.
3. Create access profiles for each application using the create profile command. An
access profile is bound to its home pool. All C-Clips created with this profile belong
to the same pool as the profile. Select the home pool for the profile and assign
relevant access rights to the profile.
4. Enable the profile and grant the profile the necessary access rights in its home pool.
This is part of the create profile command.
5. Export pool and profile information of the pools to replicate to a file using the export
poolprofilesetup command. Enter the pool names and a location and name for the
generated file (local machine).
6. Launch another CLI session and connect to the target cluster (data will be replicated
to the target cluster).
7. Import the pool and profile information of the pools that will be replicated using the
import poolprofilesetup command. Enter the location and name of the file (local
machine) as given in step 5.
8. Create an access profile that will be used by the source cluster to replicate data to the
target cluster using the create profile command followed by the name of the new
profile. Establish a .pea file and enter the location and name for this file (local
machine).


9. Grant the replication profile the clip-copy right for the pools on the target cluster to
which data will be replicated using the set grants command. Enter c when asked for
pool rights. Issue this command for each of the replication pools, including the
default pool if needed.
10. Return to the CLI on the source cluster.
11. Enable replication using the set cluster replication command. Enter the IP address
and port number of the target cluster, enter the pool names to replicate, and enter the
location and name of the replication .pea file (local machine) as given in step 8.
The following example is used to explain the above steps.

Segregate Application Data Example


This example explains how pools can be used to logically segregate data stored on the same
Centera.
Two pools are created (Finance and ITOps), each of which has an access profile associated
with it. Application 1 has read and write capabilities while Application 2 has read, write and
delete capabilities.
Refer to the CLI Reference Guide for the complete version of each CLI command.
A pool is regarded as identical only if the pool ID is the same. Manually re-creating a
pool does not generate the same pool ID, which is unique for each pool; hence
replication/restore will fail. Use the Export/Import commands to copy a pool to another cluster.
Centera 1
Create pool and profile for Finance data
Config# create pool Finance
Config# create profile Application1
Home Pool [default]: Finance
Granted rights for the profile in the home pool [rdqeDcw]: rw
Establish a pool entry authorization for application use? (yes, no) [no]: Y
Enter pool entry authorization creation information: C:\PEA1
Create pool and profile for ITOps data
Config# create pool ITOps
Config# create profile Application2
Home Pool [default]: ITOps
Granted rights for the profile in the home pool [rdqeDcw]: rwd
Export all pools and profiles on the source cluster
Config# export poolprofilesetup

Export complete setup? (yes, no) [no]: Y
Enter the pathname where the export file should be saved: C:\Config_Centera
Centera 2
Import Data
Config# import poolprofilesetup
Enter the pathname for the pool import data: C:\Config_Centera

Create Profile for Replication


Config# create profile Finance
Profile Secret [generate]:
Enable Profile? (yes, no) [no]: Y
Monitor Capability? (yes, no) [no]: Y
Profile-Metadata Capability? (yes, no) [no]: Y
Profile Type (access, cluster) [access]: access
Home Pool [default]:
Granted Rights for the Profile in the Home Pool [rdqeDcw]: c
Issue the command?
(yes, no) [no]: Y
Establish a Pool Entry Authorization for application use? (yes, no) [no]: Y
Enter Pool Authorization creation information: C:\Finance.pea

Set Replication
Config# set cluster replication
Replication Enabled? (yes, no) [no]: Y
Replication Address [10.88.999.191:3218]: 10.88.999.191:3218,
10.68.999.111:3218
Replicate Delete? (yes, no) [yes]: Y
Replicate incoming replicated Objects? (yes, no) [yes]: Y
Replicate all Pools? (yes, no) [yes]: Y
Profile Name:
Issue the command?
(yes, no) [no]:

Refer to the CLI Reference Guide for the complete version of each CLI command.


Replicate Specific Pools


The purpose of this procedure is to explain how a pool or number of pools can be selected for
replication to another cluster. To replicate a pool, the pool must already exist on the target
cluster. The pool membership of a single C-Clip cannot be changed explicitly. CentraStar
generates a unique ID for each pool created which means that no other pool can have the
same ID.
A pool is regarded as identical only if the pool ID is the same. Manually re-creating a
pool does not generate the same pool ID, which is unique for each pool; hence
replication/restore will fail. Use the Export/Import commands to copy a pool to another cluster.
It is assumed below that pool(s) and any associated profiles have already been created on the
source cluster. Monitor capabilities are required on the source cluster.
1. Launch the CLI and connect to the source cluster.
2. Determine which pools to replicate using the show pool list command.
3. Export pool and profile information of the pools to replicate to a file using the export
poolprofilesetup command.
4. Do not export the complete setup. Export based on pool selection when prompted.
5. Enter the pool names and a location and name for the generated file (local machine).
6. On the target cluster, launch the CLI.
7. Import the pool and profile information of the pools that will be replicated using the
import poolprofilesetup command. Enter the location and name of the file (local
machine) as given in step 5.
8. Import based on pools when prompted.
9. Enter the name of the pool(s) to import.
10. Create an access profile that will be used by the source cluster to replicate data to the
target cluster using the create profile command followed by the name of the new
profile. Establish a .pea file and enter the location and name for this file (local
machine).
11. Grant the replication profile the clip-copy right for the pools on the target cluster to
which data will be replicated using the set grants command. Enter c when asked for
pool rights. Issue this command for each of the replication pools.


12. Return to the CLI on the source cluster.


13. Enable replication using the set cluster replication command. Enter the IP address
and port number of the target cluster, enter the pool names to replicate, and enter the
location and name of the replication .pea file (local machine) as given in step 11.

Replicate Specific Pools Example


In this example, two pools are selected for replication. The export/import poolprofilesetup
commands are used to manage pools and profiles across clusters.
A pool is regarded as identical only if the pool ID is the same. Manually re-creating a
pool does not generate the same pool ID, which is unique for each pool; hence
replication/restore will fail. Use the Export/Import commands to copy a pool to another cluster.
Centera 1
Config# export poolprofilesetup
Export complete setup? (yes, no) [no]: n
Export based on pool or profile selection? (pool, profile) [pool]: pool
Export all Pools? (yes, no) [no]: n
Pools to Export: Centera, Console
Please enter the pathname where the export file should be saved:
C:\ExportFile
Exported 2 pools and 1 profile.
Centera 2
Config# import poolprofilesetup
Enter the pathname for the pool import data: C:\ExportFile
Found 2 pools and 1 profile.
Import pools? (yes, no) [yes]: Y
Set Replication
Config# set cluster replication
Replication Enabled? (yes, no) [yes]: Y
Replication Address [10.69.155.191:3218]: 10.69.155.191:3218,
10.68.999.111:3218
Replicate Delete? (yes, no) [yes]: Y
Replicate incoming replicated Objects? (yes, no) [yes]: Y
Replicate all Pools? (yes, no) [yes]: N
Pools to Replicate: Centera, Console
Profile Name:
Issue the command?
(yes, no) [no]: Y


Replication should be configured with multiple IP addresses. If only one IP address has been
configured, replication will stop when the node with the defined IP address goes down.
Refer to the CLI Reference Guide for the complete version of each CLI command.

Restore Specific Pools


The purpose of this procedure is to explain how to restore only selective pools.
A pool is regarded as identical only if the pool ID is the same. Manually re-creating a
pool does not generate the same pool ID, which is unique for each pool; hence
replication/restore will fail. Use the Export/Import commands to copy a pool to another
cluster. Monitor capabilities are required on the source cluster.
1. Launch the CLI and connect to the source cluster.
2. Determine which pools to restore using the show pool list command.
3. Export pool and profile information of the pools to restore to a file using the export
poolprofilesetup command.
4. Do not export the complete setup. Export based on pool selection when prompted.
5. Enter the pool names and a location and name for the generated file (local machine).
6. On the target cluster, launch the CLI.
7. Import the pool and profile information of the pools that will be restored using the
import poolprofilesetup command. Enter the location and name of the file (local
machine) as given in step 5.
8. Import based on pools when prompted.
9. Enter the name of the pool(s) to import.
10. Create an access profile that will be used by the source cluster to restore data to the
target cluster using the create profile command followed by the name of the new
profile. Establish a .pea file and enter the location and name for this file (local
machine).
11. Grant the restore profile the clip-copy right for the pools on the target cluster to
which data will be restored using the set grants command. Enter c when asked for
pool rights. Issue this command for each of the replication pools.
12. Return to the CLI on the source cluster.
13. Enable restore using the restore start command. Enter the IP address and port
number of the target cluster. Enter either Full or Partial mode, and enter the Start and
End dates for the restore operation. All pools and associated profiles will be restored
to the target cluster within this period.
The following example is used to explain the above steps.

Restore Specific Pools Example


In this example, two pools are selected to be restored. The export/import poolprofilesetup
commands are used to manage pools and profiles across clusters.
A pool is regarded as identical only if the pool ID is the same. Manually re-creating a
pool does not generate the same pool ID, which is unique for each pool; hence
replication/restore will fail. Use the Export/Import commands to copy a pool to another cluster.
Centera 1
Config# export poolprofilesetup
Export complete setup? (yes, no) [no]: n
Export based on pool or profile selection? (pool, profile) [pool]: pool
Export all Pools? (yes, no) [no]: n
Pools to Export: Centera, Console
Please enter the pathname where the export file should be saved:
C:\ExportFile
Exported 2 pools and 1 profile.

Centera 2
Config# import poolprofilesetup
Enter the pathname for the pool import data: C:\ExportFile
Found 2 pools and 1 profile.
Import pools? (yes, no) [yes]: Y
Start Restore
Config# restore start
Restore Address: 10.68.999.111:3218, 10.69.133.3:3218
Mode (full, partial):
{if partial}
Start Date (MM/dd/yyyy):
Stop Date (MM/dd/yyyy):
Profile Name:
{if profile is given}
Location of .pea file: C:\ExportFile
Issue the command? (yes, no) [yes]:


Refer to the CLI Reference Guide for the complete version of each CLI command.

Set Up Bidirectional Replication of One or More Pools


The purpose of this procedure is to explain how to set up bidirectional replication between
two clusters. The source cluster contains one or more pools that need to be replicated to the
target cluster and the target cluster contains one or more pools that need to be replicated to
the source cluster.
Refer to the CLI Reference Guide for the complete version of each CLI command.
A pool is regarded as identical only if the pool ID is the same. Manually re-creating a
pool does not generate the same pool ID, which is unique for each pool; hence
replication/restore will fail. Use the Export/Import commands to copy a pool to another
cluster.
1. Launch the CLI and connect to the source cluster.
2. Determine which pools to replicate using the show pool list command.
3. Export pool and profile information of the pools to replicate to a file using the export
poolprofilesetup command. Enter the pool names and a location and name for the
generated file (local machine).
4. Launch another CLI session and connect to the target cluster.
5. Determine which pools to replicate using the show pool list command.
6. Export pool and profile information of the pools to replicate to a file using the export
poolprofilesetup command. Enter the pool names and a location and name for the
generated file (local machine). Do not include the default pool because it already
exists on the target cluster.
7. Import the pool and profile information of the pools that will be replicated using the
import poolprofilesetup command. Enter the location and name of the file (local
machine) as given in step 3.
8. Create an access profile that will be used by the source cluster to replicate data to the
target cluster using the create profile command followed by the name of the new
profile. Establish a .pea file and enter the location and name for this file (local
machine).
9. Grant the replication profile the clip-copy right for the pools on the target cluster to
which data will be replicated using the set grants command. Enter c when asked for
pool rights. Issue this command for each of the replication pools, including the
default pool if needed.
10. Go to the CLI session that is connected to the source cluster.
11. Import the pool and profile information of the pools that will be replicated using the
import poolprofilesetup command. Enter the location and name of the file (local
machine) as given in step 6.
12. Create an access profile that will be used by the target cluster to replicate data to the
source cluster using the create profile command followed by the name of the new
profile. Establish a .pea file and enter the location and name for this file (local
machine).
13. Grant the replication profile the clip-copy right for the pools on the source cluster to
which data will be replicated using the set grants command. Enter c when asked for
pool rights. Issue this command for each of the replication pools, including the
default pool if needed.
14. Enable replication using the set cluster replication command. Enter the IP address
and port number of the target cluster, enter the pool names to replicate, and enter the
location and name of the replication .pea file (local machine) as given in step 8.

Do not select Replicate incoming replicated objects.


15. Go to the CLI session that is connected to the target cluster.
16. Enable replication using the set cluster replication command. Enter the IP address
and port number of the source cluster, enter the pool names to replicate, and enter
the location and name of the replication .pea file (local machine) as given in step 12.
Do not select Replicate incoming replicated objects.
17. Quit both CLI sessions.
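
The following sketch condenses steps 14 and 16 into two abbreviated set cluster replication sessions. The pool name, profile names and addresses are illustrative, the prompts are shortened from the set cluster replication examples elsewhere in this guide, and the CLI also asks for the .pea files created in steps 8 and 12:
Centera 1 (source cluster)
Config# set cluster replication
Replication Enabled? (yes, no) [no]: Y
Replication Address [...]: 10.68.999.111:3218
Replicate Delete? (yes, no) [yes]: Y
Replicate incoming replicated Objects? (yes, no) [yes]: N
Replicate all Pools? (yes, no) [yes]: N
Pools to Replicate: Finance
Profile Name: ReplicationToTarget
Centera 2 (target cluster)
Config# set cluster replication
Replication Enabled? (yes, no) [no]: Y
Replication Address [...]: 10.88.999.191:3218
Replicate Delete? (yes, no) [yes]: Y
Replicate incoming replicated Objects? (yes, no) [yes]: N
Replicate all Pools? (yes, no) [yes]: N
Pools to Replicate: Finance
Profile Name: ReplicationToSource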


Set Up Chain Replication of One or More Pools


The purpose of this procedure is to explain how to set up chain replication between a source
cluster and two target clusters. The source cluster contains one or more pools that need to be
replicated to the first target cluster and the first target cluster contains one or more pools that
need to be replicated to the second target cluster.
Refer to the CLI Reference Guide for the complete version of each CLI command.
A pool is regarded as identical only if the pool ID is the same. Manually re-creating a
pool does not generate the same pool ID, which is unique for each pool; hence
replication/restore will fail. Use the Export/Import commands to copy a pool to another
cluster.
The setup of chain replication is the same as for unidirectional replication between a source
and one target cluster. Additionally the user can set up unidirectional replication between the
first and second target cluster.
1. Launch the CLI and connect to the source cluster.
2. Determine which pools to replicate using the show pool list command.
3. Export pool and profile information of the pools to replicate to a file using the export
poolprofilesetup command. Enter the pool names and a location and name for the
generated file (local machine).
4. Launch another CLI session and connect to the first target cluster.
5. Determine which pools to replicate using the show pool list command.
6. Export pool and profile information of the pools to replicate to a file using the export
poolprofilesetup command. Enter the pool names and a location and name for the
generated file (local machine). Do not include the default pool because it already
exists on the target cluster.
7. Import the pool and profile information of the pools that will be replicated using the
import poolprofilesetup command. Enter the location and name of the file (local
machine) as given in step 3.
8. Create an access profile that will be used by the source cluster to replicate data to the
target cluster using the create profile command followed by the name of the new
profile. Establish a .pea file and enter the location and name for this file (local
machine).
9. Grant the replication profile the clip-copy right for the pools on the target cluster to
which data will be replicated using the set grants command. Enter c when asked for
pool rights. Issue this command for each of the replication pools, including the
default pool if needed.
10. Launch another CLI session and connect to the second target cluster.
11. Import the pool and profile information of the pools that will be replicated using the
import poolprofilesetup command. Enter the location and name of the file (local
machine) as given in step 6.
12. Create an access profile that will be used by the first target cluster to replicate data to
the second target cluster using the create profile command followed by the name of
the new profile. Establish a .pea file and enter the location and name for this file
(local machine).
13. Grant the replication profile the clip-copy right for the pools on the second target
cluster to which data will be replicated using the set grants command. Enter c when
asked for pool rights. Issue this command for each of the replication pools, including
the default pool if needed.
14. Go to the CLI session that is connected to the source cluster.
15. Enable replication using the set cluster replication command. Enter the IP address
and port number of the first target cluster, enter the pool names to replicate, and
enter the location and name of the replication .pea file (local machine) as given in
step 8.
16. Go to the CLI session that is connected to the first target cluster.
17. Enable replication using the set cluster replication command. Enter the IP address
and port number of the second target cluster, enter the pool names to replicate, and
enter the location and name of the replication .pea file (local machine) as given in
step 12.
18. Quit all CLI sessions.

When asked for Replicate incoming replicated objects, select Yes.
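
For example, when replication is enabled on the first target cluster (step 17), answering Yes to that prompt is what allows C-Clips arriving from the source cluster to travel on to the second target cluster. The values below are illustrative and the prompts are abbreviated from the set cluster replication examples elsewhere in this guide:
First target cluster
Config# set cluster replication
Replication Enabled? (yes, no) [no]: Y
Replication Address [...]: 10.68.999.111:3218
Replicate Delete? (yes, no) [yes]: Y
Replicate incoming replicated Objects? (yes, no) [yes]: Y
Replicate all Pools? (yes, no) [yes]: N
Pools to Replicate: Finance
Profile Name: ReplicationToSecondTarget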


Set Up Star Replication on One or More Pools


The purpose of this procedure is to explain how to set up star replication between two source
clusters and one target cluster. The two source clusters contain one or more pools that need
to be replicated to one target cluster.
Refer to the CLI Reference Guide for the complete version of each CLI command.
A pool is regarded as identical only if the pool ID is the same. Manually re-creating a
pool does not generate the same pool ID, which is unique for each pool; hence
replication/restore will fail. Use the Export/Import commands to copy a pool to another
cluster.
The setup of star replication is the same as for unidirectional replication between one source
and one target cluster. In this use case we additionally set up unidirectional replication
between a second source cluster and the target cluster.
To support star replication with a third source cluster, set up unidirectional replication
between the third source cluster and the target cluster.
1. Launch the CLI and connect to the first source cluster.
2. Determine which pools to replicate using the show pool list command.
3. Export pool and profile information of the pools to replicate to a file using the export
poolprofilesetup command. Enter the pool names and a location and name for the
generated file (local machine).
4. Launch another CLI session and connect to the second source cluster.
5. Determine which pools to replicate using the show pool list command.
6. Export pool and profile information of the pools to replicate to a file using the export
poolprofilesetup command. Enter the pool names and a location and name for the
generated file (local machine). Do not include the default pool because it already
exists on the target cluster.
7. Launch another CLI session and connect to the target cluster.
8. Import the pool and profile information of the pools that will be replicated from the
first source cluster using the import poolprofilesetup command. Enter the location
and name of the file (local machine) as given in step 3.
9. Create an access profile that will be used by the first source cluster to replicate data
to the target cluster using the create profile command followed by the name of the
new profile. Establish a .pea file and enter the location and name for this file (local
machine).
10. Grant the replication profile the clip-copy right for the pools on the target cluster to
which data will be replicated using the set grants command. Enter c when asked for
pool rights. Issue this command for each of the replication pools, including the
default pool if needed.
11. Import the pool and profile information of the pools that will be replicated from the
second source cluster using the import poolprofilesetup command. Enter the
location and name of the file (local machine) as given in step 6.
12. Create an access profile that will be used by the second source cluster to replicate
data to the target cluster using the create profile command followed by the name of
the new profile. Establish a .pea file and enter the location and name for this file
(local machine).
Use the same access profile as created in step 9 and establish a .pea file for that profile on this
cluster.
13. Grant the replication profile the clip-copy right for the pools on the target cluster to
which data will be replicated using the set grants command. Enter c when asked for
pool rights. Issue this command for each of the replication pools, including the
default pool if needed.
14. Go to the CLI session that is connected to the first source cluster.
15. Enable replication using the set cluster replication command. Enter the IP address
and port number of the target cluster, enter the pool names to replicate, and enter the
location and name of the replication .pea file (local machine) as given in step 9.
16. Go to the CLI session that is connected to the second source cluster.
17. Enable replication using the set cluster replication command. Enter the IP address
and port number of the target cluster, enter the pool names to replicate, and enter the
location and name of the replication .pea file (local machine) as given in step 12.
18. Quit all CLI sessions.
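
Because both source clusters use the same replication profile on the target cluster (step 12 reuses the profile from step 9), a second .pea file can be generated for that existing profile with the update profile command, as described under Support Application Failover. A hypothetical session, assuming update profile walks through the same prompts as create profile; the profile name and path are illustrative:
Target cluster
Config# update profile ReplicationProfile
(accept the defaults for the existing settings)
Establish a Pool Entry Authorization for application use? (yes, no) [no]: Y
Enter Pool Authorization creation information: C:\ReplicationProfile_Source2.pea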


Set Up Unidirectional Replication of One or More Pools


The purpose of this procedure is to explain how to set up unidirectional replication between
a source cluster and a target cluster. The source cluster contains one or more custom pools
that need to be replicated to the target cluster.
Additionally, the default pool can be selected for replication.
Refer to the CLI Reference Guide for the complete version of each CLI command.
A pool is regarded as identical only if the pool ID is the same. Manually re-creating a
pool does not generate the same pool ID, which is unique for each pool; hence
replication/restore will fail. Use the Export/Import commands to copy a pool to another
cluster.
1. Launch the CLI and connect to the source cluster.
2. Determine which pools to replicate using the show pool list command.
3. Export the pool and profile information of the pools to replicate to a file using the
export poolprofilesetup command. Do not include the default pool because it
already exists on the target cluster.
4. Launch another CLI session and connect to the target cluster.
5. Import the pool and profile information of the pools that will be replicated using the
import poolprofilesetup command.
6. Enter the location and name of the file (local machine) as given in step 3.
7. Create an access profile using the create profile command. This profile will be used
by the source cluster to replicate data to the target cluster.
8. Establish a .pea file and enter the location and name for this file (local machine).
9. Grant the replication profile the clip-copy right for the pools on the target cluster to
which data will be replicated using the set grants command. Enter c when asked for
pool rights. Issue this command for each of the replication pools, including the
default pool if needed.
10. Return to the CLI session that is connected to the source cluster.
11. Enable replication using the set cluster replication command.
12. Enter the IP address and port number of the target cluster, enter the pool names to
replicate, and enter the location and name of the replication .pea file (local machine)
as given in step 8.
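
The end-to-end flow, condensed into one sketch. The pool name Finance, the profile name ReplicationProfile, the addresses and the file paths are illustrative, and the prompts are abbreviated from the command examples in the CLI Reference Guide:
Centera 1 (source cluster)
Config# show pool list
Config# export poolprofilesetup
Export complete setup? (yes, no) [no]: n
Export based on pool or profile selection? (pool, profile) [pool]: pool
Export all Pools? (yes, no) [no]: n
Pools to Export: Finance
Please enter the pathname where the export file should be saved: C:\ExportFile
Centera 2 (target cluster)
Config# import poolprofilesetup
Enter the pathname for the pool import data: C:\ExportFile
Config# create profile ReplicationProfile
Granted Rights for the Profile in the Home Pool [rdqeDcw]: c
Establish a Pool Entry Authorization for application use? (yes, no) [no]: Y
Enter Pool Authorization creation information: C:\ReplicationProfile.pea
Config# set grants Finance ReplicationProfile
Granted pool rights for profile [rdqeDcw]: c
Centera 1 (source cluster)
Config# set cluster replication
Replication Enabled? (yes, no) [no]: Y
Replication Address [...]: 10.68.999.111:3218
Replicate all Pools? (yes, no) [yes]: N
Pools to Replicate: Finance
Profile Name: ReplicationProfile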


Support Application Failover


The purpose of this procedure is to explain how to support multicluster failover for all
operations that fail on the source cluster by enabling the same list of capabilities on the target
cluster.
To support multicluster failover for applications the user has to make sure that the access
profile(s) that the application uses to access data on the target cluster(s) have the necessary
rights to support the failover strategy on the target cluster(s).
In a replication setup, the application automatically fails over to the target cluster(s) to read a
C-Clip if that C-Clip cannot be found on the source cluster. The SDK supports failover
strategies for operations other than read (write, delete, exist, and query).
Check detailed application settings with the application vendor. Refer to the Centera API
Reference Guide, P/N 069001185, and the Centera Programmers Guide, P/N 069001127, for
more information on multicluster failover.
Refer to the CLI Reference Guide for the complete version of each CLI command.
A pool is regarded as identical only if the pool ID is the same. Manually re-creating a
pool does not generate the same pool ID, which is unique for each pool; hence
replication/restore will fail. Use the Export/Import commands to copy a pool to another
cluster.
1. Launch the CLI and connect to the source cluster.
2. Determine which access profile(s) need to fail over to the target cluster using the
show profile list command.
3. Enter export poolprofilesetup. Select No when asked to export the complete setup, or
Yes for all profiles to fail over.
4. Select Profile to export and enter the name(s) of the profile(s).
5. Enter a location and name for the file that will store the export information. The
export file contains information on the selected profiles (profile name and secret).
6. Launch another CLI session and connect to the target (failover) cluster.
7. Enter import poolprofilesetup. Enter the pathname of the export file created in step 3.

The effective rights for the same access profile on the two clusters might differ because the
cluster masks can be different.


8. Enter update profile profile_name for all profiles that have been exported. Accept
the default settings and enter a path and file name for the .pea file. This command is
issued to generate .pea files for the exported profiles on the target cluster.
9. Quit both CLI sessions.
10. Merge the .pea file(s) that were generated in step 8 into one .pea file that is accessible
by the application.
This procedure shows how to support multicluster failover for all operations that fail on the
source cluster. Depending on the applications settings it might not be necessary to support
failover for all operations.
Check detailed application settings with your application vendor. Refer to the Centera API
Reference Guide, P/N 069001185, and the Centera Programmers Guide, P/N 069001127, for more
information on multicluster failover.
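
A condensed sketch of the procedure. The profile name FinanceApp and the file paths are illustrative, the prompts are abbreviated, and the profile-selection prompts are assumed to mirror the pool-selection prompts shown for export poolprofilesetup:
Centera 1 (source cluster)
Config# show profile list
Config# export poolprofilesetup
Export complete setup? (yes, no) [no]: n
Export based on pool or profile selection? (pool, profile) [pool]: profile
Profiles to Export: FinanceApp
Please enter the pathname where the export file should be saved: C:\FailoverProfiles
Centera 2 (target/failover cluster)
Config# import poolprofilesetup
Enter the pathname for the pool import data: C:\FailoverProfiles
Config# update profile FinanceApp
Establish a Pool Entry Authorization for application use? (yes, no) [no]: Y
Enter Pool Authorization creation information: C:\FinanceApp_target.pea
The file C:\FinanceApp_target.pea would then be merged with the .pea file generated on the source cluster as described under Merge Two or More .pea Files.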


CLI Reference Guide


Overview
The CLI is a Command Line Interface for system operators that enables them to manage and
monitor a Centera. The CLI enables the system operator to:
• Configure cluster settings.
• Send a notification to customer services or download a health report.
• Manage pools and access profiles.
• Control replication and restore.
The mapping table shows how commands can be grouped logically depending on their
function. For example, all commands relevant to pools are grouped, as are all commands
relevant to profiles.

Launch the CLI


1. Launch the CLI on a Windows PC by doing one of the following:
• Double-click Centera CLI in the Centera Tools <version x.x> folder on the desktop.
• Click Start\Programs\Centera Tools <version x.x>\Centera CLI.
• Launch Centera Viewer and then the CLI.
• Open a command prompt, type Centera CLI and press Enter.
2. Launch the CLI on a Solaris machine by opening a command line and typing
CenteraCli.

On a CE+ model all remote management capabilities are disabled. Use the CLI on a CE+
model by connecting a laptop directly to the eth2 port of any node of the cluster that does not
have the access role.


The IP address for this port is 10.255.0.1. This requires physical access to the cluster.
Compliance is invalidated if the CLI is used on a CE+ model by connecting a router to the
eth2 port of a node with the storage role and using Network Address Translation (NAT).
3. Enter a username when prompted and press Enter.
4. Enter a password when prompted and press Enter.

We recommend changing the admin password immediately to assure data security.


5. Enter the IP address of the node you want to access and press Enter.

The CLI will not launch if the information provided is incorrect. The message: A connection
cannot be established with the server is returned.
The command line prompt changes to Config#. This means the user is in the administrator's
CLI.
6. Enter quit to exit the CLI.

CLI Commands
The CLI commands can be mapped to different areas of Centera.
The mapping table shows how commands can be grouped logically depending on their
function. For example, all commands relevant to pools are grouped, as are all commands
relevant to profiles.
The following sets of CLI commands are available to system administrators:
• create: create domains, pools, pool mappings and profiles on the cluster.
• delete: delete domains, pools, pool mappings and profiles
• export: exports configuration data from the cluster
• help: provides help with using the CLI
• import: imports configuration data into the cluster
• migrate: migrate pool mappings
• notify: sends a notification to Customer Services
• quit: ends the CLI program
• replication: controls replication on the cluster
• restore: controls restore on a cluster
• set: configures specific settings
• show: retrieves information
• update: update domains, pools, pool mappings and profiles on the cluster.

CLI Conventions
This section lists the conventions used in the CLI.

Using Reserved Characters


The characters ', ", &, /, <, and > are not valid in the CLI. However, a URL entered during
installation can contain slashes (/).

Using Reserved Password Characters


The characters ', ", &, /, <, >, \, ^, %, and space are not valid for passwords.

Selecting Defaults
Default settings are displayed within square brackets. Press Enter to accept a default value.
Any changes made will become the new default setting, for example:
• Domain to which the cluster belongs [emc.com]:
• Press Enter to keep the default value set to emc.com.

Confirming Commands
The CLI executes commands that impact a cluster only after confirmation is received, for
example:
Issue the command? (yes, no) [no]: Y

Measurements Used in the CLI


Disk space is expressed in number of bytes.
1 GB = 10^9 bytes.
Time values that the CLI returns are expressed in milliseconds, seconds, minutes, hours or
days, depending on the magnitude of time. A month equals 30 days, a year equals 365 days.
The CLI interprets the value in milliseconds if you enter a time value without a measurement
unit.
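For example, a reported capacity of 250 GB corresponds to 250 x 10^9 bytes, and a time value
entered as 60000 without a unit is interpreted as 60000 milliseconds, or one minute.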

Node IDs Referenced in the CLI


The syntax for a node ID is cxxxnyy, where xxx and yy are numeric.
cxxx identifies the cabinet and nyy is a node in that cabinet.
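For example, c001n03 identifies node 03 in cabinet 001.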

Switch IDs Referenced in the CLI


The syntax for a switch ID is cxxxswyy, where xxx and yy are numeric. cxxx identifies the
cabinet and swyy identifies a switch in that cabinet.
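For example, c001sw02 identifies switch 02 in cabinet 001.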


CLI Settings
To display status information and current CLI settings use the show config CLI command.
The output displays:
• Version of the CLI in use
• When the current CLI session was started
• The node of the cluster (the Remote Manager) on which the connection has been
established.
• The number of lines to print before pausing.
• The number of columns to use for output.
To change these last two settings use the set cli command.

CLI Password
For security reasons, change the default administrator's password after the first login. Use set
security password to change the current password. When using version 2.0 or higher of the CLI
or Centera Viewer on a pre-2.0 cluster, the CLI or Centera Viewer password should not
contain more than 64 characters.
To set the current password back to the default, the user has to log in as security. The
administrator does not have rights to change the password back to the default. Use the set
security default command.
The field length of the email recipient list has a maximum setting of 256 characters. No emails
can be sent if more than 256 characters are entered. As an alternative, a mail group can be
created.


CLI Command Example


Syntax
create domain <name>

Use
This command creates a new domain. A domain is a logical grouping of clusters. For
example, clusters could be grouped together based on their physical location or the type of
data they store.
When CentraStar is installed, a default domain is created on each cluster. This domain is
called Default. The domain takes the access nodes associated with the cluster and uses their
IP addresses as the access addresses for the cluster. The domain contains the one cluster
called DefaultCluster.

Output
Config# create domain Finance
Issue the command? (yes, no) [no]: Y

Description
The following information is displayed:
• Name of the domain: Domain name must be unique. The command returns an error
if the name already exists. A domain name is case-sensitive.

Scripting the CLI


The purpose of this procedure is to explain how to automate frequently used CLI commands
using a batch file. Batch files are useful for storing sets of commands that are always executed
together because a user can simply enter the name of the batch file instead of entering each


CLI command individually.


The following steps explain one way to run/schedule a batch file:
1. Create a text file with the CLI commands and parameters the user would normally
enter manually into the CLI.
2. Save this file, for example: Script.cli.
3. Create a batch file which calls the Centera Viewer and CLI with the file script.cli
(previously created) as a parameter and re-directs output to another text file which
will be used to display the results of the CLI commands.
4. Save the file with a name (.bat) that can be easily remembered and associated with
the batch file and what it does. For example, a batch file that starts and pauses
replication could be named startstopreplication.bat.
5. Run the batch file or schedule it to run at a configurable time. On a Windows
operating system, a batch file can be scheduled to run using Scheduled Tasks in the
Control Panel.
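
As an alternative to the Scheduled Tasks control panel, the standard Windows schtasks utility can create the schedule from a command prompt. The task name, path and time below are illustrative, and the exact options accepted depend on the Windows version:
schtasks /create /tn "CenteraCliScript" /tr "C:\scripts\startstopreplication.bat" /sc daily /st 02:00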

Scripting CLI Example


Create a text file with the CLI commands and parameters the user would normally enter
manually into the CLI.
In this example, the user wants to automate the following CLI commands, which pause and
resume replication and display the status of nodes and the replication process.
show node status all
show replication detail
replication pause
yes
show replication detail
replication resume
yes
quit

Enter CLI Commands


Enter CLI commands into a text file. Save the file as script.cli. This file will be called from the
batch file in step 3. The results of the above CLI commands will be displayed in an output
file.

Create a batch file


Create a batch file which calls the Centera Viewer and CLI with the file containing the CLI
commands as a parameter and re-directs output to another text file which will be used to
display the results of the CLI commands.

Example
@echo off
java -cp CenteraViewer.jar com.filepool.remote.cli.CLI -u username -p password -ip 10.65.133.5:3682 -script script.cli > output.txt

Description
This script calls the Centera Viewer and CLI with the file script.cli as a parameter and redirects output to another text file which will be used to display the results of the CLI
commands.
• -u: Enter the username of the administrator, used to log in to Centera.
• -p: Enter the password of the administrator.
• -ip: Enter the IP address of the cluster.
• -script: Enter the name of the file containing the CLI commands that the user wants
to run.
• output.txt: Enter the name (Output.txt) of the file that will display the results of the
CLI commands.
Note: The user must have Centera Viewer and Java installed for the above procedures.


Create
Syntax
create domain <name>

Use
This command creates a new domain. A domain is a logical grouping of clusters. For
example, clusters could be grouped together based on their physical location or the type of
data they store.
When CentraStar is installed, a default domain is created on each cluster. This domain is
called Default. The domain takes the access nodes associated with the cluster and uses their IP
addresses as the access addresses for the cluster. The domain contains the one cluster called
DefaultCluster.

Output
Config# create domain Finance
Issue the command? (yes, no) [no]: Y

Description
The following information is required:
• Name of the domain: Domain name must be unique. The command returns an error
if the name already exists. A domain name is case-sensitive.


Syntax
create pool <name>

Use
This command creates a new pool on the cluster. This command also sets the pool mask and
pool quota.
A pool is regarded as identical only if the pool ID is the same. Manually re-creating a
pool does not generate the same pool ID, which is unique for each pool; hence
replication/restore will fail. Use the Export/Import commands to copy a pool to another cluster.

Output
Config# create pool Finance
Pool Mask [rdqeDcw] :
Pool Quota [1GB]:3000
Issue the Command?
(yes, no)[no]: yes
Created pool Finance with ID 9de08de2-1dd1-11b2-8a50-c408ebafa0c8-2

Description
The following information is displayed:
• Pool Name: Name of the pool. This must be unique and is case sensitive.
• Pool Mask: This assigns access rights (capabilities). Press return to automatically
assign all access rights, otherwise enter the individual letters corresponding to each
capability.
• Pool Quota: Current quota for the pool. This is the maximum amount of data that
can be written to the pool. This sets the size of the pool. Press return to accept the
default of 1 GB or enter a new quota.
Once the pool is created, CentraStar generates a unique ID. The display name of a pool can be
changed at will using the update pool command. The pool ID cannot be modified. An error is
returned if the pool already exists.


Syntax
create poolmapping <profile name> <pool name>

Use
This command creates a mapping between a profile used to write C-Clips to the default pool
and a pool. For each profile with legacy data in the default pool, a pool mapping is required
to be created before data can be migrated.
Pool mappings are added to the scratchpad which is used to prepare mappings until they are
correct and ready to be migrated.

Output
Config# create poolmapping FinanceProfile FinancePool
Warning: The given pool is not the home pool of the given profile
Issue the Command? (yes, no) [no]: Y
Added the mapping to the mapping scratchpad.

Description
The following information is displayed:
• Create poolmapping: Enter the name of the profile followed by the pool to create a
mapping to.
The following errors can be returned:
• Command failed: The pool was not found on the cluster.
• Command failed: It is not possible to map a profile to the cluster pool.
• Command failed: It is not possible to map a profile to the default pool.
A warning is given if the pool is not the home pool of the given profile.


Syntax
create profile <profile name>

Use
This command creates a new profile.

Output
Config# create profile Finance
Profile Secret [generate]:
Enable Profile? (yes, no) [no]: Y
Monitor Capability? (yes, no) [no]: Y
Profile-Metadata Capability? (yes, no) [no]: Y
Profile Type (access, cluster) [access]: access
Home Pool [default]:
Granted Rights for the Profile in the Home Pool [rdqeDcw]:
Issue the command?
(yes, no) [no]: Y
Establish a Pool Entry Authorization for application use? (yes, no) [no]: Y
Enter Pool Authorization creation information: C:\Finance.pea
File C:\Finance already exists, do you want to overwrite this file? (yes,
no) [no]: Y

Description
The following information is displayed:
• Profile Name: This must be unique and is case sensitive.
• Profile Secret: These are the unique credentials associated with the profile. This can
either be generated by CentraStar automatically or the user can create a .txt file and
save the secret in it. Press Enter to generate it automatically or reference the directory
containing the file, for example: C:\temp\secret.txt. The profile secret is stored in a
.txt file in a directory on the selected drive.
• Monitor Capability: Retrieves Centera statistics.
• Profile Metadata Capability: This supports storing/retrieving per profile metadata,
automatically added to the CDF.
• Profile Type: This can be set to a Cluster or Access profile. In this case, it should be
an Access profile.
• Home Pool: This is the profile's default pool. Every profile has a default pool. A
profile can perform all operations on its home pool. [rdqeDcw]
• Granted Rights: This sets the operations that can be performed on the pool of data in
the cluster. Authenticated operations are performed on all C-Clips in the cluster, also
on those that have been written for another access profile. The access rights can be
adapted at any time; the server immediately enforces the newly defined rights. A
profile does not have all rights by default in its home pool. The rights need to be
configured using the set grants command.
• Pool Entry Authorization: This file contains the profile name and credentials. Enter a
directory to store the .pea file.
The following errors can be displayed:
• Error: Could not create the Pool Entry Authorization file. The specified file could not
be created or is not writable.
• Command failed: The profile already exists on the cluster.
• Command failed: The maximum number of profiles is exceeded.


Syntax
create retention <name> <period>

Use
This command creates a retention period class and sets the length of the retention period.

Output
create retention CL1 3 Years
WARNING: Once activated, a retention period class cannot be deleted without
returning your Centera to EMC manufacturing and deleting ALL existing data
on your cluster.
Are you sure you want to create this retention period class?
(yes, no) [no]: Yes
Issue the command?
(yes, no) [no]: Yes

Description
The following information is displayed:
• Name: Name to give to the retention period class. The name can contain a maximum
of 255 characters and must not contain the following characters ' " / & < > <tab>
<newline> or <space>. A retention period class enables changes to be made to a
C-Clip without modifying the C-Clip itself. The retention class is a symbolic
representation of the retention period.
• Period: Length of time you want to set the retention period class to. This can be in
minutes, days, weeks, months or years. A retention period is the time that a data
object has to be stored before an application is allowed to delete it.
The following errors can be displayed:
If the name of the retention period class already exists, the following error will be
displayed:
• Error: The specified retention class already exists.


The update retention command may be used to change the retention period.
The retention period values can be entered in any combination of units of years, months,
days, hours, minutes and seconds. The default unit is seconds. The value infinite is also
allowed.

Delete
Syntax
delete domain <name>

Use
This command deletes a domain.

Output
delete domain Finance
Issue the command? (yes, no) [no]: Y

Description
The following information is displayed:
• Delete Domain: Enter a domain name of an existing domain. A domain name is
case-sensitive.
The following error can be displayed:
The command returns an error if the domain name does not exist:
This domain name is not defined


Syntax
delete pool <name>

Use
This command deletes a pool. Pools can be created and removed by an administrator
according to certain rules. See below.

Output
Config# delete pool Finance
WARNING: Are you sure you want to delete the pool and lose its granted
rights configuration?
(yes, no) [no]: Y
Issue the Command? (yes, no) [no]: Y

Description
The following information is displayed:
• Pool Name: Enter the name of the pool to delete.
A pool can only be deleted if it meets the following requirements:
• The pool does not contain any C-Clips or reflections.
• The pool is not the home pool of any profile.
• No mappings are defined from a profile to this pool.
• No database operations are ongoing involving this pool.
• All nodes are online and the pool is not being replicated or restored.
If the pool does not exist the following error is returned:
• Command failed: The pool was not found on the cluster.


Syntax
delete poolmapping <profile name> <pool name>

Use
This command deletes a mapping between the pool and the profile.

Output
Config# delete poolmapping FinanceProfile FinancePool
Issue the Command? (yes, no) [no]: Y
Deleted the mapping to the mapping scratchpad.

Description
The following information is displayed:
• Delete Poolmapping: Enter the name of the profile followed by the pool to delete a
mapping from.
The following errors can be displayed:
• Command failed: The mapping was not found in the scratchpad of the cluster.


Syntax
delete profile <profile name>

Use
This command deletes a profile.

Output
delete profile App1
Issue the command? (yes, no) [no]:

Description
The following information is displayed:
• Profile Name: Enter the profile name to delete.
The profile must exist on the cluster or an error message is returned.
The following errors can be returned:
• The profile <profile name> was not found on the server.
The root profile and the anonymous profile cannot be deleted. Attempting to delete either of
them will result in the following error message:
• Command failed: The user does not have permission to delete this profile.


Export
Syntax
export poolprofilesetup

Use
This command exports a pool and/or its profile definitions to another cluster.
A pool is regarded as identical only if the pool ID is the same. Manually re-creating a
pool does not generate the same pool ID, which is unique for each pool; hence
replication/restore will fail. Use the Export/Import commands to copy a pool to another
cluster.

Output
Config# export poolprofilesetup
Export complete setup? (yes, no): no
Export based on pool or profile selection? (pool, profile) [pool]: pool
Export all pools? (yes, no) [no]: No
Pools to export: Finance
Enter the pathname where the export file should be saved: C:\Finance
Exported 1 pool and 1 profile

Description
The following information is displayed:
• Export complete setup: Every pool and its associated profiles can be exported, or a
specific pool or profile.
• Export pool or profile: Select a pool or profile to export.
• Export all pools/profiles: Export all pools/profiles on the cluster or a specific
pool/profile.
• Pathname: Enter the full pathname of the location where the exported file should be
saved.
The following errors can be displayed:
If the pool cannot be found on the cluster, the following error is displayed:
• The pool <name> was not found on the cluster.
If the profile cannot be found on the cluster, the following error is displayed:
• The profile <name> was not found on the cluster.


Syntax
export profile <profile name>

Use
This command saves all profile information in a suitable format for loading onto another
cluster. This command does not generate a .pea file. It generates a file that is used to quickly
copy profiles between clusters.

Output
export profile Finance
Enter the pathname where the profile should be saved:
c:\savedprofiles\app3.prf
Profile saved into c:\savedprofiles\app3.prf.

Description
The following information is displayed:
• Pathname: Enter the full pathname where the profile should be saved.
The following errors can be displayed:
• Command failed: The profile was not found on the cluster.
• Error: Could not save the profile. The specified file could not be created or is not
writable.
The anonymous and root profiles cannot be saved to disk.
If the command is successful, the system will display: Profile saved into <filename entered>.


Help
Syntax
help <CLI command>

Use
This command provides help with using the CLI.

Output
This is the CLI for remote manageability of clusters.
All CLI commands can be abbreviated provided that enough of the command is
entered in order to uniquely identify it.

Commands
The full list of commands is then listed. Refer to CLI Commands.

Description
The following information is displayed:
• Enter help followed by a command, for example: help show config owner. This
command returns information about the show config owner command. Enter help
show to display a list of show commands.


Import
Syntax
import poolprofilesetup

Use
This command imports a pool and its profile definitions from another cluster.
A pool is regarded as identical only if the pool ID is the same. Manually re-creating a
pool does not generate the same pool ID, which is unique for each pool; hence
replication/restore will fail. Use the Export/Import commands to copy a pool to another
cluster.

Output
Config# import poolprofilesetup
Enter the pathname for the pool import data: C:\SavedPoolsData
Found 2 pools and 5 profiles

Step 1: Importing Pools
-------------------------
Import Pools? (yes, no) [yes]: Y
Pool Name    ID                       Mask       Profiles
------------------------------------------------------------------------------
Finance      APN01234567890-123456    rdqeDcw    5
Import pool Finance? (yes, no, abort) [yes]: Y
Imported pool Finance.

Step 2: Importing Profiles
--------------------------
Import Profiles? (yes, no) [yes]: Y
Import profile FinanceApp? (yes, no, abort) [yes]: Y
Imported profile FinanceApp


Description
The following information is displayed:
• Pathname: Enter the full pathname of the file containing the pools and/or profiles to
import.
• Importing Pools: Enter Y to import the pools saved to file.
• Importing Profiles: Enter Y to import the profiles saved to file.
The following error can be displayed:
If the file which contains the data does not exist or is corrupt, the following error is displayed:
• Error: Could not import data. The specified file does not exist or is not readable.


Syntax
import profile <profile name>

Use
This command loads a profile from a stored location.

Output
config# import profile
Enter the pathname for the profile data: C:\savedprofiles\app3.prf

Description
The following information is displayed:
• Pathname: Enter the full pathname of the file to be imported.
The following error can be displayed:
• Error: Could not import the profile. The specified file does not exist or is not readable.


Migrate
Syntax
migrate poolmapping revert

Use
This command reverts the scratchpad mappings to the active mappings: it copies the active mappings over the scratchpad mappings. Use it to bring the scratchpad back into sync with the active mappings.

Output
Config# migrate poolmapping revert
Do you want to lose the scratchpad mappings and revert the scratchpad to the
active mappings? (yes, no) [no]: Y
Issue the command? (yes, no) [no]: Y

Description
The scratchpad mappings revert to the active settings.


Syntax
migrate poolmapping start

Use
This command starts the migration process. This copies data from the default pool to a
custom pool. It copies the scratchpad mappings to the active mappings and starts the
migration process on each node asynchronously.

Output
Config# migrate poolmapping start
Do you want to start a migration process based on the scratchpad mappings?
(yes, no) [no]: Y
Capacity information might become incorrect
Do you want to recalculate it? (yes, no) [yes]: Y
Issue the Command? (yes, no) [no]: Y
Committed the scratchpad mappings and started the migration process.

Description
The following information is displayed:
• Start Migration Process: The mapping set up between the defined pools and profiles is started. Use the create poolmapping command to map profiles to pools.
If there is no change to either the scratchpad mappings or the active mappings, the following is displayed:
• No changes detected between active and scratchpad mappings.
• Trigger the migration process to restart? (yes, no) [no]: Y
The following errors can be returned:
If the pool cannot be found, the following error is displayed:
• Command failed: The pool was not found on the cluster.
If the profile cannot be found, the following error is displayed:
• Command failed: The profile was not found on the cluster.


Notify
Syntax
notify

Use
This command sends an email to the EMC Customer Support Center to verify email
connectivity. It can only be issued when ConnectEMC has been enabled on the cluster.
On a Compliance Edition Plus (CE+) model all remote serviceability procedures allow
modems to be connected to the Centera cabinet for the duration of the service intervention. It
is mandatory that the modems are disconnected when the service intervention is complete.

Output
Config# notify
Enter your name: admin
Enter a telephone number where you can be reached now: 01-234.2541
Describe maintenance activity:
End your input by pressing the Enter key twice
Please confirm:
Centera Maintenance Report
--------------------------------------------------------
Generated on Tuesday February 1, 2005 12:10:43 CET
Cluster Name             CUBE4-V3
Cluster Serial Number    APM00205030103
Installation Location    171 South Street
Date of last maintenance Monday 31 January 2005 20:59:45 CET
Administrative contact   admin
Admin email              admin_contact@home.com
Admin phone              01-234.2541
Activity reported
-----------------
Performed by admin
Reachable at 01-234.2541
--------------------------------------------------------
Can this message be sent (yes, no) [yes]: Y


Description
The following information is displayed:
• Enter your name: The name of the person who sends the email.
• Enter a telephone number: The telephone number where the person who sends the email can be reached.
• Describe maintenance activity: The maintenance activity that has taken place.
End the input process by pressing Enter twice. The cluster now generates a report.
When ConnectEMC is not configured, the following error is displayed:
• Unable to send notification. Please verify that the cluster is correctly configured to send email and that all necessary information has been correctly entered.
This command does not accept characters beyond ASCII127 for names and telephone
numbers.


Quit
Syntax
quit

Use
This command ends the CLI program. To log in again, the user must connect to the relevant cluster.

Output
quit

Description
The CLI application closes.


Replication
Syntax
replication parking process

Use
This command forces the process responsible for re-queuing the parked entries to run now. Issue this command after problems with replication parking have been resolved.

Output
Config# replication parking process
Issue the command?
(yes, no) [no]: Y

Description
Parked entries will now be retried.


Syntax
replication pause

Use
This command pauses replication. When replication is paused, the system continues to queue
newly written data for replication, but will not write any data to the target cluster until
replication is resumed.
Always pause replication on the source cluster before shutting down the target cluster.
Resume replication on the source cluster when the target cluster is up and running again.

Output
Config# replication pause
Issue the command? (yes, no) [no]:

Description
The following error is displayed if replication is not enabled:
? Command Failed: replication is not enabled.


Syntax
replication resume

Use
This command resumes replication when paused. When resuming, all queued data is written
to the target cluster.
Always pause replication on the source cluster before shutting down the target cluster.
Resume replication on the source cluster when the target cluster is up and running again.

Output
Config# replication resume
Issue the command?
(yes, no) [no]:

Description
The following error is displayed if replication is disabled:
? Command failed: replication is not enabled.


Restore
Syntax
restore cancel

Use
This command cancels the restore process.

Output
Config# restore cancel
Continue? (yes, no) [no]:

Description
The restore process has been cancelled.


Syntax
restore pause

Use
This command pauses the restore process.
When a restore operation is launched just after a restart of the target cluster, the operation
may be paused, reporting that the target cluster is full. Wait a couple of minutes before
issuing the restore resume command.

Output
Config# restore pause
Issue the command? (yes, no) [no]:

Description
Restore is now paused.


Syntax
restore resume

Use
This command resumes the paused restore process.
When a restore operation is launched just after a restart of the target cluster, the operation
may be paused, reporting that the target cluster is full. Wait a couple of minutes before
issuing the restore resume command.
The number of restored C-Clips reported by the CLI is an estimate only. This number is
calculated by halving the total number of C-Clips exported (as reported by each storage
node).

Output
Config# restore resume
Issue the command?
(yes, no) [no]:

Description
The restore process is now resumed.


Syntax
restore start

Use
This command restores data from the cluster to the given restore address. Restore does not
perform a delete for the reflections it encounters. A C-Clip and a reflection for that C-Clip can
both be present on the same cluster. For every C-Clip that needs to be copied, a cluster wide
location lookup is carried out. If this reveals that there is a reflection with a newer write
timestamp, the C-Clip is not copied.
Restore should be configured with multiple IP addresses. If only one IP address has been configured, restore will stop when the node with the defined IP address goes down.
When a restore operation is launched just after a restart of the target cluster, the operation
may be paused, reporting that the target cluster is full. Wait a couple of minutes before
issuing the restore resume command.
The number of restored C-Clips reported by the CLI is an estimate only. This number is
calculated by halving the total number of C-Clips exported (as reported by each storage
node).

Output
Config# restore start
Restore Address:10.68.999.111:3218, 10.69.188.4:3218
Mode (full, partial):

{if partial}
Start Date (MM/dd/yyyy): 04/14/2005
Stop Date (MM/dd/yyyy): 04/25/2005
Profile Name: Centera

{if profile is given}


Location of .pea file:


Issue the command? (yes, no) [yes]:

Description
The following information is displayed:
• Restore Address: The IP address and port number (port number is 3218) to which objects will be restored.
• Mode: Indicates whether a full restore of all data on the cluster will be processed (full) or whether the restore will only process data that is stored between the specified start and stop date (partial).
• Start Date: The date from which the restore range should begin (see the sketch after this list).
• Stop Date: The date at which the restore range should end.
• Profile Name: The name of the profile on the target cluster.
• Location of .pea file: The location of the .pea file on your local machine. Enter the full pathname to this location.
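The prompts above expect the partial-restore dates in MM/dd/yyyy form. The following is a minimal illustrative sketch, not part of the CLI, of how such a date range could be validated before issuing the command; the function name is hypothetical.

from datetime import datetime

# Sketch: validate a partial-restore date range in the MM/dd/yyyy format
# used by the restore start prompts above.
def parse_restore_range(start: str, stop: str):
    fmt = "%m/%d/%Y"
    start_date = datetime.strptime(start, fmt)
    stop_date = datetime.strptime(stop, fmt)
    if start_date > stop_date:
        raise ValueError("start date must not be later than stop date")
    return start_date, stop_date

print(parse_restore_range("04/14/2005", "04/25/2005"))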
There are a number of possible errors that can be received:
If restore is already running, the following error will be displayed:
• Error: Restore is active. First cancel active restoration.
When multiple credentials for the profile name are present in the .pea file, the following error will be displayed:
• Error: Multiple credentials for the given profile name defined in .pea file.
If no credential is defined in the .pea file, the following error will be displayed:
• Error: No credential for the given profile name defined in .pea file.


Set
Syntax
set capacity regenerationbuffer

Use
This command sets a regeneration buffer per cabinet. This buffer reserves capacity to be used
for regenerating data after disk and/or node failures. The reservation can be a hard
reservation (stop), preventing write activity to use the space, or it can be a soft reservation
(alert), used for alerting only.

Output
Config# set capacity regenerationbuffer
Mode (alert, stop) [stop]:
Limit in disks per cube [1]:
Issue the command? (yes, no) [no]:

Description
The following information is displayed:
• Mode: The type of reservation you want to set when available space reaches the regeneration buffer.
  Alert: Sends an alert when the regeneration buffer has been reached.
  Stop: Immediately ceases writing to the cabinet and sends an alert.
• Limit: The value per cabinet, in disks, of space that will be reserved for the purposes of regeneration.
The default setting is 1 disk and stop.


Syntax
set cli

Use
This command sets the number of lines that appear on screen before pausing and the number
of columns for output.

Output
Config# set cli
Number of lines to print before pausing [40]: 35
Number of columns to use for output [80]:

Description
The following information is displayed:
? Number of lines to print before pausing: Maximum number of lines that the CLI
command window successively displays. The default setting is 40. Press Enter to
confirm the default or enter another number to change the default.
? Number of columns to use for output: Width of the CLI screen. The default setting is
80. Press Enter to confirm the default or enter another number to change the default.
The following text appears if the maximum number of lines in the CLI command window is
reached before all returned information is displayed:
more? (yes, no) [yes]:
? Enter Y or press Enter to display more info, type n to stop the display of info.
EMC recommends accepting the default values. Default settings are 40 for Number of lines to
print before pausing and 80 for Number of columns to use for output.


Syntax
set cluster mask

Use
This command enables the administrator to change the cluster level mask. This level of access
control specifies which actions to block on the cluster, regardless of the profile and/or pool
that is being used.
For example, if the cluster denies read access, no application will be able to perform read
operations on any pool.

Output
Config# set cluster mask
Cluster Mask [rdqe-cw-m]:
Issue the command? (yes, no) [no]: n

Description
The operations available are: read, write, delete, privileged-delete, C-Clip copy, purge, query, exist, and monitor:

Capability               Definition
Write (w)                Write to a C-Clip. WriteClip access must be enabled to write. 'Enabled' or 'Disabled'.
Read (r)                 Read a C-Clip. 'Enabled' or 'Disabled'.
Delete (d)               Deletes C-Clips. 'Enabled' or 'Disabled'.
Exist (e)                Checks for the existence of a specified C-Clip. 'Enabled' or 'Disabled'.
Privileged Delete (D)    Deletes all copies of the C-Clip and can overrule retention periods. 'Enabled' or 'Disabled'.
Query (q)                Query the contents of a pool. When set to 'Enabled', C-Clips can be searched for in the pool using a time based query. 'Enabled' or 'Disabled'.
Clip-Copy (c)            Copy a C-Clip. 'Enabled' or 'Disabled'. This capability is needed for replication and restore operations.
Purge (p)                Remove all traces of a C-Clip from the cluster. 'Enabled' or 'Disabled'. Purge is only available for cluster level profiles.
Monitor (m)              Retrieves statistics concerning Centera.
Profile-Driven Metadata  Supports storing/retrieving per-profile metadata, automatically added to the CDF. 'Enabled' or 'Disabled'.
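The mask itself is a compact string of these single-letter codes, as in the prompt default rdqe-cw-m shown above, where a dash marks a disabled capability. The following is a minimal illustrative sketch, not part of the CLI, of how such a mask string could be decoded; the positional layout with '-' placeholders is an assumption based on that example.

# Sketch: decode a Centera-style mask string such as "rdqe-cw-m" into the
# capabilities it enables. Letter codes follow the capability table above.
CAPABILITIES = {
    "r": "read",
    "w": "write",
    "d": "delete",
    "D": "privileged-delete",
    "q": "query",
    "e": "exist",
    "c": "clip-copy",
    "p": "purge",
    "m": "monitor",
}

def decode_mask(mask: str):
    """Return the capability names enabled in the mask; '-' means disabled."""
    enabled = []
    for code in mask:
        if code == "-":
            continue  # disabled slot
        if code not in CAPABILITIES:
            raise ValueError(f"unknown capability code: {code!r}")
        enabled.append(CAPABILITIES[code])
    return enabled

print(decode_mask("rdqe-cw-m"))
# ['read', 'delete', 'query', 'exist', 'clip-copy', 'write', 'monitor']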


Syntax
set cluster replication

Use
This command sets the IP address of the target cluster and enables or disables replication. Replication should be configured with multiple IP addresses. If only one IP address has been configured, replication will stop when the node with the defined IP address goes down.
Furthermore, do not leave the IP addresses of nodes that are unavailable for an extended period of time in the configured list. Pause replication before changing IP addresses.
To support multicluster failover, applications must be able to access both the source and the target cluster.
To support unidirectional replication, the source cluster must be able to access the target cluster.
To support bidirectional replication, the source cluster must be able to access the target cluster and the target cluster must be able to access the source cluster.
Correct working of replication cannot be guaranteed if there are obstacles in the network infrastructure. We recommend using third-party network traffic shaper devices to control the network consumption of the different applications that are using the network.
Ports 3218 and 3682 must be available through a firewall or proxy for UDP and TCP. For port 3218 this includes all replication paths. Port 3682 is used for remote manageability connections (CV/CLI); this does not apply to CE+ models.
Replicating a 2.4 cluster to a 1.2 cluster is not supported. Replicating from a 2.4 cluster to a
2.0.1 or earlier version cluster is not supported if storage strategy performance is set. The
blobs with naming scheme GUID-MD5 (if storage strategy performance is set) or MD5-GUID
(if storage strategy capacity and Content Address Collision Avoidance is set) will block the
replication.


Replicated Compliance Clusters need to be time synchronized when using the Global Delete
Option.

Output
Config# set cluster replication
Replication Enabled? (yes, no) [no]: Y
Replication Address [10.88.999.191:3218]: 10.88.999.191:3218,
10.68.999.111:3218
Replicate Delete? (yes, no) [yes]: Y
Replicate incoming replicated Objects? (yes, no) [yes]: Y
Replicate all Pools? (yes, no) [yes]: Y
Profile Name:
Issue the command?
(yes, no) [no]: Y

Description
The following information is displayed:
• Replication Enabled: If replication is enabled (yes), all new data will be replicated to the specified address. Replication requests are generated as data is written to the cluster. If replication is disabled, no data will be replicated. Multiple IP addresses can be added in dotted notation or FQDN. In order to use FQDNs, a DNS server must be configured.
• Replication Address: Cluster IP address. This is the target cluster where data is being replicated to. The address consists of the host name or IP address, followed by the port number of the replication cluster. Separate multiple addresses with a comma (see the sketch after this list). The following is an example of the different replication addresses:
  • Host name: NewYork_183:3218, NewYork_184:3218
  • IP address: 10.68.999.111:3218, 10.70.129.123:3218
• Replicate Delete: Propagates deletes and privileged deletes of C-Clips on the source cluster to the target cluster. The corresponding pool must exist on the target cluster and replication/restore is automatically paused if a C-Clip is replicated for which the pool does not exist on the target cluster.
• Replicate Incoming replicated objects: Enter Yes to enable this. This must be enabled on the middle cluster in a Chain topology. This does not have to be enabled in a Unidirectional or Bidirectional topology.
• Replicate all Pools: One or more pools can be replicated. Enter Yes to replicate all pools on the cluster to the target cluster. Enter No to select the pool(s) to replicate.
• Profile Name: Profile name that was created on the target cluster. The export/import poolprofilesetup command enables the user to export pools and/or profiles to another cluster.
• Location of .pea file: Location of the .pea file on your local machine. Enter the full pathname to this location.
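The following is a minimal illustrative sketch, not part of the CLI, of how a comma-separated replication address list in the host:port form shown above could be split into host and port pairs; the function name is hypothetical and the addresses are the sample values from this section.

# Sketch: split a replication address list of the form
# "host-or-ip:port, host-or-ip:port, ..." into (host, port) pairs.
def parse_replication_addresses(value: str):
    pairs = []
    for entry in value.split(","):
        host, _, port = entry.strip().rpartition(":")
        if not host or not port.isdigit():
            raise ValueError(f"expected host:port, got {entry.strip()!r}")
        pairs.append((host, int(port)))
    return pairs

print(parse_replication_addresses("10.68.999.111:3218, 10.70.129.123:3218"))
# [('10.68.999.111', 3218), ('10.70.129.123', 3218)]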
The following errors can be returned:
When multiple credentials are present in the .pea file, the following error will be displayed:
• Error: Multiple credentials for the given profile name defined in .pea file.
If no credential is defined in the .pea file, the following error will be displayed:
• Error: No credential for the given profile name defined in .pea file.
Note: Make sure that your application server has access to both Access Profiles.


Syntax
set constraint <constraint name>

Use
This command provides the capability to specify minimum and maximum retention settings
per Centera.
Additionally, the administrator can indicate whether specifying a retention period is
mandatory or not.

Output
Config# set constraint check_retention_present
Mandatory retention period in clip? (true, false) [false]:
Issue the command? (yes, no) [no]: yes
Config# set constraint check_retention_range
Minimal retention period: 1 month
Maximal retention period: infinite
Issue the command? (yes, no) [no]: yes

Description
The following information is displayed:
There are two possible constraints:
• check_retention_present: This specifies whether retention is mandatory or not. This can be set to True or False.
• check_retention_range: This specifies minimum and maximum retention settings. It is only possible to set both retention limits, not each one separately.
The current retention settings are displayed using the show constraint command. If the administrator has not set any constraints, then the default values are shown.


The following error can be displayed:
• Unknown Constraint Name: Error: This Constraint name is not known to Centera.

Syntax
set grants <pool name> <profile name>

Use
Pools explicitly grant rights to profiles. Use this command to modify the specific rights
allowed by a pool to a specific profile.

Output
Config# set grants Finance Accounts
Granted pool rights for profile [r------]: r
Issue the command? (yes, no) [no]: yes
The new effective rights are displayed.

Description
The following information is displayed:
• Set grants: Name of the pool and profile to which rights are being granted.
• Granted pool rights: Enter the operations that can be performed on the pool.
The operations available are: read, write, delete, privileged-delete, C-Clip copy, query, and exist:

Capability               Definition
Write (w)                Write to a C-Clip. WriteClip access must be enabled to write. 'Enabled' or 'Disabled'.
Read (r)                 Read a C-Clip. 'Enabled' or 'Disabled'.
Delete (d)               Deletes C-Clips. 'Enabled' or 'Disabled'.
Exist (e)                Checks for the existence of a specified C-Clip. 'Enabled' or 'Disabled'.
Privileged Delete (D)    Deletes all copies of the C-Clip and can overrule retention periods. 'Enabled' or 'Disabled'.
Query (q)                Query the contents of a pool. When set to 'Enabled', C-Clips can be searched for in the pool using a time based query. 'Enabled' or 'Disabled'.
Clip-Copy (c)            Copy a C-Clip. 'Enabled' or 'Disabled'. This capability is needed for replication and restore operations.
Purge (p)                Remove all traces of a C-Clip from the cluster. 'Enabled' or 'Disabled'. Purge is only available for cluster level profiles.
Monitor (m)              Retrieves statistics concerning Centera.
Profile-Driven Metadata  Supports storing/retrieving per-profile metadata, automatically added to the CDF. 'Enabled' or 'Disabled'.

The access rights of a profile in a pool can also be updated using the update profile command.


Syntax
set icmp

Use
This command enables or disables ICMP.

Output
Enable ICMP (yes, no) [no]:
Issue the command? (yes, no) [no]:

Description
ICMP is now enabled or disabled.


Syntax
set node ip <node ID>

Use
This command changes the connection details for a node with the access role. Node ID refers
to a specific node. The syntax of a node ID is cxxxnyy, where cxxx identifies the cabinet and
nyy the node in that cabinet (xxx and yy are numeric).

Output
Config# set node ip c001n05
Use DHCP (yes, no) [no]: no
IP address [10.88.999.91]:
Subnet mask [255.333.255.0]:
IP address of default gateway [10.88.999.1]:
IP address of Domain Name Server [152.99.87.47]:
Issue the command? (yes, no) [no]: yes

Description
The following information is displayed:
Use DHCP: Dynamic Host Configuration Protocol (DHCP) automatically assigns IP addresses to devices attached to a Local Area Network (LAN). If you select yes for this option then the following fields are not required.
• IP address: IP address of a node with the access role. Ping the IP address to make sure that it is not in use before assigning it to a node with the access role (see the sketch after this list).
• Subnet mask: Subnet mask is the address that refers to the network ID.
• IP address of default gateway: Gateway is a network point that acts as an entrance to another network.
• IP address of Domain Name Server: Domain Name Server (DNS) locates and translates real names into IP addresses. For example, a cluster could be called Centera and this would be mapped to the appropriate IP address.
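The following is a minimal illustrative sketch, not part of the CLI, of the ping check recommended above: it probes a candidate address from a workstation before the address is assigned. The flags shown are the Linux ping form and the candidate address is hypothetical.

import subprocess

# Sketch: check whether a candidate IP address already answers a ping
# before assigning it to an access node. "-c 1 -W 1" is Linux ping syntax;
# adjust the flags for other platforms.
def ip_in_use(address: str) -> bool:
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", address],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0  # a reply means the address is already taken

candidate = "10.88.100.91"  # hypothetical address, not from this manual
if ip_in_use(candidate):
    print(f"{candidate} answered a ping; choose a different address")
else:
    print(f"{candidate} appears to be free")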


The following errors can be displayed:
If the command is executed on a node which does not have the access role, then the following error message is displayed:
• Command failed: The local node does not have the Access role.


Syntax
set node linkspeed <node ID>

Use
This command sets the speed of the external network controller on a node with the access
role. Node ID refers to a specific node. The syntax of a node ID is cxxxnyy, where cxxx
identifies the cabinet and nyy the node in that cabinet (xxx and yy are numeric).

Output
Config# set node linkspeed c001n02
Link speed [auto]:
Issue the command?
(yes, no) [no]:

Description
The following information is displayed:
• Node linkspeed: Current connection speed of the external network link. The available linkspeed options are 10Mbit, 100Mbit, and 1000Mbit (1000Mbit for V3 hardware only).
• Link Speed: Autoneg or Force. Auto is the preferred setting. When the user requests force, the node connects at the requested speed only and does not fall back to any other speed. When the user requests autoneg, the node senses whether the requested speed is available and otherwise tries a lower speed.
When set to 1000Mbit, force behavior is not available. When the speed is set to 1000Mbit and force, the platform will report auto.
If the command is executed on a node which does not have the access role, then the following error is returned:
• Command failed: The local node does not have the Access role.


Syntax
set notification

Use
This command changes the parameters of ConnectEMC. ConnectEMC sends daily health
reports, alerts, and other notifications through an SMTP server. If an OnAlert station is used,
it must run an SMTP server to accept the information sent by ConnectEMC.

Output
Config# set notification
Mandatory: what is the primary ConnectEMC smtp relay? [not configured]:
What is the secondary ConnectEMC smtp relay? [not configured]:
Domain to which the cluster belongs? [local]: New York
Issue the command? (yes, no) [no]:

Description
The following information is displayed:
• Primary ConnectEMC smtp relay: The IP address or host name of an SMTP server or customer workstation on which OnAlert is installed. The SMTP address for OnAlert is Centera-alert@alert-station.customer.net.
• Secondary ConnectEMC smtp relay: The IP address or host name of a backup SMTP server or a secondary customer workstation on which OnAlert is installed. This parameter is optional.
• Domain: The domain to which the SMTP server belongs. For example, a cluster installed at Acme Inc. would probably have the local domain set to "acme.com". This is required even though the cluster itself cannot receive emails.
This command does not accept characters beyond ASCII 127 for email and IP addresses.


Syntax
set owner

Use
This command changes the administrator's details and sets the cluster identification.

Output
Config# set owner
Administrator name [not configured]:John Doe
Administrator email [not configured]: JohnDoe@EMC.com
Administrator phone [not configured]: 555 3678
Location of the cluster [not configured]: Mechelen
Name of the cluster: EMC_Centera_989893
Serial number of the cluster [not configured]: CL12345567SEW3
Issue the command? (yes, no) [no]: Y

Description
The following information is displayed:
• Cluster administrator's details: Name, email address, phone number.
• Location of the cluster: Physical location of the cluster, for example, the location of the customer site, floor, building number, and so on.
• Cluster name: Unique cluster name.
• Cluster serial number: EMC assigns this number. You can find it on the sticker with P/N 005047603, 005048103, or 005048326 (located in the middle of the rear floor panel directly inside the rear door).
Use the show config owner command to display these settings.


Syntax
set security default

Use
This command restores the default password of all users, except that of the administrator.

Output
Config# set security default
Issue the command? (yes, no) [no]:

Description
The password is set to the default for all users.


Syntax
set security defaultretention <value>

Use
This command sets the default retention period for the entire cluster.
The default retention period is only applicable to C-Clips for which no retention period was
specified by the SDK. The default retention period is not added to the CDF and is only used
when a delete is issued.

Output
Config# set security defaultretention 1 year
Issue this command?
(yes, no) [no]:

Description
The following information is displayed:
The value of the default retention period can be expressed in (milli) seconds, minutes, days,
months, and years.
Note: This only applies for GE models.
The following errors can be displayed:
If a negative value is entered, the following error is displayed:
• An illegal retention period has been entered.
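The following is a minimal illustrative sketch, not part of the CLI, of how a retention value such as the "1 year" in the example above could be converted to seconds. The unit spellings and the 30-day month / 365-day year lengths used here are assumptions for illustration only.

# Sketch: convert a retention value such as "1 year" into seconds.
# Unit spellings and month/year lengths are illustrative assumptions.
UNIT_SECONDS = {
    "millisecond": 0.001,
    "second": 1,
    "minute": 60,
    "day": 86_400,
    "month": 30 * 86_400,
    "year": 365 * 86_400,
}

def retention_seconds(value: str) -> float:
    amount, unit = value.split()
    seconds = float(amount) * UNIT_SECONDS[unit.rstrip("s")]
    if seconds < 0:
        raise ValueError("an illegal retention period has been entered")
    return seconds

print(retention_seconds("1 year"))  # 31536000.0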


Syntax
set security lock

Use
This command locks all nodes and restores the default network access security on all nodes that are accessible. After issuing this command, only admin accounts can make connections to the cluster for manageability. Any current service connections will not be closed, but future service connections will no longer be possible.

Output
Config# set security lock
Issue the command? (yes, no) [no]:

Description
If some of the nodes are down when this command is issued to lock all nodes, those nodes will not be locked when they come up again.


Syntax
set security management

Use
This command enables an administrator to specify the IP address that will be used by
Centera to validate the manageability connection. If the address does not match, access will
be denied. When enabled, only the machine with the specified IP address can manage a
Centera.

Output
Config# set security management
Restrict management access (enabled, disabled) [disabled]: enabled
Management address: 10.69.155.6
Issue the command?
(yes, no) [no]: Y

Description
The following information is displayed:
• Restrict Management Access: Enabled or Disabled. When enabled, only the machine with the specified IP address can manage Centera.
• Management Address: IP address used to manage Centera.
If the administrator unlocks the access node through CV or using the front panel, ssh connections can be made from any station by somebody who knows the service or l3 password.
Unlocking of access nodes is not possible in CE+. In that case the ssh traffic would be limited to stations that have access to a storage node eth2 card.


Syntax
set security password

Use
This command changes the admin password. Change the default admin password
immediately after first login for security reasons.

Output
Config# set security password
Old Password:
New Password:
New Password (confirm):

Description
The following information is displayed:
A valid admin password is required to connect to the cluster. The administrator's password
can be modified:
1. Enter the default password.
2. Enter a new password. The characters , , &, /, <, >, \, ^, %, and space are not valid
in passwords.
3. Confirm the new password.
The following error can be displayed:
The following error is displayed when the user confirms a new password which does not
match the previously entered new password.
• Passwords are not equal


Syntax
set security unlock <node ID>

Use
This command unlocks a specific node in the system. Service connections to a node are only
possible if that node is unlocked. EMC engineers may require specific nodes to be unlocked
prior to a service intervention. Once a node is unlocked all users can connect to it.
Use the show security command to display the security settings.
You cannot use this command to unlock a node with the access role on a CE+ model.

Output
Config# set security unlock <node id>
Warning! Cluster security can be compromised if you unlock a node.
Issue the command? (yes/no) [no]:

Description
The following information is displayed:
• Node ID: A specific node.
The syntax of a node ID is cxxxnyy, where cxxx identifies the cabinet and nyy the node in
that cabinet (xxx and yy are numeric).


Syntax
set snmp

Use
This command sets the parameters necessary to use SNMP against a Centera cluster.
The set snmp command is unique to Centera and should not be confused with the more
widely known snmp set command. This CLI does not support the snmp set command.
CentraStar version 2.0, 2.1, and 2.2 CE models support SNMP access. A CE+ model does not.
Version 2.3 and higher supports SNMP access on all compliance models.
Centera supports SNMP version 2 only.

Output
Config# set snmp
Enable SNMP (yes, no)? [no]
Management station [10.99.1.129:453]:
Community name [public]:
Heartbeat trap interval [1 minute]:
Issue the command? (yes, no) [no]:

Description
The following information is displayed:
• Enable SNMP: Yes or No.
• Management station IP: IP address and port number of the server to which the SNMP traps should be sent. The management station address can be entered as an IP address in dotted quad format, for example 10.72.133.91:155, or as a hostname, for example centera.mycompany.com:162.
The first time the set snmp command is issued, the default IP address of the management
station is that of the system on which the CLI is running. If the SNMP management station is not set, the system will display unknown.


• Community name: Used to authenticate the SNMP traps and can be considered a password. The Community name cannot be longer than 255 characters and may contain any non-control character except , , /, <, >, &, <newline>, and <cr>.
• Heartbeat trap: Interval for sending 'I am alive' traps.
The Heartbeat trap interval is an integer value of the form [0-9]+[mhd], where m indicates minutes, h indicates hours, and d indicates days. If no suffix is specified, then minutes is assumed.
• A value of 0 will disable the Heartbeat trap.
• The maximum value for the Heartbeat trap interval is 30 days.
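The following is a minimal illustrative sketch, not part of the CLI, of how the interval format just described could be interpreted, including the 0-disables rule and the 30-day maximum; the function name is hypothetical.

import re
from datetime import timedelta

# Sketch: interpret the heartbeat trap interval format described above
# (an integer with an optional m/h/d suffix; minutes assumed if no suffix).
def parse_heartbeat_interval(value: str):
    match = re.fullmatch(r"(\d+)([mhd]?)", value.strip())
    if not match:
        raise ValueError(f"invalid interval: {value!r}")
    amount, suffix = int(match.group(1)), match.group(2) or "m"
    if amount == 0:
        return None  # a value of 0 disables the heartbeat trap
    units = {"m": "minutes", "h": "hours", "d": "days"}
    interval = timedelta(**{units[suffix]: amount})
    if interval > timedelta(days=30):
        raise ValueError("the interval has to be between 1 minute and 30 days")
    return interval

print(parse_heartbeat_interval("90m"))  # 1:30:00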
The following errors can be returned:
• Command failed: The management station address is not valid.
• Command failed: The heartbeat interval has to be between 1 minute and 30 days.


Show Commands
Syntax
show capacity availability

Use
This command displays the capacity availability of all the nodes on the cluster.

Output
Config# show capacity availability
Number of nodes:                    7
Number of nodes with storage role:  5
Total Raw Capacity:                 9,020 GB (100%)
Used Raw Capacity:                  2,578 GB  (29%)
Free Raw Capacity:                  6,442 GB  (71%)
System Buffer:                        323 GB   (3%)
Regeneration Buffer:                  644 GB   (7%)
Available Capacity:                 5,474 GB  (61%)
Total Object Count:                 210 M    (100%)
Used Object Count:                  58         (0%)
Free Object Count:                  210 M    (100%)

Description
The following information is displayed:
Term                 Definition
Total Raw Capacity   The total physical capacity of the cluster/cube/node or disk.
Used Raw Capacity    The capacity that is used or otherwise not available to store data; this includes the capacity reserved as system resources, not assigned for storage or offline, and capacity actually used to store user data and associated audit and Metadata.
Free Raw Capacity    The capacity that is free and available for storing data or for self healing operations in case of disk or node failures or for database growth and failover.
System Buffer        Allocated space that allows internal databases and indexes to safely grow and failover. As the system is filled with user data, and the Audit & Metadata capacity increases, the capacity allocated to the System Buffer decreases.
Regeneration Buffer  Space that is allocated for regeneration. Depending on the Regeneration Buffer Policy, this allocation can be a soft (Alert Only) or hard (Hard Stop) allocation.
Available Capacity   The amount of capacity available to write. If the Regeneration Buffer Policy is set to Alert Only, this equals Free Raw Capacity - System Buffer. If the Regeneration Buffer Policy is set to Hard Stop, this equals Free Raw Capacity - System Buffer - Regeneration Buffer.
Total Object Count   The number of objects that can be stored.
Used Object Count    The total object capacity already used.
Free Object Count    The total number of objects that can still be written to the cluster.
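The Available Capacity rule in the table above can be illustrated with a small sketch (not part of the CLI; the function name is hypothetical) using the figures from the sample output of this command.

# Sketch: reproduce the Available Capacity figure from the sample output
# above, using the definitions in the table (values in GB).
def available_capacity(free_raw, system_buffer, regeneration_buffer, policy="stop"):
    """policy is 'alert' (soft reservation) or 'stop' (hard reservation)."""
    if policy == "alert":
        return free_raw - system_buffer
    return free_raw - system_buffer - regeneration_buffer

# Sample output above: 6,442 GB free, 323 GB system buffer, 644 GB regeneration buffer.
print(available_capacity(6442, 323, 644))
# 5475, which the CLI reports as 5,474 GB after rounding of the underlying values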


Syntax
show capacity detail <scope>

Use
This command displays an overview of used and free capacity for the defined nodes. The scope of this command is extensive; the following node selections can be displayed:
• Access: All nodes with the access role
• Accessprincipal: The principal node for the pool server service
• All: All nodes
• Cabinet X: X being 1 - n; returns all nodes in one cabinet
• Clusterservice X: X being the name of one of the cluster services
• Failures X: Nodes needing intervention
  • N: Nodes with network interface failures
  • V: Nodes with volume or disk failures
  • *: Nodes with any type of failure
• Mirrorgroup X: X being one of the mirror groups used in the cluster
• Model X: X being one of the models used in the cluster
• Node CxxxNyy: A specific (existing) node
• Nodelist X,X: X being specific (existing) nodes
• Off: All nodes whose status is off
• On: All nodes whose status is on
• Principal: The principal node
• Rail X: X being 1 or 0 (denoting power rail 1 or power rail 0)
• Spare: All spare nodes
• Storage: All nodes with the storage role
The output below is sample output, displaying the capacity details for nodes with the access role.


The CLI will take the offline nodes into account to calculate the total raw capacity.

Output
Config# show capacity detail access
Node     Roles   Status  Total Raw  System  Offline  Used  Free Raw
-------------------------------------------------------------------------
c001n01  access  on      1,289 GB   7 GB    0 GB     0 GB  0 GB
c001n02  access  on      1,289 GB   7 GB    0 GB     0 GB  0 GB
-------------------------------------------------------------------------
Total (online nodes: 2)  2,577 GB   14 GB   0 GB     0 GB  0 GB

Description
The following information is displayed:
Term              Definition
Total Raw         Total Raw Capacity: The total physical capacity of the cluster/cube/node or disk.
System Resources  System Resources: The capacity that is used by the CentraStar software and is never available for storing data.
Offline           Offline Capacity: The capacity that is temporarily unavailable due to reboots, offline nodes, or hardware faults. This capacity will be available as soon as the cause has been solved.
Used              Used Raw Capacity: The capacity that is used or otherwise not available to store data; this includes the capacity reserved as system resources, not assigned for storage or offline, and capacity actually used to store user data and associated audit and Metadata.
Free Raw          Free Raw Capacity: The capacity that is free and available for storing data or for self healing operations in case of disk or node failures or for database growth and failover.


Syntax
show capacity regenerationbuffer

Use
This command displays the regeneration buffer mode (alert or stop) and the regeneration
buffer size.

Output
Config# show capacity regenerationbuffer
Capacity Regeneration Buffer Mode: Alert {or} Hard Stop
Capacity Regeneration Buffer Limit: 1 disk

Description
The following information is displayed:
• Capacity Regeneration Buffer Mode:
  Alert sends an alert when the regeneration buffer has been reached.
  Hard Stop immediately ceases writing to the cabinet and sends an alert.
• Capacity Regeneration Buffer Limit: Size of the regeneration buffer.


Syntax
show capacity total

Use
This command displays an overview of used and free raw capacity in the cluster and the
percentage of disk space used in relation to the total raw capacity.

Output
Config# show capacity total
Number of nodes:                    8
Number of nodes with storage role:  6
Total Raw Capacity:                 10,309 GB (100%)
Used Raw Capacity:                   7,719 GB  (75%)
Free Raw Capacity:                   2,610 GB  (25%)
Used Raw Capacity:                   7,719 GB  (75%)
  System Resources:                     57 GB   (1%)
  Offline Capacity:                  3,844 GB  (37%)
  Spare Capacity:                    2,563 GB  (25%)
  Used Capacity:                     1,255 GB  (12%)
    Audit & Metadata:                  397 MB   (0%)
    Protected User Data:             1,255 GB  (12%)
Used Object Count:                   2 M

Description
The following information is displayed:
Term                 Definition
Total Raw Capacity   The total physical capacity of the cluster/cube/node or disk.
Used Raw Capacity    The capacity that is used or otherwise not available to store data; this includes the capacity reserved as system resources, not assigned for storage or offline, and capacity actually used to store user data and associated audit and Metadata.
Free Raw Capacity    The capacity that is free and available for storing data or for self healing operations in case of disk or node failures or for database growth and failover.
System Resources     The capacity that is used by the CentraStar software and is never available for storing data.
Offline Capacity     The capacity that is temporarily unavailable due to reboots, offline nodes, or hardware faults. This capacity will be available as soon as the cause has been solved.
Spare Capacity       The capacity that is available on nodes that do not have the storage role assigned.
Used Capacity        The capacity that is in use to store data. This includes Protected User Data plus Audit & Metadata.
Audit and Metadata   The overhead capacity required to manage the stored data. This includes indexes, databases, and internal queues.
Protected User Data  The capacity taken by user data, including CDFs, reflections and protected copies of user files.
Used Object Count    The total object capacity already used.


Syntax
show config CLI

Use
This command displays the CLI status information.

Output
Config# show config cli
Centera interactive shell (Implementation CLI).
Version information for the CLI itself:
CLI   3.0.574, 2005-03-16
The CLI started running on Friday 25 March 2005 13:03:36 CET.
Connected to RemoteManager at 10.69.133.7:3682 using FRMP version 2.
Number of lines to print before pausing : 40
Number of columns to use for output     : 80

Description
The following information is displayed:
• CLI version information
• The date and time of the CLI launch
• The host and port number of the node (IP address:3682)
• The protocol used and the protocol version (FRMP version 2)
• Number of lines to print before pausing: The maximum number of lines that the CLI command window successively displays. The default is set to 40.
The following text appears if the maximum number of lines in the CLI command window is reached before all returned information is displayed: more? (yes, no) [yes]:
• Enter Yes or press Enter to display more info, enter No to stop the display of info. Abbreviations can be used, for example Y and N.
• Number of columns to use for output: The width of the CLI screen.


Syntax
show config license

Use
This command displays license information.

Output
Config# show config license
Centera license: xxxxxxxxxxxxxxxx
Centera license: xxxxxxxxxxxxxxxx

Description
All license numbers that have been entered are displayed. EMC assigns this information.


Syntax
show config notification

Use
This command displays the ConnectEMC settings. ConnectEMC sends daily health reports,
alerts and other notifications through an SMTP server. If an OnAlert station is used, it must
run an SMTP server to accept the information sent by ConnectEMC.

Output
Config# show config notification
ConnectEMC mode        off
ConnectEMC server 1    not configured
ConnectEMC server 2    not configured
Cluster domain         localdomain
ConnectEMC recipients  not configured
Reply address          not configured
Report interval        1 day

Description
The following information is displayed:
• ConnectEMC mode:
  On: The Centera cluster will send a daily health report, alerts and notifications.
  Off: ConnectEMC has been disabled. The Centera cluster will not send health reports, alerts and notifications.
• ConnectEMC Server 1: The IP address or host name of the SMTP server or customer workstation where OnAlert is installed.
• ConnectEMC Server 2: The IP address or host name of the backup SMTP server or a secondary customer workstation (not mandatory) where OnAlert is installed.
• Cluster Domain: The domain to which the cluster belongs.
• Last Report Sent On: The date and time at which the last health report was sent.
• ConnectEMC Recipients: The email address to where the health reports will be sent.
• Last Report Number: The number of the last report made.
• Reply Address: The email address to where the recipient of the health report can send a reply.
• Report Interval: The time interval between two reports.
• Last Report Generation: Displays the status of the previous report.


Syntax
show config owner

Use
This command displays owner-specific information.
Output
Config# show config owner
Cluster Name            Leuven
Cluster Serial Number   XXXXXXXXXX
Installation Location   Leuven
Administrative contact  John Doe
Admin email             John_Doe@emc.com
Admin phone             555 397

Description
The following information is displayed:
• Cluster Name: The cluster name assigned by the administrator
• Serial Number: The serial number assigned by EMC
• Installation Location: The physical location of the cluster
• Admin. Contact: The cluster administrator's details (name, email address, phone number)
Use the set owner command to configure the cluster identification and the cluster administrator settings.


Syntax
show config service

Use
This command displays service related information.

Output
Config # show config service
Serial Number APN01234567890
Site ID 01234567
Service ID standard
Service Info Centera Product Info
Grooming visit 06/01/2005

Description
The following information is displayed:
• Serial number: Serial number of the first cabinet in a cluster. This number must be entered for EMC notification support.
• Site ID: Clarify Site ID. A 7, 8, or 9 digit numerical value that is unique for each EMC installation site. This number must be entered for OnAlert support.
• Service ID: Enter grooming, standard or premium for the type of service response. The default ID is grooming.
• Service Info: Information relevant to the EMC engineer.
• Grooming visit: Date of the initial installation or of the last maintenance visit of the on-site EMC engineer.


Syntax
show config status

Use
This command displays the number of nodes and the software versions running on them.

Output
Config# show config status
Number of nodes: 8
Active software version: 3.0.0-319-437-6671
Oldest software version found in cluster: 3.0.0-319-437-6671

Description
The following information is displayed:
• Number of Nodes: Number of nodes in the cluster.
• Active Software Version: Information on the active Centera software version.
• Oldest Software Version: Information on the oldest software version found amongst the active running nodes in the cluster.


Syntax
show config version

Use
This command displays the version of the running software. If this command is used during
a software upgrade, it also provides the current state of the upgrade.
In the case of an upgrade, Active software version refers to the software version that was
running before the upgrade was started.
Oldest software version found in the cluster refers to the oldest software version running on
the upgraded nodes.
As a consequence, Active software version may be lower than Oldest software version found
in cluster during an upgrade.

Output
Config# show config version
Number of nodes: 5
Active software version: 3.0.0-454-511-7854
Oldest software version found in cluster: 3.0.0-454-511-7854
Activation status: complete
Number of nodes skipped: 0
Number of nodes affected: 8
Number of nodes upgraded: 8
Number of nodes standing by to upgrade: 0
Number of nodes currently upgrading: 0
Number of nodes that refused to upgrade: 0
Number of nodes that failed to upgrade: 0
Number of dead nodes: 0
Number of nodes that aborted upgrade: 0
Number of nodes still outstanding: 0
Activation finished at: Thursday 17 March 2005 13:26:51 CET
Version being activated: 3.0.0-454-511-7854

Description


The following configuration information is displayed:


Information                              Definition
Number of nodes                          Number of nodes on the cluster.
Number of nodes skipped                  Number of nodes that were not upgraded because, for instance, the nodes already have the correct software version or they are offline.
Number of nodes affected                 Number of nodes that are suitable for an upgrade.
Number of nodes upgraded                 Number of nodes that already have been upgraded.
Number of nodes standing by to upgrade   Number of nodes that are waiting to be upgraded.
Number of nodes currently upgrading      Number of nodes that are upgrading at the moment.
Number of nodes that refused to upgrade  Number of nodes that refused to upgrade because a version conflict obstructs the upgrade on those nodes.
Number of nodes that failed to upgrade   Number of nodes that encountered problems during the upgrade.
Number of dead nodes                     Number of nodes that you cannot contact anymore.
Number of nodes that aborted upgrade     Number of nodes that aborted the upgrade.
Number of nodes still outstanding        Total number of nodes that still have to be upgraded.
Activation finished                      Date and time the activation finished.
Version being activated                  Centera software version that is being activated on the cluster.


Syntax
show constraint list

Use
This command lists all known constraints on the cluster's retention settings.

Output
Config# show constraint list
name                     constraint
------------------------------------------------------------
check_retention_present  true
check_retention_range    min: 1 month - max: infinite

Description
There are two possible constraints:
• check_retention_present: This specifies whether retention is mandatory or not. This can be set to True or False.
• check_retention_range: This specifies minimum and maximum retention settings.
The current retention settings are displayed using the show constraint command. If the administrator has not set any constraints, then the default values are shown.


Syntax
show domain list

Use
This command displays all the domains that have been set up on a Centera and their
corresponding clusters. It also displays the nodes which have the access role.

Output
Config# show domain list
Domain name: Belgium
Cluster    Access Nodes
-----------------------------------------------------------------
Brussels   10.99.433.222:3218, 10.99.433.221:3218
Leuven     10.99.433.7:3218, 10.99.433.6:3218
Mechelen   10.99.433.192:3218, 10.99.433.191:3218
-----------------------------------------------------------------

Description
The following information is displayed:
• Domain Name: Name of the domain.
• Cluster: Name of the cluster associated with the domain.
• Access Nodes: Nodes on each cluster which have the access role assigned.


Syntax
show features

Use
This command displays a list of all system features and their current state.

Output
Config# show features
Feature             State
data-shredding      off
storage-strategy    performance
  performance: full
  threshold: 256 KB
storageonaccess     on
garbage-collection  on

Description
The following information is displayed:
• Name of Feature: data-shredding, storage-strategy, storageonaccess and garbage-collection.
• State: Current state of the feature. The options are on, off or performance/capacity. The data-shredding feature is set to off by default. The garbage-collection feature is set to on by default. The storageonaccess feature is on by default.


Syntax
show health

Use
This command displays a summary of the cluster's health information.

Output
Config# show health
Cluster Name            Leuven
Cluster Serial Number   XXXXXXXX
Installation Location   Leuven
Administrative contact  Admin
Admin email             admin@emc.com
Admin phone             401 397
Node     Roles    Status  Total     Free    Failures/Exceptions
-------------------------------------------------------------------------
c001n01  access   on      1,289 GB  0 GB    eth2:connected
c001n02  access   on      1,289 GB  0 GB    eth2:connected
c001n03  storage  on      1,289 GB  968 GB
c001n04  storage  on      1,289 GB  688 GB
c001n05  storage  off     1,289 GB  0 GB
c001n06  storage  off     1,289 GB  0 GB
c001n07  storage  on      1,289 GB  955 GB
c001n08  storage  off     1,289 GB  0 GB
-------------------------------------------------------------------------
Cabinet  Switch   Rail  Status
------------------------------------
1        c001sw0  1     on
1        c001sw1  0     on
root     root0    0     off
root     root1    1     off
------------------------------------

Description
The following information is displayed:
The report starts with an overview of cluster details.


• Cluster Name: Name of the cluster as assigned by the administrator
• Serial Number: Unique serial number as assigned by EMC (the serial number from the sticker P/N 005047603 or P/N 005048103 or P/N 005048326)
• Installation Location: Physical location of the cluster
• Admin Contact: Cluster administrator's details (name, email address, phone number)
The health report lists information on nodes and switches. The report first describes the nodes:
• Node: Node ID.
• Roles: Node has an access and/or storage role or is spare.
• Status: Node status, this can be On or Off.
• Total: The total physical capacity of the node.
• Free: The capacity that is free and available for storing data or for self healing operations in case of disk or node failures or for database growth and failover.
• Failures/Exceptions: References to failed disks, volumes, or network interface cards. It shows eth2: connected if the eth2 port is connected and works properly. Nothing will be displayed if a node has an unconnected eth2 port, or if the port does not work properly.
The second table describes the switches:
• Cabinet: Physical cabinet where the node is located. One cabinet always contains two switches on two different rails, to provide high availability in case of power failures.
• Switch: Switch ID.
• Rail: Power rail that feeds the switch: 0, 1, or ATS.


Syntax
show icmp

Use
This command shows the status of ICMP.

Output
ICMP enabled: [Enabled/Disabled]: Enabled

Description
ICMP is enabled or disabled.


Syntax
show ip detail

Use
This command displays detailed information of the external network configuration of the
nodes with the access role.

Output
Config# show ip detail
Node c001n01
Configuration mode (Manual/DHCP):  M
External IP address:               10.99.433.6
Subnet mask:                       999.255.255.0
IP address of default gateway:     10.99.433.1
IP address of Domain Name Server:  182.62.69.47
Status:                            on
Link speed:                        100 Mb/s (automatic)
Duplex settings:                   F
Eth0:                              00:10:81:61:11:01
Eth1:                              00:10:81:61:11:00
Eth2:                              00:10:0C:2D:18:38
Media:                             not configured

Description
The following information is displayed:
• Configuration mode: The type of host configuration. This can either be D (DHCP - Dynamic Host Configuration Protocol) or M (manual network configuration).
• External IP address: The IP address of a node with the access role that is used to connect to an external network.
• Subnet Mask: The address that refers to the network ID.
• IP Address of Default Gateway: The node with the access role uses this to gain access to the network. A gateway is a network point that acts as an entrance to another network.
• IP Address of Domain Name Server: The node with the access role uses this to gain access to the server. A Domain Name Server (DNS) locates and translates real names into IP addresses. For example, a cluster could be called "Centera" and this would be mapped to the appropriate IP address.
• Status: This identifies if the node is on or off.
• Linkspeed: The available linkspeed options are 10f, 10h, 100f, 100h, and 1000f (1000f for Gen3 hardware only). Auto is the preferred setting and refers to auto negotiation; the NICs decide on the best linkspeed they can use.
• Duplex Settings: The Duplex settings can either be half (one way) or full (both ways).
• Media Access Control: (MAC) addresses of the three interfaces (Eth0, Eth1, Eth2). A MAC address is a unique hardware network identifier.


Syntax
show ip list

Use
This command displays external IP addresses and gateways used by nodes with the access
role.

Output
Config# show ip list
Node     Mode  IP address   Gateway      Status
--------------------------------------------------------
c001n01  M     10.99.433.6  10.99.433.1  on
c001n02  M     10.99.433.7  10.99.433.1  on
--------------------------------------------------------

Description
The following information is displayed:
• Node: Node ID.
• Mode: Host configuration type. The options are D (DHCP - Dynamic Host Configuration Protocol) or M (manual network configuration).
• IP address: IP address those nodes with the access role use for external connections.
• Gateway: Gateway that the node uses for external connections.
• Status: Status of Eth. This can be 'On' or 'Off'.


Syntax
show location <blob id>/<clip id>

Use
This command shows the location of data identified by a blob ID or a C-Clip ID.

Output
Config# show location AZERTYAZERTYZeWXCVBNWXCVBNQ
blob_id is located on nodes:
NodeName: c001n02
Fragment Blob Address:
47V2SN6Q7FLR3e81TFVFE01K8G2G4107P66I3Q0J5UGT5ISQNIR24~GM~M2
PartitionID:1107802989310

Description
The following information is displayed:
? Node Name: Which nodes in the cluster store the C-Clip. C-Clips are always stored
on two separate nodes to ensure data redundancy. The nodes are specified by their
node ID. The next lines indicate the blobs and the nodes where they are located.
? Fragment Blob Address/Clip ID Address: Unique address of the blob. The term blob
refers to the actual data.
? Partition ID: Unique partition identifier.
The following error is returned if the object (Clip ID or Blob ID) is not found.
? Error: That object was not found in the cluster


Syntax
show network detail

Use
This command displays detailed network switch information.

Output
Config# show network detail
Switch c001sw0
Cabinet:        1
Status:         on
Model:          Allied Telesyn AT-RP48 Rapier 48i version 2.4.1-01 01-May-2003
Serial number:  58478131
Trunk info:     port 41:Up
                port 42:Up
                port 43:Up
Uplink info:    port 44:Not installed
Rail:           1
Switch operates okay
Switch root0
Off

Description
The following information is displayed:
? Switch: The switch is identified by the switch ID
? Cabinet: The physical cabinet where the switch is located. One cabinet always
contains two switches on two different power rails, to provide high availability in
case of power failures.
? Status: Identifies if the switch is on or off.
? Model: The switch's hardware specification, for example Allied Telesyn AT-RP48i
Rapier 48i version 2.2.2-12 05-Mar-2002.

174

? Serial Number: The switch's serial number.


? Trunk Info: The status of the switch cords in one cube: Up or Down.
? Uplink Info: The status of the switch cords between cubes: Up, Down, or Not
installed.
? Rail: the power rail that feeds the switch: 0, 1, or ATS. CxxSW0 switches are
connected to power rail 1 (to the right if you are facing the back of a Centera cabinet),
and CxxSW1 switches are connected to power rail 0. ATS is the Automatic (AC)
Transfer Switch.
? Switch Operates Okay: Indicates that the switch is operating properly. If the switch
needs to be replaced, the sentence Switch needs replacement is displayed instead.
? Switch Root: Indicates that a root switch is present; on or off identifies whether that
switch is on or off.

175

Syntax
show network status

Use
This command displays the status of all network switches in the cluster.

Output
Config# show network status
Cabinet  Switch   Rail  Status
------------------------------------
1        c001sw0  1     on
1        c001sw1  0     on
root     root0    0     off
root     root1    1     off
------------------------------------

Description
The following information is displayed:
? Cabinet: Physical cabinet where the switch is located. One cabinet always contains
two switches on two different power rails, to provide high availability in case of
power failures.
? Switch: Switch ID.
? Rail: Power rail that feeds the switch: 0, 1, or ATS. CxxSW0 switches are connected to
power rail 1 (to the right if you are facing the back of a Centera cabinet), and CxxSW1
switches are connected to power rail 0. ATS is the Automatic (AC) Transfer Switch.
? Status: Switch is on or off.

176

Syntax
show node detail <scope>

Use
This command displays the details of all the nodes on the cluster. For example, the output
below displays all nodes on the cluster that are offline. The scope of this command is
extensive and provides the following options to display (illustrative invocations follow
the list):
? Access: All nodes with access role
? Accessprincipal: The principal node for the pool server service
? All: All nodes
? Cabinet X : X being 1 - n : returns all nodes in one cabinet
? Clusterservice X: X being the name of one of the cluster services
? Failures X: nodes needing intervention
? N: Nodes with network interface failures
? V: Nodes with volume or disk failures
? *: Nodes with any type of failure
? Mirrorgroup X : X being one of the mirror groups used in the cluster
? Model X : X being one of the models used in the cluster
? Node CxxxNyy: A specific (existing) node
? Nodelist X,X: X being specific (existing) nodes
? Off: All nodes whose status is off
? On: All nodes whose status is on
? Principal: The principal node
? Rail X: X being 1 or 0 (denoting power rail 1 or power rail 0)
? Spare: All spare nodes
? Storage: All nodes with storage role
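For example, assuming the scope values listed above are typed literally after the command,
the following invocations are all valid (shown for illustration only, without their output):
Config# show node detail all
Config# show node detail access
Config# show node detail spare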

177

Output
Config# show node detail off
Node c001n05
Serial Number:                  1038200159
Status:                         off
Roles:                          storage
Model:                          118032400-A06
Rail:                           1
Software Version:               3.0.0-454-511-7854
Modem present:                  false
Internal IP:                    10.255.1.5
Total Capacity:                 1,281 GB
Used Capacity:                  0 GB
Free Capacity:                  0 GB
Faulted Capacity:               1,281 GB
Total number of objects stored: 0
Regenerations:                  1

Description
The following information is displayed:
? Node: The node name.
? Serial number: The serial number of the node.
? Status: Status of the node. This can be on or off.
? Roles: Roles assigned to the node. This can either be access and/or storage or spare.
? Model: This can either be 118032076 for nodes with a capacity of 0.6 TB, 118032306-A0X
for nodes with a capacity of 1.0 TB or 118032400-A0X for nodes with a capacity
of 1.25 TB.
? Rail: This refers to the node power status: 0, 1, or ATS. Odd Gen2 nodes are
connected to power rail 1 and even Gen2 nodes are connected to power rail 0. ATS is
the Automatic (AC) Transfer Switch.
? Software Version: The CentraStar software version.
? Modem Present: Whether a node has a modem (true) or not (false).
? Internal IP: The IP address of a node that is used to connect to the internal (cluster)
network.
? Total Capacity: The total physical capacity of the cluster/cube/node or disk.
? Used Capacity: The capacity that is used or otherwise not available to store data; this

178

includes the capacity reserved as system resources, not assigned for storage or
offline, and capacity actually used to store user data and associated audit and
Metadata.
? Free Capacity: The capacity that is free and available for storing data or for self
healing operations in case of disk or node failures or for database growth and
failover.
? Total number of objects: The total object capacity already used.
? Regenerations: Number of regeneration tasks being processed.

179

Syntax
show objects detail <scope>

Use
This command displays object capacity data about each node on the cluster.

Output
Config# show objects detail off
Node     Roles    Status  Total  Used  Free
------------------------------------------------------
c001n05  storage  off     30 M   0 M   30 M
c001n06  storage  off     30 M   0 M   30 M
c001n08  storage  off     30 M   0 M   30 M
------------------------------------------------------
Total (online nodes: 0)   90 M   0 M   90 M

Description
The following information is displayed:
? Node: Node ID.
? Roles: Roles assigned to the node. This can either be access and/or storage or spare.
? Status: Node state. This can be on or off.
? Total: The number of objects that can be stored, with M denoting "million".
? Used: The total object capacity already used.
? Free: The total number of objects that can still be written to the cluster.
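As a quick consistency check on the sample output above: for each node, Free = Total - Used
(30 M - 0 M = 30 M), and the Total row sums the three listed nodes (3 x 30 M = 90 M).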

180

Syntax
show pool capacity

Use
This command displays the pool capacity of all the pools on the cluster.

Output
Config# show pool capacity
Capacity / Pool  Quota  Used    Free   C-Clips  Files
----------------------------------------------------------------------------
Legal            5 GB   0 GB    5 GB   3        0
default          -      92 GB   -      2672     2586
Total            -      92 GB   -      2675     2586
----------------------------------------------------------------------------

Description
The following information is displayed:
? Capacity/Pool: Pool Name
? Quota: Current quota for the pool. This is the maximum amount of data that can be
written to the pool.
? Used: Current pool capacity being used.
? Free: Current available capacity until the quota is reached.
? C-Clips: Number of C-Clips stored in the pool.
? Files: Number of user files stored in the pool.
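As a consistency check on the sample output above, Free equals Quota minus Used for a pool
with a quota (Legal: 5 GB - 0 GB = 5 GB); in this sample, pools without a quota (default and
the Total row) show a dash for Quota and Free.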

181

Syntax
show pool capacitytasks

Use
This command displays the capacity tasks currently scheduled or running.

Output
Config# show pool capacitytasks
From       To         Status     ETA
-----------------------------------------------------------------
1-03-2005  1-04-2005  scheduled
-----------------------------------------------------------------

Description
The following information is displayed:
? From/To: Start and end dates of the capacity background tasks that are scheduled or
running.
? Status: Status of the task. This can be either scheduled, running or completed.
? ETA: Estimated time that the pool capacity task will be completed.

182

Syntax
show pool detail <name>

Use
This command displays the relationship a particular pool has with profiles. It
gives a complete view of the capabilities the profiles have in relation to the given
pool.

Output
Config# show pool detail Legal
Centera Pool Detail Report
------------------------------------------------------
Generated on Tuesday 29 March 2005 12:29:04 CEST
Pool Name:                  Legal
Pool ID:                    da2381fe-1dd1-11b2-84d7-a0319fc423a42
Pool Mask:                  rdqeDcw
Cluster Mask:               rdqe-cw-m
Pool Quota:                 5 KB
Used Pool Capacity:         0 GB
Free Pool Capacity:         5 KB
Number of C-Clips:          3
Number of Files:            -
Number of scheduled tasks:  1
Granted Rights to Access Profiles:
Profile Name  Granted  Effective  Monitor Cap  Enabled  Home Pool
------------------------------------------------------------------------------
legal         rdqeDcw  rdqe-cwm   yes          yes
------------------------------------------------------------------------------
Pool Mappings:
Profile Name  Scratchpad Pool Mapping  Active Pool Mapping
-------------------------------------------------------------------------
legal         Legal                    Legal
-------------------------------------------------------------------------

183

Description
Refer to show poolmapping for more information.

184

Syntax
show pool list

Use
This command displays all the pools on the cluster.

Output
Config# show pool list
Pool Name  ID                                      Profiles  Mask
------------------------------------------------------------------------------
cluster    cluster                                 0         rdqeDcp
default    default                                 2         rdqeDcw
ruled      1e481d2a-1dd2-11b2-a9d4-c98f382add85-4  1         rdqeDcw
ruler      1e481d2a-1dd2-11b2-a9d4-c98f382add85-3  1         rdqeDcw
writer     1e481d2a-1dd2-11b2-a9d4-c98f382add85-2  2         rdqeDcw
------------------------------------------------------------------------------

Description
The following information is displayed:
? Name: Name of the Pool. This is set by the system administrator and can be changed,
provided that it is unique on the cluster.
? ID: Internal unique Pool ID. This is generated when the Pool is created and does not
change. A custom application pool on a cluster is identified through a pool ID, which
is automatically calculated by the CentraStar server on which the pool is created. The
ID cannot be modified.
? Profiles: Number of profiles associated with the pool.
? Mask: Access rights to the pool.

185

Syntax
show poolmapping

Use
This command displays the poolmapping between all the profiles and pools on the cluster.

Output
Config# show pool mapping
Profile Name    Scratchpad Pool Mapping  Active Pool Mapping
----------------------------------------------------------------------------
FinanceProfile  Finance                  Finance

Description
The following information is displayed:
? show poolmapping: Lists all the mappings between pools and profiles on the cluster.

186

Syntax
show pool migration

Use
This command displays the status of the migration processes running on the cluster.

Output
Config# show pool migration
Migration Status:            finished
Average Migration Progress:  100%
Completed Online Nodes:      5/5
ETA of slowest Node:         0 hours
---------------------------------------
Active Pool Mappings
FinanceProfile   Finance

Description
The following information is displayed:
? Migration Status: Status of the migration process, for example running or finished.
? Average Migration Progress: Average percentage of the migration process that has
completed across the nodes.
? Completed Online Nodes: Number of online nodes on which the migration process
has completed.
? ETA of slowest Node: Estimated time that the slowest node will complete the
migration process.

187

Syntax
show profile detail <profile name>

Use
This command displays the relationship a specific profile has with a pool in which it has been
granted rights.

Output
Config# show profile detail test
Centera Profile Detail Report
------------------------------------------------------
Generated on Friday 25 March 2005 17:08:36 CET
Profile Name:                 test
Profile Enabled:              no
Monitor Capability:           yes
Profile Metadata Capability:  no
Home Pool:                    default
Profile Type:                 Access
Cluster Mask:                 rdqe-cw-m
Granted Rights in Application Pools:
Pool Name  Pool Mask  Granted  Effective
----------------------------------------------------------------------
default    rdqeDcw    rdqeDcw  rdqe-cwm
----------------------------------------------------------------------
Scratchpad Pool Mapping:
Active Pool Mapping:          -

Description
The following information is displayed:
? Profile Name: Name of the profile
? Profile Enabled: Profile can be enabled or disabled.
? Monitor Capability: Ability to retrieve Centera statistics.
? Profile Metadata Capability: Supports storing/retrieving per-profile metadata,
automatically added to the CDF. 'Enabled' or 'Disabled'.

188

? Home Pool: Default pool of the profile.


? Profile Type: Access or Cluster profile.
? Cluster Mask: Access rights associated with the profile.
? Granted Rights in Application Pools: Access rights the pool has granted the profile.
? Pool Name: Name of the pool.
? Pool Mask: Access rights associated with the pool.
? Granted: Access rights the pool has granted the profile.
? Effective: Access rights that are effectively in force for the profile in the pool (the
granted rights limited by the cluster mask).
? Scratchpad Pool Mapping: Refer to show poolmapping for more information.
? Active Pool Mapping: Refer to show poolmapping for more information.

189

Syntax
show profile list

Use
This command displays all the profiles on the cluster.

Output
Config# show profile list
Profile Name  Home Pool  Type    Enabled  Monitor  Metadata
----------------------------------------------------------------------------
anonymous     default    Access  yes      yes      no
console       default    Access  yes      yes      yes
ruled         ruled      Access  yes      yes      yes
ruler         ruler      Access  yes      yes      yes
test          writer     Access  yes      yes      yes
writer        writer     Access  yes      yes      yes
----------------------------------------------------------------------------

Description
The following information is displayed:
? Profile Name: Name of the profile.
? Home Pool: The profile's default home pool.
? Type: Access profile or Cluster profile.
? Enabled: Profile is enabled or disabled.
? Monitor: Profile has the monitor capability. This is the ability to retrieve Centera
statistics. This is set to Yes or No.
? Metadata: Profile has metadata capability. This is the ability to set Metadata rules.
This is set to Yes or No.

190

Syntax
show replication detail

Use
This command displays a detailed replication report.

Output
Centera Replication Detail Report
--------------------------------------------------Generated on Thursday January 13 2005 11:19:20 CET
Replication Enabled: Thursday January 13 2005 6:29:24 CET
Replication Paused: no
Replication Address: 10.99.129.427:3218
Replicate Delete: yes
Profile Name: <no profile>
Number of C-Clips to be replicated: 3,326
Number of Blobs to be replicated: 3,326
Number of MB to be replicated: 815
Number of Parked Entries: 0
Replication Speed: 0.06 C-Clip/s
Replication Speed: 12.70 Kb/s
Ingest: 2.89 C-Clip/s
Replication ETA: N/A
Replication Lag: 15 hours

Description
The following information is displayed:
? Replication Enabled: Date on which the replication was enabled. If replication has
not been setup the status is Replication Disabled.
? Replication Paused: Indicates whether replication has been paused or not. Possible
states are:
o No
o Yes (by user)

191

o Yes (integrity problem)
o Yes (regenerations ongoing)
o Yes (remote cluster is full)
o Yes (parking overflow)
o Yes (authentication failure)
o Yes (insufficient capabilities)

? Replication Address: IP address or the host name of the replication cluster.


? Replicate Delete: Indicates whether objects deleted on the source cluster will also be
deleted on the replication cluster.
? Profile Name: Name of the profile used on the replication cluster.
? Number of C-Clips: Number of C-Clips to be replicated indicates the number of C-Clips waiting to be replicated.
? Number of Blobs: Number of Blobs to be replicated indicates the number of blobs
waiting to be replicated.
? Number of MB: Number of MB to be replicated indicates the size in MB of blobs and
C-Clips waiting to be replicated.
? Number of Parked Entries: Indicates the number of entries that have failed to
replicate.
? Replication Speed: Indicates the speed at which replication is proceeding (C-Clip/s
and Kb/s).
? Ingest: Indicates the speed at which C-Clips are added to the replication queue.
? Replication ETA: Indicates the estimated date and time at which replication will be
complete based on the average activity of the last 24 hours.
? Replication Lag: Indicates the time it takes for a C-Clip to be replicated once it has
entered the replication queue.
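As a rough, illustrative consistency check on the sample report above (assuming the lag is
approximately the queue length divided by the replication speed): 3,326 C-Clips / 0.06 C-Clip/s
= 55,433 s, or roughly 15 hours, which matches the reported Replication Lag.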

192

Syntax
show replication parking

Use
This command provides detailed statistics about the replication parking.
If the cluster is heavily loaded or if there are many parked entries this command does not
show all parked entries. Use the show replication detail command instead.

Output
Centera Replication Parking Report
------------------------------------------------------Generated on Thursday January 13 2005 11:19:27 CET
Replication Enabled: Thursday January 13 2005 6:29:24 CET
Replication Paused: no
Replication Address: 10.99.129.216:3218
Replicate Delete: yes
Profile Name: <no profile>
Number of Parked Entries: 3
Failed Writes: 1
Failed Deletes: 2
Failed Privileged Deletes: 0
Failed Unknown Action: 0

Description
The following information is displayed:
? Replication Enabled: Date on which the replication was enabled. If replication has
not been setup the status is Replication Disabled.
? Replication Paused: Replication has been paused or not. Possible states are:
o No
o Yes (by user)
o Yes (integrity problem)

193

o Yes (regenerations ongoing)
o Yes (remote cluster is full)
o Yes (parking overflow)
o Yes (authentication failure)
o Yes (insufficient capabilities)

? Replication Address: IP address or host name of the replication cluster.


? Replicate Delete: Objects deleted on the cluster will also be deleted on the
replication cluster.
? Profile Name: Name of the profile used on the replication cluster.
? Number of Parked Entries: Number of entries that have failed to replicate.
? Failed Writes: Number of writes that have failed to be replicated.
? Failed Deletes: Number of deletes that have failed to be replicated.
? Failed Privileged Deletes: Number of privileged deletes on a Centera CE that have failed to
be replicated.
? Failed Unknown Action: Number of C-Clips in the parking that could not be read.

194

Syntax
show report health <filename>

Use
This command downloads the current health report to a file

Output
Config# show report health C:\Health
The report was successfully saved into C:\Health.
If the filename already exists, the following message is displayed:
File C:\Health already exists, do you want to overwrite this file? (yes, no)
[no]: Y
The report was successfully saved into C:\Health.

Description
The following information is displayed:
Enter a valid filename in which to save the information.
This is an XML file which can be opened in Internet Explorer. The file displays a health report
about the state of Centera.

195

Syntax
show restore detail

Use
This command displays the progress of the restore procedure.
When a restore is performed for a specific pool using the CLI, sometimes this command does
not display the full output.

Output
Config# show restore detail
Centera Restore Detail Report
--------------------------------------------------------
Generated on Monday 4 April 2005 14:08:14 CEST
Restore Started:                             Monday 4 April 2005 9:41:26 CEST
Restore Finished:                            Monday 4 April 2005 9:52:06 CEST
Restore Address:                             10.69.133.231:3218
Restored Pools:                              all pools
Profile Name:                                <no profile>
Restore Mode:                                full
Restore Mode:                                10/10/1990 to 10/11/1990
Restore Checkpoint:                          N/A
Estimated number of C-Clips to be restored:  N/A
Number of Parked Entries:                    0
Restore Speed:                               0.00 C-Clip/s
Restore Speed:                               0.00 Kb/s
Restore ETA:                                 N/A

Description
The following information is displayed:
? Restore Started: Date and time at which the restore was started. If no restore has
been started the status is Restore Disabled.
? Restore Finished: Date and time at which the restore finished.
? Restore Address: Restore IP address and port number.
? Restored Pools: Name of the pools which have been restored.

196

? Profile Name: Name of the restore profile (no profile for anonymous profile).
? Restore Mode: Restore mode (options are full or partial).
? Restore Checkpoint: Timestamp of the C-Clip that the restore operation is currently
restoring. This gives an indication of the progress of the restore process.
? Estimated number of C-Clips to be restored: Number of C-Clips that will be
restored.
? Number of Parked Entries: Number of C-Clips placed in the parking queue.
? Restore Speed: Speed of Restore operation.
? Restore ETA: Time when the restore operation will complete.

197

Syntax
show restore parking

Use
This command displays all blobs and C-Clips that failed to restore. Blobs and C-Clips in
restore parking automatically re-enter the restore process.

Output
Config# show restore parking
Centera Restore Parking Report
--------------------------------------------------------
Generated on Monday 4 April 2005 14:45:49 CEST
Restore Started:           Monday 4 April 2005 9:41:26 CEST
Restore Finished:          Monday 4 April 2005 9:52:06 CEST
Restore Address:           10.69.133.231:3218
Restored Pools:            all pools
Profile Name:              Finance
Restore Mode:              full
Number of Parked Entries:  0
Failed Writes:             0
Failed Unknown Action:     0

Description
The following information is displayed:
? Restore Started: Date and time when the restore started.
? Restore Finished: Date and time when the restore process finished.
? Restore Address: Restore IP address and port number.
? Restored Pools: Name of the pools which have been restored.
? Profile Name: Name of the profile(s) restored.
? Restore Mode: Full or Partial restore.
? Number of Parked Entries: Number of C-Clips waiting to be restored.
? Failed Writes: Number of write operations that failed.

198

? Failed Unknown Action: Number of Unknown actions that have failed.


The output of this command may show later times for Restore Started and Restore Finished
than the generated time of the output. The generated time is based on the time of the local
machine while the other times are based on the cluster time.

199

Syntax
show retention <name> <all>

Use
This command displays all retention values (all) or the retention values for a specific
retention class (name).

Output
Retention class name  Period
------------------------------------------
SEC_Rule_206          5 years
Saved_Emails          3 months
Tax_Records           10 years

Description
The following information is displayed:
? Retention class name: Name of the retention class. If no retention classes exist, show
retention all displays: No retention settings found. If the specified retention class does
not exist, show retention <name> displays the infinite retention period.
? Period: Length of the retention period of the retention class. Note that a month
equals 30 days, a year 365 days.
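For example, applying this conversion to the sample classes above: Saved_Emails 3 months =
3 x 30 = 90 days; SEC_Rule_206 5 years = 5 x 365 = 1,825 days; Tax_Records 10 years =
10 x 365 = 3,650 days.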

200

Syntax
show security <scope>

Use
This command displays the node's security status and other information, for example: the
cube on which the node resides, node role, locked status and compliance model. The scope of
this command is extensive and provides the following security options to display.
The output below is sample output, displaying the security details for nodes with the access
role.

Output
Cube  Node     Roles    Status  Locked  Compliance
--------------------------------------------------------------------
1     c001n01  access   on      No      Basic
1     c001n02  access   on      No      Basic
1     c001n03  storage  on      No      Basic
1     c001n04  storage  on      No      Basic
1     c001n05  storage  on      No      Basic
1     c001n06  storage  on      No      Basic
1     c001n07  storage  on      No      Basic
1     c001n08  storage  on      No      Basic
--------------------------------------------------------------------

Description
The following information is displayed:
? Cube: Cube where the nodes are located.
? Node: Node IDs. Refer to Node IDs.
? Roles: Roles assigned to the node: access and/or storage or spare.
? Status: Whether the node is on or off.
? Locked: Whether the node is locked (yes) or unlocked (no). When a node is locked, no
new service connections to that node are possible. Existing service connections will
not be closed. Only the administrator can make connections for manageability. When

201

a node is unlocked, service connections to that node are possible. All users can make
connections for manageability.
? Compliance: Whether it is a Basic Centera cluster node (Basic), a Compliance Edition
(CE) cluster node or a Compliance Edition Plus (CE+) cluster node.

202

Syntax
show snmp

Use
This command displays the current state of the SNMP configuration.
CentraStar version 2.0, 2.1, and 2.2 CE models support SNMP access. A CE+ model does not.
Version 2.3 and higher supports SNMP access on all compliance models.
Centera supports SNMP version 2 only.

Output
SNMP enabled: Disabled
Management station: 10.99.129.126:162
Community name: public
Heartbeat trap interval: 1 minute

Description
The following information is displayed:
? SNMP enabled: Whether SNMP is enabled or disabled.
? Management station: IP address and port number of the server where the SNMP
traps should be sent. The management station address can be entered as an IP
address in dotted quad format, for example 10.68.133.91:155, or as a hostname, for
example centera.mycompany.com:162.
? Community name: The community name used to authenticate the SNMP traps; it can be
considered a password. The Community name cannot be longer than 255
characters and may contain any non-control character except , , /, <, >, &,
<newline>, and <cr>.
? Heartbeat Trap Interval: Interval for sending 'I am alive' traps.

203

Update
Syntax
update domain add cluster

Use
This command adds a cluster to a domain. A domain is a logical grouping of clusters.
Clusters can be grouped logically, for example, according to physical location or on the type
of data that they are storing.
A maximum of four clusters can exist in a Centera universe.

Output
update domain add cluster
Domain name: Finance
Cluster name: New York
Cluster address: 10.88.999.6:3218
Issue the command? (yes, no) [no]: yes

Description
The following information is displayed:
? Domain name: Enter the domain name.
? Cluster name: Enter the cluster name to add to the previously entered domain.
? Cluster address: Enter the IP address of the cluster.
The following errors can be returned:
? Domain Name: An error is returned if the domain name does not exist or was misspelled (case-sensitive).
? Cluster Name: An error is returned if the cluster name does not exist or was misspelled (case-sensitive).
? Cluster Address: An error is returned if the cluster address is incorrect.

204

Syntax
update domain remove cluster

Use
This command removes a cluster from a domain. A domain is a logical grouping of clusters.
Clusters can be grouped logically, for example, according to physical location or on the type
of data that they are storing.
A maximum of four clusters can exist in a Centera universe.

Output
update domain remove cluster
Domain name: Finance
Cluster name: New York
Cluster addresses: 10.88.999.6:3218
Issue the command? (yes, no) [no]: yes

Description
The following information is displayed:
? Domain name: Enter the domain name.
? Cluster name: Enter the cluster name to remove from the previously entered domain.
? Cluster address: Enter the IP address of the cluster.
The following errors can be returned:
? Domain Name: An error is returned if the domain name does not exist or was misspelled (case-sensitive).
? Cluster Name: An error is returned if the cluster name does not exist or was misspelled (case-sensitive).
? Cluster Address: An error is returned if the cluster address is incorrect.

205

Syntax
update pool <name>

Use
This command updates the name, mask, and quota of a pool.

Output
Config# update pool Centera
Pool Name [Centera]:
Pool Mask [rdqeDcw]:
Pool Quota [1 GB]:
Issue the command?
(yes, no) [no]:

Description
The following information is displayed:
? Pool Name: Name of the pool. This must be unique and is case sensitive.
? Pool Mask: This assigns access rights (capabilities). Press return to automatically
assign all access rights, otherwise enter the individual letters corresponding to each
capability.
? Pool Quota: Current quota for the pool. This is the maximum amount of data that
can be written to the pool. This sets the size of the pool. Press return to accept the
default of 1 GB or enter a new quota.
Once the pool is created, CentraStar generates a unique ID. The display name of a pool can be
changed at will using the update pool command. The pool ID cannot be modified.
An error is returned if the pool name already exists.
If the pool to be updated does not exist, the following error is displayed:
? Command failed: The pool was not found on the cluster.

206

The access rights available are: read, write, delete, privileged-delete, C-Clip Copy, query,
exist and profile driven metadata:
? Write (w): Write to a C-Clip. WriteClip access must be enabled to write. 'Enabled' or 'Disabled'.
? Read (r): Read a C-Clip. 'Enabled' or 'Disabled'.
? Delete (d): Deletes C-Clips. 'Enabled' or 'Disabled'.
? Exist (e): Checks for the existence of a specified C-Clip. 'Enabled' or 'Disabled'.
? Privileged Delete (D): Deletes all copies of the C-Clip and can overrule retention periods. 'Enabled' or 'Disabled'.
? Query (q): Query the contents of a Pool. When set to 'Enabled', C-Clips can be searched for in the pool using a time based query. 'Enabled' or 'Disabled'.
? Clip-Copy (c): Copy a C-Clip. 'Enabled' or 'Disabled'. This capability is needed for replication and restore operations.
? Purge (p): Remove all traces of C-Clip from the cluster. 'Enabled' or 'Disabled'. Purge is only available for cluster level profiles.
? Monitor (m): Retrieves statistics concerning Centera.
? Profile-Driven Metadata: Supports storing/retrieving per-profile metadata, automatically added to the CDF. 'Enabled' or 'Disabled'.
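For example, a hypothetical session that restricts a pool named Legal to read, delete, query
and exist rights and raises its quota might look like this (the pool name, default values, and
answers are illustrative only, not output from a real cluster):
Config# update pool Legal
Pool Name [Legal]:
Pool Mask [rdqeDcw]: rdqe
Pool Quota [5 GB]: 10 GB
Issue the command? (yes, no) [no]: yes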

207

Syntax
update profile <name>

Use
This command modifies a specific profile.
When changing the profile to map to another home pool, the application needs to do another
FPPoolOpen (typically restart) for the new profile definition to take effect.

Output
Config# update profile FinanceProfile
Profile Secret [unchanged]:
Enable profile: (yes, no): [no]
Monitor Capability? (yes, no) [no]:N
Profile Type (cluster, access) [access]: access
Home Pool [default]: FinanceProfile
Granted Rights for the profile in the home pool [rdqeDcw]: rde
Issue the command? (yes, no) [no]: Y
Establish a pool entry authorization for application use? (yes, no) [no]: Y
Please enter pool authorization creation information: C:\FinanceProfile.pea

Description
The following information is displayed:
? Profile Name: This must be unique and is case sensitive.
? Profile Secret: These are the unique credentials associated with the profile. They can
either be generated by CentraStar automatically or the user can define a file that
holds the profile secret. Enter a directory to store the profile secret. The profile secret
is stored in a .TXT file on a directory on the C:\ drive.
? Monitor Capability: Retrieves Centera statistics.
? Profile Type: This can be set to a Cluster or Access profile. In this case, it should be
an Access profile.
? Home Pool: This is the profile's default pool. Every profile has a default pool. A
profile can perform all operations on its home pool. [rdqeDcw]

208

? Granted Rights: This sets the operations that can be performed on the pool of data in
the cluster.
? Pool Entry Authorization: This file contains the profile and password. Enter a
directory to store the .pea file.
The following error can be displayed when trying to update a non-existing profile:
? Command failed: The profile was not found on the cluster
The operations that can be performed on a pool are read, write, delete, privileged-delete, C-Clip Copy, query, exist, monitor and profile-driven metadata:
? Write (w): Write to a C-Clip. WriteClip access must be enabled to write. 'Enabled' or 'Disabled'.
? Read (r): Read a C-Clip. 'Enabled' or 'Disabled'.
? Delete (d): Deletes C-Clips. 'Enabled' or 'Disabled'.
? Exist (e): Checks for the existence of a specified C-Clip. 'Enabled' or 'Disabled'.
? Privileged Delete (D): Deletes all copies of the C-Clip and can overrule retention periods. 'Enabled' or 'Disabled'.
? Query (q): Query the contents of a Pool. When set to 'Enabled', C-Clips can be searched for in the pool using a time based query. 'Enabled' or 'Disabled'.
? Clip-Copy (c): Copy a C-Clip. 'Enabled' or 'Disabled'. This capability is needed for replication and restore operations.
? Purge (p): Remove all traces of C-Clip from the cluster. 'Enabled' or 'Disabled'. Purge is only available for cluster level profiles.
? Monitor (m): Retrieves statistics concerning Centera.
? Profile-Driven Metadata: Supports storing/retrieving per-profile metadata, automatically added to the CDF. 'Enabled' or 'Disabled'.

The access rights of a profile in a pool can also be updated using the set grants command.

209

Syntax
update retention <name> <period>

Use
This command is used to modify a retention period.

Output
Config# update retention Centera 10 weeks
Are you sure that you want to update this retention
period? (yes, no) [no]: Y
Issue the command?
(yes, no) [no]: Y

Description
The following information is displayed:
? Update Retention: This updates the retention period of the given retention class.
The retention period values can be entered in any combination of units of years, months,
days, hours, minutes and seconds.
For example:
Config# update retention Centera 10 weeks

The following errors can be displayed:


If the name of the retention class does not exist, the following error is displayed:
? Error: The specified retention class does not exist. The create retention command may
be used to create a new retention class.

210

If the class exists and the new period is smaller than the current one, a warning similar to
the following is displayed:
? WARNING: This retention period is shorter than the previous value of 3 months,
are you sure you want to update this retention period?
If the name of the class already exists, the following is displayed:
Are you sure that you want to update this retention period? (yes, no) [no]:
Y
Issue the command?
(yes, no) [no]: Y

On a Compliance CE+ Centera the retention period cannot be updated to a smaller value
than the existing period. If a smaller value is entered, an error similar to the following will
be displayed:
WARNING: This retention period is shorter than the previous value of 3
months.

211

How Do I..?
Overview
This section provides answers to common questions that the user may ask.
Each page in the Online Help contains 'How Do I' links. All of these links are contained
within this section and are logically divided, enabling the user to find the correct information
quickly and easily.

Change the Administrator's Password


Syntax
set security password

Use
This command changes the admin password. Change the default admin password
immediately after first login for security reasons.

Output
Config# set security password
Old Password:
New Password:
New Password (confirm):

Description
The following information is displayed:
A valid admin password is required to connect to the cluster. The administrator's password

213

can be modified:
Enter the default password.
Enter a new password. The characters , , &, /, <, >, \, ^, %, and space are not valid in
passwords.
Confirm the new password.
The following error can be displayed:
The following error is displayed when the user confirms a new password which does not
match the previously entered new password.
? Passwords are not equal

214

Change the Administrator's Details


Syntax
set owner

Use
This command changes the administrator's details and checks the cluster identification.

Output
Config# set owner
Administrator name [not configured]:John Doe
Administrator email [not configured]: JohnDoe@EMC.com
Administrator phone [not configured]: 555 3678
Location of the cluster [not configured]: Mechelen
Name of the cluster: EMC_Centera_989893
Serial number of the cluster [not configured]: CL12345567SEW3
Issue the command? (yes, no) [no]: Y

Description
The following information is displayed:
? Cluster administrator's details: Name, email address, phone number.
? Location of the cluster: Physical location of the cluster, for example, the location of
the customer site, floor, building number, and so on.
? Cluster name. Unique cluster name.
? Cluster serial number. EMC assigns this number. You can find it on the sticker with
P/N 005047603, 005048103, or 005048326 (located in the middle of the rear floor panel
directly inside the rear door).
Use the show config owner command to display these settings.

215

View Administrator's Details


Syntax
show config owner

Use
This command displays owner-specific information.
Output
Config# show config owner
Cluster Name            Leuven
Cluster Serial Number   XXXXXXXXXX
Installation Location   Leuven
Administrative contact  John Doe
Admin email             John_Doe@emc.com
Admin phone             555 397

Description
The following information is displayed:
? Cluster Name: The cluster name assigned by the administrator
? Serial Number: The serial number assigned by EMC
? Installation Location: The physical location of the cluster
? Admin. Contact: The cluster administrator's details (name, email address, phone
number)
Use the set owner command to configure the cluster identification and the cluster
administrator settings.

216

Configure the Regeneration Buffer


Syntax
set capacity regenerationbuffer

Use
This command sets a regeneration buffer per cabinet. This buffer reserves capacity to be used
for regenerating data after disk and/or node failures. The reservation can be a hard
reservation (stop), preventing write activity to use the space, or it can be a soft reservation
(alert), used for alerting only.

Output
Config# set capacity regenerationbuffer
Mode (alert, stop) [stop]:
Limit in disks per cube [1]:
Issue the command? (yes, no) [no]:

Description
The following information is displayed:
? Mode: The type of reservation you want to set when available space reaches the
regeneration buffer.
o Alert: Sends an alert when the regeneration buffer has been reached.
o Stop: Immediately ceases writing to the cabinet and sends an alert.
? Limit: The value per cabinet, in disks, of space that will be reserved for the purposes
of regeneration.
The default setting is 1 disk and stop.

217

View the Capacity Details of a Node


Syntax
show capacity detail <scope>

Use
This command displays an overview of used and free capacity for the defined nodes. The
scope of this command is extensive and provides the following capacity options to display.
The output below is sample output, displaying the capacity details for nodes with the access
role.
The CLI will take the offline nodes into account to calculate the total raw capacity.

Output
Config# show capacity detail access
Node     Roles   Status  Total Raw  System  Offline  Used   Free Raw
-------------------------------------------------------------------------
c001n01  access  on      1,289 GB   7 GB    0 GB     0 GB   0 GB
c001n02  access  on      1,289 GB   7 GB    0 GB     0 GB   0 GB
-------------------------------------------------------------------------
Total (online nodes: 2)  2,577 GB   14 GB   0 GB     0 GB   0 GB

Description
The following information is displayed:
? Total Raw: Total Raw Capacity. The total physical capacity of the
cluster/cube/node or disk.

218

? System Resources: The capacity that is used by the CentraStar software and is never
available for storing data.
? Offline: Offline Capacity. The capacity that is temporarily unavailable due to reboots,
offline nodes, or hardware faults. This capacity will be available as soon as the cause
has been solved.
? Used: Used Raw Capacity. The capacity that is used or otherwise not available to store
data; this includes the capacity reserved as system resources, not assigned for storage
or offline, and capacity actually used to store user data and associated audit and
Metadata.
? Free Raw: Free Raw Capacity. The capacity that is free and available for storing data
or for self healing operations in case of disk or node failures or for database growth
and failover.

219

View the Capacity of All Pools


Syntax
show pool capacity

Use
This command displays the pool capacity of all the pools on the cluster.

Output
Config# show pool capacity
Capacity / Pool  Quota  Used    Free   C-Clips  Files
----------------------------------------------------------------------------
Legal            5 GB   0 GB    5 GB   3        0
default          -      92 GB   -      2672     2586
Total            -      92 GB   -      2675     2586
----------------------------------------------------------------------------

Description
The following information is displayed:
? Capacity/Pool: Pool Name
? Quota: Current quota for the pool. This is the maximum amount of data that can be
written to the pool.
? Used: Current pool capacity being used.
? Free: Current available capacity until the quota is reached.
? C-Clips: Number of C-Clips stored in the pool.
? Files: Number of user files stored in the pool.

220

View Cluster Capacity


Syntax
show capacity total

Use
This command displays an overview of used and free raw capacity in the cluster and the
percentage of disk space used in relation to the total raw capacity.

Output
Config# show capacity total
Number of nodes:                     8
Number of nodes with storage role:   6
Total Raw Capacity:                  10,309 GB (100%)
Used Raw Capacity:                   7,719 GB   (75%)
Free Raw Capacity:                   2,610 GB   (25%)
Used Raw Capacity:                   7,719 GB   (75%)
System Resources:                    57 GB       (1%)
Offline Capacity:                    3,844 GB   (37%)
Spare Capacity:                      2,563 GB   (25%)
Used Capacity:                       1,255 GB   (12%)
Audit & Metadata:                    397 MB      (0%)
Protected User Data:                 1,255 GB   (12%)
Used Object Count:                   2 M

Description
The following information is displayed:
? Total Raw Capacity: The total physical capacity of the cluster/cube/node or disk.
? Used Raw Capacity: The capacity that is used or otherwise not available to store data;
this includes the capacity reserved as system resources, not assigned for storage or
offline, and capacity actually used to store user data and associated audit and Metadata.

221

? Free Raw Capacity: The capacity that is free and available for storing data or for self
healing operations in case of disk or node failures or for database growth and failover.
? System Resources: The capacity that is used by the CentraStar software and is never
available for storing data.
? Offline Capacity: The capacity that is temporarily unavailable due to reboots, offline
nodes, or hardware faults. This capacity will be available as soon as the cause has been
solved.
? Spare Capacity: The capacity that is available on nodes that do not have the storage
role assigned.
? Used Capacity: The capacity that is in use to store data. This includes Protected User
Data plus Audit & Metadata.
? Audit and Metadata: The overhead capacity required to manage the stored data. This
includes indexes, databases, and internal queues.
? Protected User Data: The capacity taken by user data, including CDFs, reflections and
protected copies of user files.
? Used Object Count: The total object capacity already used.
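As an illustrative consistency check on the sample output above (not an official formula from
this guide): Used Capacity is stated to include Protected User Data plus Audit & Metadata,
and indeed 1,255 GB + 397 MB rounds to the reported 1,255 GB. The sample is also consistent
with the second Used Raw Capacity line being broken down into System Resources + Offline +
Spare + Used Capacity (57 + 3,844 + 2,563 + 1,255 = 7,719 GB).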

222

View Node Capacity


Syntax
show node detail <scope>

Use
This command displays the details of all the nodes on the cluster. For example, the output
below displays all nodes on the cluster that are offline. The scope of this command is
extensive and provides the following options to display.

Output
Config# show node detail off
Node c001n05
Serial Number:                  1038200159
Status:                         off
Roles:                          storage
Model:                          118032400-A06
Rail:                           1
Software Version:               3.0.0-454-511-7854
Modem present:                  false
Internal IP:                    10.255.1.5
Total Capacity:                 1,281 GB
Used Capacity:                  0 GB
Free Capacity:                  0 GB
Faulted Capacity:               1,281 GB
Total number of objects stored: 0
Regenerations:                  1

Description
The following information is displayed:
? Node: The node name.
? Serial number: The serial number of the node.
? Status: Status of the node. This can be on or off.

223

? Roles: Roles assigned to the node. This can either be access and/or storage or spare.
? Model: This can either be 118032076 for nodes with a capacity of 0.6 TB, 118032306-A0X
for nodes with a capacity of 1.0 TB or 118032400-A0X for nodes with a capacity
of 1.25 TB.
? Rail: This refers to the node power status: 0, 1, or ATS. Odd Gen2 nodes are
connected to power rail 1 and even Gen2 nodes are connected to power rail 0. ATS is
the Automatic (AC) Transfer Switch.
? Software Version: The CentraStar software version.
? Modem Present: Whether a node has a modem (true) or not (false).
? Internal IP: The IP address of a node that is used to connect to the internal (cluster)
network.
? Total Capacity: The total physical capacity of the cluster/cube/node or disk.
? Used Capacity: The capacity that is used or otherwise not available to store data; this
includes the capacity reserved as system resources, not assigned for storage or
offline, and capacity actually used to store user data and associated audit and
Metadata.
? Free Capacity: The capacity that is free and available for storing data or for self
healing operations in case of disk or node failures or for database growth and
failover.
? Total number of objects: The total object capacity already used.
? Regenerations: Number of regeneration tasks being processed.

224

View the Capacity of All Nodes


Syntax
show capacity availability

Use
This command displays the capacity availability of all the nodes on the cluster.

Output
Config# show capacity availability
Number of nodes:                     7
Number of nodes with storage role:   5
Total Raw Capacity:                  9,020 GB (100%)
Used Raw Capacity:                   2,578 GB  (29%)
Free Raw Capacity:                   6,442 GB  (71%)
System Buffer:                       323 GB     (3%)
Regeneration Buffer:                 644 GB     (7%)
Available Capacity:                  5,474 GB  (61%)
Total Object Count:                  210 M    (100%)
Used Object Count:                   58         (0%)
Free Object Count:                   210 M    (100%)

Description
The following information is displayed:
? Total Raw Capacity: The total physical capacity of the cluster/cube/node or disk.
? Used Raw Capacity: The capacity that is used or otherwise not available to store data;
this includes the capacity reserved as system resources, not assigned for storage or
offline, and capacity actually used to store user data and associated audit and Metadata.

225

? Free Raw Capacity: The capacity that is free and available for storing data or for self
healing operations in case of disk or node failures or for database growth and failover.
? System Buffer: Allocated space that allows internal databases and indexes to safely grow
and failover. As the system is filled with user data, and the Audit & Metadata capacity
increases, the capacity allocated to the System Buffer decreases.
? Regeneration Buffer: Space that is allocated for regeneration. Depending on the
Regeneration Buffer Policy, this allocation can be a soft (Alert Only) or hard (Hard Stop)
allocation.
? Available Capacity: The amount of capacity available to write. If the Regeneration
Buffer Policy is set to Alert Only, this equals Free Raw Capacity - System Buffer. If the
Regeneration Buffer Policy is set to Hard Stop, this equals Free Raw Capacity - System
Buffer - Regeneration Buffer.
? Total Object Count: The number of objects that can be stored.
? Used Object Count: The total object capacity already used.
? Free Object Count: The total number of objects that can still be written to the cluster.
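As an illustrative check against the sample output above (assuming the Hard Stop policy, which
this guide describes as the default regeneration buffer setting): Available Capacity = Free Raw
Capacity - System Buffer - Regeneration Buffer = 6,442 - 323 - 644 = 5,475 GB, which matches
the reported 5,474 GB to within rounding.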

226

View the Capacity of the Regeneration Buffer


Syntax
show capacity regenerationbuffer

Use
This command displays the regeneration buffer mode (alert or stop) and the regeneration
buffer size.

Output
Config# show capacity regenerationbuffer
Capacity Regeneration Buffer Mode: Alert {or} Hard Stop
Capacity Regeneration Buffer Limit: 1 disk

Description
The following information is displayed:
? Capacity Regeneration Buffer Mode:
o Alert sends an alert when the regeneration buffer has been reached.
o Hard Stop immediately ceases writing to the cabinet and sends an alert.
? Capacity Regeneration Buffer Limit: Size of the regeneration buffer.

227

Configure Centera
Once the tools for the system operator have been installed, the system operator should complete
the following procedures to ensure proper access controls to the Centera (an illustrative
command sketch follows the list):
? Disable the anonymous profile.
? Change the password for the admin account using the CLI.
? Grant the cluster mask the maximum capabilities any application should have on the
Centera.
? Disable the Purge capability for the cluster mask.
? Lock the cluster when done.
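A minimal, illustrative sketch of the first steps, using only commands documented in this help.
All prompt values and defaults shown here are hypothetical, intermediate prompts are abbreviated,
and the command for locking the cluster is described elsewhere in this guide:
Config# update profile anonymous
Profile Secret [unchanged]:
Enable profile: (yes, no): [no] no
... (accept the remaining prompts, then confirm with yes)
Config# set security password
Old Password:
New Password:
New Password (confirm):
Config# set cluster mask
Cluster Mask [rdqeDcwpm]: rdqe-cw-m
Issue the command? (yes, no) [no]: yes
The entered cluster mask in this sketch omits the Purge (p) capability, in line with the list above.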
For each new application that is using Centera, the following steps must be followed.

Add-on 1 Replication Target


The customer wants to set up replication for one or more of these applications to another
cluster.
Refer to the procedure in Centera Online Help on how to set up replication for one or more
pools.

Add-on 2 Replication Target


If the customer wants the new cluster to be a replication target for another cluster, the
following steps must be followed.

Upgrade with Data in the Default Pool


If the customer has one or more applications using the cluster and has already data in the
default pool, the following steps should be followed.

228

Learn More About Centera


Centera is a networked storage system specifically designed to store and provide fast, easy
access to fixed content (information in its final form). It is the premier solution to offer online
availability with long-term retention and assured integrity for this fastest-growing category
of information.

Fixed Content Solution


Centera provides a simple, scalable, secure storage solution for cost-effective retention,
protection, and disposition of a wide range of fixed content, including X-rays, voice
archives, electronic documents, e-mail archives, check images, and CAD/CAM designs.
Exceptional performance, seamless integration, and proven reliability make Centera the
online enterprise archiving standard for virtually any application and data type.

Centera Features
? Content Addressed Storage: Centera generates a unique address for each stored
piece of data. Easily store and retrieve vast amounts of digital fixed content, such as
archived e-mails and electronic documents, and be sure of its authenticity and integrity.
? Content Protection: Centera regenerates data if a component fails. Data is mirrored
or segmented on other nodes in Centera, ensuring data redundancy. Centera
constantly monitors the health of the system and detects potential problems before
they affect the system.
? Compliance: Centera Compliance Edition Plus (CE+) is designed to meet the strict
requirements for electronic storage media as established by regulations from the
Securities and Exchange Commission and other national and international regulatory
groups. It draws on the core strengths of the Centera platform while adding
extensive compliance capabilities.
? Granular Access Control: Centera prevents unauthorized applications from storing
data on or retrieving data from a Centera. Data can be logically separated. It is
possible to prevent one application from accessing C-Clips written by another
application.
? Self Healing: Centera is a no single point of failure system; it enables dynamic
expansion when more storage is required and self healing if hardware failure occurs.

229

Centera detects new nodes and excludes any drives that fail, before regenerating
their data. Centera constantly checks data integrity and verifies that a copy of data is
always available.
? Disaster Recovery: Centera can transparently replicate all stored data to a remote
cluster to support disaster recovery. Data can be stored safely in a separate
geographical location to ensure data redundancy.
? Centera Monitoring: Centera Sensors continually run on a Centera to monitor the
state of its hardware and software components and raise alerts when appropriate.
Often, these alert messages are only informational for the system administrator and
no action is required by the user, due to Centera self healing and regeneration
abilities.

Monitor Centera
Centera can be monitored using the following channels:
? Centera Viewer: Centera Viewer provides a set of diagnostic and monitoring tools used to
discover, manage, and report the health of a Centera. It provides status information on all
aspects of the system and its components.
? Centera Tools: CenteraVerify and CenteraPing. CenteraVerify tests the connection to a
Centera cluster and displays cluster information. Refer to CenteraVerify Users Guide,
P/N 300-002-055, for more information. CenteraPing checks if the Centera cluster is
properly connected to the local area network. Refer to CenteraPing Users Guide,
P/N 300-002-054, for more information.
? CLI: The CLI (Command Line Interface) enables a system administrator to configure and
monitor cluster settings, send notification to customer services, manage application
profiles, control replication and restore, and retrieve information from a cluster such as
capacity, cluster settings, administrative details and more.
? ConnectEMC: If ConnectEMC is enabled, alerts are automatically sent to the EMC
Customer Support Center where it is determined if intervention by an EMC engineer is
necessary. ConnectEMC uses an XML format for sending alert messages to EMC. The
XML messages are encrypted. ConnectEMC sends email messages to the EMC Customer
Support Center via the customer's SMTP infrastructure or via a customer workstation
with EMC OnAlert™ installed.

230

EMC OnAlert is an application that provides remote support functionality to networked EMC
devices and includes Automatic Error Reporting and Remote Technical Support. If an OnAlert
station is used, it must run an SMTP server to accept the health reports and alert messages.
An on-site EMC engineer can enable or disable ConnectEMC. In order to provide optimal
customer service, EMC strongly recommends that all clusters are configured with ConnectEMC.
? SNMP: The Simple Network Management Protocol is an Internet-standard protocol for
managing devices on IP networks. The SNMP agent runs on all nodes with the access role
and proactively sends messages called SNMP traps to a network management station.
Refer to Online Help for more information.
? ConnectEMC Notification: The system administrator can receive email notification with
an HTML formatted copy of the alert message. Refer to Online Help for more information.
? EMC ControlCenter: EMC ControlCenter users can monitor one or more Centera clusters
in their storage environment.
? Monitoring API: The Centera SDK (Software Developers Kit) can receive alerts using the
MoPI.

Add a Cluster to a Domain


Syntax
update domain add cluster

Use
This command adds a cluster to a domain. A domain is a logical grouping of clusters.
Clusters can be grouped logically, for example, according to physical location or on the type
of data that they are storing.
A maximum of four clusters can exist in a Centera universe.

231

Output
update domain add cluster
Domain name: Finance
Cluster name: New York
Cluster address: 10.88.999.6:3218
Issue the command? (yes, no) [no]: yes

Description
The following information is displayed:
? Domain name: Enter the domain name.
? Cluster name: Enter the cluster name to add to the previously entered domain.
? Cluster address: Enter the IP address of the cluster.
The following errors can be returned:
? Domain Name: An error is returned if the domain name does not exist or was misspelled (case-sensitive).
? Cluster Name: An error is returned if the cluster name does not exist or was misspelled (case-sensitive).
? Cluster Address: An error is returned if the cluster address is incorrect.

Define a Cluster Mask


Syntax
set cluster mask

Use
This command enables the administrator to change the cluster level mask. This level of access
control specifies which actions to block on the cluster, regardless of the profile and/or pool
that is being used.
For example, if the cluster denies read access, no application will be able to perform read

232

operations on any pool.

Output
Config# set cluster mask
Cluster Mask [rdqe-cw-m]:
Issue the command? (yes, no) [no]: n

Description
The access rights available are: read, write, delete, privileged-delete, C-Clip Copy, purge, query, exist, and monitor:
? Write (w): Write to a C-Clip. WriteClip access must be enabled to write. 'Enabled' or 'Disabled'.
? Read (r): Read a C-Clip. 'Enabled' or 'Disabled'.
? Delete (d): Deletes C-Clips. 'Enabled' or 'Disabled'.
? Exist (e): Checks for the existence of a specified C-Clip. 'Enabled' or 'Disabled'.
? Privileged Delete (D): Deletes all copies of the C-Clip and can overrule retention periods. 'Enabled' or 'Disabled'.
? Query (q): Query the contents of a Pool. When set to 'Enabled', C-Clips can be searched for in the pool using a time based query. 'Enabled' or 'Disabled'.
? Clip-Copy (c): Copy a C-Clip. 'Enabled' or 'Disabled'. This capability is needed for replication and restore operations.
? Purge (p): Remove all traces of C-Clip from the cluster. 'Enabled' or 'Disabled'. Purge is only available for cluster level profiles.
? Monitor (m): Retrieves statistics concerning Centera.
? Profile-Driven Metadata: Supports storing/retrieving per-profile metadata, automatically added to the CDF. 'Enabled' or 'Disabled'.
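For example, to block read operations cluster-wide (the case described in the Use section above),
a hypothetical session might look like the following; the entered mask value is purely illustrative
and simply omits the read (r) letter from the capabilities kept in the mask:
Config# set cluster mask
Cluster Mask [rdqe-cw-m]: dqe-cw-m
Issue the command? (yes, no) [no]: yes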

233

Remove a Cluster from a Domain


Syntax
update domain remove cluster

Use
This command removes a cluster from a domain. A domain is a logical grouping of clusters.
Clusters can be grouped logically, for example, according to physical location or on the type
of data that they are storing.
A maximum of four clusters can exist in a Centera universe.

Output
update domain remove cluster
Domain name: Finance
Cluster name: New York
Cluster addresses: 10.88.999.6:3218
Issue the command? (yes, no) [no]: yes

Description
The following information is displayed:
? Domain name: Enter the domain name.
? Cluster name: Enter the cluster name to remove from the previously entered domain.
? Cluster address: Enter the IP address of the cluster.
The following errors can be returned:
? Domain Name: An error is returned if the domain name does not exist or was misspelled (case-sensitive).
? Cluster Name: An error is returned if the cluster name does not exist or was misspelled (case-sensitive).
? Cluster Address: An error is returned if the cluster address is incorrect.

234

View the Health of a Cluster


Syntax
show features

Use
This command displays a list of all system features and their current state.

Output
Config# show features
Feature              State
data-shredding       off
storage-strategy     performance
  performance: full
  threshold: 256 KB
storageonaccess      on
garbage-collection   on

Description
The following information is displayed:
? Name of Feature: data-shredding, storage-strategy, storageonaccess and garbage-collection.
? State: Current state of the feature. The options are on, off or performance/capacity.
The data-shredding feature is set to off by default. The garbage-collection feature is
set to on by default. The storageonaccess feature is on by default.

235

Define and Manage Retention Periods


Syntax
create retention <name> <period>

Use
This command creates a retention period class and sets the length of the retention period.

Output
create retention CL1 3 Years
WARNING: Once activated, a retention period class cannot be deleted without
returning your Centera to EMC manufacturing and deleting ALL existing data
on your cluster.
Are you sure you want to create this retention period class?
(yes, no) [no]: Yes
Issue the command?
(yes, no) [no]: Yes

Description
The following information is displayed:
? Name: Name to give to the retention period class. The name can contain a maximum
of 255 characters and must not contain the following characters ' " / & < > <tab>
<newline> or <space>. A retention period class enables changes to be made to a C-Clip without modifying the C-Clip itself. The retention class is a symbolic
representation of the retention period.
? Period: Length of time you want to set the retention period class to. This can be in
minutes, days, weeks, months or years. A retention period is the time that a data
object has to be stored before an application is allowed to delete it.
The following errors can be displayed:
If the name of the retention period class already exists the following error will be displayed:
? Error: The specified retention class already exists.

236

The update retention command may be used to change the retention period. The retention
period values can be entered in any combination of units of years, months, days, hours,
minutes and seconds. The default unit is seconds. The value infinite is also allowed.
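For instance, a hypothetical session (the class name and values are illustrative, not taken from this help) that combines several units might look like the following:
Config# update retention CL1 1 year 6 months
Are you sure that you want to update this retention period? (yes, no) [no]: Y
Issue the command? (yes, no) [no]: Y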

Modify a Retention Period


Syntax
update retention <name> <period>

Use
This command is used to modify a retention period.

Output
Config# update retention Centera 10 weeks
Are you sure that you want to update this retention
period? (yes, no) [no]: Y
Issue the command?
(yes, no) [no]: Y

Description
The following information is displayed:
? Update Retention: This updates the retention period of the given retention class.
The retention period values can be entered in any combination of units of years, months,
days, hours, minutes and seconds.
For example:
Config# update retention Centera 10 weeks

The following errors can be displayed:

237

If the name of the retention class does not exist, the following error is displayed:
? Error: The specified retention class does not exist. The create retention command may
be used to create a new retention class.
If the class exists and the new period is smaller than the current one, a warning similar to the
following is displayed:
? WARNING: This retention period is shorter than the previous value of 3 months, are
you sure you want to update this retention period?
If the name of the class already exists, the following is displayed:
Are you sure that you want to update this retention period? (yes, no) [no]:
Y
Issue the command?
(yes, no) [no]: Y

On a Compliance Edition Plus (CE+) Centera the retention period cannot be updated to a smaller value
than the existing period. If a smaller value is entered, an error similar to the following will be
displayed:
WARNING: This retention period is shorter than the previous value of 3 months.

238

Set and Change the Default Retention Period


Syntax
set security defaultretention <value>

Use
This command sets the default retention period for the entire cluster.
The default retention period is only applicable to C-Clips for which no retention period was
specified by the SDK. The default retention period is not added to the CDF and is only used
when a delete is issued.

Output
Config# set security defaultretention 1 year
Issue this command?
(yes, no) [no]:

Description
The following information is displayed:
The value of the default retention period can be expressed in (milli) seconds, minutes, days,
months, and years. This only applies for GE models.
The following errors can be displayed:
If a negative value is entered, the following error is displayed:
? An illegal retention period has been entered.

239

View ConnectEMC Settings


Syntax
show config notification

Use
This command displays the ConnectEMC settings. ConnectEMC sends daily health reports,
alerts and other notifications through an SMTP server. If an OnAlert station is used, it must
run an SMTP server to accept the information sent by ConnectEMC.

Output
Config# show config notification
ConnectEMC mode          off
ConnectEMC server 1      not configured
ConnectEMC server 2      not configured
Cluster domain           localdomain
ConnectEMC recipients    not configured
Reply address            not configured
Report interval          1 day

Description
The following information is displayed:
? ConnectEMC mode
On: The Centera cluster will send a daily health report, alerts and notifications.
Off: ConnectEMC has been disabled. The Centera cluster will not send health reports, alerts
and notifications.
? ConnectEMC Server 1: The IP address or host name of the SMTP server or customer
workstation where OnAlert is installed.
? ConnectEMC Server 2: The IP address or host name of the backup SMTP server or a
secondary customer workstation (not mandatory) where OnAlert is installed.

240

? Cluster Domain: The domain to which the cluster belongs.


? Last Report Sent On: The date and time at which the last health report was sent.
? ConnectEMC Recipients: The email address to which the health reports will be sent.
? Last Report Number: The number of the last report made.
? Reply Address: The email address to which the recipient of the health report can
send a reply.
? Report Interval: The time interval between two reports.
? Last Report Generation: Displays the status of the previous report.

Verify Email Connectivity to EMC


Syntax
notify

Use
This command sends an email to the EMC Customer Support Center to verify email
connectivity. It can only be issued when ConnectEMC has been enabled on the cluster.
On a Compliance Edition Plus (CE+) model all remote serviceability procedures allow
modems to be connected to the Centera cabinet for the duration of the service intervention. It
is mandatory that the modems are disconnected when the service intervention is complete.

Output
Config# notify
Enter your name: admin
Enter a telephone number where you can be reached now: 01-234.2541
Describe maintenance activity:
End your input by pressing the Enter key twice
Please confirm:

241

Centera Maintenance Report
--------------------------------------------------------
Generated on Tuesday February 1, 2005 12:10:43 CET
Cluster Name CUBE4-V3
Cluster Serial Number APM00205030103
Installation Location 171 South Street
Date of last maintenance Monday 31 January 2005 20:59:45 CET
Administrative contact admin
Admin email admin_contact@home.com
Admin phone 01-234.2541
Activity reported
-----------------
Performed by admin
Reachable at 01-234.2541
--------------------------------------------------------
Can this message be sent (yes, no) [yes]: Y

Description
The following information is displayed:
? Enter your name: The name of the person who sends the email.
? Enter a telephone number: The telephone number where the person who sends the
email can be reached.
? Describe maintenance activity: The maintenance activity that has taken place.
End the input process by pressing Enter twice. The cluster now generates a report.
When ConnectEMC is not configured, the following error is displayed:
? Unable to send notification. Please verify that the cluster is correctly configured to
send email and that all necessary information has been correctly entered.
This command does not accept characters beyond ASCII127 for names and telephone
numbers.

242

Change ConnectEMC Parameters


Syntax
set notification

Use
This command changes the parameters of ConnectEMC. ConnectEMC sends daily health
reports, alerts, and other notifications through an SMTP server. If an OnAlert station is used,
it must run an SMTP server to accept the information sent by ConnectEMC.

Output
Config# set notification
Mandatory: what is the primary ConnectEMC smtp relay? [not configured]:
What is the secondary ConnectEMC smtp relay? [not configured]:
Domain to which the cluster belongs? [local]: New York
Issue the command? (yes, no) [no]:

Description
The following information is displayed:
? Primary ConnectEMC smtp relay: The IP address or host name of an SMTP server or
customer workstation on which OnAlert is installed. The SMTP address for OnAlert
is Centera-alert@alert-station.customer.net.
? Secondary ConnectEMC smtp relay: The IP address or host name of a backup SMTP
server or a secondary customer workstation (not mandatory) on which OnAlert is
installed. This parameter is optional.
? Domain: The domain to which the SMTP server belongs. For example, a cluster
installed at Acme Inc. would probably have the local domain set to "acme.com". This
is required even though the cluster itself cannot receive emails.
This command does not accept characters beyond ASCII127 for email and IP addresses.
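As an illustration only (the relay names and domain below are hypothetical, not defaults), a completed session could look like this:
Config# set notification
Mandatory: what is the primary ConnectEMC smtp relay? [not configured]: smtp.acme.com
What is the secondary ConnectEMC smtp relay? [not configured]: smtp2.acme.com
Domain to which the cluster belongs? [local]: acme.com
Issue the command? (yes, no) [no]: yes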

243

Add a Cluster to a Domain


Syntax
update domain add cluster

Use
This command adds a cluster to a domain. A domain is a logical grouping of clusters.
Clusters can be grouped logically, for example, according to physical location or the type of
data that they store.
A maximum of four clusters can exist in a Centera universe.

Output
update domain add cluster
Domain name: Finance
Cluster name: New York
Cluster address: 10.88.999.6:3218
Issue the command? (yes, no) [no]: yes

Description
The following information is displayed:
? Domain name: Enter the domain name.
? Cluster name: Enter the cluster name to add to the previously entered domain.
? Cluster address: Enter the IP address of the cluster.
The following errors can be returned:
? Domain Name: An error is returned if the domain name does not exist or was misspelled (case-sensitive).
? Cluster Name: An error is returned if the cluster name does not exist or was misspelled (case-sensitive).
? Cluster Address: An error is returned if the cluster address is incorrect.

244

Create a Domain
Syntax
create domain <name>

Use
This command creates a new domain. A domain is a logical grouping of clusters. For example,
clusters could be grouped together based on their physical location or the type of data they
store.
When CentraStar is installed, a default domain is created on each cluster. This domain is called
Default. The domain takes the access nodes associated with the cluster and uses their IP
addresses as the access addresses for the cluster. The domain contains one cluster, called
DefaultCluster.

Output
Config# create domain Finance
Issue the command? (yes, no) [no]: Y

Description
The following information is required:
? Name of the domain: Domain name must be unique.
The command returns an error if the name already exists.
A domain name is case-sensitive.
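For example, a hypothetical session that creates a domain and then adds a cluster to it (the names and address are illustrative, reusing the values from the examples in this help) might look like this:
Config# create domain Finance
Issue the command? (yes, no) [no]: Y
Config# update domain add cluster
Domain name: Finance
Cluster name: New York
Cluster address: 10.88.999.6:3218
Issue the command? (yes, no) [no]: yes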

245

Delete a Domain
Syntax
delete domain <name>

Use
This command deletes a domain.

Output
delete domain Finance
Issue the command? (yes, no) [no]: Y

Description
The following information is displayed:
? Delete Domain: Enter a domain name of an existing domain.
A domain name is case-sensitive.
The following error can be displayed:
The command returns an error if the domain name does not exist:
? This domain name is not defined

246

Remove a Cluster from a Domain


Syntax
update domain remove cluster

Use
This command removes a cluster from a domain. A domain is a logical grouping of clusters.
Clusters can be grouped logically, for example, according to physical location or the type of
data that they store.
A maximum of four clusters can exist in a Centera universe.

Output
update domain remove cluster
Domain name: Finance
Cluster name: New York
Cluster addresses: 10.88.999.6:3218
Issue the command? (yes, no) [no]: yes

Description
The following information is displayed:
? Domain name: Enter the domain name.
? Cluster name: Enter the cluster name to remove from the previously entered domain.
? Cluster address: Enter the IP address of the cluster.
The following errors can be returned:
? Domain Name: An error is returned if the domain name does not exist or was misspelled (case-sensitive).
? Cluster Name: An error is returned if the cluster name does not exist or was misspelled (case-sensitive).
? Cluster Address: An error is returned if the cluster address is incorrect.

247

Set Up ICMP
Syntax
set icmp

Use
This command enables or disables ICMP.

Output
Enable ICMP (yes, no) [no]:
Issue the command? (yes, no) [no]:

Description
ICMP is now enabled or disabled.

View ICMP Settings


Syntax
show icmp

Use
This command shows the status of ICMP.

Output
ICMP enabled: [Enabled/Disabled]: Enabled

248

Description
ICMP is enabled or disabled.

Learn About Content Protection Schemes


Content Protection
An important characteristic of the CentraStar architecture is that there is a fixed overhead for
any given transaction regardless of the size of the data object that is being transferred. Due to
this overhead, throughput is highly dependent on the size of the data written to a Centera.
To assure continuous data availability and allow self healing, Centera uses Content
Protection Mirrored (CPM) or Content Protection Parity (CPP) to protect all data on the cluster.
Data copies or fragments are stored on different nodes, thus ensuring data redundancy in the
event of disk or node failure.

Regeneration
Regeneration prevents data loss from disk and node failures, and provides self-healing
functionality. It relies on the existence of mirrored copies (Content Protection Mirrored, CPM)
or fragmented segments (Content Protection Parity, CPP) of the data on different nodes in the
cluster.

Regeneration Levels
There are two levels of regeneration and two mechanisms to trigger regeneration tasks (node
and disk). Disk regeneration, for instance, occurs when a node detects a disk failure. The
node informs the other nodes in the cluster. This triggers a regeneration task on every node
that has a copy of the objects stored on the failed disk.
Node level regeneration is triggered during a periodic check, when the system cannot reach a
node for more than two hours (except on a 4 node system). This triggers a regeneration task
on every node that has a copy of the objects stored on the failed node.

249

Create a Pool Mapping


Syntax
create poolmapping <profile name> <pool name>

Use
This command creates a mapping between a profile used to write C-Clips to the default pool
and a pool. For each profile with legacy data in the default pool, a pool mapping is required
to be created before data can be migrated.
Pool mappings are added to the scratchpad which is used to prepare mappings until they are
correct and ready to be migrated.

Output
Config# create poolmapping FinanceProfile FinancePool
Warning: The given pool is not the home pool of the given profile
Issue the Command? (yes, no) [no]: Y
Added the mapping to the mapping scratchpad.

Description
The following information is displayed:
? Create poolmapping: Enter the name of the profile followed by the pool on which to
create a mapping to.
The following errors can be returned:
? Command failed: The pool was not found on the cluster.
? Command failed: It is not possible to map a profile to the cluster pool.
? Command failed: It is not possible to map a profile to the default pool.
A warning is given if the pool is not the home pool of the given profile.

250

Delete a Pool Mapping


Syntax
delete poolmapping <profile name> <pool name>

Use
This command deletes a mapping between the pool and the profile.

Output
Config# delete poolmapping FinanceProfile FinancePool
Issue the Command? (yes, no) [no]: Y
Deleted the mapping to the mapping scratchpad.

Description
The following information is displayed:
? Delete Poolmapping: Enter the name of the profile followed by the pool on which to
delete a mapping from.
The following errors can be displayed:
? Command failed: The mapping was not found in the scratchpad of the cluster.

251

Migrate Legacy Data


The purpose of this procedure is to explain how to migrate legacy data from the default pool
to a custom application pool. In some cases, applications need access to legacy data held in
the default pool to function correctly.
C-Clips written using version 2.2 SP2 or earlier will always belong to the default pool. It is
not possible to migrate these C-Clips to an application pool.
A migration task does however exist for C-Clips written using the SDK 2.3 or higher. These
C-Clips can be moved into an application pool, provided that an appropriate access profile
was used when writing the C-Clip.
Note: This procedure consumes system resources and you may notice an impact on
performance whilst it is running.
1. Launch the CLI and connect to the source cluster.
2. Create a mapping between the profile used to write C-Clips to the default pool and a
pool using the create poolmapping command. The mappings are placed on the
scratchpad which is used to prepare mappings until the user has the correct
mappings to start migration.
3. View the pool mapping to check that it is correct using the show poolmapping
command.
4. Once the mappings are in place, run a migration task using the migrate
poolmapping start command. This command copies all scratchpad mappings to the
active mappings and re-starts the migration process.
5. Display the migration task using the show pool migration command.

For each profile with legacy data in the default pool, perform the above procedure.
The following example is used to explain the above procedure.

252

Migrating Legacy Data Example


In this example, a mapping is made between a pool and a profile to migrate legacy data from
the default pool into a custom application pool.
Any C-Clips held in the default pool that use the FinanceProfile are migrated to the defined
custom application pool which in this case is called FinancePool.
Centera 1
Config# create poolmapping FinanceProfile FinancePool
Config# migrate poolmapping start
View Migration Process
Config# show pool migration
Migration Status:            finished
Average Migration Progress:  100%
Completed Online Nodes:      5/5
ETA of slowest Node:         0 hours

Refer to the CLI Reference Guide for the complete version of each CLI command.

Provide Legacy Protection


The purpose of this procedure is to explain how to provide legacy protection to applications
using Centera. In some cases, applications need access to legacy data held in the default pool
to function correctly.
The default pool contains every C-Clip that is not contained in a custom application pool. The
main purpose of this pool is backwards compatibility and protection of historical data.
C-Clips written using version 2.2 SP2 or earlier will always belong to the default pool. It is not
possible to migrate these C-Clips to an application pool. A migration task does however exist
for C-Clips written using the SDK 2.3 or higher. These C-Clips can be moved into pools,
provided that an appropriate access profile was used when writing the C-Clip.
1. Launch the CLI and connect to the source cluster.

253

2. Create a pool using the create pool command. Assign rights (capabilities) to the pool
and set the quota.
3. Create an access profile using the create profile command.
4. Create or generate a profile secret and assign rights for the profile in its home pool.
Set the capabilities the access profile should have in its home pool.
5. Enter Pool Entry Authorization creation information. Enter the pathname for a file
where the specified file is accessible on the local system and where the Pool Entry
Authorization file will be saved. If the file exists, then the user is prompted to
overwrite the file.
6. Grant the profile minimum access rights to the home pool to enable the application
to function properly. These may be: read, exist, query and/or delete.
7. Grant the profile read only access to the default pool. No data can be added because
the application does not have write access.

The following example is used to explain the above procedure.

Provide Legacy Protection Example


In this example two pools are created, each of which has an access profile associated with it.
Legacy data is always held in the default pool which is automatically created. Any objects
that exist on the cluster before replication is enabled are held in the default pool. To copy
data to a target cluster that was created after replication was set up, use the restore start
command.

Centera 1
Config# create pool App2.0
Config# create profile Application
granted rights for the profile in the home pool [rdqeDcw]: rw
Config# create pool Media
Config# create profile MediaApplication
granted rights for the profile in the home pool [rdqeDcw]: rwd
Config# set grants Default Application
granted pool rights for profile [rdqeDcw]: r

Refer to the CLI Reference Guide for the complete version of each CLI command.

254

Revert Pool Mapping


Syntax
migrate poolmapping revert

Use
This command reverts the scratchpad mappings to the active mappings. It copies the active
mappings over the scratchpad mappings. It can be used to bring the scratchpad back in sync
with the active mappings.

Output
Config# migrate poolmapping revert
Do you want to lose the scratchpad mappings and revert the scratchpad to the
active mappings? (yes, no) [no]: Y
Issue the command? (yes, no) [no]: Y

Description
The scratchpad mappings revert to the active settings.

Start Migration Process


Syntax
migrate poolmapping start

Use
This command starts the migration process. This copies data from the default pool to a
custom pool. It copies the scratchpad mappings to the active mappings and starts the

255

migration process on each node asynchronously.

Output
Config# migrate poolmapping start
Do you want to start a migration process based on the scratchpad mappings?
(yes, no) [no]: Y
Capacity information might become incorrect
Do you want to recalculate it? (yes, no) [yes]: Y
Issue the Command? (yes, no) [no]: Y
Committed the scratchpad mappings and started the migration process.

Description
The following information is displayed:
? Start Migration Process: The mapping set up between the defined pools and profiles
is started. Use the create poolmapping command to map profiles to pools.
If there is no change to either the scratchpad mappings or the active mappings, the following
is displayed:
? No changes detected between active and scratchpad mappings.
? Trigger the migration process to restart? (yes, no) [no]: Y
The following errors can be returned:
If the pool cannot be found, the following error is displayed:
? Command failed: The pool was not found on the cluster.
If the profile cannot be found, the following error is displayed:
? Command failed: The profile was not found on the cluster.

256

View Pool Mapping


Syntax
show poolmapping

Use
This command displays the poolmapping between all the profiles and pools on the cluster.

Output
Config# show poolmapping
Profile Name        Scratchpad Pool Mapping     Active Pool Mapping
----------------------------------------------------------------------------
FinanceProfile      Finance                     Finance

Description
The following information is displayed:
? show poolmapping: Lists all the mappings between pools and profiles on the cluster.

257

View Pool Migration


Syntax
show pool migration

Use
This command displays the status of the migration processes running on the cluster.

Output
Config# show pool migration
Migration Status:            finished
Average Migration Progress:  100%
Completed Online Nodes:      5/5
ETA of slowest Node:         0 hours
---------------------------------------
Active Pool Mappings
FinanceProfile               Finance

Description
The following information is displayed:
? Migration Status: The current status of the migration process, for example running or
finished.
? Average Migration Progress: The average percentage of the migration process that has
completed across the nodes.
? Completed Online Nodes: Number of online nodes on which the migration process
has completed.
? ETA of slowest Node: Estimated time remaining until the slowest node completes the
migration process.

258

Change the Settings of an Access Node


Syntax
set node ip <node ID>

Use
This command changes the connection details for a node with the access role. Node ID refers
to a specific node. The syntax of a node ID is cxxxnyy, where cxxx identifies the cabinet and
nyy the node in that cabinet (xxx and yy are numeric).

Output
Config# set node c001n05
Use DHCP (yes, no) [no]: no
IP address [10.88.999.91]:
Subnet mask [255.333.255.0]:
IP address of default gateway [10.88.999.1]:
IP address of Domain Name Server [152.99.87.47]:
Issue the command? (yes, no) [no]: yes

Description
The following information is displayed:
? Use DHCP: Dynamic Host Configuration Protocol (DHCP) automatically assigns IP
addresses to devices attached to a Local Area Network (LAN). If you select yes for
this option then the following fields are not required.
? IP address: IP address of a node with the access role. Ping the IP address to make
sure that it is not in use before assigning it to a node with the access role.
? Subnet mask: Subnet mask is the address that refers to the network ID.
? IP address of default gateway: Gateway is a network point that acts as an entrance
to another network.
? IP address of Domain Name Server: Domain Name Server (DNS) locates and
translates real names into IP addresses. For example, a cluster could be called
Centera and this would be mapped to the appropriate IP address.

259

The following errors can be displayed:


If the command is executed on a node which does not have the access role, then the following
error message appears:
? Command failed: The local node does not have the Access role.

View Node Network Configurations


Syntax
show ip detail

Use
This command displays detailed information of the external network configuration of the
nodes with the access role.

Output
Config# show ip detail
Node c001n01
Configuration mode (Manual/DHCP): M
External IP address:              10.99.433.6
Subnet mask:                      999.255.255.0
IP address of default gateway:    10.99.433.1
IP address of Domain Name Server: 182.62.69.47
Status:                           on
Link speed:                       100 Mb/s (automatic)
Duplex settings:                  F
Eth0:                             00:10:81:61:11:01
Eth1:                             00:10:81:61:11:00
Eth2:                             00:10:0C:2D:18:38
Media:                            not configured

260

Description
The following information is displayed:
? Configuration mode: The type of host configuration, which can either be D (DHCP,
Dynamic Host Configuration Protocol) or M (manual network configuration).
? External IP address: The IP address of a node with the access role that is used to
connect to an external network.
? Subnet Mask: The address that refers to the network ID.
? IP Address of Default Gateway: The node with the access role uses this to gain
access to the network. A gateway is a network point that acts as an entrance to
another network.
? IP Address of Domain Name Server: The node with the access role uses this to gain
access to the server. A Domain Name Server (DNS) locates and translates real names
into IP addresses. For example, a cluster could be called "Centera" and this would be
mapped to the appropriate IP address.
? Status: This identifies if the node is on or off.
? Linkspeed: The available linkspeed options are 10f, 10h, 100f, 100h, and 1000f (1000f
for Gen3 hardware only). Auto is the preferred setting and refers to auto negotiation,
in which the NICs decide on the best linkspeed they can use.
? Duplex Settings: The Duplex settings can either be half (one way) or full (both
ways).
? Media Access Control: (MAC) addresses of the three interfaces (Eth0, Eth1, Eth2). A
MAC address is a unique hardware network identifier.

261

View Detailed Network Switch Information


Syntax
show network detail

Use
This command displays detailed network switch information.

Output
Config# show network detail
Switch c001sw0
Cabinet:        1
Status:         on
Model:          Allied Telesyn AT-RP48 Rapier 48i version 2.4.1-01 01-May-2003
Serial number:  58478131
Trunk info:     port 41:Up
                port 42:Up
                port 43:Up
Uplink info:    port 44:Not installed
Rail:           1
Switch operates okay
Switch root0
Off

Description
The following information is displayed:
? Switch: The switch is identified by the switch ID
? Cabinet: The physical cabinet where the switch is located. One cabinet always
contains two switches on two different power rails, to provide high availability in
case of power failures.
? Status: Identifies if the switch is on or off.
? Model: The switch's hardware specification, for example Allied Telesyn AT-RP48i
Rapier 48i version 2.2.2-12 05-Mar-2002.

262

? Serial Number: The switch's serial number.
? Trunk Info: The status of the switch cords in one cube: Up or Down.
? Uplink Info: The status of the switch cords between cubes: Up, Down, or Not
installed.
? Rail: The power rail that feeds the switch: 0, 1, or ATS. CxxSW0 switches are
connected to power rail 1 (to the right if you are facing the back of a Centera cabinet),
and CxxSW1 switches are connected to power rail 0. ATS is the Automatic (AC)
Transfer Switch.
? Switch Operates Okay: States that the switch operates properly; the sentence Switch
needs replacement indicates that the switch must be replaced.
? Switch Root: Indicates whether a root switch is available; on or off identifies if the
switch is on or off.

View the Status of Network Switches


Syntax
show network status

Use
This command displays the status of all network switches in the cluster.

Output
Config# show network status
Cabinet   Switch    Rail   Status
------------------------------------
1         c001sw0   1      on
1         c001sw1   0      on
root      root0     0      off
root      root1     1      off
------------------------------------

263

Description
The following information is displayed:
? Cabinet: Physical cabinet where the switch is located. One cabinet always contains
two switches on two different power rails, to provide high availability in case of
power failures.
? Switch: Switch ID.
? Rail: Power rail that feeds the switch: 0, 1, or ATS. CxxSW0 switches are connected to
power rail 1 (to the right if you are facing the back of a Centera cabinet), and CxxSW1
switches are connected to power rail 0. ATS is the Automatic (AC) Transfer Switch.
? Status: Switch is on or off.
? Root: Root switch is available.

Change the Network Settings of an Access Node


Syntax
set node ip <node ID>

Use
This command changes the connection details for a node with the access role. Node ID refers
to a specific node. The syntax of a node ID is cxxxnyy, where cxxx identifies the cabinet and
nyy the node in that cabinet (xxx and yy are numeric).

Output
Config# set node c001n05
Use DHCP (yes, no) [no]: no
IP address [10.88.999.91]:
Subnet mask [255.333.255.0]:
IP address of default gateway [10.88.999.1]:
IP address of Domain Name Server [152.99.87.47]:
Issue the command? (yes, no) [no]: yes

264

Description
The following information is displayed:
? Use DHCP: Dynamic Host Configuration Protocol (DHCP) automatically assigns IP
addresses to devices attached to a Local Area Network (LAN). If you select yes for
this option then the following fields are not required.
? IP address: IP address of a node with the access role. Ping the IP address to make
sure that it is not in use before assigning it to a node with the access role.
? Subnet mask: Subnet mask is the address that refers to the network ID.
? IP address of default gateway: Gateway is a network point that acts as an entrance
to another network.
? IP address of Domain Name Server: Domain Name Server (DNS) locates and
translates real names into IP addresses. For example, a cluster could be called
Centera and this would be mapped to the appropriate IP address.
The following errors can be displayed:
If the command is executed on a node which does not have the access role, then the following
error message appears:
? Command failed: The local node does not have the Access role.

265

Lock Nodes for Remote Service


Syntax
set security lock

Use
This command locks all nodes and restores the default network access security on all nodes
that are accessible. After issuing this command, only admin accounts can make connections to
the cluster for manageability. Any current service connections will not be closed, but future
service connections will no longer be possible.

Output
Config# set security lock
Issue the command? (yes, no) [no]:

Description
If some nodes are down when this command is issued to lock all nodes, those nodes will not
be locked when they come back up.

Set the Speed of a Network Controller


Syntax
set node linkspeed <node ID>

Use
This command sets the speed of the external network controller on a node with the access
role. Node ID refers to a specific node. The syntax of a node ID is cxxxnyy, where cxxx

266

identifies the cabinet and nyy the node in that cabinet (xxx and yy are numeric).

Output
Config# set node linkspeed c001n02
Link speed [auto]:
Issue the command?
(yes, no) [no]:

Description
The following information is displayed:
? Node linkspeed: Current connection speed of the external network link. The
available linkspeed options are 10Mbit, 100Mbit and 1000Mbit (1000Mbit for V3
hardware only).
? Link Speed: Autoneg or Force. Auto is the preferred setting. When the user requests
force, the node connects at the specified speed only. When the user requests autoneg,
the node senses whether the speed is available or tries a lower speed.
When set to 1000Mbit, force behavior is not available; when the speed is set to 1000Mbit and
force, the platform will report auto.
If the command is executed on a node which does not have the access role, then the following
error is returned:
? Command failed: The local node does not have the Access role.

267

View the Number of Nodes on a Cluster


Syntax
show config status

Use
This command displays the number of nodes and the software versions running on them.

Output
Config# show config status
Number of nodes: 8
Active software version: 3.0.0-319-437-6671
Oldest software version found in cluster: 3.0.0-319-437-6671

Description
The following information is displayed:
? Number of Nodes: Number of nodes in the cluster.
? Active Software Version: Information on the active Centera software version.
? Oldest Software Version: Information on the oldest software version found amongst
the active running nodes in the cluster.

268

Unlock Nodes for Remote Service


Syntax
set security unlock <node ID>

Use
This command unlocks a specific node in the system. Service connections to a node are only
possible if that node is unlocked. EMC engineers may require specific nodes to be unlocked
prior to a service intervention. Once a node is unlocked all users can connect to it.
Use the show security command to display the security settings.
You cannot use this command to unlock a node with the access role on a CE+ model.

Output
Config# set security unlock <node id>
Warning! Cluster security can be compromised if you unlock a node.
Issue the command? (yes/no) [no]:

Description
The following information is displayed:
? Node ID: A specific node.
The syntax of a node ID is cxxxnyy, where cxxx identifies the cabinet and nyy the node in
that cabinet (xxx and yy are numeric).

269

View the Details of all Nodes


Syntax
show node detail <scope>

Use
This command displays the details of all the nodes on the cluster. The scope argument
restricts the output; for example, the output below displays all nodes on the cluster that are
offline.

Output
Config# show node detail off
Node c001n05
Serial Number:                  1038200159
Status:                         off
Roles:                          storage
Model:                          118032400-A06
Rail:                           1
Software Version:               3.0.0-454-511-7854
Modem present:                  false
Internal IP:                    10.255.1.5
Total Capacity:                 1,281 GB
Used Capacity:                  0 GB
Free Capacity:                  0 GB
Faulted Capacity:               1,281 GB
Total number of objects stored: 0
Regenerations:                  1

Description
The following information is displayed:
? Node: The node name.
? Serial number: The serial number of the node.
? Status: The status of the node. This can be on or off.
? Roles: The roles of the node. These can be access and/or storage, or spare.

270

? Model: This can either be 118032076 for nodes with a capacity of 0.6 TB, 118032306-A0X for nodes with a capacity of 1.0 TB, or 118032400-A0X for nodes with a capacity
of 1.25 TB.
? Rail: This refers to the node power status: 0, 1, or ATS. Odd Gen2 nodes are
connected to power rail 1 and even Gen2 nodes are connected to power rail 0. ATS is
the Automatic (AC) Transfer Switch.
? Software Version: The CentraStar software version.
? Modem Present: Whether a node has a modem (true) or not (false).
? Internal IP: The IP address of a node that is used to connect to the internal (cluster)
network.
? Total Capacity: The total physical capacity of the cluster/cube/node or disk.
? Used Capacity: The capacity that is used or otherwise not available to store data; this
includes the capacity reserved as system resources, not assigned for storage or
offline, and capacity actually used to store user data and associated audit and
Metadata.
? Free Capacity: The capacity that is free and available for storing data or for self
healing operations in case of disk or node failures or for database growth and
failover.
? Total number of objects: The total object capacity already used.
? Regenerations: Number of regeneration tasks being processed.

View Node Network Configurations


Syntax
show ip detail

Use
This command displays detailed information of the external network configuration of the

271

nodes with the access role.

Output
Config# show ip detail
Node c001n01
Configuration mode (Manual/DHCP): M
External IP address:              10.99.433.6
Subnet mask:                      999.255.255.0
IP address of default gateway:    10.99.433.1
IP address of Domain Name Server: 182.62.69.47
Status:                           on
Link speed:                       100 Mb/s (automatic)
Duplex settings:                  F
Eth0:                             00:10:81:61:11:01
Eth1:                             00:10:81:61:11:00
Eth2:                             00:10:0C:2D:18:38
Media:                            not configured

Description
The following information is displayed:
? Configuration mode: The type of host configuration, which can either be D (DHCP,
Dynamic Host Configuration Protocol) or M (manual network configuration).
? External IP address: The IP address of a node with the access role that is used to
connect to an external network.
? Subnet Mask: The address that refers to the network ID.
? IP Address of Default Gateway: The node with the access role uses this to gain
access to the network. A gateway is a network point that acts as an entrance to
another network.
? IP Address of Domain Name Server: The node with the access role uses this to gain
access to the server. A Domain Name Server (DNS) locates and translates real names
into IP addresses. For example, a cluster could be called "Centera" and this would be
mapped to the appropriate IP address.
? Status: This identifies if the node is on or off.
? Linkspeed: The available linkspeed options are 10f, 10h, 100f, 100h, and 1000f (1000f
for Gen3 hardware only). Auto is the preferred setting and refers to auto negotiation,
in which the NICs decide on the best linkspeed they can use.
? Duplex Settings: The Duplex settings can either be half (one way) or full (both

272

ways).
? Media Access Control: (MAC) addresses of the three interfaces (Eth0, Eth1, Eth2). A
MAC address is a unique hardware network identifier.

Create a .pea File


The purpose of this example is to explain how to create a .pea (Pool Entry Authorization) file.
Application authentication is the process whereby the application has to provide
authentication information to a Centera before access is granted. This information is created
in a .pea file. The .pea file contains the following:
? Username: Name identifying the application that wants to access the Centera.
? Secret: Password. Each Username has a secret or password that is used to
authenticate the application.
When replicating pools and profiles, the pools and profiles created on the source cluster must
also exist on the target cluster to support failover.
In the example below, a .pea file is created and saved as
C:\temp\Finance.pea.

Example
Config# create profile Finance
Profile Secret [generate]: C:\temp\Secret.txt
Granted Rights for the Profile in the Home Pool [rdqeDcw]:
Establish a Pool Entry Authorization for application use? (yes, no) [n]: Yes
Enter Pool Entry Authorization creation information: C:\temp\Finance.pea

1. Enter Pool Entry Authorization creation information. Enter the pathname for a file
where the specified file is accessible on the local system and where the Pool Entry
Authorization file will be saved. If the file exists, then the user is prompted to
overwrite the file.

273


The update profile command can also be used to update the profile secret. This command
can also be used to update a profile's access rights while leaving the profile secret unchanged.

Merge Two or More .pea Files


The purpose of this procedure is to explain how to merge two or more .pea files generated on
different clusters into one .pea file to support replication and application failover:
1. Launch a text editor on your local machine and open the .pea file of the
profile that was generated on the source cluster. The content of this file
should be similar to:
------------------------------------------------------------------------
<.pea version=1.0.0>
<defaultkey name=App1>
<credential id=csp1.secret enc=base64>
MyApplicationSecret
</credential>
</defaultkey>
<key type=cluster id=12345-12345-12345-12345 name=App1>
<credential id=csp1.secret enc=base64>
MySpecialApplicationSecretForClusterA
</credential>
</key>
</.pea>

2. Open the .pea file that was generated on the target cluster for the same profile and
copy the <key>-section with the profile-cluster information from this file into the first
one:
------------------------------------------------------------------------
<key type=cluster id=56789-56789-56789-56789 name=App1>


<credential id=csp2.secret enc=base64>
MySpecialApplicationSecretForClusterB
</credential>
</key>

274

3. Repeat step 2 for each .pea file that has been created for the same profile on a
different cluster.
4. Close all .pea files and save the concatenated .pea file. Quit the text editor.
5. Copy the concatenated .pea file to the application server and set the environment
variable CENTERA_PEA_LOCATION to point to this file. For more information on
initializing PAI modules and parsing of .pea files refer to the Centera Programmer's
Guide, P/N 069001127.
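As a sketch only, assuming the two fragments shown above, the concatenated .pea file would then contain the defaultkey section plus one key section per cluster, similar to:
------------------------------------------------------------------------
<.pea version=1.0.0>
<defaultkey name=App1>
<credential id=csp1.secret enc=base64>
MyApplicationSecret
</credential>
</defaultkey>
<key type=cluster id=12345-12345-12345-12345 name=App1>
<credential id=csp1.secret enc=base64>
MySpecialApplicationSecretForClusterA
</credential>
</key>
<key type=cluster id=56789-56789-56789-56789 name=App1>
<credential id=csp2.secret enc=base64>
MySpecialApplicationSecretForClusterB
</credential>
</key>
</.pea>
On the application server, the environment variable can then be set to point to this file, for example (paths are illustrative): set CENTERA_PEA_LOCATION=C:\pea\App1_merged.pea on Windows, or export CENTERA_PEA_LOCATION=/opt/pea/App1_merged.pea on UNIX.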

Copy a Pool to Another Cluster


The purpose of this procedure is to explain how to copy one or more pools to another cluster.
A pool is regarded as identical when its ID is the same. Manually re-creating a pool does not
generate the same pool ID, which is unique for each pool; hence replication/restore will fail.
Use the Export/Import commands to copy a pool to another cluster.
It is assumed that the necessary pools and profiles have already been created on the source
cluster.
1. Launch the CLI and connect to the source cluster.
2. Determine which pools to copy using the show pool list command.
3. Export pool and profile information of the pools to copy to a file using the export
poolprofilesetup command.
4. Do not export the full setup when prompted. Export based on pools.
5. Do not export all pools. Enter the names of the pools to copy and a location and
name for the generated file to be saved (local machine).
6. Launch another CLI session and connect to the target cluster.
7. Import the profile information of the pools using the import poolprofilesetup
command. Enter the location and name of the file (local machine) as given in step 5.

Refer to the CLI Reference Guide for the complete version of each CLI command.
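A hypothetical end-to-end session for a single pool (the pool name and path are illustrative, based on the export and import examples elsewhere in this help) might look like this:
On the source cluster:
Config# export poolprofilesetup
Export complete setup? (yes, no): no
Export based on pool or profile selection? (pool, profile) [pool]: pool
Export all pools? (yes, no) [no]: No
Pools to export: Finance
Enter the pathname where the export file should be saved: C:\Finance
Exported 1 pool and 1 profile
On the target cluster:
Config# import poolprofilesetup
Enter the pathname for the pool to import data: C:\Finance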

275

Copy all pools and profiles to another cluster


The purpose of this procedure is to explain how to copy all pools and their associated profiles
to another cluster.
A pool is regarded as identical when its ID is the same. Manually re-creating a pool does not
generate the same pool ID, which is unique for each pool; hence replication/restore will fail.
Use the Export/Import commands to copy a pool to another cluster.
It is assumed that the necessary pools and profiles have already been created on the source
cluster.
1. Launch the CLI and connect to the source cluster.
2. Determine which pools and profiles to copy using the show pool list command.
3. Export pool and profile information of the pools to copy to a file using the export
poolprofilesetup command.
4. Export the complete setup when prompted by entering Yes.
5. Enter the pathname where the exported pools and profile information should be
saved.
6. Launch another CLI session and connect to the target cluster.
7. Import the pool and profile information of the pools that will be replicated using the
import poolprofilesetup command. Enter the location and name of the file (local
machine) as given in step 5.

Refer to the CLI Reference Guide for the complete version of each CLI command.

276

Create a Pool
Syntax
create pool <name>

Use
This command creates a new pool on the cluster. This command also sets the pool mask and
pool quota.
A pool is regarded as identical when its ID is the same. Manually re-creating a pool does not
generate the same pool ID, which is unique for each pool; hence replication/restore will fail.
Use the Export/Import commands to copy a pool to another cluster.

Output
Config# create pool Finance
Pool Mask [rdqeDcw] :
Pool Quota [1GB]:3000
Issue the Command?
(yes, no)[no]: yes
Created pool Finance with ID 9de08de2-1dd1-11b2-8a50-c408ebafa0c8-2

Description
The following information is displayed:
? Pool Name: Name of the pool. This must be unique and is case sensitive.
? Pool Mask: This assigns access rights (capabilities). Press return to automatically
assign all access rights, otherwise enter the individual letters corresponding to each
capability.
? Pool Quota: Current quota for the pool. This is the maximum amount of data that
can be written to the pool. This sets the size of the pool. Press return to accept the
default of 1 GB or enter a new quota.

277

Once the pool is created, CentraStar generates a unique ID. The display name of a pool can be
changed at will using the update pool command. The pool ID cannot be modified.
An error is returned if the pool already exists.
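For example, a hypothetical session (the pool name, mask, and quota below are illustrative) that creates a pool granting only read, query, and exist rights could look like this:
Config# create pool Archive
Pool Mask [rdqeDcw]: rqe
Pool Quota [1GB]: 500
Issue the Command? (yes, no) [no]: yes
CentraStar then reports the generated pool ID, as in the output shown above.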

Create a Pool Mapping


Syntax
create poolmapping <profile name> <pool name>

Use
This command creates a mapping between a profile used to write C-Clips to the default pool
and a pool. For each profile with legacy data in the default pool, a pool mapping is required
to be created before data can be migrated.
Pool mappings are added to the scratchpad which is used to prepare mappings until they are
correct and ready to be migrated.

Output
Config# create poolmapping FinanceProfile FinancePool
Warning: The given pool is not the home pool of the given profile
Issue the Command? (yes, no) [no]: Y
Added the mapping to the mapping scratchpad.

Description
The following information is displayed:
? Create poolmapping: Enter the name of the profile followed by the pool on which to
create a mapping to.

278

The following errors can be returned:


? Command failed: The pool was not found on the cluster.
? Command failed: It is not possible to map a profile to the cluster pool.
? Command failed: It is not possible to map a profile to the default pool.
A warning is given if the pool is not the home pool of the given profile.

Define a Pool Mask


Syntax
update pool <name>

Use
This command updates the name of a pool and associated mask.

Output
Config# update pool Centera
Pool Name [Centera]:
Pool Mask [rdqeDcw]:
Pool Quota [1 GB]:
Issue the command?
(yes, no) [no]:

Description
The following information is displayed:
? Pool Name: Name of the pool. This must be unique and is case sensitive.
? Pool Mask: This assigns access rights (capabilities). Press return to automatically
assign all access rights, otherwise enter the individual letters corresponding to each
capability.

279

? Pool Quota: Current quota for the pool. This is the maximum amount of data that
can be written to the pool. This sets the size of the pool. Press return to accept the
default of 1 GB or enter a new quota.
Once the pool is created, CentraStar generates a unique ID. The display name of a pool can be
changed at will using the update pool command. The pool ID cannot be modified.
An error is returned if the pool already exists.
The access rights available are: read, write, delete, privileged-delete, C-Clip copy, query,
exist, and profile-driven metadata:
Capability: Definition
Write (w): Write to a C-Clip. WriteClip access must be enabled to write. 'Enabled' or 'Disabled'.
Read (r): Read a C-Clip. 'Enabled' or 'Disabled'.
Delete (d): Deletes C-Clips. 'Enabled' or 'Disabled'.
Exist (e): Checks for the existence of a specified C-Clip. 'Enabled' or 'Disabled'.
Privileged Delete (D): Deletes all copies of the C-Clip and can overrule retention periods. 'Enabled' or 'Disabled'.
Query (q): Query the contents of a pool. When set to 'Enabled', C-Clips can be searched for in the pool using a time based query. 'Enabled' or 'Disabled'.
Clip-Copy (c): Copy a C-Clip. 'Enabled' or 'Disabled'. This capability is needed for replication and restore operations.
Purge (p): Remove all traces of a C-Clip from the cluster. 'Enabled' or 'Disabled'. Purge is only available for cluster level profiles.
Monitor (m): Retrieves statistics concerning Centera.
Profile-Driven Metadata: Supports storing/retrieving per-profile metadata, automatically added to the CDF. 'Enabled' or 'Disabled'.

280

Export Pools and Profiles to Another Cluster


Syntax
export poolprofilesetup

Use
This command exports a pool and/or its profile definitions to another cluster.
A pool is regarded as identical when its ID is the same. Manually re-creating a pool does not
generate the same pool ID, which is unique for each pool; hence replication/restore will fail.
Use the Export/Import commands to copy a pool to another cluster.

Output
Config# export poolprofilesetup
Export complete setup? (yes, no): no
Export based on pool or profile selection? (pool, profile) [pool]: pool
Export all pools? (yes, no) [no]: No
Pools to export: Finance
Enter the pathname where the export file should be saved: C:\Finance
Exported 1 pool and 1 profile

Description
The following information is displayed:
? Export complete setup: Every pool and its associated profiles can be exported, or a
specific pool or profile.
? Export pool or profile: Select a pool or profile to export.
? Export all pools/profiles: Export all pools/profiles on the cluster or a specific
pool/profile.
? Pathname: Enter the full pathname of the location where the exported file should be
saved.

281

The following errors can be displayed:


If the pool cannot be found on the cluster, the following error is displayed:
? The pool <name> was not found on the cluster.
If the profile cannot be found on the cluster, the following error is displayed:
? The profile <name> was not found on the cluster.

Remove a Pool
Syntax
delete pool <name>

Use
This command deletes a pool. Pools can be created and removed by an administrator
according to certain rules.

Output
Config# delete pool Finance
WARNING: Are you sure you want to delete the pool and lose its granted
rights configuration?
(yes, no) [no]: Y
Issue the Command? (yes, no) [no]: Y

282

Description
The following information is displayed:
? Pool Name: Enter the name of the pool to delete.
A pool can only be deleted if it meets the following requirements:
? The pool does not contain any C-Clips or reflections.
? The pool is not the home pool of any profile.
? No mappings are defined from a profile to this pool.
? No database operations are ongoing involving this pool.
? All nodes are online and the pool is not being replicated or restored.
The following errors can be displayed:
? Command failed: The pool was not found on the cluster.

Segregate Application Data and Selective Replication


The purpose of this example is to explain how the system administrator can use pools to
assign applications access rights and logically segregate data.
Pools assign access rights to profiles enabling the system administrator to control the
capabilities of an application.
A pool is regarded as identical when its ID is the same. Manually re-creating a pool does not
generate the same pool ID, which is unique for each pool; hence replication/restore will fail.
Use the Export/Import commands to copy a pool to another cluster.

283

1. Launch the CLI and connect to the source cluster. Create separate pools for each
application using the create pool command. Assign rights (capabilities) to the pool
and set the quota.
2. Create access profiles for each application using the create profile command. An
access profile is bound to its home pool. All C-Clips created with this profile belong
to the same pool as the profile. Select the home pool for the profile and assign
relevant access rights to the profile.
3. Enable the profile and grant the profile the necessary access rights in its home pool.
This is part of the create profile command.
4. Export pool and profile information of the pools to replicate to a file using the export
poolprofilesetup command. Enter the pool names and a location and name for the
generated file (local machine).
5. Launch another CLI session and connect to the target cluster (data will be replicated
to the target cluster).
6. Import the pool and profile information of the pools that will be replicated using the
import poolprofilesetup command. Enter the location and name of the file (local
machine) as given in step 5.
7. Create an access profile that will be used by the source cluster to replicate data to the
target cluster using the create profile command followed by the name of the new
profile. Establish a .pea file and enter the location and name for this file (local
machine).
8. Grant the replication profile the clip-copy right for the pools on the target cluster to
which data will be replicated using the set grants command. Enter c when asked for
pool rights. Issue this command for each of the replication pools, including the
default pool if needed.
9. Return to the CLI on the source cluster.
10. Enable replication using the set cluster replication command. Enter the IP address
and port number of the target cluster, enter the pool names to replicate, and enter the
location and name of the replication .pea file (local machine) as given in step 8.
The following example is used to explain the above steps.

284

Segregate Application Data


This example explains how pools can be used to logically segregate data stored on the same
Centera.
Two pools are created (Finance and ITOps), each of which has an access profile associated
with it. Application 1 has read and write capabilities, while Application 2 has read, write, and
delete capabilities.
Refer to the CLI Reference Guide for the complete version of each CLI command.
A pool is regarded as identical when its ID is the same. Manually re-creating a pool does not
generate the same pool ID, which is unique for each pool; hence replication/restore will fail.
Use the Export/Import commands to copy a pool to another cluster.
Centera 1
Create pool and profile for Finance data
Config# create pool Finance
Config# create profile Application1
Home Pool [default]: Finance
Granted rights for the profile in the home pool [rdqeDcw]: rw
Establish a pool entry authorization for application use? (yes, no) [no]: Y
Enter pool entry authorization creation information: C:\PEA1
Create pool and profile for ITOps data
Config# create pool ITOps
Config# create profile Application2
Home Pool [default]: ITOps
Granted rights for the profile in the home pool [rdqeDcw]:
Export all pools and profiles on the source cluster
Config# export poolprofilesetup
Export complete setup? (yes, no) [no]: Y
Enter the pathname where the export file should be saved: C:\Config_Centera
Centera 2
Import Data
Config# import poolprofilesetup
Enter the pathname for the pool to import data: C:\Config_Centera
Create Profile for Replication
Config# create profile Finance
Profile Secret [generate]:
Enable Profile? (yes, no) [no]: Y
Monitor Capability? (yes, no) [no]: Y


Profile-Metadata Capability? (yes, no) [no]: Y


Profile Type (access, cluster) [access]: access
Home Pool [default]:
Granted Rights for the Profile in the Home Pool [rdqeDcw]: c
Issue the command?
(yes, no) [no]: Y
Establish a Pool Entry Authorization for application use? (yes, no) [no]: Y
Enter Pool Authorization creation information: C:\Finance.pea
Set Replication
Config# set cluster replication
Replication Enabled? (yes, no) [no]: Y
Replication Address [10.88.999.191:3218]: 10.88.999.191:3218,
10.68.999.111:3218
Replicate Delete? (yes, no) [yes]: Y
Replicate incoming replicated Objects? (yes, no) [yes]: Y
Replicate all Pools? (yes, no) [yes]: Y
Profile Name:
Issue the command? (yes, no) [no]: Yes

Start Pool Migration


Syntax
migrate poolmapping start

Use
This command starts the migration process. This copies data from the default pool to a
custom pool. It copies the scratchpad mappings to the active mappings and starts the
migration process on each node asynchronously.

Output
Config# migrate poolmapping start
Do you want to start a migration process based on the scratchpad mappings?
(yes, no) [no]: Y
Capacity information might become incorrect
Do you want to recalculate it? (yes, no) [yes]: Y
Issue the Command? (yes, no) [no]: Y
Committed the scratchpad mappings and started the migration process.


Description
The following information is displayed:
? Start Migration Process: The mapping set up between the defined pools and profiles
is started. Use the create poolmapping command to map profiles to pools.
If there is no change to either the scratchpad mappings or the active mappings, the following
is displayed:
? No changes detected between active and scratchpad mappings.
? Trigger the migration process to restart? (yes, no) [no]: Y
The following errors can be returned:
If the pool cannot be found, the following error is displayed:
Command failed: The pool was not found on the cluster.
If the profile cannot be found, the following error is displayed:
Command failed: The profile was not found on the cluster.


Update Pool Details


Syntax
update pool <name>

Use
This command updates the name, access mask (capabilities), and quota of a pool.

Output
Config# update pool Centera
Pool Name [Centera]:
Pool Mask [rdqeDcw]:
Pool Quota [1 GB]:
Issue the command?
(yes, no) [no]:

Description
The following information is displayed:
? Pool Name: Name of the pool. This must be unique and is case sensitive.
? Pool Mask: This assigns access rights (capabilities). Press return to automatically
assign all access rights, otherwise enter the individual letters corresponding to each
capability.
? Pool Quota: Current quota for the pool. This is the maximum amount of data that
can be written to the pool. This sets the size of the pool. Press return to accept the
default of 1 GB or enter a new quota.
Once the pool is created, CentraStar generates a unique ID. The display name of a pool can be
changed at will using the update pool command. The pool ID cannot be modified.
An error is returned if the pool already exists.
The access rights available are: read, write, delete, privileged-delete, C-Clip Copy, query,
exist, and profile-driven metadata:
? Write (w): Write to a C-Clip. WriteClip access must be enabled to write. 'Enabled' or
'Disabled'.
? Read (r): Read a C-Clip. 'Enabled' or 'Disabled'.
? Delete (d): Deletes C-Clips. 'Enabled' or 'Disabled'.
? Exist (e): Checks for the existence of a specified C-Clip. 'Enabled' or 'Disabled'.
? Privileged Delete (D): Deletes all copies of the C-Clip and can overrule retention
periods. 'Enabled' or 'Disabled'.
? Query (q): Query the contents of a pool. When set to 'Enabled', C-Clips can be
searched for in the pool using a time-based query. 'Enabled' or 'Disabled'.
? Clip-Copy (c): Copy a C-Clip. 'Enabled' or 'Disabled'. This capability is needed for
replication and restore operations.
? Purge (p): Remove all traces of a C-Clip from the cluster. 'Enabled' or 'Disabled'.
Purge is only available for cluster-level profiles.
? Monitor (m): Retrieves statistics concerning Centera.
? Profile-Driven Metadata: Supports storing/retrieving per-profile metadata,
automatically added to the CDF. 'Enabled' or 'Disabled'.


View Pool Capacity


Syntax
show pool capacity

Use
This command displays the pool capacity of all the pools on the cluster.

Output
Config# show pool capacity
Capacity / Pool      Quota      Used       Free       C-Clips    Files
----------------------------------------------------------------------------
Legal                5 GB       0 GB       5 GB       3          0
default              -          92 GB      -          2672       2586
Total                -          92 GB      -          2675       2586
----------------------------------------------------------------------------

Description
The following information is displayed:
? Capacity/Pool: Pool Name
? Quota: Current quota for the pool. This is the maximum amount of data that can be
written to the pool.
? Used: Current pool capacity being used.
? Free: Current available capacity until the quota is reached.
? C-Clips: Number of C-Clips stored in the pool.
? Files: Number of user files stored in the pool.


View Pool Migration


Syntax
show pool migration

Use
This command displays the status of the migration processes running on the cluster.

Output
Config# show pool migration
Migration Status:             finished
Average Migration Progress:   100%
Completed Online Nodes:       5/5
ETA of slowest Node:          0 hours
--------------------------------------
Active Pool Mappings
FinanceProfile                Finance

Description
The following information is displayed:
? Migration Status: Status of the migration process (for example, running or finished).
? Average Migration Progress: Average percentage of the migration process completed
across the nodes.
? Completed Online Nodes: Number of online nodes on which the migration process
has completed.
? ETA of slowest Node: Estimated time remaining until the slowest node completes the
migration process.


View Relationship between a Pool and Profile


Syntax
show profile detail <profile name>

Use
This command displays the relationship a specific profile has with a pool in which it has been
granted rights.

Output
Config# show profile detail test
Centera Profile Detail Report
------------------------------------------------------
Generated on Friday 25 March 2005 17:08:36 CET
Profile Name:                   test
Profile Enabled:                no
Monitor Capability:             yes
Profile Metadata Capability:    no
Home Pool:                      default
Profile Type:                   Access
Cluster Mask:                   rdqe-cw-m-
Granted Rights in Application Pools:
Pool Name       Pool Mask       Granted         Effective
----------------------------------------------------------------------
default         rdqeDcw         rdqeDcw         rdqe-cwm
----------------------------------------------------------------------
Scratchpad Pool Mapping:
Active Pool Mapping:            -

Description
The following information is displayed:
? Profile Name: Name of the profile
? Profile Enabled: Profile can be enabled or disabled.
? Monitor Capability: Ability to retrieve Centera statistics.
? Profile Metadata Capability: Supports storing/retrieving per-profile metadata,
automatically added to the CDF. 'Enabled' or 'Disabled'.
? Home Pool: Default pool of the profile.
? Profile Type: Access or Cluster profile.
? Cluster Mask: Access rights associated with the profile.
? Granted Rights in Application Pools: Access rights the pool has granted the profile.
o Pool Name: Name of the pool.
o Pool Mask: Access rights assigned to the pool.
o Granted: Access rights the pool has granted the profile.
o Effective: Effective rights of the profile in the pool (the granted rights limited by the masks).
o Scratchpad Pool Mapping:
o Active Pool Mapping:

Assign a Profile Access Rights to a Pool


Syntax
set grants <pool name> <profile name>

Use
Pools explicitly grant rights to profiles. Use this command to modify the specific rights
allowed by a pool to a specific profile.

Output
Config# set grants Finance Accounts
Granted pool rights for profile [r------]: r
Issue the command? (yes, no) [no]: yes

The new effective rights are displayed.


Description
The following information is displayed:
? Set grants : Name of the pool and profile to which rights are being granted.
? Granted pool rights: Enter the operations that can be performed on the pool.
The operations available are: read, write, delete, privileged-delete, C-Clip Copy, query, and
exist:
? Write (w): Write to a C-Clip. WriteClip access must be enabled to write. 'Enabled' or
'Disabled'.
? Read (r): Read a C-Clip. 'Enabled' or 'Disabled'.
? Delete (d): Deletes C-Clips. 'Enabled' or 'Disabled'.
? Exist (e): Checks for the existence of a specified C-Clip. 'Enabled' or 'Disabled'.
? Privileged Delete (D): Deletes all copies of the C-Clip and can overrule retention
periods. 'Enabled' or 'Disabled'.
? Query (q): Query the contents of a pool. When set to 'Enabled', C-Clips can be
searched for in the pool using a time-based query. 'Enabled' or 'Disabled'.
? Clip-Copy (c): Copy a C-Clip. 'Enabled' or 'Disabled'. This capability is needed for
replication and restore operations.
? Purge (p): Remove all traces of a C-Clip from the cluster. 'Enabled' or 'Disabled'.
Purge is only available for cluster-level profiles.
? Monitor (m): Retrieves statistics concerning Centera.
? Profile-Driven Metadata: Supports storing/retrieving per-profile metadata,
automatically added to the CDF. 'Enabled' or 'Disabled'.

The following errors can be returned:
If the pool does not exist the following error is displayed:
? Command failed: The pool was not found on the cluster.
If the profile does not exist the following error is displayed:
? Command failed: The profile was not found on the cluster.
The access rights of a profile in a pool can also be updated using the update profile
command.

Copy a Profile Definition to Another Cluster


The purpose of this procedure is to explain how to copy one or more profiles to another
cluster.
It is assumed that the necessary pools and profiles have already been created on the source
cluster.
1. Launch the CLI and connect to the source cluster.
2. Determine which profiles to copy using the show profile list command.
3. Export pool and profile information of the pools to copy to a file using the export
poolprofilesetup command.
4. Do not export the full setup when prompted. Export based on profiles.
5. Do not export all profiles. Enter the name(s) of the profiles to copy and a location and
name for the generated file to be saved (local machine).
6. Launch another CLI session and connect to the target cluster.
7. Import the profile information of the profiles using the import poolprofilesetup
command. Enter the location and name of the file (local machine) as given in step 5.

Refer to the CLI Reference Guide for the complete version of each CLI command.
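The following is a minimal sketch of this procedure; the profile name App1, the export path,
and the exact wording of the profile-selection prompts are illustrative assumptions, so follow
the prompts that your CLI actually displays.
Centera 1 (source cluster)
Config# show profile list
Config# export poolprofilesetup
Export complete setup? (yes, no) [no]: n
Export based on pool or profile selection? (pool, profile) [pool]: profile
Profiles to Export: App1
Please enter the pathname where the export file should be saved: C:\App1Export
Centera 2 (target cluster)
Config# import poolprofilesetup
Enter the pathname for the pool import data: C:\App1Export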


Copy all pools and profiles to another cluster


The purpose of this procedure is to explain how to copy all pools and their associated profiles
to another cluster.
A pool is regarded as identical only if its pool ID is the same. Manually re-creating a pool
generates a different pool ID (the ID is unique for each pool), so replication/restore will fail.
Use the Export/Import commands to copy a pool to another cluster.
It is assumed that the necessary pools and profiles have already been created on the source
cluster.
1. Launch the CLI and connect to the source cluster.
2. Determine which pools and profiles to copy using the show pool list command.
3. Export pool and profile information of the pools to copy to a file using the export
poolprofilesetup command.
4. Export the complete setup when prompted by entering Yes.
5. Enter the pathname where the exported pools and profile information should be
saved.
6. Launch another CLI session and connect to the target cluster.
7. Import the pool and profile information of the pools that will be replicated using the
import poolprofilesetup command. Enter the location and name of the file (local
machine) as given in step 5.

Refer to the CLI Reference Guide for the complete version of each CLI command.
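As an illustration, a minimal sketch of a complete-setup export and import follows; the
export path C:\FullSetup is hypothetical.
Centera 1 (source cluster)
Config# export poolprofilesetup
Export complete setup? (yes, no) [no]: Y
Enter the pathname where the export file should be saved: C:\FullSetup
Centera 2 (target cluster)
Config# import poolprofilesetup
Enter the pathname for the pool import data: C:\FullSetup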


Create a Profile
Syntax
create profile <profile name>

Use
This command creates a new profile.

Output
Config# create profile Finance
Profile Secret [generate]:
Enable Profile? (yes, no) [no]: Y
Monitor Capability? (yes, no) [no]: Y
Profile-Metadata Capability? (yes, no) [no]: Y
Profile Type (access, cluster) [access]: access
Home Pool [default]:
Granted Rights for the Profile in the Home Pool [rdqeDcw]:
Issue the command?
(yes, no) [no]: Y
Establish a Pool Entry Authorization for application use? (yes, no) [no]: Y
Enter Pool Authorization creation information: C:\Finance.pea
File C:\Finance already exists, do you want to overwrite this file? (yes,
no) [no]: Y

Description
The following information is displayed:
? Profile Name: This must be unique and is case sensitive.
? Profile Secret: This is the unique credentials associated with the profile. This can
either be generated by CentraStar automatically or the user can create a .txt file and
save the secret in it. Press Enter to generate it automatically or reference the directory
containing the file, for example: C:\temp\secret.txt. The profile secret is stored in a
.TXT file on a directory on the selected drive.
? Monitor Capability: Retrieves Centera statistics.
? Profile Metadata Capability: Supports storing/retrieving per-profile metadata,
automatically added to the CDF.


? Profile Type: This can be set to a Cluster or Access profile. In this case, it should be
an Access profile.
? Home Pool: This is the profile's default pool. Every profile has a default pool. A
profile can perform all operations on its home pool. [rdqeDcw]
? Granted Rights: This sets the operations that can be performed on the pool of data in
the cluster. Authenticated operations are performed on all C-Clips in the cluster, also
on those that have been written for another access profile. The access rights can be
adapted at any time; the server immediately enforces the newly defined rights. A
profile does not have all rights by default in its home pool. The rights need to be
configured using the set grants command.
? Pool Entry Authorization: This file contains the profile name and credentials. Enter a
directory to store the .pea file.
The following errors can be displayed:
? Error: Could not create the Pool Entry Authorization file. The specified file could not
be created or is not writable.
? Command failed: The profile already exists on the cluster.
? Command failed: The maximum number of profiles is exceeded.

Create a Profile Secret


The purpose of this example is to explain how to create a profile secret or password by
entering it into a .TXT file and saving it to the local drive.
There are two ways to create a profile secret:
1. Generate Secret: When prompted to enter a profile secret, press Enter. This
automatically creates a strong secret, so the user does not have to manually create it.
2. Create a File: When prompted to enter a profile secret, enter the name of the file that
holds the profile secret. This file holds a human-readable password. Enter File first,
for example: File C:\DirectoryName\FileName.txt.


Example
Config# create profile Centera
Profile Secret [generate]: File C:\temp\Secret.txt
Enable Profile? (yes, no) [no]: Y
Monitor Capability? (yes, no) [no]: Y
Profile-Metadata Capability? (yes, no) [no]: Y
Profile Type (access, cluster) [access]: access
Home Pool [default]:
Granted Rights for the Profile in the Home Pool [rdqeDcw]: rwd
Issue the command?
(yes, no) [no]: n
The update profile command can also be used to update the profile secret. This command
can also be used to update a profile's access rights while leaving the profile secret unchanged.
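For example, a minimal sketch of rotating only the secret of an existing profile (the profile
name Application1 and the secret file path are hypothetical; accept the defaults for the
remaining prompts to leave them unchanged):
Config# update profile Application1
Profile Secret [unchanged]: File C:\temp\NewSecret.txt
Enable Profile? (yes, no) [no]:
Monitor Capability? (yes, no) [no]:
Profile Type (cluster, access) [access]:
Home Pool [default]:
Granted Rights for the profile in the home pool [rdqeDcw]:
Issue the command? (yes, no) [no]: Y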

Delete a Profile
Syntax
delete profile <profile name>

Use
This command deletes a profile.

Output
Config# delete profile App1
Issue the command? (yes, no) [no]:

Description
The following information is displayed:
? Profile Name: Enter the profile name to delete.


The profile must exist on the cluster or an error message is returned.


The following errors can be returned:
? The profile <profile name> was not found on the server.
The root profile and the anonymous profile cannot be deleted. Attempting to delete either of
them will result in the following error message:
? Command failed: The user does not have permission to delete this profile.

Disable the Anonymous Profile


Syntax
update profile <name>

Use
This command modifies a specific profile.
When changing the profile to map to another home pool, the application needs to do another
FPPoolOpen (typically restart) for the new profile definition to take effect.

Output
Config# update profile FinanceProfile
Profile Secret [unchanged]:
Enable Profile? (yes, no) [no]:
Monitor Capability? (yes, no) [no]: N
Profile Type (cluster, access) [access]: access
Home Pool [default]: FinanceProfile


Granted Rights for the profile in the home pool [rdqeDcw]: rde
Issue the command? (yes, no) [no]: Y
Establish a pool entry authorization for application use? (yes, no) [no]: Y
Please enter pool authorization creation information: C:\FinanceProfile.pea

Description
The following information is displayed:
? Profile Name: This must be unique and is case sensitive.
? Profile Secret: This is the unique credentials associated with the profile. This can
either be generated by CentraStar automatically or the user can define a file that
holds the profile secret. Enter a directory to store the profile secret. The profile secret
is stored in a .TXT file on a directory on the C:\ drive.
? Monitor Capability: Retrieves Centera statistics.
? Profile Type: This can be set to a Cluster or Access profile. In this case, it should be
an Access profile.
? Home Pool: This is the profile's default pool. Every profile has a default pool. A
profile can perform all operations on its home pool. [rdqeDcw]
? Granted Rights: This sets the operations that can be performed on the pool of data in
the cluster.
? Pool Entry Authorization: This file contains the profile and password. Enter a
directory to store the .pea file.

The following error can be displayed when trying to update a non-existing profile:
? Command failed: The profile was not found on the cluster.
The operations that can be performed on a pool are read, write, delete, privileged-delete,
C-Clip Copy, query, exist, monitor and profile-driven metadata. The access rights of a profile
in a pool can also be updated using the set grants command.


View the Relationship between a Pool and a Profile


Syntax
show profile detail <profile name>

Use
This command displays the relationship a specific profile has with a pool in which it has been
granted rights.

Output
Config# show profile detail test
Centera Profile Detail Report
------------------------------------------------------
Generated on Friday 25 March 2005 17:08:36 CET
Profile Name:                   test
Profile Enabled:                no
Monitor Capability:             yes
Profile Metadata Capability:    no
Home Pool:                      default
Profile Type:                   Access
Cluster Mask:                   rdqe-cw-m-
Granted Rights in Application Pools:
Pool Name       Pool Mask       Granted         Effective
----------------------------------------------------------------------
default         rdqeDcw         rdqeDcw         rdqe-cwm
----------------------------------------------------------------------
Scratchpad Pool Mapping:
Active Pool Mapping:            -

Description
The following information is displayed:
? Profile Name: Name of the profile.
? Profile Enabled: Profile can be enabled or disabled.
? Monitor Capability: Ability to retrieve Centera statistics.
? Profile Metadata Capability: Supports storing/retrieving per-profile metadata,
automatically added to the CDF. 'Enabled' or 'Disabled'.
? Home Pool: Default pool of the profile.
? Profile Type: Access or Cluster profile.
? Cluster Mask: Access rights associated with the profile.
? Granted Rights in Application Pools: Access rights the pool has granted the profile.
o Pool Name: Name of the pool.
o Pool Mask: Access rights assigned to the pool.
o Granted: Access rights the pool has granted the profile.
o Effective: Effective rights of the profile in the pool (the granted rights limited by the masks).
o Scratchpad Pool Mapping:
o Active Pool Mapping:

Enable Replication of Delete


The purpose of this example is to explain how to enable replication of delete. Replication of
delete means that if a file is deleted on the source cluster, then this delete operation will also
be performed on the target cluster.
Enable replication using the set cluster replication command.
Delete has to be enabled on the source and target cluster to delete C-Clips. If the
configuration settings are different between the source and target cluster, then delete will fail
and replication will be paused.

Example
Config# set cluster replication
Replication Enabled? (yes, no) [no]: Y
Replication Address : 10.69.136.126:3218,10.69.136.127:3218
Replicate Delete? (yes, no) [no]: Y
Replicate incoming replicated Objects? (yes, no) [yes]: N
Replicate all Pools? (yes, no) [yes]: N
Pools to Replicate: action
Profile Name: New_York
Location of .pea file: C:\Console1
Issue the command?
(yes, no) [no]: Y


Learn More About Replication Topologies


Centera provides application failover access through replication. The following replication
topologies are supported:
? Unidirectional: Data written to a source cluster is automatically replicated to a target
cluster. In case of a disaster, the application server will fail over to the target cluster.
Failover means that no data is lost, thanks to Centera's self-healing abilities.
? Unidirectional (Hot Standby): Data written to a source cluster is automatically
replicated to a target cluster. In case of disaster, there are two application servers
available. Data written to cluster A will automatically be replicated to cluster B. In
case of a disaster, application server 2 will fail over to cluster B. When cluster A and
application server 1 are available again, data from cluster B has to be restored to
cluster A before application server 1 starts reading and writing data to cluster A. The
restore operation only has to write the data that is not stored on the source cluster.
The restore guarantees that both clusters contain the same data.
? Bidirectional: Data written to a source cluster is automatically replicated to a target
cluster. In case of disaster, there are two application servers available. Data written to
cluster A will automatically be replicated to cluster B and data written to cluster B
will automatically be replicated to cluster A. In case of a disaster, cluster A will
fail over to cluster B for read operations and cluster B will fail over to cluster A. There
is no need for a restore after a disaster.
? Chain: Data written to a source cluster is automatically replicated to a target cluster.
The target cluster then replicates this data to a third cluster. A maximum of three
clusters can replicate to a source cluster.
? Incoming Star: A cluster is a destination for multiple replication processes. A
maximum of three clusters can replicate to a source cluster.


Monitor Replication
Syntax
show replication detail

Use
This command displays a detailed replication report.

Output
Centera Replication Detail Report
--------------------------------------------------Generated on Thursday January 13 2005 11:19:20 CET
Replication Enabled: Thursday January 13 2005 6:29:24 CET
Replication Paused: no
Replication Address: 10.99.129.427:3218
Replicate Delete: yes
Profile Name: <no profile>
Number of C-Clips to be replicated: 3,326
Number of Blobs to be replicated: 3,326
Number of MB to be replicated: 815
Number of Parked Entries: 0
Replication Speed: 0.06 C-Clip/s
Replication Speed: 12.70 Kb/s
Ingest: 2.89 C-Clip/s
Replication ETA: N/A
Replication Lag: 15 hours

Description
The following information is displayed:
? Replication Enabled: Date on which the replication was enabled. If replication has
not been setup the status is Replication Disabled.
? Replication Paused: Indicates whether replication has been paused or not. Possible
states are:
o No
o Yes (by user)
o Yes (integrity problem)
o Yes (regenerations ongoing)
o Yes (remote cluster is full)
o Yes (parking overflow)
o Yes (authentication failure)
o Yes (insufficient capabilities)

? Replication Address: IP address or the host name of the replication cluster.


? Replicate Delete: Indicates whether objects deleted on the source cluster will also be
deleted on the replication cluster.
? Profile Name: Name of the profile used on the replication cluster.
? Number of C-Clips to be replicated: Number of C-Clips waiting to be replicated.
? Number of Blobs to be replicated: Number of blobs waiting to be replicated.
? Number of MB to be replicated: Size in MB of the blobs and C-Clips waiting to be
replicated.
? Number of Parked Entries: Indicates the number of entries that have failed to
replicate.
? Replication Speed: Indicates the speed at which replication is proceeding (C-Clip/s
and Kb/s).
? Ingest: Indicates the speed at which C-Clips are added to the replication queue.
? Replication ETA: Indicates the estimated date and time at which replication will be
complete based on the average activity of the last 24 hours.
? Replication Lag: Indicates the time it takes for a C-Clip to be replicated once it has
entered the replication queue.


Pause Replication
Syntax
replication pause

Use
This command pauses replication. When replication is paused, the system continues to queue
newly written data for replication, but will not write any data to the target cluster until
replication is resumed.
Always pause replication on the source cluster before shutting down the target cluster.
Resume replication on the source cluster when the target cluster is up and running again.

Output
Config# replication pause
Issue the command? (yes, no) [no]:

Description
The following error is displayed if replication is not enabled.

Prepare for Disaster Recovery


Replication and Restore
This section explains how to prepare for disaster recovery by replicating and when necessary
restoring data to other clusters, ensuring redundant copies of data are always available.

Replication
Replication complements Content Protection Mirrored (CPM) and Content Protection Parity
(CPP) by putting copies of data in geographically separated sites. If a problem renders an
entire cluster inoperable, the target cluster can support the application server until the
problem is fixed.
To copy data from one cluster to another, Centera can perform a restore operation. Restores
can be either full (all data) or partial (data stored between a specified start and end date).
Restore
Restore is a single operation that restores or copies data from a source cluster to a target
cluster and is only performed as needed by the system operator. Restores can be either Full
(all data) or Partial (data stored between a specified start and end date).
EMC recommends setting up replication before starting the application. Centera will only
replicate data that is written from the moment replication is enabled. To copy data that was
written before replication was enabled, use Restore.
To enable replication:
? Port 3218 must be available through a firewall or proxy for UDP and TCP for all
replication paths between the source and target cluster. Port 3682 can be enabled as
well to allow remote manageability connections (CV/CLI). This does not apply to
CE+ models.
? A valid EMC replication license is required to enable replication. Replication can
only be set up by qualified EMC service personnel.
? To guarantee authorized access to the target cluster, EMC recommends using an
access profile in a replication setup.
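As an illustration of the recommended setup with an access profile, the following sketch
enables replication over port 3218; the addresses, pool name, profile name, and .pea path
are hypothetical.
Config# set cluster replication
Replication Enabled? (yes, no) [no]: Y
Replication Address: 10.10.10.11:3218, 10.10.10.12:3218
Replicate Delete? (yes, no) [no]: Y
Replicate incoming replicated Objects? (yes, no) [yes]: N
Replicate all Pools? (yes, no) [yes]: N
Pools to Replicate: Finance
Profile Name: ReplicationProfile
Location of .pea file: C:\ReplicationProfile.pea
Issue the command? (yes, no) [no]: Y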


Report Replication Performance


Syntax
show replication detail

Use
This command displays a detailed replication report.

Output
Centera Replication Detail Report
--------------------------------------------------Generated on Thursday January 13 2005 11:19:20 CET
Replication Enabled: Thursday January 13 2005 6:29:24 CET
Replication Paused: no
Replication Address: 10.99.129.427:3218
Replicate Delete: yes
Profile Name: <no profile>
Number of C-Clips to be replicated: 3,326
Number of Blobs to be replicated: 3,326
Number of MB to be replicated: 815
Number of Parked Entries: 0
Replication Speed: 0.06 C-Clip/s
Replication Speed: 12.70 Kb/s
Ingest: 2.89 C-Clip/s
Replication ETA: N/A
Replication Lag: 15 hours

Description
The following information is displayed:
? Replication Enabled: Date on which the replication was enabled. If replication has
not been setup the status is Replication Disabled.
? Replication Paused: Indicates whether replication has been paused or not. Possible
states are:

o No
o Yes (by user)
o Yes (integrity problem)
o Yes (regenerations ongoing)
o Yes (remote cluster is full)
o Yes (parking overflow)
o Yes (authentication failure)
o Yes (insufficient capabilities)

? Replication Address: IP address or the host name of the replication cluster.


? Replicate Delete: Indicates whether objects deleted on the source cluster will also be
deleted on the replication cluster.
? Profile Name: Name of the profile used on the replication cluster.
? Number of C-Clips to be replicated: Number of C-Clips waiting to be replicated.
? Number of Blobs to be replicated: Number of blobs waiting to be replicated.
? Number of MB to be replicated: Size in MB of the blobs and C-Clips waiting to be
replicated.
? Number of Parked Entries: Indicates the number of entries that have failed to
replicate.
? Replication Speed: Indicates the speed at which replication is proceeding (C-Clip/s
and Kb/s).
? Ingest: Indicates the speed at which C-Clips are added to the replication queue.
? Replication ETA: Indicates the estimated date and time at which replication will be
complete based on the average activity of the last 24 hours.
? Replication Lag: Indicates the time it takes for a C-Clip to be replicated once it has
entered the replication queue.


Resume Replication
Syntax
replication resume

Use
This command resumes replication when paused. When resuming, all queued data is written
to the target cluster.
Always pause replication on the source cluster before shutting down the target cluster.
Resume replication on the source cluster when the target cluster is up and running again.

Output
Config# replication resume
Issue the command?
(yes, no) [no]:

Description
The following error is displayed if replication is disabled.

Set Up Bidirectional Replication of One or More Pools


The purpose of this procedure is to explain how to set up bidirectional replication between
two clusters. The source cluster contains one or more pools that need to be replicated to the
target cluster and the target cluster contains one or more pools that need to be replicated to
the source cluster.
Refer to the CLI Reference Guide for the complete version of each CLI command.
A pool is regarded as identical only if its pool ID is the same. Manually re-creating a pool
generates a different pool ID (the ID is unique for each pool), so replication/restore will fail.
Use the Export/Import commands to copy a pool to another cluster.
The following procedure is graphically represented here.
1. Launch the CLI and connect to the source cluster.
2. Determine which pools to replicate using the show pool list command.
3. Export pool and profile information of the pools to replicate to a file using the export
poolprofilesetup command. Enter the pool names and a location and name for the
generated file (local machine).
4. Launch another CLI session and connect to the target cluster.
5. Determine which pools to replicate using the show pool list command.
6. Export pool and profile information of the pools to replicate to a file using the export
poolprofilesetup command. Enter the pool names and a location and name for the
generated file (local machine). Do not include the default pool because it already
exists on the target cluster.
7. Import the pool and profile information of the pools that will be replicated using the
import poolprofilesetup command. Enter the location and name of the file (local
machine) as given in step 3.
8. Create an access profile that will be used by the source cluster to replicate data to the
target cluster using the create profile command followed by the name of the new
profile. Establish a .pea file and enter the location and name for this file (local
machine).
9. Grant the replication profile the clip-copy right for the pools on the target cluster to
which data will be replicated using the set grants command. Enter c when asked for
pool rights. Issue this command for each of the replication pools, including the
default pool if needed.
10. Go to the CLI session that is connected to the source cluster.
11. Import the pool and profile information of the pools that will be replicated using the
import poolprofilesetup command. Enter the location and name of the file (local
machine) as given in step 6.
12. Create an access profile that will be used by the target cluster to replicate data to the
source cluster using the create profile command followed by the name of the new
profile. Establish a .pea file and enter the location and name for this file (local
machine).
13. Grant the replication profile the clip-copy right for the pools on the source cluster to
which data will be replicated using the set grants command. Enter c when asked for
pool rights. Issue this command for each of the replication pools, including the
default pool if needed.


14. Enable replication using the set cluster replication command. Enter the IP address
and port number of the target cluster, enter the pool names to replicate, and enter the
location and name of the replication .pea file (local machine) as given in step 8.
Do not select Replicate incoming replicated objects.
15. Go to the CLI session that is connected to the target cluster.
16. Enable replication using the set cluster replication command. Enter the IP address
and port number of the source cluster, enter the pool names to replicate, and enter
the location and name of the replication .pea file (local machine) as given in step 12.
Do not select Replicate incoming replicated objects.
17. Quit both CLI sessions.
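A condensed sketch of the commands on each cluster follows, assuming a single custom
pool named Finance on both sides; the profile names, addresses, and file paths are
hypothetical, and the full prompt sequences are shown under the individual commands in
this guide.
Source cluster
Config# show pool list
Config# export poolprofilesetup            (export Finance to C:\SourcePools)
Target cluster
Config# show pool list
Config# export poolprofilesetup            (export Finance to C:\TargetPools)
Config# import poolprofilesetup            (import C:\SourcePools)
Config# create profile ReplFromSource      (establish C:\ReplFromSource.pea)
Config# set grants Finance ReplFromSource
Granted pool rights for profile [r------]: c
Source cluster
Config# import poolprofilesetup            (import C:\TargetPools)
Config# create profile ReplFromTarget      (establish C:\ReplFromTarget.pea)
Config# set grants Finance ReplFromTarget
Granted pool rights for profile [r------]: c
Config# set cluster replication            (target address, pool Finance, C:\ReplFromSource.pea;
                                            do not replicate incoming replicated objects)
Target cluster
Config# set cluster replication            (source address, pool Finance, C:\ReplFromTarget.pea;
                                            do not replicate incoming replicated objects)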

Set Up Chain Replication of One or More Pools


The purpose of this procedure is to explain how to set up chain replication between a source
cluster and two target clusters. The source cluster contains one or more pools that need to be
replicated to the first target cluster and the first target cluster contains one or more pools that
need to be replicated to the second target cluster.
Refer to the CLI Reference Guide for the complete version of each CLI command.
A pool is regarded as identical only if its pool ID is the same. Manually re-creating a pool
generates a different pool ID (the ID is unique for each pool), so replication/restore will fail.
Use the Export/Import commands to copy a pool to another cluster.
The setup of chain replication is the same as for unidirectional replication between a source
and one target cluster. Additionally the user can set up unidirectional replication between the
first and second target cluster.


The following procedure is graphically represented here.


1. Launch the CLI and connect to the source cluster.
2. Determine which pools to replicate using the show pool list command.
3. Export pool and profile information of the pools to replicate to a file using the export
poolprofilesetup command. Enter the pool names and a location and name for the
generated file (local machine).
4. Launch another CLI session and connect to the first target cluster.
5. Determine which pools to replicate using the show pool list command.
6. Export pool and profile information of the pools to replicate to a file using the export
poolprofilesetup command. Enter the pool names and a location and name for the
generated file (local machine). Do not include the default pool because it already
exists on the target cluster.
7. Import the pool and profile information of the pools that will be replicated using the
import poolprofilesetup command. Enter the location and name of the file (local
machine) as given in step 3.
8. Create an access profile that will be used by the source cluster to replicate data to the
target cluster using the create profile command followed by the name of the new
profile. Establish a .pea file and enter the location and name for this file (local
machine).
9. Grant the replication profile the clip-copy right for the pools on the target cluster to
which data will be replicated using the set grants command. Enter c when asked for
pool rights. Issue this command for each of the replication pools, including the
default pool if needed.
10. Launch another CLI session and connect to the second target cluster.
11. Import the pool and profile information of the pools that will be replicated using the
import poolprofilesetup command. Enter the location and name of the file (local
machine) as given in step 6.
12. Create an access profile that will be used by the first target cluster to replicate data to
the second target cluster using the create profile command followed by the name of
the new profile. Establish a .pea file and enter the location and name for this file
(local machine).
13. Grant the replication profile the clip-copy right for the pools on the second target
cluster to which data will be replicated using the set grants command. Enter c when
asked for pool rights. Issue this command for each of the replication pools, including
the default pool if needed.
14. Go to the CLI session that is connected to the source cluster.
15. Enable replication using the set cluster replication command. Enter the IP address
and port number of the first target cluster, enter the pool names to replicate, and
enter the location and name of the replication .pea file (local machine) as given in
step 8.
16. Go to the CLI session that is connected to the first target cluster.
17. Enable replication using the set cluster replication command. Enter the IP address
and port number of the second target cluster, enter the pool names to replicate, and
enter the location and name of the replication .pea file (local machine) as given in
step 12.
When asked for Replicate incoming replicated objects select Yes.
18. Quit all CLI sessions.
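The distinguishing setting in a chain is on the first target cluster, which must forward
incoming replicated objects to the second target cluster. A minimal sketch of that final step
follows, with hypothetical addresses, pool name, profile name, and .pea path; it assumes
the pools and the clip-copy grants have already been set up as described above.
First target cluster
Config# set cluster replication
Replication Enabled? (yes, no) [no]: Y
Replication Address: 10.20.30.41:3218, 10.20.30.42:3218
Replicate Delete? (yes, no) [no]: Y
Replicate incoming replicated Objects? (yes, no) [yes]: Y
Replicate all Pools? (yes, no) [yes]: N
Pools to Replicate: Finance
Profile Name: ReplToSecondTarget
Location of .pea file: C:\ReplToSecondTarget.pea
Issue the command? (yes, no) [no]: Y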

Set Up Replication
Syntax
set cluster replication

Use
This command sets the IP address of the target cluster and enables or disables
replication. Replication should be configured with multiple IP addresses. If only one IP
address has been configured, replication will stop when the node with the defined IP address
goes down.
Furthermore, do not leave IP addresses of nodes that are unavailable for an extended period
of time in the configured list. Pause replication before changing IP addresses.
Replicating a 2.4 cluster to a 1.2 cluster is not supported. Replicating from a 2.4 cluster to a
2.0.1 or earlier version cluster is not supported if storage strategy performance is set. The
blobs with naming scheme GUID-MD5 (if storage strategy performance is set) or MD5-GUID
(if storage strategy capacity and Content Address Collision Avoidance is set) will block the
replication.


Replicated Compliance Clusters need to be time synchronized when using the Global Delete
Option

Output
Config# set cluster replication
Replication Enabled? (yes, no) [no]: Y
Replication Address [10.88.999.191:3218]: 10.88.999.191:3218,
10.68.999.111:3218
Replicate Delete? (yes, no) [yes]: Y
Replicate incoming replicated Objects? (yes, no) [yes]: Y
Replicate all Pools? (yes, no) [yes]: Y
Profile Name:
Issue the command?
(yes, no) [no]:

Description
The following information is displayed:
? Replication Enabled: If replication is enabled (yes), all new data will be replicated to
the specified address. Replication requests are generated as data is written to the
cluster. If replication is disabled, no data will be replicated. Multiple IP addresses can
be added in dotted notation or FQDN. In order to use FQDNs, a DNS server must be
configured.
? Replication Address: Cluster IP address. This is the target cluster where data is
being replicated to. The address consists of the host name or IP address, followed by
the port number of the replication cluster. Separate multiple addresses with a
comma. The following is an example of the different replication addresses:
Host name: NewYork_183:3218, NewYork_184:3218
IP address: 10.68.999.111:3218, 10.70.129.123:3218
? Replicate Delete: Propagates deletes and privileged deletes of C-Clips on the source
cluster to the target cluster. The corresponding pool must exist on the target cluster
and replication/restore is automatically paused if a C-Clip is replicated for which the
pool does not exist on the target cluster.


? Replicate Incoming replicated objects : Enter Yes to enable this. This must be
enabled on the middle cluster in a Chain topology. This does not have to be enabled
in a Unidirectional or Bidirectional topology.
? Replicate all Pools: One or more pools can be replicated. Enter Yes to replicate all
pools on the cluster to the target cluster. Enter No to select the pool(s) to replicate.
? Profile Name: Profile name that was created on the target cluster. The export/import
poolprofilesetup command enables the user to export pools and/or profiles to
another cluster.
? Location of .pea file: Location of the .pea file on your local machine. Enter the full
pathname to this location.
The following errors can be returned:
When multiple credentials are present in the .pea file, the following error will be displayed:
? Error: Multiple credentials for the given profile name defined in .pea file.
If no credential is defined in the .pea file, the following error will be displayed:
? Error: No credential for the given profile name defined in .pea file.
Make sure that your application server has access to both Access Profiles.

Set Up Star Replication on One or More Pools


The purpose of this procedure is to explain how to set up star replication between two source
clusters and one target cluster. The two source clusters contain one or more pools that need
to be replicated to one target cluster.
Refer to the CLI Reference Guide for the complete version of each CLI command.
A pool is regarded as identical only if its pool ID is the same. Manually re-creating a pool
generates a different pool ID (the ID is unique for each pool), so replication/restore will fail.
Use the Export/Import commands to copy a pool to another cluster.
The setup of star replication is the same as for unidirectional replication between one source
and one target cluster. In this use case we additionally set up unidirectional replication
between a second source cluster and the target cluster.
To support star replication with a third source cluster, set up unidirectional replication
between the third source cluster and the target cluster.
1. Launch the CLI and connect to the first source cluster.
2. Determine which pools to replicate using the show pool list command.
3. Export pool and profile information of the pools to replicate to a file using the export
poolprofilesetup command. Enter the pool names and a location and name for the
generated file (local machine).
4. Launch another CLI session and connect to the second source cluster.
5. Determine which pools to replicate using the show pool list command.
6. Export pool and profile information of the pools to replicate to a file using the export
poolprofilesetup command. Enter the pool names and a location and name for the
generated file (local machine). Do not include the default pool because it already
exists on the target cluster.
7. Launch another CLI session and connect to the target cluster.
8. Import the pool and profile information of the pools that will be replicated from the
first source cluster using the import poolprofilesetup command. Enter the location
and name of the file (local machine) as given in step 3.
9. Create an access profile that will be used by the first source cluster to replicate data
to the target cluster using the create profile command followed by the name of the
new profile. Establish a .pea file and enter the location and name for this file (local
machine).
10. Grant the replication profile the clip-copy right for the pools on the target cluster to
which data will be replicated using the set grants command. Enter c when asked for
pool rights. Issue this command for each of the replication pools, including the
default pool if needed.
11. Import the pool and profile information of the pools that will be replicated from the
second source cluster using the import poolprofilesetup command. Enter the
location and name of the file (local machine) as given in step 6.
12. Create an access profile that will be used by the second source cluster to replicate
data to the target cluster using the create profile command followed by the name of
the new profile. Establish a .pea file and enter the location and name for this file
(local machine).


Use the same access profile as created in step 9 and establish a .pea file for that profile on this
cluster.
13. Grant the replication profile the clip-copy right for the pools on the target cluster to
which data will be replicated using the set grants command. Enter c when asked for
pool rights. Issue this command for each of the replication pools, including the
default pool if needed.
14. Go to the CLI session that is connected to the first source cluster.
15. Enable replication using the set cluster replication command. Enter the IP address
and port number of the target cluster, enter the pool names to replicate, and enter the
location and name of the replication .pea file (local machine) as given in step 9.
16. Go to the CLI session that is connected to the second source cluster.
17. Enable replication using the set cluster replication command. Enter the IP address
and port number of the target cluster, enter the pool names to replicate, and enter the
location and name of the replication .pea file (local machine) as given in step 12.
18. Quit all CLI sessions.
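Because both source clusters replicate to the same target cluster, each source simply points
its replication at that target with its own .pea file. A minimal sketch for the second source
cluster follows; the address, pool name, profile name, and .pea path are hypothetical.
Second source cluster
Config# set cluster replication
Replication Enabled? (yes, no) [no]: Y
Replication Address: 10.40.50.61:3218
Replicate Delete? (yes, no) [no]: Y
Replicate incoming replicated Objects? (yes, no) [yes]: N
Replicate all Pools? (yes, no) [yes]: N
Pools to Replicate: Finance
Profile Name: ReplFromSource2
Location of .pea file: C:\ReplFromSource2.pea
Issue the command? (yes, no) [no]: Y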


Set up Unidirectional Replication of one or more pools


The purpose of this procedure is to explain how to set up unidirectional replication between
a source cluster and a target cluster. The source cluster contains one or more custom pools
that need to be replicated to the target cluster.
Additionally, the default pool can be selected for replication.
Refer to the CLI Reference Guide for the complete version of each CLI command.
A pool is regarded as identical only if its pool ID is the same. Manually re-creating a pool
generates a different pool ID (the ID is unique for each pool), so replication/restore will fail.
Use the Export/Import commands to copy a pool to another cluster.
1. Launch the CLI and connect to the source cluster.
2. Determine which pools to replicate using the show pool list command.
3. Export the pool and profile information of the pool to replicate to a file using the
export poolprofilesetup command. Do not include the default pool because it
already exists on the target cluster.
4. Launch another CLI session and connect to the target cluster.
5. Import the pool and profile information of the pools that will be replicated using the
import poolprofilesetup command. Enter the location and name of the file (local
machine) as given in step 3.
6. Create an access profile using the create profile command. This profile will be used
by the source cluster to replicate data to the target cluster. Establish a .pea file and
enter the location and name for this file (local machine).
7. Grant the replication profile the clip-copy right for the pools on the target cluster to
which data will be replicated using the set grants command. Enter c when asked for
pool rights. Issue this command for each of the replication pools, including the
default pool if needed.
8. Return to the CLI session that is connected to the source cluster.
9. Enable replication using the set cluster replication command. Enter the IP address
and port number of the target cluster, enter the pool names to replicate, and enter the
location and name of the replication .pea file (local machine) as given in step 6.
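Putting the steps together, a minimal end-to-end sketch for one custom pool follows; the
pool name Finance, the profile name, and all file paths are hypothetical.
Source cluster
Config# show pool list
Config# export poolprofilesetup
Export complete setup? (yes, no) [no]: n
Export based on pool or profile selection? (pool, profile) [pool]: pool
Export all Pools? (yes, no) [no]: n
Pools to Export: Finance
Please enter the pathname where the export file should be saved: C:\FinanceExport
Target cluster
Config# import poolprofilesetup
Enter the pathname for the pool import data: C:\FinanceExport
Config# create profile FinanceRepl
(accept the defaults for the remaining create profile prompts)
Establish a Pool Entry Authorization for application use? (yes, no) [no]: Y
Enter Pool Authorization creation information: C:\FinanceRepl.pea
Config# set grants Finance FinanceRepl
Granted pool rights for profile [r------]: c
Source cluster
Config# set cluster replication
Replication Enabled? (yes, no) [no]: Y
Replication Address: 10.10.10.21:3218
Replicate all Pools? (yes, no) [yes]: N
Pools to Replicate: Finance
Profile Name: FinanceRepl
Location of .pea file: C:\FinanceRepl.pea
Issue the command? (yes, no) [no]: Y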


View Detailed Replication Statistics


Syntax
show replication parking

Use
This command provides detailed statistics about the replication parking.
If the cluster is heavily loaded or if there are many parked entries this command does not
show all parked entries. Use the show replication detail command instead.

Output
Centera Replication Parking Report
------------------------------------------------------Generated on Thursday January 13 2005 11:19:27 CET
Replication Enabled: Thursday January 13 2005 6:29:24 CET
Replication Paused: no
Replication Address: 10.99.129.216:3218
Replicate Delete: yes
Profile Name: <no profile>
Number of Parked Entries: 3
Failed Writes: 1
Failed Deletes: 2
Failed Privileged Deletes: 0
Failed Unknown Action: 0

Description
The following information can be displayed:
? Replication Enabled: Date on which the replication was enabled. If replication has
not been setup the status is Replication Disabled.
? Replication Paused: Replication has been paused or not. Possible states are:
o No
o Yes (by user)
o Yes (integrity problem)
o Yes (regenerations ongoing)
o Yes (remote cluster is full)
o Yes (parking overflow)
o Yes (authentication failure)
o Yes (insufficient capabilities)

? Replication Address: IP address or host name of the replication cluster.


? Replicate Delete: Objects deleted on the cluster will also be deleted on the replication
cluster.
? Profile Name: Name of the profile used on the replication cluster.
? Number of Parked Entries: Number of entries that have failed to replicate.
? Failed Writes: Number of writes that have failed to be replicated.
? Failed Deletes: Number of deletes that have failed to be replicated.
? Failed Privileged Deletes: Number of privileged deletes on a Centera CE that have
failed to be replicated.
? Failed Unknown Action: Number of C-Clips in the parking that could not be read.


Support Application Failover


The purpose of this procedure is to explain how to support multicluster failover for all
operations that fail on the source cluster by enabling the same list of capabilities on the target
cluster.
To support multicluster failover for applications the user has to make sure that the access
profile(s) that the application uses to access data on the target cluster(s) have the necessary
rights to support the failover strategy on the target cluster(s).
In a replication setup, the application automatically fails over to the target cluster(s) to read a
C-Clip if that C-Clip cannot be found on the source cluster. The SDK supports failover
strategies for operations other than read (write, delete, exist, and query).
Check detailed application settings with the application vendor. Refer to the Centera API
Reference Guide, P/N 069001185, and the Centera Programmers Guide, P/N 069001127, for
more information on multicluster failover.
Refer to the CLI Reference Guide for the complete version of each CLI command.
A pool is regarded as identical only if its pool ID is the same. Manually re-creating a pool
generates a different pool ID (the ID is unique for each pool), so replication/restore will fail.
Use the Export/Import commands to copy a pool to another cluster.
1. Launch the CLI and connect to the source cluster.
2. Determine which access profile(s) need to fail over to the target cluster using the
show profile list command.
3. Enter export poolprofilesetup. Select No when asked to export the complete setup, or
Yes to fail over all profiles.
4. Select Profile to export and enter the name(s) of the profile(s).
5. Enter a location and name for the file that will store the export information. The
export file contains information on the selected profiles (profile name and secret).
6. Launch another CLI session and connect to the target (failover) cluster.
7. Enter import poolprofilesetup. Enter the pathname of the export file created in
step 3.
The effective rights for the same access profile on the two clusters might differ because the
cluster masks can be different.
8. Enter update profile profile_name for all profiles that have been exported. Accept
the default settings and enter a path and file name for the .pea file. This command is
issued to generate .pea files for the exported profiles on the target cluster.
9. Quit both CLI sessions.
10. Merge the .pea file(s) that were generated in step 8 into one .pea file that is accessible
by the application.
This procedure shows how to support multicluster failover for all operations that fail on the
source cluster. Depending on the application's settings, it might not be necessary to support
failover for all operations. Check detailed application settings with your application vendor.
Refer to the Centera API Reference Guide, P/N 069001185, and the Centera Programmers Guide,
P/N 069001127, for more information on multicluster failover.
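A condensed sketch of the failover-profile setup follows, assuming a single access profile
named App1 needs to fail over; the file paths are hypothetical.
Source cluster
Config# show profile list
Config# export poolprofilesetup
Export complete setup? (yes, no) [no]: n
Export based on pool or profile selection? (pool, profile) [pool]: profile
(enter App1 and, for example, C:\App1Export when prompted)
Target (failover) cluster
Config# import poolprofilesetup
Enter the pathname for the pool import data: C:\App1Export
Config# update profile App1
Profile Secret [unchanged]:
Establish a pool entry authorization for application use? (yes, no) [no]: Y
Please enter pool authorization creation information: C:\App1_target.pea
Finally, merge the generated .pea file with the source cluster's .pea file so that the
application can authenticate against either cluster.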

Monitor Restore
Syntax
show restore detail

Use
This command displays the progress of the restore procedure.
When a restore is performed for a specific pool using the CLI, sometimes this command does
not display the full output.

Output
Config# show restore detail
Centera Restore Detail Report
--------------------------------------------------------

Generated on Monday 4 April 2005 14:08:14 CEST
Restore Started:                             Monday 4 April 2005 9:41:26 CEST
Restore Finished:                            Monday 4 April 2005 9:52:06 CEST
Restore Address:                             10.69.133.231:3218
Restored Pools:                              all pools
Profile Name:                                <no profile>
Restore Mode:                                full
Restore Mode:                                10/10/1990 to 10/11/1990
Restore Checkpoint:                          N/A
Estimated number of C-Clips to be restored:  N/A
Number of Parked Entries:                    0
Restore Speed:                               0.00 C-Clip/s
Restore Speed:                               0.00 Kb/s
Restore ETA:                                 N/A

Description
The following information is displayed:
? Restore Started: Date and time at which the restore was started. If no restore has
been started the status is Restore Disabled.
? Restore Finished: Date and time at which the restore finished.
? Restore Address: Restore IP address and port number.
? Restored Pools: Name of the pools which have been restored.
? Profile Name: Name of the restore profile (no profile for anonymous profile).
? Restore Mode: Restore mode (options are full or partial).
? Restore Checkpoint: Timestamp of the C-Clip that the restore operation is currently
restoring. This gives an indication of the progress of the restore process.
? Estimated number of C-Clips to be restored: Number of C-Clips that will be
restored.
? Number of Parked Entries: Number of C-Clips placed in the parking queue.
? Restore Speed: Speed of Restore operation.
? Restore ETA: Time when the restore operation will complete.


Pause Restore
Syntax
restore pause

Use
This command pauses the restore process.
When a restore operation is launched just after a restart of the target cluster, the operation
may be paused, reporting that the target cluster is full. Wait a couple of minutes before
issuing the restore resume command.

Output
Config# restore pause
Issue the command? (yes, no) [no]:

Description
Restore is now paused.

Restore Specific Pools


The purpose of this procedure is to explain how to restore only selective pools.
A pool is regarded as identical only if its pool ID is the same. Manually re-creating a pool
generates a different pool ID (the ID is unique for each pool), so replication/restore will fail.
Use the Export/Import commands to copy a pool to another cluster. Monitor capabilities are
required on the source cluster.
1. Launch the CLI and connect to the source cluster.
2. Determine which pools to restore using the show pool list command.
3. Export pool and profile information of the pools to restore to a file using the export
poolprofilesetup command.
4. Do not export the complete setup. Export based on pool selection when prompted.
5. Enter the pool names and a location and name for the generated file (local machine).
6. On the target cluster, launch the CLI.
7. Import the pool and profile information of the pools that will be restored using the
import poolprofilesetup command. Enter the location and name of the file (local
machine) as given in step 5.
8. Import based on pools when prompted.
9. Enter the name of the pool(s) to import.
10. Create an access profile that will be used by the source cluster to restore data to the
target cluster using the create profile command followed by the name of the new
profile. Establish a .pea file and enter the location and name for this file (local
machine).
11. Grant the restore profile the clip-copy right for the pools on the target cluster to
which data will be restored using the set grants command. Enter c when asked for
pool rights. Issue this command for each of the restore pools.
12. Return to the CLI on the source cluster.
13. Enable restore using the restore start command. Enter the IP address and port
number of the target cluster. Enter either Full or Partial mode and, for a partial
restore, enter the Start and End dates. All pools and associated profiles written
within this period will be restored to the target cluster.

The following example is used to explain the above steps.


Refer to the CLI Reference Guide for the complete version of each CLI command.

Restore Specific Pools Example


In this example, two pools are selected to be restored. The export/import poolprofilesetup
commands are used to manage pools and profiles across clusters.
A pool is regarded as identical from the moment the ID is the same. Manually re-creating a
pool does not generate the same pool ID, which is unique for each pool; hence replication/restore
will fail. Use the Export/Import commands to copy a pool to another cluster.
Centera 1
Config# export poolprofilesetup
Export complete setup? (yes, no) [no]: n
Export based on pool or profile selection? (pool, profile) [pool]: pool


Export all Pools? (yes, no) [no]: n


Pools to Export: Centera, Console
Please enter the pathname where the export file should be saved:
C:\ExportFile
Exported 2 pools and 1 profile.
Centera 2
Config# import poolprofilesetup
Enter the pathname for the pool import data: C:\ExportFile
Found 2 pools and 1 profile.
Import pools? (yes, no) [yes]: Y
Start Restore
Config# restore start
Restore Address: 10.68.999.111:3218, 10.69.133.3:3218
Mode (full, partial):
{if partial}
Start Date (MM/dd/yyyy):
Stop Date (MM/dd/yyyy):
Profile Name:
{if profile is given}
Location of .pea file: C:\ExportFile
Issue the command? (yes, no) [yes]:

Resume Restore
Syntax
restore resume

Use
This command resumes the paused restore process.
When a restore operation is launched just after a restart of the target cluster, the operation
may be paused, reporting that the target cluster is full. Wait a couple of minutes before
issuing the restore resume command.
The number of restored C-Clips reported by the CLI is an estimate only. This number is
calculated by halving the total number of C-Clips exported (as reported by each storage
node).

Output
Config# restore resume
Issue the command?
(yes, no) [no]:

Description
The restore process is now resumed.

Perform a Restore
Syntax
restore start

Use
This command restores data from the cluster to the given restore address. Restore does not
perform a delete for the reflections it encounters. A C-Clip and a reflection for that C-Clip can
both be present on the same cluster. For every C-Clip that needs to be copied, a cluster wide
location lookup is carried out. If this reveals that there is a reflection with a newer write
timestamp, the C-Clip is not copied.
Restore should be configured with multiple IP addresses. If only one IP address has been
configured, restore will stop when the node with the defined IP address goes down.
When a restore operation is launched just after a restart of the target cluster, the operation
may be paused, reporting that the target cluster is full. Wait a couple of minutes before
issuing the restore resume command.
The number of restored C-Clips reported by the CLI is an estimate only. This number is
calculated by halving the total number of C-Clips exported (as reported by each storage
node).

Output
Config# restore start
Restore Address: 10.68.999.111:3218, 10.69.188.4:3218
Mode (full, partial):
{if partial}
Start Date (MM/dd/yyyy): 04/14/2005
Stop Date (MM/dd/yyyy): 04/25/2005
Profile Name: Centera
{if profile is given}
Location of .pea file:
Issue the command? (yes, no) [yes]:

Description
The following information is displayed:
• Restore Address: The IP address and port number (the port number is 3218) to which
objects will be restored.
• Mode: Indicates whether a full restore of all data on the cluster will be processed
(full) or whether the restore will only process data that is stored between the
specified start and end date (partial).
• Start Date: The date from which the restore range should begin.
• End Date: The date at which the restore range should end.
• Profile Name: The name of the profile on the target cluster.
• Location of .pea file: The location of the .pea file on your local machine. Enter the
full pathname to this location.
There are a number of possible errors that can be received:
If restore is already running, the following error will be displayed:
• Error: Restore is active. First cancel active restoration.
When multiple credentials for the profile name are present in the .pea file, the following error
will be displayed:
• Error: Multiple credentials for the given profile name defined in .pea file.
If no credential is defined in the .pea file, the following error will be displayed:
• Error: No credential for the given profile name defined in .pea file.

Change the Password of the Administrator


Syntax
set security password

Use
This command changes the admin password. Change the default admin password
immediately after first login for security reasons.

Output
Config# set security password
Old Password:
New Password:
New Password (confirm):

Description
A valid admin password is required to connect to the cluster. The administrator's password
can be modified as follows:
1. Enter the default password.
2. Enter a new password. The characters , , &, /, <, >, \, ^, %, and space are not valid
in passwords.
3. Confirm the new password.
The following error is displayed when the user confirms a new password that does not
match the previously entered new password:
• Passwords are not equal

Change the Administrator's Details


Syntax
set owner

Use
This command changes the administrator's details and checks the cluster identification.

Output
Config# set owner
Administrator name [not configured]:John Doe
Administrator email [not configured]: JohnDoe@EMC.com
Administrator phone [not configured]: 555 3678
Location of the cluster [not configured]: Mechelen
Name of the cluster: EMC_Centera_989893
Serial number of the cluster [not configured]: CL12345567SEW3
Issue the command? (yes, no) [no]: Y


Description
The following information is displayed:
• Cluster administrator's details: Name, email address, phone number.
• Location of the cluster: Physical location of the cluster, for example, the location of
the customer site, floor, building number, and so on.
• Cluster name: Unique cluster name.
• Cluster serial number: EMC assigns this number. You can find it on the sticker with
P/N 005047603, 005048103, or 005048326 (located in the middle of the rear floor panel
directly inside the rear door).
Use the show config owner command to display these settings.

Change the Password to the Default


Syntax
set security default

Use
This command restores the default password of all users, except that of the administrator.

Output
Config# set security default
Issue the command? (yes, no) [no]:

Description
The password is set to the default for all users.


Lock All Cluster Nodes


Syntax
set security lock

Use
This command locks all nodes and restores the default network access security on all nodes
that are accessible. When this command is issued, only admin accounts can make connections
to the cluster for manageability. Current service connections will not be closed, but future
service connections will no longer be possible.

Output
Config# set security lock
Issue the command? (yes, no) [no]:

Description
If some nodes are down when this command is issued to lock all nodes, those nodes will not
be locked when they come back up.


Manage Access Control


Access Security Model
The Centera access security model prevents unauthorized applications from storing data on
or retrieving data from a Centera.
The security model operates on the application level, not on the level of individual end users.
A Centera authenticates applications before giving them access with application specific
permissions.

Pools and Profiles


The Centera security model is based on the concept of pools and application profiles.
Pools grant access rights (capabilities) to applications using application profiles. Access
profiles are assigned to applications by the system administrator.
Authenticated operations are performed on all C-Clips in the cluster. Enter the capabilities in
a single string to assign to a profile; for example, rw enables read and write, and rwd enables
read, write, and delete.
Access to a pool can be controlled at different levels: Cluster level (Cluster Mask), Pool level
(Pool Mask), and Profile level (ACL or Access Control List). A worked example follows the
analogy below.


Analogy
To understand the concept of controlling access rights at different levels, the analogy of a file
system is used below.
• Cluster Level (Cluster Mask) - file system: The delete operation can be denied by making the
file system as a whole read-only. No user has delete access to any file within any directory.
• Pool Level (Pool Mask) - directory: A particular directory can be made read-only. No user will
be able to delete any file within this directory, even if the ACL would allow it. However, the
user would be able to delete files in other directories.
• Profile Level (ACL or Access Control List) - ACLs on a directory: By default, no user has
access to any directory even if the two previous settings do not specifically disallow any
operation. The user has delete access to files within a directory only if the ACL explicitly
allows it.
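The following minimal sketch (Python, for illustration only; it is not a Centera command or
API) models the idea behind this analogy: an operation is permitted only when the
corresponding capability is allowed at the cluster level, the pool level, and the profile level.
The capability letters r, w, and d follow the single-string notation described above; the
values shown are example settings, not defaults.

# Illustration only: a capability is effective only if every level allows it.
cluster_mask = set("rwd")   # capabilities allowed cluster-wide (example values)
pool_mask    = set("rw")    # capabilities allowed on this pool
profile_acl  = set("rwd")   # capabilities granted to the profile on this pool

effective = cluster_mask & pool_mask & profile_acl
print("".join(sorted(effective)))   # prints "rw": read and write allowed, delete denied at pool level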

Manage Centera
Syntax
set security management

Use
This command enables an administrator to specify the IP address that will be used by
Centera to validate the manageability connection. If the address does not match, access will
be denied. When enabled, only the machine with the specified IP address can manage a
Centera.


Output
Config# set security management
Restrict management access (enabled, disabled) [disabled]: enabled
Management address: 10.69.155.6
Issue the command?
(yes, no) [no]: Y

Description
The following information is displayed:
• Restrict Management Access: Enabled or Disabled. When enabled, only the machine
with the specified IP address can manage Centera.
• Management Address: IP address used to manage Centera.
If the administrator unlocks the access node through CV or using the front panel, ssh
connections can be made from any station by somebody who knows the service or l3 password.
Unlocking of access nodes is not possible on a CE+ model. In that case ssh traffic is limited to
stations that have access to a storage node eth2 card.

Unlock a Specific Node


Syntax
set security unlock <node ID>

Use
This command unlocks a specific node in the system. Service connections to a node are only
possible if that node is unlocked. EMC engineers may require specific nodes to be unlocked
prior to a service intervention. Once a node is unlocked all users can connect to it.


Use the show security command to display the security settings.


You cannot use this command to unlock a node with the access role on a CE+ model.

Output
Config# set security unlock <node id>
Warning! Cluster security can be compromised if you unlock a node.
Issue the command? (yes/no) [no]:

Description
The following information is displayed:
• Node ID: A specific node.
The syntax of a node ID is cxxxnyy, where cxxx identifies the cabinet and nyy identifies the
node in that cabinet (xxx and yy are numeric).
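For example, to unlock the third node in the first cabinet (the node ID c001n03 is taken from
the sample output shown later in this section and is used here for illustration only):

Config# set security unlock c001n03
Warning! Cluster security can be compromised if you unlock a node.
Issue the command? (yes/no) [no]: yes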

View Security Status of a Node


Syntax
show security <scope>

Use
This command displays the node's security status and other information, for example: the
cube on which the node resides, node role, locked status, and compliance model. The scope of
this command is extensive and provides several security options to display.
The output below is sample output, displaying the security details for nodes with the access
role.

Output
Cube    Node       Roles      Status    Locked    Compliance
--------------------------------------------------------------------
1       c001n01    access     on        No        Basic
1       c001n02    access     on        No        Basic
1       c001n03    storage    on        No        Basic
1       c001n04    storage    on        No        Basic
1       c001n05    storage    on        No        Basic
1       c001n06    storage    on        No        Basic
1       c001n07    storage    on        No        Basic
1       c001n08    storage    on        No        Basic
--------------------------------------------------------------------

Description
The following information is displayed:
• Cube: Cube where the nodes are located.
• Node: Node IDs. Refer to Node IDs.
• Roles: Roles assigned to the node: access and/or storage, or spare.
• Status: Whether the node is on or off.
• Locked: Whether the node is locked (yes) or unlocked (no). When a node is locked, no
new service connections to that node are possible. Existing service connections will
not be closed. Only the administrator can make connections for manageability. When
a node is unlocked, service connections to that node are possible. All users can make
connections for manageability.
• Compliance: Whether it is a Basic Centera cluster node (Basic), a Compliance Edition
(CE) cluster node, or a Compliance Edition Plus (CE+) cluster node.


Configure SNMP
The Simple Network Management Protocol is an Internet standard protocol for managing
devices on IP networks. The Centera SNMP agent runs on all nodes with the access role and
proactively sends messages called Traps to a network management station.
A Compliance Edition Plus (CE+) model does not support SNMP access.
SNMP compliant devices, called agents, store data about themselves in Management
Information Bases (MIBs) and return this data to SNMP requesters.
This is a sample MIB file.
The SNMP agent runs on all nodes with the access role to retrieve information from an
SNMP enabled device, in this case, Centera.
Any SNMP tool can monitor any device defined in the MIB. An SNMP agent such as Centera
can thus send messages to a network management station (NMS).
These messages are called Traps. There are two traps defined in Centera: Heartbeat and
Alarm.

Heartbeat Traps
The heartbeat trap is sent on a regular basis. The primary function of the heartbeat trap is to report
the actual state of the system. The cause of the problem is not part of the message.
The heartbeat trap can send three types of messages to the network management station, each
describing a different level of problem severity:

• Informational: Displays the health of the Centera.
• Warning: Displays a non-critical error warning.
• Critical: Displays a critical error message; urgent action is required.
The severity level of the heartbeat trap is the worst actual system failure detected. Use the
heartbeat trap to quickly assess if there is a problem. The absence of the heartbeat signal can
also be an indicator of a problem.


Alarm Traps
The Alarm trap is sent when a problem occurs on the Centera cluster. This trap is only sent
when a problem occurs. There are two error levels in this trap:
• Warning: A non-critical error has occurred.
• Critical: The Centera cluster is in a critical state and urgent action is required.
The severity level reported by the alarm trap equals the most severe failure reported. The
description field of the alarm trap contains a short concise message stating the nature of the
problem.
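
The following sketch shows one way a management station could receive and log these traps.
It uses the open-source net-snmp trap daemon, which is not part of Centera and is shown only
as an assumption-based example; the community name public and the standard trap port 162
must match the values configured with the set snmp command described below.

# /etc/snmp/snmptrapd.conf on the management station (net-snmp, illustration only)
authCommunity log public       # accept and log SNMPv2c notifications sent with community "public"

# Start the trap receiver in the foreground, logging to standard output, on UDP port 162:
snmptrapd -f -Lo udp:162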

Set Up SNMP
Syntax
set snmp

Use
This command sets the parameters necessary to use SNMP against a Centera cluster.
The set snmp command is unique to Centera and should not be confused with the more
widely known snmp set command. This CLI does not support the snmp set command.
CentraStar version 2.0, 2.1, and 2.2 CE models support SNMP access. A CE+ model does not.
Version 2.3 and higher supports SNMP access on all compliance models.
Centera supports SNMP version 2 only.

Output
Config# set snmp
Enable SNMP (yes, no)? [no]
Management station [10.99.1.129:453]:


Community name [public]:


Heartbeat trap interval [1 minute]:
Issue the command? (yes, no) [no]:

Description
The following information is displayed:
• Enable SNMP: Yes or No.
• Management station IP: IP address and port number of the server to which the
SNMP traps should be sent. The management station address can be entered as an IP
address in dotted quad format, for example 10.72.133.91:155, or as a hostname, for
example centera.mycompany.com:162.
The first time the set snmp command is issued, the default IP address of the management
station is that of the system on which the CLI is running. If the SNMP management station is
not set, the system displays unknown.
• Community name: Used to authenticate the SNMP traps and can be considered a
password. The Community name cannot be longer than 255 characters and may
contain any non-control character except , , /, <, >, &, <newline>, and <cr>.
• Heartbeat trap: Interval for sending I am alive traps.
The Heartbeat trap interval is an integer value of the form [0-9]+[mhd], where m indicates
minutes, h indicates hours, and d indicates days. If no suffix is specified, minutes is assumed
(see the examples after this list).
• A value of 0 disables the Heartbeat trap.
• The maximum value for the Heartbeat trap interval is 30 days.
The following errors can be returned:
• Command failed: The management station address is not valid.
• Command failed: The heartbeat interval has to be between 1 minute and 30 days.
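For example, the following heartbeat trap interval values all match the format described above
(shown for illustration only):

30     thirty minutes (no suffix, so minutes is assumed)
45m    forty-five minutes
12h    twelve hours
2d     two days
0      disables the Heartbeat trap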


View the Current Storage Strategy


Syntax
show features

Use
This command displays a list of all system features and their current state.

Output
Config# show features
Feature                  State
data-shredding           off
storage-strategy         performance
  performance: full
  threshold: 256 KB
storageonaccess          on
garbage-collection       on

Description
The following information is displayed:
• Name of Feature: data-shredding, storage-strategy, storageonaccess, and garbage-collection.
• State: Current state of the feature. The options are on, off, or performance/capacity.
The data-shredding feature is set to off by default. The garbage-collection feature is
set to on by default. The storageonaccess feature is on by default.


Glossary
Access Control List
A list with specific access rights for profiles. The ACL grants specific access rights to
applications. For example, an application will not be able to perform an operation on a pool
unless the pool has explicitly granted access to the corresponding profile.
Access Profile
A profile used by an application to access a Centera.
Access Security Model
Prevents unauthorized applications from storing data on or retrieving data from a Centera. The
security model operates on the application level, not on the level of individual end users. A
Centera authenticates applications before giving them access with application specific
permissions.
Alerts
A message in XML format containing information on the cluster's state to indicate a warning,
error, or critical situation.
Audit & Metadata
The overhead capacity required to manage the stored data. This includes indexes, databases,
and internal queues.
Automatic Transfer Switch (ATS)
An AC power transfer switch. Its basic function is to deliver output power from one of two
customer facility AC sources. It guarantees that the device will continue to function if a power
failure occurs on one of the power sources by automatically switching to the secondary source.


Available Capacity
The amount of capacity available to write. If the Regeneration Buffer Policy is set to Alert Only,
this equals Free Raw Capacity - System Buffer. If the Regeneration Buffer Policy is set to Hard
Stop, this equals Free Raw Capacity - System Buffer - Regeneration Buffer.
Available Capacity until Alert
The amount of capacity available until the regeneration buffer is reached and an alert is raised.
Irrespective of the Regeneration Buffer Policy, this equals Free Raw Capacity - System Buffer -
Regeneration Buffer.
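As a worked example with hypothetical values (10 TB Free Raw Capacity, 1 TB System Buffer,
2 TB Regeneration Buffer), the two definitions above give:

Available Capacity (Alert Only):    10 TB - 1 TB        = 9 TB
Available Capacity (Hard Stop):     10 TB - 1 TB - 2 TB = 7 TB
Available Capacity until Alert:     10 TB - 1 TB - 2 TB = 7 TB (independent of the policy)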
Basic Compliance
Centera model that provides all the functionality of other models but data retention is not
enforced. Data can be deleted.
Blob
The Distinct Bit Sequence (DBS) of a user's file is referred to as a blob in the Centera system. The
DBS represents the actual content of a file and is independent of the filename and physical
location. Refer to Distinct Bit Sequence.
CentraStar
Centera Operating System which delivers critical features including self-management, auto-configuration, self-healing, non-disruptive maintenance, and content replication.
C-Clip
A package containing the user's data and associated Metadata. When a user saves a file to
Centera, the system calculates a unique Content Address (CA) for the data. It then stores this
address, together with application-specific Metadata, in a new XML file called the C-Clip
Descriptor File (CDF).
C-Clip Descriptor File (CDF)
XML file that the system creates when making a C-Clip. This file includes the Content
Addresses for all referenced blobs and associated metadata.


C-Clip ID
The Content Address of the C-Clip that the system returns to the client. It is also referred to as a
C-Clip handle or C-Clip reference.
Cluster
One or more Centera Cubes which form a single storage area.
Cluster Authorization Mask
A set of cluster rights that overrides the pool masks. For example, if the cluster mask denies
read access, no application will be able to perform read operations on any pool.
Cluster Time
The synchronized time of all the nodes within a cluster.
Command Line Interface (CLI)
A set of predefined commands that the user can enter via a command line. The Centera CLI
allows a user to manage a cluster and monitor its performance.
Compliance
The fulfillment of regulatory requirements that are externally-imposed by regulators/legislation
such as SEC (Securities and Exchange Commission) in the United States. There are three Centera
Compliance models: Basic, Governance (GE) and Compliance Edition Plus (CE+).
Compliance Edition Plus (CE+)
A Centera model designed to meet the strict requirements for electronic storage media as
established by regulations from the Securities and Exchange Commission (United States) and
other national and international regulatory groups.


ConnectEMC
A tool/application that enables a Centera to communicate with the EMC Customer Support
Center. ConnectEMC sends email messages with health information via the customer SMTP
infrastructure or via a customer workstation with EMC OnAlert™ installed.
Content Address
An identifier that uniquely addresses the content of a file and not its location. Unlike
location-based addresses, Content Addresses are inherently stable; once calculated, they never
Content Address Resolution
The process of discovering the IP address of a node containing a blob with a given Content
Address.
Content Addressed Storage (CAS)
Automated networked storage whereby each data object is identified by a unique Content
Address (CA).
Content Address Verification
The process of checking data integrity by comparing the CA calculations that are made on the
application server (optional) and the nodes that store the data.
Content Protection Mirrored (CPM)
The content protection scheme where each stored object is copied to another node on a Centera
cluster to ensure data redundancy.
Content Protection Parity (CPP)
The content protection scheme where each object is fragmented into several segments that are
stored on separate nodes with a parity segment to ensure data redundancy.


Cube
Centera unit containing between 4 and 32 nodes including a set of switches and an ATS.
Cube Switch
A switch that connects all the nodes in a Centera cube. There are two cube switches.
Distinct Bit Sequence (DBS)
The actual content of a file independent of the filename and physical location.
Dynamic Host Configuration Protocol (DHCP)
An internet protocol used to assign IP addresses to individual workstations and peripherals in a
LAN.
End-to-end checking
The process of verifying data integrity from the application end down to the second node with
the storage role. See also Content Address Verification.
Extensible Markup Language (XML)
A flexible way to create common information formats and share both the format and the data on
the World Wide Web, Intranet and elsewhere. Refer to http://www.xml.com for more
information.
Failover
Commonly confused with failure. It actually means that a failure is transparent to the user
because the system will fail over to another process to ensure completion of the task; for
example, if a disk fails, then the system will automatically find another one to use instead.
Free object count
The total number of objects that can still be written to the cluster.


Free Raw Capacity


The capacity that is free and available for storing data or for self healing operations in case of
disk or node failures or for database growth and failover.
Full Garbage Collection
A background task that continually runs on the cluster and ensures the deletion of unreferenced
blobs and C-Clips will always be successful in the event of a power failure and that storage
space is continually optimized.
Garbage Collection
Part of the Centera deletion process and runs as a background process ensuring the deletion of
unreferenced blobs and C-Clips.
Governance Edition (GE)
A Centera model that concerns the responsible management of an organization, including its
electronic records. Governance Edition draws on the power of content addressed storage (CAS).
Self-configuring, self-managing, and self-healing, it captures and preserves original content,
protecting the context and structure of electronic records.
Incremental Garbage Collection
A task that only runs after a delete or purge operation to remove unreferenced blobs and C-Clips.
Legacy Profiles
Profiles that were created in earlier releases of Centera.
Load Balancing
The process of selecting the least-loaded node for communication. Load balancing is provided
in two ways: first, an application server can connect to the cluster by selecting the least-loaded
node with the access role; second, this node selects the least loaded node with the storage role to
read or write data.


Local Area Network (LAN)


A set of linked computers and peripherals in a restricted area such as a building or company.
Message Digest 5 (MD5)
A unique 128-bit number that is calculated by the Message Digest 5 hash algorithm from the
sequence of bits (DBS) that constitutes the content of a file. If a single byte changes in the file,
the resulting MD5 will be different.
MultiCast Protocol (MCP)
A network protocol used for communication between a single sender and multiple receivers.
Multi-threading
The process that enables operating systems to execute different parts of a program or threads
simultaneously.
Node
Network entity that is uniquely identified through a system ID and an internal set of IP
addresses. Physically, a node is the basic building and storage unit of a Centera.
Node with the Access Role
The nodes in a cluster that communicate with the outside world. They must have public IP
addresses. For clusters with CentraStar 2.3 and lower this was referred to as an Access Node.
Node with the Storage Role
The nodes in a cluster that store data. For clusters with CentraStar 2.3 and lower this was
referred to as a Storage Node.
Offline Capacity
The capacity that is temporarily unavailable due to reboots, offline nodes, or hardware faults.
This capacity will be available as soon as the problem has been solved.


Pool
A virtual layer on top of the cluster. Pools are used to segregate data into logical groups.
Pool C-Clip Count
Number of C-Clips stored in the pool.
Pool Free Capacity
Current available pool capacity until the quota is reached.
Pool Quota
The maximum amount of data that can be written to the pool.
Pool Used Capacity
Current pool capacity being used.
Pool User File Count
Number of user files stored in the pool.
Protection Scheme
A storage strategy that protects data on a Centera. There are two Centera protection schemes:
CPP and CPM.
Protected User Data
The capacity taken by user data, including CDF's, reflections and protected copies of user files.
Redundancy
A process where data objects are duplicated or encoded such that the data can be recovered
given any single failure. Refer to Content Protection Mirrored (CPM), Content Protection Parity
(CPP), and Replication for specific redundancy schemes used in Centera.


Reflection
Pieces of data that record the state of a blob/C-Clip. Typically they are generated when a file is
deleted or moved, to state that the file was deleted, or the location to where it moved.
Reflections are small files and stored and protected in the same way as a C-Clip.
Regeneration
The process of creating a data copy if a mirror copy or fragmented segment of that data is no
longer available.
Replication
The process of copying blob data to another cluster. This complements Content Protection
Mirrored and Content Protection Parity. If a problem renders an entire cluster inoperable, then
the replica cluster can keep the system running while the problem is fixed and restore can be
used if needed to heal the cluster.
Restore
A single operation that restores data from a source cluster to a target cluster.
Retention Period
The time that a C-Clip and the underlying blobs have to be stored before the application is
allowed to delete them.
Retention Period Class
A symbolic representation of a retention period. Enables changes to be made to a C-Clip's
retention period without modifying the C-Clip itself.
Root Switch
A switch that connects two cubes in a multi-cube cluster. There are two internal root switches
for multi-cube cabinet clusters.


Segmentation
The process of splitting very large files or streams into smaller chunks before storing them.
Segmentation is an invisible client side feature and supports storage of very large files such as
rich multimedia.
Self Healing
A process that detects and excludes from the cluster any drives that fail and regenerates their
data to ensure that a fully redundant copy of the data is always available.
Shredding
A process that removes all traces of data that have been deleted by an application from a
Centera. Data shredding takes disk/file system data blocks that previously held data objects
and overwrites them seven times with random and fixed pattern data to thoroughly remove the
media's magnetic memory of the previously stored data.
Simple Network Management Protocol (SNMP)
An Internet standard protocol for managing devices on IP networks.
Spare Capacity
The capacity that is available on nodes that do not have the storage role assigned.
Spare node
A node without role assignment. This node can become a node with the access and/or storage
role.
Storage Node
See Node with the Storage Role.
System Resources
The capacity that is used by the CentraStar software and is never available for storing data.


System Buffer
Allocated space that allows internal databases and indexes to safely grow and failover. As the
system is filled with user data, and the Audit & Metadata capacity increases, the capacity
allocated to the System Buffer decreases.
Total object count
The number of objects that can be stored.
Total Raw Capacity
The total physical capacity of the cluster/cube/node or disk.
UniCast Protocol (UCP)
A network protocol used for communication between multiple senders and one receiver.
Used Capacity
The capacity that is in use to store data. This includes Protected User Data plus Audit &
Metadata.
Used object count
The total object capacity already used.
Used Raw Capacity
The capacity that is used or otherwise not available to store data; this includes the capacity
reserved as system resources, capacity not assigned for storage or offline capacity and capacity
actually used to store user data and associated audit and metadata.
User Datagram Protocol (UDP)
A standard Internet protocol used for the transport of data.
Wide Area Network (WAN)
A set of linked computers and peripherals that are not in one restricted area but that can be
located all over the world.
Write Once Read Many (WORM)
A technique that stores data so that it can be written once and read many times but cannot be
modified, for example, data on a WORM tape device.
