
Managing Celerra File Systems
P/N 300-002-120
Rev A01

Version 5.4
April 2005

Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
Access to Celerra File Systems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
References. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6
Restriction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6
Cautions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6
Celerra File System Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7
File System Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7
File System Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8
File System Planning Considerations. . . . . . . . . . . . . . . . . . . . . . . . . .8
About Mounting a Celerra File System on a Data Mover . . . . . . . . .10
Monitoring the Amount of Space in Use in a File System . . . . . . . .11
File Systems and Automatic Volume Management . . . . . . . . . . . . . .12
Celerra WORM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .12
Checking File System Consistency . . . . . . . . . . . . . . . . . . . . . . . . . .13
Nested Mount File Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
System Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
EMC NAS Interoperability Matrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
User Interface Choices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .17
Managing Celerra File Systems Roadmap . . . . . . . . . . . . . . . . . . . . . . . .18
Create a Celerra File System on a Celerra Network Server . . . . . . . . . . .20
Create a Mount Point for a Celerra File System on a Data Mover . . . . . .22
Mount a Celerra File System on a Mount Point on a Data Mover . . . . . .23
Enabling Access to Celerra File Systems . . . . . . . . . . . . . . . . . . . . . . . . .24
Enable NFS Access to a Celerra File System. . . . . . . . . . . . . . . . . . .24
Enable CIFS Access to a Celerra File System . . . . . . . . . . . . . . . . . .24
Enable FTP Access to a Celerra File System . . . . . . . . . . . . . . . . . . .24
Enable TFTP Access to a Celerra File System. . . . . . . . . . . . . . . . . .25
Enable MPFS Access to a Celerra File System . . . . . . . . . . . . . . . . .25
Managing Celerra File Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .26
Listing All Celerra File Systems on a Celerra Network Server . . . . .26
Listing Configuration Information for a Celerra File System . . . . . .27
Checking Celerra File System Capacity . . . . . . . . . . . . . . . . . . . . . . .27
Finding Out If a File System Has an Associated Storage Pool . . . .30
Extending a Celerra File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . .31
Exporting a Celerra File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . .35

Renaming a Celerra File System. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Deleting a Celerra File System and Freeing Its Disk Space . . . . . . . 36
Listing Mount Points on a Data Mover . . . . . . . . . . . . . . . . . . . . . . . . 39
Listing All Celerra File Systems Mounted on a Data Mover. . . . . . . 39
Mounting a File System for Read/Write Access . . . . . . . . . . . . . . . . 40
Mounting a File System for Read-Only Access. . . . . . . . . . . . . . . . . 40
Unmounting a Celerra File System from a Data Mover . . . . . . . . . . 41
Unmounting All Celerra File Systems from a Data Mover . . . . . . . . 43
Changing the Size Threshold for All File Systems . . . . . . . . . . . . . . 45
Changing the Size Threshold for a Data Mover. . . . . . . . . . . . . . . . . 45
Enhancing File Read and Write Performance . . . . . . . . . . . . . . . . . . 46
Finding and Fixing File System Storage Errors . . . . . . . . . . . . . . . . . . . . 49
Starting and Monitoring a File System Check . . . . . . . . . . . . . . . . . . 49
Starting ACL Check on a File System . . . . . . . . . . . . . . . . . . . . . . . . 49
Listing Current File System Checks. . . . . . . . . . . . . . . . . . . . . . . . . . 50
Displaying Information on a Single File System Check . . . . . . . . . . 50
Displaying Information on all Current File System Checks . . . . . . . 51
Managing Celerra Nested Mount File Systems . . . . . . . . . . . . . . . . . . . . 52
Creating a Nested Mount File System . . . . . . . . . . . . . . . . . . . . . . . . 52
Mounting a Nested Mount File System . . . . . . . . . . . . . . . . . . . . . . . 53
Deleting the Nested Mount File System . . . . . . . . . . . . . . . . . . . . . . . 53
Adding an Existing File System to the NMFS . . . . . . . . . . . . . . . . . . 55
Exporting a Component File System . . . . . . . . . . . . . . . . . . . . . . . . . 56
Removing a Component File System . . . . . . . . . . . . . . . . . . . . . . . . . 56
Moving a Nested Mount File System . . . . . . . . . . . . . . . . . . . . . . . . . 56
Troubleshooting File Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Known Problems and Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Related Information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Want to Know More? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Appendix: GID Support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Restrictions for GID Support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63



Introduction
The purpose of a file server is to make file systems available to users. The Celerra®
Network Server is a file server that offers the administrator flexibility in creating file
systems to respond to customer needs.
This technical module is part of the Celerra Network Server information set. This
module is intended for the system administrator responsible for configuring and
managing Celerra volumes and file systems.

Access to Celerra File Systems


The Celerra Network Server supports file system access by NFS and CIFS users,
as well as by other file system access protocols. In order to configure the file
systems and access protocols for different users, you must understand the types of
users you have and how they need to access files.
The Celerra Network Server lets you configure a file system for the following access
protocols:
◆ NFS user access: For how to set up and manage NFS client and user access
to a Celerra file system, refer to the Managing NFS Access to the Celerra
Network Server technical module on the user information CD.
◆ CIFS user access: For how to set up and manage CIFS client and user access
to a Celerra file system, refer to the guides Configuring CIFS on Celerra and
Managing Celerra for the Windows Environment on the user information CD.
◆ Simultaneous CIFS and NFS user access: For how to set up and manage
simultaneous CIFS and NFS client and user access to a Celerra file system,
refer to the Configuring CIFS on Celerra for a Multiprotocol Environment
technical module on the user information CD. Note that the Celerra Network
Server lets you map Windows users and groups to UNIX UIDs (user IDs) and
GIDs (group IDs) to provide users with seamless access to shared file system
data; for details, refer to the Configuring External Usermapper for Celerra
technical module on the user information CD.
◆ File Transfer Protocol (FTP) access: FTP is a client/server protocol that
operates over TCP/IP and allows file uploading and downloading across
heterogeneous systems. FTP includes functions to log on to the remote system,
list directories, and copy files. For how to set up and manage FTP access to a
Celerra file system, refer to the Using FTP on Celerra Network Server technical
module on the user information CD.
◆ Trivial File Transfer Protocol (TFTP) access: A simple, UDP-based protocol
to read and write files. TFTP can be used to boot remote clients from a network
server. TFTP does not authenticate users or provide access control.



◆ Multiplex File System (MPFS) access: MPFS adds a thin, lightweight protocol
known as the File Mapping Protocol (FMP) that works with NAS protocols such
as NFS and CIFS to control metadata operations. FMP is responsible for
exchanging file layout information and managing sharing conflicts
between clients and Celerra Network Servers. The clients use the file layout
information to read and write file data directly from and to the storage system.
For how to set up and manage MPFS access to a Celerra file system, refer to
the Using HighRoad on Celerra technical module on the user information CD.

Note: All referenced documents are available on the Celerra Network Server User
Information CD.

Terminology
This section defines terms that you need to understand in order to manage volumes
and file systems on the Celerra Network Server. Refer to the Celerra Network
Server User Information Glossary on the user information CD for a complete list of
Celerra terminology.
Automatic Volume Management (AVM): A feature of the Celerra Network Server that
creates and manages volumes automatically, without manual volume management
by an administrator. AVM organizes volumes into pools of storage that can be
allocated to file systems.
Business continuance volume (BCV): A Symmetrix® volume used as a mirror that
attaches to and fully synchronizes with a production (source) volume on the Celerra
Network Server. The synchronized BCV is then separated from the source volume
and is addressed directly from the host to serve in backup and restore operations,
decision support, and application testing.
Celerra WORM: Celerra Write-Once Read-Many feature.
Component file system: A file system mounted on the nested mount root file
system; component file systems are part of the nested mount file system.
Disk volume: On Celerra systems, a physical storage unit as exported from the
storage array. All other volume types are created from disk volumes. See also
Metavolume, Slice volume, Stripe volume, Volume.
File system: A method of cataloging and managing the files and directories on a
storage system.
LUN (Logical unit number): The identifying number of a SCSI object that processes
SCSI commands. The LUN is the last part of the SCSI address for a SCSI object.
The LUN is an ID for the logical unit, but the term is sometimes used to refer to the
logical unit itself.
Metavolume: On a Celerra system, a concatenation of volumes, which can consist
of disk, slice, or stripe volumes. Also called a hypervolume or hyper. Every file
system must be created on top of a unique metavolume. See also Disk volume,
Slice volume, Stripe volume, Volume.
Nested export: An export of a component file system. The export has its own
specified access controls.
Nested mount file system (NMFS): File system that contains the nested mount root
file system and component file systems.



Nested mount file system root: File system on which the component file systems
are mounted; the NMFS root is mounted read-only.
Slice volume: On a Celerra system, a logical piece or a specified area of a volume
used to create smaller, more manageable units of storage. See also Disk volume,
Metavolume, Stripe volume, Volume.
Storage pool: A grouping of available disk volumes organized by Automatic
Volume Management (AVM), a Celerra feature. Storage pools are used to allocate
available storage to Celerra file systems and can be created automatically by AVM
or manually by the user.
Storage system: An array of physical disk drives and their supporting processors,
power supplies, and cables.
Stripe volume: An arrangement of volumes that appears as a single volume. Allows
for stripe units, which cut across the volume and are addressed in an interlaced
manner. Stripe volumes make load balancing possible. See also Disk volume,
Metavolume, Slice volume, Volume.
Volume: On a Celerra system, a virtual disk into which a file system, database
management system, or other application places data. A volume can be a single
disk partition or multiple partitions on one or more physical drives. See also Disk
volume, Metavolume, Slice volume, Stripe volume.
Worm: A file's status in a Celerra WORM file system. When a file is in the worm
state, it cannot be moved, modified, extended, removed, or renamed.



References
◆ If your system is CWORM enabled, refer to the Using Celerra WORM technical module
on the user information CD.
◆ For additional information on file system user/group quotas management and
tree quota enhancements, refer to the Using Quotas on Celerra technical
module on the user information CD.

Restriction
◆ When building volumes on a Celerra Network Server attached to a Symmetrix
storage system, use standard Symmetrix volumes (also called hypervolumes),
not Symmetrix metavolumes.

Cautions
◆ All parts of a Celerra file system must use the same type of disk storage and must
be stored on a single storage system.
◆ If you plan to set quotas on a file system to control the amount of space that
users and groups can consume, you should turn on quotas immediately after
creating the file system. Turning on quotas later, when the file system is in use,
can cause temporary file system disruption, including slow file system access.
For information about quotas and how to turn on quotas, refer to the technical
module Using Quotas on Celerra on the user information CD.
◆ If your user environment requires international character support (i.e., support of
non-English character sets or Unicode characters), EMC strongly recommends
configuring your Celerra Network Server to support this feature before creating
file systems. For detailed information about international character support and
how to configure it on a Celerra Network Server, refer to the technical module
Using International Character Sets with Celerra on the user information CD.
◆ If you plan to create TimeFinder®/FS (local, NearCopy, or FarCopy) snapshots of a
production file system (PFS), do not use slice volumes (nas_slice) when creating
the PFS. Instead, use the full disk presented to the Celerra Network Server, since
TimeFinder copies and restores volumes in their entirety and does not recognize
sliced partitions created by the host (in this example, the Celerra Network
Server).
◆ Do not manually edit the nas_db database without consulting EMC Customer
Support. Any changes to this file could cause problems with your Celerra
installation.



Celerra File System Concepts
A file system lets you arrange and manage files and directories on a storage
system. File systems on a Celerra Network Server are stored in volumes configured
to meet your business requirements.
A metavolume is required to create a file system. Metavolumes provide the storage
capacity that can be used to expand file systems. A metavolume is also a way to
form a logical volume that is larger than a single disk. For information on volumes
and how to configure metavolumes, refer to the Managing Celerra Volumes
technical module on the user information CD.
After you configure metavolumes, you are ready to create your file systems.
This section discusses the following aspects of file systems:
◆ Types
◆ Requirements
◆ Planning considerations

File System Types


The Celerra Network Server creates different types of file systems depending on
how they are used. You might see the types listed in Table 1.
Table 1 File System Types

File System Type  ID   Description
uxfs              1    Default file system
sfs               --   Share file system
rawfs             5    Raw file system
mirrorfs          6    Mirrored file system
ckpt              7    Checkpoint file system
mgfs              8    Migration file system
group             100  Stores members of a file system group
vpfs              --   Volume pool file system (multiple services sharing a single metavolume)
rvfs              --   Local configuration volume used to store replication-specific internal information
nmfs              102  Nested mount file system



File System Requirements
The following lists the requirements for creating a file system:
◆ You can create a file system on only nonroot metavolumes that are not in use.
◆ A metavolume must be at least 2 megabytes to accommodate a file system.
◆ A file system name must be unique on a particular Celerra Network Server. It
can include up to 254 characters, including hyphens ( - ), underscores ( _ ), and
periods ( . ), but cannot begin with a hyphen or period.

CAUTION
All parts of a Celerra file system should use the same type of disk storage and should
be stored on a single storage system.

File System Planning Considerations


Consider the following when you are planning file systems for your Celerra Network
Server:
Table 2 File System Planning Considerations

Issue: Size of the production file systems
Considerations: Plan your backup and restore strategy. Larger file systems might take more time to back up and restore. If a file system becomes inconsistent (a rare occurrence), a larger file system can take more time for a consistency check to run.

Issue: Data growth rates
Considerations: Be sure to consider your space needs as your data increases.

Issue: Use of TimeFinder/FS or remote TimeFinder/FS
Considerations: If you plan to create multiple copies of your production file system, you need to plan for that number of BCVs. For example, from one production file system you may create 10 copies; therefore, you need to plan for 10 BCVs, not one. TimeFinder/FS uses the physical disk, not the logical volume, when it creates BCV copies. The copy is done track by track, so unused capacity is carried over to the BCVs. Volumes used for BCVs need to be the same size as the standard volume. For information about TimeFinder/FS, refer to the Using TimeFinder/FS with Celerra and Using TimeFinder/FS Near Copy and Far Copy with Celerra technical modules on the user information CD.

Issue: Use of SRDF® disaster recovery
Considerations: All file systems on the Data Mover must be built on SRDF volumes. For information about SRDF, refer to the Using SRDF/S with Celerra for Disaster Recovery technical module or the Using SRDF/A with Celerra technical module on the user information CD. If you use the AVM feature to create the file systems, you must specify the symm_std_rdf_src storage pool. This storage pool directs AVM to allocate space from volumes configured at installation time for remote mirroring using SRDF.

Issue: Use of HighRoad®
Considerations: You cannot enable HighRoad access to file systems with a stripe depth of less than 32 KB. For more information, refer to the Using HighRoad on Celerra technical module on the user information CD.

Issue: Use of CWORM
Considerations: When you create a new file system, you have the option of applying the CWORM (Celerra Write-Once Read-Many) feature to the file system. Selecting the CWORM option designates the file system as CWORM; the file system is persistently marked as such until it is deleted. CWORM is a licensed feature of Celerra; you must have a CWORM software license to create a CWORM file system. The CWORM feature is disabled by default.

File System Size Guidelines


The size of your file systems depends on a variety of factors within your own
business organization. Contact your EMC representative or refer to the Celerra
Network Server Release Notes, available on Powerlink at http://powerlink.emc.com.
On a per-Data-Mover basis, the total size of all file systems, plus the size of all
SavVols used by SnapSure and by the Celerra Replicator feature, must be less
than the total supported capacity of the Data Mover. For a list of Data Mover
capacities, refer to the Celerra Network Server Release Notes, available on
Powerlink at http://powerlink.emc.com.
Extending Capacity of a File System
The Celerra Network Server provides two ways to extend capacity of a file system:
◆ by volume
◆ by size if the file system has an associated storage pool
When you extend a file system by volume, you should extend it with the same type
as the original file system. For example, if the original file system was created using
a stripe volume, extend it using a stripe volume of the same size and depth.
You can extend by size a file system that has an associated storage pool. In this
case, the AVM feature extends the file system with storage from the same storage
pool.



Deleting File Systems and Freeing Disk Space
If you want to delete a Celerra file system and free its disk space, you must delete
or disconnect all entities associated with the file system: all checkpoints, business
continuance volumes, slice volumes, stripe volumes, and metavolumes. After you
delete or disconnect all of the file system entities, the disk volumes that provided
storage space to the file system become part of the available free space on the file
server.
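As a minimal sketch, once all associated entities are removed and the file system is unmounted, the file system itself is deleted with the nas_fs command (the full procedure, including its options, is in Deleting a Celerra File System and Freeing Its Disk Space on page 36):
$ nas_fs -delete ufs1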

About Mounting a Celerra File System on a Data Mover


Before you can make a Celerra file system available for client access, you must
create a mount point on a Data Mover, and then mount the file system on that
mount point. After mounting the file system on a Data Mover, you can set up client
access to the file system.
A file system can be mounted read/write on only one Data Mover, or read-only on
many Data Movers. The same file system cannot be mounted both read/write and
read-only on multiple Data Movers.
A mount point name must begin with a / (slash), followed by alphanumeric
characters (for example, /new). The name can include multiple components that
make the mounted file system appear to be a logical subdirectory (for example,
/new1/new2 or /new1/new2/new3/new4).
With the exception of nested mount file systems, you should mount a file system
only on the end of a multicomponent mount point name. No file systems should be
mounted on intermediate components of a mount point name. For example, if you
have a file system mounted on the /new1 mount point, you should not mount a file
system on a mount point named /new1/new2.
A mount point name can have a maximum of seven components (for example,
/new1/new2/new3/new4/new5/new6/new7). International characters are not
supported in mount point names.
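For example, to create a multicomponent mount point name on a Data Mover (a sketch assuming a Data Mover named server_2; the server_mountpoint command is described on page 22):
$ server_mountpoint server_2 -create /new1/new2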
Options for Mounts on Data Movers
When defining a mount point for a file system on a Data Mover, you can use
different options to define how the file system mount is controlled.
Read/Write Option for Data Mover Mounts
When a file system is mounted read/write on a Data Mover, only that Data Mover is
allowed access to the file system. No other Data Mover is allowed read/write or
read-only access to that file system. This is the default for all file systems except
checkpoint and TimeFinder/FS file systems.
Read-Only Option for Data Mover Mounts
When a file system is mounted read-only on a Data Mover, clients cannot write to
the file system regardless of the permissions defined when the client access is set
up. A file system can be mounted read-only on several Data Movers concurrently,
and no Data Mover can mount the file system as read/write. This is the default for
checkpoint and TimeFinder/FS file systems.
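For example, a sketch of a read-only mount, assuming a file system named ufs1 and a Data Mover named server_2 (the ro option is covered in Mounting a File System for Read-Only Access on page 40):
$ server_mount server_2 -option ro ufs1 /ufs1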



Monitoring the Amount of Space in Use in a File System
The Celerra Network Server monitors the amount of space in use in its file systems.
It is useful to monitor this file system size threshold because the performance of a
file system can degrade as its used space approaches 100 percent full.
By default, the Celerra Network Server is set up to trigger an event when the used
space in a file system exceeds 90 percent full. To be notified of this event, you must
set up event logging (an SNMP trap or e-mail notification) for it as described in the
Configuring Celerra Events and Notifications technical module.
When the file system size threshold is exceeded, you can take one of two corrective
actions: remove files from the file system, or extend the file system size. Both
actions reduce the amount of used space to a percentage below the threshold.
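To watch how close a file system is to the threshold, you can check its used-space percentage with the server_df command, described in Checking Celerra File System Capacity on page 27; for example:
$ server_df server_2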
You may want to change the file system size threshold, for example, to a smaller
percentage if the amount of space in use in your file systems grows at a rapid rate.
You change the threshold by changing the percentage number in the global
parameter file (if you want to change the threshold for all file systems on a Celerra
Network Server) or in the parameter file for a single Data Mover (if you want to
change the threshold only for file systems on that Data Mover).
Using Quotas to Prevent File Systems from Filling Up
To prevent a file system from getting full, you can impose quota limits on users and
groups that create files or on directory trees.
You can set a hard quota limit on user, group, or directory tree quotas to prevent
allocation of all of the space in the file system. Once the hard limit is reached, the
system denies user requests to save additional files and notifies the administrator
that the hard limit was reached. At this point, existing files can be read, but action
must be taken either by the user or administrator to delete files from the file system,
or increase the hard limit to allow the saving of additional files.
To avoid degradation of file system performance, you may choose to set the hard
quota limit to between 80 percent and 85 percent of the total file system space. In
addition to setting the hard quota limit, you can set a lower soft quota limit so that
the administrator is notified when the hard limit is being approached.
For example, to prevent a file system containing 100 gigabytes of storage from
filling up, you can set a soft quota limit of 80 gigabytes and a hard quota limit of 85
gigabytes using user, group, or directory tree quotas. When used space in the file
system totals 80 gigabytes, the administrator is notified that the soft limit was hit.
When used space totals 85 gigabytes, the system denies user requests to save
additional files, and the administrator is notified that the hard limit was hit.
For detailed information about quotas and how to set up user, group, or directory
tree quotas, refer to the Using Quotas on Celerra technical module on the user
information CD.



File Systems and Automatic Volume Management
The Celerra management interfaces support AVM, which automates volume
creation and management. When you use AVM to create a file system in a Celerra
interface, the file server automatically creates and manages volumes as part of file
system creation and expansion operations.
AVM and the Celerra Command Line Interface
The Celerra command line interface (CLI) uses AVM to create a file system when
you specify a size and storage pool from which to allocate the space, using the
nas_fs -create size=<size> pool=<pool> options.
When you extend a file system created with AVM, you specify the extension size
instead of a volume to add to the file system. AVM handles the extension by
creating and allocating new storage from the storage pool associated with the file
system. This ensures that the file system and its extensions consist of the same
type of storage, with the same cost, performance, and availability characteristics.
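For example, a file system can be created and later extended entirely through AVM; these commands are shown in full, with their output, in the procedures beginning on page 21 and page 33:
$ nas_fs -name ufs1 -create size=10G pool=clar_r5_performance
$ nas_fs -xtend ufs1 size=10G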
AVM and the Celerra Manager
The Celerra Manager uses AVM to create a file system when you create the file
system from a storage pool.
When you use the Celerra Manager to extend a file system created from a storage
pool, it is recommended that you extend the file system using space from the
storage pool already associated with the file system. If the file system you want to
extend has no associated storage pool, the Celerra Manager will determine the
best storage pool to use for the extension and will present that pool as the default
for the extension.

Celerra WORM
Celerra WORM (Write-Once Read-Many) allows you to archive data to WORM
storage on standard rewritable magnetic disks. You can write data to a CIFS or NFS
file system a single time to create a permanent, noneditable set of records that
cannot be altered, corrupted, or deleted.
CWORM can be applied to any standard Celerra file system. In a CWORM
environment, administrators can use WORM protection on a per-file basis. Files
can be stored with specified retention periods that prevent the files from being
deleted until the periods expire. Only an administrator has the ability to delete the
CWORM file system.
A Celerra WORM-enabled file system:
◆ Safeguards data while ensuring its integrity and accessibility.
◆ Simplifies the task of archiving data for administrators and applications.
◆ Improves storage management flexibility and application performance.
If your system is CWORM enabled, refer to the Using Celerra WORM technical
module on the user information CD for additional information on this feature.



Checking File System Consistency
The file system check (fsck) feature checks file system consistency on a specified
file system by detecting and correcting file system storage errors.
When file system corruption is detected, the server either begins fsck automatically
(auto fsck) or panics itself to initiate fsck on reboot. You can also start fsck
manually on a particular file system using the nas_fsck command.
The fsck process cannot run on a read-only file system, and you do not need to run
fsck for normal reboot and shutdown operations: file system consistency is
maintained through a logging mechanism, so reboot and shutdown operations
cause no corruption.
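For example, a manual check of a file system named ufs1 can be started from the Control Station with a command along these lines (a sketch; the full syntax and procedures begin in Finding and Fixing File System Storage Errors on page 49):
$ nas_fsck -start ufs1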

CAUTION
◆ During fsck, the file system is not available to users. NFS clients receive an
“NFS server not responding” message, and CIFS clients lose the server
connection and must remap shares.
◆ Depending on the size of the file system, the fsck utility may use a significant
amount of the system’s resources (memory and CPU) and may affect overall
system performance.
◆ A maximum of two fsck processes can run on a single Data Mover at the same
time.
◆ An fsck of a permanently unmounted file system can be executed on a standby
Data Mover. The fsck process continues to run during and after a Data Mover failover.
◆ If a Data Mover reboots or experiences failover or failback while running fsck on
an unmounted file system, fsck does not restart automatically after the Data Mover
reboots. You must start fsck manually on the unmounted file system.
Using auto fsck (default)
The Celerra Network Server begins auto fsck when file system corruption is
detected. The first step in the fsck process is to ensure the corruption can be safely
corrected without bringing down the server. The fsck process also corrects any
inconsistencies in the ACL (Access Control List) database.
The corrupted file system is not available to users during the fsck process. After
fsck finds and corrects the corruption, users again have access to the file system.
While fsck is running, other file systems mounted on the same server are not
affected and are available to users.
You can check the status of fsck through the Control Station using the nas_fsck
command. Refer to the section on displaying information on a single file system
check.
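A sketch of such a status query, assuming the -info option described in Displaying Information on a Single File System Check on page 50:
$ nas_fsck -info ufs1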



Using fsck on Reboot
If auto fsck cannot safely correct the corruption or if there are not enough system
resources available to run fsck, the server panics itself to initiate fsck on reboot.
During reboot, the server detects the corrupted file system and begins fsck.
By default, the reboot process runs fsck on any corrupted file system. If you set
the ufs.skipFsck parameter, the reboot process does not run fsck and the
corrupted file systems are not mounted. Refer to the Celerra Network Server
Parameters Guide on the user information CD for additional information on
parameters.
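As an illustration only (the actual parameter file location and syntax are documented in the Celerra Network Server Parameters Guide, and the line below is an assumed form, not verified syntax), such a parameter entry might look like:
param ufs skipFsck=1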
Using Manual fsck
The nas_fsck command allows you to manually start fsck on a particular file
system. The nas_fsck command also lists and displays the status of fsck and
aclchk. The aclchk utility finds and corrects any errors in the ACL database.
To prevent the server from running out of resources, do not run the fsck utility on
a server under heavy load. In most cases, the user is notified when sufficient
memory is not available to run an fsck. In these cases, users can choose
one of the following options:
◆ Start the fsck during off-peak hours.
◆ Reboot the server and start fsck immediately.
◆ Run fsck on a different server if the file system is unmounted.
For more information on manually starting and using the nas_fsck command,
refer to the section on finding and fixing file system storage errors and the Celerra
Network Server Command Reference Manual on the user information CD.



Nested Mount File Systems
The Nested Mount File System (NMFS) allows you to manage a collection of file
systems as a single file system. The NMFS contains the component file systems
that are mounted on the NMFS root.
The client sees the component file systems as a single share or single export. The
following features are supported on component file systems only:
◆ Snap
◆ Replicator
◆ Quotas
◆ Security policies
◆ Backup
◆ SRDF
◆ TimeFinder/FS
◆ MPFS access to MPFS-enabled file systems
◆ CDMS
◆ FileMover
When using NMFS, keep in mind the following considerations and procedures:
◆ An NMFS contains component file systems only.
◆ Capacity is managed independently for each component file system.
◆ You can increase the total capacity of the NMFS either by extending an existing
component file system or by adding new component file systems.
◆ There are no additional limitations on the number of nested file systems or
component file systems beyond the existing limitations for the number of file
systems on a Data Mover.
◆ The NMFS root is a read-only file system.
◆ Hard links (NFS), renames, and simple moves are not possible from one
component file system to another.
◆ NMFS can contain different classes of storage.
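As a sketch of how these pieces fit together (the names, type option, and mount paths below are assumptions for illustration; the exact procedures are in Managing Celerra Nested Mount File Systems on page 52), you create the NMFS as a file system of type nmfs, mount it on a Data Mover, and then mount each component file system on a path beneath the NMFS root:
$ nas_fs -name nmfs1 -type nmfs -create
$ server_mount server_2 nmfs1 /nmfs1
$ server_mount server_2 ufs1 /nmfs1/ufs1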



System Requirements
Table 3 lists the requirements for creating and managing file systems on the Celerra
Network Server using this document.
Table 3 System Requirements for Managing File Systems

Software   Celerra Network Server Version 5.2 or higher
Hardware   No specific hardware requirements
Network    No specific network requirements
Storage    No specific storage requirements (any Celerra-qualified storage system)

EMC NAS Interoperability Matrix


Refer to the EMC NAS Interoperability Matrix for definitive information on supported
software and hardware, such as backup software, Fibre Channel switches, and
application support for Celerra network-attached storage (NAS) products.
To view the EMC NAS Interoperability Matrix:
1. Go to http://powerlink.emc.com.
2. Search for NAS Interoperability Matrix.
The EMC NAS Interoperability Matrix appears in the list with a high score. The
default order for search results is from highest to lowest score.



User Interface Choices
The Celerra Network Server offers flexibility in managing networked storage based
on your support environment and interface preferences.
This technical module describes how to configure Celerra volumes and file systems
using the command line interface (CLI). You can also perform most of these tasks
using the Celerra Manager management interface.
For more information about Celerra Manager, refer to Getting Started with Celerra
Management and the Celerra Manager online help in the application or on the user
information CD.



Managing Celerra File Systems Roadmap
This roadmap shows the process for configuring and managing file systems. This
process contains components that represent the sequential phases of the roadmap.
In addition, any nonsequential phases are represented in blocks at the base of the
roadmap. Each phase contains the tasks required to complete the process.

Note: When viewing online, click the text in the roadmap to access that phase. To return to
this roadmap from other pages, click the roadmap symbol at the center bottom of the page.

Table 4 lists the tasks for managing Celerra file systems as described in this technical
module.
Table 4 File Systems Roadmap Tasks

Task: Create a Celerra file system with existing volumes or using the AVM feature.
Procedure: Create a Celerra File System on a Celerra Network Server on page 20.

Task: Create a mount point on a Data Mover for a file system.
Procedure: Create a Mount Point for a Celerra File System on a Data Mover on page 22.

Task: Mount a Celerra file system on a Data Mover.
Procedure: Mount a Celerra File System on a Mount Point on a Data Mover on page 23.

For a detailed synopsis of the commands presented in this section or to view syntax
conventions, refer to the online Celerra man pages or the Celerra Network Server
Command Reference Manual on the user information CD.



[Roadmap figure: the sequential phases are Create a Celerra File System on a Celerra Network Server; Create a Mount Point for a Celerra File System on a Data Mover; and Mount a Celerra File System on a Mount Point on a Data Mover. The nonsequential phases are Enable NFS Access, Enable CIFS Access, Enable FTP Access, Enable TFTP Access, and Enable MPFS Access to a Celerra File System.]


Create a Celerra File System on a Celerra Network
Server
The following sections explain procedures for creating a Celerra file system with
existing volumes or using the AVM feature. For an explanation of AVM, refer to
Celerra File System Concepts on page 7.
Creating a Celerra File System by Volume
Use this procedure to create a Celerra file system that includes existing volumes.
Prerequisite: Perform the procedure for creating a metavolume in the Managing
Celerra Volumes technical module on the user information CD.

Action

To create a file system with existing volumes, use this command syntax:
$ nas_fs -name <name> -create <volume_name>
Where:
<name> = the name assigned to a file system
<volume_name> = the name of the existing volume
Example:
To create a file system called ufs1 on the existing metavolume mtv1, type:
$ nas_fs -name ufs1 -create mtv1

Output

id = 18
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = mtv1
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
symm_devs =
002806000209-006,
002806000209-007,
002806000209-008,
002806000209-009
disks = d3,d4,d5,d6

Note

A file system name must be unique on a particular Celerra Network Server. It can include up to
254 characters, including hyphens ( - ), underscores ( _ ), and periods ( . ), but cannot begin with a
hyphen or period.
Refer to the Celerra Network Server Command Reference Manual on the user information CD for
information about the options available for creating a file system with the nas_fs command.



Creating a Celerra File System by Size Using AVM
When you create a Celerra file system using the AVM feature, you do not have to
create volumes before setting up the file system. AVM allocates space to the file
system from the storage pool you specify, and automatically creates any required
volumes when it creates the file system.
For information about AVM and storage pools, refer to Celerra File System
Concepts on page 7 and storage pools in the Managing Celerra Volumes technical
module on the user information CD. For information about TimeFinder/FS as well
as SRDF/S, refer to the technical modules Using TimeFinder/FS with Celerra and
Using SRDF/S with Celerra for Disaster Recovery on the user information CD.
Use this procedure to create a Celerra file system using AVM.

Action

To create a file system using AVM, use this command syntax:


$ nas_fs -name <name> -create size=<size> pool=<pool>
[storage=<system_name>]
Where:
<name> = the name assigned to a file system
<size> = size of a file system in megabytes
<pool> = the storage pool for the file system
<system_name> = the backend storage system from which to allocate space for the file system

Note: If you do not specify a storage system, any single storage system with available space for
the specified pool will be used.

Example:
To create a file system called ufs1 using AVM, type:
$ nas_fs -name ufs1 -create size=10G pool=clar_r5_performance

Output

id = 24
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = v121
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
symm_devs =
WRE00100100430-0010,
WRE00100100430-0011
disks = d7,d13

Note

A file system name must be unique on a particular Celerra Network Server. It can include up to
254 characters, including hyphens ( - ), underscores ( _ ), and periods ( . ), but cannot begin with a
hyphen or period. The Celerra Network Server Command Reference Manual on the user
information CD identifies the nas_fs command options available for creating a file system.



Create a Mount Point for a Celerra File System on a
Data Mover
Use this procedure to create a mount point on a Data Mover for a file system.

Action

To create a mount point on a Data Mover, use this command syntax:


$ server_mountpoint <movername> -create <pathname>
Where:
<movername> = name of the specified Data Mover
<pathname> = path to mount point
Example:
To create a mount point called /ufs1 on Data Mover server_3, type:
$ server_mountpoint server_3 -create /ufs1

Output

server_3: done



Mount a Celerra File System on a Mount Point on a
Data Mover
Prerequisite: Create a Mount Point for a Celerra File System on a Data Mover on
page 22.
To mount a Celerra file system on a Data Mover, you must know the name of the
Data Mover that contains the mount point, the name of the mount point on the Data
Mover, and the name of the file system you want to mount. You can use the:
◆ nas_server -list command to view Data Mover names.
◆ server_mountpoint <movername> -list command to view mount points
on a specified Data Mover.
◆ nas_fs -list command to list existing file systems.
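For example, to confirm that the mount point created earlier exists on the Data Mover before mounting (assuming Data Mover server_3):
$ server_mountpoint server_3 -list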
Use this procedure to mount the file system on the mount point on a Data Mover.

Action

To mount a file system on a mount point on a Data Mover, use this command syntax:
$ server_mount <movername> -option <options>
<fs_name> <mount_point>
Where:
<movername> = name of the specified Data Mover
<options> = specifies mount options, separated by commas
<fs_name> = named file system to be mounted
<mount_point> = path to mount point for the specified Data Mover
Example:
To mount file system ufs1 on the /ufs1 mount point on server_2 with the nolock and accesspolicy=NATIVE options, type:
$ server_mount server_2 -option nolock,accesspolicy=NATIVE
ufs1 /ufs1

Output

server_2: done

File systems are mounted permanently by default. If you unmount a file system
temporarily and then reboot the file server, the file system is remounted
automatically.



Enabling Access to Celerra File Systems
This section discusses how to enable access to file systems using the NFS, CIFS,
FTP, TFTP, and MPFS protocols.

Enable NFS Access to a Celerra File System


To enable NFS protocol access to a Celerra file system, you export the file system
explicitly for NFS access from a Data Mover and then, on the client system, mount
the exported Celerra file system using the NFS tools available on that system.
For how to enable NFS access to a Celerra file system, refer to the Managing NFS
Access to the Celerra Network Server technical module on the user information CD.
For how to mount the Celerra file system on the client system, refer to the
documentation provided with the client’s NFS software.

Enable CIFS Access to a Celerra File System


To enable CIFS protocol access to a Celerra file system, you share (export) the
Celerra file system explicitly for CIFS access from a Data Mover and configure
CIFS service and CIFS user authentication on the Celerra Network Server. Then,
on the client system, connect to the shared Celerra file system using the CIFS tools
available on that system.
For how to enable CIFS access to a Celerra file system, refer to the guides
Configuring CIFS on Celerra and Managing Celerra for the Windows Environment
available on the user information CD.
For how to set up and manage simultaneous CIFS and NFS access to a Celerra file
system, refer to the Configuring CIFS on Celerra for a Multiprotocol Environment
technical module on the user information CD. Note that the Celerra Network Server
lets you map Windows users and groups to UNIX UIDs (user IDs) and GIDs (group
IDs) to provide users with seamless access to shared file system data; for details,
refer to the Configuring Celerra User Mapping technical module on the user
information CD.
For how to connect to the Celerra file system from the client, refer to the
documentation provided with the client’s CIFS software.

Enable FTP Access to a Celerra File System


To enable FTP protocol access to a Celerra file system, you configure the FTP
feature on the Celerra Network Server to allow FTP access to the file system by
specified users.
For how to configure the FTP feature on the Celerra Network Server, refer to the
Using FTP on Celerra Network Server technical module on the user information
CD.



Enable TFTP Access to a Celerra File System
To enable TFTP protocol access to a Celerra file system, you configure the TFTP
feature on the Celerra Network Server to allow TFTP access to the file system by
specified users.
For how to configure the TFTP feature on the Celerra Network Server, refer to the
Using TFTP on Celerra Network Server technical module on the user information
CD.

Enable MPFS Access to a Celerra File System


You must enable MPFS (multiplex file system) protocol access to a Celerra Network
Server if you intend to use the EMC Celerra HighRoad product.
For how to enable MPFS access to a Celerra Network Server, refer to the Using
HighRoad on Celerra technical module on the user information CD.



Managing Celerra File Systems
The following sections explain procedures you use to manage Celerra file systems.
Unless otherwise noted, the procedures apply to all Celerra models.

Listing All Celerra File Systems on a Celerra Network Server


Use this procedure to view a list of all file systems on a Celerra Network Server.

Action

To view a list of all Celerra file systems on a Celerra Network Server, type:
$ nas_fs -list

Output

id inuse type acl volume name server


1 n 1 0 166 root_fs_1
2 y 1 0 168 root_fs_2 1
3 y 1 0 170 root_fs_3 2
4 y 1 0 172 root_fs_4 3
5 y 1 0 174 root_fs_5 4
6 n 1 0 176 root_fs_6
7 n 1 0 178 root_fs_7
8 n 1 0 180 root_fs_8
9 n 1 0 182 root_fs_9
10 n 1 0 184 root_fs_10
11 n 1 0 186 root_fs_11
12 n 1 0 188 root_fs_12
13 n 1 0 190 root_fs_13
14 n 1 0 192 root_fs_14
15 n 1 0 194 root_fs_15
16 y 1 0 196 root_fs_common 4,3,2,1
17 n 5 0 245 root_fs_ufslog
18 n 1 0 247 ufs1

Note

Column definitions:
id — ID of the file system (assigned automatically).
inuse — Whether the file system is registered into the mount table of a Data Mover;
y indicates yes, n indicates no.
type — Type of file system.
acl — Access control value for the file system.
volume — Volume on which the file system resides.
name — Name assigned to the file system.
server — ID of the Data Mover accessing the file system.



Listing Configuration Information for a Celerra File System
Use this procedure to list the configuration information for a Celerra file system.

Action

To view configuration information for a specific file system, use this command syntax:
$ nas_fs -info <fs_name>
Where:
<fs_name> = name of the file system for which you want to view information
Example:
To view configuration information on ufs1, type:
$ nas_fs -info ufs1

Output

id = 18
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = mtv1
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
symm_devs =
002806000209-006,002806000209-007,002806000209-008,002
806000209-009
disks = d3,d4,d5,d6

Checking Celerra File System Capacity


You can check the space capacity of a Celerra file system or of all file systems on a
specific Data Mover. Checking the space capacity tells you the total, free, and used
disk space allocated to the file systems.
You can also check the inode capacity of a Celerra file system or of all file systems
on a specific Data Mover. A specific number of inodes is allocated to a file system
when you create it. The number of inodes defines the number of filenames and
directory names (folder names) the file system can contain. Checking inode
capacity tells you the total, used, and free number of inodes for the file systems.



Checking Space Capacity of a File System
Use this procedure to check the space capacity of a file system.

Action

To see the total amount of space allocated to a file system, the amount of free and used disk
space available to the file system, and the percentage of space used, use this command
syntax:
$ nas_fs -size <fs_name>
Where:
<fs_name> = name of the file system
Example:
To view the total space available on ufs1, type:
$ nas_fs -size ufs1

Output

total = 8654 avail = 8521 used = 133 ( 1% ) (sizes in MB)

Note

In this example, the file system is mounted.

Checking Space Capacity of All File Systems on a Data Mover


Use this procedure to check the space capacity of all file systems on a Data Mover.

Action

To see the total amount of space allocated to every file system on a Data Mover, the free and
used space available to each file system, and the percentage of used space, use this
command syntax:
$ server_df <movername>
Where:
<movername> = name of the specified Data Mover
Example:
$ server_df server_2

Output

server_2:
Filesystem kbytes used avail capacity Mounted on
root_fs_common 15360 1392 13968 9% /.etc_common
ufs1 34814592 54240 34760352 0% /ufs1
ufs2 104438672 64 104438608 0% /ufs2
ufs1_snap1 34814592 64 34814528 0% /ufs1_snap1
root_fs_2 15360 224 15136 1% /



Checking Inode Capacity of a File System
Use this procedure to check the inode capacity of a file system on a Data Mover.

Action

To see the number of inodes allocated to a file system on a Data Mover, the number of used
and available inodes for the file system, and the percentage of total inodes in use by the file
system, use this command syntax:
$ server_df <movername> -inode <fs_name>
Where:
<movername> = name of the specified Data Mover
<fs_name> = name of the file system
Example:
To view the inode allocation, use, and availability on ufs2, type:
$ server_df server_2 -inode ufs2

Output

server_2:
Filesystem inodes used avail capacity Mounted on
ufs2 12744766 8 12744758 0% /ufs2

Note

Column definitions:
Filesystem — Name of the file system.
inodes — Total number of inodes allocated to the file system.
used — Number of inodes in use by the file system.
avail — Number of free inodes available for use by the file system.
capacity — Percentage of total inodes in use.
Mounted on — Name of the mount point of the file system on the Data Mover.



Checking Inode Capacity of All File Systems on a Data Mover
Use this procedure to check the inode capacity of all file systems on a Data Mover.

Action

To see the number of inodes allocated to each file system on a Data Mover, the number of
used and available inodes for each file system, and the percentage of total inodes in use by
each file system, use this command syntax:
$ server_df <movername> -inode
Where:
<movername> = name of the specified Data Mover
Example:
$ server_df server_2 -inode

Output

server_2:
Filesystem inodes used avail capacity Mounted on
root_fs_common 7870 14 7856 0% /.etc_common
ufs1 4250878 1368 4249510 0% /ufs1
ufs2 12744766 8 12744758 0% /ufs2
ufs1_snap1 4250878 8 4250870 0% /ufs1_snap1
root_fs_2 7870 32 7838 0% /

Note

Column definitions:
Filesystem - Name of the file system.
inodes — Total number of inodes allocated to the file system.
used — Number of inodes in use by the file system.
avail — Number of free inodes available for use by the file system.
capacity — Percentage of total inodes in use.
Mounted on — Name of the mount point of the file system on the Data Mover.

Finding Out If a File System Has an Associated Storage Pool


All file systems created using the AVM feature have an associated storage pool. For
details, refer to Creating a Celerra File System by Size Using AVM on page 21.
Use this procedure to find out if a file system has an associated storage pool.

Action

Check the file system configuration to determine if the file system has an associated storage
pool:
$ nas_fs -info <fs_name>
Where:
<fs_name> = name of the file system whose configuration you want to check
Example:
To check the file system configuration for associated storage pool on ufs1, type:
$ nas_fs -info ufs1



Output

id = 24
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = v121
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
symm_devs =
WRE00100100430-0010,
WRE00100100430-0011
disks = d7,d13

Note

The pool line includes the value clar_r5_performance, which is the storage pool
associated with the file system.
If the pool line contains no value (is blank), the file system has no associated storage pool.

Extending a Celerra File System


If a Celerra file system nears its maximum capacity, you can increase its size by
extending the file system. You extend a file system either by volume (by adding
volumes to it), or by size (using the AVM feature to add space if the file system has
an associated storage pool).

Note: For information about AVM, refer to Celerra File System Concepts on page 7. For
information about storage pools, refer to the Managing Celerra Volumes technical module
on the user information CD.

The following sections explain procedures for extending a Celerra file system by
volume or by size using AVM.
Extending a Celerra File System by Volume

Note: If the file system has an associated storage pool, we recommend that you extend it
using the procedure Extending a Celerra File System by Size Using AVM on page 33. To
determine whether a file system has an associated storage pool, use the procedure Finding
Out If a File System Has an Associated Storage Pool on page 30.

You can extend a file system with any unused disk volume, slice volume, stripe
volume, or metavolume. Adding volume space to a file system adds the space to
the metavolume on which it is built. So when you extend a file system, the total size
of its underlying metavolume is also extended.
If the metavolume underlying the file system is made up of stripe volumes, you
should extend the file system with stripe volumes of the same size and type.



Use this procedure to extend a file system by adding a volume to it.

Step Action

1. Check the size of the file system before extending it:


$ nas_fs -size <fs_name>
Example:
$ nas_fs -size ufs1
Output:
total = 67998 avail = 67997 used = 0 (0%) (sizes in MB)
volume: total = 69048 (sizes in MB)

2. To extend the file system, use this command syntax:


$ nas_fs -xtend <fs_name> <volume_name>
Where:
<fs_name> = the name of the file system.
<volume_name> = the volume you want to add to the file system. The name of the
volume you added appears in the volume output field.
Example:
$ nas_fs -xtend ufs1 emtv2b
Output:
id = 18
name = ufs1
acl = 0
in_use = True
type = uxfs
volume = mtv1, emtv2b
profile =
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
symm_devs = 002804000190-0034,002804000190-0035,002804000190-
0036,002804000190-0037,002804000190-0040,002804000190-
0041,002804000190-0042,002804000190-0043
disks = d3,d4,d5,d6,d15,d16,d17,d18
disk=d3 symm_dev=002804000190-0034 addr=c0t3l8-15-0
server=server_2
disk=d4 symm_dev=002804000190-0035 addr=c0t3l9-15-0
server=server_2
disk=d5 symm_dev=002804000190-0036 addr=c0t3l10-15-0
server=server_2
disk=d6 symm_dev=002804000190-0037 addr=c0t3l11-15-0
server=server_2
disk=d15 symm_dev=002804000190-0040 addr=c0t4l4-15-0
server=server_2
disk=d16 symm_dev=002804000190-0041 addr=c0t4l5-15-0
server=server_2
disk=d17 symm_dev=002804000190-0042 addr=c0t4l6-15-0
server=server_2
disk=d18 symm_dev=002804000190-0043 addr=c0t4l7-15-0
server=server_2




3. Check the size of the file system after extending it to see how much the size increased:
$ nas_fs -size <fs_name>
Where: <fs_name> = the name of the file system
Example:
$ nas_fs -size ufs1
Output:
total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)

Extending a Celerra File System by Size Using AVM


All file systems created using the AVM feature have an associated storage pool. For
details, refer to Creating a Celerra File System by Size Using AVM on page 21.
If a file system has an associated storage pool, you can extend it by specifying only
the file system name and the extension size. AVM handles the extension by
creating and allocating new storage from the storage pool already associated with
the file system.

Note: For information about AVM, refer to Celerra File System Concepts on page 7. For
information about storage pools, refer to Storage Pools in the Managing Celerra Volumes
technical module on the user information CD.

Use this procedure to extend a file system by size using AVM.

Step Action

1. Check the file system configuration to confirm that the file system has an associated
storage pool. If you see a pool defined in the output, the file system was created with AVM
and has an associated storage pool.
$ nas_fs -info <fs_name>

Example:
$ nas_fs -info ufs1

Output:
id = 24
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = v121
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
symm_devs =
WRE00100100430-0010,
WRE00100100430-0011
disks = d7,d13




2. To extend the file system, use this command syntax:


$ nas_fs -xtend <fs_name> size=<size> [pool=<pool>]
[storage=<system_name>]
Where:
<fs_name> = the name of the file system
<size> = amount of space you want to add to the file system
<pool> = storage pool for extending the file system
<system_name> = the backend storage system from which to allocate space for
extending the file system

Note: If you do not specify a storage system, the default storage system is the one on
which the file system resides. If the file system spans multiple storage systems, the
default is to use all the storage systems on which the file system resides.

Enter the size in gigabytes by typing <number>G (for example, 250G) or in megabytes by
typing <number>M (for example, 500M).

Example:
$ nas_fs -xtend ufs1 size=10G

Output:
id = 24
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = v121
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
symm_devs =
WRE00100100430-0010,
WRE00100100430-0011
disks = d7,d13,d19,d25

3. Check the size of the file system after extending it to confirm that the size increased.
$ nas_fs -size <fs_name>
Where: <fs_name> = the name of the file system.

Example:
$ nas_fs -size ufs1

Output:
total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)



Exporting a Celerra File System
Use this procedure to export a Celerra file system.

Action

To export a file system to NFS clients, use this command syntax:


$ server_export <movername> -Protocol <Protocol> -option <options>
/<pathname>
Where:
<movername> = name of the specified Data Mover
<Protocol> = nfs or cifs (nfs is the default)
<options> = NFS export options
<pathname> = pathname of the file system to export

Example:
To export the file system ufs2 to NFS clients, type:
$ server_export server_3 -Protocol nfs -option root=10.1.1.1 /ufs2

To export a Celerra file system to CIFS clients, use this command syntax:
$ server_export <movername> -Protocol <Protocol> -name <sharename>
/<pathname>
Where:
<movername> = name of the specified Data Mover
<Protocol> = nfs or cifs (nfs is the default)
<sharename> = CIFS share name
<pathname> = pathname of the file system to export
Example:
To export the file system ufs2 to CIFS clients, type:
$ server_export server_3 -Protocol cifs -name ufs2 /ufs2

Output

For both command examples:


server_3: done

Note

NFS protocol is the default.



Renaming a Celerra File System
Use this procedure to rename a Celerra file system.

Action

To rename a file system, use this command syntax:


$ nas_fs -rename <old_name> <new_name>
Where:
<old_name> = current name of file system
<new_name> = new name of file system
Example:
To rename the file system ufs to ufs1, type:
$ nas_fs -rename ufs ufs1

Output

id = 18
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = mtv1
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
symm_devs =
002806000209-006,002806000209-007,002806000209-008,002
806000209-009
disks = d3,d4,d5,d6

Deleting a Celerra File System and Freeing Its Disk Space

CAUTION
Deleting a file system deletes all data in the file system. If the file system has
checkpoints, delete all the checkpoints before deleting the file system. Deleting the file
system does not delete data from business continuance volumes associated with the
file system.

Use this procedure to delete a Celerra file system and free its associated disk
space.

Step Action

1. Back up from the Celerra file system all data that you want to keep.




2. Check the file system configuration to determine if the file system has an associated
storage pool (you will need this information for a later step):
$ nas_fs -info <fs_name>
If the pool output line includes a value (is not blank), the file system has an associated
storage pool.
If the Celerra file system does not have an associated storage pool, continue with step 3.
If the Celerra file system has an associated storage pool, continue with step 4.

3. Determine and note the name of the metavolume on which the Celerra file system is built
(you will need this name for a later step):
$ nas_fs -info <fs_name>
Refer to Listing Configuration Information for a Celerra File System on page 27 for
example output.
The volume field contains the name of the metavolume. The disks field lists the disks
that provide storage to the file system.

4. If the Celerra file system has associated checkpoints, permanently unmount and then
delete the checkpoints and their associated volumes.
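For example, for a hypothetical checkpoint named ufs1_ckpt1 mounted on server_2, the
sequence might be:

$ server_umount server_2 -perm ufs1_ckpt1
$ nas_fs -delete ufs1_ckpt1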

5. If the Celerra file system has associated business continuance volumes, break the
connection between (unmirror) the file system and its business continuance volumes. For
how to unmirror a business continuance volume, refer to the Using TimeFinder/FS with
Celerra technical module on the user information CD.

6. Permanently disable client access to the Celerra file system:


For NFS-exported file systems, use a command of the form:
$ server_export <movername> -Protocol nfs -unexport -perm
<pathname>
Where:
<movername> = name of the specified Data Mover
<pathname> = NFS entry
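Example (the Data Mover name and export path are illustrative):
$ server_export server_2 -Protocol nfs -unexport -perm /ufs1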

For CIFS-shared file systems, use a command of the form:


$ server_export <movername> -Protocol cifs -unexport <sharename>
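Example (the Data Mover name and share name are illustrative):
$ server_export server_2 -Protocol cifs -unexport ufs1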

7. Permanently unmount the Celerra file system from its associated Data Movers:
$ server_umount <movername> -perm <fs_name>
Refer to Unmounting a File System Permanently on page 42 for more information.

8. Delete the Celerra file system from the Celerra Network Server:
$ nas_fs -delete <fs_name>
If the Celerra file system had an associated storage pool, you are finished. As part of
the file system delete operation, AVM deletes all underlying volumes and frees the space
for use by other file systems.
If the Celerra file system had no associated storage pool, continue with step 9. The
volumes underlying the file system were created manually and must be manually deleted.




9. Delete the metavolume on which the Celerra file system was created:
$ nas_volume -delete <meta_volume_name>
Refer to the procedure for deleting a metavolume or stripe volume in the Managing
Celerra Volumes technical module on the user information CD for more information.

10. If the metavolume included stripe volumes, delete all stripe volumes associated with the
metavolume until the disk space is free:
$ nas_volume -delete <stripe_volume_name>

Refer to the procedure for deleting a metavolume or stripe volume in the Managing
Celerra Volumes technical module on the user information CD for more information.

11. If the metavolume included slice volumes, delete all slice volumes associated with the
metavolume until the disk space is free:
$ nas_volume -delete <slice_volume_name>

Refer to the procedure for deleting a slice volume in the Managing Celerra Volumes
technical module on the user information CD for more information.

12. To finish freeing the disk space, check for slice volumes, stripe volumes, and
metavolumes that are not in use (n in inuse column) with these commands:
$ nas_volume -list
$ nas_slice -list
As needed, delete unused volumes until you free all of the disk space you want to free.



Listing Mount Points on a Data Mover
Use this procedure to list the mount points on a Data Mover.

Action

To list the mount points on a Data Mover, use this command syntax:
$ server_mountpoint <movername> -list
Where:
<movername> = name of the specified Data Mover
Example:
To list mount points on server_3, type:
$ server_mountpoint server_3 -list

Output

server_3 :
/.etc_common
/ufs1
/ufs1_snap1

Listing All Celerra File Systems Mounted on a Data Mover


You can view a list of all Celerra file systems currently mounted on a specific Data
Mover and the options assigned to each mounted file system.
Use this procedure to view a list of file systems mounted on a Data Mover.

Action

To view a list of all Celerra file systems mounted on a Data Mover, use this command syntax:
$ server_mount <movername>
Where:
<movername> = name of the specified Data Mover
Example:
To list all Celerra file systems mounted on server_3, type:
$ server_mount server_3

Output

server_3:
fs2 on /fs2 uxfs,perm,rw
fs1 on /fs1 uxfs,perm,rw
root_fs_3 on / uxfs,perm,rw



Mounting a File System for Read/Write Access
A Celerra file system can be mounted for read/write access on only one Data
Mover. If a file system is mounted for read/write access, it cannot be mounted for
read-only access on any Data Mover.
Use this procedure to mount a file system for read/write access on a Data Mover.

Step Action

1. If the file system is mounted on any Data Mover, use the server_umount command to
unmount the file system permanently from all Data Movers on which it is mounted.
For how to unmount a file system from a Data Mover, refer to Unmounting a Celerra File
System from a Data Mover on page 41.

2. Use the server_mount command to mount the file system on a Data Mover for read/
write access.
For how to mount a file system on a Data Mover, refer to Mount a Celerra File System on
a Mount Point on a Data Mover on page 23.
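Condensed into commands, moving read/write access for a hypothetical file system ufs1
from server_3 to server_2 might look like this (assuming the mount point /ufs1 already
exists on server_2):

$ server_umount server_3 -perm ufs1
$ server_mount server_2 ufs1 /ufs1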

Mounting a File System for Read-Only Access


A Celerra file system can be mounted for read-only access on several Data Movers.
If a file system is mounted for read-only access, it cannot be mounted for read/write
access on any Data Mover.
Use this procedure to mount a file system for read-only access on Data Movers.

Step Action

1. If the file system is mounted for read/write access on a Data Mover, use the
server_umount command to unmount the file system permanently from that Data
Mover.
For how to unmount a file system from a Data Mover, refer to Unmounting a Celerra File
System from a Data Mover on page 41.

2. Use the server_mount command to mount the file system for read-only access on each
Data Mover from which you want to provide read-only access to the file system.
For how to mount a file system on a Data Mover, refer to Mount a Celerra File System on
a Mount Point on a Data Mover on page 23.



Unmounting a Celerra File System from a Data Mover
You can unmount a single Celerra file system from a Data Mover temporarily or
permanently, or unmount all Celerra file systems from a Data Mover at once. You
specify the file system you want to unmount either by the file system name or the
name of its mount point on the Data Mover.
Unmounting a File System Temporarily
Use this procedure to temporarily unmount a file system from a Data Mover. The
option for a temporary unmount, -temp, is the default and need not be specified as
part of the command.

Note: It is good practice to unexport NFS exports and unshare CIFS shares of the file
system before unmounting the file system from the Data Mover.
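For example, to quiesce NFS access to a hypothetical file system ufs1 on server_2 before
unmounting it temporarily (omitting -perm makes the unexport temporary as well):

$ server_export server_2 -Protocol nfs -unexport /ufs1
$ server_umount server_2 -temp ufs1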

Action

To temporarily unmount a file system by specifying the file system name, use this command
syntax:
$ server_umount <movername> -temp <fs_name>

Where:
<movername> = name of the specified Data Mover
<fs_name> = name of file system to unmount
Example:
To temporarily unmount a file system named ufs1, type:
$ server_umount server_2 -temp ufs1

Output

server_2: done



Unmounting a File System Permanently
Use this procedure to permanently unmount a file system from a Data Mover.

Note: It is good practice to unexport NFS exports and unshare CIFS shares of the file
system before unmounting the file system from the Data Mover.

Action

To permanently unmount a file system from a Data Mover by specifying the file system name,
use this command syntax:
$ server_umount <movername> -perm <fs_name>
Where:
<movername> = name of the specified Data Mover
<fs_name> = name of file system to unmount
Example:
To permanently unmount a file system named ufs1, type:
$ server_umount server_2 -perm ufs1
To permanently unmount a file system from a Data Mover by specifying the name of its mount
point, use this command syntax:
$ server_umount <movername> -perm <mount_point>
Where:
<movername> = name of the specified Data Mover
<mount_point> = path to mount point for the specified Data Mover
Example:
To permanently unmount the file system mounted on /ufs1 from server_2, type:
$ server_umount server_2 -perm /ufs1

Output

For both command examples:


server_2: done



Unmounting All Celerra File Systems from a Data Mover
Use the procedures in this section to temporarily or permanently unmount all file
systems from a Data Mover.
Unmounting All File Systems on a Data Mover Temporarily
Use this procedure to temporarily unmount all file systems on a Data Mover. The
option for a temporary unmount, -temp, is the default and need not be specified as
part of the command.

Note: It is good practice to unexport NFS exports and unshare CIFS shares of the file
system before unmounting the file system from the Data Mover.

Action

To temporarily unmount all file systems on a Data Mover, use this command syntax:
$ server_umount <movername> -temp all
or
$ server_umount <movername> all
Where:
<movername> = name of the specified Data Mover
Example:
To temporarily unmount all file systems on server_2, type:
$ server_umount server_2 -temp all

Output



server_2: done



Unmounting All File Systems on a Data Mover Permanently

CAUTION
Permanently unmounting all file systems from a Data Mover should be done with
caution, because the operation deletes the contents of the mount table. To re-establish
client access to the file systems after this operation, you must rebuild the mount table
by remounting each file system on the Data Mover.

Use this procedure to permanently unmount all file systems on a Data Mover. We
recommend that you unexport NFS exports and unshare CIFS shares of the file
systems before unmounting the file systems from the Data Mover.

Action

To permanently unmount all file systems on a Data Mover, use this command syntax:
$ server_umount <movername> -perm all
Where:
<movername> = name of the specified Data Mover
Example:
To permanently unmount all file systems on server_2, type:
$ server_umount server_2 -perm all

Output



server_2: done



Changing the Size Threshold for All File Systems
Use this procedure to change the file system size threshold for all file systems on a
Celerra Network Server.

Step Action

1. To change the size threshold for all file systems, use this command syntax:
$ server_param <movername> -facility <facility_name> -modify
<param_name> -value <new_value>
Where:
<movername> = name of Data Mover
<facility_name> = facility for the parameters
<param_name> = name of parameter
<new_value> = new value for parameter
Example:
To change the size threshold for all file systems to 85%, type:
$ server_param ALL -facility file -modify fsSizeThreshold -value
85

2. To make the change take effect, reboot the Data Movers:


$ server_cpu ALL -reboot now

Changing the Size Threshold for a Data Mover


Use this procedure to change the file system size threshold for all file systems on
one Data Mover.

Step Action

1. To change the size threshold for all file systems on one Data Mover, use this command
syntax:
$ server_param <movername> -facility <facility_name> -modify
<param_name> -value <new_value>
Where:
<movername> = name of Data Mover
<facility_name> = facility for the parameters
<param_name> = name of parameter
<new_value> = new value for parameter
Example:
To change the size threshold for all file systems on server_2 to 85%, type:
$ server_param server_2 -facility file -modify fsSizeThreshold
-value 85

2. To make the change take effect, reboot the Data Mover:


$ server_cpu <data_mover> -reboot now
Example:
$ server_cpu server_2 -reboot now



Enhancing File Read and Write Performance
The Celerra Network Server includes internal mechanisms for enhancing read/write
performance characteristics for particular types of files.
The read mechanisms are designed to optimize reads of large files and are turned
on by default on the server. These mechanisms can improve read speed by up to 100
percent on large files. Turn off this default mechanism if the read
access pattern for files in the file system involves small random accesses. For how
to turn off this mechanism for a file system, refer to Turning Off the Read Prefetch
Mechanism for a File System on page 46. For how to turn off this mechanism for all
file systems on a Data Mover, refer to Turning Off and On the Read Prefetch
Mechanism for a Data Mover on page 47.
The write mechanisms are designed to improve performance for applications, such
as databases, that have many connections to a large file. These mechanisms can
enhance database access through the NFS protocol by 30 percent or more. The
mechanism is off by default and can be turned on per file system. For how to turn on
the write performance improvement mechanism, refer to Turning On the Uncached
Write Mechanism for a File System on page 48.
Turning Off the Read Prefetch Mechanism for a File System
Use this procedure to turn off the read prefetch performance improvement
mechanism for a file system.

Action

To turn off the read prefetch mechanism for a file system, use this command syntax:
$ server_mount <movername> -option <options>,noprefetch
<fs_name> <mount_point>
Where:
<movername> = name of the specified Data Mover
<options> = specifies mount options, separated by commas
<fs_name> = name of the file system
<mount_point> = path to mount point for the specified Data Mover
Example:
To turn off the read prefetch mechanism on ufs1, type:
$ server_mount server_3 -option rw,noprefetch ufs1 /ufs1

Output

server_3: done



Turning Off and On the Read Prefetch Mechanism for a Data
Mover
Use this procedure to turn off the read prefetch performance improvement
mechanism for all file systems on one Data Mover.

Step Action

1. To turn off the read prefetch mechanism for all file systems on one Data Mover, use this
command syntax:
$ server_param <movername> -facility <facility_name> -modify
<param_name> -value <new_value>
Where:
<movername> = name of the Data Mover
<facility_name> = facility for the parameters
<param_name> = name of parameter
<new_value> = new value for parameter
Example:
To turn off the read prefetch mechanism for all file systems on server_2, type:
$ server_param server_2 -facility file -modify prefetch -value 0

To turn on the read prefetch mechanism for all file systems on one Data Mover, use this
command syntax:
$ server_param <movername> -facility <facility_name> -modify
<param_name> -value <new_value>
Example:
To turn on the read prefetch mechanism for all file systems on server_2, type:
$ server_param server_2 -facility file -modify prefetch -value 1

2. To make the change take effect, reboot the Data Mover:


$ server_cpu <data_mover> -reboot now
Example:
$ server_cpu server_2 -reboot now



Turning On the Uncached Write Mechanism for a File System
Use this procedure to turn on the uncached write performance improvement
mechanism for a file system.

Action

To turn on the write performance improvement mechanism for a file system, use this command
syntax:
$ server_mount <movername> -option <options>,uncached
<fs_name> <mount_point>
Where:
<movername> = name of the specified Data Mover
<options> = specifies mount options, separated by commas
<fs_name> = name of the file system
<mount_point> = path to mount point for the specified Data Mover
Example:
To turn on the uncached write mechanism on ufs1, type:
$ server_mount server_3 -option rw,uncached ufs1 /ufs1

Output

server_3: done



Finding and Fixing File System Storage Errors
The following sections explain procedures you use to check file system consistency.

Starting and Monitoring a File System Check


Use this procedure to start and monitor fsck on a specified file system.

Action

To start and monitor fsck on a specified file system, use this command syntax:
$ nas_fsck -start <name> -monitor
Where:
<name> = name of the specified file system
Example:
To start fsck on ufs1 and monitor the progress, type:
$ nas_fsck -start ufs1 -monitor

Output

id = 27
name = ufs1
volume = mtv1
fsck_server = server_2
inode_check_percent = 10..20..30..40..60..70..80..100
directory_check_percent = 0..0..100
used_ACL_check_percent = 100
free_ACL_check_status = Done
cylinder_group_check_status = In Progress..Done

Starting ACL Check on a File System


To start an ACL check on a specified file system, use the following procedure.

Action

To start an ACL check on a specified file system, use this command syntax:
$ nas_fsck -start <name> -aclchkonly
Where:
<name> = name of the specified file system
Example:
To start an ACL check on ufs1, type:
$ nas_fsck -start ufs1 -aclchkonly

Output

ACLCHK: in progress for file system ufs1



Listing Current File System Checks
To list all current file system checks, use the following procedure.

Action

To list current file system checks, use this command syntax:


$ nas_fsck -list
Example:
To list current file system checks, type:
$ nas_fsck -list

Output

id type state volume name server


23 1 FSCK 134 ufs2 4
27 1 ACLCHK 144 ufs1 1

Displaying Information on a Single File System Check


To display information on a file system check for a single file system, use the
following procedure.

Action

To display information about a file system check on a single file system, use this command syntax:
$ nas_fsck -info <name>
Where:
<name> = name of the specified file system
Example:
To display information about file system check for ufs2, type:
$ nas_fsck -info ufs2

Output

name = ufs2
id = 23
volume = v134
fsck_server = server_5
inode_check_percent = 100
directory_check_percent = 100
used_ACL_check_percent = 100
free_ACL_check_status = Done
cylinder_group_check_status = In Progress



Displaying Information on all Current File System Checks
To display information on all file system checks currently running, use the following
procedure.

Action

To display information on all file system checks currently running, use this command syntax:
$ nas_fsck -info -all
Example:
To display information on all file system checks currently running, type:
$ nas_fsck -info -all

Output

name = ufs2
id = 23
volume = v134
fsck_server = server_5
inode_check_percent = 30
directory_check_percent = 0
used_ACL_check_percent = 0
free_ACL_check_status = Not Started
cylinder_group_check_status = Not Started

name = ufs1
id = 27
volume = mtv1
fsck_server = server_2
inode_check_percent = 100
directory_check_percent = 0
used_ACL_check_percent = 0
free_ACL_check_status = Not Started
cylinder_group_check_status = Not Started



Managing Celerra Nested Mount File Systems
The following sections explain procedures you use to manage Celerra nested
mount file systems.

Creating a Nested Mount File System


You must create an NMFS before you can mount component file systems in it. Use this
procedure to create a nested mount file system.

Action

To create a nested mount file system, use this command syntax:


$ nas_fs -name <name> -type nmfs -create
Where:
<name> = name of the nested mount file system
Example:
To create a nested mount file system named nmfs4, type:
$ nas_fs -name nmfs4 -type nmfs -create

Output

id = 719
name = nmfs4
acl = 0
in_use = False
type = nmfs
worm = off
volume = 0
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs =
disks =

Note

There is no volume allocation.



Mounting a Nested Mount File System
Use the following procedure to mount the NMFS. An NMFS must be mounted
read-only.

Action

To mount the NMFS, use this command syntax:


$ server_mount <movername> -o ro <nmfs_name> /<nmfs_path>
Where:
<movername> = name of the specified Data Mover
<nmfs_name> = name of the nested mount file system
<nmfs_path> = path to the NMFS mount point
Example:
To mount an NMFS named nmfs4, type:
$ server_mount server_3 -o ro nmfs4 /nmfs4

Output

server_3: done

Deleting the Nested Mount File System


To delete an NMFS, you must first permanently unmount all component file systems
in the NMFS, and then permanently unmount the NMFS itself. Refer to Unmounting a
File System Permanently on page 42 for more information.
After permanently unmounting all the component file systems and the NMFS,
delete the file system using the nas_fs command.

Step Action

1. To permanently unmount the NMFS, use this command syntax:


$ server_umount <movername> -perm <fs_name>
Where:
<movername> = name of the specified Data Mover
<fs_name> = name of file system to unmount
Example:
To permanently unmount a file system named nmfs4, type:
$ server_umount server_3 -perm nmfs4

Output:
server_3 : done




2. To delete the NMFS, use this command syntax:


$ nas_fs -delete <fs_name>
Where:
<fs_name> = name of file system to be deleted

Example:
To delete a nested mount file system, nmfs4, type:
$ nas_fs -delete nmfs4
Output:
id = 719
name = nmfs4
acl = 0
in_use = False
type = nmfs
worm = off
volume = 0
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs =
disks =



Adding an Existing File System to the NMFS
If you are creating a new component file system to add to the NMFS, see Creating
a Celerra File System by Size Using AVM on page 21. If the file system you want to
add to the NMFS already exists, use the following procedure to add the file system
to a NMFS.

Step Action

1. If the existing file system is mounted, permanently unmount the file system using this
command syntax:
To permanently unmount a file system from a Data Mover by specifying the file system
name, use this command syntax:
$ server_umount <movername> -perm <fs_name>
Where:
<movername> = name of the specified Data Mover
<fs_name> = name of file system to unmount
Example:
To permanently unmount a file system named fs1, type:
$ server_umount server_2 -perm fs1
To permanently unmount a file system from a Data Mover by specifying the name of its
mount point, type:
$ server_umount <movername> -perm <mount_point>
Where:
<movername> = name of the specified Data Mover
<mount_point> = path to mount point for the specified Data Mover
Example:
To permanently unmount a file system to server_2, type:
$ server_umount server_2 -perm /fs1
Output:
For both command examples:
server_2: done

2. Mount the file system in the nested mount file system using this command syntax:
$ server_mount <movername> -option <options> <component_fs_name>
/<nmfs_pathname>/<component_fs_name>
Where:
<movername> = name of the specified Data Mover
<options> = specifies mount options, separated by commas
<component_fs_name> = name of the component file system to be mounted
<nmfs_pathname> = pathname of the nested mount file system
Example:
To mount the file system fs5 in the NMFS nmfs4 on server_3, type:
$ server_mount server_3 fs5 /nmfs4/fs5
Output:
server_3: done



Exporting a Component File System
To export a component file system, refer to Exporting a Celerra File System on
page 35. When exporting a component file system, use the full path to the
component file system. For CIFS clients, the component file system must be shared.
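For example, to export a hypothetical component file system fs5 mounted in the NMFS
nmfs4 to NFS clients, you might type:

$ server_export server_3 -Protocol nfs -option root=10.1.1.1 /nmfs4/fs5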

Removing a Component File System


You can remove component file systems from the NMFS either permanently or
temporarily. To permanently remove a component file system from the NMFS, refer
to Unmounting a File System Permanently on page 42. To temporarily remove a
component file system from the NMFS, refer to Unmounting a File System
Temporarily on page 41.

Moving a Nested Mount File System


You can move a nested mount file system from one Data Mover to another Data
Mover. To do this, perform the following steps:
1. Permanently unmount each of the component file systems. Refer to
Unmounting a File System Permanently on page 42 for more information.
2. Permanently unmount the NMFS. Refer to Unmounting a File System
Permanently on page 42 for more information.
3. Mount the NMFS on the new Data Mover. Refer to Mounting a Nested Mount
File System on page 53 for more information.
4. Mount each component file system on the NMFS on the new Data Mover. Refer
to Adding an Existing File System to the NMFS on page 55.
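Condensed into commands, moving a hypothetical NMFS nmfs4 with one component file
system fs5 from server_2 to server_3 might look like this (assuming the mount points
already exist on server_3):

$ server_umount server_2 -perm fs5
$ server_umount server_2 -perm nmfs4
$ server_mount server_3 -o ro nmfs4 /nmfs4
$ server_mount server_3 fs5 /nmfs4/fs5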



Troubleshooting File Systems
This section provides guidance for troubleshooting problems you may encounter
using Celerra file systems. You can query the EMC WebSupport database for
related problem information, obtain release notes, or report a Celerra technical
problem to EMC at Powerlink™, EMC’s secure extranet site, at
http://powerlink.EMC.com. For additional information about using Powerlink and
resolving problems, refer to the Problem Resolution Roadmap technical module on
the user information CD.

Error Messages
This section contains two tables to assist in troubleshooting Celerra file system
problems. Table 5 presents error messages, their definitions, and corrective actions.
Refer to the Celerra Network Server Error Messages Guide on the user information
CD for additional information on error messages.

Note: There can be more than one symptom for the same problem and, in some cases,
more than one cause.

Table 5 File System Error and Event Messages

Server error log: filesystem is mounted, can not delete
Definition: You may be trying to delete a mounted file system.
Corrective action: Verify that this is the correct file system to be deleted, permanently
unmount the file system, and then retry.

Server error log: filesystem is not mounted
Definition: You may be attempting to execute a command against a file system that must
be mounted.
Corrective action: Mount the file system, and then retry.

Server error log: filesystem unavailable for read_write mount
Definition: The file system is already mounted by another Data Mover.
Corrective action: Verify the list of mounted file systems.

Server error log: item is currently in use by movername
Definition: The file system is still mounted.
Corrective action: Unmount the file system, and then retry.

Server error log: Mount Point Name [name] is not valid. Please Re-enter
Definition: The mount point was not entered correctly or does not exist.
Corrective action: When entering the mount point name, the slash (/) that precedes the
mount point name may have been omitted. Type a slash before entering the mount point
name, and then retry. If the error message reappears, check your list of mount points.

Server error log: must unmount from server(s): movername
Definition: You may be trying to execute a file system check against a mounted file
system.
Corrective action: Unmount the file system, and then retry.

Server error log: no such file or directory
Definition: The value being typed either does not exist or does not exist as typed
(typing error).
Corrective action: Check that you are entering the correct value and ensure that the
uppercase and lowercase letters match. Note: All mount points begin with a forward
slash (/).

Server error log: Path busy: filesystem fsname is currently mounted on mountpoint
Definition: A file system is already using the mount point on which you are attempting
to mount.
Corrective action: Create a new mount point, or unmount the file system, and then retry.

Server error log: requires root command
Definition: You may be attempting to execute a command against a root file system or
volume to which you do not have access.
Corrective action: Select a non-root volume or file system, and then retry. Alternately,
retry the command as a root command by prepending it with the word root, using this
command syntax: root<command>.

Server error log: server_x: Device busy
Definition: You may have tried to execute the same command more than once.
Corrective action: Check the list of mounted file systems to verify that the path is
already successfully mounted.

Server error log: server_x: No such file or directory
Definition: The mount point may not exist on the specified Data Mover.
Corrective action: Confirm that the mount point is on the specified Data Mover.

Server error log: undefined netgroup
Definition: The netgroup you are trying to export for is not recognized.
Corrective action: Enter the name of the netgroup into the system, and then retry. For
information about entering the netgroup name into the system, refer to the Configuring
Celerra Naming Services technical module.

Server error log: Filesystem size threshold (##%) crossed (fs <fs_name>)
Note: This message appears in the event log on the server.
Definition: The amount of space in use in the file system (<fs_name>) has reached its
maximum threshold (##%). The threshold default is 90 percent space in use. File system
performance may degrade when a file system is almost full.
Corrective action: Extend the file system to increase its available space. This reduces
the percentage of used space. For how to extend a file system, refer to Extending a
Celerra File System on page 31.

Server error log: When you create a new file in the NMFS root directory, a file exists
error appears.
Definition: The NMFS root directory is read-only.
Corrective action: Do not try to create files or folders in the NMFS root directory.



Known Problems and Limitations
This section explains known limitations and restrictions of the Celerra management
applications. Refer to the Release Notes for additional and late-breaking
information about using the management applications.
Table 6 describes known problems that might occur when using Celerra file systems,
and their workarounds.
Table 6 File Systems Known Problems and Workarounds

Known problem: The Celerra Manager does not provide an interface for renaming a file
system.
Workaround: To rename a file system, you can enter the appropriate CLI command in the
CLI command entry page available in Celerra Manager, or directly in the CLI.

Known problem: You are unable to mount a file system.
Symptom: There are many probable causes for this scenario. Many will produce an error
message, though occasionally there will not be one. In this case, the mount table entry
already exists.
Workaround: Perform server_mount -all to activate all entries in the mount table (see
the sketch after this table). Obtain a list of mounted file systems, and then observe
the entries. If the file system in question is already mounted (temporarily or
permanently), perform the necessary steps to unmount it, and then retry.

Known problem: An unmounted file system reappears in the mount table after a system
reboot.
Symptom: The file system may have been temporarily unmounted prior to reboot.
Workaround: Perform a permanent unmount to remove the entry from the mount table.

Known problem: When you create a new file in the NMFS root directory, a file exists
error appears.
Symptom: The NMFS root directory is read-only.
Workaround: Do not try to create files or folders in the NMFS root directory.
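A minimal sketch of the mount-table workaround from Table 6, using an illustrative Data
Mover name:

$ server_mount server_2 -all
$ server_mount server_2

The first command activates all entries in the mount table; the second lists the mounted
file systems so you can check the entry for the file system in question.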



Related Information
For specific information related to the features and functionality described in this
technical module, refer to the following publications, which are available on the user
information CD.
◆ Celerra Network Server Command Reference Manual
◆ Online Celerra man pages
◆ Celerra Network Server Parameters Guide
For general information on other EMC Celerra publications, refer to the Celerra
Network Server User Information CD, which is available on Powerlink at
http://powerlink.emc.com.

Want to Know More?


EMC Customer Education Courses are designed to help you learn how EMC
storage products work together and integrate within your environment in order to
maximize your entire infrastructure investment. EMC Customer Education features
online and hands-on training in state-of-the-art labs conveniently located
throughout the world. EMC customer training courses are developed and delivered
by EMC experts. For course information and registration, refer to EMC Powerlink,
our customer and partner website, at http://powerlink.emc.com.



Appendix: GID Support
The Celerra Network Server software supports 32-bit GIDs (group IDs) on both NFS
and CIFS file systems. This support enables a maximum GID value of 2147483647
(approximately 2 billion).

Restrictions for GID Support


With GID support, the following restrictions apply:
◆ Regardless of the GID support setting (32-bit or 16-bit), there is a maximum
limit of 64,000 GIDs per file system; enabling 32-bit GID support does not
increase this limit.
◆ File systems with 16-bit and 32-bit GIDs can co-exist on a single Data Mover.
Changing the gid32 parameter setting from 1 to 0 allows you to create file
systems with 16-bit GIDs without disabling 32-bit GIDs on file systems already
created with the parameter set to 1. Conversely, changing the gid32
parameter value from 0 to 1 allows you to create file systems with 32-bit GIDs
without disabling 16-bit GID support on existing file systems.
◆ You cannot convert file systems created with 32-bit GID support to use 16-bit
GIDs, and cannot convert file systems created with 16-bit GID support to use
32-bit GIDs. The 32-bit GID support works only for file systems created with the
parameter set to 1, and the 16-bit GID support works only for file systems
created with the parameter set to 0.
◆ If you back up a file system with 32-bit GIDs, you risk truncating the GID values
when the data is restored if you use any of the following server_archive
formats:
• emctar up to 31-bit
• ustar up to 21-bit
• cpio up to 15-bit
• bcpio up to 16-bit
• sv4cpio up to 21-bit
• sv4crc up to 21-bit
• tar up to 18-bit
If you use these server_archive formats to back up file systems with 32-bit
GIDs, a message appears indicating that the UIDs are being forced to 0 and the
GIDs are being forced to 1.
◆ Some backup applications have restrictions. Ensure that the application can
handle 32-bit UIDs/GIDs.
◆ There is no NAS command you can use to find out whether a file system
supports 16-bit or 32-bit GIDs.
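The gid32 parameter is set with server_param, like the other parameters in this module.
The facility name below is an assumption; confirm the facility name and any reboot
requirement in the Celerra Network Server Parameters Guide before making the change:

$ server_param server_2 -facility ufs -modify gid32 -value 1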



Index

A
Access
  CIFS and Celerra file systems 24
  FTP and Celerra file systems 3
  MPFS and Celerra file systems 4
  NFS and Celerra file systems 24
  TFTP and Celerra file systems 3
Automatic Volume Management
  Celerra Manager support 12
  command line interface support 12
  creating file system using 21
  definition 4
  extending file system using 33
AVM, see Automatic Volume Management 4

C
Capacity
  checking file system 27
  Data Mover 27
Celerra Native Manager 17
Celerra WORM 12
Character support, international 6
CIFS access 24
CIFS and Celerra file systems 24
Considerations, file system 8
Creating
  file system 20

D
Data Mover
  checking
    free space 27
    system capacity 27
  creating a mount point 22
  free space 27, 28
  listing
    mount points 39
    mounted file systems 39
  unmounting file system
    permanent 42
    temporary 41
Deleting
  file system 36
Disk volume
  freeing file system space 36
Displaying mounted file systems 39

E
Exporting
  file system 35
Extending file system 31

F
File system
  capacity 27
  CIFS access 24
  configuration information 27
  creating 20
  definition 4
  deleting 36
  displaying mounted 39
  exporting 35
  extending 31
  extending capacity 9
  free space 28
  freeing disk space allocated to 36
  FTP access 24
  inode capacity 29
  listing all 26
  managing 26
  mounting 10
  mounting on Data Mover 23
  MPFS access 25
  NFS access 24
  permanent
    mount 23
  planning considerations 8
  quotas 6, 11
  renaming 36
  requirements 8
  size guidelines 9
  TFTP access 25
  troubleshooting 57
  types 7
  unmount all 43
    permanent 44
    temporary 43
  unmounting from Data Mover 41
    permanent 42
    temporary 41
  used space 28
file system check
  listing 50
  starting ACL check 49
  starting and monitoring 49
File system size threshold 11
  change
    for all file systems 45
    for Data Mover 45
  change for Data Mover 47
Free space
  all file systems on Data Mover 28
  checking on Data Mover 27
  file system 28
fsck
  listing file system checks 50
  starting ACL check 49
  starting and monitoring 49
fsSizeThreshold parameter 45, 47
FTP access to Celerra file systems 3, 24

G
GID support
  restrictions 61

H
HighRoad 25

I
Information
  training 60
Inode capacity 29
Interface choices
  Celerra Native Manager 17
  command line interface 17
International character support 6
Interoperability Matrix 16

L
Listing all file systems 26
LUN, definition 4

M
Metavolume
  definition 4
Monitoring file system size 11
Mount point
  creating 22
  listing 39
Mounting
  file system 10
  file system on Data Mover 23
MPFS access to Celerra file systems 4, 25

N
Nested Mount File System 15
Nested mount file system
  definition 4
  managing 52
NFS access 24
NFS and Celerra file systems 24

P
Parameters
  fsSizeThreshold 45, 47
Performance enhancements
  file read/write 46
  turning off
    read for Data Mover 47
    read for file system 46
  turning on write for file system 48
Powerlink 16

Q
Quotas for file system 6, 11

R
Renaming
  file system 36
Restrictions
  Celerra file systems 6
  GID support 61
  Symmetrix volumes 6
  TimeFinder/FS 6

S
server_mount command 46, 48
Size Threshold
  changing for a Data Mover 45
  changing for all file systems 45
Slice volume
  definition 5
Storage pool
  associated with file system 30
  definition 5
  finding out if file system has 30
System storage errors
  finding and fixing 49

T
TFTP
  definition 3
TFTP access to Celerra file systems 3, 25
Trivial File Transfer Protocol. See TFTP
Troubleshooting 57
  file system 57

U
Unicode characters 6
Unmounting file systems from Data Mover
  permanent 44
  temporary 43
Used space, file system 28
User interface choices 17

V
Volume
  metavolume
    definition 4
  slice volume
    definition 5




About This Technical Module
As part of its effort to continuously improve and enhance the performance and capabilities of the Celerra Network Server product line, EMC
from time to time releases new revisions of Celerra hardware and software. Therefore, some functions described in this document may not be
supported by all revisions of Celerra software or hardware presently in use. For the most up-to-date information on product features, see your
product release notes. If your Celerra system does not offer a function described in this document, please contact your EMC representative for
a hardware upgrade or software update.
Comments and Suggestions About the Documentation
Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Please send a
message to celerradoc_comments@emc.com with your opinions of this document.

Copyright © 1998–2005 EMC Corporation. All rights reserved.


EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR
WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

