
Table of contents

Chapter 1: Network Appliance (NetApp)
Chapter 2: Access Management
  Block Based Access
  iSCSI Introduction
  FC Introduction
  Getting the Storage Ready
  LUN's, igroups, LUN Maps
  Snapshots and Cloning
  Disk Space Management
Chapter 3: NetApp System Administration
  Accessing NetApp
  System Configuration and Administration
  Licensing
  NTP Setup
Chapter 4: NetApp Architecture
  Hardware
  Software
  Storage Terminology
  NetApp Terminology
Chapter 5: Block Access Management
  Block Based Access
  iSCSI Introduction
  FC Introduction
  Getting the Storage Ready
  LUN's, igroups, LUN Maps
  Snapshots and Cloning
  Disk Space Management
Chapter 6: NetApp Commandline Cheatsheet
  Server
  Storage
  Disks
  Aggregates
  Volumes
  FlexCache Volumes
  FlexClone Volumes
  Deduplication
  QTrees
  Quotas
  LUNs, igroups and LUN Mapping
  Snapshotting and Cloning
  File Access using NFS
  File Access using CIFS
  File Access using FTP
  File Access using HTTP
  Network Interfaces
  Routing
  Hosts and DNS
  VLAN
  Interface Groups
  Diagnostic Tools
Chapter 7: NetApp Disk Administration
  Storage
  Disks
  Aggregates
  Volumes
  FlexCache Volumes
  FlexClone Volumes
  Space Saving
  QTrees
  CIFS Oplocks
  Security Styles
  Quotas
Chapter 8: File Access Management
  File Access using NFS
  File Access using CIFS
  File Access using FTP
  File Access using HTTP
Chapter 9: Network Appliance (NetApp)
  History
  NetApp Filer
  NetApp Backups
Chapter 10: Network Management
  Routing
  Hosts and DNS
  VLAN
  Interface Groups
  Diagnostic Tools


Chapter 1: Network Appliance (NetApp)


The following documentation is a guide to using and configuring NetApp storage systems, and it includes a commandline cheat sheet. I have tried to keep this section as brief as possible while still covering a broad range of the NetApp product, but I point you to the official NetApp web site, which contains all the documentation you will ever need. Please feel free to email me any constructive criticism you have about the site, as corrections and additional knowledge are most welcome.

Introduction - History, Filer, Backups
Architecture - Hardware, Software, Storage Terminology, NetApp Terminology
System Administration - Accessing NetApp, System Administration, Licensing, NTP setup
Disk Administration - Storage, Disks, Aggregates, Volumes, FlexCache Volumes, FlexClone Volumes, Space Saving, QTrees, CIFS Oplocks, Quotas
Block Access Management - Introduction, Block Based Access, iSCSI Introduction, FC Introduction, Getting the Storage Ready, LUN's, igroups, LUN Maps and iSCSI, Snapshots and Cloning, Disk Space Management
File Access Management - Introduction, NFS, CIFS, FTP, HTTP
Network Management - Interface Configuration, Routing, Hosts and DNS, VLANs, Interface Groups, Diagnostic Tools
Commandline Cheatsheet - Cheatsheet
Links - The official NetApp web site

Chapter 2: Access Management


In my NetApp introduction section I spoke about two ways of accessing the NetApp filer: file-based access and block-based access.

File-Based Protocols - NFS, CIFS, FTP, TFTP, HTTP
Block-Based Protocols - Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), Internet SCSI (iSCSI)

In this section I will cover the following common protocols; for anything not covered here, please check the official documentation.

iSCSI FC NFS CIFS HTTP FTP

Block Based Access


In iSCSI and FC networks, storage systems are targets that have storage target devices, which are referred to as LUNs, or logical units. Using the Data ONTAP operating system, you configure the storage by creating LUNs. The LUNs are accessed by hosts, which are initiators in the storage network. To connect to iSCSI networks, hosts can use standard Ethernet network adapters (NICs), TCP offload engine (TOE) cards with software initiators, or dedicated iSCSI HBAs. To connect to FC networks, hosts require Fibre Channel host bus adapters (HBAs).

Data ONTAP 7.2 added support for the Asymmetric Logical Unit Access (ALUA) features of SCSI, also known as SCSI Target Port Groups or Target Port Group Support. ALUA defines a standard set of SCSI commands for discovering and managing multiple paths to LUNs on Fibre Channel and iSCSI SANs. ALUA allows the initiator to query the target about path attributes, such as primary path and secondary path. It also allows the target to communicate events back to the initiator. As a result, multipathing software can be developed to support any array. Proprietary SCSI commands are no longer required as long as the host supports the ALUA standard. For iSCSI SANs, ALUA is supported only with Solaris hosts running the iSCSI Solaris Host Utilities 3.0 for Native OS.
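As a hedged example (assuming an existing igroup named win_hosts_group1 and a host whose multipathing stack supports ALUA), ALUA is switched on per igroup and can be verified like this:

  # enable ALUA on an existing igroup
  igroup set win_hosts_group1 alua yes
  # verify the igroup settings
  igroup show -v win_hosts_group1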

iSCSI Introduction
The iSCSI protocol is a licensed service on the storage system that enables you to transfer block data to hosts using the SCSI protocol over TCP/IP. The iSCSI protocol standard is defined by RFC 3720. In an iSCSI network, storage systems are targets that have storage target devices, which are referred to as LUNs (logical units). A host with an iSCSI host bus adapter (HBA), or running iSCSI initiator software, uses the iSCSI protocol to access LUNs on a storage system. The iSCSI protocol is implemented over the storage system's standard gigabit Ethernet interfaces using a software driver.

The connection between the initiator and target uses a standard TCP/IP network. No special network configuration is needed to support iSCSI traffic; the network can be a dedicated TCP/IP network, or it can be your regular public network. The storage system listens for iSCSI connections on TCP port 3260. In an iSCSI network, there are two types of nodes: targets and initiators.

Targets - Storage Systems (NetApp, EMC)
Initiators - Hosts (Unix, Linux, Windows)
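The iSCSI service itself has to be licensed and running before the filer will act as a target; a minimal, hedged sketch using commands covered later in the cheatsheet (the license code is a placeholder):

  # license and start the iscsi service
  license add <iscsi_license_code>
  iscsi start
  # confirm the service is running and note the target node name
  iscsi status
  iscsi nodename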

Storage systems and hosts can be direct-attached or connected through Ethernet switches. Both direct-attached and switched configurations use Ethernet cable and a TCP/IP network for connectivity. You can of course use existing networks but if possible try to make this a dedicated network for the storage system, as it will increase performance. Every iSCSI node must have a node name. The two formats, or type designators, for iSCSI node names are iqn and eui. The storage system always uses the iqn-type designator. The initiator can use either the iqn-type or eui-type designator. The iqn-type designator is a logical name that is not linked to an IP address. It is based on the following components:

iqn

The type designator itself, iqn, followed by a period (.)
The date when the naming authority acquired the domain name, followed by a period
The name of the naming authority, optionally followed by a colon (:)
A unique device name

The format is: iqn.yyyy-mm.backward-naming-authority:unique-device-name
Note: yyyy-mm = the year and month in which the naming authority acquired the domain name; backward-naming-authority = the reverse domain name of the entity responsible for naming this device; unique-device-name = a free-format unique name for this device assigned by the naming authority.

eui
The eui-type designator is based on the type designator, eui, followed by a period, followed by sixteen hexadecimal digits. The format is:

eui.0123456789abcdef

Each storage system has a default node name based on a reverse domain name and the serial number of the storage system's non-volatile RAM (NVRAM) card. The node name is displayed in the following format: iqn.1992-08.com.netapp:sn.serial-number. The following example shows the default node name for a storage system with the serial number 12345678: iqn.1992-08.com.netapp:sn.12345678. The storage system checks the format of the initiator node name at session login time; if the initiator node name does not comply with storage system node name requirements, the storage system rejects the session.

A target portal group is a set of network portals within an iSCSI node over which an iSCSI session is conducted. In a target, a network portal is identified by its IP address and listening TCP port. For storage systems, each network interface can have one or more IP addresses and therefore one or more network portals. A network interface can be an Ethernet port, virtual local area network (VLAN), or virtual interface (vif). The assignment of target portals to portal groups is important for two reasons:

The iSCSI protocol allows only one session between a specific iSCSI initiator port and a single portal group on the target.
All connections within an iSCSI session must use target portals that belong to the same portal group.
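Portal group assignments can be viewed, and if necessary changed, with the iscsi tpgroup commands; a rough sketch only (the group and interface names are assumptions):

  # list the current target portal groups and the interfaces they contain
  iscsi tpgroup show
  # create a user-defined portal group and move two interfaces into it
  iscsi tpgroup create tpgroup_multi
  iscsi tpgroup add tpgroup_multi e0a e0b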

The Internet Storage Name Service (iSNS) is a protocol that enables automated discovery and management of iSCSI devices on a TCP/IP storage network. An iSNS server maintains information about active iSCSI devices on the network, including their IP addresses, iSCSI node names, and portal groups. You obtain an iSNS server from a third-party vendor. If you have an iSNS server on your network, and it is configured and enabled for use by both the initiator and the storage system, the storage system automatically registers its IP address, node name, and portal groups with the iSNS server when the iSNS service is started. The iSCSI initiator can then query the iSNS server to discover the storage system as a target device. If you do not have an iSNS server on your network, you must manually configure each target to be visible to the host.

The Challenge Handshake Authentication Protocol (CHAP) enables authenticated communication between iSCSI initiators and targets. When you use CHAP authentication, you define CHAP user names and passwords on both the initiator and the storage system. During the initial stage of an iSCSI session, the initiator sends a login request to the storage system to begin the session. The login request includes the initiator's CHAP user name and CHAP algorithm. The storage system responds with a CHAP challenge, and the initiator provides a CHAP response. The storage system verifies the response and authenticates the initiator. The CHAP password is used to compute the response.

During an iSCSI session, the initiator and the target communicate over their standard Ethernet interfaces, unless the host has an iSCSI HBA. The storage system appears as a single iSCSI target node with one iSCSI node name. For storage systems with a MultiStore license enabled, each vFiler unit is a target with a different node name. On the storage system, the interface can be an Ethernet port, virtual network interface (vif), or a virtual LAN (VLAN) interface. Each interface on the target belongs to its own portal group by default. This enables an initiator port to conduct simultaneous iSCSI sessions on the target, with one session for each portal group. The storage system supports up to 1,024 simultaneous sessions, depending on its memory capacity. To determine whether your host's initiator software or HBA can have multiple sessions with one storage system, see your host OS or initiator documentation. You can change the assignment of target portals to portal groups as needed to support multiconnection sessions, multiple sessions, and multipath I/O. Each session has an Initiator Session ID (ISID), a number that is determined by the initiator.
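Coming back to CHAP for a moment: the credentials are defined on the filer with the iscsi security commands. A hedged illustration only (the user name and password are made up; the initiator name is the example used later in this guide):

  # require CHAP for one specific initiator
  iscsi security add -i iqn.1991-05.com.microsoft:xblade -s CHAP -n chapuser -p chappassword
  # review the current security policy
  iscsi security show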

FC Introduction
FC is a licensed service on the storage system that enables you to export LUNs and transfer block data to hosts using the SCSI protocol over a Fibre Channel fabric. In an FC network, nodes include targets, initiators, and switches. Nodes register with the Fabric Name Server when they are connected to an FC switch.

Targets - Storage Systems (NetApp, EMC)
Initiators - Hosts (Unix, Linux, Windows)

Storage systems and hosts have adapters so they can be directly connected to each other or to FC switches with optical cable. For switch or storage system management, they might be connected to each other or to TCP/IP switches with Ethernet cable. When a node is connected to the FC SAN, it registers each of its ports with the switch's Fabric Name Server service, using a unique identifier. Each FC node is identified by a worldwide node name (WWNN) and a worldwide port name (WWPN). WWPNs identify each port on an adapter. WWPNs are used for the following purposes:

Creating an initiator group - The WWPNs of the host's HBAs are used to create an initiator group (igroup). An igroup is used to control host access to specific LUNs. You create an igroup by specifying a collection of WWPNs of initiators in an FC network. When you map a LUN on a storage system to an igroup, you grant all the initiators in that group access to that LUN. If a host's WWPN is not in an igroup that is mapped to a LUN, that host does not have access to the LUN; this means that the LUNs do not appear as disks on that host. You can also create port sets to make a LUN visible only on specific target ports. A port set consists of a group of FC target ports, and you bind a port set to an igroup so that the LUN is available only on the target ports in that port set.

Uniquely identifying a storage system's HBA target ports - The storage system's WWPNs uniquely identify each target port on the system. The host operating system uses the combination of the WWNN and WWPN to identify storage system adapters and host target IDs. Some operating systems require persistent binding to ensure that the LUN appears at the same target ID on the host.

When the FCP service is first initialized, it assigns a WWNN to a storage system based on the serial number of its NVRAM adapter. The WWNN is stored on disk. Each target port on the HBAs installed in the storage system has a unique WWPN. Both the WWNN and the WWPN are 64-bit addresses represented in the following format: nn:nn:nn:nn:nn:nn:nn:nn, where n represents a hexadecimal value.

The storage system also has a unique system serial number that you can view by using the sysconfig command. The system serial number is a unique seven-digit identifier that is assigned when the storage system is manufactured. You cannot modify this serial number. Some multipathing software products use the system serial number together with the LUN serial number to identify a LUN.

You use the fcp show initiator command to see all of the WWPNs, and any associated aliases, of the FC initiators that have logged on to the storage system. Data ONTAP displays the WWPN as Portname. To find out which WWPNs are associated with a specific host, see the FC Host Utilities documentation for your host. These documents describe commands supplied by the Host Utilities or the vendor of the initiator, or methods that show the mapping between the host and its WWPN. For example, for Windows hosts, use the lputilnt, HBAnywhere, or SANsurfer applications, and for UNIX hosts, use the sanlun command.
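Putting those commands together, a typical hedged sequence for identifying initiators before building an FC igroup might look like this (the alias name is an assumption):

  # show the filer's own WWNN and system serial number
  fcp nodename
  sysconfig
  # list the WWPNs of the host HBAs that have logged in to the filer
  fcp show initiator
  # optionally give a WWPN a friendly alias for later use
  fcp wwpn-alias set linux1_hba0 10:00:00:00:c9:2b:7c:8f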

Getting the Storage Ready


I have discussed in detail how to create the following in my disk administration section:

Aggregates
Plexes
FlexVol and Traditional Volumes
QTrees
Files
LUNs

Here's a quick recap

A plex is a collection of one or more RAID groups that together provide the storage for one or more Write Anywhere File Layout (WAFL) file system volumes. Data ONTAP uses plexes as the unit of RAID-level mirroring when the SyncMirror software is enabled. An aggregate is a collection of one or two plexes, depending on whether you want to take advantage of RAID-level mirroring: an unmirrored aggregate contains a single plex, while a SyncMirror-mirrored aggregate contains two plexes.

A traditional volume is directly tied to the underlying aggregate and its properties. When you create a traditional volume, Data ONTAP creates the underlying aggregate based on the properties you assign with the vol create command, such as the disks assigned to the RAID group and RAID-level protection. A FlexVol volume is a volume that is loosely coupled to its containing aggregate. A FlexVol volume can share its containing aggregate with other FlexVol volumes. Thus, a single aggregate can be the shared source of all the storage used by all the FlexVol volumes contained by that aggregate.

Once you set up the underlying aggregate, you can create, clone, or resize FlexVol volumes without regard to the underlying physical storage. You do not have to manipulate the aggregate frequently. You use either traditional or FlexVol volumes to organize and manage system and user data. A volume can hold qtrees and LUNs. A qtree is a subdirectory of the root directory of a volume. You can use qtrees to subdivide a volume in order to group LUNs. You create LUNs in the root of a volume (traditional or flexible) or in the root of a qtree, with the exception of the root volume. Do not create LUNs in the root volume because it is used by Data ONTAP for system administration. The default root volume is /vol/vol0. Autodelete is a volume-level option that allows you to define a policy for automatically deleting Snapshot copies based on a definable threshold. Using autodelete is recommended in most SAN configurations. You can set that threshold, or trigger, to automatically delete Snapshot copies when:

The volume is nearly full
The snap reserve space is nearly full
The overwrite reserved space is full
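Those triggers map onto the snap autodelete volume settings; a hedged example for a volume called tradvol1 (the values are illustrative only):

  # fire autodelete when the volume itself is nearly full
  snap autodelete tradvol1 trigger volume
  snap autodelete tradvol1 on
  # review the current autodelete policy
  snap autodelete tradvol1 show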

Two other things that you need to be aware of are Space Reservation and Fractional Reserve.

Space Reservation - When space reservation is enabled for one or more LUNs, Data ONTAP reserves enough space in the volume (traditional or FlexVol) so that writes to those LUNs do not fail because of a lack of disk space.
Fractional Reserve - Fractional reserve is a volume option that enables you to determine how much space Data ONTAP reserves for Snapshot copy overwrites for LUNs, as well as for space-reserved files, when all other space in the volume is used.

When provisioning storage in a SAN environment, there are several best practices to consider. Selecting and following the best practice that is most appropriate for you is critical to ensuring your systems run smoothly. There are generally two ways to provision storage in a SAN environment:

Using the autodelete feature
Using fractional reserve

In Data ONTAP, fractional reserve is set to 100 percent and autodelete is disabled by default. However, in a SAN environment, it usually makes more sense to use autodelete (and sometimes autosize). When using fractional reserve, you need to reserve enough space for the data inside the LUN, fractional reserve, and snapshot data, or: X + X + Delta. For example, you might need to reserve 50 GB for the LUN, 50 GB when fractional reserve is set to 100%, and 50 GB for snapshot data, or a volume of 150 GB. If fractional reserve is set to a percentage other than 100%, then the calculation becomes more complex. In contrast, when using autodelete, you need only calculate the amount of space required for the LUN and snapshot data, or X + Delta. Since you can configure the autodelete setting to automatically delete older snapshots when space is required for data, you need not worry about running out of space for data. For example, if you have a 100 GB volume, you might allocate 50 GB for a LUN, and the remaining 50 GB is used for snapshot data. Or in that same 100 GB volume, you might reserve 30 GB for the LUN, and 70 GB is then allocated for snapshots. In both cases, you can configure snapshots to be automatically deleted to free up space for data, so fractional reserve is unnecessary.
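As a rough, hedged sketch of the two approaches for a 50 GB LUN (the aggregate, volume and LUN names are assumptions):

  # Approach 1 - fractional reserve at 100%: size the volume for X + X + Delta (150 GB here)
  vol create dbvol aggr1 150g
  lun create -s 50g -t linux /vol/dbvol/lun0

  # Approach 2 - autodelete (and optionally autosize): a 100 GB volume, snapshots removed when space runs low
  vol create dbvol2 aggr1 100g
  vol options dbvol2 fractional_reserve 0
  snap autodelete dbvol2 on
  vol autosize dbvol2 -m 120g -i 5g on
  lun create -s 50g -t linux /vol/dbvol2/lun0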

LUN's, iGroups, LUN maps


When you create a LUN there are a number of items you need to know

Path name
Name
Multiprotocol type
Size
Description
Identification number
Space reservation setting

The path name of a LUN must be at the root level of the qtree or volume in which the LUN is located, for example /vol/database/lun1. Do not create LUNs in the root volume; the default root volume is /vol/vol0. The name of the LUN is case-sensitive and can contain 1 to 256 characters. You cannot use spaces. LUN names can contain only the letters A through Z, a through z, numbers 0 through 9, hyphen (-), underscore (_), left brace ({), right brace (}), and period (.).

The LUN Multiprotocol Type, or operating system type, specifies the OS of the host accessing the LUN. It also determines the layout of data on the LUN, the geometry used to access that data, and the minimum and maximum size of the LUN. The LUN Multiprotocol Type values are solaris, solaris_efi, windows, windows_gpt, windows_2008, hpux, aix, linux, netware, xen, hyper_v, and vmware. When you create a LUN, you must specify the LUN type; once the LUN is created, you cannot modify the LUN host operating system type.

You specify the size of a LUN in bytes or by using specific multiplier suffixes (k, m, g, t). The LUN description is an optional attribute you use to specify additional information about the LUN. A LUN must have a unique identification number (ID) so that the host can identify and access the LUN. You map the LUN ID to an igroup so that all the hosts in that igroup can access the LUN. If you do not specify a LUN ID, Data ONTAP automatically assigns one. When you create a LUN by using the lun setup command or FilerView, you specify whether you want to enable space reservations. When you create a LUN using the lun create command, space reservation is automatically turned on.

Initiator groups (igroups) are tables of FCP host WWPNs or iSCSI host nodenames. You define igroups and map them to LUNs to control which initiators have access to LUNs. Typically, you want all of the host's HBAs or software initiators to have access to a LUN. If you are using multipathing software or have clustered hosts, each HBA or software initiator of each clustered host needs redundant paths to the same LUN. You can create igroups that specify which initiators have access to the LUNs either before or after you create LUNs, but you must create igroups before you can map a LUN to an igroup. Initiator groups can have multiple initiators, and multiple igroups can have the same initiator. However, you cannot map a LUN to multiple igroups that have the same initiator. The following example shows how hosts, igroups and LUN maps relate:

Linux1, single-path (one HBA) - igroup: group0 - WWPN added to igroup: 10:00:00:00:c9:2b:7c:8f - LUN mapped to igroup: /vol/vol2/lun0
Linux2, multipath (two HBAs) - igroup: linuxgroup1 - WWPNs added to igroup: 10:00:00:00:c9:2b:3e:3c, 10:00:00:00:c9:2b:09:3c - LUN mapped to igroup: /vol/vol2/lun1
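The multipath row in that example would be built with the igroup and lun commands covered below; a rough sketch (the WWPNs come from the table above, the ostype and LUN ID are assumptions):

  # FC igroup for the two HBAs in host Linux2
  igroup create -f -t linux linuxgroup1 10:00:00:00:c9:2b:3e:3c
  igroup add linuxgroup1 10:00:00:00:c9:2b:09:3c
  # map the LUN so every initiator in the igroup sees it as LUN ID 1
  lun map /vol/vol2/lun1 linuxgroup1 1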

The igroup name is a case-sensitive name that must satisfy several requirements: it contains 1 to 96 characters, spaces are not allowed, it can contain the letters A through Z, a through z, numbers 0 through 9, hyphen (-), underscore (_), colon (:), and period (.), and it must start with a letter or number. The igroup type can be either -i for iSCSI or -f for FC. The ostype indicates the type of host operating system used by all of the initiators in the igroup; all initiators in an igroup must be of the same ostype. The ostypes of initiators are solaris, windows, hpux, aix, netware, xen, hyper_v, vmware, and linux. You must select an ostype for the igroup.

Finally we get to LUN mapping, which is the process of associating a LUN with an igroup. When you map the LUN to the igroup, you grant the initiators in the igroup access to the LUN; you must map a LUN to an igroup to make the LUN accessible to the host. Data ONTAP maintains a separate LUN map for each igroup to support a large number of hosts and to enforce access control. To map a LUN, you specify the path name of the LUN to be mapped, the name of the igroup that contains the hosts that will access the LUN, and a number for the LUN ID (or accept the default LUN ID). Typically, the default LUN ID begins with 0 and increments by 1 for each additional LUN as it is created. The host associates the LUN ID with the location and path name of the LUN. The range of valid LUN ID numbers depends on the host.

There are two ways to set up a LUN:

LUN setup command:
  ontap1> lun setup
  Note: "lun setup" will display prompts that lead you through the setup process

commandline:
  # Create the LUN
  lun create -s 100m -t windows /vol/tradvol1/lun1
  # Create the igroup, you must obtain the node's identifier (my good old-fashioned home PC is: iqn.1991-05.com.microsoft:xblade)
  igroup create -i -t windows win_hosts_group1 iqn.1991-05.com.microsoft:xblade
  # Map the LUN to the igroup
  lun map /vol/tradvol1/lun1 win_hosts_group1 0

The full set of commands for both lun and igroup are below.

LUN configuration
  Display:
    lun show
    lun show -m
    lun show -v
  Initialize/Configure LUNs, mapping:
    lun setup
    Note: follow the prompts to create and configure LUN's
  Create:
    lun create -s 100m -t windows /vol/tradvol1/lun1
  Destroy:
    lun destroy [-f] /vol/tradvol1/lun1
    Note: the "-f" will force the destroy


  Resize:
    lun resize <lun_path> <size>
    lun resize /vol/tradvol1/lun1 75m
  Restart block protocol access:
    lun online /vol/tradvol1/lun1
  Stop block protocol access:
    lun offline /vol/tradvol1/lun1
  Map a LUN to an initiator group:
    lun map /vol/tradvol1/lun1 win_hosts_group1 0
    lun map -f /vol/tradvol1/lun2 linux_host_group1 1
    lun show -m
    Note: use "-f" to force the mapping
  Remove LUN mapping:
    lun show -m
    lun offline /vol/tradvol1/lun1
    lun unmap /vol/tradvol1/lun1 win_hosts_group1 0
  Display or zero read/write statistics for a LUN:
    lun stats /vol/tradvol1/lun1
  Comments:
    lun comment /vol/tradvol1/lun1 "10GB for payroll records"
  Check all lun/igroup/fcp settings for correctness:
    lun config_check -v
  Manage LUN cloning:
    # Create a Snapshot copy of the volume containing the LUN to be cloned
    snap create tradvol1 tradvol1_snapshot_08122010
    # Create the LUN clone
    lun clone create /vol/tradvol1/clone_lun1 -b /vol/tradvol1/lun1 tradvol1_snapshot_08122010
  Show the maximum possible size of a LUN on a given volume or qtree:
    lun maxsize /vol/tradvol1
  Move (rename) a LUN:
    lun move /vol/tradvol1/lun1 /vol/tradvol1/windows_lun1
  Display/change LUN serial number:
    lun serial -x /vol/tradvol1/lun1
  Manage LUN properties:
    lun set reservation /vol/tradvol1/hpux/lun0
  Configure NAS file-sharing properties:
    lun share <lun_path> { none | read | write | all }


  Manage LUN and snapshot interactions:
    lun snap usage -s <volume> <snapshot>

igroup configuration
  Display:
    igroup show
    igroup show -v
    igroup show iqn.1991-05.com.microsoft:xblade
  Create (iSCSI):
    igroup create -i -t windows win_hosts_group1 iqn.1991-05.com.microsoft:xblade
  Create (FC):
    igroup create -f -t windows win_hosts_group1 10:00:00:00:c9:2b:7c:8f
  Destroy:
    igroup destroy win_hosts_group1
  Add initiators to an igroup:
    igroup add win_hosts_group1 iqn.1991-05.com.microsoft:laptop
  Remove initiators from an igroup:
    igroup remove win_hosts_group1 iqn.1991-05.com.microsoft:laptop
  Rename:
    igroup rename win_hosts_group1 win_hosts_group2
  Set O/S type:
    igroup set win_hosts_group1 ostype windows
  Enabling ALUA:
    igroup set win_hosts_group1 alua yes
    Note: ALUA defines a standard set of SCSI commands for discovering and managing multiple paths to LUNs on Fibre Channel and iSCSI SANs. ALUA enables the initiator to query the target about path attributes, such as primary path and secondary path. It also enables the target to communicate events back to the initiator. As long as the host supports the ALUA standard, multipathing software can be developed to support any array; proprietary SCSI commands are no longer required.

There are a number of iSCSI commands that you can use. I am not going to discuss iSCSI security (CHAP or RADIUS); I will leave you to look at the documentation on this advanced topic.

iSCSI commands
  display:
    iscsi initiator show
    iscsi session show [-t]
    iscsi connection show -v
    iscsi security show
  status: iscsi status
  start: iscsi start
  stop: iscsi stop
  stats: iscsi stats
  nodename:
    iscsi nodename
    # to change the name
    iscsi nodename <new name>
  interfaces:
    iscsi interface show
    iscsi interface enable e0b
    iscsi interface disable e0b
  portals:
    iscsi portal show
    Note: Use the iscsi portal show command to display the target IP addresses of the storage system. The storage system's target IP addresses are the addresses of the interfaces used for the iSCSI protocol.
  accesslists:
    iscsi interface accesslist show
    Note: you can add or remove interfaces from the list

We have discussed how to set up a server using iSCSI, but what if the server is using FC to connect to the NetApp? A port set consists of a group of FC target ports. You bind a port set to an igroup to make the LUN available only on a subset of the storage system's target ports. Any host in the igroup can access the LUNs only by connecting to the target ports in the port set. If an igroup is not bound to a port set, the LUNs mapped to the igroup are available on all of the storage system's FC target ports. The igroup controls which initiators LUNs are exported to; the port set limits the target ports on which those initiators have access. You use port sets for LUNs that are accessed by FC hosts only; you cannot use port sets for LUNs accessed by iSCSI hosts. All ports on both systems in an HA pair are visible to the hosts, so you use port sets to fine-tune which ports are available to specific hosts and to limit the number of paths to the LUNs to comply with the limitations of your multipathing software. When using port sets, make sure your port set definitions and igroup bindings align with the cabling and zoning requirements of your configuration.

Port Sets
  display:
    portset show
    portset show portset1
    igroup show linux-igroup1
  create: portset create -f portset1 SystemA:4b
  destroy:
    igroup unbind linux-igroup1 portset1
    portset destroy portset1
  add: portset add portset1 SystemB:4b
  remove: portset remove portset1 SystemB:4b
  binding:
    igroup bind linux-igroup1 portset1
    igroup unbind linux-igroup1 portset1

FCP service


  display: fcp show adapter -v
  daemon status: fcp status
  start: fcp start
  stop: fcp stop
  stats:
    fcp stats -i interval [-c count] [-a | adapter]
    fcp stats -i 1
  target expansion adapters:
    fcp config <adapter> [down|up]
    fcp config 4a down
  target adapter speed:
    fcp config <adapter> speed [auto|1|2|4|8]
    fcp config 4a speed 8
  set WWPN #:
    fcp portname set [-f] adapter wwpn
    fcp portname set -f 1b 50:0a:09:85:87:09:68:ad
  swap WWPN #:
    fcp portname swap [-f] adapter1 adapter2
    fcp portname swap -f 1a 1b
  nodename:
    # display nodename
    fcp nodename
  change WWNN:
    fcp nodename [-f] nodename
    fcp nodename 50:0a:09:80:82:02:8d:ff
    Note: The WWNN of a storage system is generated by a serial number in its NVRAM, but it is stored on disk. If you ever replace a storage system chassis and reuse it in the same Fibre Channel SAN, it is possible, although extremely rare, that the WWNN of the replaced storage system is duplicated. In this unlikely event, you can change the WWNN of the storage system.
  WWPN Aliases - display:
    fcp wwpn-alias show
    fcp wwpn-alias show -a my_alias_1
    fcp wwpn-alias show -w 10:00:00:00:c9:30:80:2
  WWPN Aliases - create:
    fcp wwpn-alias set [-f] alias wwpn
    fcp wwpn-alias set my_alias_1 10:00:00:00:c9:30:80:2f
  WWPN Aliases - remove:
    fcp wwpn-alias remove [-a alias ... | -w wwpn]
    fcp wwpn-alias remove -a my_alias_1
    fcp wwpn-alias remove -w 10:00:00:00:c9:30:80:2


Snapshots and Cloning


Data ONTAP provides a variety of methods for protecting data in an iSCSI or Fibre Channel SAN. These methods are based on Snapshot technology in Data ONTAP, which enables you to maintain multiple read-only versions of LUNs online per volume. Snapshot copies are a standard feature of Data ONTAP. A Snapshot copy is a frozen, read-only image of the entire Data ONTAP file system, or WAFL (Write Anywhere File Layout) volume, that reflects the state of the LUN or the file system at the time the Snapshot copy is created. The other data protection methods listed below rely on Snapshot copies or create, use, and destroy Snapshot copies as required.

Snapshot copy - Make point-in-time copies of a volume.

SnapRestore - Restore a LUN or file system to an earlier preserved state in less than a minute without rebooting the storage system, regardless of the size of the LUN or volume being restored. Recover from a corrupted database or a damaged application, a file system, a LUN, or a volume by using an existing Snapshot copy.

SnapMirror - Replicate data or asynchronously mirror data from one storage system to another over local or wide area networks (LANs or WANs). Transfer Snapshot copies taken at specific points in time to other storage systems or near-line systems; these replication targets can be in the same data center connected through a LAN or distributed across the globe connected through metropolitan area networks (MANs) or WANs. Because SnapMirror operates at the changed block level instead of transferring entire files or file systems, it generally reduces bandwidth and transfer time requirements for replication.

SnapVault - Back up data by using Snapshot copies on the storage system and transferring them on a scheduled basis to a destination storage system. Store these Snapshot copies on the destination storage system for weeks or months, allowing recovery operations to occur nearly instantaneously from the destination storage system to the original storage system.

SnapDrive for Windows or UNIX - Manage storage system Snapshot copies directly from a Windows or UNIX host. Manage storage (LUNs) directly from a host. Configure access to storage directly from a host. SnapDrive for Windows supports Windows 2000 Server and Windows Server 2003; SnapDrive for UNIX supports a number of UNIX environments.

Native tape backup and recovery - Store and retrieve data on tape.

NDMP (Network Data Management Protocol) - Control native backup and recovery facilities in storage systems and other file servers. Backup application vendors provide a common interface between backup applications and file servers.

A LUN clone is a point-in-time, writable copy of a LUN in a Snapshot copy. Changes made to the parent LUN after the clone is created are not reflected in the Snapshot copy. A LUN clone shares space with the LUN in the backing Snapshot copy. When you clone a LUN and new data is written to the LUN, the LUN clone still depends on data in the backing Snapshot copy; the clone does not require additional disk space until changes are made to it. You cannot delete the backing Snapshot copy until you split the clone from it. When you split the clone from the backing Snapshot copy, the data is copied from the Snapshot copy to the clone, thereby removing any dependence on the Snapshot copy. After the splitting operation, both the backing Snapshot copy and the clone occupy their own space. Use LUN clones to create multiple read/write copies of a LUN. You might want to do this for the following reasons:

You need to create a temporary copy of a LUN for testing purposes.
You need to make a copy of your data available to additional users without giving them access to the production data.
You want to create a clone of a database for manipulation and projection operations, while preserving the original data in unaltered form.
You want to access a specific subset of a LUN's data (a specific logical volume or file system in a volume group, or a specific file or set of files in a file system) and copy it to the original LUN, without restoring the rest of the data in the original LUN. This works on operating systems that support mounting a LUN and a clone of the LUN at the same time; SnapDrive for UNIX allows this with the snap connect command.

Display clones:
  snap list

Create clone:
  # Create a LUN by entering the following command
  lun create -s 10g -t solaris /vol/tradvol1/lun1
  # Create a Snapshot copy of the volume containing the LUN to be cloned by entering the following command
  snap create tradvol1 tradvol1_snapshot_08122010
  # Create the LUN clone by entering the following command
  lun clone create /vol/tradvol1/clone_lun1 -b /vol/tradvol1/lun1 tradvol1_snapshot_08122010

Destroy clone:
  # display the snapshot copies
  lun snap usage tradvol1 tradvol1_snapshot_08122010
  # Delete all the LUNs in the active file system that are displayed by the lun snap usage command
  lun destroy /vol/tradvol1/clone_lun1
  # Delete all the Snapshot copies that are displayed by the lun snap usage command, in the order they appear
  snap delete tradvol1 tradvol1_snapshot_08122010

snapshot_clone_dependency:
  vol options <vol_name> snapshot_clone_dependency on
  vol options <vol_name> snapshot_clone_dependency off
  Note: Prior to Data ONTAP 7.3, the system automatically locked all backing Snapshot copies when Snapshot copies of LUN clones were taken. Starting with Data ONTAP 7.3, you can enable the system to lock only the backing Snapshot copies for the active LUN clone. If you do this, when you delete the active LUN clone, you can delete the base Snapshot copy without having to first delete all of the more recent backing Snapshot copies. This behavior is not enabled by default; use the snapshot_clone_dependency volume option to enable it. If this option is set to off, you will still be required to delete all subsequent Snapshot copies before deleting the base Snapshot copy. If you enable this option, you are not required to rediscover the LUNs. If you perform a subsequent volume snap restore operation, the system restores whichever value was present at the time the Snapshot copy was taken.

Restoring snapshot:
  snap restore -s payroll_lun_backup.2 -t vol /vol/payroll_lun

Splitting the clone:
  lun clone split start lun_path
  lun clone split status lun_path

Stop clone splitting:
  lun clone split stop lun_path

Delete snapshot copy:
  snap delete vol-name snapshot-name
  snap delete -a -f <vol-name>

Disk space usage:
  lun snap usage tradvol1 mysnap

Use Volume copy to copy LUNs:
  vol copy start -S source:source_volume dest:dest_volume
  vol copy start -S /vol/vol0 filerB:/vol/vol1

Disk Space Management


There are a number of commands that let you see and manage disk space usage.

Disk space usage for aggregates:
  aggr show_space
Disk space usage for volumes or aggregates:
  df
The estimated rate of change of data between Snapshot copies in a volume:
  snap delta
  snap delta /vol/tradvol1 tradvol1_snapshot_08122010
The estimated amount of space freed if you delete the specified Snapshot copies:
  snap reclaimable
  snap reclaimable /vol/tradvol1 tradvol1_snapshot_08122010


Chapter 3: NetApp System Administration


In this section I will be talking about NetApp system administration; I will cover disk administration in another topic. Basically the NetApp filer is a Unix server highly tuned to deliver large amounts of storage; the hardware is very similar to the computer that you have at home but with extra redundancy features. As you know, the operating system is called Data ONTAP and is based on FreeBSD. You don't need to know a great deal about Unix in order to manage and set up a NetApp filer, as it comes with two excellent GUI tools, one of which is web based, but it would be worthwhile getting to know Unix for more difficult problems, as you will then need to use the commandline. Generally the NetApp filer will be set up when you receive it; it should have the latest Data ONTAP o/s installed and be ready to go, so I am not going to go into much detail regarding the operating system itself.

Accessing NetApp
Once you have your NetApp filer powered up and on the network, you can access it by any of the following common methods

telnet/SSH

Web Access GUI (http)


System Manager (GUI)

I will only be using telnet (commandline) and the System Manager in my examples. There are a number of common session-related parameters that you may wish to tweak; there are many more than those below, so take a peek at the documentation.

Help:
  ontap1> options ?

Telnet:
  ontap1> options telnet
  telnet.access legacy
  telnet.distinct.enable on
  telnet.enable off
  ## Enabling telnet access
  ontap1> options telnet.enable on

SSH:
  ontap1> options ssh
  ssh.access *
  ssh.enable on
  ssh.idle.timeout 0
  ssh.passwd_auth.enable on
  ssh.port 22
  ssh.pubkey_auth.enable on
  ssh1.enable off
  ssh2.enable on
  ## change the idle timeout to 5 minutes
  ontap1> options ssh.idle.timeout 300
  ## You can also use the secureadmin command to setup SSH/SSL
  secureadmin [setup|addcert|enable|disable|status]
  ## You can also use the System Manager

HTTP:
  ontap1> options http
  httpd.access legacy
  httpd.admin.access legacy
  httpd.admin.enable on
  httpd.admin.hostsequiv.enable off
  httpd.admin.max_connections 512
  httpd.admin.ssl.enable on
  httpd.admin.top-page.authentication on
  httpd.autoindex.enable off
  httpd.bypass_traverse_checking off
  httpd.enable off
  httpd.log.format common
  httpd.method.trace.enable off
  httpd.rootdir XXX
  httpd.timeout 300
  httpd.timewait.enable off
  ## Enabling HTTP administration access
  ontap1> options httpd.admin.enable on

Session timeout:
  ontap1> options autologout
  autologout.console.enable on
  autologout.console.timeout 300
  autologout.telnet.enable on
  autologout.telnet.timeout 300
  ## Change the timeout values
  ontap1> options autologout.telnet.timeout 300

Security:
  ontap1> options trusted
  trusted.hosts *
  ## Only allow specific hosts to administer the NetApp filer
  ontap1> options trusted.hosts <host1>,<host2>


System Configuration and Administration


NetApp filers have two privilege modes; the advanced privilege allows you to access more advanced and dangerous features:

Administrative (default)
Advanced

To set the privilege:
  priv set [-q] [admin | advanced]
  Note: by default you are in administrative mode; -q (quiet) suppresses warning messages

You can use the normal halt or reboot command to shut down or restart the NetApp filer; if your filer has an RLM or BMC you can also start the filer in different modes.

startup modes

boot_ontap - boots the current Data ONTAP software release stored on the boot device
boot_primary - boots the Data ONTAP release stored on the boot device as the primary kernel
boot_backup - boots the backup Data ONTAP release from the boot device
boot_diags - boots a Data ONTAP diagnostic kernel

Note: there are other options, but NetApp will provide these as and when necessary.

shutdown:
  halt [-t <mins>] [-f]
  -t = shutdown after the specified number of minutes
  -f = used with HA clustering; means that the partner filer does not take over

restart:
  reboot [-t <mins>] [-s] [-r] [-f]
  -t = reboot in the specified number of minutes
  -s = clean reboot but also power cycle the filer (like pushing the off button)
  -r = bypasses the shutdown (not clean) and power cycles the filer
  -f = used with HA clustering; means that the partner filer does not take over

When the filer boots you have a chance to enter the boot menu [Ctrl-C] which gives you a number of options, that allow you change the system password, put the filer into maintenance mode, wipe all disks, etc.


1) Normal Boot.
2) Boot without /etc/rc.
3) Change password.
4) Clean configuration and initialize all disks.
5) Maintenance mode boot.
6) Update flash from backup config.
7) Install new software first.
8) Reboot node.
Selection (1-8)?

Boot Menu

Normal Boot - continue with the normal boot operation
Boot without /etc/rc - boot with only default options and disable some services
Change Password - change the storage system's password
Clean configuration and initialize all disks - cleans all disks and resets the filer to factory default settings
Maintenance mode boot - file system operations are disabled; limited set of commands
Update flash from backup config - restore the configuration information if corrupted on the boot device
Install new software first - use this if the filer does not include support for the storage array
Reboot node - restart the filer

To check what version of Data ONTAP you have, use the version command.

Data ONTAP version:
  version [-b]
  -b = include name and version information for the primary, secondary and diagnostic kernels and the firmware

I am not going to talk much about users, groups and roles, as they work much the same as in the Unix world. The commands and options that you should be aware of are the following; you can perform all of these operations using the useradmin command.

Users - add, modify, delete, list
Groups - add, modify, delete, list
Roles - add, modify, delete, list
Domainuser - add, delete, list, load
Diaguser - lock, unlock, list, load
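A few hedged useradmin examples (the user, group and role names below are made up purely for illustration):

  # create a local admin user in the built-in Administrators group
  useradmin user add bob -g Administrators
  useradmin user list
  # create a role limited to a couple of capabilities and attach it to a new group
  useradmin role add netview -a login-telnet,cli-ifconfig*
  useradmin group add netgroup -r netview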

User password options

security.passwd.firstlogin.enable off
security.passwd.lockout.numtries 4294967295
security.passwd.rootaccess.enable on
security.passwd.rules.enable on
security.passwd.rules.everyone on
security.passwd.rules.history 6
security.passwd.rules.maximum 256
security.passwd.rules.minimum 8
security.passwd.rules.minimum.alphabetic 2
security.passwd.rules.minimum.digit 1
security.passwd.rules.minimum.symbol 0

The System Manager can also help with managing users and groups.

System Manager GUI


Change a user's password

passwd Note: the passwd command will prompt you for the user to change

When you first log in to a filer you are placed into an administrative shell that only allows a limited number of commands to be used (type help to display the commands you can access); you can obtain more commands by using the advanced privilege. On occasion, however, you need a normal Unix shell prompt that gives you access to the standard Unix commands. This is called the systemshell and can be accessed via the diag user.

## First obtain the advanced privileges
priv set advanced
## Then unlock and reset the diag user's password
useradmin diaguser unlock
useradmin diaguser password
## Now you should be able to access the systemshell and use all the standard Unix commands
systemshell
login: diag
password: ********

Access the systemshell

There are a number of commands to get system configuration information and statistics.

System Configuration
  General information:
    sysconfig
    sysconfig -v
    sysconfig -a (detailed)
    sysconfig -A (all reports)
  Configuration errors:
    sysconfig -c
  Display disk devices:
    sysconfig -d
  Display RAID group information:
    sysconfig -V
  Display aggregates and plexes:
    sysconfig -r
  Display tape devices:
    sysconfig -t
  Display tape libraries:
    sysconfig -m

Environment Information
  General information:
    environment status
  Disk enclosures (shelves):
    environment shelf [adapter]
    environment shelf_power_status
  Chassis:
    environment chassis all
    environment chassis list-sensors
    environment chassis Fans
    environment chassis CPU_Fans
    environment chassis Power
    environment chassis Temperature
    environment chassis [PS1|PS2]

Fibre Channel Information
  Fibre Channel stats:
    fcstat link_stats
    fcstat fcal_stats
    fcstat device_map

SAS Adapter and Expander Information
  Shelf information:
    sasstat shelf
  Expander information:
    sasstat expander
    sasstat expander_map
    sasstat expander_phy_state
  Disk information:
    sasstat dev_stats
  Adapter information:
    sasstat adapter_state

Statistical Information
  All: stats show
  System: stats show system
  Processor: stats show processor
  Disk: stats show disk
  Volume: stats show volume
  LUN: stats show lun
  Aggregate: stats show aggregate
  FC: stats show fcp
  iSCSI: stats show iscsi
  CIFS: stats show cifs
  Network: stats show ifnet

Licensing
Extra NetApp features can be enabled by licensing the product; you can do this either via the commandline or the System Manager GUI.

licenses (commandline):
  ## display licenses
  license
  ## Adding a license
  license add <code1> <code2>
  ## Disabling a license
  license delete <service>

licenses (GUI)

NTP setup
One very important configuration is the NTP service; this must be set up, as accurate time is important for snapshots.

NTP setup (commandline):
  ontap1> options timed
  timed.enable off
  timed.log off
  timed.max_skew 30m
  timed.min_skew 0
  timed.proto ntp
  timed.sched hourly
  timed.servers
  timed.window 0s
  ontap1> options timed.servers <ntp server>
  ontap1> options timed.enable on

28

NTP setup (GUI)


Chapter 4: NetApp Architecture


The NetApp architecture consists of hardware, the Data ONTAP operating system and the network. I have already shown you a diagram of a common NetApp setup, but now I will go into more detail.

Hardware
NetApp has a number of filers that will fit any company and budget; the filer itself may have the following:

can be an Intel or AMD server (up to 8 dual-core processors)
can have dual power supplies
can handle up to 64GB RAM and 4GB NVRAM (non-volatile RAM)
can manage up to 1176GB storage
has a maximum limit of 1176 disk drives
can connect the disk shelves via an FC loop for redundancy
can support FCP, SATA and SAS disk drives
has a maximum of 5 PCI and 3 PCI-Express slots
has 4/8/10GbE support
64-bit support

The filer can be attached to a number of disk enclosures (shelves) which expand the storage allocation; these disk enclosures are attached via FC. As mentioned above, the disk enclosures can support the following disks:

FCP - These are fibre channel disks; they are very fast but expensive
SAS - Serial Attached SCSI disks, again very fast but expensive, and due to replace the FC disks
SATA - Serial ATA disks are slow but cheaper, ideal for QA and DEV environments

One note to remember is that the filer that connects to the top module of a shelf controls (owns) the disks in that shelf under normal circumstances (i.e. non-failover). The filers can make use of VIFs (Virtual Interfaces), which come in two flavors:

Single-mode VIF
1 active link, the others are passive standby links
Failover when the link is down
No configuration required on the switches

Multi-mode VIF
Multiple links are active at the same time
Load balancing and failover
Load balancing based on IP address, MAC address or round robin
Requires support and configuration on the switches
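As a minimal sketch of creating each flavor (the VIF names and the interface names e0a/e0b/e0c/e0d are placeholders, and the exact syntax may vary between Data ONTAP releases):

## single-mode VIF: one active link, the other on standby
vif create single vif0 e0a e0b
## multi-mode VIF: both links active, load balanced on IP address
vif create multi vif1 -b ip e0c e0d
## assign an address to the VIF as you would to a normal interface
ifconfig vif0 192.168.1.10 netmask 255.255.255.0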

Software
I have already touched on the operating system, Data ONTAP; the latest version is currently version 8, which fully supports grid technology (GX in version 7). It is fully compatible with Intel and AMD architectures and supports 64-bit, and it borrows ideas from FreeBSD. All additional NetApp products are activated via licenses; some require the filer to be rebooted, so check the documentation. Management of the filer can be accessed via any of the following:

Telnet or SSH
FilerView (HTTP GUI)
System Manager (client software GUI)
Console cable
SNMP and NDMP

Storage Terminology
When talking about storage you will probably come across two solutions:

NAS (Network Attached Storage) - storage speaks to a file, so the protocol is a file-based one. Data is made to be shared. Examples are:
NFS (Unix)
CIFS or SMB (Windows)
FTP, HTTP, WebDAV, DAFS

SAN (Storage Area Network) - storage speaks to a LUN (Logical Unit Number) and accesses it via data blocks; sharing is difficult. Examples are:
SCSI
iSCSI
FCAL/FCP

There are a number of terms associated with the above solutions; I have already discussed some of them in my EMC section.

share/export (NAS) - CIFS servers make data available via shares; a Unix server makes data available via exports
Drive mapping/mounting (NAS) - CIFS clients typically map a network drive to access data stored on a storage server; Unix clients typically mount the remote resource
LUN (SAN) - Logical Unit Number, basically a disk presented by a SAN to a host; when attached it looks like a locally attached disk
Target (SAN) - The machine that offers a disk (LUN) to another machine, in other words the SAN
Initiator (SAN) - The machine that expects to see the disk (LUN), i.e. the host OS; appropriate initiator software will be required
Fabric (SAN) - One or more fibre switches with targets and initiators connected to them are referred to as a fabric. Cisco, McData and Brocade are well known fabric switch makers. See my EMC architecture section for more details
HBA (SAN) - Host Bus Adapter, the hardware that connects the server or SAN to the fabric switches. There are also iSCSI HBAs
Multipathing (MPIO) (SAN) - The use of redundant storage network components responsible for the transfer of data between the server and the storage (cabling, adapters, switches and software)
Zoning (SAN) - The partitioning of a fabric into smaller subsets to restrict interference, add security and simplify management; it's like VLANs in networking. See my EMC zoning section for more details

Below is a typical SAN setup using NetApp hardware


NetApp Terminology
Now that we know how a NetApp is configured from a hardware point of view, we need to know how to present the storage to the outside world. First, some NetApp terminology explained:

Disk - This is the physical disk itself; normally the disk will reside in a disk enclosure. The disk will have a pathname like 2a.17:
2a = SCSI adapter
17 = disk SCSI ID
Any disks that are classed as spare will be used in any group to replace failed disks. Disks are assigned to a specific pool; parity disks do not contain any data.

Raid Group (Pool) - Normally there are three pools:
0 = normal pool
1 = mirror pool (if SyncMirror is enabled)
spare = spare disks that can be used for growth and replacement of failed disks

Aggregate - A collection of disks that can have either of the RAID levels below; the aggregate can contain up to 1176 disks, and you can have many aggregates, each with a different RAID level. An aggregate can contain many volumes (see Volume below):
RAID-4
RAID-DP (RAID-6) - better fault tolerance
One point to remember is that an aggregate can grow but cannot shrink. The disadvantage of RAID 4 is that a bottleneck can occur on the dedicated parity disk, which is normally the first disk to fail as it is used the most; however the NVRAM helps out by only writing to disks every 10 seconds or when the NVRAM is 50% full.

Plex - When an aggregate is mirrored it will have two plexes; when thinking of plexes think of mirroring. A mirrored aggregate can be split into two plexes.

Volume - This is more or less like a traditional volume in other LVMs; it is a logical space within an aggregate that will contain the actual data. It can be grown (FlexVol) or shrunk as needed.

LUN - The Logical Unit Number is what is presented to the host to allow access to the volume.

WAFL - Write Anywhere File Layout is the filesystem used; it uses inodes just like Unix. Disks are not formatted, they are zeroed. By default WAFL reserves 10% of the disk space (unreclaimable).

Snapshot - A frozen read-only image of a volume or aggregate that reflects the state of the file system at the time the snapshot was created. Snapshot features:
Up to 255 snapshots per volume can be scheduled
Maximum space occupied can be specified (default 20%)
File permissions are handled

Snapshots in the NetApp world are very fast: basically a snapshot records all the blocks that are associated with the files, and this data is never actually changed. If a block is changed a new block is created, and the snapshot still points to the old block. NetApp has two products called SnapDrive and SnapManager that deal with consistency problems where data has not actually been written to the disk but cached in memory buffers; you might want to take a look at these products.
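For reference, a short sketch of the basic snapshot commands that relate to the features above (the volume name vol1 and the schedule values are only examples):

## view or change the snapshot reserve (20% is the default mentioned above)
snap reserve vol1
snap reserve vol1 20
## schedule snapshots: keep 0 weekly, 2 daily and 6 hourly copies taken at 8,12,16,20
snap sched vol1 0 2 6@8,12,16,20
## create and list snapshots manually
snap create vol1 mysnap
snap list vol1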


There are three additional replication products that you can use:

SyncMirror (used primarily for data redundancy)
real-time replication of data
maximum distance of up to 35km
Fibre Channel or DWDM protocol
Synchronous

SnapMirror (used primarily for disaster recovery)
long distance DR
data consolidation
no limit on distance, uses IP protocol (WAN/LAN)
Async mirror (> 1 minute)

SnapVault (used primarily for backup/restore)
disk-to-disk backup and restore
HSM
no limit on distance
IP protocol (WAN/LAN)
Async mirror (> 1 hour)


Chapter 5: Block Access Management


In my NetApp introduction section I spoke about two ways of accessing the NetApp filer: either file-based access or block-based access.

File-Based Protocols: NFS, CIFS, FTP, TFTP, HTTP
Block-Based Protocols: Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), Internet SCSI (iSCSI)

In this section I will cover the following common block-based protocols; if any others are not covered then please check out the documentation:

iSCSI FC

I have another web page that covers File Access: NFS, CIFS, FTP, HTTP

Block Based Access


In iSCSI and FC networks, storage systems are targets that have storage target devices, which are referred to as LUNs, or logical units. Using the Data ONTAP operating system, you configure the storage by creating LUNs. The LUNs are accessed by hosts, which are initiators in the storage network. To connect to iSCSI networks, hosts can use standard Ethernet network adapters (NICs), TCP offload engine (TOE) cards with software initiators, or dedicated iSCSI HBAs. To connect to FC networks, hosts require Fibre Channel host bus adapters (HBAs). Data ONTAP 7.2 added support for the Asymmetric Logical Unit Access (ALUA) features of SCSI, also known as SCSI Target Port Groups or Target Port Group Support. ALUA defines a standard set of SCSI commands for discovering and managing multiple paths to LUNs on Fibre Channel and iSCSI SANs. ALUA allows the initiator to query the target about path attributes, such as primary path and secondary path. It also allows the target to communicate events back to the initiator. As a result, multipathing software can be developed to support any array. Proprietary SCSI commands are no longer required as long as the host supports the ALUA standard. For iSCSI SANs, ALUA is supported only with Solaris hosts running the iSCSI Solaris Host Utilities 3.0 for Native OS.
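ALUA is enabled per igroup; the same command appears again in the igroup command summary later on (the igroup name below is a placeholder):

## enable ALUA on an existing igroup and verify the setting
igroup set linux_host_group1 alua yes
igroup show -v linux_host_group1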

iSCSI Introduction
The iSCSI protocol is a licensed service on the storage system that enables you to transfer block data to hosts using the SCSI protocol over TCP/IP. The iSCSI protocol standard is defined by RFC 3720. In an iSCSI network, storage systems are targets that have storage target devices, which are referred to as LUNs (logical units). A host with an iSCSI host bus adapter (HBA), or running iSCSI initiator software, uses the iSCSI protocol to access LUNs on a storage system. The iSCSI protocol is implemented over the storage system's standard gigabit Ethernet interfaces using a

software driver. The connection between the initiator and target uses a standard TCP/IP network. No special network configuration is needed to support iSCSI traffic. The network can be a dedicated TCP/IP network, or it can be your regular public network. The storage system listens for iSCSI connections on TCP port 3260. In an iSCSI network, there are two types of nodes: targets and initiators.
Targets - Storage Systems (NetApp, EMC)
Initiators - Hosts (Unix, Linux, Windows)

Storage systems and hosts can be direct-attached or connected through Ethernet switches. Both direct-attached and switched configurations use Ethernet cable and a TCP/IP network for connectivity. You can of course use existing networks but if possible try to make this a dedicated network for the storage system, as it will increase performance. Every iSCSI node must have a node name. The two formats, or type designators, for iSCSI node names are iqn and eui. The storage system always uses the iqn-type designator. The initiator can use either the iqn-type or eui-type designator. The iqn-type designator is a logical name that is not linked to an IP address. It is based on the following components:

iqn
The iqn-type designator is based on the following components:
The type designator itself, iqn, followed by a period (.)
The date when the naming authority acquired the domain name, followed by a period
The name of the naming authority, optionally followed by a colon (:)
A unique device name

The format is: iqn.yyyymm.backward-naming-authority:unique-device-name
Note: yyyymm = month and year in which the naming authority acquired the domain name; backward-naming-authority = the reverse domain name of the entity responsible for naming this device; unique-device-name = a free-format unique name for this device assigned by the naming authority.

eui
The eui-type designator is based on the type designator, eui, followed by a period, followed by sixteen hexadecimal digits. The format is: eui.0123456789abcdef

Storage system node name
Each storage system has a default node name based on a reverse domain name and the serial number of the storage system's non-volatile RAM (NVRAM) card. The node name is displayed in the following format: iqn.1992-08.com.netapp:sn.serial-number
The following example shows the default node name for a storage system with the serial number 12345678: iqn.1992-08.com.netapp:sn.12345678

The storage system checks the format of the initiator node name at session login time. If the initiator node name does not comply with storage system node name requirements, the storage system rejects the session.

A target portal group is a set of network portals within an iSCSI node over which an iSCSI session is conducted. In a target, a network portal is identified by its IP address and listening TCP port. For storage systems, each network interface can have one or more IP addresses and therefore one or more network portals. A network interface can be an Ethernet port, virtual local area network (VLAN), or virtual interface (vif). The assignment of target portals to portal groups is important for two reasons:

The iSCSI protocol allows only one session between a specific iSCSI initiator port and a single portal group on the target. All connections within an iSCSI session must use target portals that belong to the same portal group.

The Internet Storage Name Service (iSNS) is a protocol that enables automated discovery and management of iSCSI devices on a TCP/IP storage network. An iSNS server maintains information about active iSCSI devices on the network, including their IP addresses, iSCSI node names, and portal groups. You obtain an iSNS server from a third-party vendor. If you have an iSNS server on your network, and it is configured and enabled for use by both the initiator and the storage system, the storage system automatically registers its IP address, node name, and portal groups with the iSNS server when the iSNS service is started. The iSCSI initiator can query the iSNS server to discover the storage system as a target device. If you do not have an iSNS server on your network, you must manually configure each target to be visible to the host.

The Challenge Handshake Authentication Protocol (CHAP) enables authenticated communication between iSCSI initiators and targets. When you use CHAP authentication, you define CHAP user names and passwords on both the initiator and the storage system. During the initial stage of an iSCSI session, the initiator sends a login request to the storage system to begin the session. The login request includes the initiator's CHAP user name and CHAP algorithm. The storage system responds with a CHAP challenge. The initiator provides a CHAP response. The storage system verifies the response and authenticates the initiator. The CHAP password is used to compute the response.
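As an illustration of one-way CHAP (the user name and password values are placeholders, and you should check the documentation for the exact syntax on your release), the iscsi security command configures this on the storage system:

## show the current iSCSI security settings
iscsi security show
## require CHAP for a specific initiator
iscsi security add -i iqn.1991-05.com.microsoft:xblade -s CHAP -n <inname> -p <inpassword>
## or set a default policy for initiators without a specific entry
iscsi security default -s CHAP -n <inname> -p <inpassword>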

During an iSCSI session, the initiator and the target communicate over their standard Ethernet interfaces, unless the host has an iSCSI HBA. The storage system appears as a single iSCSI target node with one iSCSI node name. For storage systems with a MultiStore license enabled, each vFiler unit is a target with a different node name. On the storage system, the interface can be an Ethernet port, virtual network interface (vif), or a virtual LAN (VLAN) interface. Each interface on the target belongs to its own portal group by default. This enables an initiator port to conduct simultaneous iSCSI sessions on the target, with one session for each portal group. The storage system supports up to 1,024 simultaneous sessions, depending on its memory capacity. To determine whether your host's initiator software or HBA can have multiple sessions with one storage system, see your host OS or initiator documentation. You can change the assignment of target portals to portal groups as needed to support multiconnection sessions, multiple sessions, and multipath I/O. Each session has an Initiator Session ID (ISID), a number that is determined by the initiator.
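To see how this looks on a live system, the following display commands (all covered again in the command summary later) show the target node name, the portals and any active sessions:

iscsi nodename
iscsi portal show
iscsi session show -t
iscsi status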

FC Introduction
FC is a licensed service on the storage system that enables you to export LUNs and transfer block data to hosts using the SCSI protocol over a Fibre Channel fabric. In an FC network, nodes include targets, initiators, and switches. Nodes register with the Fabric Name Server when they are connected to an FC switch.
Targets - Storage Systems (NetApp, EMC)
Initiators - Hosts (Unix, Linux, Windows)

Storage systems and hosts have adapters so they can be directly connected to each other or to FC switches with optical cable. For switch or storage system management, they might be connected to each other or to TCP/IP switches with Ethernet cable. When a node is connected to the FC SAN, it registers each of its ports with the switch's Fabric Name Server service, using a unique identifier. Each FC node is identified by a worldwide node name (WWNN) and a worldwide port name (WWPN). WWPNs identify each port on an adapter. WWPNs are used for the following purposes:

Creating an initiator group - The WWPNs of the host's HBAs are used to create an initiator group (igroup). An igroup is used to control host access to specific LUNs. You create an igroup by specifying a collection of WWPNs of initiators in an FC network. When you map a LUN on a storage system to an igroup, you grant all the initiators in that group access to that LUN. If a host's WWPN is not in an igroup that is mapped to a LUN, that host does not have access to the LUN. This means that the LUNs do not appear as disks on that host. You can also create port sets to make a LUN visible only on specific target ports. A port set consists of a group of FC target ports. You bind a port set to an igroup. Any host in the igroup can access the LUNs only by connecting to the target ports in the port set.
Uniquely identifying a storage system's HBA target ports - The storage system's WWPNs uniquely identify each target port on the system.


When the FCP service is first initialized, it assigns a WWNN to a storage system based on the serial number of its NVRAM adapter. The WWNN is stored on disk. Each target port on the HBAs installed in the storage system has a unique WWPN. Both the WWNN and the WWPN are 64-bit addresses represented in the following format: nn:nn:nn:nn:nn:nn:nn:nn, where n represents a hexadecimal value. The storage system also has a unique system serial number that you can view by using the sysconfig command. The system serial number is a unique seven-digit identifier that is assigned when the storage system is manufactured. You cannot modify this serial number. Some multipathing software products use the system serial number together with the LUN serial number to identify a LUN. You use the fcp show initiator command to see all of the WWPNs, and any associated aliases, of the FC initiators that have logged on to the storage system. Data ONTAP displays the WWPN as Portname. To know which WWPNs are associated with a specific host, see the FC Host Utilities documentation for your host. These documents describe commands supplied by the Host Utilities or the vendor of the initiator, or methods that show the mapping between the host and its WWPN. For example, for Windows hosts, use the lputilnt, HBAnywhere, or SANsurfer applications, and for UNIX hosts, use the sanlun command.
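A few display commands tie this together (the alias name is a placeholder and the WWPN is the example used elsewhere on this page); they are covered again in the FCP command summary later:

## the storage system's WWNN and the state of its target adapters
fcp nodename
fcp show adapter -v
## give a host initiator WWPN a friendly alias
fcp wwpn-alias set my_alias_1 10:00:00:00:c9:2b:7c:8f
fcp wwpn-alias show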

Getting the Storage Ready


I have discussed in detail how to create the following in my disk administration section:

Aggregates
Plexes
FlexVol and Traditional Volumes
QTrees
Files
LUNs

Here's a quick recap

A plex is a collection of one or more RAID groups that together provide the storage for one or more Write Anywhere File Layout (WAFL) file system volumes. Data ONTAP uses plexes as the unit of RAID-level mirroring when the SyncMirror software is enabled.

An aggregate is a collection of one or two plexes, depending on whether you want to take advantage of RAID-level mirroring. If the aggregate is unmirrored, it contains a single plex. Aggregates provide the underlying physical storage for traditional and FlexVol volumes.

A traditional volume is directly tied to the underlying aggregate and its properties. When you create a traditional volume, Data ONTAP creates the underlying aggregate based on the properties you assign with the vol create command.


A FlexVol volume is a volume that is loosely coupled to its containing aggregate. A FlexVol volume can share its containing aggregate with other FlexVol volumes. Thus, a single aggregate can be the shared source of all the storage used by all the FlexVol volumes contained by that aggregate.
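Putting the recap together, a minimal provisioning sketch might look like the following (the aggregate, volume, qtree and LUN names and sizes are hypothetical):

## aggregate -> flexible volume -> qtree -> LUN
aggr create aggr1 8
vol create vol1 aggr1 100g
qtree create /vol/vol1/oradata
lun create -s 50g -t linux /vol/vol1/oradata/lun0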

Once you set up the underlying aggregate, you can create, clone, or resize FlexVol volumes without regard to the underlying physical storage. You do not have to manipulate the aggregate frequently. You use either traditional or FlexVol volumes to organize and manage system and user data. A volume can hold qtrees and LUNs. A qtree is a subdirectory of the root directory of a volume. You can use qtrees to subdivide a volume in order to group LUNs. You create LUNs in the root of a volume (traditional or flexible) or in the root of a qtree, with the exception of the root volume. Do not create LUNs in the root volume because it is used by Data ONTAP for system administration. The default root volume is /vol/vol0. Autodelete is a volume-level option that allows you to define a policy for automatically deleting Snapshot copies based on a definable threshold. Using autodelete is recommended in most SAN configurations. You can set that threshold, or trigger, to automatically delete Snapshot copies when:

The volume is nearly full
The snap reserve space is nearly full
The overwrite reserved space is full

Two other things that you need to be aware of are Space Reservation and Fractional Reserve.

Space Reservation - When space reservation is enabled for one or more LUNs, Data ONTAP reserves enough space in the volume (traditional or FlexVol) so that writes to those LUNs do not fail because of a lack of disk space.

Fractional Reserve - Fractional reserve is a volume option that enables you to determine how much space Data ONTAP reserves for Snapshot copy overwrites for LUNs, as well as for space-reserved files, when all other space in the volume is used.

When provisioning storage in a SAN environment, there are several best practices to consider. Selecting and following the best practice that is most appropriate for you is critical to ensuring your systems run smoothly. There are generally two ways to provision storage in a SAN environment:

Using the autodelete feature
Using fractional reserve


In Data ONTAP, fractional reserve is set to 100 percent and autodelete is disabled by default. However, in a SAN environment, it usually makes more sense to use autodelete (and sometimes autosize). When using fractional reserve, you need to reserve enough space for the data inside the LUN, fractional reserve, and snapshot data, or: X + X + Delta. For example, you might need to reserve 50 GB for the LUN, 50 GB when fractional reserve is set to 100%, and 50 GB for snapshot data, or a volume of 150 GB. If fractional reserve is set to a percentage other than 100%, then the calculation becomes more complex. In contrast, when using autodelete, you need only calculate the amount of space required for the LUN and snapshot data, or X + Delta. Since you can configure the autodelete setting to automatically delete older snapshots when space is required for data, you need not worry about running out of space for data. For example, if you have a 100 GB volume, you might allocate 50 GB for a LUN, and the remaining 50 GB is used for snapshot data. Or in that same 100 GB volume, you might reserve 30 GB for the LUN, and 70 GB is then allocated for snapshots. In both cases, you can configure snapshots to be automatically deleted to free up space for data, so fractional reserve is unnecessary.
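A hedged sketch of the autodelete approach described above (the volume name and thresholds are examples only; verify the option names against your Data ONTAP release):

## turn off fractional reserve and let snapshots be deleted automatically when the volume is nearly full
vol options vol1 fractional_reserve 0
snap autodelete vol1 trigger volume
snap autodelete vol1 on
## optionally let the volume grow before snapshots are deleted
vol autosize vol1 -m 120g -i 5g on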

LUN's, iGroups, LUN maps


When you create a LUN there are a number of items you need to know

Path name
Name
Multiprotocol type
Size
Description
Identification number
Space reservation setting

The path name of a LUN must be at the root level of the qtree or volume in which the LUN is located. Do not create LUNs in the root volume (the default root volume is /vol/vol0); an example of a valid path is /vol/database/lun1. The name of the LUN is case-sensitive and can contain 1 to 256 characters. You cannot use spaces. LUN names must use only specific letters and characters. LUN names can contain only the letters A through Z, a through z, numbers 0 through 9, hyphen (-), underscore (_), left brace ({), right brace (}), and period (.). The LUN Multiprotocol Type, or operating system type, specifies the OS of the host accessing the LUN. It also determines the layout of data on the LUN, the geometry used to access that data, and the minimum and maximum size of the LUN. The LUN Multiprotocol Type values are solaris, solaris_efi, windows, windows_gpt, windows_2008, hpux, aix, linux, netware, xen, hyper_v, and vmware. When you create a LUN, you must specify the LUN type. Once the LUN is created, you cannot modify the LUN host operating system type.


You specify the size of a LUN in bytes or by using specific multiplier suffixes (k, m, g, t). The LUN description is an optional attribute you use to specify additional information about the LUN. A LUN must have a unique identification number (ID) so that the host can identify and access the LUN. You map the LUN ID to an igroup so that all the hosts in that igroup can access the LUN. If you do not specify a LUN ID, Data ONTAP automatically assigns one. When you create a LUN by using the lun setup command or FilerView, you specify whether you want to enable space reservations. When you create a LUN using the lun create command, space reservation is automatically turned on. Initiator groups (igroups) are tables of FCP host WWPNs or iSCSI host nodenames. You define igroups and map them to LUNs to control which initiators have access to LUNs. Typically, you want all of the host's HBAs or software initiators to have access to a LUN. If you are using multipathing software or have clustered hosts, each HBA or software initiator of each clustered host needs redundant paths to the same LUN. You can create igroups that specify which initiators have access to the LUNs either before or after you create LUNs, but you must create igroups before you can map a LUN to an igroup. Initiator groups can have multiple initiators, and multiple igroups can have the same initiator. However, you cannot map a LUN to multiple igroups that have the same initiator.

Example igroups (Host with HBA WWPNs | igroup | WWPNs added to igroup | LUN mapped to igroup):
Linux1, single-path (one HBA: 10:00:00:00:c9:2b:7c:8f) | linux-group0 | 10:00:00:00:c9:2b:7c:8f | /vol/vol2/lun0
Linux2, multipath (two HBAs: 10:00:00:00:c9:2b:3e:3c, 10:00:00:00:c9:2b:09:3c) | linux-group1 | 10:00:00:00:c9:2b:3e:3c, 10:00:00:00:c9:2b:09:3c | /vol/vol2/lun1
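For the multipath host in the table above, the corresponding commands would look roughly like this (the WWPNs are taken from the table; the igroup name is illustrative):

## create an FC igroup containing both of the host's HBA WWPNs
igroup create -f -t linux linux-group1 10:00:00:00:c9:2b:3e:3c 10:00:00:00:c9:2b:09:3c
## map the LUN to the igroup with LUN ID 0
lun map /vol/vol2/lun1 linux-group1 0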

The igroup name is a case-sensitive name that must satisfy several requirements. Contains 1 to 96 characters. Spaces are not allowed. Can contain the letters A through Z, a through z, numbers 0 through 9, hyphen (-), underscore (_), colon (:), and period (.). Must start with a letter or number. The igroup type can be either -i for iSCSI or -f for FC. The ostype indicates the type of host operating system used by all of the initiators in the igroup. All initiators in an igroup must be of the same ostype. The ostypes of initiators are solaris, windows, hpux, aix, netware, xen, hyper_v, vmware, and linux. You must select an ostype for the igroup.


Finally we get to LUN mapping, which is the process of associating a LUN with an igroup. When you map the LUN to the igroup, you grant the initiators in the igroup access to the LUN. You must map a LUN to an igroup to make the LUN accessible to the host. Data ONTAP maintains a separate LUN map for each igroup to support a large number of hosts and to enforce access control. When mapping you:
Specify the path name of the LUN to be mapped.
Specify the name of the igroup that contains the hosts that will access the LUN.
Assign a number for the LUN ID, or accept the default LUN ID. Typically, the default LUN ID begins with 0 and increments by 1 for each additional LUN as it is created. The host associates the LUN ID with the location and path name of the LUN. The range of valid LUN ID numbers depends on the host.

There are two ways to set up a LUN:

LUN setup command
ontap1> lun setup
Note: "lun setup" will display prompts that lead you through the setup process

commandline
# Create the LUN
lun create -s 100m -t windows /vol/tradvol1/lun1
# Create the igroup, you must obtain the node's identifier (my good old fashioned home PC is: iqn.1991-05.com.microsoft:xblade)
igroup create -i -t windows win_hosts_group1 iqn.1991-05.com.microsoft:xblade
# Map the LUN to the igroup
lun map /vol/tradvol1/lun1 win_hosts_group1 0

The full set of commands for both lun and igroup are below.

LUN configuration

Display
lun show
lun show -m
lun show -v

Initialize/Configure LUNs, mapping
lun setup
Note: follow the prompts to create and configure LUNs

Create
lun create -s 100m -t windows /vol/tradvol1/lun1

Destroy
lun destroy [-f] /vol/tradvol1/lun1
Note: the "-f" will force the destroy

Resize
lun resize <lun path> <size>
lun resize /vol/tradvol1/lun1 75m

Restart block protocol access
lun online /vol/tradvol1/lun1

Stop block protocol access
lun offline /vol/tradvol1/lun1

Map a LUN to an initiator group
lun map /vol/tradvol1/lun1 win_hosts_group1 0
lun map -f /vol/tradvol1/lun2 linux_host_group1 1
lun show -m
Note: use "-f" to force the mapping

Remove LUN mapping
lun show -m
lun offline /vol/tradvol1/lun1
lun unmap /vol/tradvol1/lun1 win_hosts_group1 0

Displays or zeros read/write statistics for a LUN
lun stats /vol/tradvol1/lun1

Comments
lun comment /vol/tradvol1/lun1 "10GB for payroll records"

Check all lun/igroup/fcp settings for correctness
lun config_check -v

Manage LUN cloning
# Create a Snapshot copy of the volume containing the LUN to be cloned
snap create tradvol1 tradvol1_snapshot_08122010
# Create the LUN clone
lun clone create /vol/tradvol1/clone_lun1 -b /vol/tradvol1/lun1 tradvol1_snapshot_08122010

Show the maximum possible size of a LUN on a given volume or qtree
lun maxsize /vol/tradvol1

Move (rename) a LUN
lun move /vol/tradvol1/lun1 /vol/tradvol1/windows_lun1

Display/change LUN serial number
lun serial -x /vol/tradvol1/lun1

Manage LUN properties
lun set reservation /vol/tradvol1/hpux/lun0

Configure NAS file-sharing properties
lun share <lun_path> { none | read | write | all }

Manage LUN and snapshot interactions

lun snap usage -s <volume> <snapshot>

igroup configuration

display
igroup show
igroup show -v
igroup show iqn.1991-05.com.microsoft:xblade

create (iSCSI)
igroup create -i -t windows win_hosts_group1 iqn.1991-05.com.microsoft:xblade

create (FC)
igroup create -f -t windows win_hosts_group1 10:00:00:00:c9:2b:7c:8f

destroy
igroup destroy win_hosts_group1

add initiators to an igroup
igroup add win_hosts_group1 iqn.1991-05.com.microsoft:laptop

remove initiators from an igroup
igroup remove win_hosts_group1 iqn.1991-05.com.microsoft:laptop

rename
igroup rename win_hosts_group1 win_hosts_group2

set O/S type
igroup set win_hosts_group1 ostype windows

Enabling ALUA
igroup set win_hosts_group1 alua yes
Note: ALUA defines a standard set of SCSI commands for discovering and managing multiple paths to LUNs on Fibre Channel and iSCSI SANs. ALUA enables the initiator to query the target about path attributes, such as primary path and secondary path. It also enables the target to communicate events back to the initiator. As long as the host supports the ALUA standard, multipathing software can be developed to support any array. Proprietary SCSI commands are no longer required.

There are a number of iSCSI commands that you can use. I am not going to discuss iSCSI security (CHAP or RADIUS); I will leave you to look at the documentation on this advanced topic.

display
iscsi initiator show
iscsi session show [-t]
iscsi connection show -v
iscsi security show

status
iscsi status

start
iscsi start

stop
iscsi stop

stats
iscsi stats

nodename
iscsi nodename
# to change the name
iscsi nodename <new name>

interfaces
iscsi interface show
iscsi interface enable e0b
iscsi interface disable e0b

portals
iscsi portal show
Note: Use the iscsi portal show command to display the target IP addresses of the storage system. The storage system's target IP addresses are the addresses of the interfaces used for the iSCSI protocol.

accesslists
iscsi interface accesslist show
Note: you can add or remove interfaces from the list.

We have discussed how to set up a server using iSCSI, but what if the server is using FC to connect to the NetApp? A port set consists of a group of FC target ports. You bind a port set to an igroup to make the LUN available only on a subset of the storage system's target ports. Any host in the igroup can access the LUNs only by connecting to the target ports in the port set. If an igroup is not bound to a port set, the LUNs mapped to the igroup are available on all of the storage system's FC target ports. The igroup controls which initiators LUNs are exported to; the port set limits the target ports on which those initiators have access. You use port sets for LUNs that are accessed by FC hosts only. You cannot use port sets for LUNs accessed by iSCSI hosts. All ports on both systems in the HA pair are visible to the hosts. You use port sets to fine-tune which ports are available to specific hosts and limit the number of paths to the LUNs to comply with the limitations of your multipathing software. When using port sets, make sure your port set definitions and igroup bindings align with the cabling and zoning requirements of your configuration.

Port Sets

display
portset show
portset show portset1
igroup show linux-igroup1

create
portset create -f portset1 SystemA:4b

destroy
igroup unbind linux-igroup1 portset1
portset destroy portset1

add
portset add portset1 SystemB:4b

remove
portset remove portset1 SystemB:4b

binding
igroup bind linux-igroup1 portset1
igroup unbind linux-igroup1 portset1

FCP service

display
fcp show adapter -v

daemon status
fcp status

start
fcp start

stop
fcp stop


stats
fcp stats -i interval [-c count] [-a | adapter]
fcp stats -i 1

target expansion adapters
fcp config <adapter> [down|up]
fcp config 4a down

target adapter speed
fcp config <adapter> speed [auto|1|2|4|8]
fcp config 4a speed 8

set WWPN #
fcp portname set [-f] adapter wwpn
fcp portname set -f 1b 50:0a:09:85:87:09:68:ad

swap WWPN #
fcp portname swap [-f] adapter1 adapter2
fcp portname swap -f 1a 1b

change WWNN
# display nodename
fcp nodename
fcp nodename [-f] nodename
fcp nodename 50:0a:09:80:82:02:8d:ff
Note: The WWNN of a storage system is generated by a serial number in its NVRAM, but it is stored on disk. If you ever replace a storage system chassis and reuse it in the same Fibre Channel SAN, it is possible, although extremely rare, that the WWNN of the replaced storage system is duplicated. In this unlikely event, you can change the WWNN of the storage system.

WWPN Aliases - display
fcp wwpn-alias show
fcp wwpn-alias show -a my_alias_1
fcp wwpn-alias show -w 10:00:00:00:c9:30:80:2f

WWPN Aliases - create
fcp wwpn-alias set [-f] alias wwpn
fcp wwpn-alias set my_alias_1 10:00:00:00:c9:30:80:2f

WWPN Aliases - remove
fcp wwpn-alias remove [-a alias ... | -w wwpn]
fcp wwpn-alias remove -a my_alias_1
fcp wwpn-alias remove -w 10:00:00:00:c9:30:80:2f

Snapshots and Cloning


Data ONTAP provides a variety of methods for protecting data in an iSCSI or Fibre Channel SAN. These methods are based on Snapshot technology in Data ONTAP, which enables you to maintain multiple read-only versions of LUNs online per volume. Snapshot copies are a standard feature of Data ONTAP. A Snapshot copy is a frozen, read-only image of the entire Data ONTAP file system, or WAFL (Write Anywhere File Layout) volume, that reflects the state of the LUN or the file system at the time the Snapshot copy is created. The other data protection methods listed below rely on Snapshot copies or create, use, and destroy Snapshot copies, as required. The following describes the various methods for protecting your data with Data ONTAP:

Snapshot copy - Make point-in-time copies of a volume.

SnapRestore - Restore a LUN or file system to an earlier preserved state in less than a minute without rebooting the storage system, regardless of the size of the LUN or volume being restored. Recover from a corrupted database or a damaged application, a file system, a LUN, or a volume by using an existing Snapshot copy.

SnapMirror - Replicate data or asynchronously mirror data from one storage system to another over local or wide area networks (LANs or WANs). Transfer Snapshot copies taken at specific points in time to other storage systems or near-line systems. These replication targets can be in the same data center through a LAN or distributed across the globe connected through metropolitan area networks (MANs) or WANs. Because SnapMirror operates at the changed block level instead of transferring entire files or file systems, it generally reduces bandwidth and transfer time requirements for replication.

SnapVault - Back up data by using Snapshot copies on the storage system and transferring them on a scheduled basis to a destination storage system. Store these Snapshot copies on the destination storage system for weeks or months, allowing recovery operations to occur nearly instantaneously from the destination storage system to the original storage system.

SnapDrive for Windows or UNIX - Manage storage system Snapshot copies directly from a Windows or UNIX host. Manage storage (LUNs) directly from a host. Configure access to storage directly from a host. SnapDrive for Windows supports Windows 2000 Server and Windows Server 2003. SnapDrive for UNIX supports a number of UNIX environments.

Native tape backup and recovery - Store and retrieve data on tape.

NDMP (Network Data Management Protocol) - Control native backup and recovery facilities in storage systems and other file servers. Backup application vendors provide a common interface between backup applications and file servers.

A LUN clone is a point-in-time, writable copy of a LUN in a Snapshot copy. Changes made to the parent LUN after the clone is created are not reflected in the Snapshot copy. A LUN clone shares space with the LUN in the backing Snapshot copy. When you clone a LUN, and new data is written to the LUN, the LUN clone still depends on data in the backing Snapshot copy. The clone does not require additional disk space until changes are made to it. You cannot delete the backing Snapshot copy until you split the clone from it. When you split the clone from the backing Snapshot copy, the data is copied from the Snapshot copy to the clone, thereby removing any dependence on the Snapshot copy. After the splitting operation, both the backing Snapshot copy and the clone occupy their own space.

Use LUN clones to create multiple read/write copies of a LUN. You might want to do this for the following reasons:

You need to create a temporary copy of a LUN for testing purposes.
You need to make a copy of your data available to additional users without giving them access to the production data.
You want to create a clone of a database for manipulation and projection operations, while preserving the original data in unaltered form.
You want to access a specific subset of a LUN's data (a specific logical volume or file system in a volume group, or a specific file or set of files in a file system) and copy it to the original LUN, without restoring the rest of the data in the original LUN. This works on operating systems that support mounting a LUN and a clone of the LUN at the same time. SnapDrive for UNIX allows this with the snap connect command.

Display clones
snap list

create clone
# Create a LUN by entering the following command
lun create -s 10g -t solaris /vol/tradvol1/lun1
# Create a Snapshot copy of the volume containing the LUN to be cloned by entering the following command
snap create tradvol1 tradvol1_snapshot_08122010
# Create the LUN clone by entering the following command
lun clone create /vol/tradvol1/clone_lun1 -b /vol/tradvol1/lun1 tradvol1_snapshot_08122010

destroy clone
# display the snapshot copies
lun snap usage tradvol1 tradvol1_snapshot_08122010
# Delete all the LUNs in the active file system that are displayed by the lun snap usage command
lun destroy /vol/tradvol1/clone_lun1
# Delete all the Snapshot copies that are displayed by the lun snap usage command, in the order they appear
snap delete tradvol1 tradvol1_snapshot_08122010

snapshot clone dependency
vol options <vol_name> snapshot_clone_dependency on
vol options <vol_name> snapshot_clone_dependency off
Note: Prior to Data ONTAP 7.3, the system automatically locked all backing Snapshot copies when Snapshot copies of LUN clones were taken. Starting with Data ONTAP 7.3, you can enable the system to only lock backing Snapshot copies for the active LUN clone. If you do this, when you delete the active LUN clone, you can delete the base Snapshot copy without having to first delete all of the more recent backing Snapshot copies. This behavior is not enabled by default; use the snapshot_clone_dependency volume option to enable it. If this option is set to off, you will still be required to delete all subsequent Snapshot copies before deleting the base Snapshot copy. If you enable this option, you are not required to rediscover the LUNs. If you perform a subsequent volume snap restore operation, the system restores whichever value was present at the time the Snapshot copy was taken.

Restoring a snapshot
snap restore -s payroll_lun_backup.2 -t vol /vol/payroll_lun

splitting the clone
lun clone split start lun_path
lun clone split status lun_path

stop clone splitting
lun clone split stop lun_path

delete snapshot copy
snap delete vol-name snapshot-name
snap delete -a -f <vol-name>

disk space usage
lun snap usage tradvol1 mysnap

Use Volume copy to copy LUNs
vol copy start -S source:source_volume dest:dest_volume
vol copy start -S /vol/vol0 filerB:/vol/vol1


Disk Space Management


There are a number of commands that let you see disk space usage and manage it:

Disk space usage for aggregates
aggr show_space

Disk space usage for volumes or aggregates
df

The estimated rate of change of data between Snapshot copies in a volume
snap delta
snap delta /vol/tradvol1 tradvol1_snapshot_08122010

The estimated amount of space freed if you delete the specified Snapshot copies
snap reclaimable
snap reclaimable /vol/tradvol1 tradvol1_snapshot_08122010


Chapter 6: NetApp Commandline Cheatsheet


This is a quick and dirty NetApp commandline cheatsheet of most of the common commands used; it is not exhaustive, so check out the man pages and NetApp documentation. I will be updating this document as I become more familiar with the NetApp application.

Server
Startup and Shutdown

Boot Menu
1) Normal Boot.
2) Boot without /etc/rc.
3) Change password.
4) Clean configuration and initialize all disks.
5) Maintenance mode boot.
6) Update flash from backup config.
7) Install new software first.
8) Reboot node.
Selection (1-8)?

Normal Boot - continue with the normal boot operation
Boot without /etc/rc - boot with only default options and disable some services
Change Password - change the storage system's password
Clean configuration and initialize all disks - cleans all disks and resets the filer to factory default settings
Maintenance mode boot - file system operations are disabled, limited set of commands
Update flash from backup config - restore the configuration information if corrupted on the boot device
Install new software first - use this if the filer does not include support for the storage array
Reboot node - restart the filer

startup modes
boot_ontap - boots the current Data ONTAP software release stored on the boot device
boot_primary - boots the Data ONTAP release stored on the boot device as the primary kernel
boot_backup - boots the backup Data ONTAP release from the boot device
boot_diags - boots a Data ONTAP diagnostic kernel
Note: there are other options but NetApp will provide these as and when necessary

shutdown
halt [-t <mins>] [-f]
-t = shutdown after the minutes specified
-f = used with HA clustering, means that the partner filer does not take over

restart
reboot [-t <mins>] [-s] [-r] [-f]
-t = reboot in specified minutes
-s = clean reboot but also power cycle the filer (like pushing the off button)
-r = bypasses the shutdown (not clean) and power cycles the filer
-f = used with HA clustering, means that the partner filer does not take over

System Privilege and System shell

Privilege
priv set [-q] [admin | advanced]
Note: by default you are in administrative mode
-q = quiet, suppresses warning messages

Access the systemshell
## First obtain the advanced privileges
priv set advanced
## Then unlock and reset the diag user's password
useradmin diaguser unlock
useradmin diaguser password
## Now you should be able to access the systemshell and use all the standard Unix commands
systemshell
login: diag
password: ********

Licensing and Version

licenses (commandline)
## display licenses
license
## Adding a license
license add <code1> <code2>
## Disabling a license
license delete <service>

Data ONTAP version
version [-b]
-b = include name and version information for the primary, secondary and diagnostic kernels and the firmware

Useful Commands

read the messages file
rdfile /etc/messages

write to a file
wrfile -a <file> <text>

# Examples
wrfile -a /etc/test1 This is line 6 # comment here
wrfile -a /etc/test1 "This is line \"15\"."

System Configuration
General information: sysconfig, sysconfig -v, sysconfig -a (detailed)
Configuration errors: sysconfig -c
Display disk devices: sysconfig -d, sysconfig -A
Display Raid group information: sysconfig -V
Display aggregates and plexes: sysconfig -r
Display tape devices: sysconfig -t
Display tape libraries: sysconfig -m

Environment Information
General information: environment status
Disk enclosures (shelves): environment shelf [adapter], environment shelf_power_status
Chassis: environment chassis all, environment chassis list-sensors, environment chassis Fans, environment chassis CPU_Fans, environment chassis Power, environment chassis Temperature, environment chassis [PS1|PS2]

Fibre Channel Information
Fibre Channel stats: fcstat link_stats, fcstat fcal_stats, fcstat device_map

SAS Adapter and Expander Information
Shelf information: sasstat shelf, sasstat shelf_short
Expander information: sasstat expander, sasstat expander_map, sasstat expander_phy_state
Disk information: sasstat dev_stats
Adapter information: sasstat adapter_state

Statistical Information
System: stats show system
Processor: stats show processor
Disk: stats show disk
Volume: stats show volume
LUN: stats show lun
Aggregate: stats show aggregate
FC: stats show fcp
iSCSI: stats show iscsi
CIFS: stats show cifs
Network: stats show ifnet

Storage
Storage Commands

Display
storage show adapter
storage show disk [-a|-x|-p|-T]
storage show expander
storage show fabric
storage show fault
storage show hub
storage show initiators
storage show mc
storage show port
storage show shelf
storage show switch
storage show tape [supported]
storage show acp
storage array show
storage array show-ports
storage array show-luns
storage array show-config

Enable
storage enable adapter

Disable
storage disable adapter

Rename switch
storage rename <oldname> <newname>

Remove port
storage array remove-port <array_name> -p <WWPN>

Load Balance
storage load balance

Power Cycle
storage power_cycle shelf -h
storage power_cycle shelf start -c <channel name>
storage power_cycle shelf completed


Disks
Disk Information
Disk name - This is the physical disk itself; normally the disk will reside in a disk enclosure. The disk will have a pathname like 2a.17, depending on the type of disk enclosure:
2a = SCSI adapter
17 = disk SCSI ID
Any disks that are classed as spare will be used in any group to replace failed disks. They can also be assigned to any aggregate. Disks are assigned to a specific pool.

Disk Types
Data - holds data stored within the RAID group
Spare - does not hold usable data but is available to be added to a RAID group in an aggregate, also known as a hot spare
Parity - stores data reconstruction information within the RAID group
dParity - stores double-parity information within the RAID group, if RAID-DP is enabled

Disk Commands

Display
disk show
disk show <disk_name>
disk_list
sysconfig -r
sysconfig -d
## list all unassigned/assigned disks
disk show -n
disk show -a

Adding (assigning)
## Add a specific disk to pool1 the mirror pool
disk assign <disk_name> -p 1
## Assign all disks to pool 0; by default they are assigned to pool 0 if the "-p" option is not specified
disk assign all -p 0

Remove (spin down disk)
disk remove <disk_name>

Reassign
disk reassign -d <new_sysid>

Replace
disk replace start <disk_name> <spare_disk_name>
disk replace stop <disk_name>
Note: uses Rapid RAID Recovery to copy data from the specified file system disk to the specified spare disk; you can stop this process using the stop command

Zero spare disks
disk zero spares

fail a disk
disk fail <disk_name>

Scrub a disk
disk scrub start
disk scrub stop

Sanitize
disk sanitize start <disk list>
disk sanitize abort <disk_list>
disk sanitize status
disk sanitize release <disk_list>
Note: the release modifies the state of the disk from sanitize to spare. Sanitize requires a license.

Maintenance
disk maint start -d <disk_list>
disk maint abort <disk_list>
disk maint list
disk maint status
Note: you can test the disk using maintenance mode

swap a disk
disk swap
disk unswap
Note: it stalls all SCSI I/O until you physically replace or add a disk; can be used on SCSI disks only.

Statistics
disk_stat <disk_name>

Simulate a pulled disk
disk simpull <disk_name>

Simulate a pushed disk
disk simpush -l
disk simpush <complete path of disk obtained from above command>
## Example
ontap1> disk simpush -l
The following pulled disks are available for pushing:
v0.16:NETAPP__:VD-1000MB-FZ520:14161400:2104448
ontap1> disk simpush v0.16:NETAPP__:VD-1000MB-FZ520:14161400:2104448

Aggregates
Aggregate States
Online - Read and write access to volumes is allowed
Restricted - Some operations, such as parity reconstruction, are allowed, but data access is not allowed
Offline - No access to the aggregate is allowed

Aggregate Status Values
32-bit - This aggregate is a 32-bit aggregate
64-bit - This aggregate is a 64-bit aggregate
aggr - This aggregate is capable of containing FlexVol volumes
copying - This aggregate is currently the target aggregate of an active copy operation
degraded - This aggregate contains at least one RAID group with a single disk failure that is not being reconstructed
double degraded - This aggregate contains at least one RAID group with a double disk failure that is not being reconstructed (RAID-DP aggregates only)
foreign - Disks that the aggregate contains were moved to the current storage system from another storage system
growing - Disks are in the process of being added to the aggregate
initializing - The aggregate is in the process of being initialized
invalid - The aggregate contains no volumes and none can be added. Typically this happens only after an aborted "aggr copy" operation
ironing - A WAFL consistency check is being performed on the aggregate
mirror degraded - The aggregate is mirrored and one of its plexes is offline or resynchronizing
mirrored - The aggregate is mirrored
needs check - A WAFL consistency check needs to be performed on the aggregate
normal - The aggregate is unmirrored and all of its RAID groups are functional
out-of-date - The aggregate is mirrored and needs to be resynchronized
partial - At least one disk was found for the aggregate, but two or more disks are missing
raid0 - The aggregate consists of RAID 0 (no parity) RAID groups
raid4 - The aggregate consists of RAID 4 RAID groups
raid_dp - The aggregate consists of RAID-DP RAID groups
reconstruct - At least one RAID group in the aggregate is being reconstructed
redirect - Aggregate reallocation or file reallocation with the "-p" option has been started on the aggregate; read performance will be degraded
resyncing - One of the mirrored aggregate's plexes is being resynchronized
snapmirror - The aggregate is a SnapMirror replica of another aggregate (traditional volumes only)
trad - The aggregate is a traditional volume and cannot contain FlexVol volumes
verifying - A mirror operation is currently running on the aggregate
wafl inconsistent - The aggregate has been marked corrupted; contact technical support

Aggregate Commands

Displaying
aggr status
aggr status -r
aggr status <aggregate> [-v]

Check you have spare disks
aggr status -s

Adding (creating)
## Syntax - if no option is specified then the default is used
aggr create <aggr_name> [-f] [-m] [-n] [-t {raid0 | raid4 | raid_dp}] [-r raid_size] [-T disk_type] [-R rpm] [-L] [-B {32|64}] <disk_list>
## create aggregate called newaggr that can have a maximum of 8 RAID groups
aggr create newaggr -r 8 -d 8a.16 8a.17 8a.18 8a.19
## create aggregate called newfastaggr using 20 x 15000rpm disks
aggr create newfastaggr -R 15000 20
## create aggregate called newFCALaggr (note SAS and FC disks may be used)
aggr create newFCALaggr -T FCAL 15
Note:
-f = overrides the default behavior that does not permit disks in a plex to belong to different disk pools
-m = specifies the optional creation of a SyncMirror
-n = displays the results of the command but does not execute it
-r = maximum size (number of disks) of the RAID groups for this aggregate
-T = disk type ATA, SATA, SAS, BSAS, FCAL or LUN
-R = rpm, which includes 5400, 7200, 10000 and 15000

Remove(destroying)

aggr offline <aggregate> aggr destroy <aggregate>

60

Unremoving(undestroying) aggr undestroy <aggregate> Rename aggr rename <old name> <new name> ## Syntax aggr add <aggr_name> [-f] [-n] [-g {raid_group_name | new |all}] <disk_list> ## add an additonal disk to aggregate pfvAggr, use "aggr status" to get group name aggr status pfvAggr -r aggr add pfvAggr -g rg0 -d v5.25 ## Add 4 300GB disk to aggregate aggr1 aggr add aggr1 4@300 offline online restricted state aggr offline <aggregate> aggr online <aggregate> aggr restrict <aggregate> ## to display the aggregates options aggr options <aggregate> Change an aggregate options ## change a aggregates raid group aggr options <aggregate> raidtype raid_dp ## change a aggregates raid size aggr options <aggregate> raidsize 4 show space usage Mirror Split mirror aggr show_space <aggregate> aggr mirror <aggregate> aggr split <aggregate/plex> <new_aggregate> ## Obtain the status aggr copy status ## Start a copy aggr copy start <aggregate source> <aggregate destination> Copy from one agrregate to another ## Abort a copy - obtain the operation number by using "aggr copy status" aggr copy abort <operation number> ## Throttle the copy 10=full speed, 1=one-tenth full speed aggr copy throttle <operation number> <throttle speed> ## Media scrub status aggr media_scrub status aggr scrub status ## start a scrub operation

Increase size

Scrubbing (parity)

61

aggr scrub start [ aggrname | plexname | groupname ]
## stop a scrub operation
aggr scrub stop [ aggrname | plexname | groupname ]
## suspend a scrub operation
aggr scrub suspend [ aggrname | plexname | groupname ]
## resume a scrub operation
aggr scrub resume [ aggrname | plexname | groupname ]

Note: Starts parity scrubbing on the named online aggregate. Parity scrubbing compares the data disks to the parity disk(s) in their RAID group, correcting the parity disks contents as necessary. If no name is given, parity scrubbing is started on all online aggregates. If an aggregate name is given, scrubbing is started on all RAID groups contained in the aggregate. If a plex name is given, scrubbing is started on all RAID groups contained in the plex.

Look at the following system options:
raid.scrub.duration 360
raid.scrub.enable on
raid.scrub.perf_impact low
raid.scrub.schedule

Verify (mirroring)
## verify status
aggr verify status
## start a verify operation
aggr verify start [ aggrname ]
## stop a verify operation
aggr verify stop [ aggrname ]
## suspend a verify operation
aggr verify suspend [ aggrname ]
## resume a verify operation
aggr verify resume [ aggrname ]

Note: Starts RAID mirror verification on the named online mirrored aggregate. If no name is given, then RAID mirror verification is started on all online mirrored aggregates. Verification compares the data in both plexes of a mirrored aggregate. In the default case, all blocks that differ are logged, but no changes are made.


Media Scrub
aggr media_scrub status

Note: Prints the media scrubbing status of the named aggregate, plex, or group. If no name is given, then status is printed for all RAID groups currently running a media scrub. The status includes a percent-complete and whether it is suspended.

Look at the following system options:
raid.media_scrub.enable on
raid.media_scrub.rate 600
raid.media_scrub.spares.enable on

Volumes
Volume States
Online      Read and write access to this volume is allowed.
Restricted  Some operations, such as parity reconstruction, are allowed, but data access is not allowed.
Offline     No access to the volume is allowed.

Volume Status Values
access denied      The origin system is not allowing access. (FlexCache volumes only.)
active redirect    The volume's containing aggregate is undergoing reallocation (with the -p option specified). Read performance may be reduced while the volume is in this state.
connecting         The caching system is trying to connect to the origin system. (FlexCache volumes only.)
copying            The volume is currently the target of an active vol copy or snapmirror operation.
degraded           The volume's containing aggregate contains at least one degraded RAID group that is not being reconstructed after single disk failure.
double degraded    The volume's containing aggregate contains at least one degraded RAID-DP group that is not being reconstructed after double disk failure.
flex               The volume is a FlexVol volume.
flexcache          The volume is a FlexCache volume.
foreign            Disks used by the volume's containing aggregate were moved to the current storage system from another storage system.
growing            Disks are being added to the volume's containing aggregate.
initializing       The volume's containing aggregate is being initialized.
invalid            The volume does not contain a valid file system.
ironing            A WAFL consistency check is being performed on the volume's containing aggregate.
lang mismatch      The language setting of the origin volume was changed since the caching volume was created. (FlexCache volumes only.)
mirror degraded    The volume's containing aggregate is mirrored and one of its plexes is offline or resynchronizing.
mirrored           The volume's containing aggregate is mirrored.
needs check        A WAFL consistency check needs to be performed on the volume's containing aggregate.
out-of-date        The volume's containing aggregate is mirrored and needs to be resynchronized.
partial            At least one disk was found for the volume's containing aggregate, but two or more disks are missing.
raid0              The volume's containing aggregate consists of RAID0 (no parity) groups (array LUNs only).
raid4              The volume's containing aggregate consists of RAID4 groups.
raid_dp            The volume's containing aggregate consists of RAID-DP groups.
reconstruct        At least one RAID group in the volume's containing aggregate is being reconstructed.
redirect           The volume's containing aggregate is undergoing aggregate reallocation or file reallocation with the -p option. Read performance to volumes in the aggregate might be degraded.
rem vol changed    The origin volume was deleted and re-created with the same name. Re-create the FlexCache volume to reenable the FlexCache relationship. (FlexCache volumes only.)
rem vol unavail    The origin volume is offline or has been deleted. (FlexCache volumes only.)
remote nvram err   The origin system is experiencing problems with its NVRAM. (FlexCache volumes only.)
resyncing          One of the plexes of the volume's containing mirrored aggregate is being resynchronized.
snapmirrored       The volume is in a SnapMirror relationship with another volume.
trad               The volume is a traditional volume.
unrecoverable      The volume is a FlexVol volume that has been marked unrecoverable; contact technical support.
unsup remote vol   The origin system is running a version of Data ONTAP that does not support FlexCache volumes or is not compatible with the version running on the caching system. (FlexCache volumes only.)
verifying          RAID mirror verification is running on the volume's containing aggregate.
wafl inconsistent  The volume or its containing aggregate has been marked corrupted; contact technical support.

General Volume Operations (Traditional and FlexVol)


Displaying
vol status
vol status -v (verbose)
vol status -l (display language)

Remove (destroying)
vol offline <vol_name>
vol destroy <vol_name>

Rename
vol rename <old_name> <new_name>

online / offline / restrict
vol online <vol_name>
vol offline <vol_name>
vol restrict <vol_name>

decompress
vol decompress status
vol decompress start <vol_name>
vol decompress stop <vol_name>

Mirroring
vol mirror volname [-n][-v victim_volname][-f][-d <disk_list>]

Note: Mirrors the currently-unmirrored traditional volume volname, either with the specified set of disks or with the contents of another unmirrored traditional volume victim_volname, which will be destroyed in the process. The vol mirror command fails if either the chosen volname or victim_volname are flexible volumes. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite.

Change language
vol lang <vol_name> <language>

Change maximum number of files
## Display maximum number of files
maxfiles <vol_name>
## Change maximum number of files
maxfiles <vol_name> <max_num_files>

Change root volume
vol options <vol_name> root

Media Scrub
vol media_scrub status [volname|plexname|groupname -s diskname][-v]

Note: Prints the media scrubbing status of the named aggregate, volume, plex, or group. If no name is given, then status is printed for all RAID groups currently running a media scrub. The status includes a percent-complete and whether it is suspended.

Look at the following system options:
raid.media_scrub.enable on


raid.media_scrub.rate 600 raid.media_scrub.spares.enable on FlexVol Volume Operations (only) ## Syntax vol create vol_name [-l language_code] [-s {volume|file|none}] <aggr_name> size{k|m|g|t} Adding (creating) ## Create a 200MB volume using the english character set vol create newvol -l en aggr1 200M ## Create 50GB flexvol volume vol create vol1 aggr0 50g additional disks ## add an additional disk to aggregate flexvol1, use "aggr status" to get group name aggr status flexvol1 -r aggr add flexvol1 -g rg0 -d v5.25 vol size <vol_name> [+|-] n{k|m|g|t} Resizing ## Increase flexvol1 volume by 100MB vol size flexvol1 + 100m
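Putting the creation and resizing commands above into one pass, a minimal sketch might look like the following (the volume name projvol, aggregate aggr1 and the sizes are invented for illustration):

## create a 20GB FlexVol volume in aggr1
vol create projvol aggr1 20g
## grow it by another 5GB later on
vol size projvol + 5g
## check the resulting free space
df -Ah projvol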

Automatically resizing
vol autosize vol_name [-m size {k|m|g|t}] [-I size {k|m|g|t}] on
## automatically grow by 10MB increments to a max of 500MB
vol autosize flexvol1 -m 500m -I 10m on

Determine free space and inodes
df -Ah
df -I

Determine size
vol size <vol_name>

Automatic free space preservation
vol options <vol_name> try_first [volume_grow|snap_delete]

Note: If you specify volume_grow, Data ONTAP attempts to increase the volume's size before deleting any Snapshot copies. Data ONTAP increases the volume size based on specifications you provided using the vol autosize command. If you specify snap_delete, Data ONTAP attempts to create more free space by deleting Snapshot copies before increasing the size of the volume. Data ONTAP deletes Snapshot copies based on the specifications you provided using the snap autodelete command.

Display a FlexVol volume's containing aggregate
vol container <vol_name>

Cloning
vol clone create clone_vol [-s none|file|volume] -b parent_vol


[parent_snap] vol clone split start vol clone split stop vol clone split estimate vol clone split status Note: The vol clone create command creates a flexible volume named clone_vol on the local filer that is a clone of a "backing" flexible volume named par_ent_vol. A clone is a volume that is a writable snapshot of another volume. Initially, the clone and its parent share the same storage; more storage space is consumed only as one volume or the other changes. vol copy start [-S|-s snapshot] <vol_source> <vol_destination> vol copy status vol copy abort <operation number> vol copy throttle <operation_number> <throttle value 10-1> ## Example - Copies the nightly snapshot named nightly.1 on volume vol0 on the local filer to the volume vol0 on remote ## filer named toaster1. vol copy start -s nightly.1 vol0 toaster1:vol0 Note: Copies all data, including snapshots, from one volume to another. If the -S flag is used, the command copies all snapshots in the source volume to the destination volume. To specify a particular snapshot to copy, use the -s flag followed by the name of the snapshot. If neither the -S nor -s flag is used in the command, the filer automatically creates a distinctively-named snapshot at the time the vol copy start command is executed and copies only that snapshot to the destination volume. The source and destination volumes must either both be traditional volumes or both be flexible volumes. The vol copy command will abort if an attempt is made to copy between different volume types. The source and destination volumes can be on the same filer or on different filers. If the source or destination volume is on a filer other than the one on which the vol copy start command was entered, specify the volume name in the filer_name:volume_name format. Traditional Volume Operations (only) vol|aggr create vol_name -v [-l language_code] [-f] [-m] [-n] [-v] [-t {raid4|raid_dp}] [-r raidsize] [-T disk-type] -R rpm] [-L] disk-list adding (creating) ## create traditional volume using aggr command aggr create tradvol1 -l en -t raid4 -d v5.26 v5.27 ## create traditional volume using vol command

Copying


vol create tradvol1 -l en -t raid4 -d v5.26 v5.27 ## Create traditional volume using 20 disks, each RAID group can have 10 disks vol create vol1 -r 10 20 additional disks splitting vol add volname[-f][-n][-g <raidgroup>]{ ndisks[@size]|-d <disk_list> } ## add another disk to the already existing traditional volume vol add tradvol1 -d v5.28 aggr split <volname/plexname> <new_volname> ## The more new "aggr scrub " command is preferred vol scrub status [volname|plexname|groupname][-v] vol scrub start [volname|plexname|groupname][-v] vol scrub stop [volname|plexname|groupname][-v] Scrubing (parity) vol scrub suspend [volname|plexname|groupname][-v] vol scrub resume [volname|plexname|groupname][-v] Note: Print the status of parity scrubbing on the named traditional volume, plex or RAID group. If no name is provided, the status is given on all RAID groups currently undergoing parity scrubbing. The status includes a percent-complete as well as the scrubs suspended status (if any). ## The more new "aggr verify" command is preferred ## verify status vol verify status ## start a verify operation vol verify start [ aggrname ] ## stop a verify operation vol verify stop [ aggrname ] Verify (mirroring) ## suspend a verify operation vol verify suspend [ aggrname ] ## resume a verify operation vol verify resume [ aggrname ] Note: Starts RAID mirror verification on the named online mirrored aggregate. If no name is given, then RAID mirror verification is started on all online mirrored aggregates. Verification compares the data in both plexes of a mirrored aggregate. In the default case, all blocks


that differ are logged, but no changes are made.

FlexCache Volumes
FlexCache Consistency You can think of a delegation as a contract between the origin system and the caching volume; as long as the caching volume has the delegation, the file has not changed. Delegations are used only in certain situations. Delegations When data from a file is retrieved from the origin volume, the origin system can give a delegation for that file to the caching volume. Before that file is modified on the origin volume, whether due to a request from another caching volume or due to direct client access, the origin system revokes the delegation for that file from all caching volumes that have that delegation. When data is retrieved from the origin volume, the file that contains that data is considered valid in the FlexCache volume as long as a delegation exists for that file. If no delegation exists, the file is considered valid for a certain length of time, specified by the attribute cache timeout. If a client requests data from a file for which there are no delegations, and the attribute cache timeout has been exceeded, the FlexCache volume compares the file attributes of the cached file with the attributes of the file on the origin system. If a client modifies a file that is cached, that operation is passed back, or proxied through, to the origin system, and the file is ejected from the cache. When the write is proxied, the attributes of the file on the origin volume are changed. This means that when another client requests data from that file, any other FlexCache volume that has that data cached will re-request the data after the attribute cache timeout is reached. FlexCache Status Values access denied connecting lang mismatch rem vol changed rem vol unavail remote nvram err The origin system is not allowing FlexCache access. Check the setting of the flexcache.access option on the origin system. The caching system is trying to connect to the origin system. The language setting of the origin volume was changed since the FlexCache volume was created. The origin volume was deleted and re-created with the same name. Recreate the FlexCache volume to reenable the FlexCache relationship. The origin volume is offline or has been deleted. The origin system is experiencing problems with its NVRAM.

Attribute cache timeouts

write operation proxy


unsup remote vol

The origin system is running a version of Data ONTAP that either does not support FlexCache volumes or is not compatible with the version running on the caching system. FlexCache Commands vol status vol status -v <flexcache_name>

Display

## How to display the options available and what they are set to vol help options vol options <flexcache_name> df -L ## Syntax vol create <flexcache_name> <aggr> [size{k|m|g|t}] -S origin:source_vol

Display free space

Adding (Create)

## Create a FlexCache volume called flexcache1 with autogrow in aggr1 aggregate with the source volume vol1 ## on storage netapp1 server vol create flexcache1 aggr1 -S netapp1:vol1 vol offline < flexcache_name> vol destroy <flexcache_name>

Removing (destroy)

Automatically vol options <flexcache_name> flexcache_autogrow [on|off] resizing Eject file from cache flexcache eject <path> [-f] ## Client stats flexcache stats -C <flexcache_name> Statistics ## Server stats flexcache stats -S <volume_name> -c <client> ## File stats flexcache fstat <path>

FlexClone Volumes
FlexClone Commands

Display
vol status
vol status <flexclone_name> -v
df -Lh

Adding (create)
## Syntax
vol clone create clone_name [-s {volume|file|none}] -b parent_name [parent_snap]
## create a flexclone called flexclone1 from the parent flexvol1
vol clone create flexclone1 -b flexvol1

Removing (destroy)
vol offline <flexclone_name>
vol destroy <flexclone_name>

Splitting
## Determine the free space required to perform the split
vol clone split estimate <flexclone_name>
## Double check you have the space
df -Ah
## Perform the split
vol clone split start <flexclone_name>
## Check up on its status
vol clone split status <flexclone_name>
## Stop the split
vol clone split stop <flexclone_name>

Log file
/etc/log/clone

The clone log file records the following information:
Cloning operation ID
The name of the volume in which the cloning operation was performed
Start time of the cloning operation
End time of the cloning operation
Parent file/LUN and clone file/LUN names
Parent file/LUN ID
Status of the clone operation: successful, unsuccessful, or stopped and some other details

Deduplication
Deduplication Commands sis start -s <path> start/restart deduplication operation stop deduplication operation schedule deduplication sis start -s /vol/flexvol1 ## Use previous checkpoint sis start -sp <path> sis stop <path> sis config -s <schedule> <path> sis config -s mon-fri@23 /vol/flexvol1 Note: schedule lists the days and hours of the day when


deduplication runs. The schedule can be of the following forms:

day_list[@hour_list] If hour_list is not specified, deduplication runs at midnight on each scheduled day. hour_list[@day_list] If day_list is not specified, deduplication runs every day at the specified hours. A hyphen (-) disables deduplication operations for the specified FlexVol volume.

enabling disabling status Display saved space

sis on <path> sis off <path> sis status -l <path> df -s <path>
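Putting the deduplication commands above together, a minimal end-to-end run on a single FlexVol might look like this sketch (the path /vol/flexvol1 is just an example):

## enable deduplication on the volume
sis on /vol/flexvol1
## schedule it for 11pm on weekdays
sis config -s mon-fri@23 /vol/flexvol1
## run an initial scan of the existing data
sis start -s /vol/flexvol1
## check progress and then the space savings
sis status -l /vol/flexvol1
df -s /vol/flexvol1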

QTrees
QTree Commands qtree status [-i] [-v] Display Note: The -i option includes the qtree ID number in the display. The -v option includes the owning vFiler unit, if the MultiStore license is enabled. ## Syntax - by default wafl.default_qtree_mode option is used qtree create path [-m mode] adding (create) ## create a news qtree in the /vol/users volume using 770 as permissions qtree create /vol/users/news -m 770 rm -Rf <directory> mv <old_name> <new_name> ## Move the directory to a different directory mv /n/joel/vol1/dir1 /n/joel/vol1/olddir ## Create the qtree qtree create /n/joel/vol1/dir1 convert a directory into a qtree directory ## Move the contents of the old directory back into the new QTree mv /n/joel/vol1/olddir/* /n/joel/vol1/dir1 ## Remove the old directory name rmdir /n/joel/vol1/olddir

Remove Rename


qtree stats [-z] [vol_name] stats Note: -z = zero stats ## Syntax qtree security path {unix | ntfs | mixed} ## Change the security style of /vol/users/docs to mixed qtree security /vol/users/docs mixed

Change the security style
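As a short worked example combining the qtree commands above (the volume and qtree names are made up):

## create a qtree for project data with 770 permissions
qtree create /vol/vol1/projects -m 770
## give it NTFS security for CIFS-only access
qtree security /vol/vol1/projects ntfs
## confirm the qtrees and their security styles
qtree status -v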

Quotas
Quota Commands Quotas configuration /mroot/etc/quotas file
## hard limit | thres |soft limit ##Quota Target type disk files| hold |disk file ##------------------- ----- ----- ----- ---* tree@/vol/vol0 # monitor usage on all qtrees in vol0 /vol/vol2/qtree tree 1024K 75k # enforce qtree quota using kb tinh user@/vol/vol2/qtree1 100M # enforce users quota in specified qtree dba group@/vol/ora/qtree1 100M # enforce group quota in specified qtree # * = default user/group/qtree # - = placeholder, no limit enforced, just enable stats collection Note: you have lots of permutations, so checkout the documentation

Example quota file

Displaying
quota report [<path>]

Activating
quota on [-w] <vol_name>
Note: -w = return only after the entire quotas file has been scanned

Deactivating
quota off [-w] <vol_name>

Reinitializing
quota off [-w] <vol_name>
quota on [-w] <vol_name>

Resizing
quota resize <vol_name>
Note: this command rereads the quota file

Deleting
edit the quota file, then
quota resize <vol_name>

Log messaging
quota logmsg
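Putting these together, a typical sequence for bringing a new tree quota online might look like the following sketch (the volume, qtree and limit are invented; the quotas file entry is assumed to have been added already, for example with wrfile):

## /etc/quotas entry added beforehand:
##   /vol/vol2/projects   tree   100M
## activate quotas on the volume and wait for the scan to finish
quota on -w vol2
## report current usage
quota report /vol/vol2/projects
## after editing limits for existing targets, a resize is enough
quota resize vol2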

LUNs, igroups and LUN mapping


LUN configuration Display Initialize/Configure LUNs, mapping Create Destroy Note: the "-f" will force the destroy lun resize <lun path> <size> Resize lun resize /vol/tradvol1/lun1 75m Restart block protocol access Stop block protocol access lun online /vol/tradvol1/lun1 lun offline /vol/tradvol1/lun1 lun map /vol/tradvol1/lun1 win_hosts_group1 0 lun map -f /vol/tradvol1/lun2 linux_host_group1 1 Map a LUN to an initiator group lun show -m Note: use "-f" to force the mapping Remove LUN mapping lun show -m lun offline /vol/tradvol1 lun unmap /vol/tradvol1/lun1 win_hosts_group1 0 lun show lun show -m lun show -v lun setup Note: follow the prompts to create and configure LUN's lun create -s 100m -t windows /vol/tradvol1/lun1 lun destroy [-f] /vol/tradvol1/lun1
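Taken together, the commands above cover a complete provisioning pass. A hedged sketch, with the LUN, igroup and LUN ID chosen purely for illustration:

## create the LUN inside an existing volume
lun create -s 20g -t windows /vol/tradvol1/lun2
## create an iSCSI igroup holding the host's initiator name
igroup create -i -t windows win_hosts_group1 iqn.1991-05.com.microsoft:xblade
## map the LUN to the igroup as LUN ID 1
lun map /vol/tradvol1/lun2 win_hosts_group1 1
## verify the mapping and settings
lun show -m
lun config_check -v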

Displays or zeros read/write statistics lun stats /vol/tradvol1/lun1 for LUN Comments Check all lun/igroup/fcp settings for correctness Manage LUN cloning lun comment /vol/tradvol1/lun1 "10GB for payroll records" lun config_check -v # Create a Snapshot copy of the volume containing the LUN to be cloned by entering the following command


snap create tradvol1 tradvol1_snapshot_08122010 # Create the LUN clone by entering the following command lun clone create /vol/tradvol1/clone_lun1 -b /vol/tradvol1/tradvol1_snapshot_08122010 lun1 Show the maximum possible size of a LUN on a lun maxsize /vol/tradvol1 given volume or qtree Move (rename) LUN lun move /vol/tradvol1/lun1 /vol/tradvol1/windows_lun1

Display/change lun serial -x /vol/tradvol1/lun1 LUN serial number Manage LUN properties Configure NAS file-sharing properties Manage LUN and snapshot interactions lun set reservation /vol/tradvol1/hpux/lun0 lun share <lun_path> { none | read | write | all }

lun snap usage -s <volume> <snapshot> igroup configuration

display create (iSCSI) create (FC) destroy

igroup show igroup show -v igroup show iqn.1991-05.com.microsoft:xblade igroup create -i -t windows win_hosts_group1 iqn.199105.com.microsoft:xblade igroup create -i -f windows win_hosts_group1 iqn.199105.com.microsoft:xblade igroup destroy win_hosts_group1

add initiators to an igroup add win_hosts_group1 iqn.1991-05.com.microsoft:laptop igroup remove initiators to igroup remove win_hosts_group1 iqn.1991an igroup 05.com.microsoft:laptop rename set O/S type igroup rename win_hosts_group1 win_hosts_group2 igroup set win_hosts_group1 ostype windows igroup set win_hosts_group1 alua yes Note: ALUA defines a standard set of SCSI commands for discovering and managing multiple paths to LUNs on Fibre Channel and iSCSI SANs. ALUA enables the initiator to query the target about path attributes, such as primary path and secondary path. It also enables the target to communicate events

Enabling ALUA


back to the initiator. As long as the host supports the ALUA standard, multipathing software can be developed to support any array. Proprietary SCSI commands are no longer required. iSCSI commands display status start stop stats nodename iscsi initiator show iscsi session show [-t] iscsi connection show -v iscsi security show iscsi status iscsi start iscsi stop iscsi stats iscsi nodename # to change the name iscsi nodename <new name> iscsi interface show interfaces iscsi interface enable e0b iscsi interface disable e0b iscsi portal show portals Note: Use the iscsi portal show command to display the target IP addresses of the storage system. The storage system's target IP addresses are the addresses of the interfaces used for the iSCSI protocol iscsi interface accesslist show accesslists Note: you can add or remove interfaces from the list Port Sets display create destroy add remove binding portset show portset show portset1 igroup show linux-igroup1 portset create -f portset1 SystemA:4b igroup unbind linux-igroup1 portset1 portset destroy portset1 portset add portset1 SystemB:4b portset remove portset1 SystemB:4b igroup bind linux-igroup1 portset1 igroup unbind linux-igroup1 portset1 FCP service display fcp show adapter -v


daemon status start stop stats

fcp status fcp start fcp stop fcp stats -i interval [-c count] [-a | adapter] fcp stats -i 1

target expansion adapters target adapter speed set WWPN #

fcp config <adapter> [down|up] fcp config 4a down fcp config <adapter> speed [auto|1|2|4|8] fcp config 4a speed 8 fcp portname set [-f] adapter wwpn fcp portname set -f 1b 50:0a:09:85:87:09:68:ad fcp portname swap [-f] adapter1 adapter2

swap WWPN # fcp portname swap -f 1a 1b # display nodename fcp nodename fcp nodename [-f]nodename fcp nodename 50:0a:09:80:82:02:8d:ff change WWNN Note: The WWNN of a storage system is generated by a serial number in its NVRAM, but it is stored ondisk. If you ever replace a storage system chassis and reuse it in the same Fibre Channel SAN, it is possible, although extremely rare, that the WWNN of the replaced storage system is duplicated. In this unlikely event, you can change the WWNN of the storage system. fcp wwpn-alias show fcp wwpn-alias show -a my_alias_1 fcp wwpn-alias show -w 10:00:00:00:c9:30:80:2 fcp wwpn-alias set [-f] alias wwpn fcp wwpn-alias set my_alias_1 10:00:00:00:c9:30:80:2f fcp wwpn-alias remove [-a alias ... | -w wwpn] WWPN Aliases remove fcp wwpn-alias remove -a my_alias_1 fcp wwpn-alias remove -w 10:00:00:00:c9:30:80:2

WWPN Aliases display WWPN Aliases create

Snapshotting and Cloning


Snapshot and Cloning commands


Display clones snap list # Create a LUN by entering the following command lun create -s 10g -t solaris /vol/tradvol1/lun1 # Create a Snapshot copy of the volume containing the LUN to be cloned by entering the following command snap create tradvol1 tradvol1_snapshot_08122010 # Create the LUN clone by entering the following command lun clone create /vol/tradvol1/clone_lun1 -b /vol/tradvol1/lun1 tradvol1_snapshot_08122010 # display the snapshot copies lun snap usage tradvol1 tradvol1_snapshot_08122010 # Delete all the LUNs in the active file system that are displayed by the lun snap usage command by entering the following command lun destroy /vol/tradvol1/clone_lun1 # Delete all the Snapshot copies that are displayed by the lun snap usage command in the order they appear snap delete tradvol1 tradvol1_snapshot_08122010 vol options <vol_name> <snapshot_clone_dependency> on vol options <vol_name> <snapshot_clone_dependency> off Note: Prior to Data ONTAP 7.3, the system automatically locked all backing Snapshot copies when Snapshot copies of LUN clones were taken. Starting with Data ONTAP 7.3, you can enable the system to only lock backing Snapshot copies for the active LUN clone. If you do this, when you delete the active LUN clone, you can delete the base Snapshot copy without having to first delete all of the more recent backing Snapshot copies. This behavior in not enabled by default; use the snapshot_clone_dependency volume option to enable it. If this option is set to off, you will still be required to delete all subsequent Snapshot copies before deleting the base Snapshot copy. If you enable this option, you are not required to rediscover the LUNs. If you perform a subsequent volume snap restore operation, the system restores whichever value was present at the time the Snapshot copy was taken. Restoring snapshot splitting the clone stop clone splitting snap restore -s payroll_lun_backup.2 -t vol /vol/payroll_lun lun clone split start lun_path lun clone split status lun_path lun clone split stop lun_path

create clone

destroy clone

clone dependency


delete snapshot copy disk space usage Use Volume copy to copy LUN's

snap delete vol-name snapshot-name snap delete -a -f <vol-name> lun snap usage tradvol1 mysnap vol copy start -S source:source_volume dest:dest_volume vol copy start -S /vol/vol0 filerB:/vol/vol1

The estimated rate of change of data between snap delta /vol/tradvol1 tradvol1_snapshot_08122010 Snapshot copies in a volume The estimated amount of space freed if you delete the specified Snapshot copies

snap reclaimable /vol/tradvol1 tradvol1_snapshot_08122010
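The snapshot commands above chain together into a simple lifecycle; a sketch using example volume and snapshot names:

# create and list a snapshot of the volume
snap create tradvol1 before_upgrade
snap list tradvol1
# estimate the rate of change since the snapshot was taken
snap delta /vol/tradvol1 before_upgrade
# roll the whole volume back if needed
snap restore -s before_upgrade -t vol /vol/tradvol1
# remove the snapshot once it is no longer needed
snap delete tradvol1 before_upgrade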

File Access using NFS


Export Options actual=<path Specifies the actual file system path corresponding to the exported file system path. > anon=<uid>| Specifies the effective user ID (or name) of all anonymous or root NFS client users that access the file system path. <name> nosuid ro | ro=clientid rw | rw=clientid Disables setuid and setgid executables and mknod commands on the file system path. Specifies which NFS clients have read-only access to the file system path. Specifies which NFS clients have read-write access to the file system path.

Specifies which NFS clients have root access to the file system path. If you specify the root= option, you must specify at least one NFS root=clientid client identifier. To exclude NFS clients from the list, prepend the NFS client identifiers with a minus sign (-). Specifies the security types that an NFS client must support to access the file system path. To apply the security types to all types of access, specify the sec= option once. To apply the security types to specific types of access (anonymous, non-super user, read-only, read-write, or root), specify the sec= option at least twice, once before each access type to which it applies (anon, nosuid, ro, rw, or root, respectively).

sec=sectype


security types could be one of the following: none sys No security. Data ONTAP treats all of the NFS client's users as anonymous users. Standard UNIX (AUTH_SYS) authentication. Data ONTAP checks the NFS credentials of all of the NFS client's users, applying the file access permissions specified for those users in the NFS server's /etc/passwd file. This is the default security type. Kerberos(tm) Version 5 authentication. Data ONTAP uses data encryption standard (DES) key encryption to authenticate the NFS client's users. Kerberos(tm) Version 5 integrity. In addition to authenticating the NFS client's users, Data ONTAP uses message authentication codes (MACs) to verify the integrity of the NFS client's remote procedure requests and responses, thus preventing "man-in-the-middle" tampering. Kerberos(tm) Version 5 privacy. In addition to authenticating the NFS client's users and verifying data integrity, Data ONTAP encrypts NFS arguments and results to provide privacy. rw=10.45.67.0/24 ro,root=@trusted,rw=@friendly rw,root=192.168.0.80,nosuid Export Commands Displaying exportfs exportfs -q <path> # create export in memory and write to /etc/exports (use default options) exportfs -p /vol/nfs1 create # create export in memory and write to /etc/exports (use specific options) exportsfs -io sec=none,rw,root=192.168.0.80,nosuid /vol/nfs1 # create export in memory only using own specific options exportsfs -io sec=none,rw,root=192.168.0.80,nosuid /vol/nfs1 # Memory only exportfs -u <path> remove # Memory and /etc/exportfs exportfs -z <path> export all check access flush reload storage path exportfs -a exportfs -c 192.168.0.80 /vol/nfs1 exportfs -f exportfs -f <path> exportfs -r exportfs -s <path>

krb5

krb5i

krb5p

Examples


Write export to a file

exportfs -w <path/export_file> # Suppose /vol/vol0 is exported with the following export options: -rw=pig:horse:cat:dog,ro=duck,anon=0

fencing

# The following command enables fencing of cat from /vol/vol0 exportfs -b enable save cat /vol/vol0 # cat moves to the front of the ro= list for /vol/vol0: -rw=pig:horse:dog,ro=cat:duck,anon=0

stats

nfsstat
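As a worked example of the export options and commands above (the subnet, admin host and volume are examples only):

# export a volume read-write to one subnet with root access for one admin host,
# and persist the rule to /etc/exports
exportfs -p rw=10.45.67.0/24,root=192.168.0.80 /vol/nfs1
# check what a particular client is allowed to do
exportfs -c 192.168.0.80 /vol/nfs1
# reload all exports after editing /etc/exports by hand
exportfs -r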

File Access using CIFS


Useful CIFS options change the options wafl.default_security_style {ntfs | unix | mixed} security style timeout options cifs.idle_timeout time options cifs.oplocks.enable on Note: Under some circumstances, if a process has an exclusive oplock Performance on a file and a second process attempts to open the file, the first process must invalidate cached data and flush writes and locks. The client must then relinquish the oplock and access to the file. If there is a network failure during this flush, cached write data might be lost. CIFS Commands /etc/cifsconfig_setup.cfg /etc/usermap.cfs /etc/passwd /etc/cifsconfig_share.cfg

useful files

Note: use "rdfile" to read the file cifs setup CIFS setup start stop sessions Note: you will be prompted to answer a number of questions based on what requirements you need. cifs restart cifs terminate # terminate a specific client cifs terminate <client_name>|<IP Address> cifs sessions


cifs sessions <user> cifs sessions <IP Address> # Authentication cifs sessions -t # Changes cifs sessions -c # Security Info cifs session -s Broadcast message cifs broadcast * "message" cifs broadcast <client_name> "message" cifs access <share> <user|group> <permission> # Examples cifs access sysadmins -g wheel Full Control cifs access -delete releases ENGINEERING\mary Note: rights can be Unix-style combinations of r w x - or NT-style "No Access", "Read", "Change", and "Full Control" stats cifs stat <interval> cifs stat <user> cifs stat <IP Address> # create a volume in the normal way # then using qtrees set the style of the volume {ntfs | unix | mixed} create a share # Now you can create your share cifs shares -add TEST /vol/flexvol1/TEST -comment "Test Share " forcegroup workgroup -maxusers 100 cifs shares -change sharename {-browse | -nobrowse} {-comment desc | - nocomment} {-maxusers userlimit | -nomaxusers} {-forcegroup groupname | -noforcegroup} {-widelink | -nowidelink} {symlink_strict_security | - nosymlink_strict_security} {-vscan | change share novscan} {-vscanread | - novscanread} {-umask mask | -noumask {characteristics no_caching | -manual_caching | - auto_document_caching | auto_program_caching} # example cifs shares -change <sharename> -novscan # Display home directories cifs homedir home directories # Add a home directory wrfile -a /etc/cifs_homedir.cfg /vol/TEST # check it

permissions


rdfile /etc/cifs_homedir.cfg # Display for a Windows Server net view \\<Filer IP Address> # Connect net use * \\192.168.0.75\TEST Note: make sure the directory exists # add a domain controller cifs prefdc add lab 10.10.10.10 10.10.10.11 # delete a domain controller cifs prefdc delete lab domain controller # List domain information cifs domaininfo # List the preferred controllers cifs prefdc print # Restablishing cifs resetdc change filers cifs changefilerpwd domain password sectrace add [-ip ip_address] [-ntuser nt_username] [-unixuser unix_username] [-path path_prefix] [-a] #Examples sectrace add -ip 192.168.10.23 sectrace add -unixuser foo -path /vol/vol0/home4 -a Tracing permission problems # To remove sectrace delete all sectrace delete <index> # Display tracing sectrace show # Display error code status sectrace print-status <status_code> sectrace print-status 1:51544850432:32:78
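Pulling the CIFS commands above into one pass, a sketch for publishing a new share (the share, qtree and group names are made up):

# create a qtree with NTFS security for the share
qtree create /vol/flexvol1/finance
qtree security /vol/flexvol1/finance ntfs
# share it and limit the number of connections
cifs shares -add FINANCE /vol/flexvol1/finance -comment "Finance team" -maxusers 50
# grant the finance group change access
cifs access FINANCE ENGINEERING\finance Change
# later, see who is connected
cifs sessions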

File Access using FTP


Useful Options


Enable Disable

options ftpd.enable on options ftpd.enable off options ftpd.locking delete options ftpd.locking none

File Locking

Note: To prevent users from modifying files while the FTP server is transferring them, you can enable FTP file locking. Otherwise, you can disable FTP file locking. By default, FTP file locking is disabled. options ftpd.auth_style {unix | ntlm | mixed} options ftpd.bypass_traverse_checking on options ftpd.bypass_traverse_checking off Note: If the ftpd.bypass_traverse_checking option is set to off, when a user attempts to access a file using FTP, Data ONTAP checks the traverse (execute) permission for all directories in the path to the file. If any of the intermediate directories does not have the "X" (traverse permission), Data ONTAP denies access to the file. If the ftpd.bypass_traverse_checking option is set to on, when a user attempts to access a file, Data ONTAP does not check the traverse permission for the intermediate directories when determining whether to grant or deny access to the file.

Authenication Style

bypassing of FTP traverse checking

Restricting FTP options ftpd.dir.restriction on users to a options ftpd.dir.restriction off specific directory Restricting FTP users to their home directories options ftpd.dir.override "" or a default directory Maximum number of connections idle timeout value options ftpd.max_connections n options ftpd.max_connections_threshold n options ftpd.idle_timeout n s | m | h options ftpd.anonymous.enable on options ftpd.anonymous.enable off anonymous logins # specify the name for the anonymous login options ftpd.anonymous.name username # create the directory for the anonymous login options ftpd.anonymous.home_dir homedir FTP Commands


/etc/log/ftp.cmd /etc/log/ftp.xfer Log files # specify the max number of logfiles (default is 6) and size options ftpd.log.nfiles 10 options ftpd.log.filesize 1G Note: use rdfile to view Restricting access /etc/ftpusers Note: using rdfile and wrfile to access /etc/ftpusers ftp stat stats # to reset ftp stat -z
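A minimal FTP enablement using only the options and files above might look like this sketch (the limits are example values):

# turn the FTP server on and lock users into their home directories
options ftpd.enable on
options ftpd.dir.restriction on
# cap concurrent connections and idle time
options ftpd.max_connections 200
options ftpd.idle_timeout 15m
# watch the transfer log
rdfile /etc/log/ftp.xfer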

File Access using HTTP


HTTP Options enable disable options httpd.enable on options httpd.enable off

options httpd.bypass_traverse_checking on Enabling or disabling the options httpd.bypass_traverse_checking off bypassing of HTTP traverse checking Note: this is similar to the FTP version root directory Host access options httpd.rootdir /vol0/home/users/pages options httpd.access host=Host1 AND if=e3 options httpd.admin.access host!=Host1 HTTP Commands /etc/log/httpd.log Log files # use the below to change the logfile format options httpd.log.format alt1 Note: use rdfile to view redirects pass rule fail rule mime types Note: use rdfile and wrfile to edit interface firewall stats # reset the stats ifconfig f0 untrusted httpstat [-dersta] redirect /cgi-bin/* http://cgi-host/* pass /image-bin/* fail /usr/forbidden/* /etc/httpd.mimetypes


httpstat -z[derta]

Network Interfaces
Display ifconfig -a ifconfig <interface> ifconfig e0 <IP Address> ifconfig e0a <IP Address> IP address # Remove a IP Address ifconfig e3 0 subnet mask broadcast media type maximum transmission unit (MTU) ifconfig e0a netmask <subnet mask address> ifconfig e0a broadcast <broadcast address> ifconfig e0a mediatype 100tx-fd ifconfig e8 mtusize 9000 ifconfig <interface_name> <flowcontrol> <value> # example ifconfig e8 flowcontrol none Note: value is the flow control type. You can specify the following values for the flowcontrol option: none - No flow control receive - Able to receive flow control frames send - Able to send flow control frames full - Able to send and receive flow control frames The default flowcontrol type is full. ifconfig e8 untrusted trusted Note: You can specify whether a network interface is trustworthy or untrustworthy. When you specify an interface as untrusted (untrustworthy), any packets received on the interface are likely to be dropped. ifconfig e8 partner <IP Address> ## You must enable takeover on interface failures by entering the following commands: options cf.takeover.on_network_interface_failure enable ifconfig interface_name {nfo|-nfo} nfo Enables negotiated failover -nfo Disables negotiated failover

Flow control

HA Pair


Note: In an HA pair, you can assign a partner IP address to a network interface. The network interface takes over this IP address when a failover occurs # Create alias ifconfig e0 alias 192.0.2.30 Alias # Remove alias ifconfig e0 -alias 192.0.2.30 # Block options interface.blocked.cifs e9 Block/Unblock options interface.blocked.cifs e0a,e0b protocols # Unblock options interface.blocked.cifs "" ifstat netstat Stats Note: there are many options to both these commands so I will leave to the man pages bring up/down ifconfig <interface> up an interface ifconfig <interface> down
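Combining the interface commands above, initial setup of a data interface might look like the following sketch (addresses and interface names are examples):

# address and netmask on e0a, then jumbo frames
ifconfig e0a 192.168.0.20 netmask 255.255.255.0
ifconfig e0a mtusize 9000
# add an alias for a second subnet
ifconfig e0a alias 192.168.10.20
# HA pair: take over the partner's address on failover
ifconfig e0a partner 192.168.0.21
# bring the interface up and check it
ifconfig e0a up
ifstat e0a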

Routing
# using wrfile and rdfile edit the /etc/rc file with the below route add default 192.168.0.254 1 # the full /etc/rc file will look like something below hostname netapp1 ifconfig e0 192.168.0.10 netmask 255.255.255.0 mediatype 100txfd route add default 192.168.0.254 1 routed on options ip.fastpath.enable {on|off} enable/disable fast path Note: on Enables fast path off Disables fast path routed {on|off} enable/disable routing daemon Note: on Turns on the routed daemon off Turns off the routed daemon netstat -rn route -s routed status route add 192.168.0.15 gateway.com 1

default route

Display routing table Add to routing


table

Hosts and DNS


Hosts
# use wrfile and rdfile to read and edit the /etc/hosts file; it basically uses the same rules as a Unix hosts file

# use wrfile and rdfile to read and edit /etc/nsswitch.conf file , it nsswitch file basically uses the same rules as a # Unix nsswitch.conf file # use wrfile and rdfile to read and edit /etc/resolv.conf file , it basically uses the same rules as a # Unix resolv.conf file DNS options dns.enable {on|off} Note: on Enables DNS off Disables DNS Domain Name options dns.domainname <domain> options dns.cache.enable options dns.cache.disable DNS cache # To flush the DNS cache dns flush # To see dns cache information dns info options dns.update.enable {on|off|secure} DNS updates Note: on Enables dynamic DNS updates off Disables dynamic DNS updates secure Enables secure dynamic DNS updates options dns.update.ttl <time> # Example options dns.update.ttl 2h Note: time can be set in seconds (s), minutes (m), or hours (h), with a minimum value of 600 seconds and a maximum value of 24 hour

time-to-live (TTL)

VLAN
Create vlan create [-g {on|off}] ifname vlanid


# Create VLANs with identifiers 10, 20, and 30 on the interface e4 of a storage system by using the following command: vlan create e4 10 20 30 # Configure the VLAN interface e4-10 by using the following command ifconfig e4-10 192.168.0.11 netmask 255.255.255.0 Add vlan add e4 40 50 # Delete specific VLAN vlan delete e4 30 Delete # Delete All VLANs on a interface vlan delete e4 Enable/Disable GRVP on VLAN vlan modify -g {on|off} ifname vlan stat <interface_name> <vlan_id> Stat # Examples vlan stat e4 vlan stat e4 10

Interface Groups
# To create a single-mode interface group, enter the following command: ifgrp create single SingleTrunk1 e0 e1 e2 e3 Create (singlemode) # To configure an IP address of 192.168.0.10 and a netmask of 255.255.255.0 on the singlemode interface group SingleTrunk1 ifconfig SingleTrunk1 192.168.0.10 netmask 255.255.255.0 # To specify the interface e1 as preferred ifgrp favor e1 # To create a static multimode interface group, comprising interfaces e0, e1, e2, and e3 and using MAC # address load balancing ifgrp create multi MultiTrunk1 -b mac e0 e1 e2 e3 # To create a dynamic multimode interface group, comprising interfaces e0, e1, e2, and e3 and using IP # address based load balancing ifgrp create lacp MultiTrunk1 -b ip e0 e1 e2 e3 # To create two interface groups and a second-level interface group. In this example, IP address load # balancing is used for the multimode interface groups. ifgrp create multi Firstlev1 e0 e1 ifgrp create multi Firstlev2 e2 e3

Create ( multimode)

Create second level intreface group


ifgrp create single Secondlev Firstlev1 Firstlev2 # To enable failover to a multimode interface group with higher aggregate bandwidth when one or more of # the links in the active multimode interface group fail options ifgrp.failover.link_degraded on Note: You can create a second-level interface group by using two multimode interface groups. Secondlevel interface groups enable you to provide a standby multimode interface group in case the primary multimode interface group fails. # Use the following commands to create a second-level interface group in an HA pair. In this example, # IP-based load balancing is used for the multimode interface groups. # On StorageSystem1: ifgrp create multi Firstlev1 e1 e2 ifgrp create multi Firstlev2 e3 e4 ifgrp create single Secondlev1 Firstlev1 Firstlev2 # On StorageSystem2 : ifgrp create multi Firstlev3 e5 e6 ifgrp create multi Firstlev4 e7 e8 ifgrp create single Secondlev2 Firstlev3 Firstlev4 # On StorageSystem1: ifconfig Secondlev1 partner Secondlev2 # On StorageSystem2 : ifconfig Secondlev2 partner Secondlev1 Favoured/nonfavoured interface Add # select favoured interface ifgrp nofavor e3 # select a non-favoured interface ifgrp nofavor e3 ifgrp add MultiTrunk1 e4 ifconfig MultiTrunk1 down ifgrp delete MultiTrunk1 e4 Delete Note: You must configure the interface group to the down state before you can delete a network interface from the interface group ifconfig ifgrp_name down ifgrp destroy ifgrp_name Destroy Note: You must configure the interface group to the down state before you can delete a network interface

Create second level intreface group in a HA pair


from the interface group Enable/disable a ifconfig ifgrp_name up interface group ifconfig ifgrp_name down Status Stat ifgrp status [ifgrp_name] ifgrp stat [ifgrp_name] [interval]

Diagnostic Tools
Useful options # Throttle ping options ip.ping_throttle.drop_level <packets_per_second> Ping thottling # Disable ping throttling options ip.ping_throttle.drop_level 0 options ip.icmp_ignore_redirect.enable on Forged IMCP attacks Note: You can disable ICMP redirect messages to protect your storage system against forged ICMP redirect attacks. Useful Commands netdiag The netdiag command continuously gathers and analyzes statistics, and performs diagnostic tests. These diagnostic tests identify and report problems with your physical network or transport layers and suggest remedial action. You can use the ping command to test whether your storage system can reach other hosts on your network. You can use the pktt command to trace the packets sent and received in the storage system's network.

ping pktt


Chapter 7: NetApp Disk Administration


In this section I will cover disk administration; common disk and system problems will get a section of their own. I will cover the basics of the following:

Storage Disks Aggregates (RAID options) Volumes (FlexVol and Traditional) FlexCache FlexClone Deduplication QTrees CIFS Oplocks Security styles Quotas

I have tried to cover as much as possible in as little space as I can (I like to keep things short and sweet). I have only briefly touched on some subjects, so for more detail on those I point you to the NetApp documentation. As I get more experienced with the NetApp products I will come back and update this section.

Storage
The storage command can configure and administer a disk enclosure; the main storage commands are below.

Display
storage show adapter
storage show disk [-a|-x|-p|-T]
storage show expander
storage show fabric
storage show fault
storage show hub
storage show initiators
storage show mc
storage show port
storage show shelf
storage show switch
storage show tape [supported]
storage show acp
storage array show
storage array show-ports
storage array show-luns
storage array show-config

Enable
storage enable adapter

Disable
storage disable adapter

Remove port
storage array remove-port <array_name> -p <WWPN>

Load Balance
storage load balance

Power Cycle
storage power_cycle shelf -h
storage power_cycle shelf start -c <channel name>
storage power_cycle shelf completed

Rename switch
storage rename <oldname> <newname>

Disks
Your NetApp filer will have a number of disks attached that can be used; when attached, each disk gets a device name.

Disk name    This is the physical disk itself. Normally the disk will reside in a disk enclosure and will have a pathname like 2a.17, depending on the type of disk enclosure:
             2a = SCSI adapter
             17 = disk SCSI ID

Any disks that are classed as spare will be used in any group to replace failed disks. They can also be assigned to any aggregate. Disks are assigned to a specific pool. There are only four types of disks in Data ONTAP (I will discuss RAID in the aggregate section):

Data      Holds data stored within the RAID group
Spare     Does not hold usable data but is available to be added to a RAID group in an aggregate, also known as a hot spare
Parity    Stores data reconstruction information within the RAID group
dParity   Stores double-parity information within the RAID group, if RAID-DP is enabled

There are a number of disk commands that you can use:

Display
disk show
disk show <disk_name>
disk_list
sysconfig -r
sysconfig -d
## list all unassigned/assigned disks
disk show -n
disk show -a

Adding (assigning)
## Add a specific disk to pool1, the mirror pool
disk assign <disk_name> -p 1
## Assign all disks to pool 0; by default they are assigned to pool 0 if the "-p" option is not specified
disk assign all -p 0

Remove (spin down disk)
disk remove <disk_name>

Reassign
disk reassign -d <new_sysid>

Replace
disk replace start <disk_name> <spare_disk_name>
disk replace stop <disk_name>
Note: uses Rapid RAID Recovery to copy data from the specified disk to the specified spare disk; you can stop this process using the stop command

Zero spare disks
disk zero spares

Fail a disk
disk fail <disk_name>

Scrub a disk
disk scrub start
disk scrub stop

Sanitize
disk sanitize start <disk_list>
disk sanitize abort <disk_list>
disk sanitize status
disk sanitize release <disk_list>
Note: the release modifies the state of the disk from sanitize to spare. Sanitize requires a license.

Maintenance
disk maint start -d <disk_list>
disk maint abort <disk_list>
disk maint list
disk maint status
Note: you can test the disk using maintenance mode

Swap a disk
disk swap
disk unswap
Note: this stalls all SCSI I/O until you physically replace or add a disk; it can be used on SCSI disks only

Statistics
disk_stat <disk_name>

Simulate a pulled disk
disk simpull <disk_name>

Simulate a pushed disk
disk simpush -l
disk simpush <complete path of disk obtained from above command>

## Example
ontap1> disk simpush -l
The following pulled disks are available for pushing:
v0.16:NETAPP__:VD-1000MB-FZ520:14161400:2104448
ontap1> disk simpush v0.16:NETAPP__:VD-1000MB-FZ520:14161400:2104448
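A common spare-handling sequence built from the commands above (the disk name is an example):

## list disks not yet assigned to this controller
disk show -n
## claim one into the default pool
disk assign 2a.17 -p 0
## confirm it appears as a spare and pre-zero it
aggr status -s
disk zero spares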

Aggregates
Disks are grouped together in aggregates; these aggregates provide storage to the volume or volumes that they contain. Each aggregate has its own RAID configuration, plex structure and set of assigned disks or array LUNs. You can create traditional volumes or NetApp's FlexVol volumes (see the section on volumes below). There are two types of aggregate:

32bit - Maximum 16TB 64bit - Maximum 100TB

An aggregate has only one plex (pool 0); if you use SyncMirror (a licensed product) you can mirror the aggregate, in which case it will have two plexes (pool 0 and pool 1). Disks can be assigned to different pools, which will be used for hot spares or for extending aggregates in those pools. The plexes are updated simultaneously when mirroring aggregates and need to be resynchronized if you have problems with one of the plexes. You can see how mirroring works in the diagram below.

When using RAID4 or RAID-DP the largest disks will be used as the parity disk(s); if you add a new, larger disk to the aggregate, it will be reassigned as the parity disk.

An aggregate can be in one of three states:

Online      Read and write access to volumes is allowed
Restricted  Some operations, such as parity reconstruction, are allowed, but data access is not allowed
Offline     No access to the aggregate is allowed

The aggregate can have a number of different status values:

32-bit             This aggregate is a 32-bit aggregate
64-bit             This aggregate is a 64-bit aggregate
aggr               This aggregate is capable of containing FlexVol volumes
copying            This aggregate is currently the target aggregate of an active copy operation
degraded           This aggregate contains at least one RAID group with single disk failure that is not being reconstructed
double degraded    This aggregate contains at least one RAID group with double disk failure that is not being reconstructed (RAID-DP aggregate only)
foreign            Disks that the aggregate contains were moved to the current storage system from another storage system
growing            Disks are in the process of being added to the aggregate
initializing       The aggregate is in the process of being initialized
invalid            The aggregate contains no volumes and none can be added. Typically this happens only after an aborted "aggr copy" operation
ironing            A WAFL consistency check is being performed on the aggregate
mirror degraded    The aggregate is mirrored and one of its plexes is offline or resynchronizing
mirrored           The aggregate is mirrored
needs check        A WAFL consistency check needs to be performed on the aggregate
normal             The aggregate is unmirrored and all of its RAID groups are functional
out-of-date        The aggregate is mirrored and needs to be resynchronized
partial            At least one disk was found for the aggregate, but two or more disks are missing
raid0              The aggregate consists of RAID 0 (no parity) RAID groups
raid4              The aggregate consists of RAID 4 RAID groups
raid_dp            The aggregate consists of RAID-DP RAID groups
reconstruct        At least one RAID group in the aggregate is being reconstructed
redirect           Aggregate reallocation or file reallocation with the "-p" option has been started on the aggregate; read performance will be degraded
resyncing          One of the mirror aggregate's plexes is being resynchronized
snapmirror         The aggregate is a SnapMirror replica of another aggregate (traditional volumes only)
trad               The aggregate is a traditional volume and cannot contain FlexVol volumes
verifying          A mirror operation is currently running on the aggregate
wafl inconsistent  The aggregate has been marked corrupted; contact technical support

Mixed disk speeds and types
You can mix disk speeds and different disk types within the aggregate; make sure you change the options below:

## to allow mixed speeds
options raid.rpm.fcal.enable on
options raid.rpm.ata.enable on

## to allow mixed disk types (SAS, SATA, FC, ATA)
options raid.disktype.enable off
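As a quick example tying the 32-bit/64-bit formats above to the aggr create syntax used later in this chapter, the -B flag selects the block format; a sketch with an invented aggregate name:

## create a 64-bit aggregate from 24 disks with a RAID group size of 16
aggr create aggr64 -B 64 -r 16 24
## confirm the format and RAID layout
aggr status aggr64 -v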

Now I am only going to detail the common commands that you use with aggregates; I will update this section and the cheatsheet as I get more experienced with the NetApp product.

Displaying (check you have spare disks)
aggr status
aggr status -r
aggr status <aggregate> [-v]
aggr status -s

Adding (creating)
## Syntax - if no option is specified then the default is used
aggr create <aggr_name> [-f] [-m] [-n] [-t {raid0 |raid4 |raid_dp}] [-r raid_size] [-T disk_type] [-R rpm] [-L] [-B {32|64}] <disk_list>

## create aggregate called newaggr that can have a maximum of 8 RAID groups
aggr create newaggr -r 8 -d 8a.16 8a.17 8a.18 8a.19

## create aggregate called newfastaggr using 20 x 15000rpm disks
aggr create newfastaggr -R 15000 20

## create aggregate called newFCALaggr (note SAS and FC disks may be used)
aggr create newFCALaggr -T FCAL 15

Note:
-f = overrides the default behavior that does not permit disks in a plex to belong to different disk pools
-m = specifies the optional creation of a SyncMirror
-n = displays the results of the command but does not execute it
-r = maximum size (number of disks) of the RAID groups for this aggregate
-T = disk type ATA, SATA, SAS, BSAS, FCAL or LUN
-R = rpm, which includes 5400, 7200, 10000 and 15000

Remove (destroying)
aggr offline <aggregate>
aggr destroy <aggregate>

Unremoving (undestroying)
aggr undestroy <aggregate>

Rename
aggr rename <old name> <new name>

Increase size
## Syntax
aggr add <aggr_name> [-f] [-n] [-g {raid_group_name | new |all}] <disk_list>
## add an additional disk to aggregate pfvAggr, use "aggr status" to get the group name
aggr status pfvAggr -r
aggr add pfvAggr -g rg0 -d v5.25
## Add 4 x 300GB disks to aggregate aggr1
aggr add aggr1 4@300

offline / online / restricted state
aggr offline <aggregate>
aggr online <aggregate>
aggr restrict <aggregate>

Change an aggregate's options
## to display the aggregate's options
aggr options <aggregate>
## change an aggregate's raid type
aggr options <aggregate> raidtype raid_dp
aggr options <aggregate> raidtype raid4
## change an aggregate's raid size
aggr options <aggregate> raidsize 4

Show space usage
aggr show_space <aggregate>

Mirror
aggr mirror <aggregate>

Split mirror
aggr split <aggregate/plex> <new_aggregate>

Copy from one aggregate to another
## Obtain the status
aggr copy status
## Start a copy
aggr copy start <aggregate source> <aggregate destination>
## Abort a copy - obtain the operation number by using "aggr copy status"
aggr copy abort <operation number>


## Throttle the copy 10=full speed, 1=one-tenth full speed aggr copy throttle <operation number> <throttle speed> ## Media scrub status aggr media_scrub status aggr scrub status ## start a scrub operation aggr scrub start [ aggrname | plexname | groupname ] ## stop a scrub operation aggr scrub stop [ aggrname | plexname | groupname ] ## suspend a scrub operation aggr scrub suspend [ aggrname | plexname | groupname ] ## resume a scrub operation aggr scrub resume [ aggrname | plexname | groupname ] Scrubbing (parity) Note: Starts parity scrubbing on the named online aggregate. Parity scrubbing compares the data disks to the parity disk(s) in their RAID group, correcting the parity disks contents as necessary. If no name is given, parity scrubbing is started on all online aggregates. If an aggregate name is given, scrubbing is started on all RAID groups contained in the aggregate. If a plex name is given, scrubbing is started on all RAID groups contained in the plex. Look at the following system options: raid.scrub.duration 360 raid.scrub.enable on raid.scrub.perf_impact low raid.scrub.schedule ## verify status aggr verify status ## start a verify operation aggr verify start [ aggrname ] Verify (mirroring) ## stop a verify operation aggr verify stop [ aggrname ] ## suspend a verify operation aggr verify suspend [ aggrname ] ## resume a verify operation


aggr verify resume [ aggrname ] Note: Starts RAID mirror verification on the named online mirrored aggregate. If no name is given, then RAID mirror verification is started on all online mirrored aggregates. Verification compares the data in both plexes of a mirrored aggregate. In the default case, all blocks that differ are logged, but no changes are made. aggr media_scrub status Note: Prints the media scrubbing status of the named aggregate, plex, or group. If no name is given, then status is printed for all RAID groups currently running a media scrub. The status includes a percent-complete and whether it is suspended. Look at the following system options: raid.media_scrub.enable on raid.media_scrub.rate 600 raid.media_scrub.spares.enable on

Media Scrub

Volumes
Volumes contain file systems that hold user data that is accessible using one or more of the access protocols supported by Data ONTAP, including NFS, CIFS, HTTP, FTP, FC, and iSCSI. Each volume depends on its containing aggregate for all its physical storage, that is, for all storage in the aggregate's disks and RAID groups.

A FlexVol volume is a volume that is loosely coupled to its containing aggregate. A FlexVol volume can share its containing aggregate with other FlexVol volumes. Thus, a single aggregate can be the shared source of all the storage used by all the FlexVol volumes contained by that aggregate. Because a FlexVol volume is managed separately from the aggregate, you can create small FlexVol volumes (20 MB or larger), and you can increase or decrease the size of FlexVol volumes in increments as small as 4 KB.

When a FlexVol volume is created, it reserves a small amount of extra space (approximately 0.5 percent of its nominal size) from the free space of its containing aggregate. This space is used to store the volume's metadata. Therefore, upon creation, a FlexVol volume with a space guarantee of volume uses free space from the aggregate equal to its size x 1.005. A newly-created FlexVol volume with a space guarantee of none or file uses free space equal to 0.005 x its nominal size. There are two types of FlexVol volume:


32-bit
64-bit

If you want to use Data ONTAP to move data between a 32-bit volume and a 64-bit volume, you must use ndmpcopy or qtree SnapMirror. You cannot use the vol copy command or volume SnapMirror between a 32-bit volume and a 64-bit volume.

A traditional volume is a volume that is contained by a single, dedicated aggregate. It is tightly coupled with its containing aggregate. No other volumes can get their storage from this containing aggregate. The only way to increase the size of a traditional volume is to add entire disks to its containing aggregate. You cannot decrease the size of a traditional volume. The smallest possible traditional volume uses all the space on two disks (for RAID4) or three disks (for RAID-DP). Traditional volumes and their containing aggregates are always of type 32-bit. You cannot grow a traditional volume larger than 16 TB.

You can change many attributes on a volume (a short example follows the list):

The name of the volume
The size of the volume (assigned only for FlexVol volumes; the size of traditional volumes is determined by the size and number of their disks or array LUNs)
A security style, which determines whether a volume can contain files that use UNIX security, files that use NT file system (NTFS) file security, or both types of files
Whether the volume uses CIFS oplocks (opportunistic locks)
The language of the volume
The level of space guarantees (for FlexVol volumes only)
Disk space and file limits (quotas, optional)
A Snapshot copy schedule (optional)
Whether the volume is a root volume
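Most of these attributes map onto commands covered later in this chapter. As a rough sketch only, using a hypothetical volume called vol1 and example values:

vol rename vol1 web_vol1             ## change the volume name
vol size web_vol1 +10g               ## grow a FlexVol volume by 10GB
vol lang web_vol1 en_US              ## change the volume language
qtree security /vol/web_vol1 ntfs    ## set the volume security style
qtree oplocks /vol/web_vol1 enable   ## turn CIFS oplocks on for the volume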

Every volume has a language. The language of the volume determines the character set Data ONTAP uses to display file names and data for that volume. Changing the language of an existing volume can cause some files to become inaccessible. The language of the root volume has special significance, because it affects or determines the following items:

Default language for all volumes
System name
Domain name
Console commands and command output
NFS user and group names
CIFS share names
CIFS user account names
Access from CIFS clients that don't support Unicode


How configuration files in /etc are read
How the home directory definition file is read
Qtrees
Snapshot copies
Volumes
Aggregates

The following table displays the possible states for volumes.

Online      Read and write access to this volume is allowed.
Restricted  Some operations, such as parity reconstruction, are allowed, but data access is not allowed.
Offline     No access to the volume is allowed.

There are a number of possible status values for volumes:

access denied     The origin system is not allowing access. (FlexCache volumes only.)
active redirect   The volume's containing aggregate is undergoing reallocation (with the -p option specified). Read performance may be reduced while the volume is in this state.
connecting        The caching system is trying to connect to the origin system. (FlexCache volumes only.)
copying           The volume is currently the target of an active vol copy or snapmirror operation.
degraded          The volume's containing aggregate contains at least one degraded RAID group that is not being reconstructed after single disk failure.
double degraded   The volume's containing aggregate contains at least one degraded RAID-DP group that is not being reconstructed after double disk failure.
flex              The volume is a FlexVol volume.
flexcache         The volume is a FlexCache volume.
foreign           Disks used by the volume's containing aggregate were moved to the current storage system from another storage system.
growing           Disks are being added to the volume's containing aggregate.
initializing      The volume's containing aggregate is being initialized.
invalid           The volume does not contain a valid file system.
ironing           A WAFL consistency check is being performed on the volume's containing aggregate.
lang mismatch     The language setting of the origin volume was changed since the caching volume was created. (FlexCache volumes only.)
mirror degraded   The volume's containing aggregate is mirrored and one of its plexes is offline or resynchronizing.


mirrored          The volume's containing aggregate is mirrored.
needs check       A WAFL consistency check needs to be performed on the volume's containing aggregate.
out-of-date       The volume's containing aggregate is mirrored and needs to be resynchronized.
partial           At least one disk was found for the volume's containing aggregate, but two or more disks are missing.
raid0             The volume's containing aggregate consists of RAID0 (no parity) groups (array LUNs only).
raid4             The volume's containing aggregate consists of RAID4 groups.
raid_dp           The volume's containing aggregate consists of RAID-DP groups.
reconstruct       At least one RAID group in the volume's containing aggregate is being reconstructed.
redirect          The volume's containing aggregate is undergoing aggregate reallocation or file reallocation with the -p option. Read performance to volumes in the aggregate might be degraded.
rem vol changed   The origin volume was deleted and re-created with the same name. Re-create the FlexCache volume to reenable the FlexCache relationship. (FlexCache volumes only.)
rem vol unavail   The origin volume is offline or has been deleted. (FlexCache volumes only.)

remote nvram err  The origin system is experiencing problems with its NVRAM. (FlexCache volumes only.)
resyncing         One of the plexes of the volume's containing mirrored aggregate is being resynchronized.
snapmirrored      The volume is in a SnapMirror relationship with another volume.
trad              The volume is a traditional volume.
unrecoverable     The volume is a FlexVol volume that has been marked unrecoverable; contact technical support.
unsup remote vol  The origin system is running a version of Data ONTAP that does not support FlexCache volumes or is not compatible with the version running on the caching system. (FlexCache volumes only.)
verifying         RAID mirror verification is running on the volume's containing aggregate.
wafl inconsistent The volume or its containing aggregate has been marked corrupted; contact technical support.

Usually, you should leave CIFS oplocks on for all volumes and qtrees. This is the default setting. However, you might turn CIFS oplocks off under certain circumstances. CIFS oplocks (opportunistic locks) enable the redirector on a CIFS client in certain file-sharing scenarios to perform client-side caching of read-ahead, write-behind, and lock information. A client can then work with a file (read or write it) without regularly reminding the server that it needs access to the file. This improves performance by reducing network traffic. You might turn CIFS oplocks off on a volume or a qtree under either of the following circumstances:

You are using a database application whose documentation recommends that CIFS oplocks be turned off.
You are handling critical data and cannot afford even the slightest data loss.

Otherwise, you can leave CIFS oplocks on. I will discuss CIFS and the other file access protocols in detail in another topic.

CIFS oplock options
cifs.oplocks.enable on
cifs.oplocks.opendelta 0
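As a small illustration, assuming a hypothetical qtree /vol/vol2/proj, oplocks could be disabled system-wide or just for that qtree using the options and qtree commands shown later in this chapter:

## disable oplocks for the whole storage system
options cifs.oplocks.enable off
## or disable them only for one qtree
qtree oplocks /vol/vol2/proj disable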

Every qtree and volume has a security style setting: NTFS, UNIX, or mixed. The setting determines whether files use Windows NT or UNIX (NFS) security. How you set up security styles depends on what protocols are licensed on your storage system. Although security styles can be applied to volumes, they are not shown as a volume attribute, and are managed for both volumes and qtrees using the qtree command. The security style for a volume applies only to files and directories in that volume that are not contained in any qtree. The volume security style does not affect the security style for any qtrees in that volume. The following table describes the three security styles and the effects of changing them.

Security style: NTFS
Description: For CIFS clients, security is handled using Windows NTFS ACLs. For NFS clients, the NFS UID (user ID) is mapped to a Windows SID (security identifier) and its associated groups. Those mapped credentials are used to determine file access, based on the NTFS ACL.
Note: To use NTFS security, the storage system must be licensed for CIFS. You cannot use an NFS client to change file or directory permissions on qtrees with the NTFS security style.
Effect of changing to this style: If the change is from a mixed qtree, Windows NT permissions determine file access for a file that had Windows NT permissions. Otherwise, UNIX-style (NFS) permission bits determine file access for files created before the change.
Note: If the change is from a CIFS storage system to a multiprotocol storage system, and the /etc directory is a qtree, its security style changes to NTFS.


Security style: UNIX
Description: Files and directories have UNIX permissions.
Effect of changing to this style: The storage system disregards any Windows NT permissions established previously and uses the UNIX permissions exclusively.

Security style: Mixed
Description: Both NTFS and UNIX security are allowed: a file or directory can have either Windows NT permissions or UNIX permissions. The default security style of a file is the style most recently used to set permissions on that file.
Effect of changing to this style: If NTFS permissions on a file are changed, the storage system recomputes UNIX permissions on that file. If UNIX permissions or ownership on a file are changed, the storage system deletes any NTFS permissions on that file.

Finally we get to the commands that are used to create and control volumes.

General Volume Operations (Traditional and FlexVol)

Displaying
vol status
vol status -v (verbose)
vol status -l (display language)

Remove (destroying)
vol offline <vol_name>
vol destroy <vol_name>

Rename
vol rename <old_name> <new_name>

Online
vol online <vol_name>

Offline
vol offline <vol_name>

Restrict
vol restrict <vol_name>

Decompress
vol decompress status
vol decompress start <vol_name>
vol decompress stop <vol_name>

Mirroring
vol mirror volname [-n] [-v victim_volname] [-f] [-d <disk_list>]
Note: Mirrors the currently unmirrored traditional volume volname, either with the specified set of disks or with the contents of another unmirrored traditional volume victim_volname, which will be destroyed in the process. The vol mirror command fails if either the chosen volname or victim_volname is a flexible volume. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite.

Change language
vol lang <vol_name> <language>

Change maximum number of files
## Display maximum number of files
maxfiles <vol_name>
## Change maximum number of files
maxfiles <vol_name> <max_num_files>

Change root volume
vol options <vol_name> root

Media Scrub
vol media_scrub status [volname|plexname|groupname -s disk-name][-v]
Note: Prints the media scrubbing status of the named aggregate, volume, plex, or group. If no name is given, then status is printed for all RAID groups currently running a media scrub. The status includes a percent-complete and whether it is suspended.
Look at the following system options:
raid.media_scrub.enable on
raid.media_scrub.rate 600
raid.media_scrub.spares.enable on

FlexVol Volume Operations (only)

Adding (creating)
## Syntax
vol create vol_name [-l language_code] [-s {volume|file|none}] <aggr_name> size{k|m|g|t}
## Create a 200MB volume using the english character set
vol create newvol -l en aggr1 200M
## Create 50GB flexvol volume
vol create vol1 aggr1 50g

Additional disks
# First find the aggregate the volume uses
vol container flexvol1
## add an additional disk to aggregate aggr1, use "aggr status" to get group name
aggr status aggr1 -r
aggr add aggr1 -g rg0 -d v5.25

Resizing
vol size <vol_name> [+|-] n{k|m|g|t}
## Increase flexvol1 volume by 100MB
vol size flexvol1 +100m

Automatically resizing
vol autosize vol_name [-m size {k|m|g|t}] [-i size {k|m|g|t}] on
## automatically grow by 10MB increments to a max of 500MB
vol autosize flexvol1 -m 500m -i 10m on

Determine free space and inodes
df -Ah
df -L
df -i


Determine size
vol size <vol_name>

Automatic free space preservation
vol options <vol_name> try_first [volume_grow|snap_delete]
Note: If you specify volume_grow, Data ONTAP attempts to increase the volume's size before deleting any Snapshot copies. Data ONTAP increases the volume size based on specifications you provided using the vol autosize command. If you specify snap_delete, Data ONTAP attempts to create more free space by deleting Snapshot copies, before increasing the size of the volume. Data ONTAP deletes Snapshot copies based on the specifications you provided using the snap autodelete command.

Display a FlexVol volume's containing aggregate
vol container <vol_name>
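Putting the two options above together, a minimal sketch of automatic free-space preservation for a hypothetical volume called flexvol1:

## let the volume grow on demand, in 10m steps up to a 500m maximum
vol autosize flexvol1 -m 500m -i 10m on
## when space runs low, try growing the volume before deleting Snapshot copies
vol options flexvol1 try_first volume_grow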

Cloning
vol clone create clone_vol [-s none|file|volume] -b parent_vol [parent_snap]
vol clone split start
vol clone split stop
vol clone split estimate
vol clone split status
Note: The vol clone create command creates a flexible volume named clone_vol on the local filer that is a clone of a "backing" flexible volume named parent_vol. A clone is a volume that is a writable snapshot of another volume. Initially, the clone and its parent share the same storage; more storage space is consumed only as one volume or the other changes.

Copying
vol copy start [-S|-s snapshot] <vol_source> <vol_destination>
vol copy status
vol copy abort <operation number>
vol copy throttle <operation_number> <throttle value 10-1>
## Example - Copies the nightly snapshot named nightly.1 on volume vol0 on the local filer to the volume vol0 on the remote
## filer named toaster1.
vol copy start -s nightly.1 vol0 toaster1:vol0
Note: Copies all data, including snapshots, from one volume to another. If the -S flag is used, the command copies all snapshots in the source volume to the destination volume. To specify a particular snapshot to copy, use the -s flag followed by the name of the snapshot.


If neither the -S nor -s flag is used in the command, the filer automatically creates a distinctively-named snapshot at the time the vol copy start command is executed and copies only that snapshot to the destination volume. The source and destination volumes must either both be traditional volumes or both be flexible volumes. The vol copy command will abort if an attempt is made to copy between different volume types. The source and destination volumes can be on the same filer or on different filers. If the source or destination volume is on a filer other than the one on which the vol copy start command was entered, specify the volume name in the filer_name:volume_name format.

Traditional Volume Operations (only)

Adding (creating)
vol|aggr create vol_name -v [-l language_code] [-f] [-m] [-n] [-v] [-t {raid4|raid_dp}] [-r raidsize] [-T disk-type] [-R rpm] [-L] disk-list
## create traditional volume using vol command
vol create tradvol1 -l en -t raid4 -d v5.26 v5.27
## Create traditional volume using 20 disks, each RAID group can have 10 disks
vol create vol1 -r 10 20

Additional disks
vol add volname [-f] [-n] [-g <raidgroup>] { ndisks[@size] | -d <disk_list> }
## add another disk to the already existing traditional volume
vol add tradvol1 -d v5.28

Splitting
aggr split <volname/plexname> <new_volname>

Scrubbing (parity)
## The newer "aggr scrub" command is preferred
vol scrub status [volname|plexname|groupname][-v]
vol scrub start [volname|plexname|groupname][-v]
vol scrub stop [volname|plexname|groupname][-v]
vol scrub suspend [volname|plexname|groupname][-v]
vol scrub resume [volname|plexname|groupname][-v]
Note: Prints the status of parity scrubbing on the named traditional volume, plex or RAID group. If no name is provided, the status is given on all RAID groups currently undergoing parity scrubbing. The status includes a percent-complete as well as the scrub's suspended status (if any).

Verify (mirroring)
## The newer "aggr verify" command is preferred
## verify status
vol verify status


## start a verify operation
vol verify start [ aggrname ]
## stop a verify operation
vol verify stop [ aggrname ]
## suspend a verify operation
vol verify suspend [ aggrname ]
## resume a verify operation
vol verify resume [ aggrname ]
Note: Starts RAID mirror verification on the named online mirrored aggregate. If no name is given, then RAID mirror verification is started on all online mirrored aggregates. Verification compares the data in both plexes of a mirrored aggregate. In the default case, all blocks that differ are logged, but no changes are made.

FlexCache Volumes
A FlexCache volume is a sparsely-populated volume on a local storage system that is backed by a volume on a different, possibly remote, storage system. The FlexCache volume provides access to data in the remote volume without requiring that all the data be present in the sparse volume. This speeds up access to remote data; however, because cached data must be ejected when the data is changed, FlexCache volumes work best for data that does not change often. When a client requests data from the FlexCache volume, the data is read from the origin system and cached on the FlexCache volume; subsequent requests for that data are then served directly from the FlexCache volume. This increases performance, as the data no longer needs to come across the wire (network). Sometimes a picture best describes things.


In order to use FlexCache volumes there are some requirements:


Data ONTAP version 7.0.5 or later (caching server)
A valid FlexCache license (caching server)
A valid NFS license with NFS enabled (caching server)
Data ONTAP version 7.0.5 or later (origin server)
The flexcache.access option set to allow access to FlexCache volumes (origin server)
The flexcache.enable option set to on (origin server)
The FlexCache volume must be a FlexVol volume; the origin volume can be a FlexVol or a traditional volume.
The FlexCache volume and origin volume can be either 32-bit or 64-bit.
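As a hedged sketch of the origin-side settings implied by this list (the caching filer name netapp2 and the license code are placeholders; check the exact flexcache.access syntax in the options documentation):

## on the origin system
options flexcache.enable on
options flexcache.access host=netapp2
## on the caching system: add the FlexCache license (placeholder code) and ensure NFS is licensed and enabled
license add XXXXXXX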

You can have a maximum of 100 FlexCache volumes on a storage system. In addition, there are certain features of Data ONTAP that are not available on FlexCache volumes, and others that are not available on volumes that are backing FlexCache volumes. You cannot use the following Data ONTAP capabilities on FlexCache volumes (these limitations do not apply to the origin volumes):

Client access using any protocol other than NFSv2 or NFSv3
Client access using IPv6
Snapshot copy creation
SnapRestore
SnapMirror (qtree or volume)
SnapVault
FlexClone volume creation
The ndmp command
Quotas
Qtrees
Volume copy
Deduplication


Creation of FlexCache volumes in any vFiler unit other than vFiler0
Creation of FlexCache volumes in the same aggregate as their origin volume
Mounting the FlexCache volume as a read-only volume

As mentioned above, the FlexCache volume must be a FlexVol volume; the origin volume can be a FlexVol or a traditional volume. Most FlexCache volumes are set up to automatically grow, which achieves the best performance. FlexCache volumes by default reserve 100MB of space; this can be changed with the option below, but it is advised to leave it at its default value.

FlexCache default reserve space
vol options <flexcache_name> flexcache_min_reserved

When you put multiple FlexCache volumes in the same aggregate, each FlexCache volume reserves only a small amount of space (as specified by the flexcache_min_reserved volume option). The rest of the space is allocated as needed. This means that a hot FlexCache volume (one that is being accessed heavily) is permitted to take up more space, while a FlexCache volume that is not being accessed as often will gradually be reduced in size. When an aggregate containing FlexCache volumes runs out of free space, Data ONTAP randomly selects a FlexCache volume in that aggregate to be truncated. Truncation means that files are ejected from the FlexCache volume until the size of the volume is decreased to a predetermined percentage of its former size.

If you have regular FlexVol volumes in the same aggregate as your FlexCache volumes, and you start to fill up the aggregate, the FlexCache volumes can lose some of their unreserved space (if they are not currently using it). In this case, when the FlexCache volume needs to fetch a new data block and it does not have enough free space to accommodate it, a data block is ejected from one of the FlexCache volumes to make room for the new data block.

You can control how the FlexCache volume functions when connectivity between the caching and origin systems is lost by using the disconnected_mode and acdisconnected volume options. The disconnected_mode volume option and the acdisconnected timeout, combined with the regular TTL timeouts (acregmax, acdirmax, acsymmax, and actimeo), enable you to control the behavior of the FlexCache volume when contact with the origin volume is lost.

Disconnect options
disconnected_mode
acdisconnected
## To list all options of a FlexCache volume
vol options <flexcache_name>

A file is the basic object in a FlexCache volume, but sometimes only some of a file's data is cached. If the data is cached and valid, a read request for that data is fulfilled without access to the origin volume. When a data block from a specific file is requested from a FlexCache volume, the attributes of that file are cached, and that file is considered to be cached, even if not all of its data blocks are present. If any part of a file is changed, the entire file is invalidated and ejected from the cache. For this reason, data sets consisting of one large file that is frequently updated might not be good candidates for a FlexCache implementation.

Cache consistency for FlexCache volumes is achieved by using three techniques:

Delegations
You can think of a delegation as a contract between the origin system and the caching volume; as long as the caching volume has the delegation, the file has not changed. Delegations are used only in certain situations. When data from a file is retrieved from the origin volume, the origin system can give a delegation for that file to the caching volume. Before that file is modified on the origin volume, whether due to a request from another caching volume or due to direct client access, the origin system revokes the delegation for that file from all caching volumes that have that delegation.

Attribute cache timeouts
When data is retrieved from the origin volume, the file that contains that data is considered valid in the FlexCache volume as long as a delegation exists for that file. If no delegation exists, the file is considered valid for a certain length of time, specified by the attribute cache timeout. If a client requests data from a file for which there are no delegations, and the attribute cache timeout has been exceeded, the FlexCache volume compares the file attributes of the cached file with the attributes of the file on the origin system.

Write operation proxy
If a client modifies a file that is cached, that operation is passed back, or proxied through, to the origin system, and the file is ejected from the cache. When the write is proxied, the attributes of the file on the origin volume are changed. This means that when another client requests data from that file, any other FlexCache volume that has that data cached will rerequest the data after the attribute cache timeout is reached.

I have only touched lightly on cache consistency and suggest that you check the documentation and the options that are available. The following table lists the status messages you might see for a FlexCache volume:

access denied     The origin system is not allowing FlexCache access. Check the setting of the flexcache.access option on the origin system.
connecting        The caching system is trying to connect to the origin system.
lang mismatch     The language setting of the origin volume was changed since the FlexCache volume was created.
rem vol changed   The origin volume was deleted and re-created with the same name. Re-create the FlexCache volume to reenable the FlexCache relationship.
rem vol unavail   The origin volume is offline or has been deleted.
remote nvram err  The origin system is experiencing problems with its NVRAM.
unsup remote vol  The origin system is running a version of Data ONTAP that either does not support FlexCache volumes or is not compatible with the version running on the caching system.

Now for the commands:

Display
vol status
vol status -v <flexcache_name>
## How to display the options available and what they are set to
vol help options
vol options <flexcache_name>

Display free space
df -L

Adding (create)
## Syntax
vol create <flexcache_name> <aggr> [size{k|m|g|t}] -S origin:source_vol
## Create a FlexCache volume called flexcache1 with autogrow in the aggr1 aggregate, with the source volume vol1
## on the storage server netapp1
vol create flexcache1 aggr1 -S netapp1:vol1

Removing (destroy)
vol offline <flexcache_name>
vol destroy <flexcache_name>

Automatically resizing
vol options <flexcache_name> flexcache_autogrow [on|off]

Eject file from cache
flexcache eject <path> [-f]

Statistics
## Client stats
flexcache stats -C <flexcache_name>
## Server stats
flexcache stats -S <volume_name> -c <client>
## File stats
flexcache fstat <path>


FlexClone Volumes
FlexClone volumes are writable, point-in-time copies of a parent FlexVol volume. Often, you can manage them as you would a regular FlexVol volume, but they also have some extra capabilities and restrictions. The following list outlines some key facts about FlexClone volumes:

A FlexClone volume is a point-in-time, writable copy of the parent volume. Changes made to the parent volume after the FlexClone volume is created are not reflected in the FlexClone volume.
FlexClone volumes are fully functional volumes; you manage them using the vol command, just as you do the parent volume.
FlexClone volumes always exist in the same aggregate as their parent volumes.
Traditional volumes cannot be used as parent volumes for FlexClone volumes. To create a copy of a traditional volume, you must use the vol copy command, which creates a distinct copy that uses additional storage space equivalent to the amount of storage space used by the volume you copied.
FlexClone volumes can themselves be cloned to create another FlexClone volume.
FlexClone volumes and their parent volumes share the same disk space for any common data. This means that creating a FlexClone volume is instantaneous and requires no additional disk space (until changes are made to the FlexClone volume or its parent).
A FlexClone volume is created with the same space guarantee as its parent. The space guarantee setting is enforced for the new FlexClone volume only if there is enough space in the containing aggregate.
A FlexClone volume is created with the same space reservation and fractional reserve settings as its parent.
While a FlexClone volume exists, some operations on its parent are not allowed.
You can sever the connection between the parent volume and the FlexClone volume. This is called splitting the FlexClone volume. Splitting removes all restrictions on the parent volume and causes the FlexClone to use its own additional disk space rather than sharing space with its parent.
Quotas applied to the parent volume are not automatically applied to the FlexClone volume.
When a FlexClone volume is created, any LUNs present in the parent volume are present in the FlexClone volume but are unmapped and offline.

The following restrictions apply to parent volumes or their clones:

You cannot delete the base Snapshot copy in a parent volume while a FlexClone volume using that Snapshot copy exists. The base Snapshot copy is the Snapshot copy that was used to create the FlexClone volume, and is marked busy, vclone in the parent volume.
You cannot perform a volume SnapRestore operation on the parent volume using a Snapshot copy that was taken before the base Snapshot copy was taken.


You cannot destroy a parent volume if any clone of that volume exists.
You cannot create a FlexClone volume from a parent volume that has been taken offline, although you can take the parent volume offline after it has been cloned.
You cannot perform a vol copy command using a FlexClone volume or its parent as the destination volume.
If the parent volume is a SnapLock Compliance volume, the FlexClone volume inherits the expiration date of the parent volume at the time of the creation of the FlexClone volume. The FlexClone volume cannot be deleted before its expiration date.
There are some limitations on how you use SnapMirror with FlexClone volumes.

A FlexClone volume inherits its initial space guarantee from its parent volume. For example, if you create a FlexClone volume from a parent volume with a space guarantee of volume, then the FlexClone volume's initial space guarantee will be volume also. You can change the FlexClone volume's space guarantee. For example, suppose that you have a 100-MB FlexVol volume with a space guarantee of volume, with 70 MB used and 30 MB free, and you use that FlexVol volume as a parent volume for a new FlexClone volume. The new FlexClone volume has an initial space guarantee of volume, but it does not require a full 100 MB of space from the aggregate, as it would if you had copied the volume. Instead, the aggregate needs to allocate only 30 MB (100 MB minus 70 MB) of free space to the clone. If you have multiple clones with the same parent volume and a space guarantee of volume, they all share the same shared parent space with each other, so the space savings are even greater.

You can identify a shared Snapshot copy by listing the Snapshot copies in the parent volume with the snap list command. Any Snapshot copy that appears as busy, vclone in the parent volume and is also present in the FlexClone volume is a shared Snapshot copy.

Splitting a FlexClone volume from its parent removes any space optimizations that are currently employed by the FlexClone volume. After the split, both the FlexClone volume and the parent volume require the full space allocation determined by their space guarantees. The FlexClone volume becomes a normal FlexVol volume.

Creating FlexClone files or FlexClone LUNs is highly space-efficient and time-efficient because the cloning operation does not involve physically copying any data. You can create a clone of a file that is present in a FlexVol volume in a NAS environment, and you can also clone a complete LUN without the need of a backing Snapshot copy in a SAN environment. The cloned copies initially share the same physical data blocks with their parents and occupy negligible extra space in the storage system for their initial metadata.

Display
vol status

vol status <flexclone_name> -v
df -Lh

Adding (create)
## Syntax
vol clone create clone_name [-s {volume|file|none}] -b parent_name [parent_snap]
## create a flexclone called flexclone1 from the parent flexvol1
vol clone create flexclone1 -b flexvol1

Removing (destroy)
vol offline <flexclone_name>
vol destroy <flexclone_name>

Splitting
## Determine the free space required to perform the split
vol clone split estimate <flexclone_name>
## Double check you have the space
df -Ah
## Perform the split
vol clone split start <flexclone_name>
## Check up on its status
vol clone split status <flexclone_name>
## Stop the split
vol clone split stop <flexclone_name>

Log file
/etc/log/clone
The clone log file records the following information:
Cloning operation ID
The name of the volume in which the cloning operation was performed
Start time of the cloning operation
End time of the cloning operation
Parent file/LUN and clone file/LUN names
Parent file/LUN ID
Status of the clone operation: successful, unsuccessful, or stopped, and some other details

I have only briefly touched on FlexClone volumes, so I advise you to take a peek at the documentation for a full description, including FlexClone files, FlexClone LUNs and the rapid cloning utility for VMware.

Space Saving
Data ONTAP has an additional feature called deduplication. It improves physical storage space utilization by eliminating duplicate data blocks within a FlexVol volume. Deduplication works at the block level on the active file system, and uses the WAFL block-sharing mechanism. Each block of data has a digital signature that is compared with all other signatures in a data volume. If an exact block match exists, the duplicate block is discarded and its disk space is reclaimed. You can configure deduplication operations to run automatically or on a schedule. You can deduplicate new and existing data, or only new data, on a FlexVol volume. You do require a license to enable deduplication.

Data ONTAP writes all data to a storage system in 4-KB blocks. When deduplication runs for the first time on a FlexVol volume with existing data, it scans all the blocks in the FlexVol volume and creates a digital fingerprint for each of the blocks. Each of the fingerprints is compared to all other fingerprints within the FlexVol volume. If two fingerprints are found to be identical, a byte-for-byte comparison is done of all data within the block. If the byte-for-byte comparison confirms the blocks are identical, the pointer to the data block is updated, and the duplicate block is freed.

Deduplication runs on the active file system. Therefore, as additional data is written to the deduplicated volume, fingerprints are created for each new block and written to a change log file. For subsequent deduplication operations, the change log is sorted and merged with the fingerprint file, and the deduplication operation continues with fingerprint comparisons as previously described.

Start/restart deduplication operation
sis start -s <path>
sis start -s /vol/flexvol1
## Use previous checkpoint
sis start -sp <path>

Stop deduplication operation
sis stop <path>

Schedule deduplication
sis config -s <schedule> <path>
sis config -s mon-fri@23 /vol/flexvol1
Note: schedule lists the days and hours of the day when deduplication runs. The schedule can be of the following forms:

day_list[@hour_list]
If hour_list is not specified, deduplication runs at midnight on each scheduled day.

hour_list[@day_list]
If day_list is not specified, deduplication runs every day at the specified hours.

A hyphen (-) disables scheduled deduplication operations for the specified FlexVol volume.
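To illustrate the three forms, here is a minimal sketch; the volume name flexvol1 is hypothetical and the lines simply reuse the schedule syntax described above:

## day_list@hour_list - run at 23:00 on Saturday and Sunday only
sis config -s sat,sun@23 /vol/flexvol1
## hour_list only - run every day at 00:00, 08:00 and 16:00
sis config -s 0,8,16 /vol/flexvol1
## a hyphen disables the schedule for the volume
sis config -s - /vol/flexvol1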

Enabling
sis on <path>

Disabling
sis off <path>

Status
sis status -l <path>

Display saved space
df -s <path>

Again, I have only briefly touched on this subject; for more details check out the documentation.

QTrees
Qtrees enable you to partition your volumes into smaller segments that you can manage individually. You can set a qtree's size or security style, back it up, and restore it. You use qtrees to partition your data. You might create qtrees to organize your data, or to manage one or more of the following factors: quotas, backup strategy, security style, and CIFS oplocks setting. The following list describes examples of qtree usage strategies:

Quotas - You can limit the size of the data used by a particular project, by placing all of that project's files into a qtree and applying a tree quota to the qtree (see the sketch after this list).
Backups - You can use qtrees to keep your backups more modular, to add flexibility to backup schedules, or to limit the size of each backup to one tape.
Security style - If you have a project that needs to use NTFS-style security, because the members of the project use Windows files and applications, you can group the data for that project in a qtree and set its security style to NTFS, without requiring that other projects also use the same security style.
CIFS oplocks settings - If you have a project using a database that requires CIFS oplocks to be off, you can set CIFS oplocks to Off for that project's qtree, while allowing other projects to retain CIFS oplocks.
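Tying the quota and security-style strategies together, here is a hedged sketch; the volume vol1, the qtree proj1, and the 10G limit are hypothetical, and the commands simply reuse the qtree, quota, and /etc/quotas conventions shown later in this chapter:

## create a qtree for the project and give it NTFS security
qtree create /vol/vol1/proj1
qtree security /vol/vol1/proj1 ntfs
## add a tree quota entry for the qtree to /etc/quotas, for example:
##   /vol/vol1/proj1   tree   10G
## then activate quotas on the volume
quota on vol1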

The table below compares qtrees with FlexVol and traditional volumes.

Enables organizing user data - Qtree: Yes; FlexVol volume: Yes; Traditional volume: Yes
Enables grouping users with similar needs - Qtree: Yes; FlexVol volume: Yes; Traditional volume: Yes
Accepts a security style - Qtree: Yes; FlexVol volume: Yes; Traditional volume: Yes
Accepts oplocks configuration - Qtree: Yes; FlexVol volume: Yes; Traditional volume: Yes
Can be backed up and restored as a unit using SnapMirror - Qtree: Yes; FlexVol volume: Yes; Traditional volume: Yes
Can be backed up and restored as a unit using SnapVault - Qtree: Yes; FlexVol volume: No; Traditional volume: No
Can be resized - Qtree: Yes (using quota limits); FlexVol volume: Yes; Traditional volume: Yes
Supports Snapshot copies - Qtree: No (qtree data can be extracted from volume Snapshot copies); FlexVol volume: Yes; Traditional volume: Yes
Supports quotas - Qtree: Yes; FlexVol volume: Yes; Traditional volume: Yes
Can be cloned - Qtree: No (except as part of a FlexVol volume); FlexVol volume: Yes; Traditional volume: No
Maximum number allowed - Qtree: 4,995 per volume; FlexVol volume: 500 per system; Traditional volume: 100 per system

Now for the commands:

Display
qtree status [-i] [-v]
Note: The -i option includes the qtree ID number in the display. The -v option includes the owning vFiler unit, if the MultiStore license is enabled.

Adding (create)
## Syntax - by default the wafl.default_qtree_mode option is used
qtree create path [-m mode]
## create a news qtree in the /vol/users volume using 770 as permissions
qtree create /vol/users/news -m 770

Remove
rm -Rf <directory>

Rename
mv <old_name> <new_name>

Convert a directory into a qtree directory
## Move the directory to a different directory
mv /n/joel/vol1/dir1 /n/joel/vol1/olddir
## Create the qtree
qtree create /n/joel/vol1/dir1
## Move the contents of the old directory back into the new qtree
mv /n/joel/vol1/olddir/* /n/joel/vol1/dir1
## Remove the old directory name
rmdir /n/joel/vol1/olddir

Stats
qtree stats [-z] [vol_name]


Note: -z = zero stats

CIFS Oplocks
CIFS oplocks reduce network traffic and improve storage system performance. However, in some situations, you might need to disable them. You can disable CIFS oplocks for the entire storage system or for a specific volume or qtree. Usually, you should leave CIFS oplocks on for all volumes and qtrees. This is the default setting. However, you might turn CIFS oplocks off under certain circumstances. CIFS oplocks (opportunistic locks) enable the redirector on a CIFS client in certain file-sharing scenarios to perform client-side caching of read-ahead, write-behind, and lock information. A client can then work with a file (read or write it) without regularly reminding the server that it needs access to the file. This improves performance by reducing network traffic. You might turn CIFS oplocks off on a volume or a qtree under either of the following circumstances:

You are using a database application whose documentation recommends that CIFS oplocks be turned off.
You are handling critical data and cannot afford even the slightest data loss.

Otherwise, you can leave CIFS oplocks on.

Enabling/Disabling for the entire storage system
cifs.oplocks.enable on
cifs.oplocks.enable off

Enabling/Disabling for qtrees
qtree oplocks /vol/vol2/proj enable
qtree oplocks /vol/vol2/proj disable

Security Styles
You might need to change the security style of a new volume or qtree. Additionally, you might need to accommodate other users; for example, if you had an NTFS qtree and subsequently needed to include UNIX files and users, you could change the security style of that qtree from NTFS to mixed. Make sure there are no CIFS users connected to shares on the qtree whose security style you want to change. If there are, you cannot change UNIX security style to mixed or NTFS, and you cannot change NTFS or mixed security style to UNIX.

Change the security style
## Syntax
qtree security path {unix | ntfs | mixed}
## Change the security style of /vol/users/docs to mixed
qtree security /vol/users/docs mixed


Also see the Volumes section above for more information about security styles.

Quotas
Quotas provide a way to restrict or track the disk space and number of files used by a user, group, or qtree. You specify quotas using the /etc/quotas file. Quotas are applied to a specific volume or qtree. You can use quotas to limit resource usage, to provide notification when resource usage reaches specific levels, or simply to track resource usage. You specify a quota for the following reasons:

To limit the amount of disk space or the number of files that can be used by a user or group, or that can be contained by a qtree
To track the amount of disk space or the number of files used by a user, group, or qtree, without imposing a limit
To warn users when their disk usage or file usage is high

Quotas can cause Data ONTAP to send a notification (soft quota) or to prevent a write operation from succeeding (hard quota) when quotas are exceeded. When Data ONTAP receives a request to write to a volume, it checks to see whether quotas are activated for that volume. If so, Data ONTAP determines whether any quota for that volume (and, if the write is to a qtree, for that qtree) would be exceeded by performing the write operation. If any hard quota would be exceeded, the write operation fails, and a quota notification is sent. If any soft quota would be exceeded, the write operation succeeds, and a quota notification is sent.

Quotas configuration
/mroot/etc/quotas file
Example quota file

##                                             hard limit | thres | soft limit
##Quota Target          type                   disk  files| hold  | disk  file
##--------------------  ---------------------  ----- ----- ------- ----- -----
*                       tree@/vol/vol0                                          # monitor usage on all qtrees in vol0
/vol/vol2/qtree         tree                   1024K 75k                        # enforce qtree quota using kb
tinh                    user@/vol/vol2/qtree1  100M                             # enforce user quota in specified qtree
dba                     group@/vol/ora/qtree1  100M                             # enforce group quota in specified qtree

# * = default user/group/qtree
# - = placeholder, no limit enforced, just enable stats collection


Note: you have lots of permutations, so check out the documentation.

Displaying
quota report [<path>]

Activating
quota on [-w] <vol_name>
Note: -w = return only after the entire quotas file has been scanned

Deactivating
quota off [-w] <vol_name>

Reinitializing
quota off [-w] <vol_name>
quota on [-w] <vol_name>

Resizing
quota resize <vol_name>
Note: this command rereads the quota file

Deleting
edit the quota file, then
quota resize <vol_name>

Log messaging
quota logmsg
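As a usage sketch of the commands above (the volume vol2 and the qtree path are hypothetical):

## edit /etc/quotas (for example with wrfile) and add a line such as:
##   /vol/vol2/proj1   tree   10G
## first-time activation scans the whole quotas file
quota on -w vol2
## after changing limits for existing targets, a resize is enough
quota resize vol2
## check current usage
quota report /vol/vol2/proj1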


Chapter 8: File Access Management


I have covered Block Access Management; now I will discuss File Access Management, covering:

NFS
CIFS
FTP
HTTP

Data ONTAP controls access to files according to the authentication-based and file-based restrictions that you specify. With authentication-based restrictions, you can specify which client machines and which users can connect to the entire storage system or a vFiler unit. Data ONTAP supports Kerberos authentication from both UNIX and Windows servers. With file-based restrictions, you can specify which users can access which files. When a user creates a file, Data ONTAP generates a list of access permissions for the file. While the form of the permissions list varies with each protocol, it always includes common permissions, such as reading and writing permissions. When a user tries to access a file, Data ONTAP uses the permissions list to determine whether to grant access. Data ONTAP grants or denies access according to the operation that the user is performing, such as reading or writing, and the following factors:

User account
User group or netgroup
Client protocol
Client IP address
File type

As part of the verification process, Data ONTAP maps host names to IP addresses using the lookup service you specify: Lightweight Directory Access Protocol (LDAP), Network Information Service (NIS), or local storage system information.

File Access using NFS


You can export and unexport file system paths on your storage system, making them available or unavailable, respectively, for mounting by NFS clients, including PCNFS and WebNFS clients.

Export Options

actual=<path>
Specifies the actual file system path corresponding to the exported file system path.

anon=<uid>|<name>
Specifies the effective user ID (or name) of all anonymous or root NFS client users that access the file system path.

nosuid
Disables setuid and setgid executables and mknod commands on the file system path.


ro | ro=clientid
Specifies which NFS clients have read-only access to the file system path.

rw | rw=clientid
Specifies which NFS clients have read-write access to the file system path.

root=clientid
Specifies which NFS clients have root access to the file system path. If you specify the root= option, you must specify at least one NFS client identifier. To exclude NFS clients from the list, prepend the NFS client identifiers with a minus sign (-).

sec=sectype
Specifies the security types that an NFS client must support to access the file system path. To apply the security types to all types of access, specify the sec= option once. To apply the security types to specific types of access (anonymous, non-super user, read-only, read-write, or root), specify the sec= option at least twice, once before each access type to which it applies (anon, nosuid, ro, rw, or root, respectively). The security types can be one of the following:

none
No security. Data ONTAP treats all of the NFS client's users as anonymous users.

sys
Standard UNIX (AUTH_SYS) authentication. Data ONTAP checks the NFS credentials of all of the NFS client's users, applying the file access permissions specified for those users in the NFS server's /etc/passwd file. This is the default security type.

krb5
Kerberos(tm) Version 5 authentication. Data ONTAP uses data encryption standard (DES) key encryption to authenticate the NFS client's users.

krb5i
Kerberos(tm) Version 5 integrity. In addition to authenticating the NFS client's users, Data ONTAP uses message authentication codes (MACs) to verify the integrity of the NFS client's remote procedure requests and responses, thus preventing "man-in-the-middle" tampering.

krb5p
Kerberos(tm) Version 5 privacy. In addition to authenticating the NFS client's users and verifying data integrity, Data ONTAP encrypts NFS arguments and results to provide privacy.

Examples
rw=10.45.67.0/24
ro,root=@trusted,rw=@friendly
rw,root=192.168.0.80,nosuid

Export Commands

Displaying
exportfs
exportfs -q <path>

Create
# create export in memory and write to /etc/exports (use default options)
exportfs -p /vol/nfs1
# create export in memory and write to /etc/exports (use specific options)
exportfs -io sec=none,rw,root=192.168.0.80,nosuid /vol/nfs1
# create export in memory only using own specific options
exportfs -io sec=none,rw,root=192.168.0.80,nosuid /vol/nfs1

Remove
# Memory only
exportfs -u <path>
# Memory and /etc/exports
exportfs -z <path>

Export all
exportfs -a

Check access
exportfs -c 192.168.0.80 /vol/nfs1

Flush
exportfs -f
exportfs -f <path>

Reload
exportfs -r

Storage path
exportfs -s <path>

Write export to a file
exportfs -w <path/export_file>

Fencing
# Suppose /vol/vol0 is exported with the following export options:
#   -rw=pig:horse:cat:dog,ro=duck,anon=0
# The following command enables fencing of cat from /vol/vol0
exportfs -b enable save cat /vol/vol0
# cat moves to the front of the ro= list for /vol/vol0:
#   -rw=pig:horse:dog,ro=cat:duck,anon=0

Stats
nfsstat
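As a usage sketch combining the options and commands above (the client address 192.168.0.80 and the path /vol/nfs1 are reused from the earlier examples; the -p form is assumed to accept an option string before the path, like the -io form above):

## export read-write to one client, give it root access, disable setuid, and persist to /etc/exports
exportfs -p rw=192.168.0.80,root=192.168.0.80,nosuid /vol/nfs1
## verify what that client is allowed to do
exportfs -c 192.168.0.80 /vol/nfs1
## display the current exports
exportfs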

File Access using CIFS


NetApp supports a number of Windows versions when it comes to CIFS; it is a licensed product. Before you begin you need to set up the CIFS server by running the cifs setup command. I am not going to go into detail, but here are the basic commands that you need. If you are familiar with Samba then you will have no trouble with this.

Useful CIFS options

Change the security style
wafl.default_security_style {ntfs | unix | mixed}

Idle timeout time
cifs.idle_timeout <time>

Performance
cifs.oplocks.enable on
Note: Under some circumstances, if a process has an exclusive oplock on a file and a second process attempts to open the file, the first process must invalidate cached data and flush writes and locks. The client must then relinquish the oplock and access to the file. If there is a network failure during this flush, cached write data might be lost.

CIFS Commands

Useful files
/etc/cifsconfig_setup.cfg
/etc/usermap.cfg
/etc/passwd
/etc/cifsconfig_share.cfg

Note: use "rdfile" to read the file cifs setup CIFS setup start stop Note: you will be prompted to answer a number of questions based on what requirements you need. cifs restart cifs terminate # terminate a specific client cifs terminate <client_name>|<IP Address> cifs sessions cifs sessions <user> cifs sessions <IP Address> # Authentication cifs sessions -t # Changes cifs sessions -c # Security Info cifs session -s Broadcast message cifs broadcast * "message" cifs broadcast <client_name> "message" cifs access <share> <user|group> <permission> # Examples cifs access sysadmins -g wheel Full Control cifs access -delete releases ENGINEERING\mary Note: rights can be Unix-style combinations of r w x - or NT-style "No Access", "Read", "Change", and "Full Control" stats create a share cifs stat <interval> cifs stat <user> cifs stat <IP Address> # create a volume in the normal way

sessions

permissions

126

# then using qtrees set the style of the volume {ntfs | unix | mixed}
# Now you can create your share
cifs shares -add TEST /vol/flexvol1/TEST -comment "Test Share" -forcegroup workgroup -maxusers 100

Change share characteristics
cifs shares -change sharename {-browse | -nobrowse} {-comment desc | -nocomment} {-maxusers userlimit | -nomaxusers} {-forcegroup groupname | -noforcegroup} {-widelink | -nowidelink} {-symlink_strict_security | -nosymlink_strict_security} {-vscan | -novscan} {-vscanread | -novscanread} {-umask mask | -noumask} {-no_caching | -manual_caching | -auto_document_caching | -auto_program_caching}
# example
cifs shares -change <sharename> -novscan

Home directories
# Display home directories
cifs homedir
# Add a home directory
wrfile -a /etc/cifs_homedir.cfg /vol/TEST
# check it
rdfile /etc/cifs_homedir.cfg
# Display from a Windows server
net view \\<Filer IP Address>
# Connect
net use * \\192.168.0.75\TEST
Note: make sure the directory exists

Domain controller
# add a preferred domain controller
cifs prefdc add lab 10.10.10.10 10.10.10.11
# delete a preferred domain controller
cifs prefdc delete lab
# List domain information
cifs domaininfo
# List the preferred controllers
cifs prefdc print
# Reestablishing
cifs resetdc

Change filer's domain password
cifs changefilerpwd

Tracing permission problems
sectrace add [-ip ip_address] [-ntuser nt_username] [-unixuser unix_username] [-path path_prefix] [-a]
# Examples
sectrace add -ip 192.168.10.23
sectrace add -unixuser foo -path /vol/vol0/home4 -a
# To remove
sectrace delete all
sectrace delete <index>
# Display tracing
sectrace show
# Display error code status
sectrace print-status <status_code>
sectrace print-status 1:51544850432:32:78

File Access using FTP


You can enable and configure the Internet File Transfer Protocol (FTP) server to let users of Windows and UNIX FTP clients access the files on your storage system. Again there is not much to say about FTP, so I will keep this short and sweet.

Useful Options

Enable / Disable
options ftpd.enable on
options ftpd.enable off

File Locking
options ftpd.locking delete
options ftpd.locking none
Note: To prevent users from modifying files while the FTP server is transferring them, you can enable FTP file locking. Otherwise, you can disable FTP file locking. By default, FTP file locking is disabled.

Authentication Style
options ftpd.auth_style {unix | ntlm | mixed}

Bypassing of FTP traverse checking
options ftpd.bypass_traverse_checking on
options ftpd.bypass_traverse_checking off
Note: If the ftpd.bypass_traverse_checking option is set to off, when a user attempts to access a file using FTP, Data ONTAP checks the traverse (execute) permission for all directories in the path to the file. If any of the intermediate directories does not have the "X" (traverse) permission, Data ONTAP denies access to the file. If the ftpd.bypass_traverse_checking option is set to on, when a user attempts to access a file, Data ONTAP does not check the traverse permission for the intermediate directories when determining whether to grant or deny access to the file.

Restricting FTP users to a specific directory
options ftpd.dir.restriction on
options ftpd.dir.restriction off

Restricting FTP users to their home directories or a default directory
options ftpd.dir.override ""

Maximum number of connections
options ftpd.max_connections n
options ftpd.max_connections_threshold n

Idle timeout value
options ftpd.idle_timeout n s | m | h

Anonymous logins
options ftpd.anonymous.enable on
options ftpd.anonymous.enable off
# specify the name for the anonymous login
options ftpd.anonymous.name username
# create the directory for the anonymous login
options ftpd.anonymous.home_dir homedir

FTP Commands

Log files
/etc/log/ftp.cmd
/etc/log/ftp.xfer
# specify the max number of logfiles (default is 6) and size
options ftpd.log.nfiles 10
options ftpd.log.filesize 1G
Note: use rdfile to view

Restricting access
/etc/ftpusers
Note: use rdfile and wrfile to access /etc/ftpusers

Stats
ftp stat
# to reset
ftp stat -z
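For instance, a hedged sketch of allowing anonymous FTP, reusing only the options listed above; the login name and home directory are placeholders:

options ftpd.enable on
options ftpd.anonymous.enable on
## name used for the anonymous login (placeholder)
options ftpd.anonymous.name ftpguest
## home directory for the anonymous login (placeholder path)
options ftpd.anonymous.home_dir /vol/vol1/ftp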

File Access using HTTP


To let HTTP clients (web browsers) access the files on your storage system, you can enable and configure Data ONTAP's built-in HyperText Transfer Protocol (HTTP) server. Alternatively, you can purchase and connect a third-party HTTP server to your storage system.


HTTP Options

Enable / Disable
options httpd.enable on
options httpd.enable off

Enabling or disabling the bypassing of HTTP traverse checking
options httpd.bypass_traverse_checking on
options httpd.bypass_traverse_checking off
Note: this is similar to the FTP version

Root directory
options httpd.rootdir /vol0/home/users/pages

Host access
options httpd.access host=Host1 AND if=e3
options httpd.admin.access host!=Host1

HTTP Commands

Log files
/etc/log/httpd.log
# use the below to change the logfile format
options httpd.log.format alt1
Note: use rdfile to view

Redirects
redirect /cgi-bin/* http://cgi-host/*

Pass rule
pass /image-bin/*

Fail rule
fail /usr/forbidden/*

MIME types
/etc/httpd.mimetypes
Note: use rdfile and wrfile to edit

Interface firewall
ifconfig f0 untrusted

Stats
httpstat [-dersta]
# reset the stats
httpstat -z[derta]
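As a brief sketch of the options above; the root directory path, host name, and interface are hypothetical values:

## turn the built-in HTTP server on and point it at a document root
options httpd.enable on
options httpd.rootdir /vol/vol1/web
## restrict HTTP access to Host1 arriving on interface e0a (hypothetical values)
options httpd.access host=Host1 AND if=e0a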


Chapter 9: Network Appliance (NetApp)


This section is a short introduction to Network Appliance (NetApp); the company creates storage systems and management software associated with companies' data. They offer products that cater for small, medium and large companies and can provide support. Other main storage vendors are:

EMC
Hitachi Data Systems
HP
IBM

The NetApp filer, also known as NetApp Fabric-Attached Storage (FAS), is a type of disk storage device which owns and controls a filesystem and presents files and directories over the network. It uses an operating system called Data ONTAP (based on FreeBSD). NetApp filers can offer the following:

Supports SAN, NAS, FC, SATA, iSCSI, FCoE and Ethernet all on the same platform
Supports SATA, FC and SAS disk drives
Supports block protocols such as iSCSI, Fibre Channel and AoE
Supports file protocols such as NFS, CIFS, FTP, TFTP and HTTP
High availability
Easy management
Scalable

History
NetApp was created in 1992 by David Hitz, James Lau and Michael Malcolm; the company became public in 1995 and grew rapidly in the dot-com boom. The company's headquarters are in Sunnyvale, California, US. NetApp has acquired a number of companies that helped in the development of various products. The first NetApp network appliance, known as a filer, shipped in 1993. This product was a new beginning in data storage architecture: the device did one task and it did it extremely well. NetApp made sure that the device used fully compatible industry-standard hardware rather than specialized hardware. Today's NetApp products cater for small, medium and large size corporations and can be found in many blue-chip companies.

NetApp Filer
The NetApp Filer, also known as NetApp Fabric-Attached Storage (FAS), is a data storage device. It can act as a SAN or as a NAS, and it serves storage over a network using either file-based or block-based protocols:


File-Based Protocols: NFS, CIFS, FTP, TFTP, HTTP
Block-Based Protocols: Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), Internet SCSI (iSCSI)

The most common NetApp configuration consists of a filer (also known as a controller or head node) and disk enclosures (also known as shelves). The disk enclosures are connected by FC or parallel/serial ATA, and the filer is then accessed by other Linux, UNIX or Windows servers via a network (Ethernet or FC). An example setup would be like the one in the diagram below.

The filers run NetApp's own adapted operating system (based on FreeBSD) called Data ONTAP; it is highly tuned for storage-serving purposes. All filers have battery-backed NVRAM, which allows them to commit writes to stable storage quickly, without waiting on the disks. It is also possible to cluster filers to create a high-availability cluster with a private high-speed link using either FC or InfiniBand; clusters can then be grouped together under a single namespace when running in the cluster mode of the Data ONTAP 8 operating system. The filer will be either an Intel or AMD processor-based computer using PCI; each filer will have a battery-backed NVRAM adaptor to log all writes for performance and to

132

replay in the event of a server crash. The Data ONTAP operating system implements a single proprietary file-system called WAFL (Write Anywhere File Layout). WAFL is not a filesystem in the traditional sense, but a file layout that supports very large high-performance RAID arrays (up to 100TB), it provides mechanisms that enable a variety of filesystems and technologies that want to access disk blocks. WAFL also offers

snapshots (up to 255 per volume can be made; a brief CLI example follows this list)
snapmirror (disk replication)
syncmirror (mirrored RAID arrays for extra resilience, can be mirrored up to 100km away)
snaplock (write once read many; data cannot be deleted until its retention period has been reached)
read-only copies of the file system
read-write snapshots called FlexClone
ACLs
quick defragmentation
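Snapshots and cloning are covered in detail in an earlier chapter, but as a brief, hedged illustration of the first feature above (the volume name vol0 and snapshot name nightly.test are placeholders, not values from this guide):

# create a snapshot of volume vol0
snap create vol0 nightly.test
# list the snapshots held on the volume
snap list vol0
# remove the snapshot once it is no longer needed
snap delete vol0 nightly.test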

Filers offer two RAID options (see below); you can also create very large RAID arrays of up to 28 disks, depending on the type of filer.

RAID 4: offers single parity on a dedicated disk (unlike RAID 5)
RAID 6: the same as RAID 5 but offers double parity (more resilience); two disks in the RAID group could fail
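As a rough sketch of how the RAID choice is made in practice (the aggregate names, disk counts and RAID group size below are illustrative assumptions, and the flags should be verified against your Data ONTAP release):

# create an aggregate using double-parity RAID (NetApp's RAID-DP) with a RAID group size of 16, from 24 spare disks
aggr create aggr1 -t raid_dp -r 16 24
# create an aggregate using RAID 4 from 14 spare disks
aggr create aggr2 -t raid4 14
# check the RAID layout of the new aggregate
aggr status -r aggr1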

NetApp Backups
The last point to touch on is backups; NetApp offers two types:

Dump
backs up files and directories
supports level-0, incremental and differential backups
supports single file restore
capable of backing up only the base snapshot copy

SMTape
backs up blocks of data to tape
supports only level-0 backup
does not support single file restore
capable of backing up multiple snapshot copies in a volume
does not support remote tape backups and restores

The filer will support either SCSI or Fibre Channel (FC) tape drives and can have a maximum of 64 mixed tape devices attached to a single storage system. Network Data Management Protocol (NDMP) is a standardized protocol for controlling backup, recovery and other transfers of data between primary and secondary storage devices such as storage systems and tape libraries. This removes the need for transporting the data through the backup server itself, thus enhancing speed and removing load from the backup server. By enabling NDMP support you enable the storage system to communicate with NDMP-enabled commercial network-attached backup applications; it also provides low-level control of tape devices and medium changers. A short example of enabling NDMP follows the list below. The advantages of NDMP are

provides sophisticated scheduling of data protection across multiple storage systems
provides media management and tape inventory management services to eliminate tape handling during data protection operations
supports data cataloging services that simplify the process of locating specific recovery data
supports multiple topology configurations, allowing sharing of secondary storage (tape library) resources through the use of three-way network data connections
supports security features to prevent or monitor unauthorized use of NDMP connections
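As a hedged example of switching this on from the CLI (the option and command names below are from Data ONTAP 7-mode and should be verified against your release):

# enable the NDMP daemon
options ndmpd.enable on
# check that it is running and which protocol versions it offers
ndmpd status
ndmpd version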


Chapter 10: Network Management


Your storage system supports physical network interfaces, such as Ethernet and Gigabit Ethernet interfaces, and virtual network interfaces, such as interface groups and virtual local area networks (VLANs). Each of these network interface types has its own naming convention. Your storage system supports the following types of physical network interfaces:

10/100/1000 Ethernet
Gigabit Ethernet (GbE)
10 Gigabit Ethernet

In addition, some storage system models include a physical network interface named e0M. The e0M interface is used only for Data ONTAP management activities, such as running a Telnet, SSH, or RSH session. The following table lists interface types, interface name formats, and examples of names that use these identifiers.

Physical interface on a single-port adapter or slot - e<slot_number> (examples: e0, e1)
Physical interface on a multiple-port adapter or slot - e<slot_number><port_letter> (examples: e0a, e0b, e1a, e1b)
Interface group - any user-specified string that meets certain criteria (examples: web_ifgrp, ifgrp1)
VLAN - <physical_interface_name>-<vlan_ID> or <ifgrp_name>-<vlan_ID> (examples: e8-2, ifgrp1-3)

Beginning with Data ONTAP 7.3, storage systems can accommodate from 256 to 1,024 network interfaces per system, depending on the storage system model, system memory, and whether they are in an HA pair. Each storage system can support up to 16 interface groups. The maximum number of VLANs that can be supported equals the maximum number of network interfaces for the model minus the total number of physical interfaces, interface groups, vh interfaces, and loopback interfaces supported by the storage system. You can manage your storage system locally from an Ethernet connection by using any network interface. However, to manage your storage system remotely, the system should have a Remote LAN Module (RLM) or Baseboard Management Controller (BMC); these provide remote platform management capabilities, including remote access, monitoring, troubleshooting, and alerting features.

Jumbo frames are larger than standard frames, so fewer frames are needed for the same amount of data; therefore, you can reduce the CPU processing overhead by using jumbo frames with your network interfaces. In particular, by using jumbo frames with a Gigabit or 10 Gigabit Ethernet infrastructure, you can significantly improve performance, depending on the network traffic. Jumbo frames are packets that are longer than the standard Ethernet (IEEE 802.3) frame size of 1,518 bytes. The frame size definition for jumbo frames is vendor-specific because jumbo frames are not part of the IEEE standard. The most commonly used jumbo frame size is 9,018 bytes. Jumbo frames can be used for all Gigabit and 10 Gigabit Ethernet interfaces that are supported on your storage system. The interfaces must be operating at or above 1,000 Mbps. You can set up jumbo frames on your storage system in the following two ways:

During initial setup, the setup command prompts you to configure jumbo frames if you have an interface that supports jumbo frames on your storage system.
If your system is already running, you can enable jumbo frames by setting the MTU size on an interface, as shown in the example below.
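For example, the second method could look like the following minimal sketch (assuming e0a is a Gigabit interface and that the switch ports it connects to are also configured for jumbo frames):

# set a jumbo-frame MTU on a running interface
ifconfig e0a mtusize 9000
# make the change persistent by adding the option to the interface line in /etc/rc (edit with wrfile/rdfile)
ifconfig e0a 192.168.0.10 netmask 255.255.255.0 mtusize 9000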

You can configure IP addresses for your network interface during system setup. To configure the IP addresses later, you should use the ifconfig command.

Display
ifconfig -a
ifconfig <interface>

IP address
ifconfig e0 <IP Address>
ifconfig e0a <IP Address>
# Remove an IP Address
ifconfig e3 0

subnet mask
ifconfig e0a netmask <subnet mask address>

broadcast
ifconfig e0a broadcast <broadcast address>

media type
ifconfig e0a mediatype 100tx-fd

maximum transmission unit (MTU)
ifconfig e8 mtusize 9000

Flow control
ifconfig <interface_name> flowcontrol <value>
# example
ifconfig e8 flowcontrol none
Note: value is the flow control type. You can specify the following values for the flowcontrol option:
none - No flow control
receive - Able to receive flow control frames
send - Able to send flow control frames
full - Able to send and receive flow control frames
The default flowcontrol type is full.


trusted/untrusted
ifconfig e8 untrusted
Note: You can specify whether a network interface is trustworthy or untrustworthy. When you specify an interface as untrusted (untrustworthy), any packets received on the interface are likely to be dropped.

HA Pair
ifconfig e8 partner <IP Address>
## You must enable takeover on interface failures by entering the following commands:
options cf.takeover.on_network_interface_failure enable
ifconfig interface_name {nfo|-nfo}
nfo - Enables negotiated failover
-nfo - Disables negotiated failover
Note: In an HA pair, you can assign a partner IP address to a network interface. The network interface takes over this IP address when a failover occurs.

Alias
# Create alias
ifconfig e0 alias 192.0.2.30
# Remove alias
ifconfig e0 -alias 192.0.2.30

Block/Unblock protocols
# Block
options interface.blocked.cifs e9
options interface.blocked.cifs e0a,e0b
# Unblock
options interface.blocked.cifs ""

Stats
ifstat
netstat
Note: there are many options to both these commands so I will leave it to the man pages

bring up/down an interface
ifconfig <interface> up
ifconfig <interface> down

Routing
You can have Data ONTAP route its own outbound packets to network interfaces. Although your storage system can have multiple network interfaces, it does not function as a router. However, it can route its outbound packets. Data ONTAP uses two routing mechanisms:

Fast path: Data ONTAP uses this mechanism to route NFS packets over UDP and to route all TCP traffic.

Routing table: To route IP traffic that does not use fast path, Data ONTAP uses the information available in the local routing table. The routing table contains the routes that have been established and are currently in use, as well as the default route specification.

Fast path is an alternative routing mechanism to the routing table, in which the responses to incoming network traffic are sent back by using the same interface as the incoming traffic. It provides advantages such as load balancing between multiple network interfaces and improved storage system performance. Fast path is enabled automatically on your storage system; however, you can disable it. Using fast path provides the following advantages:

Load balancing between multiple network interfaces on the same subnet. Load balancing is achieved by sending responses on the same interface of your storage system that receives the incoming requests.
Increased storage system performance by skipping routing table lookups.
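If your network design needs responses to leave on a different interface than the one the request arrived on, fast path can be turned off; a brief sketch using the option that also appears in the routing table below:

# display the current setting
options ip.fastpath.enable
# disable fast path
options ip.fastpath.enable off
# re-enable it
options ip.fastpath.enable on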

You can manage the routing table automatically by using the routed daemon, or manually by using the route command. The routed daemon performs the following functions by default:

Deletes redirected routes after a specified period
Performs router discovery with ICMP Router Discovery Protocol (IRDP); this is useful only if there is no static default route
Listens for Routing Information Protocol (RIP) packets
Migrates routes to alternate interfaces when multiple interfaces are available on the same subnet

The routed daemon can also be configured to perform the following functions:

Control RIP and IRDP behavior
Generate RIP response messages that update a host route on your storage system
Recognize distant gateways identified in the /etc/gateways file

If you are familiar with Unix routing then you should have no trouble with the following routing commands:

default route
# using wrfile and rdfile edit the /etc/rc file with the below
route add default 192.168.0.254 1
# the full /etc/rc file will look like something below
hostname netapp1
ifconfig e0 192.168.0.10 netmask 255.255.255.0 mediatype 100tx-fd
route add default 192.168.0.254 1
routed on

enable/disable fast path
options ip.fastpath.enable {on|off}
Note: on Enables fast path, off Disables fast path

enable/disable routing daemon
routed {on|off}
Note: on Turns on the routed daemon, off Turns off the routed daemon

Display routing table
netstat -rn
route -s
routed status

Add to routing table
route add 192.168.0.15 gateway.com 1

Hosts and DNS


Hosts and DNS are the same as Unix but here is a quick table just to jog your memory.

Hosts
# use wrfile and rdfile to read and edit the /etc/hosts file; it basically uses the same rules as a Unix hosts file

nsswitch file
# use wrfile and rdfile to read and edit the /etc/nsswitch.conf file; it basically uses the same rules as a Unix nsswitch.conf file

DNS
# use wrfile and rdfile to read and edit the /etc/resolv.conf file; it basically uses the same rules as a Unix resolv.conf file
options dns.enable {on|off}
Note: on Enables DNS, off Disables DNS

Domain Name
options dns.domainname <domain>

DNS cache
options dns.cache.enable
options dns.cache.disable
# To flush the DNS cache
dns flush
# To see dns cache information
dns info

DNS updates
options dns.update.enable {on|off|secure}
Note: on Enables dynamic DNS updates, off Disables dynamic DNS updates, secure Enables secure dynamic DNS updates

time-to-live (TTL)
options dns.update.ttl <time>
# Example
options dns.update.ttl 2h
Note: time can be set in seconds (s), minutes (m), or hours (h), with a minimum value of 600 seconds and a maximum value of 24 hours

I will leave you to read the documentation regarding how to configure NIS.
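As a quick, hedged illustration of the file-editing workflow above (the hostname, addresses and domain are made-up placeholders, and the wrfile -a append option should be checked against your release), the files follow the familiar Unix formats:

# append a host entry to /etc/hosts
wrfile -a /etc/hosts 192.168.0.20 linuxhost1
# a minimal /etc/resolv.conf (viewed with rdfile /etc/resolv.conf) might contain:
# nameserver 192.168.0.1
# search example.com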

VLAN
This section is a brief introduction to VLANs. VLANs provide logical segmentation of networks by creating separate broadcast domains. A VLAN can span multiple physical network segments. The end-stations belonging to a VLAN are related by function or application. For example, end-stations in a VLAN might be grouped by department, such as engineering and accounting, or by project, such as release1 and release2. Because physical proximity of the end-stations is not essential in a VLAN, you can disperse the end-stations geographically and still contain the broadcast domain in a switched network. An end-station must become a member of a VLAN before it can share the broadcast domain with other end-stations on that VLAN. The switch ports can be configured to belong to one or more VLANs (static registration), or end-stations can register their VLAN membership dynamically, with VLAN-aware switches. VLAN membership can be based on one of the following:

Switch ports
End-station MAC addresses
Protocol

In Data ONTAP, VLAN membership is based on switch ports. With port-based VLANs, ports on the same or different switches can be grouped to create a VLAN. As a result, multiple VLANs can exist on a single switch. Any broadcast or multicast packets originating from a member of a VLAN are confined only among the members of that VLAN. Communication between VLANs, therefore, must go through a router. The following figure illustrates how communication occurs between geographically dispersed VLAN members.


In this figure, VLAN 10 (Engineering), VLAN 20 (Marketing), and VLAN 30 (Finance) span three floors of a building. If a member of VLAN 10 on Floor 1 wants to communicate with a member of VLAN 10 on Floor 3, the communication occurs without going through the router, and packet flooding is limited to port 1 of Switch 2 and Switch 3 even if the destination MAC address to Switch 2 and Switch 3 is not known.

GARP VLAN Registration Protocol (GVRP) uses Generic Attribute Registration Protocol (GARP) to allow end-stations on a network to dynamically register their VLAN membership with GVRP-aware switches. Similarly, these switches dynamically register with other GVRP-aware switches on the network, thus creating a VLAN topology across the network. GVRP provides dynamic registration of VLAN membership; therefore, members can be added or removed from a VLAN at any time, saving the overhead of maintaining static VLAN configuration on switch ports. Additionally, VLAN membership information stays current, limiting the broadcast domain of a VLAN only to the active members of that VLAN. By default, GVRP is disabled on all VLAN interfaces in Data ONTAP; however, you can enable it. After you enable GVRP on an interface, the VLAN interface informs the connecting switch about the VLANs it supports. This information (dynamic registration) is updated periodically. This information is also sent every time an interface comes up after being in the down state or whenever there is a change in the VLAN configuration of the interface.

A VLAN tag is a unique identifier that indicates the VLAN to which a frame belongs. Generally, a VLAN tag is included in the header of every frame sent by an end-station on a VLAN. On receiving a tagged frame, the switch inspects the frame header and, based on the VLAN tag, identifies the VLAN. The switch then forwards the frame to the destination in the identified VLAN. If the destination MAC address is unknown, the switch limits the flooding of the frame to ports that belong to the identified VLAN.

VLANs provide a number of advantages such as ease of administration, confinement of broadcast domains, reduced network traffic, and enforcement of security policies.

Create
vlan create [-g {on|off}] ifname vlanid
# Create VLANs with identifiers 10, 20, and 30 on the interface e4 of a storage system by using the following command:
vlan create e4 10 20 30
# Configure the VLAN interface e4-10 by using the following command
ifconfig e4-10 192.168.0.11 netmask 255.255.255.0

Add
vlan add e4 40 50

Delete
# Delete specific VLAN
vlan delete e4 30
# Delete all VLANs on an interface
vlan delete e4

Enable/Disable GVRP on VLAN
vlan modify -g {on|off} ifname

Stat
vlan stat <interface_name> <vlan_id>
# Examples
vlan stat e4
vlan stat e4 10

Interface Groups
An interface group is a feature in Data ONTAP that implements link aggregation on your storage system. Interface groups provide a mechanism to group together multiple network interfaces (links) into one logical interface (aggregate). After an interface group is created, it is indistinguishable from a physical network interface. Interface groups provide several advantages over individual network interfaces:

Higher throughput: Multiple interfaces work as one interface.
Fault tolerance: If one interface in an interface group goes down, your storage system stays connected to the network by using the other interfaces.
No single point of failure: If the physical interfaces in an interface group are connected to multiple switches and a switch goes down, your storage system stays connected to the network through the other switches.

You can create three different types of interface groups on your storage system: single-mode interface groups, static multimode interface groups, and dynamic multimode interface groups. Each interface group provides different levels of fault tolerance. Multimode interface groups provide methods for load balancing network traffic.


In a single-mode interface group, only one of the interfaces in the interface group is active. The other interfaces are on standby, ready to take over if the active interface fails. All interfaces in a single-mode interface group share a common MAC address. There can be more than one interface on standby in a single-mode interface group. If an active interface fails, your storage system randomly picks one of the standby interfaces to be the next active link. The active link is monitored and link failover is controlled by the storage system; therefore, a single-mode interface group does not require any switch configuration. Single-mode interface groups also do not require a switch that supports link aggregation.

Dynamic multimode interface groups can detect not only the loss of link status (as do static multimode interface groups), but also a loss of data flow. This feature makes dynamic multimode interface groups compatible with high-availability environments. The dynamic multimode interface group implementation in Data ONTAP is in compliance with IEEE 802.3ad (dynamic), also known as Link Aggregation Control Protocol (LACP). Dynamic multimode interface groups have some special requirements. They include the following:

Dynamic multimode interface groups must be connected to a switch that supports LACP.
Dynamic multimode interface groups must be configured as first-level interface groups.
Dynamic multimode interface groups should be configured to use the IP-based load-balancing method.

In a dynamic multimode interface group, all interfaces in the interface group are active and share a single MAC address. This logical aggregation of interfaces provides higher throughput than a single-mode interface group. A dynamic multimode interface group requires a switch that supports link aggregation over multiple switch ports. The switch is configured so that all ports to which links of an interface group are connected are part of a single logical port. For information about configuring the switch, see your switch vendor's documentation. Some switches might not support link aggregation of ports configured for jumbo frames. The load-balancing method for a multimode interface group can be specified only when the interface group is created. If no method is specified, the IP address based load-balancing method is used.

Create (single-mode)
# To create a single-mode interface group, enter the following command:
ifgrp create single SingleTrunk1 e0 e1 e2 e3
# To configure an IP address of 192.168.0.10 and a netmask of 255.255.255.0 on the single-mode interface group SingleTrunk1
ifconfig SingleTrunk1 192.168.0.10 netmask 255.255.255.0
# To specify the interface e1 as preferred
ifgrp favor e1

Create (multi-mode)
# To create a static multimode interface group, comprising interfaces e0, e1, e2, and e3 and using MAC address load balancing
ifgrp create multi MultiTrunk1 -b mac e0 e1 e2 e3
# To create a dynamic multimode interface group, comprising interfaces e0, e1, e2, and e3 and using IP address based load balancing
ifgrp create lacp MultiTrunk1 -b ip e0 e1 e2 e3

Create second-level interface group
# To create two interface groups and a second-level interface group. In this example, IP address load balancing is used for the multimode interface groups.
ifgrp create multi Firstlev1 e0 e1
ifgrp create multi Firstlev2 e2 e3
ifgrp create single Secondlev Firstlev1 Firstlev2
# To enable failover to a multimode interface group with higher aggregate bandwidth when one or more of the links in the active multimode interface group fail
options ifgrp.failover.link_degraded on
Note: You can create a second-level interface group by using two multimode interface groups. Second-level interface groups enable you to provide a standby multimode interface group in case the primary multimode interface group fails.

Create second-level interface group in an HA pair
# Use the following commands to create a second-level interface group in an HA pair. In this example, IP-based load balancing is used for the multimode interface groups.
# On StorageSystem1:
ifgrp create multi Firstlev1 e1 e2
ifgrp create multi Firstlev2 e3 e4
ifgrp create single Secondlev1 Firstlev1 Firstlev2
# On StorageSystem2:
ifgrp create multi Firstlev3 e5 e6
ifgrp create multi Firstlev4 e7 e8
ifgrp create single Secondlev2 Firstlev3 Firstlev4
# On StorageSystem1:
ifconfig Secondlev1 partner Secondlev2
# On StorageSystem2:
ifconfig Secondlev2 partner Secondlev1

Favoured/non-favoured interface
# select a favoured interface
ifgrp favor e3
# select a non-favoured interface
ifgrp nofavor e3

Add
ifgrp add MultiTrunk1 e4

Delete
ifconfig MultiTrunk1 down
ifgrp delete MultiTrunk1 e4
Note: You must configure the interface group to the down state before you can delete a network interface from the interface group

Destroy
ifconfig ifgrp_name down
ifgrp destroy ifgrp_name
Note: You must configure the interface group to the down state before you can destroy the interface group

Enable/disable an interface group
ifconfig ifgrp_name up
ifconfig ifgrp_name down

Status
ifgrp status [ifgrp_name]

Stat
ifgrp stat [ifgrp_name] [interval]

Diagnostic Tools
There are a number of tools and options that you can use to help with network-related problems.

Useful options

Ping throttling
# Throttle ping
options ip.ping_throttle.drop_level <packets_per_second>
# Disable ping throttling
options ip.ping_throttle.drop_level 0

Forged ICMP attacks
options ip.icmp_ignore_redirect.enable on
Note: You can disable ICMP redirect messages to protect your storage system against forged ICMP redirect attacks.

Useful Commands

netdiag
The netdiag command continuously gathers and analyzes statistics, and performs diagnostic tests. These diagnostic tests identify and report problems with your physical network or transport layers and suggest remedial action.

ping
You can use the ping command to test whether your storage system can reach other hosts on your network.

pktt
You can use the pktt command to trace the packets sent and received in the storage system's network.
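As a brief example of a troubleshooting session (the host address and interface name are placeholders, and the pktt syntax should be checked against your Data ONTAP release):

# confirm basic reachability of the default gateway
ping 192.168.0.254
# start a packet trace on interface e0a, check it, then stop it
pktt start e0a
pktt status
pktt stop e0a
# run the general network diagnostics
netdiag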

