
VERITAS Volume Manager System Administrator's Guide
by Brendan Choi
Last Modified: 01-18-01

-------------------------------------------------
INTRODUCTION TO VERITAS VOLUME MANAGER

VERITAS Volume Manager (aka VxVM, Sun Enterprise Volume Manager or Sun StorEdge Volume Manager) is VERITAS Corporation's software RAID product for Sun SPARC and UltraSPARC systems running SunOS 5.x (Solaris 2.x). VxVM can be controlled by a Visual Administrator GUI, a CLI menu, or by individual commands in a UNIX shell. NOTE: Under VxVM 3.x, the Storage Administrator GUI replaces the Visual Administrator GUI.

VxVM natively supports RAID 0 (striping), RAID 1 (mirroring), RAID 1+0 (mirroring plus striping), and RAID 5 (parity across multiple disks). VxVM is compatible with the VERITAS File System (VxFS) product, a journaling file system which can be purchased separately. NOTE: Under VxVM 3.x, RAID 0+1 (striping plus mirroring) is also available.

Basically, under VxVM entire hard disks or logical units of spindles (e.g., in an EMC Symmetrix or Sun A3500) are converted to VM Disks, those VM Disks are added to Disk Groups, VxVM volume operations are performed on those disks, and filesystems are created or resized. Thus, a Sun Enterprise server with many disk arrays can consist of many disk groups, each with many VM disks, volumes, and filesystems. Disk Groups can be moved from server to server. When the disks within a Disk Group are spread across many controllers and arrays, the whole VxVM system, both physically and logically, can become quite complex.

VxVM 3.x introduces new features like a stripe-mirror (RAID 0+1) volume layout, Online Relayout, and layered volumes. The "vxtask" command is also new in VxVM 3.x.

-----------------------------------------
SOLARIS VxVM STRUCTURES DISKS PRTVTOC

Under UNIX operating systems like Solaris, disks are organized into partition slices. Partitions are logical units of sectors and can overlap each other. Under VxVM, the partitions are pretty much invisible, but they do exist in the form of Public and Private Regions. These are especially important to understand on the root disk and its mirrors. Traditional VxVM disk layout:

-------------------
VTOC
-------------------
PRIVATE REGION
-------------------
PUBLIC REGION
  VOLUMES
    PLEXES
      SUBDISKS
-------------------

Under VxVM 3.x, you can also have Subvolumes or "layered volumes", e.g. under a striping-mirror layout.

-------------------
VTOC
-------------------
PRIVATE REGION
-------------------
PUBLIC REGION
  VOLUMES
    SUBVOLUMES
      PLEXES
        SUBDISKS
-------------------

If a disk already has Solaris partitions and data on it, the disk can be placed under VxVM control (e.g., for mirroring) through "encapsulation". The data will not be lost. All VxVM controlled disks, whether encapsulated or not, have a Private and Public Region. The Public Region spans nearly the entire disk and overlaps any slices. The Private Region is a very small slice that holds a copy of the VxVM configuration of the disk's Disk Group.

Example VTOC of encapsulated root disk (from prtvtoc command output):

*                           First      Sector     Last
* Partition  Tag  Flags     Sector     Count      Sector
  0          2    00        0          22749536   22749535
  1          3    01        22749536   4198392    26947927
  2          5    00        0          35368272   35368271
  3          0    00        26947928   2101552    29049479
  4          7    00        29049480   2101552    31151031
  6          14   01        0          35368272   35368271
  7          15   01        35363560   4712       35368271

VTOC of mirror (disk initialized by VxVM) of above encapsulated disk:

*                           First      Sector     Last
* Partition  Tag  Flags     Sector     Count      Sector
  0          2    00        2106264    22749536   24855799
  1          3    01        24855800   4198392    29054191
  2          5    01        0          35368272   35368271
  3          15   01        0          4712       4711
  4          14   01        4712       35363560   35368271
  6          0    00        4712       2101552    2106263
  7          7    00        33252584   2101552    35354135

Example VTOC of VxVM encapsulated non-root disk:

# prtvtoc /dev/rdsk/c1t1d4s2
*                           First      Sector     Last
* Partition  Tag  Flags     Sector     Count      Sector    Mount Directory
  2          0    00        0          2052288    2052287
  6          14   01        0          2052288    2052287
  7          15   01        2050272    2016       2052287
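VxVM marks the Public Region with VTOC tag 14 and the Private Region with tag 15, so both regions can be picked out of prtvtoc output with a short awk filter. This is an illustrative sketch run against a captured copy of the non-root disk output above, not a VxVM tool:

```shell
# Pull the VxVM Public (tag 14) and Private (tag 15) Region slices out of
# prtvtoc output. The variable reproduces the sample VTOC above; on a live
# system you would pipe "prtvtoc /dev/rdsk/c1t1d4s2" in instead.
vtoc='2  0   00  0        2052288  2052287
6  14  01  0        2052288  2052287
7  15  01  2050272  2016     2052287'

regions() {
  echo "$vtoc" | awk '$2 == 14 { print "public: slice=" $1 " offset=" $4 " len=" $5 }
                      $2 == 15 { print "private: slice=" $1 " offset=" $4 " len=" $5 }'
}

regions
# public: slice=6 offset=0 len=2052288
# private: slice=7 offset=2050272 len=2016
```

Note how the derived offsets and lengths line up with what "vxdisk list" reports for the same disk.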

Tag 14 indicates the Public Region partition, and tag 15 is the Private Region partition. The default Solaris tags and flags are:

Tags:
  UNASSIGNED  0x00
  BOOT        0x01
  ROOT        0x02
  SWAP        0x03
  USR         0x04
  BACKUP      0x05
  STAND       0x06
  VAR         0x07
  HOME        0x08
  ALTSCTR     0x09
  CACHE       0x0a

Flags:
  MOUNTABLE, READ AND WRITE  0x00
  NOT MOUNTABLE              0x01
  MOUNTABLE, READ ONLY       0x10

Under VxVM encapsulation (VxVM controlling Solaris partitioned disks), partitions for /, /usr, /var and swap are preserved, while other partitions on the rootdisk are removed (the data is preserved and converted to volumes). On other disks, encapsulation will also remove all Solaris partition information.

NOTE: More information about the tags can be found in /usr/include/sys/vtoc.h, /usr/include/sys/dklabel.h and vxvm/vollocal.h (in the VRTSvmdev package).

-----------------------------------------
VxVM DEFAULT VALUES

Max. number of Plexes per non-RAID5 Volume = 32
RAID 0 Stripe Unit Size (stripe width) = 64KB
RAID 5 Stripe Unit Size (stripe width) = 16KB
Dirty Region Logging Subdisk Length = 32KB
Max. number of RAID5 columns = 8

Min. number of RAID5 columns = 3
Private Region length = 1024 sectors
VxVM Record Length (e.g. volume, plex, subdisk) = 256 bytes

-----------------------------------------
VxVM OBJECTS

PHYSICAL DISKS
  Hard disks or logical units of spindles that may or may not be under VxVM control. May or may not have filesystems.

PARTITIONS
  Organizations on the Physical Disk. May or may not be contiguous, and can overlap. May or may not have filesystems. If filesystems exist, they can be brought under VxVM control via encapsulation.

VM DISKS
  Disks under VxVM control. May or may not have filesystems. Can only belong to one Disk Group at a time.

SUBDISKS
  Contiguous areas of VM Disks. Subdisks make up Plexes. A Plex can have more than one Subdisk.

LOG SUBDISK
  An optional subdisk within a Volume which contains DRL (Dirty Region Logging) information.

DISK GROUP
  A collection of VM Disks. A Disk Group can be deported and imported to another server system. There must be a functional Disk Group called "rootdg" for VxVM to start properly.

PLEX
  Technically, the organization of subdisks that make up a Volume, and any copies (mirrors) of those organizations if they exist. Usually refers to a mirror within a Volume.

LOG PLEX
  An optional plex within a Volume which contains DRL (Dirty Region Logging) information.

VOLUME
  An organization of Plexes and Subdisks on one or more VM Disks. May be concatenated (spanned), striped, or mirrored across many VM Disks. May or may not contain a filesystem. A Volume has at least one Plex (i.e., itself, unmirrored), but may have up to 32.

RAID5 VOLUME
  A RAID5 Volume can only have one RAID5 plex, but can have multiple RAID5 log plexes.

----------------------------------------
FILES RELATED TO THE BEHAVIOR OF VxVM

Kernel Information:

/etc/system
  This file contains VxVM kernel information. Edit this file to manipulate VxVM behavior upon bootup (e.g., disabling root disk encapsulation).

System Startup Scripts:

/etc/init.d/vxvm-sysboot
/etc/init.d/vxvm-startup1
/etc/init.d/vxvm-startup2
/etc/init.d/vxvm-reconfig
/etc/init.d/vxvm-recover

NOTE: These may be hard linked from entries in /etc/rc2.d and /etc/rcS.d.

Miscellaneous:

/etc/default/vxassist
  This file contains default settings for the vxassist command.

/etc/vx
  This directory contains VxVM related files.

/etc/vx/volboot
  This file contains important VxVM boot-time information.

/etc/vx/reconfig.d
/etc/vx/reconfig.d/state.d
  These directories contain important encapsulation information.

/etc/vx/reconfig.d/disk.d/cXtYdZ
  This directory contains information about disks encapsulated by VxVM.

/etc/vx/reconfig.d/disk.d/cXtYdZ/vtoc
  This is the vtoc file for an encapsulated disk. You can use this file to partition a disk with "fmthard -s".

/etc/vx/reconfig.d/state.d/install-db
  If this file exists, and the boot disk has not been encapsulated or brought under VxVM control, VxVM will not start up during bootup. Otherwise, VxVM will start up. NOTE: If VxVM controls the bootdisk, VxVM has to start up anyway, and so it will ignore the install-db file.

/etc/vx/reconfig.d/state.d/root-done
  This file is created by VxVM after rootdisk encapsulation.

/etc/vfstab
  This file controls which VERITAS volumes will be mounted on startup.

/etc/vfstab.prevm
  This file is created by VxVM during rootdisk encapsulation as a backup of the original vfstab file. DO NOT LOSE THIS FILE!!

/var/vxvm
  This directory contains VxVM database information.

/sbin, /usr/sbin
  These directories have many VxVM executable files.

NOTE: There may be differences in file structures among the VxVM releases.

-------------------------------------------------
INFORMATIONAL VxVM COMMANDS

Here are some commands that will show you useful information about your VxVM environment:

/usr/sbin/vxdisk
/usr/sbin/vxprint
/usr/sbin/vxinfo

To bring up the VxVM Visual Administrator GUI:

/opt/SUNWvxva/bin/vxva

Remember to set your DISPLAY environment variable. You must do this at a graphical console if the server is outside the firewall.

NOTE: You must be root to run most VxVM commands, including the Visual Administrator.

-------------------------------------------------
VXSERIAL VXLICENSE INSTALL LICENSES

VxVM has separate codewords for enabling the Base product and other components, such as the RAID-5 Option. To install a VERITAS license code, enter the supplied codeword after entering this command:

vxlicense -c

NOTE: Run this command for each codeword supplied. Older versions of VxVM might use the "vxserial" or "vxfsserial" command to manage the license codes. The vxlicense command is also used to install other VERITAS products like VxFS.

-----------------------------------------------
VXINSTALL FIRST COMMAND

After entering your VxVM licenses, you should run vxinstall to begin the initial setup of your VxVM layout.

vxinstall

This interactive command will encapsulate your boot disk, create rootdg, and add disks to this or other disk groups.

-----------------------------------------------
VxVM DAEMONS VXCONFIGD VXIOD VXRELOCD

The two daemons that must be running for VxVM to function are:

vxconfigd

vxiod

NOTE: These daemons should automatically start up as part of the Solaris boot process. The vxiod daemon DOES NOT show up under the 'ps' command.

Should you ever need to manually start VxVM:

To enable and run vxconfigd:

vxdctl enable ; vxconfigd

To start up 4 vxiod daemons:

vxiod set 4

NOTE: VERITAS recommends at least one vxiod daemon per CPU.

By default, VxVM Hot-Relocation is enabled. You should, therefore, see this daemon as being up:

vxrelocd

-----------------------------------------------
VXCONFIGD VXDCTL COMMAND SUMMARY

Some of the options for vxdctl and vxconfigd do the same thing. For each scenario below, choose the best command to run.

To disable vxconfigd:

vxconfigd -d
vxconfigd -m disable
vxdctl stop
vxdctl disable

To reset vxconfigd (useful for debugging):

vxconfigd -r reset

To kill vxconfigd:

vxconfigd -k
vxdctl -k stop

To start up vxconfigd:

vxconfigd -m boot
vxconfigd -m enable
vxdctl enable

-----------------------------------------------
VXCONFIGD HANG

If the vxconfigd daemon hangs, kill and restart it using:

vxconfigd -k -m enable

-----------------------------------------------
VXDCTL HARDWARE INSTALL

If VxVM does not recognize the presence of new disks installed, restart the VxVM daemons with this command:

vxdctl enable

-----------------------------------------------
VXDISK LIST DISKS AVAILABLE

To see basic VxVM information about each physical disk on your system:

vxdisk list

NOTE: This command will report on what VxVM thinks of the disk, regardless of whether the disk is controlled by VxVM or not. If the disk is not controlled by VxVM, it will be marked with the word "error".

For Multipath information:

vxdisk list <disk>

Examples:

vxdisk list c1t5d0s2
vxdisk list disk07

Example output:

Device:    c1t5d0s2
devicetag: c1t5d0
type:      simple
hostid:    rtfm
disk:      name=disk07 id=943977664.1229.rtfm
group:     name=rootdg id=943917799.1025.rtfm
info:      privoffset=1
flags:     online ready autoimport imported
pubpaths:  block=/dev/vx/dmp/c1t5d0s2 char=/dev/vx/rdmp/c1t5d0s2
version:   2.1
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=1 offset=1025 len=2096575

To see what Disk Groups the disks belong to, whether the Disk Groups are imported or not:

vxdisk -o alldgs list

Example Output:

DEVICE       TYPE    DISK      GROUP     STATUS
c0t0d0s2     sliced  rootdisk  rootdg    online
c1t3d0s2     sliced  -         (oradg)   online
c1t8d0s2     sliced  -         (oradg)   online
c1t9d0s2     sliced  -         (oradg)   online
c1t10d0s2    sliced  -         (oradg)   online
c1t11d0s2    sliced  -         (PRODdg)  online
c1t13d0s2    sliced  -         (PRODdg)  online
c1t14d0s2    sliced  -         -         online
c1t15d0s2    sliced  -         -         online
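Deported Disk Groups appear in parentheses in the GROUP column, so they can be picked out with a short filter. This is an illustrative sketch run against a captured copy of part of the listing above, not a VxVM command:

```shell
# List deported disk groups, which "vxdisk -o alldgs list" shows in
# parentheses. The variable holds sample rows; on a live system you would
# pipe the real command output in instead.
listing='c0t0d0s2   sliced  rootdisk  rootdg    online
c1t3d0s2   sliced  -         (oradg)   online
c1t8d0s2   sliced  -         (oradg)   online
c1t11d0s2  sliced  -         (PRODdg)  online'

deported_groups() {
  echo "$listing" | awk '$4 ~ /^\(/ { gsub(/[()]/, "", $4); print $4 }' | sort -u
}

deported_groups   # prints PRODdg and oradg, one per line
```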

----------------------------------------
VXDG DISPLAY DISK GROUP INFORMATION

Simple Disk Group information:

vxdg list

Detailed Disk Group information:

vxdg list <disk group>

Example Output:

# vxdg list db_data
Group:     db_data
dgid:      925827691.1088.rtfm
import-id: 0.1220
flags:
version:   20
copies:    nconfig=default nlog=default
config:    seqno=0.1394 permlen=1090 free=1068 templen=15 loglen=165
config disk c1t0d0s2 copy 1 len=1090 state=clean online
config disk c1t3d0s2 copy 1 len=1090 disabled
config disk c1t4d0s2 copy 1 len=1090 state=clean online
config disk c1t5d0s2 copy 1 len=1090 disabled
log disk c1t0d0s2 copy 1 len=165
log disk c1t3d0s2 copy 1 len=165 disabled
log disk c1t4d0s2 copy 1 len=165
log disk c1t5d0s2 copy 1 len=165 disabled

----------------------------------
DISK GROUP CONFIGURATION

VERITAS saves the Disk Group configuration to selected disks using an algorithm. Without these config databases, the Disk Group will not start up. A disk failure could lead to loss of the whole Disk Group configuration if that was the only disk with the database. To make VxVM save a config database to every disk, do this:

1. vxdg list <Disk Group name> | grep config

   Note that some disks have the config databases marked "disabled".

2. vxedit -g <Disk Group name> set nconfig=all db
   vxedit -g <Disk Group name> set nlog=all db
   vxdg flush <Disk Group name>

3. Now check that all config databases are "online":

   vxdg list <Disk Group name>

----------------------------------
VXPRIVUTIL DISK GROUP CONFIGURATION

To see the contents of the Private Region on a disk, even if it's not in an imported Disk Group, use the /etc/vx/diag.d/vxprivutil command.

vxprivutil scan /dev/rdsk/c1t3d3s3

Example Output:

diskid:  943979447.1267.rtfm
group:   name=db_data id=943979360.1265.rtfm
flags:   private noautoimport
hostid:
version: 2.1
iosize:  512
public:  slice=4 offset=0 len=4143520
private: slice=3 offset=1 len=10639
update:  time: 943993750 seqno: 0.14
headers: 0 248
configs: count=1 len=7827
logs:    count=1 len=1186

You can also use the "dumpconfig" option to dump the config to a file that vxmake can read.

vxprivutil dumpconfig /dev/rdsk/c1t3d3s3
vxprint -D - -mvps > /tmp/<output file>

----------------------------------
VXDG LIST SPARE DISKS

To list spare disks that will be used for Hot-Relocation:

vxdg -g <disk group> spare

----------------------------------
VXPRINT DETAILED OUTPUT VM OBJECTS

To see detailed information of all VxVM objects on your system (useful for diagnostic purposes):

vxprint -ht

To see all information about all subdisks:

vxprint -ls

To see all information about all plexes:

vxprint -lp

To see all information about all volumes:

vxprint -lv

To see information about a particular plex:

vxprint -g <disk group> -l <plex>

To see information about a particular volume:

vxprint -g <disk group> -l <volume name>

To see information about a subdisk:

vxprint -g <disk group> -l <subdisk>

----------------------------------------------------
DISK TYPES

VERITAS categorizes disks in three types:

1. nopriv disks (e.g., ramdisks, transient devices)
2. sliced disks (default type, most disks, devices that can be partitioned)
3. simple disks (devices that cannot be partitioned, partitions or virtual disks on spindles, RAID LUNs, e.g. EMC logical volumes; Private and Public Regions are on the same "partition")

NOTE: Disks here refers to what VERITAS sees. The disks may be actual physical spindles or really logical devices.

----------------------------------------------------
VXPRINT OUTPUT EXAMPLES

These are the headers of the vxprint output:

DG NAME  NCONFIG  NLOG     MINORS    GROUP-ID
DM NAME  DEVICE   TYPE     PRIVLEN   PUBLEN    STATE
V  NAME  USETYPE  KSTATE   STATE     LENGTH    READPOL    PREFPLEX
PL NAME  VOLUME   KSTATE   STATE     LENGTH    LAYOUT     NCOL/WID   MODE
SD NAME  PLEX     DISK     DISKOFFS  LENGTH    [COL/]OFF  DEVICE     MODE
SV NAME  PLEX     VOLNAME  NVOLLAYR  LENGTH    [COL/]OFF  AM/NM      MODE

Here's a typical rootdg Disk Group output:

dg rootdg   default    default  0     965178473.1025.cawc-wigdb1

dm c0t12d0  c0t12d0s2  sliced   3590  17682083  -
dm c0t13d0  c0t13d0s2  sliced   3590  17678493  -
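The PRIVLEN and PUBLEN columns in the dm lines above are counts of 512-byte sectors, so converting to megabytes is a quick shell calculation (2048 sectors per MB). A minimal illustrative sketch, not a VxVM utility:

```shell
# Convert a sector count (512-byte sectors, as shown in PRIVLEN/PUBLEN or
# in the LENGTH column of "vxdg free") to whole megabytes.
sectors_to_mb() {
  echo $(( $1 / 2048 ))
}

sectors_to_mb 17682083   # PUBLEN of c0t12d0 above; prints 8633
sectors_to_mb 3590       # PRIVLEN above; prints 1
```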

Here's a typical vxprint output for the Rootdisk volume:

v  rootvol        root        ENABLED     ACTIVE    9289917  ROUND    -
pl rootvol-01     rootvol     ENABLED     ACTIVE    9289917  CONCAT   -       RW
sd rootdisk-B0    rootvol-01  rootdisk    9289916   1        0        c0t0d0  ENA
sd rootdisk-02    rootvol-01  rootdisk    0         9289916  1        c0t0d0  ENA
pl rootvol-02     rootvol     ENABLED     ACTIVE    9289917  CONCAT   -       RW
sd rootdisk03-01  rootvol-02  rootdisk03  0         9289917  0        c7t0d0  ENA

Here's an output for a typical striped/logged volume:

v  IWATCH      fsgen      ENABLED   ACTIVE    1024000  SELECT  -       
pl IWATCH-01   IWATCH     ENABLED   ACTIVE    1034232  STRIPE  4/128   RW
sd appdg01-06  IWATCH-01  appdg01   12076533  258552   0/0     c0t2d0  ENA
sd appdg02-06  IWATCH-01  appdg02   12076533  258552   1/0     c0t3d0  ENA
sd appdg03-06  IWATCH-01  appdg03   12076533  258552   2/0     c0t4d0  ENA
sd appdg04-06  IWATCH-01  appdg04   12076533  258552   3/0     c0t5d0  ENA
pl IWATCH-02   IWATCH     ENABLED   ACTIVE    1034232  STRIPE  4/128   RW
sd appdg05-06  IWATCH-02  appdg05   12076533  258552   0/0     c7t2d0  ENA
sd appdg06-06  IWATCH-02  appdg06   12076533  258552   1/0     c7t3d0  ENA
sd appdg07-06  IWATCH-02  appdg07   12076533  258552   2/0     c7t4d0  ENA
sd appdg08-06  IWATCH-02  appdg08   12076533  258552   3/0     c7t5d0  ENA
pl IWATCH-03   IWATCH     ENABLED   ACTIVE    LOGONLY  CONCAT  -       RW
sd appdg06-07  IWATCH-03  appdg06   12335085  5        LOG     c7t3d0  ENA

NOTE: A subdisk's offset within a Public Region is NOT the same as the Public Region's offset within a physical disk slice. This is mainly important with regard to encapsulated disks. "vxdisk list <disk name>" shows the offsets of the Public and Private Regions. vxprint shows the offsets of the subdisks within the Regions.

----------------------------------
VXDG LIST FREE SPACE IN DISK GROUP

To see the amount of free space in all Disk Groups:

vxdg free

To see the amount of free disk space inside a Disk Group:

vxdg -g <disk group> free

NOTE: The free space is under the LENGTH column and is measured in 512-byte sectors.

----------------------------------
VXASSIST LIST MAXIMUM VOLUME SIZE IN DISK GROUP

To see the largest volume that can be currently created in a Disk Group:

vxassist -g <disk group> maxsize

----------------------------------
VXINFO INFORMATION ABOUT VOLUME PLEXES

To find basic information about the plexes and volumes in a disk group:

vxinfo -p -g <disk group>

----------------------------------
DISK ENCAPSULATION REQUIREMENTS

1. Small amount of space at the beginning or end of the disk, or in swap.
2. Two free partitions (unassigned).
3. Slice s2 representing the whole disk.

----------------------------------
ROOT ENCAPSULATED FILESYSTEMS

If you have a boot disk you want encapsulated and mirrored with another disk, VxVM will lay out the subdisks within the two physical disks like this. EXAMPLE: / and /var on the bootdisk.

rootdisk
----------------
rootdisk-01
----------------
rootdisk-02
----------------
rootdisk-03
----------------
rootdisk-B0
----------------

rootdisk2
----------------
rootdisk2-01
----------------
rootdisk2-02
----------------
rootdisk2-03
----------------

Notice that encapsulation adds the rootdisk-B0 subdisk. This VxVM mechanism diverts any I/O away from Block 0 of the disk (the first 512 bytes, where the critically important VTOC information is stored) to an offset within the Public Region. -B0 subdisks appear on any disks encapsulated by VxVM, not just the root disk. Here's a vxprint output that shows rootdisk-B0:

Disk group: rootdg

V  NAME         USETYPE     KSTATE    STATE     LENGTH  READPOL    PREFPLEX
PL NAME         VOLUME      KSTATE    STATE     LENGTH  LAYOUT     NCOL/WID  MODE
SD NAME         PLEX        DISK      DISKOFFS  LENGTH  [COL/]OFF  DEVICE    MODE

v  rootvol      root        ENABLED   ACTIVE    66528   ROUND      -
pl rootvol-01   rootvol     ENABLED   ACTIVE    66528   CONCAT     -         RW
sd rootdisk-B0  rootvol-01  rootdisk  66527     1       0          c0t3d0    ENA
sd rootdisk-02  rootvol-01  rootdisk  0         66527   1          c0t3d0    ENA

In general, the vxprint output shows offsets within the Public Region. In this example, rootdisk-B0 remaps I/O aimed at Block 0 to offset 66527. The Public Region is usually offset itself, and the Private Region contains cylinders inside the Public Region. The area rootdisk-B0 occupies is the overlap between these two Regions. Since no user process can write to the Private Region, Block 0 is protected.

If during encapsulation there were no free cylinders at the ends of the boot disk, and VxVM had to get free cylinders from swap space, another special subdisk is created, outside the rootvol. It is usually named <rootdisk name>Priv. This is the Private Region. Example vxprint output:

sd rootdiskPriv rootdisk 7171200 2159 PRIVATE c0t0d0 ENA

On the mirror of the encapsulated boot disk, and on all other VxVM initialized disks, the Private Region occupies the Block 0 area of the disk.

----------------------------------
BOOT DISK MIRROR ENCAPSULATED ROOTDISK NON-RAS VXMIRROR

Once you have your boot disk encapsulated, you want it mirrored. A quick and simple way to mirror every volume on the rootdisk after encapsulation is to execute:

vxdg -g rootdg adddisk rootdisk2=c1t1d1
vxmirror -g rootdg rootdisk rootdisk2

In this example, rootdisk is the primary boot disk, and rootdisk2 is the secondary boot disk (i.e., the "mirror"). Since VxVM mirrors the volumes in alphabetical order, the ordering of the volumes may not meet the Sun RAS (Reliability, Availability, Serviceability) requirements.

-------------------------------------------------
MIRROR ROOTDISK SUN RAS

Here is the Sun RAS procedure to mirror the rootdisk. The order is important if you are going to re-initialize the encapsulated rootdisk for a Sun RAS configuration. All RAS information was derived from Sun BluePrints Online - August 2000, "Towards a Reference Configuration for VxVM Managed Boot Disks", by Gene Trantham and John S. Howard.

1. /etc/vx/bin/vxdisksetup -i c1t1d1
2. vxdg -g rootdg adddisk rootmirror=c1t1d1
3. /etc/vx/bin/vxrootmir rootmirror
4. vxassist -g rootdg mirror swapvol rootmirror
5. vxassist -g rootdg mirror var rootmirror
6. vxassist -g rootdg mirror opt rootmirror
7. vxdisk -g rootdg list

-------------------------------------------------
REINITIALIZE ENCAPSULATED ROOTDISK VXMKSDPART SUN RAS

Sun's RAS configuration for the rootdisk recommends that the rootdisk be reinitialized, and encapsulation eliminated. Make sure you have mirrored the rootdisk first. Then follow this procedure:

1.  vxplex dis rootvol-01
2.  vxplex dis swapvol-01
3.  vxplex dis var-01
4.  vxplex dis opt-01
5.  vxedit -rf rm rootvol-01 swapvol-01 var-01 opt-01
6.  vxedit rm rootdiskPriv (if rootdiskPriv exists)
7.  vxdg -g rootdg rmdisk rootdisk
8.  /etc/vx/bin/vxdisksetup -i c0t0d0
9.  vxdg -g rootdg adddisk rootdisk=c0t0d0
10. /etc/vx/bin/vxrootmir rootdisk
11. vxassist -g rootdg mirror swapvol
12. vxassist -g rootdg mirror var
13. vxassist -g rootdg mirror opt
14. Use /usr/lib/vxvm/bin/vxmksdpart to make the underlying partitions for each volume on both boot disks.

    NOTE: The rootvol root volume partition should be made already by VxVM.

    EXAMPLE: Make a partition for /opt. Use prtvtoc to see which slices are available. Use vxprint to find the subdisk for /opt.

    vxmksdpart -g <disk group> <subdisk> <slice> <tag> <flag>
    vxmksdpart -g rootdg rootdisk-03 5 0x09 0x01

15. Re-specify the dump device (e.g., use dumpadm).
16. Edit the nvramrc to contain the aliases of bootable disks.

Now, rootvol does not depend on encapsulated boot disks.

----------------------------------
VXDISKADD ADD DISK TO EXISTING OR NEW DISK GROUP

To add a Physical Disk to a new or existing Disk Group:

vxdiskadd c0t1d0

NOTE: This command will let you pick an existing Disk Group, or make a new one. It will give you the option of using custom or default disk names.

---------------------------------
VXDG CREATE NEW DISK GROUP FROM PHYSICAL DISK

You can create a new disk group with Physical Disks that are not already in a Disk Group.

vxdg init <disk group> <VM disk>=<disk device>

EXAMPLE:

vxdg init webdg webdg01=c0t2d0

----------------------------------

VXDG ADDDISK PHYSICAL DISK TO DISK GROUP

Another way to add a Physical Disk to an existing Disk Group is:

vxdg -g <disk group> adddisk <VM disk>=<disk device>

EXAMPLE:

vxdg -g webdg adddisk webdg02=c3t4d0

---------------------------------
VXASSIST CREATE CONCATENATED VOLUMES

vxassist -g <disk group> make <volume> <size> <disk1> <disk2> ...

NOTE: This will also create the first (yet to be mirrored) plex for you.

EXAMPLE: To create a concatenated 2GB Volume in a Disk Group on a specific disk:

vxassist -g webdg make webvol1 2g disk03

NOTE: The <size> setting can be a variety of units:

g = GB
m = MB
k = KB
s = Standard Sector Size (usually 512 bytes)
b = 512 bytes
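Since vxassist sizes ultimately resolve to 512-byte sectors, the unit table above can be captured in a small helper. This is an illustrative sketch of the arithmetic, not a VxVM utility:

```shell
# Convert a vxassist-style size argument (2g, 300m, 64k, or a bare sector
# count) into 512-byte sectors, following the unit table above.
size_to_sectors() {
  num=${1%[gmksb]}                    # strip a trailing unit letter, if any
  case $1 in
    *g) echo $(( num * 2097152 )) ;;  # 1 GB = 2097152 sectors
    *m) echo $(( num * 2048 )) ;;     # 1 MB = 2048 sectors
    *k) echo $(( num * 2 )) ;;        # 1 KB = 2 sectors
    *)  echo "$num" ;;                # s, b, or no unit: already sectors
  esac
}

size_to_sectors 2g     # prints 4194304
size_to_sectors 300m   # prints 614400
size_to_sectors 1024   # prints 1024
```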

If no unit is given, VxVM will interpret the number to be in sectors.

--------------------------------------------
DEVICE FILES AMONG DISK GROUPS

Volumes in the Root Disk Group (rootdg) have device files of the following form:

Block device:           /dev/vx/dsk/<volume name>
Raw (character) device: /dev/vx/rdsk/<volume name>

Volumes in all other disk groups have device files of this form:

Block device:           /dev/vx/dsk/<disk group>/<volume name>
Raw (character) device: /dev/vx/rdsk/<disk group>/<volume name>

--------------------------------------------
VXASSIST CREATE STRIPED VOLUMES COLUMNS

To create a 300MB volume striped on 3 disks in a Disk Group:

vxassist -g webdg make webvol1 300m layout=stripe disk03 disk04 disk05

To specify the number of stripes (columns) on a controller:

vxassist -g appdg make ORAPRD 3g layout=stripe,log ctrl:c0 ncol=4
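With ncol=4 and a 128-sector (64KB) stripe unit, the column a given volume sector lands in follows from simple modular arithmetic. An illustrative sketch of how striping distributes addresses, not a VxVM tool:

```shell
# Map a volume sector offset to its stripe column for a 4-column stripe
# with a 128-sector (64KB) stripe unit, as in the vxassist example above.
STRIPE_UNIT=128
NCOL=4

stripe_column() {
  echo $(( ($1 / STRIPE_UNIT) % NCOL ))
}

stripe_column 0     # prints 0
stripe_column 128   # prints 1
stripe_column 512   # prints 0 (one full stripe of 4*128 sectors wraps around)
```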

NOTE: Aliases for "ncol" include "stripes", "ncolumn", "ncolumns", "ncols", "columns", "cols", and "nstripe".

To then mirror this to another 4 disks on another controller:

vxassist -g appdg mirror ORAPRD ctrl:c5 ncol=4

To specify a stripe unit size (the default is 64K):

vxassist -g appdg make ORAPRD 3g layout=stripe stripeunit=128 ctrl:c0 ncol=4

--------------------------------------------
VXASSIST CREATE RAID5 VOLUMES

To create a 700MB RAID5 volume:

vxassist -g webdg make webvol9 700m layout=raid5

---------------------------------
VXMAKE CREATE RAID5 PLEXES

To create a 3 column RAID5 plex from 6 subdisks:

vxmake -g <disk group> plex <plex name> layout=raid5 stwidth=32 \
  sd=<subdisk1>:0,<subdisk2>:1,<subdisk3>:2,<subdisk4>:0,<subdisk5>:1,<subdisk6>:2

If you want to create the subdisks later:

vxmake -g <disk group> plex <plex name> layout=raid5 ncolumn=3 stwidth=32

NOTE: "stwidth" is the stripe unit size, in sectors if no unit is specified.

---------------------------------
VXVOL RESYNCHRONIZE VOLUME

To resync normal or RAID5 volumes:

vxvol -g <disk group> resync <volume name>

For RAID5 volumes, this will resync the parity.

---------------------------------
VXVOL STALE SUBDISKS

To recover stale RAID5 subdisks:

vxvol -g <disk group> recover <volume> <subdisk>

To recover multiple stale RAID5 subdisks:

vxvol -g <disk group> recover <volume>

----------------------------------

VXVOL MAINTENANCE MODE

If all plexes are stale, put the volume in maintenance mode:

vxvol -g <disk group> maint <volume name>

---------------------------------
VXVOL START STOP ALL VOLUME ACTIVITY

To stop all VxVM activity on a Volume:

vxvol -g <disk group> stop <volume name>

To stop all enabled volumes in a Disk Group:

vxvol -g <disk group> stopall

To start all disabled volumes in a Disk Group:

vxvol -g <disk group> startall

or

vxrecover -g <disk group> -sb

---------------------------------
VXVOL INITIALIZE VOLUME AND SYNCHRONIZE FROM PLEX

Before starting a volume, you may need to initialize it by choosing a plex and synchronizing all other plexes to it.

vxvol -g <disk group> init clean <volume name> <plex name>

EXAMPLE: To make all plexes synchronize with plex webvol-02:

vxvol -g webdg init clean webvol webvol-02

If the data does not need to be in sync, you can temporarily initialize the volume, restore data from backup, and re-initialize and start the volume:

vxvol -g <disk group> init enable <volume name>
vxvol -g <disk group> init active <volume name>

If you need to destroy all data on a volume before a full restore:

vxvol -g <disk group> init zero <volume name>

---------------------------------------------
VXVOL VXRECOVER START VOLUME

To force start a volume, and resync all plexes in the background:

vxvol -g <disk group> -o bg -f start <volume name>

To start all disabled volumes in a disk group:

vxrecover -g <disk group> -s

or

vxvol -g <disk group> startall

--------------------------------------------------------------------
VOLUME PLEX STATES KERNEL STATES

The following information was taken from Sun InfoDocs 17588 and 17963. Both Volumes and Plexes have Kernel States and ordinary States.

EXAMPLE:

V  NAME           USETYPE     KSTATE      STATE     LENGTH   READPOL  PREFPLEX
PL NAME           VOLUME      KSTATE      STATE     LENGTH   LAYOUT   NCOL/WID  MODE

v  rootvol        root        ENABLED     ACTIVE    9289917  ROUND    -
pl rootvol-01     rootvol     ENABLED     ACTIVE    9289917  CONCAT   -         RW
sd rootdisk-B0    rootvol-01  rootdisk    9289916   1        0        c0t0d0    ENA
sd rootdisk-02    rootvol-01  rootdisk    0         9289916  1        c0t0d0    ENA
pl rootvol-02     rootvol     ENABLED     ACTIVE    9289917  CONCAT   -         RW
sd rootdisk03-01  rootvol-02  rootdisk03  0         9289917  0        c7t0d0    ENA

Volume Kernel States:

* DISABLED - The volume may not be accessed.
* DETACHED - The volume cannot be read or written, but plex device operations and ioctl functions are accepted.
* ENABLED  - The volume can be read and written.

Volume States:

1. State=CLEAN, Kernel State=DISABLED
   The volume is not started and its plexes are synchronized.

2. State=ACTIVE
   The volume has been started (Kernel State ENABLED) or was in use when the machine was rebooted. If the volume is DISABLED, the plexes cannot be guaranteed to be consistent, but will be made consistent when the volume is started.

3. State=EMPTY, Kernel State=DISABLED
   The volume contents are not initialized. The kernel state is always DISABLED when the volume is EMPTY.

4. State=SYNC
   The volume is either in read-writeback recovery mode (Kernel State ENABLED) or was in this mode when the machine was rebooted (Kernel State DISABLED). With read-writeback recovery, plex consistency is recovered by reading data blocks of one plex and writing the data to all other writable plexes. If the volume is Kernel State ENABLED, the plexes are being resynchronized. If the volume is Kernel State DISABLED, it was resyncing when the machine was rebooted and the plexes still need to be resynchronized.

5. State=NEEDSYNC
   The volume will require a resynchronization operation the next time it is started.

RAID-5 Volume States

RAID-5 volumes have their own set of volume states:

CLEAN    - The volume is not started and its parity is good. The RAID-5 plex stripes are consistent.

ACTIVE   - The volume has been started or was in use when the machine was rebooted. If the volume is DISABLED, the parity can't be guaranteed to be synchronized.

EMPTY    - The volume contents are not initialized. The kernel state is always DISABLED when the volume is EMPTY.

SYNC     - The volume is either undergoing a parity resynchronization or was having its parity resynchronized when the machine was rebooted.

NEEDSYNC - The volume will require a parity resynchronization operation the next time it is started.

REPLAY   - The volume is in a transient state as part of a log replay. A log replay occurs when it becomes necessary to use logged parity and data.

Plex Kernel States:

* DISABLED - The plex may not be accessed.
* DETACHED - A write to the volume is not reflected to the plex. A read request from the volume will never be satisfied from the plex. Plex operations and ioctl functions are accepted.
* ENABLED  - A write request to the volume will be reflected to the plex. A read request from the volume will be satisfied from the plex.

NOTE: No user intervention is required to set these states; they are maintained internally. On a system that is operating properly, all plexes are enabled.

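The kernel states above can be checked in bulk. A minimal sketch of such a check, assuming the typical `vxprint -pt` column order (TY NAME ASSOC KSTATE STATE ...) — the sample lines below are illustrative, not real output:

```shell
# Flag plexes whose kernel state is not ENABLED.
# On a live system, pipe `vxprint -pt` in instead of the sample data.
bad_plexes() {
  awk '$1 == "pl" && $4 != "ENABLED" { print $2, $4 }'
}
printf '%s\n' \
  'pl webvol-01 webvol ENABLED ACTIVE 204800 CONCAT - RW' \
  'pl webvol-02 webvol DETACHED STALE 204800 CONCAT - RW' \
  | bad_plexes
```

Only webvol-02 is reported, since its kernel state is DETACHED.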
Plex States: EMPTY CLEAN ACTIVE STALE OFFLINE TEMP TEMPRM TEMPRMSD IOFAIL

EMPTY Plex State - When a volume is created and its plexes are not yet initialized, the plexes are in an EMPTY state.

CLEAN Plex State - A plex is in a CLEAN state when it is known to contain a good copy (mirror) of the volume contents. If all the plexes of a volume are CLEAN and the volume is DISABLED, you can start the volume with all plexes in sync.

ACTIVE Plex State - A plex can be in the ACTIVE state in two situations:

* When the volume is started and the plex fully participates in normal volume I/O (meaning the plex contents change as the contents of the volume change).
* When the volume was stopped as a result of a system crash and the plex was active at the moment of the crash.

In the latter case, a system failure may leave plex contents in an inconsistent state. When a volume is started, VxVM performs a recovery action to guarantee that the contents of the plexes marked as ACTIVE are made identical.
--------------------------------------------------------------
NOTE: ACTIVE should be the most common state for plexes on a well-running system.
_______________________________________________________________

STALE Plex State - If there is a possibility that a plex does not have the complete and current volume contents, the plex is placed in a STALE state. Also, if I/O errors occur on a plex, the kernel stops using and updating that plex, and the operation sets the state of the plex to STALE. To re-attach the plex to the volume, run *vxplex -g <disk group> att <volume name> <plex name>*, or highlight the plex and go to advanced options -> plex -> attach plex; this will sync the data and set the plex to the ACTIVE state. To force a plex into the STALE state, run *vxplex -g <disk group> det <plex name>*, or highlight the plex and go to advanced options -> plex -> detach plex.

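The detach/reattach cycle above can be wrapped in a small helper. A dry-run sketch — it only prints the commands it would run (remove the echo to execute them), and the disk group, volume and plex names are examples:

```shell
# Print the vxplex detach and reattach commands for one plex (dry run).
plex_cycle() {
  dg=$1; vol=$2; plex=$3
  echo "vxplex -g $dg det $plex"
  echo "vxplex -g $dg att $vol $plex"
}
plex_cycle webdg webvol webvol-02
```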
OFFLINE Plex State - The *vxmend -g <disk group> off <plex name>* command detaches a plex from a volume, setting the plex state to OFFLINE. Although the detached plex remains associated with the volume, changes to the volume are not reflected to the plex while it is in the OFFLINE state. Running *vxplex -g <disk group> att <volume name> <plex name>*, or highlighting the plex and going to advanced options -> plex -> attach plex, will set the plex state to STALE; data recovery will start after the vxvol start operation.

TEMP Plex State - A utility will set the plex state to TEMP at the start of an operation and to an appropriate state at the end of the operation. For example, attaching a plex to an enabled volume requires copying volume contents to the plex before it can be considered fully attached.

If the system goes down for any reason, a TEMP plex state indicates that the operation is incomplete; a subsequent vxvol start will dissociate plexes in the TEMP state.

TEMPRM Plex State - A TEMPRM plex state resembles a TEMP state except that at the completion of the operation, the TEMPRM plex is removed. If the system goes down for any reason, a TEMPRM plex state indicates that the operation is incomplete; a subsequent vxvol start will dissociate and remove the TEMPRM plex.

TEMPRMSD Plex State - The TEMPRMSD plex state is used by vxassist when attaching new plexes. If the operation does not complete, the plex and its subdisks are removed.

IOFAIL Plex State - The IOFAIL plex state is associated with persistent state logging. On detecting the failure of an ACTIVE plex, vxconfigd places that plex in the IOFAIL state so that it is disqualified from the recovery selection process at volume start time.
--------------------------------------------------------------------
VXEDIT USER GROUP OWNERSHIP PERMISSIONS

To set the ownership and permissions on a Volume, use the vxedit command.

vxedit set user=<user name> group=<group name> mode=<xxxx> <volume>

EXAMPLE: vxedit set user=bchoi group=sysadm mode=755 webvol01
---------------------------------------------------------------------

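A hypothetical pre-flight wrapper for the ownership change above: it rejects a mode that is not octal, then prints the vxedit command (dry run; the user, group and volume names are examples, not from the original guide):

```shell
# Validate the mode string, then emit the vxedit command (dry run).
set_vol_perms() {
  user=$1; group=$2; mode=$3; vol=$4
  case "$mode" in
    [0-7][0-7][0-7]|[0-7][0-7][0-7][0-7]) ;;
    *) echo "bad mode: $mode" >&2; return 1 ;;
  esac
  echo "vxedit set user=$user group=$group mode=$mode $vol"
}
set_vol_perms oracle dba 660 oravol01
```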
VXEDIT RENAME VOLUME To rename a volume, simply use the vxedit command. EXAMPLE: vxedit -v -g <disk group> rename <old name> <new name> ---------------------------------VXEDIT FORCE REMOVE VOLUME AND UNDERLYING OBJECTS To force remove a Volume and its Plexes and Subdisks: vxedit -g <disk group> -rf rm <volume name> ---------------------------------VXEDIT RESERVE VM DISK To protect a VM disk from a vxassist command that does not explicitly refer to that disk: vxedit -g <disk group> set reserve=on <disk> To disable reservation: vxedit -g <disk group> set reserve=off <disk> --------------------------------------------VXEDIT VM DISKS AND HOT-RELOCATION POOL To add a disk to the Hot-Relocation Pool: vxedit -g <disk group> set spare=on <disk> To remove a disk from the Hot-Relocation Pool: vxedit -g <disk group> set spare=off <disk> --------------------------------------------VXDG REMOVE A DISK FROM DISK GROUP To remove a disk from a Disk Group: vxdg -g <disk group> rmdisk <disk> EXAMPLE: vxdg -g oradg rmdisk disk01 NOTE: You must disable any affected volumes or offline a mirror to do this safely. To force a remove even if subdisks are on the disk: vxdg -g <disk group> -k rmdisk <disk> ---------------------------------------------

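Before running `vxdg rmdisk`, it helps to confirm no subdisks remain on the VM disk. A sketch, assuming `vxprint -st` rows shaped like "sd <subdisk> <plex> <disk> <diskoffs> <length> ..." — the sample data below is illustrative only:

```shell
# List subdisks still living on a given VM disk.
# On a live system, pipe `vxprint -st` in instead of the sample data.
subdisks_on() {
  awk -v d="$1" '$1 == "sd" && $4 == d { print $2 }'
}
printf '%s\n' \
  'sd disk01-01 webvol-01 disk01 0 204800 0 c1t1d0 ENA' \
  'sd disk02-01 webvol-02 disk02 0 204800 0 c1t2d0 ENA' \
  | subdisks_on disk01
```

If the function prints nothing for the disk, the rmdisk should be safe.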
VXDISK TAKE VM DISK OFFLINE

To take a disk out of service (stop all I/O to it), remove the disk from the Disk Group, and run:

vxdisk offline <disk device>

EXAMPLE: vxdisk offline c1t1d0s2
------------------------------------------
VXDISK REMOVE PHYSICAL DISK FROM VxVM CONTROL

After you remove a disk from a Disk Group, you can remove it from VxVM control:

EXAMPLE: vxdisk rm c1t1d0s2
------------------------------------------
NEWFS MKFS CREATE MOUNT FILESYSTEM ON VOLUME

To create a UFS filesystem on a volume:

newfs <raw_device_of_volume>

EXAMPLE: newfs /dev/vx/rdsk/webdg/webvol1

To create a VxFS filesystem, you should use "mkfs".

mkfs -F vxfs -o bsize=8192 -o logsize=2048 \
/dev/vx/rdsk/webdg/webvol1

To mount the filesystem, simply do:

mount -F vxfs /dev/vx/dsk/webdg/webvol1 /mnt
----------------------------------
VxFS FSADM LARGEFILES

If you are using VxFS 3.x, you can enable large file support to allow the filesystem to hold files larger than 2GB, and up to 2TB, in size. If the filesystem is currently mounted, you can use the fsadm command to enable or disable the largefile option:

/usr/lib/fs/vxfs/fsadm -o [no]largefiles /<mount point>

On an unmounted filesystem:

fsadm -o [no]largefiles <special device>

To check whether a filesystem is currently largefiles enabled: fsadm /<mount point> ----------------------------------VxFS FSADM DEFRAGMENTATION To maintain performance levels, you may have to defrag your VxFS filesystems periodically. fsadm -ed /<mount point> To get reports about the current state of fragmentation: fsadm -ED /<mount point> ----------------------------------VxFS DISK LAYOUT VERSION FSTYP VXUPGRADE FSADM To check the disk layout version of a VxFS filesystem, execute as root: fstyp -v /dev/vx/dsk/<disk group>/<volume> New VxFS packages may have new layout versions. You can still mount filesystems with older layouts, but certain features may not work on them. You probably cannot mount a new filesystem layout under an older package. VxFS 1.x included version 1 VxFS 2.x introduced layout version 2 VxFS 3.x introduced layout version 4 NOTE: Layout version 3 is not supported on Solaris. Use the vxupgrade command to upgrade a mounted filesystem's layout type: EXAMPLE: vxupgrade -n <new version number> /<mount point> or vxupgrade -n <new version number> -r /dev/vx/rdsk/<disk group>/<volume> You may have to upgrade the inode format with `fsadm -c /<mount point>` also. ----------------------------------VXASSIST GROWING VOLUMES To extend a volume to a given size: vxassist -g <disk group> growto <volume name> <length>

EXAMPLE: vxassist -g webdg growto webvol2 2000s To extend a volume by a given size: vxassist -g <disk group> growby <volume name> <length> EXAMPLE: vxassist -g webdg growby webvol3 300m To see how much a volume can grow by: vxassist -g <disk group> maxgrow <volume name> ---------------------------------VXVOL CHANGE LENGTH OF VOLUMES To change the length of a volume: vxvol -g <disk group> set len=<value> <volume name> NOTE: This works with RAID5 volumes also. To change the length of a RAID5 log volume: vxvol -g <disk group> set loglen=<value> <volume name> ----------------------------------VXASSIST SHRINKING VOLUMES WARNING: Unless you are using the VERITAS VxFS Filesystem, shrinking a volume will destroy any data on it. To shrink a volume to a given size: vxassist -g <disk group> shrinkto <volume name> <length> EXAMPLE: vxassist -g webdg shrinkto webvol2 2000s To shrink a volume by a given size: vxassist -g <disk group> shrinkby <volume name> <length> EXAMPLE: vxassist -g webdg shrinkby webvol3 300m ----------------------------------VXASSIST MOVE VOLUMES FROM DISK TO DISK To move a volume that is on one disk to another: vxassist -g <disk group> <volume name> !<source disk> <target disk>

Example of moving a volume from disk01 to disk02: vxassist -g webdg webvol3 !disk01 disk02 ----------------------------------VXASSIST CREATE MIRRORED STRIPED VOLUMES To create a mirrored volume with DRL enabled using default settings: vxassist -g <disk group> make <volume name> <length> layout=mirror,log EXAMPLE: vxassist -g oradg make oravol1 2g layout=mirror,log In VxVM 3.x, you can create stripe-mirror volumes. EXAMPLE: vxassist -g homedg make USERS 68990m layout=stripe-mirror,log ----------------------------------VXASSIST ONLINE RELAYOUT In VxVM 3.x, you can use a new feature called Online Relayout to reorganize a volume layout, e.g. from stripe-mirror to mirror-stripe. EXAMPLE: vxassist -g homedg convert USERS layout=mirror-stripe,log To change it back to stripe-mirror, simply do: vxassist -g homedg convert USERS layout=stripe-mirror,log ----------------------------------VXASSIST CREATE MIRROR FOR EXISTING VOLUME vxassist -g <disk group> mirror <volume name> To specify exactly which disks you want to use in the mirror: vxassist -g <disk group> mirror <volume name> alloc="<disk1> <disk2> <disk3> <disk4> ..." ----------------------------------VXRESIZE SHRINK EXTEND GROW VOLUME FILESYSTEM To change the size of a volume AND the underlying filesystem, use the "vxresize" command, located in /etc/vx/bin. EXAMPLE: To grow a filesystem to a final size of 35GB (in the background):

/etc/vx/bin/vxresize -b -x -F vxfs -g homedg USERS 34495m

NOTE: Some filesystems may need to be mounted or unmounted for this command to work. You can monitor the progress of vxresize with vxprint and vxstat, but not vxtask. Under VxVM 3.x, this command will work on mirror-stripe volumes.

Use the "-s" option to shrink, and the "-x" option to expand.
----------------------------------
VXMIRROR MIRROR ALL VOLUMES FROM DISK TO DISK

To mirror all volumes that fit on one disk to another:

vxmirror -g <disk group> <source disk> <target disk>

NOTE: This command is useful in mirroring all the simple (non-striped) volumes on the root/boot disk to another disk.

EXAMPLE: vxmirror -g <disk group> disk01 disk02

NOTE: In this example, "disk01" is the source or original disk, and "disk02" is the disk that will contain the new mirrored copies.
--------------------------------------
VXMIRROR MIRROR ALL VOLUMES

To quickly mirror all Volumes to available disk space:

vxmirror -g <disk group> -a
--------------------------------------
VXASSIST SNAPSHOT ONLINE BACKUP

To do an online snapshot backup on a volume, a snapshot volume must be created:

1. vxassist -g <disk group> snapstart <volume name>
2. vxassist -g <disk group> snapshot <volume name> <snapshot volume>
3. fsck -y /dev/vx/rdsk/<disk group>/<snapshot volume>

EXAMPLE:
vxassist -g webdg snapstart webvol2
vxassist -g webdg snapshot webvol2 snapshotvol
fsck -y /dev/vx/rdsk/webdg/snapshotvol

The snapshot volume can then be backed up using whatever software you desire. Remove the snapshot volume after the backup:

vxedit -g <disk group> -rf rm <snapshot volume>
--------------------------------------
VXASSIST ADD RAID5 OR DRL LOG PLEX

To add new or additional log plexes to RAID5 or RAID1 volumes:

vxassist -g <disk group> addlog <volume name>

NOTE: RAID5 log plexes are created by default when vxassist is used to create the RAID5 volume. Only one DRL log subdisk can exist per log plex. When vxassist is used to create a DRL log subdisk, a log plex is created by default to contain that subdisk.
--------------------------------------
VXPLEX ASSOCIATE PLEXES WITH VOLUMES

To attach a plex to a volume:

vxplex -g <disk group> att <volume name> <plex name>

NOTE: You can use this method to attach RAID5 log plexes to RAID5 volumes.

If the volume is not enabled, try this:

vxmend -g <disk group> on <plex name>
--------------------------------------
VXMAKE CREATE VOLUME FROM PLEXES

To create a volume from plexes:

vxmake -g <disk group> -U <usetype> vol <volume name> len=<length> \
plex=<plex1>,<plex2>,...

NOTE: For normal volumes, "usetype"=fsgen. For RAID5 volumes, "usetype"=raid5. For RAID5 volumes, the plexes can be RAID5 plexes and RAID5 log plexes.
--------------------------------------
VXMEND FIXING PLEXES

If a Volume cannot start, try to use vxmend to fix it. Pick one plex as "stale", and another as "clean". The stale plex will then sync from the clean one.

EXAMPLE:
vxmend -g <Disk Group> fix stale <Plex2>
vxmend -g <Disk Group> fix clean <Plex1>
vxvol -g <Disk Group> start <Volume>
---------------------------------------

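The vxmend repair sequence above can be scripted. A dry-run sketch — it marks the suspect plex STALE, the survivor CLEAN, then starts the volume, but only prints the commands (names are examples):

```shell
# Print the vxmend/vxvol repair sequence for one volume (dry run).
fix_volume() {
  dg=$1; vol=$2; clean=$3; stale=$4
  echo "vxmend -g $dg fix stale $stale"
  echo "vxmend -g $dg fix clean $clean"
  echo "vxvol -g $dg start $vol"
}
fix_volume webdg webvol webvol-01 webvol-02
```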
VXMEND VXPLEX OFFLINE DETACH PLEX FOR DISK REPAIR If a disk crashes, you may have to offline and detach the plexes on that disk. 1. Offline the plexes: vxmend -g <disk group> off <plex1> <plex2> ... 2. Detach each plex: vxplex -g <disk group> det <plex name> 3. After repairs are done, restart the volume if necessary: vxvol -g <disk group> start <volume name> If the volume cannot start, clean one of the plexes if necessary: vxmend -g <disk group> fix clean <plex name> Restart the volume. 4. Attach the plex to the volume if necessary. --------------------------------------VXPLEX DISASSOCIATE REMOVE PLEX To disassociate a plex from a volume and remove it: vxplex -g <disk group> -o rm dis <plex name> EXAMPLE: vxplex -g webdg -o rm dis webvol-02 NOTE: The same syntax works for RAID5 log plexes. The last plex in a volume cannot be removed using this method. You can do the same thing with: vxplex -g <disk group> dis <plex name> vxedit -g <disk group> -r rm <plex name> -------------------------------------EEPROM MIRROR THE BOOT DISK After you mirror the boot/root disk, run: eeprom "use-nvramrc?"=true NOTE: The eeprom command requires the quotation marks ("") for parameters with question marks (?). This will allow your system to automatically boot to a boot mirror disk in case your primary boot disk fails. ---------------------------------------------

EEPROM CHECK BOOT DISK ALIASES

Depending on your Sun hardware, once you mirror your boot/root disk, you should see lines like this in the eeprom command output:

nvramrc=devalias vx-rootdisk /sbus@3,0/SUNW,fas@3,8800000/sd@0,0:a
        devalias vx-rootdisk2 /sbus@2,0/QLGC,isp@2,10000/sd@6,0:a

This means if you want to boot off the mirror, you can specify at the OK prompt:

boot vx-rootdisk2
--------------------------------------------
VXMAKE CREATE SUBDISKS

To create subdisks, use the following command:

vxmake -g <disk group> sd <subdisk> <VM disk>,<offset>,<length>

EXAMPLE: To create a subdisk named disk02-01 that starts at the beginning of disk disk02 and has a length of 8000 sectors:

vxmake -g webdg sd disk02-01 disk02,0,8000s

Another method is:

vxmake -g appdg sd appdg02-11 disk=appdg02 offset=15049881 len=1052163
--------------------------------------------
VXMAKE CREATE PLEXES FROM SUBDISKS

To create a plex from subdisks:

vxmake -g <disk group> plex <plex name> sd=<subdisk1>,<subdisk2>,...

EXAMPLE: To create a plex called webvol-02 from subdisks disk02-01, disk02-00 and disk02-02:

vxmake -g webdg plex webvol-02 sd=disk02-01,disk02-00,disk02-02
--------------------------------------------
VXSD ASSOCIATE SUBDISKS WITH EXISTING PLEX

To add subdisks to an existing plex:

vxsd -g <disk group> assoc <plex name> <subdisk1> <subdisk2> <subdisk3> ...

EXAMPLE: vxsd -g webdg assoc webvol-02 disk02-04 disk02-05 disk02-06

If you want to add a subdisk as a DRL log subdisk:

vxsd -g <disk group> aslog <plex name> <subdisk>

To add subdisks at the end of each column of a RAID5 volume:

vxsd -g <disk group> assoc <volume> <subdisk1>:0 <subdisk2>:1 <subdisk3>:2
--------------------------------------------
VXSD DISASSOCIATING SUBDISKS FROM PLEX

To disassociate subdisks from a plex:

vxsd -g <disk group> dis <subdisk name>

NOTE: This method works for RAID5 plexes also.

To disassociate and remove a subdisk from a plex:

vxsd -g <disk group> -orm dis <subdisk name>
--------------------------------------------
VXEDIT REMOVE SUBDISK

To remove a subdisk:

vxedit -g <disk group> rm <subdisk>
--------------------------------------------
VXSD REPLACING SUBDISK WITH ANOTHER

To replace a subdisk with another, you "move" subdisks:

vxsd -g <disk group> mv <old subdisk> <new subdisk>

EXAMPLE: vxsd -g appdg -p ORA_ADMIN-01 -v ORA_ADMIN mv appdg04-09 appdg02-11
--------------------------------------------
VXSD SPLITTING SUBDISK

To split a subdisk into two pieces:

vxsd -g <disk group> -s <size> split <subdisk> <new subdisk1> <new subdisk2>

NOTE: The size is the size of the first new subdisk. The second subdisk will take the remaining space.
--------------------------------------------
VXSD JOINING SUBDISKS INTO ONE SUBDISK

To join subdisks into one subdisk:

vxsd -g <disk group> join <subdisk1> <subdisk2> <new subdisk>
--------------------------------------------
VXDG DEPORT IMPORT DISK GROUPS

To remove a Disk Group, you must deport it. After stopping all Volumes in a Disk Group:

vxdg deport <disk group>

Move the Disk Group to the new server, and import:

vxdg import <disk group>

NOTE: If you need to force an import, use the "-f" option. If you need to clear locks, use the "-C" option. Another way to clear locks is to run:

vxdisk clearimport <disk device> ...

Start up the Volumes in the Disk Group:

vxrecover -g <disk group> -sb
or
vxvol -g <disk group> startall
-----------------------------------
VXDG DEPORT IMPORT ROOT DISK GROUP

Since only one Disk Group named rootdg can exist at a time, you must rename this group when deporting and importing it.

1. Find out the Disk Group ID of rootdg from "vxdisk -s list". Move the Root Disk Group to another server.

2. Import the disk group on the new server.

vxdg -tC -n <New Disk Group Name> import <Disk Group ID>

3. After fixing the problem, deport the disk group back to the original host.

vxdg -h <Hostname> deport <Disk Group ID>

NOTE: The "Hostname" is the hostname of the server the disk group originally came from.

4. Now move the disk group back to the original host and boot up.
-----------------------------------
VXPRINT VXMAKE BACKUP RECOVERY OBJECTS

To back up the configurations of Disk Groups and their volumes, plexes and subdisks, use the vxprint command. The output file can be read (i.e., imported) by vxmake. Either of these two commands will work:

vxprint -g <disk group> -hmqQ > <file name>
vxprint -g <disk group> -vpshm > <file name>

NOTE: If you are using the new Layered Volumes under VxVM 3.x, you must also use the "-r" option.

To import the configuration into your system, execute:

vxmake -g <disk group> -d <file name>

The file processed by this command is the file created by the vxprint command using the above options.
---------------------------------------
VxVM PERFORMANCE MONITORING

Commands that monitor VxVM performance:

vxtrace
vxstat

When using vxstat, reset the statistics first with:

vxstat -r

For disk statistics:

vxstat -d
----------------------------------------
VXRECOVER VXSTAT LIST DISKS WITH FAILED PLEXES

If VxVM e-mails you with a list of failed plexes, you can find out which disks and subdisks are affected by running:

vxstat -s -ff webvol-02 oravol-03

After fixing the problem, recover the plexes for each volume affected:

vxrecover -g <disk group> -b <volume name>
--------------------------------------------------------------
VXTASK CURRENT VERITAS VxVM JOBS RUNNING

Under VERITAS VxVM 3.x, to see a list of VxVM commands that are being executed, run:

vxtask list

This will show you all the jobs queued up (e.g., syncing plexes).

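The vxprint configuration backup described earlier can be scripted across every imported disk group. A sketch, assuming `vxdg list` prints a header line followed by "NAME STATE ID" rows — dry run via echo, with sample input inline:

```shell
# Emit one vxprint backup command per disk group (dry run).
backup_dgs() {
  dest=$1
  awk 'NR > 1 { print $1 }' | while read dg; do
    echo "vxprint -g $dg -hmqQ > $dest/$dg.cfg"
  done
}
printf 'NAME STATE ID\nrootdg enabled 922454566.1025.host1\nwebdg enabled 922454570.1030.host1\n' \
  | backup_dgs /var/tmp/vxcfg
```

On a live system, replace the printf with `vxdg list` and drop the echo.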
--------------------------------------------------------------
VxVM COMMAND LIST

The following is a list of most VxVM commands and the Man Page sections they are located in:

vxintro(1M)      vol_pattern(4)   vxassist(1M)     vxbootsetup(1M)
vxconfig(7)      vxconfigd(1M)    vxdctl(1M)       vxdg(1M)
vxdisk(1M)       vxdiskadd(1M)    vxdiskadm(1M)    vxdisksetup(1M)
vxdmp(7)         vxedit(1M)       vxencap(1M)      vxevac(1M)
vxinfo(1M)       vxinfo(7)        vxio(7)          vxiod(1M)
vxiod(7)         vxmake(1M)       vxmake(4)        vxmend(1M)
vxmirror(1M)     vxnotify(1M)     vxplex(1M)       vxprint(1M)
vxr5check(1M)    vxreattach(1M)   vxrecover(1M)    vxrelocd(1M)
vxresize(1M)     vxrootmir(1M)    vxsd(1M)         vxserial(1M)
vxsparecheck(1M) vxstat(1M)       vxtrace(1M)      vxtrace(7)
vxunroot(1M)     vxvol(1M)
-----------------------------------------
MOVING HOT SWAP DISKS AND PLEXES

To prepare for the moving of Disks and Plexes while the system is online (e.g. on a hot swap system):

1. vxmend off webvol1-02 webvol2-02 rootvol-02 ...

2. Move disks to new locations.

3. drvconfig ; disks (if necessary)

4. vxplex att webvol1 webvol1-02
   vxplex att webvol2 webvol2-02
   vxplex att rootvol rootvol-02
   ...

5. vxvol start webvol1 (if necessary)
   vxvol start webvol2 (if necessary)
   vxvol start rootvol (if necessary)
   ...

NOTE: If you must shut down the system before moving the disks (e.g. if hot swap does not work), then you may not need to do anything. VxVM will relocate all volumes automatically. VxVM has the capability of automatically mapping new device files (created with "boot -r") to volumes, wherever the disks are moved to.
-------------------------------------
FSCK /USR ENCAPSULATED FILESYSTEM

If you need to fsck the /usr encapsulated volume:

If the Volume Manager is running, you may be able to just go into Single User Mode and run:

fsck

NOTE: fsck will then preen each filesystem. When it gets to /usr, it may be able to fix it safely.

If the Volume Manager cannot run, you may have to boot from a local or remote Solaris CD-ROM, then mount the VERITAS VM CD-ROM and start Volume Manager. See the DISASTER RECOVERY section, and Appendix B of Sun's VxVM System Administrator's Guide for more information.
------------------------------------------------
UFSRESTORE ROOT (/) AND /USR ENCAPSULATED VOLUMES

Procedure for doing a ufsrestore of / and /usr volumes:

1. Boot off of a Solaris CD-ROM or boot server.

2. Mount the appropriate partition.

EXAMPLE: mount /dev/dsk/c0t0d0s3 /mnt

3. Change to the directory of the filesystem, and run the ufsrestore command:

ufsrestore xf /dev/rmt/0m

4. Shut down the server and physically detach the cable to the mirror disk. Start up the server.

If you cannot be physically at the server, you should unencapsulate the root disk. SEE BELOW.

5. Once up, the OS and VxVM should be working properly, although VxVM will complain about failed plexes in rootdg. Shut down the system and re-attach the mirror disk.
----------------------------------------------------
DISASTER AND RECOVERY PROCEDURES

If the server will not reboot from the primary boot disk:

1. Try booting into single user mode, verify that VxVM is running, and do fsck on damaged volumes.

2. Try rebooting from a mirror boot disk.

EXAMPLE: At the OK prompt: boot vx-rootdisk2

If the server cannot boot from any mirror of the boot disk:

1. Boot from a Solaris CD-ROM, and do fsck on damaged volumes. Try mounting VERITAS Volume Manager on another local CD-ROM drive, or from across the network, and running the Volume Manager application. After doing fsck or any type of repair or recovery, you might try using dd to copy the repaired volume to all other plexes.

2. Try booting from the network from a boot server and running VxVM from a local or remote CD-ROM drive.

3. Try running the diagnostic scripts from the VERITAS CD-ROM:

fixsetup
fixmountroot
fixstartup

a) Source the fixsetup file in <mount point>/scripts. This will set up a VxVM-friendly environment.
b) Run fixmountroot. This will start VxVM and mount the rootvol read-only.
c) Mount the volume read-write to edit files or change VxVM configurations by running VxVM commands.
d) If you can't mount any volumes, start VxVM with the fixstartup script.

4. Try unencapsulating the boot disk by editing the /etc/vfstab and /etc/system files, or using the vxunroot command. If you do this, you may have to remove all other plexes for root, usr, swap, etc.

5. If none of these methods work, you must re-install Solaris and VERITAS Volume Manager on the boot disk. As long as the other disks in the other Disk Groups are okay and unaffected, they should be able to be used after re-install and import.
No data on those other Disk Groups should be lost.

-------------------------------------------
UNENCAPSULATE ROOT DISK VOLUMES ON LIVE SYSTEM

This procedure depends on running off of the secondary root disk while hammering on the primary. After unencapsulation, you can encapsulate again and re-mirror (e.g., after an OS upgrade).

1) Unmount and stop all filesystems except /, /usr, and /var.

EXAMPLE:
umount /home
vxvol stop home

3) Save a copy of /etc/system and /etc/vfstab.

4) Remove the stopped volumes.

vxedit -r rm home

5) Remove primary root disk mirrors:

vxplex dis swapvol-01
vxedit -r rm swapvol-01
vxplex dis usr-01
vxedit -r rm usr-01
vxplex dis var-01
vxedit -r rm var-01
vxsd dis RootdiskPriv (if this exists)
vxedit rm RootdiskPriv

6) Verify the root disk has no subdisks on it, then:

vxdg rmdisk rootdisk

7) Resize root disk partitions for any upgrades.

8) Create partitions on the drive and label it.

9) Newfs filesystems for root, usr, var and swap.

10) Mount each new filesystem and ufsdump/ufsrestore the corresponding still-alive plex over to it. For root, install a bootblk. Root example:

fsck /dev/rdsk/c0t0d0s0
mount /dev/dsk/c0t0d0s0 /mnt
cd /mnt
ufsdump 0f - /dev/vx/rdsk/rootvol | ufsrestore rf -
cd /
umount /mnt
installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk \
/dev/rdsk/c0t0d0s0

11) Mount the new root filesystem to make some changes:

mount /dev/dsk/c0t0d0s0 /mnt

Open /etc/system and disable these two lines (use double asterisks):

**rootdev:/pseudo/vxio@0:0
**set vxio:vol_rootdev_is_volume=1

Edit /etc/vfstab and set up /, /usr and /var to mount in the traditional way. Set up swap on the native slice. Comment out all VxVM filesystems.

12) Reboot the system to run level S, then come all the way up.

13) Contingency, if boot fails with "Can't open boot file":

nvalias newdisk /sbus@1f,0/SUNW,fas@e,8800000/sd@0,0
setenv boot-device newdisk

14) Remove all the old root volumes:

vxedit -r rm swapvol
vxedit -r rm var
vxedit -r rm usr
vxedit -r rm rootvol

15) Uncomment the VxVM filesystems in /etc/vfstab. Reboot the system. You should now be on traditional root filesystems.
-------------------------------------------
VXIOD VXDCTL VXCONFIGD RECOVERY PROCEDURE

If you had to reinstall Solaris and VxVM, use the following procedure to restore (import) the Disk Groups that your disks are stored in.

1. rm /etc/vx/reconfig.d/state.d/install-db
2. vxiod set 10
3. vxconfigd -m disable
4. vxdctl init <hostid>*
5. vxdctl enable

*NOTE: If the hostid for the system has changed from the previous installation of VxVM, specify the hostid in Step 4; otherwise it is not required.

If you remove and reinstall a VxVM package, do this instead:

1. vxiod set 10

2. vxconfigd -m boot
3. vxdctl init <hostid>
4. vxdctl enable
--------------------------------------------
VOLUME RECOVERY BOTH MIRRORS BAD

If a volume cannot start up because the plexes are bad, try starting the volume with one plex at a time.

EXAMPLE: Assume you think Plex2 is the bad plex.

vxplex -g <Disk Group> dis <Plex2>
vxvol -g <Disk Group> -f start <Volume>
fsck -y /dev/vx/rdsk/<Disk Group>/<Volume>

If the Volume starts alright, and you can mount the filesystem read-write, attach Plex2 and you are done.

vxplex -g <Disk Group> att <Volume> <Plex2>

If the Volume or filesystem cannot run with Plex1:

vxvol -g <Disk Group> stop <Volume>
vxplex -g <Disk Group> dis <Plex1>
vxplex -g <Disk Group> att <Volume> <Plex2>
vxvol -g <Disk Group> -f start <Volume>
fsck -y /dev/vx/rdsk/<Disk Group>/<Volume>

If the Volume starts alright, and you can mount the filesystem read-write, attach Plex1 and you are done.

vxplex -g <Disk Group> att <Volume> <Plex1>

If you still cannot start the Volume or filesystem, your data is lost.
-------------------------------------------------------
UNENCAPSULATE BOOT DISK FROM CD-ROM

The following procedure unencapsulates the boot disk on a down system that has been booted by CD-ROM. This is similar to the above scenario where you boot from the mirror and unencapsulate the boot disk. This procedure is derived from Sun InfoDoc 21725.

1. Fsck and mount the / filesystem. Modify the /etc/vfstab and /etc/system files. Use the /etc/vfstab.prevm if available. Disable these 2 lines in /etc/system with double asterisks:

**rootdev:/pseudo/vxio@0:0
**set vxio:vol_rootdev_is_volume=1

2. If you have filesystems on your boot disk other than /, /usr, /var and swap, or you have encapsulated another disk in rootdg, you will need to deal with those filesystems. There are 3 ways:

a) Create the original Solaris hard partitions one at a time.

fmthard -d <part>:<tag>:<flag>:<start>:<size> /dev/rdsk/cXtYdZs2

Use the /a/etc/vx/reconfig.d/disk.d/cXtYdZ/vtoc file.

EXAMPLE (recreate partition 4): The vtoc file contains important lines like these (partition, tag, flag, start, size):

0  0x2  0x200  0       66080
1  0x3  0x201  66080   66080
2  0x5  0x201  0       1044960
3  0x0  0x200  132160  141120
4  0x0  0x200  273280  564480
5  0x0  0x200  837760  206080
6  0x0  0x200  0       0
7  0x0  0x200  0       0

fmthard -d 4:0:00:273280:564480 /dev/rdsk/cXtYdZs2 b) Use prtvtoc and format to recreate the partitions. c) Re-partition the whole disk using fmthard and the vtoc file. Make sure the /a/etc/vx/reconfig.d/disk.d/cXtYdZ/vtoc file only has the partition lines and nothing else (as above). Edit it if needed. fmthard -s <vtoc file> 3. Remove all VxVM Private and Public Region partitions using format and prtvtoc. These are the partitions with Tag 14 and 15. 4. Prevent VxVM from starting upon the next reboot. touch /a/etc/vx/reconfig.d/state.d/install-db 5. Reboot the system using the boot disk. 6. When the system comes up, remove these files: /etc/vx/reconfig.d/state.d/install-db /etc/vx/reconfig.d/state.d/root-done 7. If you only have the boot and mirror disk in rootdg, just run vxinstall to re-encapsulate the boot disk. Re-mirror your bootdisk, and start your other volumes. 8. If all other disks are also in rootdg, do the following: vxiod set 10 vxconfigd -m disable

vxdctl init
vxdctl enable

Now, clean up rootdg:

EXAMPLE:
vxedit -fr rm rootvol
vxedit -fr rm swapvol
vxedit -fr rm usr
vxedit -fr rm var
vxedit -fr rm opt

Remove encapsulated volumes on other rootdg disks also. Remove the boot and mirror disks from rootdg if you want:

EXAMPLE:
vxdg rmdisk rootdisk
vxdg rmdisk disk01

Re-encapsulate and re-mirror your boot disk. Start up your other volumes.
----------------------------------------------------
DISK GROUP VERSIONS

Here is a list of VxVM releases and supported Disk Group versions.

VxVM version    Disk Group Version    Supported Disk Group Versions
1.2             10                    10
1.3             15                    15
2.0             20                    20
2.2             30                    30
2.3             40                    40
2.5             50                    50
3.0             60                    20-40, 60
3.1             70                    20-70
3.1.1           80                    20-80

NOTE: Once you upgrade the Disk Group version, you cannot downgrade.
-----------------------------------------------------
CLUSTER VOLUME MANAGER FILESYSTEM

To use VxVM's clustering features, you use commands that are available through the VERITAS SANPoint Foundation Suite product, not VxVM. These commands include:

vxvcmconfig
vxclustadm (if you are using VERITAS Cluster Server)
vxclust (if you are using Sun Cluster)

Cluster Filesystem is sold under SANPoint Foundation Suite-HA (SPFS HA). It is a licensable feature of VxFS and requires VxVM and VCS.
-----------------------------------------------------
NEW FEATURES VXVM

New features for VxVM 3.0 and higher:

1. Support for Sun Dynamic Reconfiguration.
2. Support for Solaris 8.
3. Sun Solstice DiskSuite conversion tool.
4. Striped-Mirror Volumes (Striped Pro).
5. RAID-5 Snapshots.
6. Online Relayout.
7. Task monitor.
8. New Storage Administrator GUI.
9. Destroy diskgroup option.

New features for VxVM 3.1:

1. Y2K compliant.
2. Unrelocate subdisks.
3. Fast Mirror Resynchronization (FastResync). License required.
4. Support for Sun T3 Array.
5. Includes Volume Replicator. License required.
6. Some cluster functionality introduced.

New features for VxVM 3.1.1:

1. DMP can co-exist with Sun Alternate Pathing.
2. DMP can support the Sun A5x00 Array.

New features for VxVM 3.2:

1. Persistent FastResync.
2. Diskgroup Split and Join.
3. Device Discovery Layer.
4. Enclosure-based naming.
5. Clusterization of Layered Volumes implemented.
6. Cluster support for up to 16 nodes (VxFS 3.4 Patch 2 required if using VxFS).
7. Support for Sun Fire 3800, 4800 and 6800 systems.
8. Increased private region size.
-----------------------------------------------------------
