
Contents

Add Node to an existing 10.2.0.4 ASM based RAC
    References
    Start Blackout on Grid Control
    Disable Cron Jobs on the Database Server
    Shutdown and take backups of the existing Cluster
        Stop Database services on all nodes
        Stop agent on all nodes
        Stop CRS on all nodes
        Backup all nodes
        Take a BCV/Snapshot backup of the Database
    Start CRS and database on All nodes of the Existing Cluster
    Add the new node to the Cluster (HW and OS)
        Clean up remnants of Previous Oracle install on Node D
    Install Oracle Clusterware (CRS) on the New Node
        Run addNode.sh as oracle
        Configure VIP service from Node A
        Configure ONS service from Node A
        Verify Cluster install and health
    Adding Nodes at the Oracle RAC Database Layer
        Extend RDBMS Software to the new Node
        Extend ASM Software to the new Node
        Update Listener configuration
        Run DBCA from Node A to extend ASM instance
        Run DBCA from Node A to add DB instance

Add Node to an existing 10.2.0.4 ASM based RAC


Existing Nodes: node-a, node-b, node-c
To be Added:    node-d (Node D)

References
Metalink Doc 269320.1: Removing a Node from a 10g RAC Cluster
Metalink Doc 270512.1: Adding a Node to a 10g RAC Cluster
http://download-west.oracle.com/docs/cd/B19306_01/rac.102/b14197/adddelunix.htm#BEICADHD
http://www.oracle.com/technology/pub/articles/vallath-nodes.html
http://blogs.oracle.com/AlejandroVargas/gems/RAC10gR2AddNode.pdf

Start Blackout on Grid Control

Disable Cron Jobs on the Database Server

Shutdown and take backups of the existing Cluster
Stop Database services on all nodes
srvctl stop database -d ORCL
srvctl stop asm -n node-a
srvctl stop asm -n node-b
srvctl stop asm -n node-c

srvctl stop nodeapps -n node-a
srvctl stop nodeapps -n node-b
srvctl stop nodeapps -n node-c

Stop agent on all nodes


cd /u01/app/oracle/product/10.2.0/agent10g/bin
./emctl stop agent

============ (Commands marked "as root" MUST be executed as root) ==============

Stop CRS on all nodes


sudo su -
/etc/init.crs stop
. ~oracle/.profile
$ORA_CRS_HOME/bin/oprocd stop
/usr/sbin/slibclean

Ensure no libraries are loaded in memory


genld -l | grep /u01/app/crs/product/10.2.0/crs
genkld | grep /u01/app/crs/product/10.2.0/crs
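If either command returns entries, the CRS libraries are still mapped into memory. A small loop like this sketch (assuming the same CRS home path; it will keep looping while any process still holds the libraries) can be used to retry slibclean before proceeding:

# Retry slibclean until no CRS libraries remain in the kernel loader list
while genkld | grep -q /u01/app/crs/product/10.2.0/crs; do
    echo "CRS libraries still loaded, running slibclean again..."
    /usr/sbin/slibclean
    sleep 5
done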

Backup all nodes


Backup files From Node A:
mkdir -p /backups/preAddNodeBackups
sudo su -
cd /backups/preAddNodeBackups
dd if=/dev/oracle_ocr of=ocr_backup.raw bs=8192 count=131072
gzip ocr_backup.raw
dd if=/dev/oracle_vote of=vote_backup.raw bs=8192 count=131072
gzip vote_backup.raw
cd /u01/app/crs/product/10.2.0/crs/cdata
tar cpvf - orcl_prod | gzip -c > /backups/preAddNodeBackups/orcl_prod.tar.gz

cd /backups/preAddNodeBackups
tar cpvf crsinit.tar /etc/init.crs /etc/init.crsd /etc/init.cssd /etc/init.evmd /etc/rc.d/rc2.d/K96init.crs /etc/rc.d/rc2.d/S96init.crs
gzip crsinit.tar
cp -p /etc/inittab .
tar cpvf etc_oracle.tar /etc/oracle /tmp/.oracle /etc/oratab /etc/oraInst.loc
gzip etc_oracle.tar
tar cpvf - /u01/app/crs/product/10.2.0/crs | gzip -c > crs_10.2.0.4.tar.gz
tar cpvf - /u01/app/oracle/product/10.2.0/db01 | gzip -c > rdbms_10.2.0.4.tar.gz
tar cpvf - /u01/app/asm/product/10.2.0/asm | gzip -c > asm_10.2.0.4.tar.gz
tar cpvf - /u01/app/oraInventory | gzip -c > oraInventory.tar.gz
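Should the OCR or voting disk ever need to be restored from these raw backups, the dd commands above can simply be reversed. This is a sketch only: it assumes the same device names and sizes, and it must be done as root with CRS stopped on ALL nodes.

# Hypothetical restore of the raw OCR/vote backups (CRS down cluster-wide)
cd /backups/preAddNodeBackups
gunzip ocr_backup.raw.gz
dd if=ocr_backup.raw of=/dev/oracle_ocr bs=8192 count=131072
gunzip vote_backup.raw.gz
dd if=vote_backup.raw of=/dev/oracle_vote bs=8192 count=131072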

From all other nodes:


mkdir -p /backups/preAddNodeBackups
sudo su -
cd /backups/preAddNodeBackups
tar cpvf crsinit.tar /etc/init.crs /etc/init.crsd /etc/init.cssd /etc/init.evmd /etc/rc.d/rc2.d/K96init.crs /etc/rc.d/rc2.d/S96init.crs
gzip crsinit.tar
cp -p /etc/inittab .
tar cpvf etc_oracle.tar /etc/oracle /tmp/.oracle /etc/oratab /etc/oraInst.loc
gzip etc_oracle.tar
tar cpvf - /u01/app/crs/product/10.2.0/crs | gzip -c > crs_10.2.0.4.tar.gz
tar cpvf - /u01/app/oracle/product/10.2.0/db01 | gzip -c > rdbms_10.2.0.4.tar.gz
tar cpvf - /u01/app/asm/product/10.2.0/asm | gzip -c > asm_10.2.0.4.tar.gz
tar cpvf - /u01/app/oraInventory | gzip -c > oraInventory.tar.gz

Take a BCV/Snapshot backup of the Database

Start CRS and database on All nodes of the Existing Cluster


sudo su -
/etc/init.crs start

As oracle
srvctl start nodeapps -n node-a
srvctl start nodeapps -n node-b
srvctl start nodeapps -n node-c
srvctl start asm -n node-a
srvctl start asm -n node-b
srvctl start asm -n node-c
srvctl start database -d ORCL
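Before moving on, confirm that all resources came back ONLINE on the three existing nodes (crs_stat -t is the 10.2 syntax):

$ORA_CRS_HOME/bin/crs_stat -t
srvctl status database -d ORCL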

Add the new node to the Cluster (HW and OS)


Clean up remnants of Previous Oracle install on Node D
Node D:
rm -rf /etc/oracle
rm -f /etc/oratab /etc/oraInst.loc
rm -rf /u01/app/crs/product/10.2.0/crs/*
rm -rf /u01/app/oracle/product/10.2.0/db01/*
rm -rf /u01/app/asm/product/10.2.0/asm/*
rm -rf /u01/app/oraInventory/*
cd /etc; rm -rf ora_save*
rm -rf /tmp/OraInstall*
rm -rf /tmp/.oracle
rm -rf /u01/app/oracle/product/10.2.0/db01/.patch_storage*
rm /etc/init.cssd
rm /etc/init.crs
rm /etc/init.crsd
rm /etc/init.evmd
rm /etc/rc.d/rc2.d/K96init.crs
rm /etc/rc.d/rc2.d/S96init.crs
rm /etc/inittab.crs

-- Check on fixes and patches
-- Verify the hardware requirements
-- Verify same interface names on the new node
-- Verify new node is on the same subnet as existing nodes
-- Verify the software requirements by comparing the outputs from all nodes
/usr/sbin/instfix -i -k "IY68989 IY68874 IY70031 IY61034 IY62191 IY60759 IY76807 IY76140"
/usr/sbin/instfix -i -k "IY65305"
lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.perfstat bos.perf.libperfstat bos.perf.proctools rsct.basic.rte rsct.compat.clients.rte xlC.aix50.rte xlC.rte bos.adt.prof bos.alt_disk_install.rte bos.cifs_fs.rte

-- Check /etc/hosts and make sure nslookup matches for all nodes (existing and new)
-- Ping all private interconnects (existing and new)
-- Check oracle user capabilities
lsuser -a capabilities oracle

The command above should return


oracle capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE
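If the capabilities are missing, they can be granted as root with the standard AIX chuser command (the oracle user must log out and back in for it to take effect):

# As root: grant the RAC-required capabilities to the oracle user
chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle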

-- Check shared disks are accessible by the new node


ls -l /dev/oracle_ocr /dev/oracle_vote

chown oracle:oinstall /dev/oracle_ocr
chown oracle:dba /dev/oracle_vote
chmod 660 /dev/oracle_ocr
chmod 660 /dev/oracle_vote

cd /tmp
dd if=/dev/oracle_ocr of=temp.out bs=8192 count=10
dd if=/dev/oracle_vote of=temp.out bs=8192 count=10
rm temp.out

-- Check on the user equivalence to and from the new nodes using ssh
-- Generate ssh key pair for oracle on Node D
ssh-keygen -t rsa

The passphrase should be left empty.

-- Copy the public key to the existing nodes. Login to Node A:
cd
cd .ssh
scp oracle@node-d:/home/oracle/.ssh/id_rsa.pub node-d.pub
cat node-d.pub >> authorized_keys
scp authorized_keys oracle@node-b:/home/oracle/.ssh/
scp authorized_keys oracle@node-c:/home/oracle/.ssh/
scp authorized_keys oracle@node-d:/home/oracle/.ssh/
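A quick loop like this (run as oracle from each node in turn) confirms passwordless ssh works in every direction before the installer needs it:

# Every hop should print the remote hostname without prompting
for node in node-a node-b node-c node-d; do
    ssh $node hostname
done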

-- Check umask returns 022 over ssh


ssh node-a umask
ssh node-b umask
ssh node-c umask
ssh node-d umask

-- Check all nodes have the same system date


ssh node-ai date
ssh node-bi date
ssh node-ci date
ssh node-di date

-- Check on the oracle profile on the new node: update .profile on the new node to point to the correct ORACLE_HOME etc.
-- Make sure these directories exist and wipe out any remnants of a previous install
mkdir -p /u01/app/oracle/product/10.2.0/db01
mkdir -p /u01/app/crs/product/10.2.0/crs
mkdir -p /u01/app/asm/product/10.2.0/asm
chown -R oracle:oinstall /u01
ls -ld $ORACLE_HOME $ORACLE_BASE $ORA_CRS_HOME

-- Go through the pre-install checklist
-- Make sure SED (Stack Execution Disable) is turned off
-- Run CLUVFY from Node A:

$ORA_CRS_HOME/bin/cluvfy stage -post hwos -n node-d -verbose

Install Oracle Clusterware (CRS) on the New Node


-- Copy over some pre-install scripts to the new node:

On Node D
---------
mkdir -p /u01/app/install_cds/CRS/Disk1/

On Node A
---------

cd /backups/install/clusterware/Disk1
tar cpvf rootpre.tar rootpre upgrade
scp rootpre.tar oracle@node-d:/u01/app/install_cds/CRS/Disk1/

On Node D
---------
cd /u01/app/install_cds/CRS/Disk1/
tar xpvf rootpre.tar
sudo su -
umask 022
cd /u01/app/install_cds/CRS/Disk1/rootpre
./rootpre.sh

Run addNode.sh as oracle


On Node A
---------
- Make sure CRS is running
- Make sure ORA_CRS_HOME is set
- Run as oracle from one of the existing nodes
export DISPLAY=10.2.9.36:0
cd $ORA_CRS_HOME/oui/bin
./addNode.sh

New node information:
  Public Node Name:  node-d
  Private Node Name: node-di
  Virtual host name: node-dv

-- Run orainstRoot.sh on Node D as root:

On Node D as root
-----------------
sudo su -
umask 022
/u01/app/oraInventory/orainstRoot.sh

-- Run rootaddnode.sh on Node A as root:

On Node A as root
-----------------
sudo su -

vi /u01/app/crs/product/10.2.0/crs/install/rootaddnode.sh

and change this line (the last line of the script), then save:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$SU $CRS_ORACLE_OWNER -c "$CH/bin/cluutil -sourcefile $OCRCONFIG -destfile $CH/srvm/admin/ocr.loc -nodelist $NODES_LIST"

to

$SU - $CRS_ORACLE_OWNER -c "$CH/bin/cluutil -sourcefile $OCRCONFIG -destfile $CH/srvm/admin/ocr.loc -nodelist $NODES_LIST"

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
sudo su -
umask 022

export DISPLAY=10.2.9.36:0
/u01/app/crs/product/10.2.0/crs/install/rootaddnode.sh

-- Run root.sh on Node D as root:

On Node D as root
-----------------
vi /u01/app/crs/product/10.2.0/crs/install/rootconfig

Either comment out this block (around line 580 of rootconfig):


if $CRS_VNDR_CLUSTER; then
    $ECHO "Checking to see if any 9i GSD is up"
    GSDNODE=`$LSDB -g`
    GSDCHK_STATUS=$?
    if [ $GSDCHK_STATUS != 0 ]; then
        $ECHO "9i GSD is running on node '$GSDNODE'. Stop the GSD and rerun root.sh"
        exit 1
    fi
fi

Or, change:
GSDNODE=`$LSDB -g`

to
GSDNODE=`(LIBPATH=$ORA_CRS_HOME/lib ; export LIBPATH ; $LSDB -g)`

Then as root on Node D:


sudo su -
umask 022
export DISPLAY=10.2.9.36:0
/u01/app/crs/product/10.2.0/crs/root.sh

Configure VIP service from Node A


This step is not needed in 10.2.0.4, as the addNode step above already configured the VIPs. It can, however, still be executed to verify the VIP configuration.

On Node A as root
-----------------
export DISPLAY=10.2.9.36:0
. ~oracle/.profile
$ORA_CRS_HOME/bin/vipca
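Alternatively, a quick status check with standard srvctl syntax shows whether the VIP (along with ONS and GSD) is already running on the new node:

srvctl status nodeapps -n node-d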

Configure ONS service from Node A


as oracle:
cat $ORA_CRS_HOME/opmn/conf/ons.config

-----------------------------
localport=6113
remoteport=6200
loglevel=3
useocr=on
-----------------------------

Note the remote port number from the output above and use that port number in the racgons command below.

As root on Node A
-----------------
sudo su -
cd /u01/app/crs/product/10.2.0/crs/bin
./racgons add_config node-d:6200
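To confirm the ONS daemon itself is up on the new node, onsctl can be used. This is a sketch: it assumes onsctl lives under the opmn/bin directory of the CRS home, as it does in a typical 10.2 install.

# On Node D, as oracle: ping the local ONS daemon
$ORA_CRS_HOME/opmn/bin/onsctl ping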

Verify Cluster install and health


$ORA_CRS_HOME/bin/cluvfy stage -post crsinst -n all -verbose

Output below (abridged):

Performing post-checks for cluster services setup

Checking node reachability...
Check: Node reachability from node "node-a"
  Destination Node                     Reachable?
  ------------------------------------ ------------------------
  node-a                               yes
  node-b                               yes
  node-c                               yes
  node-d                               yes
Result: Node reachability check passed from node "node-a".

Checking user equivalence...
Result: User equivalence check passed for user "oracle".

Checking Cluster manager integrity...
Result: Daemon status check passed for "CSS daemon".
Cluster manager integrity check passed.

Checking cluster integrity...
Cluster integrity check passed (node-a, node-b, node-c, node-d).

Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.
Uniqueness check for OCR device passed.
OCR of correct Version "2" exists.
Data integrity check for OCR passed.
OCR integrity check passed.

Checking CRS integrity...
Result: Liveness check passed for "CRS daemon".
Result: Liveness check passed for "CSS daemon".
Result: Liveness check passed for "EVM daemon".
Result: CRS health check passed.
CRS integrity check passed.

Checking node application existence...
Checking existence of VIP node application: check passed on all nodes.
Checking existence of ONS node application: check passed on all nodes.
Checking existence of GSD node application: check passed on all nodes.

Post-check for cluster services setup was successful.

Adding Nodes at the Oracle RAC Database Layer


Run addNode.sh as oracle from Node A, first for the RDBMS home and then for the ASM home. Make sure CRS is up.

Extend RDBMS Software to the new Node


From Node A as oracle
---------------------
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db01
cd $ORACLE_HOME/oui/bin
export DISPLAY=10.2.9.36:0
./addNode.sh

-- Run root.sh:

On Node D as root
-----------------

sudo su -
umask 022
export DISPLAY=10.2.9.36:0
/u01/app/oracle/product/10.2.0/db01/root.sh

Extend ASM Software to the new Node


From Node A as oracle
---------------------
export ORACLE_HOME=/u01/app/asm/product/10.2.0/asm
cd $ORACLE_HOME/oui/bin
./addNode.sh

-- Run root.sh:

On Node D as root
-----------------
sudo su -
umask 022
export DISPLAY=10.2.9.36:0
/u01/app/asm/product/10.2.0/asm/root.sh

Update Listener configuration


On Node D as oracle
-------------------
export DISPLAY=10.2.9.36:0
netca

After netca is complete, make sure there are no extra entries in this file on all nodes (especially Node D):
/u01/app/oracle/product/10.2.0/db01/network/admin/listener.ora
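For reference, a clean netca-generated entry for the new node typically looks something like the sketch below. The listener name is illustrative, and the non-default port 8001 matches the aliases used elsewhere in this document:

LISTENER_NODE-D =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node-dv)(PORT = 8001)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node-d)(PORT = 8001)(IP = FIRST))
    )
  )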

-- Add entry for the 4th node in listener aliases in the tnsnames.ora on all nodes
vi /u01/app/oracle/product/10.2.0/db01/network/admin/tnsnames.ora

The entries should look something like this:

LISTENERS_ORCL =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node-av)(PORT = 8001))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node-bv)(PORT = 8001))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node-cv)(PORT = 8001))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node-dv)(PORT = 8001))
  )

LISTENER_ORCL4 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = node-dv)(PORT = 8001))
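An instance-specific connect alias for the new instance can be added alongside these. This is a sketch: it assumes the ORCL database gets a fourth instance named ORCL4, consistent with the LISTENER_ORCL4 alias above:

ORCL4 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node-dv)(PORT = 8001))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORCL)
      (INSTANCE_NAME = ORCL4)
    )
  )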

Run DBCA from Node A to extend ASM instance


export DISPLAY=10.2.9.36:0
export ORACLE_HOME=/u01/app/asm/product/10.2.0/asm
$ORACLE_HOME/bin/dbca

After ASM is extended, delete the ASM1-related files from Node D.

On Node D as oracle
-------------------
cd /u01/app/asm/product/10.2.0/asm/dbs
rm *ASM1*
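A quick check with standard srvctl syntax confirms the ASM instance is registered and running on the new node (the new instance is expected to be +ASM4):

srvctl status asm -n node-d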

Run DBCA from Node A to add DB instance


You will need the following for the new instance:
-- Undo tablespace
-- Redo logs
-- Standby redo logs (if using Data Guard)

Since ASM and Oracle Managed Files are being used, DBCA will create the above per the template used when creating the database.
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db01
$ORACLE_HOME/bin/dbca

The next screen is going to take a long time to pop up. Be patient.

alter system set cluster_database_instances=4 scope=spfile sid='*';

Update tnsnames.ora on all the nodes to reflect the new instance.
Restart the database:

srvctl stop database -d ORCL
srvctl start database -d ORCL
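Finally, verify that all four instances are open. The gv$instance query is standard and can be run from any node as sysdba:

srvctl status database -d ORCL

-- From sqlplus as sysdba:
select inst_id, instance_name, status from gv$instance order by inst_id;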
