Step-by-step installation of Oracle 11g (11.1.0.6.0) RAC on Red Hat Enterprise Linux AS 4 with screenshots.
The following is the sequence of steps to be executed on the nodes.
Install the Linux Operating System

Install the Required Linux Packages for Oracle RAC (refer to the Oracle documentation for the required packages; the package list varies depending on the version of the operating system).

Network Configuration
Using the Network Configuration application, configure both NIC devices as well as the /etc/hosts file. Both of these tasks can be completed using the Network Configuration GUI. Notice that the /etc/hosts settings are the same on both nodes.
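For the package installation step above, whether a given package is already present can be spot-checked by querying rpm on each node. The package names below are only examples; the authoritative list is in the Oracle installation guide for your OS release.

# rpm -q binutils gcc glibc libaio make sysstat

Any package reported as "not installed" must be installed from the OS media before continuing.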
For example, specify the following entries in the /etc/hosts file on both nodes.
/etc/hosts
127.0.0.1       localhost.localdomain localhost
# Public Network - (eth0)
192.168.1.100   linux1
192.168.1.101   linux2
# Private Interconnect - (eth1)
192.168.2.100   linux1-priv
192.168.2.101   linux2-priv
# Public Virtual IP (VIP) addresses - (eth0)
192.168.1.200   linux1-vip
192.168.1.201   linux2-vip
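The useradd command below references the oinstall, dba and asm groups. If these groups do not exist yet, they can be created first on both nodes; the group IDs shown here are only illustrative.

# groupadd -g 501 oinstall
# groupadd -g 502 dba
# groupadd -g 503 asm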
# useradd -m -u 501 -g oinstall -G dba,asm -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/app
# chmod -R 775 /u01/app

Creating the directory for Oracle Clusterware.
# mkdir -p /u01/app/crs
# chown -R oracle:oinstall /u01/app/crs
# chmod -R 775 /u01/app/crs
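A quick sanity check on both nodes confirms the new account and its group membership:

# id oracle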
Create Mount Point for OCFS2 / Clusterware
Let's now create the mount point for the Oracle Cluster File System, Release 2 (OCFS2) that will be used to store the two Oracle Clusterware shared files (the OCR file and the voting disk file).

# mkdir -p /u02/oradata/orcl
# chown -R oracle:oinstall /u02/oradata/orcl
# chmod -R 775 /u02/oradata/orcl
Edit the .bash_profile file and set the required environment variables on both nodes.

PATH=$PATH:$HOME/bin
export ORACLE_SID=hrms1
export ORACLE_HOME=/u02/app/oracle/db_home
export ORA_CRS_HOME=/u02/app/oracle/crs_home
export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/lib
unset USERNAME
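After editing the file, the new settings can be loaded into the current session and spot-checked (a simple sanity check, not part of the original steps):

$ source ~/.bash_profile
$ echo $ORACLE_SID $ORACLE_HOME $ORA_CRS_HOME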
Swap Space Considerations
Installing Oracle Database 11g Release 1 requires a minimum of 1 GB of memory.
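Available memory and swap can be verified on each node before starting the installer; the required swap size depends on the amount of RAM, so check the installation guide for the exact figure.

# grep MemTotal /proc/meminfo
# grep SwapTotal /proc/meminfo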
(Refer to any UNIX documentation or the Oracle RAC installation guide for configuring SSH between the nodes.)
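A minimal sketch of SSH user equivalence for the oracle user, using the host names from this example (the full procedure, including DSA keys and known_hosts handling, is in the Oracle documentation):

As the oracle user on each node, generate a key pair and accept the defaults:
$ mkdir -p ~/.ssh; chmod 700 ~/.ssh
$ ssh-keygen -t rsa

Then, from one node, collect both public keys into authorized_keys and copy the file to the other node:
$ ssh linux1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh linux2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ scp ~/.ssh/authorized_keys linux2:~/.ssh/
$ ssh linux2 chmod 600 ~/.ssh/authorized_keys

Test in both directions; the commands should run without prompting for a password:
$ ssh linux2 date
$ ssh linux1 date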
Reboot both nodes after configuring the kernel-level parameters.

Install & Configure Oracle Cluster File System (OCFS2)
# rpm -Uvh ocfs2-tools-1.2.6-1.el5.i386.rpm
# rpm -Uvh ocfs2-2.6.18-8.el5-1.2.6-1.el5.i686.rpm
# rpm -Uvh ocfs2console-1.2.6-1.el5.i386.rpm
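The kernel-level parameters referred to above are not listed in this guide. A typical /etc/sysctl.conf fragment for Oracle 11g on Linux looks roughly like the following; the values are illustrative and should be taken from the Oracle installation guide and sized for the memory on your nodes.

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144

Apply the settings with # sysctl -p (or reboot, as mentioned above).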
Configure OCFS2
$ su
# ocfs2console &
Select [Cluster] -> [Configure Nodes...]. This will start the OCFS2 cluster stack and bring up the "Node Configuration" dialog.
On the "Node Configuration" dialog, click the [Add] button. This will bring up the "Add Node" dialog. In the "Add Node" dialog, enter the Host name and IP address for the first node in the cluster. Leave the IP Port set to its default value of 7777. In my example, I added both nodes using linux1 / 192.168.1.100 for the first node and linux2 / 192.168.1.101 for the second node Click [Apply] on the "Node Configuration" dialog - All nodes should now be "Active". After verifying all values are correct, exit the application using [File] -> [Quit].
Format the OCFS2 Filesystem
Create a partition on the SAN or shared storage for storing the OCR file and the voting disk file that are created at the time of the Clusterware installation (use the fdisk command as the root user to create the partition).
NOTE: It is always recommended to create 4 partitions so that redundant copies of the voting disk file and the OCR file can be maintained.
$ su -
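With the partition in place, the OCFS2 file system is created from one node only. A minimal sketch, assuming the shared partition shows up as /dev/sdb1 (the label must match the one used in /etc/fstab below):

# mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocfs2 /dev/sdb1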
Configure OCFS2 to Mount Automatically at Startup
We can do that by adding the following line to the /etc/fstab file on both Oracle RAC nodes in the cluster:
LABEL=ocfs2 /u02/oradata/orcl ocfs2 _netdev,datavolume,nointr 0 0
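The volume can then be mounted on both nodes without a reboot (provided the o2cb cluster service is online) and the mount verified:

# mount /u02/oradata/orcl
# mount | grep ocfs2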
[root@hcslnx01 crs_home]# sh root.sh
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: hcslnx01 hcslnx01-priv hcslnx01
node 2: hcslnx02 hcslnx02-priv hcslnx02
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /ocfs2/voting_file
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
 hcslnx01
Cluster Synchronization Services is inactive on these nodes.
 hcslnx02
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@hcslnx01 crs_home]#
[root@hcslnx02 crs_home]# sh root.sh
WARNING: directory '/' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/' is not owned by root. Changing owner to root
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: hcslnx01 hcslnx01-priv hcslnx01
node 2: hcslnx02 hcslnx02-priv hcslnx02
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
 hcslnx01
 hcslnx02
Cluster Synchronization Services is active on all the nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
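Once root.sh has completed on both nodes, the state of the cluster can be checked as the oracle user. A quick check, run from either node:

$ $ORA_CRS_HOME/bin/olsnodes -n
$ $ORA_CRS_HOME/bin/crs_stat -t

olsnodes lists the cluster nodes with their node numbers, and crs_stat -t shows the VIP, GSD and ONS resources once the nodeapps have been configured.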
NOTE: The above command has to be executed manually on the failed node by connecting as the oracle user.
NOTE: Check the status of the node applications - (VIP, GSD, ONS, Listener) on each node:

$ srvctl status nodeapps -n linux1
VIP is running on node: linux1
GSD is running on node: linux1
Listener is running on node: linux1
ONS daemon is running on node: linux1
Display the configuration for node applications - (VIP, GSD, ONS, Listener)
$ srvctl config nodeapps -n linux1 -a -g -s -l
VIP exists.: /linux1-vip/192.168.1.200/255.255.255.0/eth0
GSD exists.
ONS daemon exists.
Listener exists.
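The same checks can be run against the second node as well, for example:

$ srvctl status nodeapps -n linux2
$ srvctl config nodeapps -n linux2 -a -g -s -l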