
Verify Oracle Grid Infrastructure and Database Configuration

The following commands verify the health and configuration of the Oracle Grid Infrastructure stack and the RAC database. Run the clusterized (cluster-aware) command crsctl check cluster as the grid user; run the srvctl commands as the oracle user.

Check the Health of the Cluster - (Clusterized Command)
[grid@racnode1 ~]$ crsctl check cluster

All Oracle Instances - (Database Status)
[oracle@racnode1 ~]$ srvctl status database -d racdb

Single Oracle Instance - (Status of Specific Instance)
[oracle@racnode1 ~]$ srvctl status instance -d racdb -i racdb1

Node Applications - (Status)
[oracle@racnode1 ~]$ srvctl status nodeapps

Node Applications - (Configuration)
[oracle@racnode1 ~]$ srvctl config nodeapps

List All Configured Databases
[oracle@racnode1 ~]$ srvctl config database

Database - (Configuration)
[oracle@racnode1 ~]$ srvctl config database -d racdb -a

ASM - (Status)
[oracle@racnode1 ~]$ srvctl status asm

ASM - (Configuration)
$ srvctl config asm -a

TNS Listener - (Status)
[oracle@racnode1 ~]$ srvctl status listener

TNS Listener - (Configuration)
[oracle@racnode1 ~]$ srvctl config listener -a

SCAN - (Status)
[oracle@racnode1 ~]$ srvctl status scan

SCAN - (Configuration)
[oracle@racnode1 ~]$ srvctl config scan

VIP - (Status of Specific Node)
[oracle@racnode1 ~]$ srvctl status vip -n racnode1
[oracle@racnode1 ~]$ srvctl status vip -n racnode2

VIP - (Configuration of Specific Node)
[oracle@racnode1 ~]$ srvctl config vip -n racnode1
[oracle@racnode1 ~]$ srvctl config vip -n racnode2

Configuration for Node Applications - (VIP, GSD, ONS, Listener)
[oracle@racnode1 ~]$ srvctl config nodeapps -a -g -s -l

Verifying Clock Synchronization across the Cluster Nodes
[oracle@racnode1 ~]$ cluvfy comp clocksync -verbose

All Running Instances in the Cluster - (SQL)
SELECT inst_id
     , instance_number inst_no
     , instance_name inst_name
     , parallel
     , status
     , database_status db_status
     , active_state state
     , host_name host
  FROM gv$instance
 ORDER BY inst_id;

All Database Files and the ASM Disk Group They Reside In - (SQL)
SELECT name FROM v$datafile
UNION
SELECT member FROM v$logfile
UNION
SELECT name FROM v$controlfile
UNION
SELECT name FROM v$tempfile;

ASM Disk Volumes - (SQL)
SELECT path FROM v$asm_disk;
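The status checks above can be swept in a single pass. The following is a minimal sketch, not part of the original article: it wraps only commands shown above, assumes the oracle user's environment (PATH, ORACLE_HOME) is already set, and carries over the database name racdb from the examples. By default it only prints each command (a dry run) so you can review the sequence; set RUN=1 to actually execute.

```shell
#!/bin/sh
# Dry-run health-check sweep (sketch). Set RUN=1 to execute for real.
# Assumption: oracle user's environment is set; "racdb" comes from the article.

DB=racdb

run() {
    echo "== $*"                       # show which check is running
    if [ "${RUN:-0}" = "1" ]; then     # execute only when explicitly asked
        "$@"
    fi
}

run srvctl status database -d "$DB"
run srvctl status nodeapps
run srvctl status asm
run srvctl status listener
run srvctl status scan
run cluvfy comp clocksync -verbose
```

Running it without RUN=1 simply lists the six checks in order, which is a convenient way to confirm the sweep before executing it on a live cluster.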

Starting / Stopping the Cluster


Stopping the Oracle Clusterware Stack on the Local Server

Use the "crsctl stop cluster" command on racnode1 to stop the Oracle Clusterware stack:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster

The following will bring down the Oracle Clusterware stack on both racnode1 and racnode2:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all

Starting the Oracle Clusterware Stack on the Local Server

Use the "crsctl start cluster" command on racnode1 to start the Oracle Clusterware stack:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster

You can choose to start the Oracle Clusterware stack on all servers in the cluster by specifying -all:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all

You can also start the Oracle Clusterware stack on one or more named servers in the cluster by listing the servers separated by a space:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -n racnode1 racnode2
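After issuing crsctl start cluster, the stack takes some time to come up; crsctl check cluster (shown earlier) can be polled until it succeeds. The sketch below is an assumption-laden helper, not from the article: the grid home path matches the article's examples, while MAX_TRIES and SLEEP_SECS are arbitrary defaults to tune for your site.

```shell
#!/bin/sh
# Sketch: poll "crsctl check cluster" until the stack is healthy or we time out.
# CRSCTL path comes from the article; retry limits are assumptions.

CRSCTL=${CRSCTL:-/u01/app/11.2.0/grid/bin/crsctl}
MAX_TRIES=${MAX_TRIES:-30}     # assumed: up to 30 attempts
SLEEP_SECS=${SLEEP_SECS:-10}   # assumed: 10 seconds between attempts

wait_for_cluster() {
    i=0
    while [ "$i" -lt "$MAX_TRIES" ]; do
        if "$CRSCTL" check cluster >/dev/null 2>&1; then
            echo "Clusterware stack is up"
            return 0
        fi
        i=$((i + 1))
        sleep "$SLEEP_SECS"
    done
    echo "Clusterware stack did not come up in time" >&2
    return 1
}

# usage (as root or grid, after "crsctl start cluster"):
#   wait_for_cluster
```

Because the function only reports success or failure via its exit status, it drops cleanly into startup scripts that must block until the cluster is ready.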

Start/Stop All Instances with SRVCTL

Finally, you can stop and start all instances and their associated services using the following:

[oracle@racnode1 ~]$ srvctl stop database -d racdb
[oracle@racnode1 ~]$ srvctl start database -d racdb
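Putting the pieces together, a full-cluster bounce is often done in an orderly sequence: stop the database with srvctl first, then the Clusterware stack, then bring both back and verify. The sketch below is illustrative only and uses just the commands from this article; note that crsctl stop cluster will also stop managed resources on its own, so stopping the database first is a common convention rather than a requirement, and in practice srvctl runs as oracle while crsctl runs as root. It defaults to a dry run that prints the sequence; set RUN=1 to execute.

```shell
#!/bin/sh
# Sketch of an orderly full-cluster bounce (dry run by default).
# "racdb" and the grid home path are taken from the article's examples.

DB=racdb
CRSCTL=/u01/app/11.2.0/grid/bin/crsctl

do_cmd() {
    echo ">> $*"                       # print the step
    if [ "${RUN:-0}" = "1" ]; then     # execute only when explicitly asked
        "$@"
    fi
}

do_cmd srvctl stop database -d "$DB"       # stop instances and services first
do_cmd "$CRSCTL" stop cluster -all         # bring down the stack on all nodes
do_cmd "$CRSCTL" start cluster -all        # bring the stack back up
do_cmd srvctl start database -d "$DB"      # restart the database
do_cmd srvctl status database -d "$DB"     # confirm all instances are running
```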
