
COOKBOOK

Quick Installation Guide

Building an
Oracle RAC 10g R1 cluster
for Linux on IBM
System z9 & zSeries


Version 1
January 2006


European ORACLE / IBM
Joint Solutions Center

IBM - Jack Hoarau - Olivier Manet
Europe Advanced Technical Support
System z9 & zSeries New Technology Center

Oracle - Frederic Michiara
European Oracle/IBM Joint Solutions Center













This document is based on our experiences.
It is not official Oracle or IBM documentation.
This document will be updated regularly,
and we are open to any additions
or feedback from your own experiences,
on the same or a different storage solution!





Document history:

Version: 1.00
Date: December 2005
Update: Creation
Who: Jack Hoarau, Olivier Manet, Frederic Michiara
Validated by: Jack Hoarau, Olivier Manet, Frederic Michiara, Alain Roy








Contributors :
o Alain Roy : IBM France (EMEA ORACLE/IBM Joint Solutions Center)


Contact :
EMEA ORACLE / IBM Joint Solutions Center oraclibm@fr.ibm.com










Contents


1. Introduction
2. Pre-requisites
   2.1 Operating environment
   2.2 Architecture
   2.3 z/VM configuration
       z/VM user directory
       Disk configuration
3. Installing Linux
   3.1 Linux installation
       Step by step
       Putty Customization
       VNC set up
   3.2 Linux configuration
       System resources checking
       Logical volume configuration
       Kernel parameters change
       Oracle group and oracle user accounts creation
       Shell limits for Oracle user
       Network configuration
       SSH/SCP configuration
       Oracle environment variables
       Shared disks pool initialization
       Bind Raw Devices to shared Disk Devices
4. Installing Oracle RAC 10g cluster
   4.1. Step by step installation
       Oracle CRS installation
       Oracle RAC software installation
       Oracle Net Services Configuration
       Oracle DB creation
5. Appendixes
   5.1. Technical references










1. Introduction

This document provides the reader with instructions for installing an Oracle RAC 10g
database cluster on SuSE Linux Enterprise Server 9 (SLES 9) Service Pack 2 running
on IBM zSeries hardware (with IBM z/VM virtualization technology).

The included procedure describes a base product installation (using the graphical wizard)
on two nodes, with custom options. The installation program needs a graphical user
interface. The VNC tool, included in the SuSE SLES 9 distribution, provides this
interface. It supports the X Window protocol.

The following Oracle products will be installed:
Oracle CRS v10.1.0.3.0,
Oracle RAC v10.1.0.3.0.






















2. Pre-requisites


2.1 Operating environment


Below are the hardware and software used in the target environment:

IBM eServer zSeries 900
IBM TotalStorage DS8000 (FICON Attachment)
IBM z/VM V5.1 Service Level 0501
Novell SuSE SLES9 SP2, Kernel version 2.6.5-7.191, Minimal Graphical System
Oracle RAC 10g 10.1.0.3.0

Other required packages (installed versions):

gcc-3.3.3-43.34
gcc-c++-3.3.3-43.34
glibc-2.3.3-98.47
glibc-32bit-9-200506070135
glibc-locale-32bit-9-200506070135
glibc-devel-2.3.3-98.47
glibc-devel-32bit-9-200506070135
make-3.80-184.1
libaio-0.3.102-1.2
libaio-devel-0.3.102-1.2
openmotif-2.2.2-519.4
openmotif-libs-2.2.2-519.4
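
A quick way to confirm that these packages are installed at the expected levels is to query
the RPM database (this check is only a suggested convenience, not part of the original procedure):

# rpm -q gcc gcc-c++ glibc glibc-32bit glibc-locale-32bit glibc-devel glibc-devel-32bit make libaio libaio-devel openmotif openmotif-libs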



































2.2 Architecture


The current operating environment is illustrated in the architecture diagram.










































2.3 z/VM configuration


z/VM user directory

The user directory defines the Linux instances used to build a cluster of two Oracle RAC
nodes.

Each instance has been allocated four virtual CPUs.

Memory (Linux RAM) can be increased up to 768 MB. The default is 128 MB.

Each instance has access to a set of six minidisks for products and swap space.
Seven other minidisks are shared between the two RAC nodes: one for Voting and OCR
and six for the ASM-managed database. The first z/VM user owns the seven minidisks in
Read/Write. The second user links to the minidisks in Read/Write.


Utilization              Number of minidisks and cylinders     Approximate size
Linux product            1 minidisk of 3000 cyl                2.1 GB
Linux swap 1             1 minidisk of 338 cyl                 242 MB
Linux swap 2             1 minidisk of 1111 cyl                781 MB
Oracle product (LVM2)    3 minidisks of 3338 cyl each          6.8 GB
Voting + OCR             1 minidisk of 1112 cyl                781 MB
Database ASM managed     6 minidisks of 1669 cyl each          7 GB


Two network connections are available: one dedicated OSA Express Gigabit Ethernet
adapter and one SPECIAL statement for a guest LAN connection. The OSA Express is the
connection to the public network. The guest LAN connection is the private network
(Oracle interconnect).

z/VM user directory entry samples:

03210 ** User Linux3
03211 **
03212 USER LINUX3 UNLOG 128M 768M G
03213 INCLUDE PRFLINUX
03214 SCR INA WHI NON STATA RED NON CPOUT YEL NON VMOUT GRE NON INRED TUR NON
03215 ACCOUNT X25SP
03216 MACHINE ESA 4
03217 CRYPT APVIRT
03218 CPU 00 BASE
03219 CPU 01
03220 CPU 02
03221 CPU 03









03222 IUCV ALLOW
03223 IUCV ANY PRIORITY
03224 XAUTOLOG OPERATOR TODEVENT
03225 OPTION APPLMON TODEN MIH DEVI DEVM MAINTCCW RMCHINFO
03226 SHARE RELATIVE 2000
03227 ** OSA EX ETH LG X CHP XX
03228 DEDICATE 614 620
03229 DEDICATE 615 621
03230 DEDICATE 616 622
03231 ** GUEST LAN
03232 SPECIAL 814 QDIO 3 SYSTEM LANQ00
03233 ** Common linux install
03234 LINK LINUX 191 191 RR
03235 ** Main linux
03236 MDISK 0100 3390 1 3000 LINX02 MR READ WRITE MULT
03237 ** Swap 1st level
03238 MDISK 0101 3390 3001 338 LINX03 MR
03239 ** Swap extension
03240 MDISK 0200 3390 1 1111 LINX14 MR
03246 ** Oracle Install
03247 MDISK 0300 3390 1 3338 LIORA7 MR
03248 MDISK 0301 3390 1 3338 LIORA8 MR
03249 MDISK 0302 3390 1 3338 LIORA9 MR
03250 ** Oracle voting and OCR
03251 MDISK 0340 3390 1113 1112 LIORAZ MW
03252 ** Oracle ASM DB
03253 MDISK 0350 3390 1 1669 LIORAD MW
03254 MDISK 0351 3390 1670 1669 LIORAD MW
03255 MDISK 0352 3390 1 1669 LIORAE MW
03256 MDISK 0353 3390 1670 1669 LIORAE MW
03257 MDISK 0354 3390 1 1669 LIORAF MW
03258 MDISK 0355 3390 1670 1669 LIORAF MW


Cluster node LINUX4 has identical definitions except for the MDISK statements:


03303 ** Common linux install
03304 LINK LINUX 191 191 RR
03305 ** Main linux
03306 MDISK 0100 3390 1 3000 LINX03 MR READ WRITE MULT
03307 ** Swap 1st Level
03308 MDISK 0101 3390 3001 338 LINX04 MR
03309 ** Swap extensions
03310 MDISK 0200 3390 1 1111 LINX12 MR
03311 ** Oracle Install
03312 MDISK 0300 3390 1 3338 LIORB1 MR
03313 MDISK 0301 3390 1 3338 LIORB2 MR
03314 MDISK 0302 3390 1 3338 LIORB3 MR
03315 ** Oracle voting and OCR
03316 LINK LINUX3 0340 0340 MW
03317 ** Oracle rac ASM DB
03318 LINK LINUX3 0350 0350 MW
03319 LINK LINUX3 0351 0351 MW









03320 LINK LINUX3 0352 0352 MW
03321 LINK LINUX3 0353 0353 MW
03322 LINK LINUX3 0354 0354 MW
03323 LINK LINUX3 0355 0355 MW







Disk configuration

Below is an overview diagram of the disk allocation and distribution used in the target
environment.










































3. Installing Linux


3.1 Linux installation

Step by step

This part of the workshop will guide you through the installation of the SuSE SLES 9 Linux system (64-bit
version). We will be using z/VM to host the Linux systems, so you will be installing Linux into a virtual
machine.

3.1.1- Initial Startup

The first part of the lab will help you install the starter Linux up to the point where it is ready to accept
network connections. At that point you will use Telnet/SSH from a workstation to proceed with the rest of
the installation.

Start a 3270 terminal session and connect to the z/VM system.

Log on by entering the following command in the command area of the VM logo screen:

log linuxn by teamn (where n is your team-number)

You will be prompted for a password (which your friendly teachers will give you). This password is
temporary. You must change it. Refer to the displayed password prompt message to find out how to define and
verify a new password. Please use a simple one - this is not a security class.

After a successful login, hit the Clear key (Pause/Attn key on the PC keyboard) until you get to the CMS
Ready prompt.

Ready; T=0.01/0.02 09:06:46

Execute the REXX EXEC command to transfer the three installation files into your virtual reader (the
VM guest RDR):

pusles9x










Before proceeding with the next step we need to expand the storage size of the virtual machine to 512
megabytes - this is the recommended starting point for installing with a graphical interface when the Linux
system is running in a virtual machine. 512M is not the default, so remember to re-enter the value, prior to
booting Linux, any time you log on to your VM guest after a full logoff:

def sto 512M

You are now ready to IPL (boot) the starter Linux from your virtual reader:

#cp i c clear

The starter Linux should now boot. Hit the Clear key when required (More... is displayed in the bottom
right corner of the screen) until Please select the type of your network device shows up. Select option 3
(OSA-Gigabit Ethernet or OSA-Express Fast Ethernet). You will be using an OSA-Express card with
Gigabit Ethernet.

You will now be prompted for the details of your network configuration.
Before continuing, please get the values from the Local Network Topology table in your handout
document.

OSA Device Numbers 0x0614, 0x0615, 0x0616
(Use the #cp q osa command to query the possible values)
Portname PORT123 (all in upper case)
Full Host Name oran.mop.ibm.com
(where n is your team number)
TCP/IP Address 9.100.193.xxx (from table)
Subnet Mask 255.255.252.0
Broadcast Address (accept default) 9.100.195.255
Gateway Address 9.100.193.69
DNS none
MTU 1500 (recommended for Ethernet)

Note: Proposed default values (between parentheses) are accepted as is by pressing Enter twice.

Now is your chance to review your network parameters. Check the values and enter YES to accept - or
NO if corrections are needed.

Enter an installation password when prompted - keep it simple, please. We suggest you use your team
number - teamxx.

The system will now try to activate the OSA card and the connection to the network. It will ping the local
address and the gateway and then generate the host keys for the SSH (Secure Shell) remote login connection.

After successful pings and RSA key generation you will be asked to define the installation media.

For the installation source, please specify FTP - choice 3

The installation packages are located on a separate server. You will be using non-anonymous FTP to transfer them.
The current version of the SLES9 installer requires you to specify the path of the source directory relative to the
FTP user's home directory (you need to specify ../../ in our case).










The IP-number for the server is : 9.100.192.155
The directory used for the packages : ../../home/distrib/D
The user id : common
The password : ftp1

Confirm with YES when the values are correctly entered, NO to re-enter.

Which terminal do you want to use? For this lab we will use VNC to enable the installation GUI.

1) X-Window
2) VNC (VNC-Client or Java enabled Browser)
3) ssh

Choice: 2

Enter the Password for VNC-Access (6 to 8 characters): teamxx

Follow the instructions on the VM 3270 console. Go to your workstation desktop to open the vncviewer.



3.1.2- Installation using Yast2 Graphical User Interface (GUI)

You will be using the VNC tool to support the graphical environment needed by YaST2.
Starting the VNC viewer on your desktop begins with the following prompts.














Type the TCP/IP address of your Linux system, followed by a colon (:) and the session display number, as shown.
Enter the VNC access password defined earlier.

The vncviewer is the client running on your workstation, exchanging with Xvnc, the server activated in
Linux. Xvnc has a built-in X Window server managing the screen images in a buffer. The vncviewer
remotely reads the buffer and displays it, rectangle by rectangle, refreshing changes. The VNC protocol uses
encoding methods to pass data (e.g. Hextile for fast compression, or Tight for best compression).

There are VNC and TightVNC versions. SLES9 implements Xvnc tight. We will be using the TightVNC
viewer on the workstation. This viewer is available in the /dosutils subdirectory on CD1 of the SLES9
distribution.

(Although not required - we recommend that you disconnect from the VM 3270 console at this point using
the #cp disc command. If you choose to remain logged on to the console, please go back to it from time to
time to clear the screen whenever More... is displayed, in order to keep your Linux system running smoothly.)

When you get to the vncviewer, YaST2 will start-up automatically.












Accept the license.

Select your installation language. English is highly recommended in case you need assistance from your workshop
instructors (since it is at least a language spoken by your instructors!).


Click ACCEPT.
At the DASD Disk Management panel, make the appropriate selections to select and activate only DASD
100 and 101: 100 to hold the Linux operating system and applications, and 101 to hold the swap space.












Select 0.0.0100 and 0.0.0101. Don't click anything yet; please see the next page.



To select DASD addresses 100 and 101 for your Linux system, highlight each line and hit Select or Unselect
to mark it.
Do not click Next yet; the disks must be activated to continue:
Open Perform Action and select Activate to activate them. They will be put online.












Click NEXT.



Select the radio button New installation and click OK.



















You are now - hopefully - at the Installation Settings panel. We now want you to prepare your two DASD
devices. One will be prepared to contain the Linux system and the other one will be prepared as a
swap device for Linux.



Click on the text Partitioning to open the Expert Partitioner page.












Select the /dev/dasda1 line and click Edit.



On the popup panel select the Format radio button, and in the pulldown select the Ext3 file system. In the
pulldown for the mount point select /. Now click OK.

Back in the Expert Partitioner, select the /dev/dasdb1 line and click Edit.



On the panel select the Format radio button, and in the pulldown select Swap. Click OK.










Click Next on the Expert Partitioner page.



Back on the Installation Settings panel click on the text Software.



On the Software Selection panel select the radio button Minimum Graphical System (without KDE) and click
on Detailed Selection.










The Linux guest mini-disk is only 3000 cylinders wide (2.1GB). To save space for installing the other
products (DB2, WebSphere) we want you to install a minimal system.



In the Filter pull-down on the top left panel select Package Groups.



In the Package Groups list go to the bottom and select zzz All
In the right panel mark or un-mark the corresponding check box to modify the default selections for
installation as follows:










Select these packages with a checkmark in the corresponding box:
compat, compat-32-bit, lvm2, openmotif, openmotif-libs, xinetd,
and for Oracle: gcc, gcc-c++, glibc-devel, glibc-devel-32bit, libaio, libaio-devel, make
(libstdc++-devel is automatically included as a dependency).

Unselect these packages (checkbox fully blank, two clicks!): dhcpcd, eject, finger, fvwm2, ntfsprogs,
providers, usbutils

Click Accept, then click Continue in the Automatic Changes popup.

On return to the installation settings panel we want to set the time zone parameters. Click on the text Time
Zone. On the next panel select your region/country and in the pulldown for hardware clock, select UTC.



Click Accept.

Back on the Installation Settings panel again: we are not going to change the Default Runlevel, so keep it at 5
(multi-user, network and GUI) - click Accept.












If you are satisfied that your settings are correct, click Yes, install on the green confirmation panel.

Your disks will now be formatted for the file system and the installation will start.














This will take a while - so go get a cup of coffee or a softdrink!





Wake up !!!!



On the panel saying your system will now be shut down, click OK.










The system will stop and the vncviewer window will disappear.










3.1.3- Configuration and Final Installation

Go back to the VM guest 3270 console and log on to your Linux system.

Your brand new system is now installed and ready to boot from DASD device 100:

#cp i 100 clear

When the boot is complete, go to your vncviewer and open it again.

(Although not required - we recommend that you disconnect from the VM console at this point using the #cp
disc command. If you choose to remain logged on to the console, please go back to it from time to time, to
clear the screen whenever More... is displayed, in order to keep your Linux system running smoothly.)

You will see that YaST2 is again started automatically. You are prompted for a final password for the user
root. Please use a simple password - this is not a security class.





Click Next.






















In the network configuration panel, click on Network Interfaces.



Click Change under Already configured devices




Click Edit for the selected card












Click on Host name and name server



Check that Host name and domain name match your Linux instance name and domain name, click OK

To return to the initial Network Configuration panel, click Next, then Finish

After this step, the host name file is validated in the Linux configuration.

Click Next.
















Skip Test Internet Connection



Click Next.

Do not change anything about CA Management (certificate authority) and LDAP Server.












Click Next






The Linux user authentication method will be local to the system, using the password and shadow files from
the /etc directory.



Click Next

You are now prompted to define a new user. Do so using your team number (teamn, where n is your team
number).











Choose a simple, easy-to-remember password - please! Ignore the warning messages about the
weakness of the password; again, we are not conducting a security class. The entry will be accepted anyway.

Click Next.

The system will now finish the final setup of your new Linux system.

Take a breather and read the Release Notes.

Click Next












Enjoy the Congratulations screen!



Click Finish

Putty Customization

We will be using PuTTY in this class - release 0.55 - to telnet/SSH into Linux.










As with the VNC viewer, a PuTTY installer is also available in the /dosutils directory of CD1 of the SLES9
distribution.

The SLES9 sshd (SSH server daemon) is configured to use the SSH 2 protocol. An old version of PuTTY, or a PuTTY
not configured to accept SSH2, will result in access being denied: any user login will be rejected.

Please open PuTTY and go through the following customisation steps:

Select Session
Type your Linux system IP-address into Host Name
Select the SSH radio button
Select Terminal
Check the box Use background colour to erase screen
(This is needed for YaST)
Select Keyboard
Check the Control-H radio button
(To enable Ctrl+Backspace to delete chars)
Select Window
Increase Rows to 32 (Better screen size)
Select Appearance
Use Change button to change font size if required
Select SSH
Set Preferred SSH protocol version to 2
Select Session (yes - once more)
Type a name in Saved Sessions (ex. wflinuxn)
Click the Save button
Click the Open button and the session will start.

The first time you log in you will see a warning about an unknown host system fingerprint. Accept the host
key so that PuTTY registers it.

Log in to your Linux system as root to open the session.
VNC set up

This part of the workshop will guide you through the customization of the vnc-server.

We will use VNC (Virtual Network Computing) any time a user graphic interface is needed.

The objective is to show how to start the VNC server (Xvnc in linux), to create a virtual desktop to access
from a workstation VNC viewer.

Xinetd is combined with xdm to start and open Xvnc instances on request. This method helps simplify
remote VNC logins.

With SLES9, configuration files are installed with the VNC package to enable Xvnc to run as a TCP/IP service
under the control of xinetd (the TCP/IP services enabler). Xvnc server instances are opened and closed on user
request only.

The X server built into Xvnc, like any standard X Window server, has two modes of operation: passive and
query.
In query mode the X server uses XDMCP (X Window Display Manager Communication Protocol) to query a
host running an xdm server and to obtain a user login screen for opening a desktop.










We are going to use this method to enable the Xvnc server to start automatically, on request, and present
logon screens to the user in his VNC viewer.

During installation we selected runlevel 5 as the default run level. This takes care of running the
xdm server.

We also chose to install xinetd.

What we need to do now is to:
Enable vnc as a TCP/IP service under xinetd
Change the configuration of the xdm server, running locally, so that it will accept queries from the Xvnc sessions
and open the logon screen.

Configure xinetd to start Xvnc whenever a connection is requested. Edit the vnc TCP/IP service configuration in
/etc/xinetd.d/:

# vi /etc/xinetd.d/vnc

Modify the first entry: enable the service, use a 904x676 desktop, and disable the keyboard extension with -kb.


# default: off
# description: This serves out a VNC connection which starts at a KDM login \
#              prompt. This VNC connection has a resolution of 1024x768, 16bit depth.
service vnc1
{
        disable         = no
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = nobody
        server          = /usr/X11R6/bin/Xvnc
        server_args     = :42 -inetd -once -query localhost -geometry 904x676 -depth 16 -kb
        type            = UNLISTED
        port            = 5901
}

Set the xinetd service to start automatically when booting

# insserv xinetd (or # chkconfig -s xinetd on)


Start the xinetd service. It is not currently running

# rcxinetd start

Configure the display manager (xdm)

Update the xdm configuration to have xdm listen on port 177. This is the port used by default, but we want to
force it explicitly.

# vi /etc/X11/xdm/xdm-config












Uncomment the last line and modify it as shown:

!
!DisplayManager.requestPort: 0
DisplayManager.requestPort: 177

Update Xaccess to force xdm to accept queries only from the local host, where Xvnc is running, and to
ignore broadcast requests.

# vi /etc/X11/xdm/Xaccess

# for IndirectQuery messages only entries with right hand sides can
# match, for Direct and Broadcast Query messages, only entries without
# right hand sides can match.
#

#* #any host can get a login win
localhost

#
# To hardwire a specific terminal to a specific host, you can
# leave the terminal sending indirect queries to this host, and
# use an entry of the form:
#

#terminal-a host-a


#
# The nicest way to run the chooser is to just ask it to broadcast
# requests to the network - that way new hosts show up automatically.
# Sometimes, however, the chooser can't figure out how to broadcast,
# so this may not work in all environments.
#

#* CHOOSER BROADCAST #any indirect host can get a
LISTEN localhost
#
# If you'd prefer to configure the set of hosts each terminal sees,
#

Edit the display manager parameter file in /etc/sysconfig to enable remote access

# vi /etc/sysconfig/displaymanager

## Type: yesno
## Default: no
#









# Allow remote access to your display manager (xdm/kdm). Please note
# that a modified kdm or xdm configuration, e.g. by KDE control center,
# will not be changed.
#
#DISPLAYMANAGER_REMOTE_ACCESS="no"
DISPLAYMANAGER_REMOTE_ACCESS="yes"

## Type: string
## Default: no
#
# Allow remote access of the user root to your display manager
#
DISPLAYMANAGER_ROOT_LOGIN_REMOTE="no"

Run the SuSE sysconfig script to register the parameter change:

# SuSEconfig

Now start the xdm server:

To auto-start the xdm server as a service at boot time, enter

# insserv xdm

To start the xdm server now, enter:

# rcxdm start

Open the VNC viewer on your workstation. Enter the TCP/IP address of the Linux host and :1 as the
display session number. You should get a logon screen.

Log in as a Linux user to open a session. The desktop should display. Remember that we have installed a
minimum graphical system with OpenMotif's window manager (mwm), so don't expect a
sophisticated desktop with menus and icons. This is just enough for what we need to do, and it uses a minimum
of the system resources.

To open an xterm window: Right click and hold while pointing on the vncdesktop backdrop. Select New
Window from the opening menu.

Exit the desktop and close the VNC session by selecting Quit in the menu. Click OK to quit. The Xvnc
session ends and the VNC viewer closes.

Use sux userxx to switch to userxx while passing along the graphic environment.












To get the drop down menu list, right click and hold, then scroll to desired entry.

The z/VM console should now be disconnected. From this console, enter the following CP command:

#cp disconnect (the # sign is not a prompt; it is part of the command)

This is the end of the Linux instance installation.




3.2 Linux configuration

System resources checking

Enter the following commands to check memory and swap space:

# grep MemTotal /proc/meminfo

# grep SwapTotal /proc/meminfo

Oracle recommends a minimum of 512 MB of RAM and 1 GB of swap space.
Under z/VM, memory size and swap should be adjusted to optimize virtual resource
utilization. Refer to the documentation on performance considerations for Linux under
z/VM.

Currently the swap space is too small. In the next step, we are going to add a disk, using
command-line tools as an alternative to YaST2.

From the terminal, enter:

# dasd_configure 0.0.0200 1 0

List all disks:










# lsdasd

Record the device name displayed (dasdx) for disk 0.0.0200; it should be dasdc.

# mkswap /dev/dasdc1
# swapon /dev/dasdc1

Verify the new swap space:

# grep SwapTotal /proc/meminfo

It should be equal to 1043072 KB.

The change will be made permanent for the next boot after rebuilding the boot ramdisk and the
zipl boot record. /etc/fstab will be updated as well at the end of the next step.



Logical volume configuration

Oracle RAC 10g installation requires more disk space than is initially available on the
Linux instance (see the Oracle installation requirements). Disk space needs to be
added; this will be done by adding logical volumes.

Activate the new volumes:

# dasd_configure 0.0.0300 1 0
# dasd_configure 0.0.0301 1 0
# dasd_configure 0.0.0302 1 0

List all disks:

# lsdasd

Record the device names displayed (dasdx) for disks 0.0.0300, 0.0.0301 and 0.0.0302; they should
be dasdd, dasde and dasdf respectively.

Format the new volumes (Caution: double check the device names before
answering yes to the format command prompt):

# dasdfmt -f /dev/dasdd -b 4096 -p
# dasdfmt -f /dev/dasde -b 4096 -p
# dasdfmt -f /dev/dasdf -b 4096 -p

Create only one partition on each disk:

# fdasd -a /dev/dasdd
# fdasd -a /dev/dasde
# fdasd -a /dev/dasdf

Initialize the physical volumes for the Oracle 10g volume group:

# pvcreate /dev/dasdd1 /dev/dasde1 /dev/dasdf1










Display the attributes of the PVs you have just created:

# pvdisplay

Create a volume group named ora10gVG with all the physical volumes created in the
previous step:

# vgcreate ora10gVG /dev/dasdd1 /dev/dasde1 /dev/dasdf1

Display the attributes of this volume group.

# vgdisplay ora10gVG

Create a logical volume with the following attributes:
Name: ora10gLV
3 stripes with a stripe size of 16 KB
6.8 GB LV size

# lvcreate -i3 -I16 -L6.8G -nora10gLV ora10gVG

Display the attributes of the newly created LV:

# lvdisplay /dev/ora10gVG/ora10gLV

Check that the logical volume was created successfully and that the LV size is 6.81 GB.

Create a journaled ext2 (ext3) file system on the ora10gLV logical volume. Mount the
logical volume and make sure everything looks correct:

# mkfs.ext3 /dev/ora10gVG/ora10gLV -b 4096

Create the mount points for Oracle home and CRS home:

# mkdir -p /opt/oracle
# mount /dev/ora10gVG/ora10gLV /opt/oracle
# df -h

# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/dasda1 2.1G 826M 1.2G 42% /
tmpfs 248M 0 248M 0% /dev/shm
/dev/mapper/ora10gVG-ora10gLV
6.8G 33M 6.4G 1% /opt/oracle

Note: /opt/oracle is actually mounted over /dev/mapper/ora10gVG-ora10gLV

Update the file system automount table to mount the logical volume and the swap disk
(defined in the previous step) at boot time. A good precaution is to make a backup copy of the
original fstab file first:
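
For example (the backup file name is only a suggestion):

# cp -p /etc/fstab /etc/fstab.orig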

# vi /etc/fstab

Add the following lines:










/dev/dasdc1 swap swap pri=42 0 0
/dev/ora10gVG/ora10gLV /opt/oracle ext3 acl,user_xattr 1 1

Important: To make the change permanent for the next boot, rebuild boot ramdisk and
zipl boot record:

# mkinitrd
# zipl


Kernel parameters change

Edit /etc/sysctl.conf file to insert new kernel settings:

# vi /etc/sysctl.conf

Add the following lines:

net.ipv4.ip_local_port_range=1024 65000
kernel.sem=250 32000 100 128
kernel.shmmax=2147483648
fs.file-max=65536


To apply then verify changes, enter the following commands:

# sysctl -p
# sysctl -a
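
Since sysctl -a produces a long listing, a narrower check of just the values set above can be
handy (a suggested convenience, not part of the original procedure):

# sysctl kernel.shmmax kernel.sem fs.file-max net.ipv4.ip_local_port_range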

Ensure sysctl will run at boot to set these values. Enter the following command:

# chkconfig boot.sysctl

boot.sysctl must be on.
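
If boot.sysctl is reported as off, it can be switched on (a hedged example; insserv boot.sysctl
should achieve the same on SLES9):

# chkconfig boot.sysctl on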


Oracle group and oracle user accounts creation

Verify that current active user ID is root:

# id

Create the oracle accounts and groups:

# groupadd dba
# groupadd oinstall
# useradd -m -c "Oracle Software Owner" -g oinstall -G dba oracle
# passwd oracle

Important: the group ID and user ID for oracle must be equal on each Linux guest
participating in the RAC cluster. Check with the id command:









# id

And compare values of uid, gid and groups between all Linux guests.
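
If the IDs differ, one way to keep them identical is to create the groups and the oracle user
with explicit numeric IDs on every node. The values below are only an example, not taken
from the original setup:

# groupadd -g 500 dba
# groupadd -g 501 oinstall
# useradd -u 500 -m -c "Oracle Software Owner" -g oinstall -G dba oracle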

The oracle user is now created. The Oracle installation directory must be owned by
oracle:dba.

As user root, enter:

# chown oracle:dba /opt/oracle

Shell limits for Oracle user

Verify the current limits for open files and max user processes with the following
command:

# ulimit -a

These limits must be increased for the oracle user. Edit /etc/security/limits.conf to change them:

# vi /etc/security/limits.conf

Add the following lines before # End of file:

oracle soft nofile 4096
oracle hard nofile 65536

Ensure that pam_limits is configured in /etc/pam.d/sshd, /etc/pam.d/login and
/etc/pam.d/su.

# less /etc/pam.d/sshd

The entry should read like this:

session required pam_limits.so

Repeat the same steps for /etc/pam.d/login, /etc/pam.d/su and /etc/pam.d/xdm. Add the
entry if it is not present.
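
A quick way to check all four PAM files at once (a suggested convenience):

# grep pam_limits /etc/pam.d/sshd /etc/pam.d/login /etc/pam.d/su /etc/pam.d/xdm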

The default limit for the oracle user is now 4096, and the oracle user can increase the
number of file handles up to 63536.

Check with the following sequence of commands:

# su oracle
~> ulimit -n
4096
~> ulimit -n 63536
~> ulimit -n
63536

To make this change permanent, add ulimit -n 63536 to the oracle user's .profile:

# vi /home/oracle/.profile










Insert at the end of file:

ulimit -n 63536

Close the oracle user session and return to user root.

A similar process is required to increase the max user processes for the oracle user.


These limits must be increased for the oracle user. Edit /etc/security/limits.conf to change them:

# vi /etc/security/limits.conf

Add the following lines before # End of file:

oracle soft nproc 2047
oracle hard nproc 16384

To make this change permanent, add ulimit -u 16384 to the oracle user's .profile.
Log in or su to oracle:

~> vi .profile

Insert at the end of file:

ulimit -u 16384

Close the oracle user session and return to user root.
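
As a hedged check, mirroring the nofile verification above, a new oracle login session should
report the higher process limit:

# su - oracle
~> ulimit -u
16384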


Network configuration

In an Oracle RAC configuration, each node has access to one public network and one
private cluster interconnect network.

In this freshly installed environment, the public LAN is considered to have been configured during the
Linux installation. A Fast Ethernet OSA card provides the connection to the
public network.

The private network is yet to be configured. Under the current z/VM environment, a
guest LAN is defined with a QDIO Ethernet interface for the devices at addresses 814,
815, and 816. The private network will use this adapter.

Create a hardware configuration file for device 814 by copying the configuration file of
the current OSA card:

# cp /etc/sysconfig/hardware/hwcfg-qeth-bus-ccw-0.0.0614 /etc/sysconfig/hardware/hwcfg-qeth-bus-ccw-0.0.0814

Edit the configuration file and change device address and port name:

# vi /etc/sysconfig/hardware/hwcfg-qeth-bus-ccw-0.0.0814










#!/bin/sh
#
# hwcfg-qeth-bus-ccw-0.0.0814
#
# Hardware configuration for a qeth device at 0.0.0814
# Automatically generated by netsetup
#

STARTMODE='auto'
MODULE='qeth_mod'
MODULE_OPTIONS=''
MODULE_UNLOAD='yes'

# Scripts to be called for the various events.
SCRIPTUP='hwup-ccw'
SCRIPTUP_ccw='hwup-ccw'
SCRIPTUP_ccwgroup='hwup-qeth'
SCRIPTDOWN='hwdown-ccw'

# CCW_CHAN_IDS sets the channel IDs for this device
# The first ID will be used as the group ID
CCW_CHAN_IDS='0.0.0814 0.0.0815 0.0.0816'

# CCW_CHAN_NUM set the number of channels for this device
# Always 3 for an qeth device
CCW_CHAN_NUM='3'

# CCW_CHAN_MODE sets the port name for an OSA-Express device
#CCW_CHAN_MODE='PORT456'

It is a guest LAN adapter in qdio mode. The port name is not used.




Proceed in a similar manner to create an interface configuration file.
Copy from the 614 network device:

# cp /etc/sysconfig/network/ifcfg-qeth-bus-ccw-0.0.0614 /etc/sysconfig/network/ifcfg-qeth-bus-ccw-0.0.0814

Edit the interface configuration file to provide the new network definitions for this
interface:

# vi /etc/sysconfig/network/ifcfg-qeth-bus-ccw-0.0.0814

Set IPADDR to 10.10.10.4n - n being your team number in this class.

BOOTPROTO='static'
UNIQUE=''
STARTMODE='onboot'
IPADDR='10.10.10.4n'
MTU='1500'
NETMASK='255.255.255.0'
NETWORK='10.10.10.0'









BROADCAST='10.10.10.255'


Set hardware online with:

# hwup qeth-bus-ccw-0.0.0814

Bring network adapter online with:

# ifup eth1

Check the result with the ifconfig command.

At this point, if the other node is ready, test the connection between the local and the
remote node with the ping command (IP address range: 10.10.10.4x).
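
For example, assuming the remote node's private address ends in .42 (substitute the actual
address of the other node):

# ping -c 3 10.10.10.42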

No DNS services are available. Host name resolution must be performed using the local
hosts file.

Edit /etc/hosts as root and add the following lines for the two RAC nodes:

9.100.193.2xx oran.mop.ibm.com oran
9.100.193.2xx oram.mop.ibm.com oram
9.100.192.8x oranvip.mop.ibm.com oranvip
9.100.192.8x oramvip.mop.ibm.com oramvip
10.10.10.4x oranpriv.mop.ibm.com oranpriv
10.10.10.4x orampriv.mop.ibm.com orampriv









SSH/SCP configuration

The oracle user must be able to SSH to all RAC nodes without being asked for a
passphrase.

Log in to a cluster member node as the oracle user. Verify the host name; it will be used for the SSH
configuration. From the oracle user's home directory, generate an SSH private/public key pair:

# su oracle
~> hostname
~> ssh-keygen -t rsa -b 1024

Accept the default file in which to save the key.
Do not enter a passphrase.

The private key file and public key file are created under /home/oracle/.ssh.
Repeat the above steps on each node.










Publish keys between member nodes of the RAC cluster (both local and remote):

~> cd /home/oracle/.ssh
~> ssh-copy-id -i id_rsa.pub oracle@<local host name public>
~> ssh-copy-id -i id_rsa.pub oracle@<remote host name public>

Answer Yes to continue the connection with the remote node
Enter password of remote node

The public key is copied in /home/oracle/.ssh/authorized_keys.

Repeat same process for all cluster members.

Test SSH connection to local and each remote node:

~> ssh oracle@<local host name public> (local)
~> ssh oracle@<remote host name public> (remote)

The connection should be established without prompting for a password.

And cross check using all possible combinations with the following:

~> ssh oracle@<local host name public> hostname
~> ssh oracle@<local host name private> hostname
~> ssh oracle@<remote host name public> hostname
~> ssh oracle@<remote host name private> hostname

Accept the host RSA key fingerprint the first time.
The connection should be established without prompting for a password.
Oracle environment variables

Update the .profile file of the oracle user (/home/oracle/.profile) on the node, adding the
following lines:
export ORACLE_BASE=/opt/oracle
export CRS_HOME=$ORACLE_BASE/crs
export ORACLE_HOME=$ORACLE_BASE/db
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$CRS_HOME/lib
export PATH=$ORACLE_HOME/bin:$CRS_HOME/bin:$PATH
umask 022
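
After editing, the variables can be loaded into the current session and spot-checked (a suggested
verification, not in the original text):

~> . ~/.profile
~> echo $ORACLE_BASE $CRS_HOME $ORACLE_HOME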

Shared disks pool initialization

From one RAC node only, perform the shared disk activation and formatting. This group
of shared disks will be used for:
One volume dedicated to Voting and the Oracle Cluster Registry (OCR)
Six volumes reserved for the Oracle database managed by Automatic Storage
Management (ASM)
As root, from a new graphical user interface window, execute the following command:

# yast2 dasd

Highlight the lines for the disks at addresses 0.0.0340 and 0.0.0350 to 0.0.0355 and hit the space bar
each time to select them.









Click Perform Action and select Activate.
Click Perform Action and select Format.
Accept 7 parallel formats.
Verify selected disks and click Yes.

Record the allocated device names:
The Voting and OCR disk should be /dev/dasdg
The ASM disks should range from /dev/dasdh to /dev/dasdm

Voting and OCR share the same physical disk. This disk must be split into two
partitions:

# fdasd /dev/dasdg

Follow the command menu to create two equal partitions (this should be 8339 tracks,
approximately 400 MB, for the current configuration). Print the partition table, save
and exit.
The resulting devices should be /dev/dasdg1 and /dev/dasdg2.

All the other shared disks, reserved for the Oracle database, will contain only one full-disk
partition (1173 MB).
Create the partition with the following command:

# fdasd -a /dev/dasdh

And repeat this command up to /dev/dasdm.
The resulting devices should be /dev/dasdh1 up to /dev/dasdm1.
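
A short loop saves retyping the fdasd command for each device; the intermediate device names
are assumed to run in sequence from dasdh to dasdm:

# for d in dasdh dasdi dasdj dasdk dasdl dasdm; do fdasd -a /dev/$d; done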

From the other RAC node, after completion of the shared disk initialization on the first
node, activate the disks (using the yast2 dasd command). Verify
the attached partitions by browsing /proc/partitions:

# cat /proc/partitions








Bind Raw Devices to shared Disk Devices

After creating the partitions on the shared disk devices, bind them to raw devices on
every RAC node.
First, determine which raw devices are already bound:

# raw -qa

To bind disk devices to the available raw devices, edit /etc/raw file:

# vi /etc/raw
Insert a line for each partition:










raw1:dasdg1
raw2:dasdg2
raw3:dasdh1
.
raw8:dasdm1

For raw devices configured for the OCR (for instance raw1), set owner and permissions
on the device file:

# chown oracle:oinstall /dev/raw/raw1
# chmod 660 /dev/raw/raw1

For raw devices configured for the Voting (for instance raw2), set owner and
permissions on the device file:

# chown oracle:dba /dev/raw/raw2
# chmod 660 /dev/raw/raw2

For raw devices configured for Oracle database managed by ASM (for instance raw3 up
to raw8), set owner and permissions on the device file:

# chown oracle:dba /dev/raw/raw3
# chmod 660 /dev/raw/raw3
# ..
# chown oracle:dba /dev/raw/raw8
# chmod 660 /dev/raw/raw8
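
The per-device commands elided above can also be written as a loop over raw3 to raw8
(a sketch equivalent to the individual commands):

# for r in raw3 raw4 raw5 raw6 raw7 raw8; do chown oracle:dba /dev/raw/$r; chmod 660 /dev/raw/$r; done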

To make the owner and permissions permanent across reboots, a change is required in the udev
configuration (udev being the utility that runs at boot to recreate all device nodes and
permissions). Apply the changes for the raw devices in /etc/udev/udev.permissions. Make a
backup copy of the file first:

# cp -p /etc/udev/udev.permissions /etc/udev/udev.permissionsORIG
# vi /etc/udev/udev.permissions

Search for the line starting with:

raw/raw*:root:disk:660

Replace it with:

raw/raw1:oracle:oinstall:660



Insert the following lines:

raw/raw2:oracle:oinstall:660
raw/raw3:oracle:dba:660
..
raw/raw8:oracle:dba:660


Bind the partition to the raw devices with the following command:

# rcraw start










Check with:

# rcraw status

To ensure that the raw devices are bound when the system restarts, enter the following
command:

# insserv raw

Check with:

# chkconfig raw

Repeat steps above on the other RAC node.




























4. Installing Oracle RAC 10g cluster


4.1. Step by step installation

The product media have been made available on an NFS server under the /home/E2 directory.
This directory must be mounted on the first RAC node at /mnt.
As user root, proceed as follows:

# mount -t nfs 9.100.192.155:/home/E2 /mnt -o ro,intr

Oracle CRS installation

In this standard installation, the first part consists of installing the Cluster Ready
Services (CRS) component on each RAC node.

1. Log on as oracle
2. From the CRS installation directory, enter:

# ./runInstaller

3. On Welcome panel, click Next



4. Enter the full path of the inventory directory: /opt/oracle/oraInventory, and click
Next












5. Open a new terminal and switch to user root in graphic mode with sux -



6. Change to oracle Inventory directory: cd /opt/oracle/oraInventory












7. Execute:

# ./orainstRoot.sh

8. Go back to the current installation screen and click Continue
9. Enter the home name and the full path of the installation directory: CRS_HOME
and /opt/oracle/crs, click Next



10. Select English language and click Next












11. Enter the cluster configuration as mentioned in the table below and click Next

Cluster name: crsxy (with x and y the node numbers)

Public node name    Private node name
orax                oraxpriv
oray                oraypriv

The picture below illustrates a cluster configuration with ora1 and ora2 RAC node
names.



12. Specify proper Interface type and click Next.
13. Specify the OCR location: /dev/raw/raw1 and click Next
14. Enter the voting disk file name: /dev/raw/raw2 and click Next
15. Open a new terminal and switch to user root in graphic mode with sux -









16. Change directory:

# cd /opt/oracle/oraInventory

17. Execute:

# ./orainstRoot.sh

18. Repeat steps 15 to 17 on the other RAC node
19. Go back to the previous screen and click Continue
20. Read the summary and click Install

















21. When the message about Setup Privileges appears, as root, open a new terminal and
enter:

# ./root.sh














22. Click Exit to complete the CRS installation step.
























Oracle RAC software installation

The second part consists of installing the database software on each RAC node in
sequence (not in parallel).

1. Log on as oracle
2. From the db/Disk1 installation directory, enter:

# ./runInstaller

3. On Welcome panel, click Next



4. For file locations, keep default for source path and destination path. Change
destination name, enter DB_HOME. Click Next
















5. On next panel, select Cluster Installation Mode, verify node names in list and click
Select All. Click Next























6. For the Installation type, select Enterprise Edition and click Next












7. For database configuration, select Do not create a starter database. Click Next





8. In Summary panel, verify that cluster nodes are present, then click Install.












9. The installation starts on the local node up to the setup successful step, then switches
to the installation on the remote node. Wait until the Setup Privileges panel pops
up. Follow the instructions on the panel: run /opt/oracle/db/root.sh as root on each
node (remember to use sux to switch to root so that the graphic
environment settings are passed along).














10. On Welcome panel of the VIP configuration assistant, click Next














11. On next panel, select the network interface connected to the public network and
click Next



12. Enter IP alias names, oraxvip and orayvip, verify subnet mask, 255.255.252.0 and
click Next












13. Read Summary, verify VIP configuration and click Finish.
14. Click OK and Exit.
15. When the End of installation message appears, click Exit.
16. Check the network configuration and the installed RAC components on each RAC
node:

# ifconfig -a
# crs_stat -t



Oracle Net Services Configuration

The third part consists of configuring the Oracle listener.


1. Log on as oracle









2. Execute the following command to start the Oracle Net Services Configuration
assistant:

# netca

3. On first panel, select Cluster configuration and click Next



4. Select all nodes and click Next
5. On Welcome panel, select Listener configuration and click Next



6. Select Add and click Next
7. Keep LISTENER as Listener name and click Next
8. Select IPC protocol and click Next
9. Select Use the standard port number of 1521 and click Next
10. Enter EXTPROC for IPC Key value and click Next
11. Select No as answer to the question and click Next
12. When the Listener configuration complete message appears click Next












17. Check the new cluster configuration on each RAC node:

# crs_stat -t


















































Oracle DB creation

The last part consists of creating the database itself.

1. As user oracle, start the Database Configuration Assistant with the following command:

# ./dbca

2. On Welcome panel, select Oracle Real Application Clusters database












3. On step 1 of 17, select Create a Database and click Next










4. On step 2 of 17, select All and click Next












5. On step 3 of 17, select General Purpose and click Next




























6. On step 4 of 17, enter DBGPx as Global Database Name and click Next



7. On step 5 of 16, keep default settings and click Next



























8. On step 6 of 16, select Use the same password for all accounts, enter password
oracle and click Next



9. On step 7 of 16, select ASM option and click Next



























10. On step 8 of 16, enter password oracle and click Next



11. When pop up about ASM instance creation appears, click OK













12. On step 8 of 15, click Create New, enter DATA as Disk Group Name, select
External and check raw3, raw4, raw5, raw6 and click OK















13. Select the newly created disk group and click Next



























14. On step 9 of 15, select Use Oracle-Managed Files, ensure the ASM disk group is
correct and click Next



15. On step 10 of 15, click Browse and select the ASM disk group. Enter 1024 for
Flash Recovery Area Size, check Enable Archiving and click Next


























16. On step 11 of 15, keep default settings and click Next




17. On step 12 of 15, highlight the existing Database Services entry, click Add and
enter oltp as Service name. Select Preferred for Instance details. Repeat steps to
create a second service name called batch with Preferred and Available for Instance
details. Click Next















18. On step 13 of 15, keep default for Memory, Sizing, Character Sets and Connection
Mode settings and click Next



19. On step 14 of 15, read and browse database storage information and click Next



























20. On step 15 of 15, keep default settings and click Finish.



21. On the last panel, click Exit. The screens below relate to the last steps of the database
configuration.















5. Appendixes



5.1. Technical references


Puschitz Web Site:
http://www.puschitz.com/InstallingOracle10gRAC.shtml

Oracle Technology Network Web site:
http://otn.oracle.com

IBM System z9 & zSeries Web Site:
http://www-03.ibm.com/servers/eserver/zseries/

IBM developerWorks - Tuning Oracle Database Server 10g for Linux on System z9 &
zSeries:
http://www.ibm.com/developerworks/linux/linux390/perf/tuning_rec_database_OracleRec.html#begin
