Reference
Oracle® Clusterware Installation Guide 11g Release 1 (11.1) for Linux
Root.sh Unable To Start CRS On Second Node [ID 369699.1]
RAC and Oracle Clusterware Best Practices and Starter Kit (Linux) [ID 811306.1]
Procedure
Creating Users and Groups
From the procedure in Creating Custom Configuration Groups and Users for Job Roles, the following groups and users are created:
groupadd -g 501 oinstall
groupadd -g 502 crs
groupadd -g 503 asm
groupadd -g 504 asmdba
groupadd -g 505 dba
useradd -u 501 -g oinstall -G crs crs
useradd -u 502 -g oinstall -G dba,asmdba oracle
useradd -u 503 -g oinstall -G asm,asmdba asm
Creating the Oracle Clusterware User and OraInventory Path
I used the GUI LVM tool to initialise /dev/sdb as a single partition in a new volume group called VolGroup01.
Following the example given in Example of Creating the Oracle Clusterware User and OraInventory Path, I entered the following commands:
mkdir -p /u01/app/crs
chown -R crs:oinstall /u01/app
mkdir /u01/app/oracle
chown oracle:oinstall /u01/app/oracle
chmod 775 /u01/app/
mkdir /u01/app/asm
chown asm:oinstall /u01/app/asm
Checking the Hardware Requirements
Followed the procedure in Checking the Hardware Requirements.
This will be a 32-bit installation.
I have created a VM image with 1GB RAM. The default installation created a swap file of 2GB.
The results of the checks are below:
[root@penrith1 ~]# grep MemTotal /proc/meminfo
MemTotal:      1035140 kB
[root@penrith1 ~]# grep SwapTotal /proc/meminfo
SwapTotal:     2097144 kB
[root@penrith1 ~]# df -k /tmp
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       5967432   2185548   3473868  39% /
[root@penrith1 ~]# free
             total       used       free     shared    buffers     cached
Mem:       1035140     529880     505260          0      41868     331244
-/+ buffers/cache:     156768     878372
Swap:      2097144          0    2097144
[root@penrith1 ~]# uname -m
i686
Checking the Network Requirements
Followed the procedure in Checking the Network Requirements.
Network Hardware Requirements
The allocation of NICs is as follows:
Interface Name | Usage | Subnet |
---|---|---|
eth0 | Public | 10.1.1.0/24 |
eth1 | Private Interconnect | 192.168.1.0/24 |
eth2 | NAS | 192.168.2.0/24 |
IP Address Requirements
The allocation of IP addresses is as follows:
Node | Host Name | Type | IP Address | Registered In |
---|---|---|---|---|
PENRITH1 | penrith1.yaocm.id.au | Public | 10.1.1.240 | DNS |
 | penrith1-vip.yaocm.id.au | Virtual | 10.1.1.241 | DNS |
 | penrith1-priv.yaocm.id.au | Private | 192.168.1.1 | Hosts file |
 | penrith1-nas.yaocm.id.au | NAS | 192.168.2.2 | Hosts file |
PENRITH2 | penrith2.yaocm.id.au | Public | 10.1.1.242 | DNS |
 | penrith2-vip.yaocm.id.au | Virtual | 10.1.1.243 | DNS |
 | penrith2-priv.yaocm.id.au | Private | 192.168.1.2 | Hosts file |
 | penrith2-nas.yaocm.id.au | NAS | 192.168.2.3 | Hosts file |
The /etc/hosts file is uploaded as penrith1_hosts.txt.
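For reference, the Hosts file entries implied by the table above would look roughly like the following sketch. The uploaded penrith1_hosts.txt is authoritative; the short aliases are my assumption:

```
# Private interconnect (not registered in DNS)
192.168.1.1   penrith1-priv.yaocm.id.au   penrith1-priv
192.168.1.2   penrith2-priv.yaocm.id.au   penrith2-priv

# NAS network (not registered in DNS)
192.168.2.2   penrith1-nas.yaocm.id.au    penrith1-nas
192.168.2.3   penrith2-nas.yaocm.id.au    penrith2-nas
```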
Node Time Requirements
The synchronisation of clocks between
PENRITH1
and
PENRITH2
is achieved by using the VMware Tools Toolbox function of synchronising the clock in the VM image with the clock on the host, and by running both VM images on the same host.
Network Configuration Options
The comment at Network Configuration Options is:
If certified Network-attached Storage (NAS) is used for Oracle RAC and this storage is connected through Ethernet-based networks, then you must have a third network interface for I/O. Failing to provide three separate interfaces in this case can cause performance and stability problems under load.
This is already catered for in the design described above.
Configuring the Network Requirements
Need to be mindful of the following requirement (see Configuring the Network Requirements):
To prevent public network failures with Oracle RAC databases using NAS devices or NFS mounts, enter the following command as root to enable the Name Service Cache Daemon (nscd):
# /sbin/service nscd start
The command sequence is:
[root@penrith1 ~]# service nscd status
nscd is stopped
[root@penrith1 ~]# service nscd start
Starting nscd: [  OK  ]
[root@penrith1 ~]# chkconfig --list nscd
nscd            0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@penrith1 ~]# chkconfig --level 35 nscd on
[root@penrith1 ~]# chkconfig --list nscd
nscd            0:off   1:off   2:off   3:on    4:off   5:on    6:off
These commands start the nscd service immediately and ensure that it is started in multiuser mode (run levels 3 and 5).
Identifying Software Requirements
Followed the procedure in Identifying Software Requirements.
I am assuming that the Oracle Validated RPM has done its job and installed the correct packages.
Configuring Kernel Parameters
Followed the procedure in Configuring Kernel Parameters.
The following kernel parameters are set:
[root@penrith1 ~]# cat /proc/sys/kernel/sem
250     32000   32      128
[root@penrith1 ~]# cat /proc/sys/kernel/shmmax
4294967295
[root@penrith1 ~]# cat /proc/sys/kernel/shmmni
4096
[root@penrith1 ~]# cat /proc/sys/kernel/shmall
268435456
[root@penrith1 ~]# cat /proc/sys/fs/file-max
102263
[root@penrith1 ~]# cat /proc/sys/net/ipv4/ip_local_port_range
32768   61000
[root@penrith1 ~]# cat /proc/sys/net/core/rmem_default
110592
[root@penrith1 ~]# cat /proc/sys/net/core/rmem_max
131071
[root@penrith1 ~]# cat /proc/sys/net/core/wmem_default
110592
[root@penrith1 ~]# cat /proc/sys/net/core/wmem_max
131071
The modified /etc/sysctl.conf has been uploaded into this Wiki page.
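The values shown above correspond to /etc/sysctl.conf entries along these lines. This is a sketch reconstructed from the output above; the uploaded file is authoritative:

```
kernel.sem = 250 32000 32 128
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.shmall = 268435456
fs.file-max = 102263
net.ipv4.ip_local_port_range = 32768 61000
net.core.rmem_default = 110592
net.core.rmem_max = 131071
net.core.wmem_default = 110592
net.core.wmem_max = 131071
```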
Installing the cvuqdisk Package for Linux
Installed cvuqdisk from the extracted clusterware software.
[root@penrith1 ~]# cd /mnt/hgfs/OCM/clusterware/rpm
[root@penrith1 rpm]# ls
cvuqdisk-1.0.1-1.rpm
[root@penrith1 rpm]# rpm -qi cvuqdisk
package cvuqdisk is not installed
[root@penrith1 rpm]# export CVUQDISK_GRP=oinstall
[root@penrith1 rpm]# rpm -iv cvuqdisk-1.0.1-1.rpm
Preparing packages for installation...
cvuqdisk-1.0.1-1
[root@penrith1 rpm]# rpm -qi cvuqdisk
Name        : cvuqdisk                Relocations: (not relocatable)
Version     : 1.0.1                   Vendor: Oracle Corp.
Release     : 1                       Build Date: Fri 03 Jun 2005 08:21:38 AM EST
Install Date: Sat 24 Dec 2011 07:05:59 PM EST    Build Host: stacs27.us.oracle.com
Group       : none                    Source RPM: cvuqdisk-1.0.1-1.src.rpm
Size        : 4168                    License: Oracle Corp.
Signature   : (none)
Summary     : RPM file for cvuqdisk
Description :
This package contains the cvuqdisk program required by CVU.
cvuqdisk is a binary that assists CVU in finding scsi disks.

To install this package, you must first become 'root' and then set
the environment variable 'CVUQDISK_GRP' to the group that will own
cvuqdisk. If the CVUQDISK_GRP is not set, by default "oinstall" will
be the owner group of cvuqdisk.
Configuring SSH on All Cluster Nodes
Followed the procedure in Configuring SSH on All Cluster Nodes.
[root@penrith1 rpm]# su - crs
[crs@penrith1 ~]$ ls -ld .ssh
ls: .ssh: No such file or directory
[crs@penrith1 ~]$ mkdir .ssh
[crs@penrith1 ~]$ ls -ld .ssh
drwxr-xr-x 2 crs oinstall 4096 Dec 24 20:32 .ssh
[crs@penrith1 ~]$ chmod 700 .ssh
[crs@penrith1 ~]$ ls -ld .ssh
drwx------ 2 crs oinstall 4096 Dec 24 20:32 .ssh
[crs@penrith1 ~]$ id
uid=501(crs) gid=501(oinstall) groups=501(oinstall),502(crs) context=user_u:system_r:unconfined_t
[crs@penrith1 ~]$ id crs
uid=501(crs) gid=501(oinstall) groups=502(crs),501(oinstall) context=user_u:system_r:unconfined_t
[crs@penrith1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/crs/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/crs/.ssh/id_rsa.
Your public key has been saved in /home/crs/.ssh/id_rsa.pub.
The key fingerprint is:
ba:ec:41:6f:30:15:fc:6c:50:b6:45:92:1d:b9:dd:2b crs@penrith1.yaocm.id.au
I created keys for the following users on both PENRITH1 and PENRITH2:
- oracle
- crs
- asm
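After generating the keys, the procedure's next step is to collect every node's public key into each user's authorized_keys file. The following is a minimal sketch of that merge step using fabricated placeholder keys in a scratch directory; on the real cluster the id_rsa.pub files would first be copied between nodes (for example with scp) before being concatenated:

```shell
# Scratch directory standing in for ~crs/.ssh
mkdir -p /tmp/ssh_merge_demo
cd /tmp/ssh_merge_demo

# Placeholder public keys standing in for the real id_rsa.pub files
# gathered from each node (the key material here is fabricated).
echo "ssh-rsa AAAAB3...key1 crs@penrith1.yaocm.id.au" > penrith1.pub
echo "ssh-rsa AAAAB3...key2 crs@penrith2.yaocm.id.au" > penrith2.pub

# Concatenate all public keys into authorized_keys and lock down
# permissions, as the Configuring SSH procedure requires.
cat penrith1.pub penrith2.pub > authorized_keys
chmod 600 authorized_keys

# Should report one line per key, i.e. 2
wc -l < authorized_keys
```

The same authorized_keys file is then copied to the user's .ssh directory on every node, so that each user can ssh to all nodes (including itself) without a password prompt.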
Configuring Software Owner User Environments
Followed the procedure in Configuring Software Owner User Environments.
Requirements for Creating an Oracle Clusterware Home Directory
Followed the procedure in Requirements for Creating an Oracle Clusterware Home Directory.
Understanding and Using Cluster Verification Utility
Followed the procedure in Understanding and Using Cluster Verification Utility.
Checking Oracle Clusterware Installation Readiness with CVU
Followed the procedure in Checking Oracle Clusterware Installation Readiness with CVU.
Checking for Successful Hardware and OS Installation
./runcluvfy.sh stage -post hwos -n penrith1,penrith2 -verbose >/tmp/cluvfy_post_hwos.lst
cluvfy_post_hwos.lst has been uploaded.
Checking Prerequisites for Clusterware Installation
su - crs
cd /mnt/hgfs/OCM/clusterware
./runcluvfy.sh stage -pre crsinst -n penrith1,penrith2 -c /u02 -q /u04 -verbose >/tmp/cluvfy_pre_crsinst.lst
cluvfy_pre_crsinst.lst has been uploaded. An earlier version had a list of missing packages.
The issue with memory is strange. I was able to ignore it during the installation of clusterware.
Install Missing Packages
Mounted the RHEL 5.4 DVD at /media.
mount /dev/cdrom /media
cd /media/Server
yum localinstall compat-libstdc++-33-3.2.3-61.i386.rpm
yum localinstall elfutils-libelf-devel-0.137-3.el5.i386.rpm
yum localinstall libstdc++-devel-4.1.2-46.el5.i386.rpm
yum localinstall sysstat-7.0.2-3.el5.i386.rpm
yum localinstall unixODBC-2.2.11-7.1.i386.rpm
yum localinstall unixODBC-devel-2.2.11-7.1.i386.rpm
yum localinstall gcc-4.1.2-46.el5.i386.rpm
yum localinstall libaio-devel-0.3.106-3.2.i386.rpm
Set Security Level
Ensure that the firewall is disabled and SELinux is set to permissive mode.
Following the advice in RAC instabilities due to firewall (netfilter/iptables) enabled on the cluster interconnect [ID 554781.1], I have disabled the firewall completely by stopping the iptables service. This is because I encountered the problem described in Root.sh Unable To Start CRS On Second Node [ID 369699.1].
It was only later that I found the following advice in RAC and Oracle Clusterware Best Practices and Starter Kit (Linux) [ID 811306.1]:
Prevent root.sh failures by ensuring that the Linux Firewall (iptables) has been disabled. See Document 554781.1 for details.
The advice in RAC and Oracle Clusterware Best Practices and Starter Kit (Linux) [ID 811306.1] is that:
For pre-11.2.0.2 installations, SELinux must be disabled. For 11.2.0.2, SELinux is supported but the recommendation (if possible) is to run with SELinux disabled. See Bug 9746474.
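On RHEL 5, the settings above can be applied with commands along the following lines. This is a sketch using the standard RHEL 5 service names and paths, not the exact commands I ran:

```
# Stop the firewall now and prevent it starting at boot
service iptables stop
chkconfig iptables off

# Put SELinux into permissive mode for the running system
setenforce 0

# Make the SELinux change persist across reboots by setting
# SELINUX=permissive in /etc/selinux/config
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
```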