2016-04-23 Install GI 12.1.0.2


Overview

Following the successful creation of two (2) VM images under XEN with OEL7, using the revised network design, I now proceed with the installation of Grid Infrastructure 12.1.0.2 on that cluster.

References

Preparation for Installation

Pre-installation RPM Installed

Following the procedure in 3.2 Installing the Oracle Preinstallation RPM From Unbreakable Linux Network, I ran the following command on both REDFERN1 and REDFERN2 (assuming that what is valid for OEL6 is also valid for OEL7):

yum install oracle-rdbms-server-12cR1-preinstall

There was a large amount of output which I did not capture, but the installation appeared to be successful.

Changed ORACLE Password

Changed the password for the oracle user on both REDFERN1 and REDFERN2.

Permanently Change Host Name

Unfortunately, the hostname command does not permanently change the hostname. I had to do the following (as root) on REDFERN1, and the equivalent on REDFERN2:

cat >>/etc/sysctl.conf <<DONE
kernel.hostname = redfern1.yaocm.id.au
DONE
reboot
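
On OEL7, which is systemd-based, hostnamectl is an alternative way to persist the hostname; it writes /etc/hostname and takes effect without a reboot. I did not use it here, but a sketch (run as root on REDFERN1) would be:

```shell
# Persist the fully qualified hostname via systemd (no reboot required):
hostnamectl set-hostname redfern1.yaocm.id.au

# The static name is stored in /etc/hostname:
cat /etc/hostname    # should now read redfern1.yaocm.id.au
```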

Create Oracle GI Software Home

Create the Oracle Grid Infrastructure Software Home as follows on both REDFERN1 and REDFERN2:

sudo mkdir /opt/app/grid_infra
sudo chown oracle:oinstall /opt/app/grid_infra

Installed OEM Agent

From CRONULLA, the OEM agent was pushed out to both REDFERN1 and REDFERN2. I ignored the warning about RHEL 7 not being supported.

Change Ownership for Shared Disks

Ran the following command to change the ownership of the shared disks so that they can be discovered by ASM:

sudo chown oracle:dba /dev/xvd[d-h]

The ownership of these shared disks was verified as follows:

ls -l /dev/xvd[d-h]

And the output is:

brw-rw----. 1 oracle dba 202,  48 Jan  1 20:48 /dev/xvdd
brw-rw----. 1 oracle dba 202,  64 Jan  1 20:48 /dev/xvde
brw-rw----. 1 oracle dba 202,  80 Jan  1 20:48 /dev/xvdf
brw-rw----. 1 oracle dba 202,  96 Jan  1 20:48 /dev/xvdg
brw-rw----. 1 oracle dba 202, 112 Jan  1 20:48 /dev/xvdh

Locate GI Software

Instead of unpacking the GI software onto the local host, I now use the NFS directory that was set up in Use NFS for Oracle Software.

Start Installation

From PENRITH, I ran the following commands in an XTerm session:

xhost + 192.168.1.140
ssh -Y oracle@192.168.1.140

Once on REDFERN1, I ran the following commands:

cd /opt/share/Software/grid/linuxamd64_12102/grid
./runInstaller

Installation Attempt #1

Step 1: Installation Option

Left the default option as Install and Configure Oracle Grid Infrastructure:

Clicked Next.

Step 2: Cluster Type

Left the default option as Configure a Standard Cluster:

Clicked Next.

Step 3: Installation Type

Changed the option to Advanced Installation:

Clicked Next.

Step 4: Select Product Language

Left the default language as English:

Clicked Next.

Step 5: Grid Plug and Play Information

Made the following changes:

Scan Name: redfern-crs.yaocm.id.au
Configure GNS: No

Clicked Next.

Step 6: Cluster Node Information

Got the following screen:

Clicked Add....

Added the following details for REDFERN2:

Clicked OK to get the following screen:

Clicked SSH connectivity..., and entered the password:

Clicked Setup. After a few minutes, the following message appears:

Clicked OK, then clicked Next.

Step 7: Network Interface Usage

Got the following screen; no changes were made:

Clicked Next.

Step 8: Storage Option Information

No changes were made on the following screen:

Clicked Next.

Got a warning message, which I expanded to see the following details:

Clicked OK.

Clicked Back to return to Step 7, and made the following change to use eth1 as private only:

Clicked Next here, and again on the following screen.

Step 9: Create ASM Disk Group

On the following screen,

  • Changed Redundancy to External.

Clicked Change Discovery Path... and set the path to /dev/xvd* as follows:

Clicked OK.

Created the VOTE disk group on /dev/xvdh:

Clicked Next.

Got a warning message, which I expanded to see the following details:

Clicked OK.

Clicked Cancel to stop the installation.

Correct Size of VOTE Disk

Shut Down REDFERN Cluster

I shut down the REDFERN cluster.

Increased Size of VOTE Disk Group

On VICTORIA, I increased the size of the shared VOTE_01 disk as follows:

cd /OVS/running_pool/REDFERN/shared
dd if=/dev/zero of=VOTE_01 bs=1G count=6
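
For reference, dd with bs=1G and count=6 writes exactly 6 GiB of zeroes, so the resulting image size can be sanity-checked against the expected byte count (a sketch, run in the same directory):

```shell
# Expected size of VOTE_01 after the dd above: 6 blocks of 1 GiB each
expected=$((6 * 1024 * 1024 * 1024))
echo "expected bytes: $expected"    # 6442450944

# Compare against the actual image file:
#   stat -c %s VOTE_01    # should print the same number
```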

Start Up REDFERN Cluster

The REDFERN cluster was then started.

Change Disk Ownership

Once the cluster was running, I had to change the ownership of the shared disks again:

sudo chown oracle:dba /dev/xvd[d-h]
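
Having to redo the chown after every restart suggests making the ownership persistent. A udev rule would do this; the following is a sketch only (the rule file name is my assumption), run as root on both nodes:

```shell
# Persist ownership of the shared ASM candidate disks across reboots:
tee /etc/udev/rules.d/99-oracle-asmdevices.rules <<'DONE'
KERNEL=="xvd[d-h]", OWNER="oracle", GROUP="dba", MODE="0660"
DONE

# Reload the rules and re-apply them to existing devices:
udevadm control --reload-rules
udevadm trigger
```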

Installation Attempt #2

The installation process was started again.

Step 9: Create ASM Disk Group with Larger Disk

Now the screen is:

Clicked Next.

Step 10: ASM Password

Filled in the passwords on the following screen:

Clicked Next.

Step 11: Failure Isolation

Only one (1) option was available on the following screen:

Clicked Next.

Step 12: Management Options

Filled in the OEM details as follows:

Clicked Next.

Step 13: Operating System Groups

Filled in the OS group details as follows:

Clicked Next.

Got the following message after expanding the details:

Clicked Yes.

Step 14: Installation Location

Set up the software location as follows:

Clicked Next.

Step 15: Root script execution

Since this is a home system, I can use root's password as follows:

Clicked Next.

Step 16: Prerequisite Checks

The following progress screen appears:

After a while, the following results appeared:

There is only one (1) failed check. The details for the resolv.conf integrity check are:

This can be ignored as the domain (yaocm.id.au) is only defined on my home network.

Clicked Close, then Fix & Check Again. The following screen appears:

Clicked OK to let OUI run the fix-up script using the credentials provided in Step 15. The following screen shows that the fix-up was successful:

However, the following problems remained:

Chose to ignore all of these problems, and clicked Next. The following warning appears:

Clicked Yes.

Step 17: Summary

The following summary screen appears:

Clicked Install.

Step 18: Install Product

The following progress screen appears:

After a while, the following message appears:

Clicked Yes.

However, this failed with the following error message:

According to the log at ~/oraInventory/logs/installActions2016-01-02_12-50-14PM.log:

2016/01/02 21:01:27 CLSRSC-12: The ASM resource ora.asm did not start
2016/01/02 21:01:27 CLSRSC-258: Failed to configure and start ASM
Died at /opt/app/grid_infra/12.1.0/grid/crs/install/crsinstall.pm line 2017.
The command '/opt/app/grid_infra/12.1.0/grid/perl/bin/perl -I/opt/app/grid_infra/12.1.0/grid/perl/lib -I/opt/app/grid_infra/12.1.0/grid/crs/install /opt/app/grid_infra/12.1.0/grid/crs/install/rootcrs.pl  -auto -lang=en_AU.UTF-8' execution failed

And in /opt/app/grid_infra/12.1.0/grid/cfgtoollogs/crsconfig/rootcrs_redfern1_2016-01-02_08-53-53PM.log:

2016-01-02 21:01:21: Executing cmd: /opt/app/grid_infra/12.1.0/grid/bin/crsctl status resource ora.asm -init
2016-01-02 21:01:22: Checking the status of ora.asm
2016-01-02 21:01:27: Executing cmd: /opt/app/grid_infra/12.1.0/grid/bin/clsecho -p has -f clsrsc -m 12
2016-01-02 21:01:27: Command output:
>  CLSRSC-12: The ASM resource ora.asm did not start 
>End Command output
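
As a first diagnostic step, the CLSRSC error codes can be pulled out of the rootcrs log to see the failure chain at a glance (a sketch; the log path is taken from the output above):

```shell
# List the distinct CLSRSC errors raised during root script execution:
grep -o 'CLSRSC-[0-9]*' \
    /opt/app/grid_infra/12.1.0/grid/cfgtoollogs/crsconfig/rootcrs_redfern1_2016-01-02_08-53-53PM.log \
    | sort -u
```

On this run, that filter would report CLSRSC-12 and CLSRSC-258, matching the excerpts above.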