

                                             10g RAC on Linux x86_64
                                                        2006 – Q3

                                           Confidential - For Internal Use Only

9d4e8c70-6232-441b-9dd3-96274423af87.doc                                          Page 1 of 23
                                           Table of Contents

1. PRE-INSTALLATION
2. ORACLE CLUSTERWARE (FORMERLY CRS) AND ASM
     INSTALL ORACLE CLUSTERWARE
     POST-INSTALLATION ADMINISTRATION INFO
3. ORACLE DATABASE 10G WITH RAC – SOFTWARE (BINARIES)
     PRE-INSTALL NOTES
     INSTALL
4. PATCH ORACLE DATABASE SOFTWARE
     DOWNLOAD AND INSTALL PATCHES
     FIX AN INSTALL BUG (5117016)
     FIX A PERMISSION BUG (PATCH 5087548)
5. RAC DATABASE USING THE DBCA WITH ASM
     PRE-INSTALL
     INSTALL
6. POST-INSTALLATION TASKS
7. ORACLE FILES
     LOCAL FILES
     SHARED ORACLE DATABASE FILES
     SHARED DATABASE FILES FOR THE APPLICATION

1. Pre-Installation
Source: Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide
10g Release 2 (10.2) for Linux

     1. Required Software
             Red Hat Enterprise Linux 3 AS Update 3 (kernel 2.4.21-20) x86_64
                   o uname -r
                   o cat /etc/redhat-release
                   o cat /etc/issue
             Oracle Database Enterprise Edition 10.2 x86_64
             Oracle Clusterware 10.2 x86_64
             Oracle Patch (p4547817_10202_Linux-x86-64)
             Oracle Permissions Patch (p5087548_10202_Linux-x86-64)
             Oracle Critical Patch Update Jul 2006 (p5225799_10202_Linux-x86-64)
             Oracle ASMLib 2.0
             Oracle Cluster Verification Utility 1.0
             Oracle Client 10.2.x
     2. Minimum hardware requirements for each RAC node
             1 GB of physical RAM
                     o cat /proc/meminfo | grep MemTotal
             1.5 GB of swap space (or the same size as RAM)
                     o cat /proc/meminfo | grep SwapTotal
             400 MB of disk space in the /tmp directory
                     o df -h /tmp
             Up to 4 GB of disk space for the Oracle software
             Optional: 1.2 GB of disk space for a preconfigured database that uses file
               system storage
             Shared Database Disk: 2TB usable, 33G LUNs, RAID 1+0
                     o /sbin/fdisk -l
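The memory and swap checks above can be scripted. A minimal sketch (the `mem_kb` helper and the 1 GB / 1.5 GB thresholds mirror the minimums listed above; this checker is not part of the Oracle procedure):

```shell
#!/bin/sh
# mem_kb: extract a field (in kB) from /proc/meminfo-formatted text on stdin
mem_kb() { awk -v k="$1" '$1 == k":" { print $2 }'; }

# Compare MemTotal and SwapTotal against the minimums listed above.
if [ -r /proc/meminfo ]; then
    mem=$(mem_kb MemTotal  < /proc/meminfo)
    swap=$(mem_kb SwapTotal < /proc/meminfo)
    [ "$mem"  -ge 1048576 ] || echo "WARN: less than 1 GB RAM (${mem} kB)"
    [ "$swap" -ge 1572864 ] || echo "WARN: less than 1.5 GB swap (${swap} kB)"
fi
```

Run it on each node; no output means both minimums are met.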
     3. Networking hardware requirements
             Each node must have at least two network adapters; one for the public
                network interface and one for the private network interface (the RAC
                interconnect).
             The interface names associated with the network adapters for each
               network must be the same on all nodes
             For increased reliability, you can configure redundant public and private
               network adapters for each node.
             For the public network, each network adapter must support TCP/IP.
             For the private network, the interconnect must support the user datagram
               protocol (UDP) using high-speed network adapters and switches that
               support TCP/IP (Gigabit Ethernet or better recommended).
             UDP is the default interconnect protocol for RAC and TCP is the
               interconnect protocol for Oracle CRS.
     4. IP Address requirements for each RAC node

                   An IP address and an associated host name registered in the domain name
                    service (DNS) for each public network interface. If you do not have an
                    available DNS, then record the network name and IP address in the system
                    hosts file, /etc/hosts.
                   One unused virtual IP address and an associated virtual host name
                    registered in DNS that you will configure for the primary public network
                    interface. The virtual IP address must be in the same subnet as the
                    associated public interface. After installation, you can configure clients to
                    use the virtual host name or IP address. If a node fails, its virtual IP
                    address fails over to another node.
                    A private IP address and optional host name for each private interface.
                     Oracle recommends that you use non-routable IP addresses for the private
                     interfaces, for example: 10.*.*.* or 192.168.*.*. You can use the /etc/hosts
                     file on each node to associate private host names with private IP
                     addresses.
                           o cat /etc/hosts
                           o /sbin/ifconfig -a


Node     Interface Name     Type       IP Address     Registered In
rac1     rac1               Public                    DNS (if available, else the hosts file)
rac1     rac1-vip           Virtual                   DNS (if available, else the hosts file)
rac1     rac1-priv          Private                   Hosts file
rac2     rac2               Public                    DNS (if available, else the hosts file)
rac2     rac2-vip           Virtual                   DNS (if available, else the hosts file)
rac2     rac2-priv          Private                   Hosts file
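A matching /etc/hosts layout might look like the following; the addresses shown are placeholders (documentation/example ranges), not values from this install:

```
# Public interfaces (registered in DNS when available)
192.0.2.101     rac1
192.0.2.102     rac2
# Virtual IPs - must be in the same subnet as the public interface
192.0.2.111     rac1-vip
192.0.2.112     rac2-vip
# Private interconnect (non-routable addresses)
10.0.0.1        rac1-priv
10.0.0.2        rac2-priv
```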

    5. Linux x86 software requirements
           To see installed packages
                    o rpm -qa
                    o rpm -q kernel --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n"
                    o rpm -q <package_name>
Item                               Requirement
Operating systems x86              Red Hat Enterprise Linux AS/ES 3 (Update 4 or later)
                                   Red Hat Enterprise Linux AS/ES 4 (Update 1 or later)
                                   SUSE Linux Enterprise Server 9 (Service Pack 2 or later)

Kernel version x86 (64-bit)        The system must be running one of the following kernel
                                   versions (or a later version):

                                   Red Hat Enterprise Linux 3 (Update 4)
                                   Note: This is the default kernel version.

                                   Red Hat Enterprise Linux 4 (Update 1)

                                   SUSE Linux Enterprise Server 9 (Service Pack 2)

Red Hat Enterprise Linux           The following packages (or later versions) must be installed:
3 (Update 4) packages
                                   compat-db 4.0.14-5.1
                                   glibc-devel-2.3.2-95.20 (32-bit)
                                   glibc-devel-2.3.4-2.13.i386 (32-bit)
                                   gnome-libs- (32-bit)

                                   Note: XDK is not supported with gcc on Red Hat Enterprise
                                   Linux 3.

Red Hat Enterprise Linux The following packages (or later versions) must be installed:
4 (Update 1) packages

                                   Note: XDK is not supported with gcc on Red Hat Enterprise
                                   Linux 4.
SUSE Linux Enterprise              The following packages (or later versions) must be installed:
Server 9 Packages
PL/SQL native                      Intel C++ Compiler 8.1 or later and the version of GNU C
compilation, Pro*C/C++,            and C++ compilers listed previously for the distribution are
Oracle Call Interface,             supported for use with these products.
Oracle C++ Call
Interface, Oracle XML              Note: Intel C++ Compiler v8.1 or later is supported.
Developer's Kit (XDK)              However, it is not required for installation.

                                   On Red Hat Enterprise Linux 3, Oracle C++ Call Interface
                                   (OCCI) is supported with version 2.2 of the GNU C++
                                   compiler. This is the default compiler version. OCCI is also
                                   supported with Intel Compiler v8.1 with gcc 3.2.3 standard
                                   template libraries.

                                   On Red Hat Enterprise Linux 4.0, OCCI does not support
                                   GCC 3.4.3. To use OCCI on Red Hat Enterprise Linux 4.0,
                                   you need to install GCC 3.2.3.

                                   Oracle XML Developer's Kit is not supported with GCC on
                                   Red Hat Linux 4.0. It is supported only with Intel C++
                                   Compiler (ICC).
Oracle JDBC/OCI                    You can use the following optional JDK versions with the
Drivers                            Oracle JDBC/OCI drivers; however, they are not required for
                                   the installation:

                                              Sun JDK 1.5.0 (64-bit)
                                              Sun JDK 1.5.0 (32-bit)
                                              Sun JDK 1.4.2_09 (32-bit)

Oracle Real Application            For a cluster file system, use one of the following options:
Clusters
                                   Red Hat 3: Oracle Cluster File System (OCFS)

                                              Version 1.0.13-1 or later

                                   OCFS requires the following kernel packages:


                                   In the preceding list, the variable kernel_version represents
                                   the kernel version of the operating system on which you are
                                   installing OCFS.

                                   Note: OCFS is required only if you want to use a cluster file
                                   system for database file storage. If you want to use Automatic
                                   Storage Management or raw devices for database file storage,
                                   then you do not need to install OCFS.

                                   Obtain OCFS kernel packages, installation instructions, and
                                   additional information about OCFS from the following URL:


                                   Red Hat 4: Oracle Cluster File System 2 (OCFS2)

                                              Version 1.0.1-1 or later

                                   For information about Oracle Cluster File System version 2,
                                   refer to the following Web site:


                                   For OCFS2 certification status, refer to the Certify page on
                                   OracleMetaLink.
                                   SUSE 9: Oracle Cluster File System 2 (OCFS2)

                                                  OCFS2 is bundled with SuSE Linux Enterprise Server
                                                   9, Service Pack 2 or higher.
                                                  If you are running SUSE9, then ensure that you are
                                                   upgraded to the latest kernel (Service Pack 2 or
                                                   higher), and ensure that you have installed the
                                                   packages ocfs2-tools and ocfs2console.

                                   For OCFS2 certification status, refer to the Certify page on
                                   OracleMetaLink.
    6. Additional RAC specific software requirements
           See ASMLib downloads at:

Real Application Clusters            ASMLIB 2.0 for Red Hat 3.0 AS

                                     Library and Tools

                                                   oracleasm-support-2.0.3-1.x86_64.rpm
                                                   oracleasmlib-2.0.2-1.x86_64.rpm

                                     Driver for kernel 2.4.21-40.EL

                                                   oracleasm-2.4.21-40.ELsmp-1.0.4-1.x86_64.rpm

    7. Create the Linux groups and users on each RAC node
                    o dba group (/usr/sbin/groupadd dba)
                    o oinstall group (/usr/sbin/groupadd oinstall)
                    o oracle user (/usr/sbin/useradd -g oinstall -G dba oracle)
                    o nobody user (verify it exists: id nobody)
            The Oracle software owner user and the Oracle Inventory, OSDBA, and
               OSOPER groups must exist and be identical on all cluster nodes. To create
               these identical users and groups, identify the user ID and group IDs
               assigned to them on the node where you created them, then create the
               users and groups with the same names and IDs on the other cluster nodes.
   8. Configure SSH on each RAC node
          Login as oracle
          Create the .ssh directory in oracle’s home directory (then chmod 700 .ssh)
          Generate an RSA key for version 2 of the SSH protocol
            /usr/bin/ssh-keygen -t rsa
          Generate a DSA key for version 2 of the SSH protocol
            /usr/bin/ssh-keygen -t dsa
           Copy the contents of the ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub files
             from every node into the ~/.ssh/authorized_keys file, then distribute the
             combined ~/.ssh/authorized_keys file to all cluster nodes
          chmod 644 ~/.ssh/authorized_keys
          Enable the Installer to use the ssh and scp commands without being
            prompted for a pass phrase
                  o exec /usr/bin/ssh-agent $SHELL
                  o /usr/bin/ssh-add
                   o At the prompts, enter the pass phrase for each key that you generated
                  o Test ssh connections and confirm authenticity message
          Also test ssh and confirm the authenticity message back to the node you
            are working on. Example: If you are on node1, ssh to node1.
          Ensure that X11 forwarding will not cause the installation to fail
                  o Edit or create the ~oracle/.ssh/config as follows
                  o Host *
                          ForwardX11 no
          If necessary, start required X emulation software on the client
          Test: /usr/X11R6/bin/xclock
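The key-distribution step above can be sketched as follows. The snippet runs in a scratch directory with stand-in key strings, so nothing here touches real keys; on an actual node you would concatenate each node's ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub instead:

```shell
#!/bin/sh
# Demonstrate assembling one authorized_keys file from per-node public keys.
# Everything happens under a temp directory; key strings are dummies.
tmp=$(mktemp -d)
mkdir -p "$tmp/.ssh" && chmod 700 "$tmp/.ssh"

# Stand-ins for the RSA and DSA public keys collected from each node.
printf 'ssh-rsa AAAAB3... oracle@rac1\n' > "$tmp/node1_rsa.pub"
printf 'ssh-dss AAAAB3... oracle@rac1\n' > "$tmp/node1_dsa.pub"
printf 'ssh-rsa AAAAB3... oracle@rac2\n' > "$tmp/node2_rsa.pub"
printf 'ssh-dss AAAAB3... oracle@rac2\n' > "$tmp/node2_dsa.pub"

# Concatenate every node's keys into one authorized_keys, then set the
# permissions the procedure above calls for (700 on .ssh, 644 on the file).
cat "$tmp"/node*_*.pub >> "$tmp/.ssh/authorized_keys"
chmod 644 "$tmp/.ssh/authorized_keys"
wc -l < "$tmp/.ssh/authorized_keys"    # one line per collected key
```

The same authorized_keys file is then copied to every node, which is what makes ssh between any pair of nodes passwordless.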
   9. Configure kernel parameters on each RAC node
           Values should be equal to or greater than those in the following table on all
             nodes (/etc/sysctl.conf)
                  o /sbin/sysctl -a | grep sem
                  o /sbin/sysctl -a | grep shm
                  o /sbin/sysctl -a | grep file-max
                  o /sbin/sysctl -a | grep ip_local_port_range
                  o /sbin/sysctl -a | grep net.core
Parameter                         Value                       File
semmsl, semmns,                   250 32000 100 128           /proc/sys/kernel/sem
semopm, semmni
shmmax                            Half the size of            /proc/sys/kernel/shmmax
                                  physical memory (in bytes)
shmmni                            4096                        /proc/sys/kernel/shmmni
shmall                            2097152                     /proc/sys/kernel/shmall
file-max                          65536                       /proc/sys/fs/file-max
ip_local_port_range               Minimum: 1024               /proc/sys/net/ipv4/ip_local_port_range
                                  Maximum: 65000
rmem_default                      262144                      /proc/sys/net/core/rmem_default
rmem_max                          262144                      /proc/sys/net/core/rmem_max
wmem_default                      262144                      /proc/sys/net/core/wmem_default
wmem_max                          262144                      /proc/sys/net/core/wmem_max

                To change values:
                       o Edit /etc/sysctl.conf with the values below
                       o Once edited, execute /sbin/sysctl -p to apply changes manually
                      kernel.shmall = 2097152
                      kernel.shmmax = 2147483648
                      kernel.shmmni = 4096
                      kernel.sem = 250 32000 100 128
                      fs.file-max = 65536
                      net.ipv4.ip_local_port_range = 1024 65000
                      net.core.rmem_default = 1048576
                      net.core.rmem_max = 1048576
                      net.core.wmem_default = 262144
                      net.core.wmem_max = 262144
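A quick way to confirm the running values meet the minimums is a small comparison helper. This checker is a sketch, not part of the Oracle procedure; the parameter names and minimums mirror the table above, and the list can be extended to the remaining scalar parameters:

```shell
#!/bin/sh
# check_min CURRENT REQUIRED NAME -- warn when a value is below the minimum.
check_min() {
    if [ "$1" -lt "$2" ]; then
        echo "WARN: $3 is $1, required minimum is $2"
        return 1
    fi
}

# Compare a few of the running kernel values against the table above.
for spec in \
    "kernel.shmmni:4096" \
    "kernel.shmall:2097152" \
    "fs.file-max:65536"
do
    name=${spec%%:*}
    min=${spec#*:}
    cur=$(/sbin/sysctl -n "$name" 2>/dev/null || echo 0)
    check_min "$cur" "$min" "$name" || true
done
```

Run on each node; any WARN line points at a parameter that still needs raising in /etc/sysctl.conf.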
    10. Set shell limits for the oracle user on all nodes to improve performance
             Add the following lines to /etc/security/limits.conf file
                oracle          soft nproc 2047
                oracle          hard nproc 16384
                oracle          soft nofile 1024
                oracle          hard nofile 65536
             Add or edit the following line in the /etc/pam.d/login file
                session required /lib/security/pam_limits.so
              Edit /etc/profile with the following
                             if [ $USER = "oracle" ]; then
                                   if [ $SHELL = "/bin/ksh" ]; then
                                        ulimit -p 16384
                                        ulimit -n 65536
                                   else
                                        ulimit -u 16384 -n 65536
                                   fi
                             fi
    11. Create Oracle software directories on each node
            Oracle Base
               ex. /u01/app/oracle

                      o Minimum 3G available disk space
                      # mkdir -p /u01/app/oracle
                      # chown -R oracle:oinstall /u01/app/oracle
                      # chmod -R 775 /u01/app/oracle
             Oracle Cluster Ready Services
                ex. /u01/crs/oracle/product/10/crs
                      o Should not be a subdirectory of the Oracle Base directory
                      o Minimum 1G available disk space
                      # mkdir -p /u01/crs/oracle/product/10/crs
                      # chown -R oracle:oinstall /u01/crs
                      # chmod -R 775 /u01/crs
             Note, the Oracle Home directory will be created by the OUI
                      o Oracle Home directories will be listed in /etc/oratab
    12. Oracle database files and Oracle database recovery files (if utilized) must reside
        on shared storage:
             ASM: Automatic Storage Management
             NFS file system (requires a NAS device)
             Shared Raw Partitions
    13. The Oracle Cluster Registry and Voting disk files must reside on shared storage,
        but not on ASM. You cannot use Automatic Storage Management to store OCR
        or Voting disk files because these files must be accessible before any Oracle
        instance starts.
             These files MUST be raw files on shared storage. Files:
                      o ora_ocr: 100M each, 2 for redundancy
                      o ora_vote: 20M each, 3 for redundancy
             Clustered sharing of these raw files will be handled by CRS. Third party
                clusterware is not required.
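Under the sizes above, the total raw space for the clusterware files is modest; the arithmetic checks out as:

```shell
#!/bin/sh
# Raw space needed for the clusterware files per the sizes above:
# 2 x 100M OCR copies + 3 x 20M voting disks.
ocr=$((2 * 100))
vote=$((3 * 20))
echo "$((ocr + vote)) MB"    # 260 MB total
```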
    14. Configure the oracle user’s environment
             PATH
                      o In PATH: $ORACLE_HOME/bin before /usr/X11R6/bin
             ORACLE_BASE (ex. /u01/app/oracle)
             ORACLE_HOME (ex. $ORACLE_BASE/product/<version>)
             ORA_CRS_HOME (ex. $ORACLE_BASE/crs)
             DISPLAY
             umask 022
             Test X emulator
    15. Ensure a switch resides on the network between the nodes
    16. For Oracle Clusterware (CRS) on x86 64-bit, you must run rootpre.sh
             Loaded with Clusterware software
    17. Oracle Database 10g installation requires you to perform a two-phase process in
        which you run Oracle Universal Installer (OUI) twice. The first phase installs
        Oracle Clusterware 10g Release 2 (10.2) and the second phase installs the Oracle
        Database 10g software with RAC. These steps are documented below.

2. Oracle Clusterware (formerly CRS) and ASM
Source: Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide
10g Release 2 (10.2) for Linux

     1. Verify user equivalence by testing ssh to all nodes
             ssh may need to be in /usr/local/bin/.
                    o Softlinks may need to be created for ssh and scp in /usr/local/bin/.
     2. IP Addresses: In addition to the host machine's public internet protocol (IP)
        address, obtain two more IP addresses for each node
             Both nodes require a separate public IP address for the node's Virtual IP
                address (VIP). Oracle uses VIPs for client-to-database connections.
                Therefore, the VIP address must be publicly accessible.
     3. The third address for each node must be a private IP address for inter-node, or
        instance-to-instance Cache Fusion traffic. Using public interfaces for Cache
        Fusion can cause performance problems.
     4. Oracle Clusterware should be installed in a separate home directory. You should
        not install Oracle Clusterware in a release-specific Oracle home mount point.

Pre-Install of Clusterware files (OCR and Voting Disk)
     5. Oracle Clusterware files to be installed are:
             Oracle Cluster Registry (OCR): 100M: ora_ocr
             CRS Voting Disk: 20M: ora_vote
     6. The CRS files listed above must be on shared storage (OCFS, NFS, or raw) and
        bound and visible to all nodes.
             You cannot use Automatic Storage Management to store Oracle CRS files,
                because these files must be accessible before any Oracle instance starts.
     7. If using raw, do the following on all nodes as root
              To identify device names: /sbin/fdisk -l
                    o devicename examples: /dev/sdv OR /dev/emcpowera
             Create (raw) partitions: /sbin/fdisk <devicename>
                    o Use the “p” command to list the partition table of the device.
                    o Use the “n” command to create a partition.
                    o After creating required partitions on this device, use the “w”
                       command to write the modified partition table to the device
             Bind partitions to the raw devices
                    o See what devices are already bound: /usr/bin/raw -qa
                    o Add a line to /etc/sysconfig/rawdevices for each partition created:
                            /dev/raw/raw1 </path/partition_name>
                    o For the raw device created
                            chown root:dba /dev/raw/raw1
                            chmod 640 /dev/raw/raw1
                    o To bind the partitions to the raw devices, enter the following
                            /sbin/service rawdevices restart
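Tying the pieces together, a /etc/sysconfig/rawdevices mapping for the clusterware files might look like this; the partition names are hypothetical examples, not devices from this install:

```
# raw device     partition
/dev/raw/raw1    /dev/sdb1    # ora_ocr  (100M)
/dev/raw/raw2    /dev/sdc1    # ora_ocr  (redundant copy)
/dev/raw/raw3    /dev/sdd1    # ora_vote (20M)
/dev/raw/raw4    /dev/sde1    # ora_vote
/dev/raw/raw5    /dev/sdf1    # ora_vote
```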

Pre-Install of Database files for ASM (Automatic Storage Management)
     8. Determine how many devices and free disk space required
              Determine space needed for Database files
              Determine space needed for recovery files (optional)
     9. ASM redundancy level: determines how ASM mirrors, the number of disks
         needed for mirroring, and amount of disk space needed.
              External Redundancy: ASM does not mirror
              Normal Redundancy: Two-way ASM mirroring. Minimum of 2 disks are
                  required. Useable disk space is 1/2 the sum of the disk space.
              High Redundancy: Three-way ASM mirroring. Minimum of 3 disks are
                  required. Useable disk space is 1/3 the sum of the disk space.
     10. ASM metadata requires additional disk space. Use the following calculation to
         determine space in megabytes:
              15 + (2 * number_of_disks) + (126 * number_of_ASM_instances)
     11. Failure groups for ASM disk group devices (optional): associate a set of disk
         devices in a custom failure group.
              Only available in Normal or High redundancy level
     12. Guidelines for disk devices and disk groups
              All devices in an ASM disk group should be the same size and have the
                  same performance characteristics
              Do not specify more than one partition on a single physical disk as a disk
                  group device. ASM expects each disk group device to be on a separate
                  physical disk.
              Although you can specify a logical volume as a device in an ASM disk
                  group, Oracle does not recommend their use. Logical volume managers
                  can hide the physical disk architecture, preventing ASM from optimizing
                  I/O across the physical devices.
     13. If necessary, download the required ASMLIB packages from the OTN Web site:
              http://www.oracle.com/technology/software/tech/linux/asmlib/rhel3.html
     14. Install the following three packages on all nodes, where version is the version of
         the ASMLIB driver, arch is the system architecture, and kernel is the version of
         the kernel that you are using:
              oracleasm-support-version.arch.rpm
              oracleasm-kernel-version.arch.rpm
              oracleasmlib-version.arch.rpm
     15. On all nodes, install the packages as root:
               rpm -Uvh oracleasm-support-version.arch.rpm \
                              oracleasm-kernel-version.arch.rpm \
                              oracleasmlib-version.arch.rpm
               check kernel modules: /sbin/modprobe -v oracleasm
     16. Run the oracleasm initialization script as root on all nodes:
              /etc/init.d/oracleasm configure
              When requested, select owner (oracle), group (dba), and start on boot (y)

Configure the Disk Devices to Use the ASM Library Driver
    17. Install or configure the shared disk devices that you intend to use for the disk
        group(s) and restart the system.
    18. Identify the device name for the disks: /sbin/fdisk -l
    19. Use either fdisk (or parted) to create a single whole-disk partition on the disk
        devices that you want to use.
             On Linux systems, Oracle recommends that you create a single whole-disk
                 partition on each disk
              To identify device names: /sbin/fdisk -l
                     o devicename examples: /dev/sdv OR /dev/emcpowera
             Create (raw) partitions: /sbin/fdisk <devicename>
                     o Use the “p” command to list the partition table of the device.
                     o Use the “n” command to create a partition.
                     o After creating required partitions on this device, use the “w”
                         command to write the modified partition table to the device
    20. Mark the disk(s) as ASM disk(s). As root run:
             /etc/init.d/oracleasm createdisk DISK1 /dev/sdb1
             /etc/init.d/oracleasm createdisk DISK2 /dev/sda1
             Where DISK1 and DISK2 are the names you want to assign to the disks.
                 Each name MUST start with an uppercase letter.
    21. On each node, to make the disk available on the other cluster nodes, enter the
        following command as root:
             /etc/init.d/oracleasm scandisks
    22. On each node confirm disks
             /etc/init.d/oracleasm listdisks
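Steps 20-22 can be driven from one loop. This dry-run sketch only prints the commands (the disk names and device paths are assumptions), so it is safe to review before running the real thing as root:

```shell
#!/bin/sh
# Print (not execute) the oracleasm commands for each disk mapping.
# Disk names must start with an uppercase letter; devices are examples.
for pair in "DISK1:/dev/sdb1" "DISK2:/dev/sdc1"; do
    name=${pair%%:*}    # text before the colon: the ASM disk name
    dev=${pair#*:}      # text after the colon: the partition device
    echo "/etc/init.d/oracleasm createdisk $name $dev"
done
echo "/etc/init.d/oracleasm scandisks   # then run on every other node"
echo "/etc/init.d/oracleasm listdisks   # confirm on each node"
```

Dropping the echo in front of the createdisk line turns the loop into the real procedure.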

    23. If you are using EMC PowerPath on Red Hat 3, add the following line to the
        /etc/sysconfig/oracleasm file:
             ORACLEASM_SCANEXCLUDE="emcpower"

Install Oracle Clusterware
    24. During the installation, hidden files on the system (for example, .bashrc or .cshrc)
        will cause installation errors if they contain stty commands. You must modify
        these files to suppress all output on STDERR, as in the following examples:
             Bourne, Bash, or Korn shell:
                if [ -t 0 ]; then
                   stty intr ^C
                fi
             C shell:
                test -t 0
                if ($status == 0) then
                   stty intr ^C
                endif
    25. As root, run rootpre.sh which is located in the ../clusterware/rootpre directory on
        the Oracle Database 10g Release 2 (10.2) installation media.

    26. Using an X Windows emulator, start the runInstaller command from the
        clusterware directory on the Oracle Database 10g Release 2 (10.2) installation
        media
             /mountpoint/clusterware/runInstaller

               When OUI displays the Welcome page, click Next.
               On the “Specify Home Details” page, remember, the Clusterware home
                CANNOT be the same as the ORACLE_HOME.
    27. When the OUI is complete, run orainstRoot.sh and root.sh on all the nodes when
        prompted
    28. Without user intervention, OUI runs
              Oracle Notification Server Configuration Assistant
              Oracle Private Interconnect Configuration Assistant
              Cluster Verification Utility (CVU)
    29. If the CVU fails because of a missing VIP, this may be because all of the IP
        addresses are incorrectly considered private by Oracle (because they begin with
        172.16.x.x - 172.31.x.x, 192.168.x.x, or 10.x.x.x). In a separate window, as root,
        run the vipca manually.
              DO NOT exit the OUI
              As root, launch VIPCA (ex: /apps/crs/oracle/product/10.2/crs/bin/vipca)
              Enter the VIP node names and IP address for every node.
              Exit VIPCA
              Back in the OUI, Retry the Cluster Verification Utility

Post-Installation Administration info
    30. init.crs: should have been added to server boot scripts to stop/start CRS
    31. The following are the CRS (CSS) background processes that must be running for
        CRS to function. These are stopped and started with init.crs:
              evmd -- Event manager daemon that starts the racgevt process to manage
                 server callouts
              ocssd -- Manages cluster node membership and runs as oracle user; failure
                 of this process results in cluster restart.
              crsd -- Performs high availability recovery and management operations
                 such as maintaining the OCR. Also manages application resources and
                 runs as root user and restarts automatically upon failure.
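A quick way to confirm these three daemons are up is to scan the process list. This is a hypothetical helper, not part of the Oracle tooling; on some releases the processes appear as ocssd.bin and similar, which the word match below still catches:

```shell
# Hypothetical check: given `ps -e -o comm=` output on stdin, print any of
# the three CRS daemons (evmd, ocssd, crsd) that are NOT running.
missing_crs_daemons() {
  procs=$(cat)
  for d in evmd ocssd crsd; do
    # -w matches "ocssd" inside "ocssd.bin" as well, since "." is a non-word char
    echo "$procs" | grep -qw "$d" || echo "$d"
  done
}
```

Typical usage on a node: ps -e -o comm= | missing_crs_daemons (no output means all three are running).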
    32. To administer the ASM library driver and disks, use the oracleasm initialization
        script (used in the previous steps) with different options, as follows:
              /etc/init.d/oracleasm configure
                     o To reconfigure the ASM library driver
              /etc/init.d/oracleasm enable OR disable
                     o Change the behavior of the ASM library driver when the system
                         boots. The enable option causes the ASM library driver to load
                         when the system boots

                  /etc/init.d/oracleasm restart OR stop OR start
                        o Load or unload the ASM library driver without restarting the
                             system
                  /etc/init.d/oracleasm createdisk DISKNAME devicename
                       o Mark a disk device for use with the ASM library driver and give it
                            a name
                  /etc/init.d/oracleasm deletedisk DISKNAME
                       o To unmark a named disk device. You must drop the disk from the
                             ASM disk group before you unmark it.
                  /etc/init.d/oracleasm querydisk {DISKNAME | devicename}
                        o To determine whether a disk device or disk name is being used by
                            the ASM library driver.
                   /etc/init.d/oracleasm listdisks
                       o To list the disk names of marked ASM library driver disks
                  /etc/init.d/oracleasm scandisks
                       o To enable cluster nodes to identify which shared disks have been
                            marked as ASM library driver disks on another node.
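Taken together, a typical disk-marking flow from the options above might look like the sketch below. The ORACLEASM path, disk name, and device are placeholders, and DRY_RUN=1 only echoes the commands so the sequence can be checked without root access:

```shell
# Sketch of a typical ASMLib disk-marking sequence (placeholder paths/names).
ORACLEASM=/etc/init.d/oracleasm
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }

mark_asm_disk() {            # usage: mark_asm_disk DISKNAME /dev/device (one node)
  run $ORACLEASM createdisk "$1" "$2"
  run $ORACLEASM listdisks   # confirm the new disk name appears
}

rescan_asm_disks() {         # run on every OTHER cluster node
  run $ORACLEASM scandisks
  run $ORACLEASM listdisks   # the marked disk should now be visible here too
}
```

For example, DRY_RUN=1 mark_asm_disk VOL1 /dev/sdc1 prints the two commands that would be run as root on the owning node.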

3. Oracle Database 10g with RAC – Software (binaries)
Source: Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide
10g Release 2 (10.2) for Linux

Pre-Install Notes
     1. The Oracle home that you create for installing Oracle Database 10g with the RAC
        software cannot be the same Oracle home that you used during the CRS
        installation.
     2. During the installation, unless you are placing your Oracle home on a clustered
        file system, the OUI copies software to the local node and then copies the
        software to the remote nodes. On UNIX-based systems, the OUI then prompts
        you to run the root.sh script on all the selected nodes.

     3. Using an X Windows emulator, start the runInstaller command from the database
        directory on the Oracle Database 10g Release 2 (10.2) installation media.
             /mountpoint/database/runInstaller
             Execute a normal Oracle install except where noted below.
     4. Ensure the OUI is cluster aware
             After the Specify Home Details page you should see the Specify Hardware
                Cluster Installation Mode pages.
             If you do not, the OUI is not cluster aware and will not install components
                required to run RAC.
              View the OUI log in <oraInventory>/logs/ for install details
     5. On the Select Configuration Option page, select “Install Database Software only”
     6. Complete Install

4. Patch Oracle Database Software
Source: Oracle Metalink - http://metalink.oracle.com

At this time, is the latest GA version for Linux x86. The Oracle CD pack used
for this install is

Download and Install Patches
Refer to the OracleMetaLink Web site for required patches for your installation and to
download required patches:

     1. Use a Web browser to view the OracleMetaLink Web site: http://metalink.oracle.com
     2. Log in to OracleMetaLink.
     3. On the main OracleMetaLink page, click the Patches & Updates tab.
     4. Click the Simple Search link, then the Advanced Search button.
     5. On the Advanced Search page, click the search icon next to the Product or
         Product Family field.
     6. In the Search and Select: Product Family field, enter RDBMS Server in the For
         field and click Go.
     7. Select RDBMS Server under the Results heading and click Select. RDBMS
         Server appears in the Product or Product Family field and the current release
         appears in the Release field.
     8. Select your platform from the list in the Platform field and click Go.
     9. Any available patches appear under the Results heading.
     10. Click the number of the patch that you want to download.
     11. On the Patch Set page, click View README and read the page that appears. The
         README page contains information about the patch set and how to apply the
         patches to your installation.
     12. Return to the Patch Set page, click Download, and save the file on your system.
     13. Use the unzip utility provided with Oracle Database 10g to uncompress the
         Oracle patches that you downloaded from OracleMetaLink. The unzip utility is
         located in the $ORACLE_HOME/bin directory.

Fix an install bug (5117016)
     14. cd /apps/oracle/10.2/rdbms/lib/ -- we need to make a copy
     15. cp libserver10.a libserver10.a.base_cpOHRDBMSLIB
     16. cd /apps/oracle/10.2/lib
     17. mv libserver10.a libserver10.a.base_cpOHLIB
     18. mv /apps/oracle/10.2/rdbms/lib/libserver10.a .
     19. ls -al $ORACLE_HOME/bin/oracle*
     20. relink oracle
     21. ls -al $ORACLE_HOME/bin/oracle*
     22. If oracle does not relink, stop and contact support
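The copy/rename dance in steps 14-18 can be wrapped in one small function, shown here as a sketch parameterized on the Oracle home. The backup suffixes follow the steps above; relink oracle (step 20) is still run manually afterwards so its output can be inspected:

```shell
# Sketch of the libserver10.a swap from steps 14-18 (bug 5117016 workaround).
# Takes the Oracle home as its one argument.
swap_libserver() {
  oh=$1
  # step 14-15: keep a copy of the rdbms/lib version
  cp "$oh/rdbms/lib/libserver10.a" \
     "$oh/rdbms/lib/libserver10.a.base_cpOHRDBMSLIB"
  # step 16-17: preserve the original lib version under a backup name
  mv "$oh/lib/libserver10.a" \
     "$oh/lib/libserver10.a.base_cpOHLIB"
  # step 18: promote the rdbms/lib copy into $ORACLE_HOME/lib
  mv "$oh/rdbms/lib/libserver10.a" "$oh/lib/"
}
```

After running it, ls -al $ORACLE_HOME/bin/oracle* and relink oracle proceed exactly as in steps 19-21.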

Fix a Permission bug (patch 5087548)
    23. Transfer the 10.2 patch file to the server
    24. Unzip the patch file: unzip <filename>
    25. Set 10g oracle environment
    26. cd 5087548/
    27. Run OPatch: /apps/oracle/10.2/OPatch/opatch apply
    28. cd $ORACLE_HOME/install
    29. . ./changePerm.sh
    30. Hit 'y' and enter
     -- permission changes should take around 10-15 minutes
     -- *** if the script hangs then exit the window/job
     -- verify permission changes in /apps/oracle/10.2/bin/
     -- most permissions (sqlplus) should show: -rwxr-xr-x
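The permission check described above can be automated with a small sketch (hypothetical helper; assumes GNU stat, which is standard on Linux). It lists any file under a bin directory whose mode is not the expected 755, i.e. -rwxr-xr-x:

```shell
# Hypothetical verifier for the changePerm.sh result: print each regular file
# under the given directory whose octal mode is not 755, along with its mode.
check_bin_perms() {
  dir=$1
  for f in "$dir"/*; do
    [ -f "$f" ] || continue
    mode=$(stat -c %a "$f")     # GNU stat: octal permission bits
    [ "$mode" = 755 ] || echo "$f $mode"
  done
}
```

Typical usage here: check_bin_perms /apps/oracle/10.2/bin (no output means every binary, sqlplus included, already shows -rwxr-xr-x).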

5. RAC Database using the DBCA with ASM
Source: Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide
10g Release 2 (10.2) for Linux

     1. Database creation requirements of the ASM Library Driver
            You must use Database Configuration Assistant (DBCA) in interactive
              mode to create the database. You can run DBCA in interactive mode by
              choosing the Custom installation type or the Advanced database
              configuration option.
            You must also change the default disk discovery string to ORCL:*.

     2. Run CVU to verify that your system is prepared to create an Oracle Database
         with RAC:
             /mountpoint/crs/Disk1/cluvfy/runcluvfy.sh stage -pre dbcfg -n node_list -d
                oracle_home [-verbose]
             Example: /dev/dvdrom/crs/Disk1/cluvfy/runcluvfy.sh stage -pre dbcfg -n
                node1,node2 -d /oracle/product/10.2.0/
     3. Start the DBCA using an X Windows emulator: $ORACLE_HOME/bin/dbca
             Execute a normal Oracle install except where noted below.
     4. Ensure the DBCA is cluster aware
             The first page should be the Welcome page for RAC.
             If not, the DBCA is not cluster aware
             To diagnose
                    o Run the CVU: /mountpoint/crs/Disk1/cluvfy/runcluvfy.sh stage -
                        post crsinst -n nodename
                    o Run olsnodes
     5. When asked “Select the operation that you want to perform”, choose Create a
         Database
     6. Select the Custom Database template to manually define datafiles and options
     7. If you choose to manage the RAC database with Enterprise Manager, you can also
         choose one of the following:
             Grid Control
             Database Control
     8. On the Storage Options page
             The Cluster File System option is the default.
             Change to ASM
     9. For ASM, you will need to create an ASM instance (if one does not already exist).
         You will be taken to the ASM Instance Creation page
              Unless $ORACLE_HOME/dbs/ is a shared filesystem, you will not be
                 able to create an SPFILE. Use an IFILE.
             Let ASM create a listener if prompted.
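When an IFILE is used as suggested, the per-instance parameter file under $ORACLE_HOME/dbs/ on each node reduces to a one-line pointer at a parameter file kept on shared storage. The instance name and path below are illustrative only:

```
# init+ASM1.ora on node 1 (hypothetical instance name and shared path)
IFILE=/shared/oradata/asm/init+ASM.ora
```

Each node gets its own init<instance>.ora pointing at the same shared file, so parameter changes are made in one place.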
     10. On the ASM Disk Group page:

              Click the “Create New” button. The disk groups configured above in the
               ASM Library Driver install should appear.
             On the Create Disk Group page, your ASM disk(s) should appear. If not,
               exit the DBCA and restart.
             At the top, choose a Disk Group Name
             Choose your redundancy level (external)
              Then check the disks that belong to the Disk Group and click OK
    11. On the Recovery Configuration page, for Cluster File System, the optional flash
        recovery area defaults to $ORACLE_BASE/flash_recovery_area
    12. Follow the remaining steps as in a typical database creation.
    13. Before creating the database, choose Generate Database Creation Scripts

6. Post-Installation Tasks
Source: Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide
10g Release 2 (10.2) for Linux

     1. Ensure NETCA has run to configure Oracle Networking components.
     2. Back up the voting disk
            Also make a backup of the voting disk after adding or removing a node

7. Oracle Files
Creation of these files is not necessary when using ASM. They are listed here to assist
with Database planning and sizing.

Local Files
These files are local to each node and do not need to be OCFS or ASM files.
 - archived redo logs
 - init file

Shared Oracle Database Files
These files (except ora_ocr and ora_vote) may live in ASM disk groups.

  -   ora_ocr                              100M                    raw file for CRS cluster registry
  -   ora_vote                             20M                     raw file for CRS voting disk
  -   controlfile_01                       500M
  -   controlfile_02                       500M
  -   system_01                            1G
  -   system_02                            1G
  -   sysaux_01                            800M                    300M + 250M for each instance
  -   sysaux_02                            800M
  -   srvcfg_01                            500M                    optional (for server management file)
  -   sp_file_01                           100M                    optional (for server parameter file)
  -   example_01                           200M                    optional
  -   cwmlite_01                           200M                    optional
  -   xdb_01                               100M                    optional
  -   odm_01                               300M                    optional
  -   indx_01                              100M                    optional
  -   tools_01                             500M
  -   drsys_01                             500M                    optional (for intermedia)
  -   drsys_02                             500M                    optional (for intermedia)
  -   snaplogs_01                          2G                      optional (for replication)
  -   users_01                             500M
  -   temp_01                              2G
  -   temp_02                              2G                      (for default temp TS switching)
  -   undo_i1_01                           2G
  -   undo_i1_02                           2G
  -   undo_i1_03                           2G                      (for undo TS switching)
  -   undo_i1_04                           2G                      (for undo TS switching)
  -   undo_i2_01                           2G
  -   undo_i2_02                           2G
  -   undo_i2_03                           2G                      (for undo TS switching)
  -   undo_i2_04                           2G                      (for undo TS switching)
  -   redo_i1_01                           100M
  -   redo_i1_02                           100M
  -   redo_i1_03                           100M
  -   redo_i1_04                           100M
  -   redo_i1_05                           100M
  -   redo_i1_06                           100M                    (for high trans. growth)
  -   redo_i1_07                           100M                    (for high trans. growth)
  -   redo_i1_08                           100M                    (for high trans. growth)
  -   redo_i1_09                           100M                    (for high trans. growth)
  -   redo_i1_10                           100M                    (for high trans. growth)
  -   redo_i2_01                           100M
  -   redo_i2_02                           100M
  -   redo_i2_03                           100M
  -   redo_i2_04                           100M
  -   redo_i2_05                           100M
  -   redo_i2_06                           100M                    (for high trans. growth)
  -   redo_i2_07                           100M                    (for high trans. growth)
  -   redo_i2_08                           100M                    (for high trans. growth)
  -   redo_i2_09                           100M                    (for high trans. growth)
  -   redo_i2_10                           100M                    (for high trans. growth)
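For sizing, a list like the one above can be totaled with a small helper (a sketch; it reads name/size pairs, treats a G suffix as 1024M, and reports the sum in megabytes):

```shell
# Sum a "name size" list where sizes end in M or G (e.g. 500M, 2G);
# prints the total in megabytes. Reads the pairs on stdin.
total_mb() {
  awk '{ s = $2
         if (s ~ /G$/)      sum += substr(s, 1, length(s) - 1) * 1024
         else if (s ~ /M$/) sum += substr(s, 1, length(s) - 1)
       }
       END { print sum "M" }'
}
```

Usage: save the list above as name/size pairs and pipe it in, e.g. total_mb < shared_files.txt, to get a rough total for raw device or disk group planning.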

Shared Database Files for the Application
<list files here>

