Implementing RAC

Author: Balakrishnan R.

Wipro Technologies June 2004

TABLE OF CONTENTS

Purpose of the Document
Document Usage
RAC Concepts
Advantages of RAC
RAC Architecture
Sun Cluster and RAC
Steps for Creating a RAC Database from a Single Instance DB

Implementing RAC Database

Purpose of the document
This document provides an overview of RAC concepts, advantages, and architecture. With that background, it explains the implementation steps for RAC database creation. The document helps the reader gain this knowledge quickly, without spending much time absorbing the full technicalities of RAC. More importantly, it explains in simple statements the steps involved in converting a single-node database to a RAC database, which is especially helpful from the DBA viewpoint.

Document Usage
This document is basically intended for Oracle DBAs to get a clear picture of implementing RAC Databases. This document would also be of help to any Oracle User to get a fairly good idea of the RAC Concepts, Advantages and Architecture.

Wipro Technologies

Confidential

Page 3 of 14


1. The Concept of Real Application Clusters
What is a Real Application Cluster?
A Real Application Cluster, in simple terms, is an architecture in which multiple computers (servers) are interconnected by hardware and software to form a robust computing environment.

What is Oracle Real Application Clusters (RAC)?
One Oracle instance runs on each of the interconnected computers (servers), and all of these instances access a single shared database, which is called a RAC database.

What are the Main Components of RAC?
Real Application Clusters includes the software component that provides the necessary scripts, initialization files, and datafiles to make an Oracle9i Enterprise Edition database an Oracle9i Real Application Clusters database.

The Main Features of RAC
Real Application Clusters is a breakthrough cluster software architecture with scalability and high availability features that exceed the capabilities of previous Oracle cluster-enabled software products. In Real Application Clusters environments, all active instances can concurrently execute transactions against a shared database. Real Application Clusters coordinates each instance's access to the shared data to provide data consistency and data integrity.

Harnessing the power of clusters offers obvious advantages. A large task divided into subtasks and distributed among multiple nodes completes sooner and more efficiently than if the entire task were processed on one node. Cluster processing also provides increased performance for larger workloads and for accommodating rapidly growing user populations. With Real Application Clusters, you can scale applications to meet increasing data processing demands without changing the application code. As you add resources such as nodes or storage, Real Application Clusters extends the processing power of these resources beyond the limits of the individual components. Data warehouse applications that access read-only data are prime candidates for Real Application Clusters.
In addition, Real Application Clusters successfully manages Online Transaction Processing systems and hybrid systems which combine the characteristics of both read-write and read-only applications. Real Application Clusters also serves as an important component of robust high availability solutions, tolerating failures with little or no downtime.


2. The Advantages of Real Application Clusters
This section describes the following advantages of Real Application Clusters:

• Low Cost and Less Investment
• Expanded Scalability
• High Availability
• Transparency
• Buffer Cache Management

Low Cost and Less Investment
Real Application Clusters lowers the overall cost of ownership more effectively than other cluster database products, due in great part to the single-system image afforded by the Real Application Clusters architecture. It is not necessary to acquire a very high-end server up front to cover future scalability requirements. It is enough to provide for near-term requirements; if much more scalability is needed later, additional computers can be added to the cluster, thus avoiding the blocking of huge investments.
Expanded Scalability

A scalable environment enables you to improve performance and add capacity by adding nodes. On some platforms you can add nodes dynamically while your cluster is running. The number of nodes that Real Application Clusters can support is significantly greater than in any known implementation. Small systems configured primarily for high availability might have only two nodes. Large configurations, however, might have 32 to 64 nodes.
High Availability

High availability refers to systems with redundant components that provide consistent, uninterrupted service, even during failures. In most high availability configurations, nodes are isolated from each other so that a failure on one node does not affect the entire system. In an Oracle Applications RAC database, if a node/server fails, the components/services that run on that node are switched over to the other nodes, thus ensuring high availability.
Transparency

The concept of transparency implies that Real Application Clusters environments are functionally equivalent to single-instance Oracle database configurations. In other words, you do not need to make code changes to deploy applications on Real Application Clusters if your applications ran efficiently on single-instance Oracle configurations. The Cache Fusion feature, which implies a fusing of the multiple buffer caches in a cluster, simplifies the administration of Real Application Clusters environments. With Real Application Clusters and Cache Fusion, you also do not need to perform capacity planning. Transparency is also realized by efficient resource use at both the application and system levels. For example, you do not need to perform time-consuming resource configurations by examining data access patterns, because Real Application Clusters does this automatically.
Buffer Cache Management

Oracle stores resources, such as data block information, in a buffer cache that resides in memory. Storing this information locally reduces database operations and disk I/O. Because each instance has its own memory, Real Application Clusters coordinates the buffer caches of multiple nodes while minimizing disk I/O. This optimizes performance and expands the effective memory to be nearly equal to the sum of all memory in your cluster database. To do this, Real Application Clusters uses the Global Cache Service (GCS) to coordinate operations among the multiple buffer caches and to optimize Oracle's high performance features. The Global Enqueue Service (GES) also assists in synchronization by managing internode communications.
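As an illustrative sketch (a running RAC database and SYSDBA access are assumed, and the exact statistic names differ between Oracle releases), the block transfers coordinated by the GCS can be observed from any one instance through the cluster-wide GV$ views:

```shell
# Illustrative only: requires a running RAC instance and SYSDBA access.
sqlplus -s "/ as sysdba" <<'EOF'
-- Blocks shipped between buffer caches, summarized per instance.
-- In Oracle 9i these statistics appear as 'global cache ... received';
-- later releases rename them to 'gc ... received'.
SELECT inst_id, name, value
  FROM gv$sysstat
 WHERE name LIKE '%cache%blocks received' OR name LIKE 'gc%blocks received'
 ORDER BY inst_id, name;
EOF
```

High and growing "blocks received" counts indicate Cache Fusion traffic across the interconnect rather than disk reads, which is exactly the coordination this section describes.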


3. Real Application Cluster Architecture

The following sections describe the system components and architectural models that typify most cluster database environments. They describe the hardware for nodes along with the hardware and software that unite the nodes into a cluster database.

Overview of Cluster Database System Components

A cluster database comprises two or more nodes that are linked by an interconnect. The interconnect is a communication link between the nodes. It is a high-speed, operating system-dependent transport component that transfers messages between instances on different nodes and is also referred to as the interprocess communication (IPC) component. Each Oracle instance uses the interconnect for the messaging that synchronizes each instance's use of shared resources. Oracle also uses the interconnect to transmit data blocks that the multiple instances share. The primary type of shared resource is the datafiles that all the nodes access. The diagram below is a high-level view of how the interconnect links the nodes in a cluster database and how the cluster accesses the shared datafiles on storage devices.
Figure 1: Cluster Components for Cluster Database Processing

The illustration shows Nodes 1 through 4 accessing the storage area network through a switch/hub interconnect.

The cluster and its interconnect are linked to the storage devices, or shared disk subsystem, by a storage area network. The following sections describe the nodes and the interconnect in more detail.

Nodes and Their Components

A node has the following main components:

• CPU--The main processing component of a computer, which reads from and writes to the computer's main memory.
• Memory--The component used for programmatic execution and the buffering of data.
• Interconnect--The communication link between the nodes.
• Storage--A device that stores data. This is usually persistent storage that must be accessed by read-write transactions to alter its contents.

Cluster Interconnect and Interprocess Communication (Node-to-Node)

Real Application Clusters uses a high-speed interprocess communication component for internode communications. The IPC defines the protocols and interfaces required for Real Application Clusters environments to transfer messages between instances. Messages are the fundamental units of communication in this interface. The core IPC functionality is built on an asynchronous, queued messaging model.
Memory, Interconnect, and Storage

All cluster databases use CPUs in generally the same way. However, you can deploy different configurations of memory, storage, and the interconnect for different purposes. The architecture on which you deploy Real Application Clusters depends on your processing goals. Each node in a cluster database has one or more CPUs. Nodes with multiple CPUs are typically configured to share main memory. This enables you to deploy a scalable system.
The High-Speed IPC Interconnect

The high-speed interprocess communication (IPC) interconnect is a high-bandwidth, low-latency communication facility that links the nodes in the cluster. The interconnect routes messages and other cluster communications traffic to coordinate each node's access to resources. You can use Ethernet, a Fiber Distributed Data Interface (FDDI), or other proprietary hardware for your interconnect. It is also advisable to install a backup interconnect in case the primary interconnect fails. The backup interconnect enhances high availability and reduces the likelihood of the interconnect becoming a single point of failure.


Real Application Clusters supports user-mode and memory-mapped IPCs. These types of IPCs substantially reduce CPU consumption and IPC latency.

Real Application Clusters-Specific Daemon and Instance Processes
The Global Services Daemon
The Global Services Daemon (GSD) runs on each node, with one GSD process per node. The GSD coordinates with the cluster manager to receive requests from clients such as the DBCA, EM, and the SRVCTL utility to execute administrative tasks such as instance startup or shutdown. The GSD is not an Oracle instance background process and is therefore not started with the Oracle instance.
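For illustration, assuming an Oracle 9i two-node cluster with a database named rac and an instance named rac2 (both names invented for this sketch), the GSD and SRVCTL are typically exercised as follows; exact syntax varies by Oracle version:

```shell
# Illustrative only: requires an installed Oracle 9i RAC home on each node.
# Database name "rac" and instance name "rac2" are invented for this sketch.

gsdctl start                         # start the GSD on this node (Oracle 9i)
srvctl status database -d rac        # status of all instances, via the GSDs
srvctl start instance -d rac -i rac2 # start one instance on its node
srvctl stop instance -d rac -i rac2  # stop that instance again
```

Because SRVCTL routes its requests through the GSD on each node, the GSD must be running on every node before SRVCTL commands will succeed.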

Instance Processes Specific to Real Application Clusters
A Real Application Clusters database has the same processes as single-instance Oracle databases, such as process monitor (PMON), database writer (DBWRn), log writer (LGWR), and so on. There are also additional Real Application Clusters-specific processes. The exact names of these processes and the trace files that they create are platform-dependent.

• Global Cache Service Processes (LMSn), where n ranges from 0 to 9 depending on the amount of messaging traffic, control the flow of messages to remote instances and manage global data block access. LMSn processes also transmit block images between the buffer caches of different instances. This processing is part of the Cache Fusion feature.
• The Global Enqueue Service Monitor (LMON) monitors global enqueues and resources across the cluster and performs global enqueue recovery operations. Enqueues are shared memory structures that serialize row updates.
• The Global Enqueue Service Daemon (LMD) manages global enqueue and global resource access. Within each instance, the LMD process manages incoming remote resource requests.
• The Lock Process (LCK) manages non-Cache Fusion resource requests such as library and row cache requests.
• The Diagnosability Daemon (DIAG) captures diagnostic data about process failures within instances. The operation of this daemon is automated, and it updates an alert log file to record the activity that it performs.
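On a live node these background processes can be spotted with ps. The fragment below filters a canned sample of ps output (the PIDs and the SID rac1 are invented) so the pattern itself can be seen in isolation:

```shell
# Filter RAC-specific background processes out of (sample) 'ps -ef' output.
ps_sample='oracle  2301     1  0 ora_pmon_rac1
oracle  2303     1  0 ora_lmon_rac1
oracle  2305     1  0 ora_lms0_rac1
oracle  2307     1  0 ora_lmd0_rac1
oracle  2309     1  0 ora_lck0_rac1
oracle  2311     1  0 ora_diag_rac1'

# On a real node you would run:  ps -ef | grep -E 'ora_(lms|lmon|lmd|lck|diag)'
echo "$ps_sample" | grep -E 'ora_(lms|lmon|lmd|lck|diag)'
```

The filter matches the five RAC-specific daemons and excludes the ordinary single-instance processes such as PMON.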


Shared Disk Storage and the Cluster File System Advantage Real Application Clusters requires that all nodes have simultaneous access to the shared disks to give the instances concurrent access to the database. The implementation of the shared disk subsystem is based on your operating system: you can use either a cluster file system or place the files on raw devices. Cluster file systems greatly simplify the installation and administration of Real Application Clusters. Memory access configurations for Real Application Clusters are typically uniform. This means that the overhead for each node in the cluster to access memory is the same. However, typical storage access configurations are both uniform and non-uniform. The storage access configuration that you use is independent of your memory configuration.


As for memory configurations, most systems also use uniform disk access for Real Application Clusters databases. Uniform disk access configurations in a cluster database simplify disk access administration.

4. SUN Cluster and RAC
The Oracle Parallel Server (OPS) and Real Application Clusters configurations are characterized by two nodes that access a single database image. OPS/RAC configurations are often used for throughput applications. When a node fails, an application does not move to a backup system. OPS/RAC uses a distributed lock management (DLM or IDLM) scheme to prevent simultaneous data modification by two hosts. The lock ownership information is transferred between cluster hosts across the cluster transport system. When a failure occurs, most of the recovery work is performed by the OPS/RAC software, which resolves incomplete database transactions. The Sun Cluster software performs a relatively minor role: it initiates a portion of the database recovery process. RAC makes greater use of interconnect traffic for inter-node communications and hence supports SCI, regular Ethernet, and Gigabit Ethernet for its interconnect technology.

Sun Cluster 3.0 supports up to eight cluster nodes, and Sun plans to extend this to 64 nodes eventually; Sun Cluster 2.2 had a limit of four nodes. Real Application Clusters (RAC) is supported with the Solaris 8 Operating System (SPARC) and Sun Cluster 3.0, though only two nodes are fully supported, with limited four-node support at this stage. The limitation on SC 3.0/RAC is due to the implementation of SCSI-3 Persistent Group Reservation (PGR) in the storage subsystem. SC 3.0 uses PGR for quorum management for more than two nodes. SCSI-2 Reserve is used by SC 2.2 and 3.0 for two-node clusters. SC 2.2 uses a terminal concentrator for more than two nodes. Sun will support PGR in T3 storage in an update but will not support PGR in earlier storage offerings (e.g., Photon A5200). EMC, however, does support PGR, and such third-party vendors are now responsible for certification against Sun Cluster. Up to four-node OPS/RAC is supported with Sun Cluster 3.0 Update 2 (or higher) in several configurations, the key component in each four-node configuration being the shared storage device. Four-node support is now available for the Sun T3 single brick (requires VxVM 3.2) and the SE 9910/9960 (Sun Enterprise series, an OEM of Hitachi HDS).


5. Steps for Creating a RAC Database from Single Instance Database
Pre-Requisites
The nodes are interconnected and the cluster software is installed. For the files of the RAC database to be accessible by multiple nodes, either a cluster file system (which can be proprietary) or, more typically, raw partitions on UNIX and unformatted disk partitions on Windows are required.

Broad Steps
1. Make a backup of the single-node database, which can also be called the source database. It is ideal to have a cold backup.
2. Create the Oracle user and set environment variables on each node of the cluster.
3. Create the raw partitions or cluster file systems to match the source DB.
4. Connecting as the Oracle user, install the database executables (Oracle software) on all nodes of the cluster.

5. Copy the source database, using the backup, to the target database location. If this involves converting from filesystem to raw devices, copy the database datafiles, control files, redo log files, and server parameter files to the corresponding raw devices using the dd command on UNIX or OCOPY on Windows.
6. Re-create the control file of the target database.
7. Shut down the instance.
8. Include the RAC parameters in the init.ora file (given at the end) and start up the instance.
9. If the source database was using automatic undo management, create an undo tablespace for each instance.
10. If the source database was using manual undo management, create at least two private rollback segments for each instance.
11. Create a redo thread having at least 2 redo log groups for each instance.
12. Copy the Oracle password file from the initial node to the additional nodes, replacing the ORACLE_SID name in each password file appropriately.
13. Configure the net service entries for LOCAL_LISTENER and REMOTE_LISTENER in tnsnames.ora on each node.
14. Open the database.

15. Connect as the Oracle user on the secondary node.
16. Repeat steps 8 through 14.
17. Start the database listeners on all the nodes where the instances are running.
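As a hedged sketch of steps 9 and 11 for a hypothetical second instance (the tablespace name, sizes, and raw device paths are invented for illustration, and an automatic-undo source database is assumed), the SQL run through sqlplus might look like:

```shell
# Illustrative only: requires a running instance and suitable raw devices.
sqlplus -s "/ as sysdba" <<'EOF'
-- Step 9: undo tablespace for the second instance (automatic undo management)
CREATE UNDO TABLESPACE undotbs2 DATAFILE '/dev/raw/raw_undo2' SIZE 500M;

-- Step 11: a redo thread with two log groups for the second instance
ALTER DATABASE ADD LOGFILE THREAD 2
  GROUP 3 ('/dev/raw/raw_log2a') SIZE 100M,
  GROUP 4 ('/dev/raw/raw_log2b') SIZE 100M;
ALTER DATABASE ENABLE PUBLIC THREAD 2;
EOF
```

Each instance must own its own undo tablespace and redo thread, which is why these statements are repeated (with new names and thread numbers) for every instance added to the cluster.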

Init.ora Parameters for RAC
i. *.cluster_database=true
ii. *.cluster_database_instances=2
iii. <instance_name>.instance_name=<instance_name>
iv. <instance_name>.thread=1
v. <instance_name>.instance_number=1
vi. *.service_names=<db_name>
vii. *.remote_login_passwordfile=exclusive
viii. <instance_name>.local_listener="<instance_name>"
ix. *.remote_listener="<listener_name>"
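For concreteness, a minimal shared init.ora for a hypothetical two-node cluster (database RACDB, instances rac1 and rac2; all names and listener aliases below are invented for illustration) might look like this:

```
# Common parameters (apply to both instances)
*.cluster_database=true
*.cluster_database_instances=2
*.service_names=RACDB
*.remote_login_passwordfile=exclusive
*.remote_listener="LISTENERS_RACDB"

# Instance-specific parameters, prefixed by the SID
rac1.instance_name=rac1
rac1.instance_number=1
rac1.thread=1
rac1.local_listener="LISTENER_RAC1"

rac2.instance_name=rac2
rac2.instance_number=2
rac2.thread=2
rac2.local_listener="LISTENER_RAC2"
```

The starred entries apply to every instance, while SID-prefixed entries bind each instance to its own thread, instance number, and local listener.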
