10gR2 New Features: RAC Enhancements
10gR1 revamped Oracle clustered database management and features. 10gR2 builds on this success with a long list of improvements and enhancements. Oracle has streamlined the installation process and provided more filesystem options, made some performance and monitoring improvements, and improved manageability with a half-dozen administration enhancements. This article will take a look at the major changes.
Before we begin, let's cover the basic nomenclature and documentation changes.
Oracle CRS (Cluster Ready Services) is now known as Oracle Clusterware. You can now use Oracle Clusterware for single-instance Oracle databases within clustered environments.
The non-platform-specific Oracle RAC and CRS documentation has been merged into a single book, Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide.
Many Real Applications Clusters use ASM for their database and recovery files, so the improved ASM installation process is welcome news. OUI and DBCA can now cooperate to configure an ASM instance right after installing ASM in its own ORACLE_HOME. You can choose your diskgroup, choose redundancy, and add disks right in OUI. This streamlines the install process. If you then invoke DBCA to create a database in a different ORACLE_HOME, DBCA automatically picks up on ASM running in the other ORACLE_HOME. (However, DBCA still defaults to creating a database in the same ORACLE_HOME that ASM uses.)
In addition, 10gR2 offers the ability to consolidate ASM storage on each node. The RAC instances and the single-instance databases on a node can now share one common ASM instance on that node.
OCFS is Oracle's homegrown cluster filesystem (CFS), and the only CFS supported for RAC on Linux and Windows. Previously, OCFS was available only on those two platforms, but Oracle has said that a Solaris port will follow the 10gR2 release. I haven't yet seen any firm dates or evidence that it's available, but when it comes out, it will provide another filesystem option for Solaris DBAs. (You can read more about the filesystems available for RAC on each platform in my OraFAQ article, RAC Filesystem Options.)
You can now use OUI to clone RAC nodes and clusterware. The R2 Oracle Universal Installer comes with a Perl script, $ORACLE_HOME/clone/bin/clone.pl, which automates the cloning process. A companion script, prepare_clone.pl, prepares the source ORACLE_HOME for cloning by archiving and compressing it; you copy the files over yourself, unarchive them, and run clone.pl.
This is now the preferred method of adding nodes and instances to RAC databases: simply clone an existing node. This significantly improves the manageability and scalability of deployments over 10gR1.
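The end-to-end cloning workflow looks roughly like the sketch below. The archive name, host name, and ORACLE_HOME paths are illustrative; check the 10gR2 OUI guide for the exact arguments on your platform.

```shell
# On the source node: archive and compress the ORACLE_HOME for cloning.
# (prepare_clone.pl and clone.pl ship under $ORACLE_HOME/clone/bin.)
cd $ORACLE_HOME/clone/bin
perl prepare_clone.pl

# Copy the resulting archive to the new node -- host and path are examples.
scp /tmp/oracle_home.zip newnode:/tmp/

# On the new node: unpack into the new ORACLE_HOME, then run clone.pl,
# which registers the cloned home with the inventory and relinks it.
unzip /tmp/oracle_home.zip -d /u01/app/oracle/product/10.2.0/db_1
cd /u01/app/oracle/product/10.2.0/db_1/clone/bin
perl clone.pl ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1 \
              ORACLE_HOME_NAME=OraDb10g_home1
```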
You can also use EM Grid Control to clone a RAC node in a multi-node cluster to a new single-node RAC cluster. The target host must have CRS installed. Then, you can use Grid Control's Clone Oracle Home tool to clone any node of your existing cluster into a new single-node RAC on the target machine.
Silent install is now supported for RAC installations. (A silent installation is a non-interactive installation: instead of choosing options in GUI dialog boxes, the DBA supplies a text file of the responses he or she would give to those dialog boxes, known as a response file.) Run orainstRoot.sh, then call the installer (setup.exe on Windows, runInstaller on UNIX) with your response file. Use the new -formCluster flag to install Oracle Clusterware. You can specify which nodes in the cluster to install onto using the new -local flag and the CLUSTER_NODES, REMOTE_NODES, and LOCAL_NODE session variables.
Silent install support for RAC, like RAC cloning, is a key improvement for large deployments. It's much faster to install the software on one node, create response files during the installation, and then use those response files to set up the rest of the nodes.
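On UNIX, a silent Clusterware install might look something like this sketch; the inventory path, response file, and node names are placeholders, and the exact option set is documented in the OUI guide.

```shell
# Run the inventory root script first, as root.
/u01/app/oraInventory/orainstRoot.sh

# Drive the Clusterware install from a response file. -formCluster tells
# OUI that this installation forms a cluster; CLUSTER_NODES names the
# nodes to install onto.
./runInstaller -silent -responseFile /stage/crs.rsp -formCluster \
    "CLUSTER_NODES={node1,node2,node3}"
```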
For more information on installing and setting up Oracle in a clustered environment, refer to Chapter 6 of the 10gR2 Oracle Universal Installer and OPatch guide.
You can now have Oracle multiplex the OCR and Voting Disk by simply choosing this option during the Clusterware installation. The Voting Disk no longer requires redundant storage if you choose this option; Oracle manages the redundancy itself. Similarly, you can also choose to have Oracle mirror the OCR. If you don't mirror the OCR at install time, you can go back and mirror it later.
You now have the choice of upgrading your ASM and database instance at the same time, or separately. The DBUA will also automatically handle the upgrade of a 10gR1 listener to a 10gR2 listener, and likewise for a 10gR1 Database Control configuration.
Parallel join bitmap filtering is a new optimizer feature that improves performance on RAC by reducing the amount of data traffic: it reduces the amount of data sent by the right side of a join, based on a bitmap created by the left side of the join. You can control this behavior with two new optimizer hints: PX_JOIN_FILTER forces the optimizer to use parallel join bitmap filtering, and NO_PX_JOIN_FILTER prevents it.
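As a sketch, the hint takes a table alias from the join; the schema, tables, and credentials below are hypothetical.

```shell
# Hypothetical parallel query: force bitmap filtering for the join to s.
# (Swap PX_JOIN_FILTER for NO_PX_JOIN_FILTER to suppress it instead.)
sqlplus -s scott/tiger <<'EOF'
SELECT /*+ PARALLEL(s) PX_JOIN_FILTER(s) */ d.dname, COUNT(*)
FROM   dept d JOIN sales s ON s.deptno = d.deptno
GROUP  BY d.dname;
EOF
```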
Oracle has expanded EM monitoring features in a handful of RAC-related areas. For instance, performance monitoring has been improved to allow for better monitoring of a larger number of nodes: the Performance page now shows max, min and average loads across the cluster hosts, not just average load per node.
Previously, Transparent Application Failover (TAF) was configured on the client side, meaning that the client had to supply a lengthy TAF connection string. Basic TAF can now be configured on the server side by using dbms_service.modify_service to define a failover policy for a service. If an instance providing that service fails, then clients connected to that service will be failed over to another instance providing that service. The client does not need to provide a special connect string.
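For example, a basic server-side TAF policy can be defined with a call along these lines; the service name and the retry/delay values are illustrative.

```shell
sqlplus -s / as sysdba <<'EOF'
BEGIN
  -- Define a BASIC failover policy for an existing service. Sessions
  -- connected through this service fail over to a surviving instance,
  -- retrying up to 180 times at 5-second intervals.
  DBMS_SERVICE.MODIFY_SERVICE(
    service_name     => 'oltp_svc',
    failover_method  => DBMS_SERVICE.FAILOVER_METHOD_BASIC,
    failover_type    => DBMS_SERVICE.FAILOVER_TYPE_SELECT,
    failover_retries => 180,
    failover_delay   => 5);
END;
/
EOF
```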
Oracle 10gR1 established services and the AWR as the basis for RAC workload management. Services allow granular definition of workload, which can then be spread among instances with connection load balancing.
10gR2 extends the 10gR1 building blocks of services and connection load balancing with the new High Availability Framework. 10gR2 provides the Load Balancing Advisory, which monitors workload activity across the cluster and, for each instance providing a given service, calculates a percentage value indicating how much of the incoming workload for that service should be routed to that instance. These values are recorded in the AWR and published as Fast Application Notification (FAN) events. The easiest way to make use of FAN events is to use a client that's integrated with FAN: OCI session pools, CMAN session pools, and JDBC and ODP.NET connection pools.
Another new feature, Runtime Connection Load Balancing, is tightly integrated with the new Load Balancing Advisory. It balances work requests across the instances running the required service, selecting the best instance to process a given request based on the workload metrics and the policies you've established. You can use it with JDBC and ODP.NET connection pools.
To take advantage of the FAN events published by the Load Balancing Advisory, you can use Runtime Connection Load Balancing, or applications can subscribe directly to the FAN events.
Enable the Load Balancing Advisory and Runtime Connection Load Balancing by setting a goal on the server side (e.g., service_time or throughput), making sure your clients use a connection pool, and, on the client side, enabling FastConnectionFailoverEnabled (JDBC) or setting "Load Balancing=true" (ODP.NET).
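On the server side, the goal is set per service. For instance (service name illustrative):

```shell
sqlplus -s / as sysdba <<'EOF'
BEGIN
  -- Enable the Load Balancing Advisory for this service by setting a
  -- goal: GOAL_SERVICE_TIME for response-time-sensitive workloads, or
  -- GOAL_THROUGHPUT for batch-style workloads. CLB_GOAL controls
  -- connection load balancing for short- vs. long-lived connections.
  DBMS_SERVICE.MODIFY_SERVICE(
    service_name => 'oltp_svc',
    goal         => DBMS_SERVICE.GOAL_SERVICE_TIME,
    clb_goal     => DBMS_SERVICE.CLB_GOAL_SHORT);
END;
/
EOF
```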
See Chapter 6 of the Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide for more information on the Load Balancing Advisory, Runtime Connection Load Balancing, and FAN events.
You can now use Fast Connection Failover (FCF) with any FAN-integrated client, such as JDBC, OCI, or ODP.NET.
Fast-Start Failover, a new feature of the Data Guard broker, fails over to a standby database automatically, with no manual intervention. This is relevant to RAC configurations using the Maximum Availability Architecture (MAA), i.e., RAC + Data Guard. With this configuration, Fast-Start Failover initiates failover only when no instances of the primary database are available. See Section 5.5 of the Oracle Data Guard Broker manual for more information on Fast-Start Failover.
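The feature is switched on through the broker's command-line interface, roughly as below; the connect string is a placeholder, and a separate observer process (START OBSERVER) must be running for failover to actually engage.

```shell
# Enable Fast-Start Failover in an existing broker configuration,
# then display its status.
dgmgrl sys/password@prim <<'EOF'
ENABLE FAST_START FAILOVER;
SHOW FAST_START FAILOVER;
EOF
```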
10gR2 introduces a major ASM manageability improvement: the ASM command-line utility, asmcmd. Using asmcmd, DBAs can view and manage ASM files and directories using familiar UNIX-style commands like cd and ls. This makes ASM an even stronger choice for RAC datafile and recovery-file storage.
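A quick sketch of non-interactive use (the instance name and diskgroup path are examples for a typical first node):

```shell
# asmcmd can be run interactively or one command at a time.
# ORACLE_SID must point at the ASM instance, typically +ASM1 on node 1.
export ORACLE_SID=+ASM1
asmcmd ls -l          # list the contents of the ASM root
asmcmd du +DATA/ORCL  # space used under a diskgroup directory
asmcmd lsdg           # summarize diskgroups, redundancy, and free space
```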
asmcmd is covered in more detail in this OraFAQ article.
If a command-line utility isn't your style, Oracle has enhanced DBCA for this release so that it can not only create an ASM instance, but also manage ASM and its diskgroups. This is a stand-alone capability -- you don't need to start a database creation in order to access it.
If you use an Oracle-mirrored OCR (see above), then you can use ocrconfig to replace, repair or remove an OCR location while Oracle Clusterware is running and using the OCR. Just make sure one of the other OCR locations is online.
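For example, moving the OCR mirror to a new location can be done online, along these lines (the device path is illustrative; ocrconfig must be run as root):

```shell
# Point the OCR mirror at a new device while Clusterware stays up;
# the other OCR location must remain online during the operation.
ocrconfig -replace ocrmirror /dev/raw/raw5

# Verify OCR integrity afterwards.
ocrcheck
```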
Oracle Label Security is now usable on RAC. If a policy is created or changed on one node, the changes are available to all other instances in the RAC immediately. No restarts are required. And, session security settings are preserved by Transparent Application Failover.
The CVU (Cluster Verification Utility) is a broadly useful new command-line utility that you can use to verify a number of RAC components and to compare nodes. You can verify system requirements, storage, connectivity, users and permissions, nodes, installations, clusterware components, and cluster integrity. The CVU is used during the install process; it can also be used in troubleshooting, and to verify your environment when you're performing administrative operations on the cluster such as storage management and node addition. The CVU is all the more useful because, unlike some other Oracle verification tools such as DBV, it can be run at any time; it doesn't adversely affect the environment or installed software. You can even run it before you've actually installed Oracle Clusterware.
The command is cluvfy; see Appendix A of the Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide for syntax and further information.
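A few representative invocations (node names are placeholders):

```shell
# Verify prerequisites before installing Oracle Clusterware.
cluvfy stage -pre crsinst -n node1,node2 -verbose

# Check node connectivity across all cluster nodes at any time.
cluvfy comp nodecon -n all

# Verify shared-storage accessibility from the named nodes.
cluvfy comp ssa -n node1,node2
```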
Oracle has improved diagnostics for Oracle Clusterware, with enhanced logging and more detailed diagnostic information.
You can now use Oracle Clusterware to manage other applications, not just your database. You can define how Oracle Clusterware should monitor your application and how it should respond to any changes in its status. This information is stored in the OCR. You can then use standard Oracle Clusterware commands to manage your application. See Chapter 14 of the Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide for more information.
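A minimal sketch of registering an application, assuming a resource named "myapp" and an action script (which Clusterware calls with start, stop, and check arguments) -- both names are hypothetical:

```shell
# Create a profile for the application resource: -a names the action
# script, -p the placement policy, -h the hosting member(s).
crs_profile -create myapp -t application \
    -a /u01/crs/scripts/myapp.scr -p favored -h node1

# Register the profile in the OCR, then start the resource.
crs_register myapp
crs_start myapp

# Show the state of all registered resources in tabular form.
crs_stat -t
```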
Oracle provides even more clusterware flexibility in this release: it has released a C API to Oracle Clusterware. You can use the API to register and manage resources; it communicates directly with the crsd process using IPC.
Natalka Roshak is a senior Oracle and Sybase database administrator, analyst, and architect. She is based in Kingston, Ontario. More of her scripts and tips can be found in her online DBA toolkit at http://toolkit.rdbms-insight.com/.