Re: Clusterware tests

From: Alessandro Vercelli <>
Date: Fri, 2 Oct 2009 18:13:04 +0200
Message-Id: <KQWADS$>

Hi Michael,

many thanks for your comments and remarks.

In the meanwhile the project team refined the requirements: we want to sell our customer an HA cluster with 4 Oracle DBs and 1 PostgreSQL; each DB must have its own vip (5 vips, then). High computation power is not considered a problem, but software costs are very important. In production, we can rely on two servers with 2 x 4-core CPUs and 12 GB of RAM each. Furthermore, the cluster will normally be managed by 24x7 operators who have little experience with Oracle and clusters.

So we chose Oracle Standard + Clusterware instead of Enterprise + RAC; a common, standardized (and possibly simple) set of commands is also required so the operators can manage the clustered resources.

I used OCFS instead of ASM because of PostgreSQL; of course I could use ASM for Oracle and OCFS for Postgres, but standardization won out.

With these fixed points in mind, I began to build a test/development cluster for internal use, and possibly as a reference for production.

Based on what you said, I recreated the nodeapps and tried a new solution. Strangely, when I installed Clusterware the listeners were not automatically created; I had to create them with netca, with the cluster option, if I wanted them.

First, I created 3 new crs profiles for the remaining vips.
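For reference, registering an application vip with usrvip follows the pattern from the Oracle whitepaper; this is a hedged sketch, not a tested command set: the interface name, address and netmask are placeholders, and the resource name myvip3 is hypothetical.

```shell
# Create an application-type profile whose action program is usrvip;
# oi = interface, ov = vip address, on = netmask (all placeholders here).
crs_profile -create myvip3 -t application \
    -a $ORA_CRS_HOME/bin/usrvip \
    -o oi=eth0,ov=192.168.1.53,on=255.255.255.0

# Register the profile with CRS; usrvip must run as root,
# while the oracle user only needs permission to start/stop it.
crs_register myvip3
crs_setperm myvip3 -o root
crs_setperm myvip3 -u user:oracle:r-x
```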

The first 2 DBs and listeners were bound into resource groups and the original vips with the following schema:

( A <- B means: B depends on A)

node1-vip <- stop_group1 <- listener1 <- database1 <- start_group1

The two groups are made of the same resources (listener and db) in reverse order of dependency.
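Expressed as profile attributes, the chain above might look like the following fragments (a sketch: only the REQUIRED_RESOURCES lines are shown, and the exact resource names are assumptions):

```
# stop_group1.cap -- sits directly on the node vip
NAME=stop_group1
TYPE=application
REQUIRED_RESOURCES=node1-vip

# listener1.cap
REQUIRED_RESOURCES=stop_group1

# database1.cap
REQUIRED_RESOURCES=listener1

# start_group1.cap -- top of the chain
REQUIRED_RESOURCES=database1
```

Because CRS starts required resources before their dependents, and stops dependents before the resources they require, starting the top of the chain pulls everything up, while force-stopping the bottom takes everything down.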

In this way, the operator can use the following commands:

crs_start start_group1 -f -c <node> (for starting the listener and db in the correct order, and also stop_group1; the vip is already active)

crs_stop stop_group1 -f -c <node> (for stopping the whole group)

crs_relocate start_group1 -f -c <node> (for relocation)

The same on node2.

It's better to run the above commands with the -f option, to force a vip to relocate if needed.

For the remaining DBs, the remaining 3 vips were created with usrvip; for Oracle I created two more resource groups as above, whereas the PostgreSQL db service (which doesn't use a listener) is treated as a standalone resource with no group. The vips are not part of any group and, most probably, will be configured to autostart.
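If the vips should come up with CRS automatically, the profile's AUTO_START attribute can be set; a hedged sketch (myvip3 is a hypothetical resource name, and as=1 is the crs_profile shorthand for AUTO_START=1 -- verify the exact option letters against your release):

```shell
crs_profile -update myvip3 -o as=1   # AUTO_START=1: start together with CRS
crs_register -u myvip3               # push the updated profile into the registry
```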

One of the scripts in the Oracle whitepaper raised an error, so I decided to rewrite them in bash (which I know much better than perl).
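As an illustration of such a rewrite, here is a minimal bash sketch of a CRS action script for the PostgreSQL resource; the pg_ctl path, PGDATA location and function name are my assumptions, not taken from the whitepaper:

```shell
#!/bin/bash
# Minimal CRS action-script sketch for a PostgreSQL resource.
# PGCTL and PGDATA are assumptions -- adjust for the real installation.
PGCTL=${PGCTL:-/usr/bin/pg_ctl}
PGDATA=${PGDATA:-/u02/pgdata}

pg_action() {
    case "$1" in
        start) "$PGCTL" -D "$PGDATA" -w start ;;
        stop)  "$PGCTL" -D "$PGDATA" -m fast stop ;;
        check) "$PGCTL" -D "$PGDATA" status >/dev/null 2>&1 ;;
        *)     echo "usage: $0 {start|stop|check}" >&2
               return 1 ;;
    esac
}

# In the real script the last line is:  pg_action "$@"
# CRS reads the exit status: 0 = success, non-zero = failure.
```

The crs_profile -a option would then point at this script, exactly as it points at the perl versions in the whitepaper.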

At this stage of development, I believe this will be the definitive implementation.

The OS is RHEL 5.3.


>Hi Alessandro,
>I understand that you are trying to build on top of Oracle cluster something
>similar to what we have in RH cluster, HP SG cluster,Veritas or Sun cluster.
>In all cluster systems mentioned above for a classic single database (not
>RAC) we always created a package that included all required resources , like
>floating IP,DB startup/shutdown/check scripts and the same for the listener.
>As you can see from this paper this is possible to do in Oracle cluster as
>well but still this method is not officially supported , at least the
>example that is published in Oracle paper
>Please do not call Oracle Support to discuss the scripts in this paper, this
>is an un-supported example
>In my company we discussed the option of starting to use Oracle cluster
>instead of RH cluster, for example, but found very quickly that at this point
>Oracle cluster is not yet mature and robust enough to function as a normal
>cluster for arbitrary applications.
>Oracle cluster is very good for RAC and ASM database management , it was
>designed for this purpose, it has all required predefined resources and
>dependencies for this.
>But this is not the same situation when you are trying to use Oracle cluster
>software for something else.
>srvctl utility is usually used to register and operate CRS predefined
>resources like Oracle database or ASM , when you do this you do not have to
>write startup/shutdown scripts by your own.
>In this example it was suggested to treat database as any other external
>resource that was not predefined by Oracle and supply to the cluster
>startup/shutdown/check scripts.
>Deleting nodeapps was not such a good idea, because the nodeapps (gsd, default
>vip and default listener) are an integral part of the CRS resources. What about
>the CRS log files, do you see any critical errors after you deleted nodeapps?
>By the way , what OS are we talking about ?
>On Wed, Sep 30, 2009 at 12:06 PM, Alessandro Vercelli <>wrote:
>> Hello everybody,
>> I'm building my first Clusterware couple of nodes (RHEL 5.3, Clusterware
>> 11gR1, OCFS2 on ISCSI) with 2 Oracle 11gR1 databases (_single instance_);
>> one db for each node, with failover on the other one.
>> My aim is to create a resource group for each DB (vip + listener + db
>> service) with dependencies, so that I can start/stop/relocate the whole
>> resource group; I used an Oracle whitepaper as a brief reference.
>> Here my experiments (with crs commands):
>> 1. When I installed (with OUI) Clusterware, it created automatically a
>> couple of VIPs, one for each node. These are nodeapps, so I cannot modify
>> them to depend on a main resource group; so I left these vips and created
>> a resource group listener+db with a dependency on the vip, so when I
>> relocate the vip the resource group follows.
>> 2. To get around the vip situation, I deleted nodeapps on both nodes and
>> created a resource group with vip(usrvip)+listener+db; now I manage these 3
>> resources with correct start/stop order, using a single resource group name.
>> Afterwards I found some links that say to use srvctl for Oracle databases,
>> but it seems to me this tool is for clustered databases, maybe not single
>> instance?
>> Are there any suggestions for this implementation?
>> Many thanks,
>> Alessandro
>> --
>Best Regards
>Michael Elkin

Received on Fri Oct 02 2009 - 11:13:04 CDT
