Re: how much cpu on a database rac cluster

From: Mladen Gogala <>
Date: Mon, 27 Aug 2012 02:25:10 +0000 (UTC)
Message-ID: <>

On Sun, 26 Aug 2012 17:49:25 -0700, dshproperty wrote:

> 32 core on each nodes. All nodes are equal.
> What do you mean each database on each cluster. So for 6 databases, we
> would need 6 clusters?

No, you should consolidate all 6 databases into a single database. Running 6 databases on a single grid is an enormous waste of resources: you have 6 SGA areas on each node, 6 LGWR processes, 6 DBWR processes, 6 ARCH processes, and 6 global lock areas, so you are wasting far too much memory and far too many semaphores. It also makes monitoring much harder, because when you have a runaway query, you first need to find the database it is executing on. Let me conclude with a biblical reference to the words of St. Thomas of Oracle:

and we said...
o do I support this

nope, not at all. One machine, one host - ONE INSTANCE

o any performance issues?

all of the time, every day, in every way. Think about it - how could it not be? Instance db1 takes 100% of the CPU on node1, and queries against db2 are reported as slow. db2 reports "all is well in the world". Now what?

one machine/host, one instance

o sure you can, but there are LOTS of things you "can" do. There are fewer things that you "should do"

and having more than one instance per host (especially with RAC, where you'd be defeating the purpose entirely) is not one of them.

o nothing could help this, short of a decision to go with one instance per machine.

Those are the words of St. Thomas of Oracle. He's a great prophet, Larry speaks through him, and the words of Tom are the words of Larry. You should do as he commands or thou shalt be cast to Redmond, WA forever and ever, and never have a chance to watch the America's Cup from Scoma's restaurant over scallop risotto and chardonnay.
For those who don't know, Scoma's is an excellent fish restaurant in San Francisco, located on a pier with a beautiful view of the harbor. I wholeheartedly recommend it.
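Stepping back from the sermon: the resource-multiplication argument at the top of this reply is easy to put in rough numbers. Below is a toy back-of-envelope sketch, not anything Oracle-specific; the per-instance process count and all sizes are hypothetical illustrations, not figures from this thread.

```python
# Toy model: every database runs one instance (one SGA plus a fixed
# set of background processes such as PMON, SMON, LGWR, DBWR, ARCH)
# on every node of the cluster. The count of 5 background processes
# per instance is a deliberately simplified, made-up figure.

PROCESSES_PER_INSTANCE = 5  # hypothetical: PMON, SMON, LGWR, DBWR, ARCH

def sga_count(databases: int, nodes: int) -> int:
    """Number of separate SGA memory areas across the whole cluster."""
    return databases * nodes

def background_processes(databases: int, nodes: int) -> int:
    """Number of background processes across the whole cluster."""
    return databases * nodes * PROCESSES_PER_INSTANCE

# 6 separate databases on a 3-node cluster versus one consolidated database:
print(sga_count(6, 3), background_processes(6, 3))  # 18 SGAs, 90 processes
print(sga_count(1, 3), background_processes(1, 3))  # 3 SGAs, 15 processes
```

The totals scale linearly with the database count, which is the point of the "one machine, one instance" rule: consolidation pools the same memory into one SGA per node instead of partitioning it into six fixed slices, and leaves one set of background processes per node to schedule.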

Mladen Gogala
Received on Sun Aug 26 2012 - 21:25:10 CDT
