
Re: add node , oracle software a bit error

From: Dan Norris <dannorris_at_dannorris.com>
Date: Tue, 10 Jul 2007 21:15:55 -0700 (PDT)
Message-ID: <71583.91286.qm@web35404.mail.mud.yahoo.com>


Hi Alex,

Thanks for the notes.

While I agree that keeping the number of OHs somewhere between one and the number of nodes creates complexity, I don't think I'd advocate one OH per node in a "large" cluster. I do know that managing CRS can be complicated by the need to keep careful track of which OH each instance runs from, but CRS won't have any problem with that configuration--the admins just have to track it carefully. I don't think that's any more complicated than doing rolling upgrades with a new OH and "switching" the OH configuration in CRS to the new OH.
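
For reference (assuming a made-up database name RACDB, an invented path, and roughly 10gR2 syntax), checking and switching the registered OH is along these lines:

  srvctl config database -d RACDB            # shows nodes, instances and the Oracle home CRS knows about
  srvctl modify database -d RACDB -o /u01/app/oracle/product/10.2.0/db_2   # re-point the database resource at the new OH

Exact options vary by version, so treat that as a sketch rather than a recipe.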

To the point about rolling upgrades: when I think about rolling upgrades, I think of a process involving a) shut down an instance, b) apply a patch, c) start the instance, d) repeat on the next node. Using a new OH changes things quite a bit and requires some manipulation of CRS that isn't required in what I'd consider the "normal" process for rolling upgrades.
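
Just to make that sequence concrete (the database/instance names RACDB/RACDB1 and the patch directory are made up, and the syntax is roughly 10gR2), I mean something like:

  srvctl stop instance -d RACDB -i RACDB1     # a) shut down the instance on this node
  cd /tmp/patch_1234567                       #    staged one-off patch (hypothetical location)
  $ORACLE_HOME/OPatch/opatch apply            # b) apply the patch to this node's non-shared OH
  srvctl start instance -d RACDB -i RACDB1    # c) start the instance again
                                              # d) repeat the same steps on the next node

Nothing in that loop touches the OH registered in CRS.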

Ultimately, I think we're saying essentially the same things--just in different ways. The main difference I perceive is that, for "large" numbers of nodes, I would advocate more than one OH, but fewer than one OH per node. I don't think the CRS management is really that serious an issue in an environment that's already fairly complex.

I welcome your additional thoughts and any others from the list.

Dan

Dan, see a few notes inline.

On 7/10/07, Dan Norris <dannorris_at_dannorris.com> wrote:
> In the scenarios involving larger numbers of nodes, I would advocate *some*
> sharing of OHs, but never a single OH for the whole cluster. In discussions

Well, it's a bit confusing to keep OH1 for nodes 1 and 2 and OH2 for nodes 3 and 4.
In fact, CRS configuration for a RAC database with instances running from different homes will be a mess.
When you register a RAC database with CRS, you specify the Oracle home in "srvctl add database ...". When you later add instances, you don't have an option to specify the Oracle home again with "srvctl add instance".
It will work when you do it manually, but CRS would most probably be "confused".
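
Roughly (the database/instance/node names and the path here are invented, and the syntax is 10gR2-ish):

  srvctl add database -d RACDB -o /u01/app/oracle/product/10.2.0/db_1   # the only place the Oracle home is given
  srvctl add instance -d RACDB -i RACDB1 -n node1                       # no option here to name a different home
  srvctl add instance -d RACDB -i RACDB2 -n node2

So a per-node (or per-pair-of-nodes) home simply has no place in the srvctl registration.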

> I'd offer a clarification on the last point about rolling patches.
> You said that it is easier to apply rolling patches--I think you mean to say
> that it is *possible* to apply rolling patches with non-shared OHs. I
> suppose that you could apply rolling patches with a shared home, but, at
> least for part of the process, you don't have a shared OH.

You can do that by installing a completely new Oracle home and applying the patch there. Then you switch the databases over to it one by one. You shouldn't forget the CRS configuration issue I mentioned above: at the end you will need to remove all resources from CRS and re-register them with the new home. Possibly you can update them using "crs_stat -p" + "crs_register", but I noticed that for databases and instances it's not just a matter of registering .cap files -- srvctl does something else somewhere, so it should be used instead.
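
To illustrate the re-registration (names and paths are invented, the syntax is 10gR2-ish, and the resources would normally be stopped first):

  srvctl remove instance -d RACDB -i RACDB1
  srvctl remove instance -d RACDB -i RACDB2
  srvctl remove database -d RACDB
  srvctl add database -d RACDB -o /u01/app/oracle/product/10.2.0/db_2   # the freshly patched home
  srvctl add instance -d RACDB -i RACDB1 -n node1
  srvctl add instance -d RACDB -i RACDB2 -n node2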

> So, I still haven't found a compelling reason to use shared OHs. I'm all
> eyes for anyone that can make the good case for it.

Well, on development systems and relatively low-profile production systems. Also with certain architectures that are centered around simple provisioning of additional nodes. However, those still suffer from all the limitations mentioned.

Cheers,
Alex

-- 
Alex Gorbachev, Oracle DBA Brewer, The Pythian Group
http://www.pythian.com/blogs/author/alex http://www.oracloid.com
BAAG party - www.BattleAgainstAnyGuess.com





