Re: Multiple DBs on One RAC - Adding New Nodes and Different Storage

From: Jeremy Schneider <jeremy.schneider_at_ardentperf.com>
Date: Fri, 27 Dec 2013 08:47:44 -0600
Message-ID: <CA+fnDAa-d2R9FfH_nLe7yrbaZ2m0SU4WoH2dxJxcXyD=zVJ71A_at_mail.gmail.com>



You might give some thought to the "server pools" concept; it sounds like you want two pools and one server which can join either, depending on the situation. I've done a lot of work recently with multiple databases on individual clusters (using the pool concept heavily), though I haven't yet had to dual-connect two SANs. I suspect you could get away with the connectivity you're describing, Tim's point about the quorum/voting disks and OCR notwithstanding.
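A rough sketch with 11.2 srvctl (pool and node names are made up; double-check the option syntax against the docs for your version):

  # a pool for the existing app (nodes 1-3) and one for the new app,
  # with node3 a candidate in both so it can float to whichever pool
  # needs it:
  srvctl add srvpool -g oldapp_pool -l 2 -u 3 -i 20 -n "node1,node2,node3"
  srvctl add srvpool -g newapp_pool -l 2 -u 3 -i 10 -n "node4,node5,node3"
  # a policy-managed database then just points at its pool:
  srvctl add database -d NEWAPP -o /u01/app/oracle/product/11.2.0/dbhome_1 -g newapp_pool
  srvctl config srvpool -g newapp_pool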

I would, however, suggest that you try to keep things as simple as possible. Multiple databases on a cluster gets very complicated very fast. You have to consider all kinds of resource sizing (SGA, semaphores, file handles, etc.) and make sure that each node has enough of each resource for every possible failover scenario with every database on the cluster. Otherwise, you might suddenly find that instances won't start when you get into a failover situation - or you have to customize resource usage on each node, and then your app will have to run in a degraded mode whenever its services fail over to a node with fewer resources allocated to the instance.

This is why the pools concept is so helpful when you start doing database consolidation on clusters: at the cost of slightly less flexibility in resource utilization, it elegantly solves many of these problems. Huge manageability win.
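To make the sizing point concrete: on each node I'd check the worst-case set of co-resident instances against memory, semaphores, and file handles, e.g. (the acceptable values are your environment's, not mine):

  # sum of SGAs + PGAs for every instance that could land here must fit:
  grep -i memtotal /proc/meminfo
  grep HugePages_Total /proc/meminfo
  # kernel.sem = SEMMSL SEMMNS SEMOPM SEMMNI; SEMMNS has to cover the
  # PROCESSES settings of all instances that could run concurrently:
  sysctl kernel.sem
  sysctl fs.file-max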

Just a few thoughts off the top of my head...

--
http://about.me/jeremy_schneider


On Thu, Dec 26, 2013 at 3:51 PM, David Barbour <david.barbour1_at_gmail.com> wrote:


> Fail-Over - not as in standby, but as in TAF for instance/node down.
> Unfortunately I don't have the option of putting all the storage on one
> platform or another. The VMAX lease is coming due and 'something' is going
> to change. Meanwhile, we've purchased Compellent and the dictate is the
> new application WILL be on Compellent. AND it's supposed to be a
> three-node RAC, but they've only purchased two servers. Node 3 of the
> existing RAC has a light enough load to take up the slack if Node 4 or Node
> 5 goes down. Or Node 1 or Node 2 for that matter. Or Node 2 and Node 5.
> You get the drift.
>
> Okay - I guess I've gotten enough feedback on this. Thanks. It's
> stimulated some thinking after too much food yesterday. I've got a couple
> of ideas. I'll let you know how it does or doesn't work out.
>
>
> On Thu, Dec 26, 2013 at 2:46 PM, Mark W. Farnham <mwf_at_rsiz.com> wrote:
>
>> My first question is “what do you mean by fail-over?”
>>
>>
>>
>> IF someone asked me about adding nodes to host a new application in a
>> shop with a new disk farm from a different vendor than the existing one,
>> I’d be inclined to keep them separate and make this a standby recovery
>> (home rolled, or possibly passive or active Data Guard, depending on your
>> licenses and on your requirements for time until application use resumes)
>> rather than increasing the complexity of the existing ASM complex.
>>
>>
>>
>> You would need to be able to write from lgwr or arch to some location
>> that the nodes on the existing cluster can see, but that seems to me a
>> much less demanding storage task than fully integrating new storage and
>> nodes into ASM. And since they would fail separately, I think your
>> overall availability would increase. Quite possibly the two new nodes
>> could serve as standby recovery targets for the existing cluster in case
>> that ASM complex fails.
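>>
>> Either flavor boils down to an archive destination on the new-node
>> primary; a sketch (service and path names here are hypothetical):
>>
>>   -- Data Guard flavor: ship redo over the network to the old cluster
>>   ALTER SYSTEM SET log_archive_dest_2 =
>>     'SERVICE=newapp_stby ASYNC
>>      VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
>>      DB_UNIQUE_NAME=newapp_stby' SCOPE=BOTH SID='*';
>>   -- home-rolled flavor: arch to an NFS path the old nodes can mount
>>   ALTER SYSTEM SET log_archive_dest_2 =
>>     'LOCATION=/mnt/shared_arch/NEWAPP' SCOPE=BOTH SID='*';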
>>
>>
>>
>> On the other hand, if the peaks and valleys of your applications across
>> time are such that some might benefit from 5 nodes enough of the time to
>> make it worth the trouble, then I think the best way to go is to fully
>> integrate the ASM, which requires, as Tim wrote, at least visibility to
>> the configuration and coherency bits of ASM. You might make disk groups
>> exclusively on the new storage, but at least one node of the first three
>> will need to be able to see those disks to facilitate RAC instance-type
>> failover (as opposed to standby failover).
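>>
>> E.g., a disk group carved only from the Compellent LUNs (device paths
>> hypothetical, assuming multipath names):
>>
>>   -- on an ASM instance that can see the new LUNs:
>>   ALTER SYSTEM SET asm_diskstring = '/dev/mapper/emc*', '/dev/mapper/cpl*';
>>   CREATE DISKGROUP NEWAPP_DATA EXTERNAL REDUNDANCY
>>     DISK '/dev/mapper/cpl*';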
>>
>>
>>
>> To make a real recommendation we’d need to know a lot about your mission,
>> goal, and resources.
>>
>>
>>
>> mwf
>>
>>
>>
>> *From:* oracle-l-bounce_at_freelists.org
>> [mailto:oracle-l-bounce_at_freelists.org] *On Behalf Of* David Barbour
>> *Sent:* Thursday, December 26, 2013 2:23 PM
>> *To:* oracle-l mailing list
>> *Subject:* Multiple DBs on One RAC - Adding New Nodes and Different
>> Storage
>>
>>
>>
>> Merry Christmas (to those of you who celebrate), Happy Holidays (to those
>> who may not but have something going on - including New Year's), and Good
>> Afternoon:
>>
>> We are currently running a 3-node RAC on RHEL 6.3 (kernel
>> 2.6.32-279.22.1.el6.x86_64), Oracle 11.2.0.3.0 on ASM. Storage is
>> fibre-connected EMC VMAX. Cluster cache communication is handled via
>> InfiniBand. We are adding a new application and two nodes. The question
>> arises with the storage. We're putting the new application on a Dell
>> Compellent SAN, also fibre-connected. The plan is to make the two new
>> nodes the primary nodes for this application, and make node 3 of the
>> current cluster the fail-over.
>>
>> Can anyone who has used multiple different SANs before provide any
>> hints/tips/tricks/pitfalls?
>>
>> Will we have to serve the current EMC storage to the new nodes? I don't
>> see why, as we're not expanding the number of instances of the currently
>> running application. But we will have to zone the new storage to connect
>> to the third node of the current cluster. Any thoughts on how to install
>> this new application? I don't really want instances of it on Nodes 1 & 2
>> at all. If I make Node 3 the master node and install from there, I should
>> be able to pick Nodes 4 & 5 on which to create the other new instances.
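>>
>> (I.e., something like this admin-managed layout - instance and node
>> names are just illustrative:
>>
>>   srvctl add instance -d NEWAPP -i NEWAPP1 -n node4
>>   srvctl add instance -d NEWAPP -i NEWAPP2 -n node5
>>   srvctl add instance -d NEWAPP -i NEWAPP3 -n node3  # fail-over node
>>
>> - DBCA's node-selection screen would accomplish the same.)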
>>
>> Any thoughts?
>>
>
>
-- http://www.freelists.org/webpage/oracle-l
Received on Fri Dec 27 2013 - 15:47:44 CET
