

RE: Symantec Storage Foundation for Oracle RAC

From: Caviness, Jay A <>
Date: Tue, 30 Oct 2007 10:22:11 -0500
Message-ID: <>

It depends on the definition of "disaster proof" and how much money a company is willing to spend on infrastructure. For local recovery, up to about 50 km realistically, RAC can be used in what are known as stretch clusters, but that requires very high bandwidth for both storage and the interconnect. I have worked with clients that separate the nodes in their clusters by a few blocks to get them out of the same building, or place them on opposite ends of a campus. That protects them from more localized events such as a building fire. For major events, a standby solution would be best, either employing Oracle's logical or physical standby (less of a bandwidth issue, though you still need enough to ship archive logs) or one of the modern versions of the old EMC SRDF-style solutions for SAN replication, which are quite expensive.  
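As a rough sketch of the archive-log-shipping piece of a physical standby (the TNS aliases STBY and PRIM below are hypothetical, and the values are illustrative only, not from any particular environment):

  -- On the primary: ship redo to the standby over Oracle Net
  -- (STBY is a hypothetical tnsnames.ora alias for the DR site)
  ALTER SYSTEM SET log_archive_dest_2 =
    'SERVICE=STBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
    SCOPE=BOTH;
  ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE SCOPE=BOTH;

  -- On the standby: name the primary (hypothetical alias PRIM) as the
  -- fetch-archive-log server so gaps get resolved after an outage
  ALTER SYSTEM SET fal_server = 'PRIM' SCOPE=BOTH;
  ALTER SYSTEM SET standby_file_management = AUTO SCOPE=BOTH;

The bandwidth question then comes down to redo generation rate versus the WAN link; ASYNC keeps the primary from waiting on the standby, at the cost of a small potential data-loss window.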

Most anything can be automated, but that depends on the system.  

It really comes down to how much playing the odds is worth compared to normal data availability.  



From: []
Sent: Monday, October 29, 2007 9:33 PM
Cc: Tim Gorman
Subject: Re: Symantec Storage Foundation for Oracle RAC


I'm not sure what you're trying to get at, and I don't really see why you wouldn't come right out and say it, but anyway...  

So, the particular solution I'm looking at employs the Global Clustering Option. Whatever happens at the primary site to cause a failure (earthquake, wildfire, etc.) that the company deems an outage would trigger the remote site to come online as the primary database.  


In what way does RAC (or any clustering solution, such as SFOR) protect against such threats?

<> wrote:


As Finn mentioned, I'm looking at this to provide HA. This is for a location in SoCal, so it's prone to earthquakes and, most recently, wildfires.  



RAC/SFOR/HACMP/VCS etc. are for high availability, not disaster recovery. As such, they cover the type of "disasters" that involve losing one server; anything beyond that you would need a DR setup to handle.  


On 10/29/07, Tim Gorman <> wrote:

Well... out of the whole range of possible (and probable) faults and failures, exactly what types of "disasters" does clustering such as RAC or SFOR protect against?

<> wrote:

> It has to be a very selective disaster for clustering (i.e. RAC, HACMP, etc.) to provide much protection.  


Sorry, I don't understand what that means.  

Ken <> wrote:

Dan, thanks for the feedback.  

We're trying to protect more than just the Oracle DB. While CRS and Data Guard work well to provide HA, they don't take into account the Siebel, IIS, etc. installs that form the entire application stack.  

With this solution, we're hoping to lower the TCO in the event of a disaster.  


I have used SFOR previously, though not the current versions and not with a 10g DB. I had no problems with the SFOR software.

If I were implementing a cluster today with 10g, I wouldn't use any non-Oracle clusterware. Instead, I'd just use Oracle Clusterware, as it provides all the HA you'll need for the DB. Maybe you have other reasons for using SFOR... I hope you do, because I couldn't justify the investment given the current architecture.
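For what it's worth, a minimal sketch of what running the DB on plain Oracle Clusterware looks like day to day in 10g (ORCL and orcl_svc are hypothetical names):

  # Check that Clusterware itself is up on this node
  crsctl check crs

  # Show all CRS-managed resources (instances, listeners, VIPs, ASM)
  crs_stat -t

  # Manage the database and its services through srvctl
  srvctl status database -d ORCL
  srvctl relocate service -d ORCL -s orcl_svc -i ORCL1 -t ORCL2

That covers instance failover and service relocation within the cluster, which is the HA piece; it says nothing about site failover, which is where Data Guard or GCO-type products come in.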

Others have posted similarly on this list and in OTN forums as well.


We're looking to implement Symantec Storage Foundation HA for Oracle RAC to offer HA for our Oracle 10g RAC on RHEL.  

Oracle has fully certified most of the components within this Symantec solution, except for the automatic failover piece (GCO).

This component is certified on all platforms except for Red Hat, which could have something to do with Oracle's OEL initiative.  

Is anyone using this or any other Symantec SF products without any issues?  


Received on Tue Oct 30 2007 - 10:22:11 CDT
