Re: creative use of storage snapshots.
Date: Wed, 22 Dec 2010 16:15:45 -0800
As an FYI, small side note: it's "let's make a deal" time at Delphix. If you have ever had the fun of closing an Oracle deal at their fiscal year end, you know what happens. Not only is it fiscal year end, but Delphix is also millimeters away from adding an extra figure to its end-of-quarter total, and I'm betting this is the last time prices this low will ever be seen - just my viewpoint from inside the castle.
If you can imagine the ease and savings of virtualizing databases, this might be of interest:
- Super-fast provisioning - three clicks and a few minutes to stand up a fully functional, point-in-time 10g/11g database copy/clone
- Storage savings - a new database copy basically consists of only some pointers plus the space needed for private redo and temp
Delphix doesn't require filesystem snapshots or EMC or NetApp arrays. It only requires an x86 box with about the same amount of disk space as the database you want to virtualize. The source database is copied onto the Delphix machine with RMAN calls, which validates the data as it is read; the data is then compressed by Delphix, and Delphix handles the snapshots and the provisioning of virtual databases. A virtual database can be provisioned from the original source copy, from any incremental snapshot, or from any SCN. You can then make as many copies as you want, within reason, for almost nothing in storage terms.
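For a rough mental model of the copy step above, an RMAN-driven pull onto a staging host might look something like the sketch below. The host name (dlpx01) and staging path are made-up placeholders - Delphix automates all of this, so this is a sketch of the underlying mechanism, not of its actual tooling.

```shell
#!/bin/sh
# Hedged sketch: initial image copy of a source database to a staging
# host over an NFS mount. Names below are hypothetical, not Delphix's.
export ORACLE_SID=PROD
rman target / <<'EOF'
# Image-copy backup of the whole database; RMAN checks each block as it
# reads it, which is where the data validation comes from.
BACKUP AS COPY DATABASE FORMAT '/net/dlpx01/delphix/stage/df_%U';
# Ship archived redo as well, so later snapshots can be rolled forward
# to any SCN when provisioning a virtual database.
BACKUP ARCHIVELOG ALL FORMAT '/net/dlpx01/delphix/stage/al_%U';
EOF
```

After the initial copy, only incremental changes and redo need to move, which is why subsequent snapshots are cheap.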
Here is a demo to give you an idea of how it works: http://delphix.com/resources.php?tab=product-demo
Apologies if this sounds like a sales pitch - it is. I'm excited about what Delphix is doing, but it's also a rare opportunity. Unlike the DB Optimizer opportunity, this deal is definitely a departmental-level buy, so I imagine it is only of interest to department heads who have some extra budget that will be lost at year end and are looking for a good way to use it.
PS: for these cut-rate deals, everything has to be in by next Friday at the
absolute latest. For more info contact
kaycee.lai_at_delphix.com and/or garrett.stanton_at_delphix.com
On Wed, Dec 22, 2010 at 11:27 AM, David Roberts < big.dave.roberts_at_googlemail.com> wrote:
> While I accept your primary argument that losing data on a SAN is
> difficult, I would also observe that the non-out-of-the-box precautions
> you have taken in addition to this also reduce the chances of data loss.
> Nevertheless, there are those that will be using non-SAN based replication
> (We in the past used SNDR from a local server to a remote server) and there
> are these persistent tales of data loss from SANs, the validity and
> numerical significance of which are difficult to judge.
> By nature, disasters tend to be unexpected and different in nature to those
> that you have tackled before.
> I do admit that at a certain level of protection it might not be cost
> effective or economically justifiable to implement the highest levels of
> data resilience for all organisations. However, the comfort that DG could be
> replicating my data between 2 systems at a higher level (than a SAN or
> operating system does) would give me a greater degree of confidence. In one
> case (with DG) I could be replicating from one manufacturer's hardware and
> operating system to another manufacturer's hardware and operating system. And
> I would tend to trust the highest level of replication (apart from bespoke
> replication coded by local developers and not implemented elsewhere) more
> than that provided by hardware providers.
> Always remember, 'There's software in those BIOSs'
> David Roberts
> On Wed, Dec 22, 2010 at 10:59 AM, Nuno Souto <dbvision_at_iinet.net.au>wrote:
>> That is an argument often invoked to support DG, but it doesn't take into
>> account how replication is done in most modern SAN devices.
>> For example: in our EMC we replicate the FRA with the last RMAN backup,
>> every 2 hours we archive redo logs to FRA and replicate them using
>> command-line initiated replication. Due to the way EMC does replication,
>> potential disparity between blocks will get corrected on the next send.
>> And it
>> has not happened once in the nearly 2 years we've been doing it!
>> Oh, BTW: it's not the disk controller that does that, it's a completely
>> different mechanism than the SAN disk io. I suspect whoever came up with
>> "danger" really hasn't used a late generation SAN.
>> I'll wear the risk of two consecutive transmission errors on FC - recall
>> that it
>> is subjected to parity and ECC as well - against what it'd cost us to get
>> IP-based connection resilient and performant enough to do DG at our volume
>> performance point. In fact, I know exactly what it'd cost us and it's
>> not feasible or cost-effective.
>> What Oracle should do is make DG independent of the transport layer. IE,
>> if I
>> want to use Oracle's IP-based transport, or ftp, or scp, or a script, or
>> navicli, or dark fibre non-IP, or carrier pigeons/smoke signals, it's
>> entirely up to me - let me do it. There is really no reason why DG has to
>> be tied to a single transport mechanism.
>> Nuno Souto
>> David Roberts wrote,on my timestamp of 22/12/2010 6:17 AM:
>>> One point that I don't see mentioned (unless I missed it): if you are
>>> using some form of block-level replication as a DR solution, what happens
>>> when the disaster is the disk controller writing garbage to your disk?
>>> If you are using DG, then depending on the type you will
>>> get varying early opportunities to spot the corruption, or opportunities
>>> to recover from it - opportunities that are lacking when you blindly have
>>> hardware copying data blocks.
>>> I agree that these are fine solutions for providing development and test
>>> environments, but I would suggest caution with regard to adopting these
>>> technologies for DR purposes.