Re: block chg tracking

From: Jeremy Schneider <jeremy.schneider_at_ardentperf.com>
Date: Thu, 12 Feb 2015 10:44:27 -0600
Message-ID: <CA+fnDAbsGMgnzy7mOUgjDP_TH7cRA-du4Fc0mTF5NoANGdgtyQ_at_mail.gmail.com>



On Thu, Feb 12, 2015 at 3:02 AM, Mladen Gogala <dmarc-noreply_at_freelists.org> wrote:
> However, for larger databases incremental backups are
> exceedingly rare. If something like hurricane Sandy happens, recovery will
> take much longer if incremental backups need to be restored.

In my experience, this is not true. Applying incrementals is a lot faster than rolling forward logs (the only alternative for replaying change between fulls), and I certainly haven't seen them become "rare" by any means.

Anyway, it's beside the point. The relevant question is how frequently you take fulls, not whether you use incrementals. There are a few strategies for taking fulls more frequently (independent of dedupe):

  1. you can use a physical standby and run more frequent fulls there
  2. you can use incrementals to roll forward an image copy on a backup appliance and take snapshots of it
  3. some appliances can receive logs and synthetically generate fulls - I think Oracle's backup appliance might do this, Delphix might do something similar too?
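For the record, #2 is what the RMAN docs call an incrementally updated backup. A minimal sketch (the tag name 'img_copy' is arbitrary; the first run creates the level 0 image copy, and each subsequent run takes a level 1 and rolls the copy forward):

```
RUN {
  # apply the previous cycle's incremental to the on-disk image copy
  RECOVER COPY OF DATABASE WITH TAG 'img_copy';
  # take a new level 1 destined to roll the copy forward next cycle
  BACKUP INCREMENTAL LEVEL 1
    FOR RECOVER OF COPY WITH TAG 'img_copy' DATABASE;
}
```

Point the disk channels at the appliance share and snapshot the copy after each cycle, and you effectively get a daily full for the cost of a daily incremental.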

More frequent fulls will of course reduce RTO. But weekly fulls really are good enough for most businesses' RTO requirements, especially if you switch from the default differential incrementals to cumulatives. With cumulatives, the incrementals have little impact on RTO unless you have a truly extraordinary rate of change.
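For anyone who hasn't made the switch: cumulative is just an extra keyword on the backup command (sketch; assumes a level 0 already exists):

```
# default: differential level 1 - changes since the most recent
# level 0 OR level 1, so a restore may need to apply several of them
BACKUP INCREMENTAL LEVEL 1 DATABASE;

# cumulative level 1 - all changes since the last level 0, so a
# restore applies at most one incremental after the full
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
```

Each cumulative is bigger than the corresponding differential, but restore only ever has to apply one of them.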

> At some point,
> it pays off to do only full backups, with de-duplication, despite the
> slightly larger amount of storage used.

My issue with dedupe - *especially* on large databases - is that the solutions I'm familiar with disallow RMAN compression and don't use the SBT interface.

The bottleneck in nearly all large restores I've seen over the past few years has been the network connection between the backup appliance and the target server. At best I've seen 10G connections; more often it's still 1G. So shipping uncompressed data over the wire would dramatically increase your recovery time.
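To illustrate what the dedupe appliances take away - RMAN-side compression is basically a one-liner (BASIC needs no extra license; MEDIUM and HIGH require the Advanced Compression Option, if I remember right):

```
# compress backupsets before they hit the wire; BASIC ships with
# the base license, MEDIUM/HIGH need Advanced Compression
CONFIGURE COMPRESSION ALGORITHM 'BASIC';
BACKUP AS COMPRESSED BACKUPSET DATABASE;
```

On a 1G link the CPU cost of decompressing during restore is almost always a better trade than moving raw blocks.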

An SBT interface could solve this, but I've mostly seen NFS or iSCSI used for dedupe with Oracle backups.

-Jeremy

--
http://about.me/jeremy_schneider
--
http://www.freelists.org/webpage/oracle-l
Received on Thu Feb 12 2015 - 17:44:27 CET