Re: block chg tracking

From: Mladen Gogala <mgogala_at_yahoo.com>
Date: Sun, 15 Feb 2015 16:07:40 -0500
Message-ID: <54E10A9C.8060603_at_yahoo.com>



Oh, and it's not just Commvault. Most modern backup software, like Avamar, NetBackup and Tivoli, has de-duplication, as does VTL software like Data Domain. There are even "de-duplication appliances"; the one I have encountered most frequently is Quantum DX. Contrary to popular belief, de-duplication is more efficient than compression.

As for BCT, the problem is that restoring incremental backups takes time. One used to take incremental backups to speed up the backup and save storage. The price to pay was the recovery time, the most critical aspect of any backup strategy. Back when it was normally possible to take the system down over the weekend, that was good enough. If your company needs to maintain a constant web presence, 24x7x365, a longer recovery time may no longer be acceptable. There is usually an SLA regulating the required recovery time, and failure to comply is usually a resume-generating event.
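
For anyone following along, here is a minimal sketch of the BCT plus incremental setup being discussed. The tracking file path and the tag-free backup commands are only examples; adjust them to your environment:

    -- SQL*Plus: enable block change tracking (the file path is just an example)
    ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
      USING FILE '/u01/app/oracle/bct/change_tracking.f' REUSE;

    -- Verify that tracking is enabled
    SELECT status, filename FROM v$block_change_tracking;

    # RMAN: level 0 once, then level 1 backups read only the changed blocks,
    # because the change tracking file tells RMAN which blocks to scan
    BACKUP INCREMENTAL LEVEL 0 DATABASE;
    BACKUP INCREMENTAL LEVEL 1 DATABASE;

The reduced reads Jared and Ram mention come from the level 1 pass consulting the tracking file instead of scanning every datafile block.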

On 02/15/2015 03:14 PM, Ram Raman wrote:
>
> Mladen, why would you say the 'goal is not reducing the reads' in this
> context? I would prefer fewer reads (less CPU and IO) while doing
> backups. BCT is very handy not just for prod, but for several non-prod
> DBs too, where we do not pay for standby solutions like DG, etc. I
> will take BCT over Commvault for the DBs we maintain, most of them
> under a TB.
> What is the size of the DB for which Commvault's compression beats BCT
> significantly in recovery time?
>
>
> On Sun, Feb 15, 2015 at 11:24 AM, Mladen Gogala
> <dmarc-noreply_at_freelists.org> wrote:
>
> On 02/15/2015 11:47 AM, Jared Still wrote:
>
> Hi Mladen,
>
> Though dedup reduces the size on the backend, it does nothing
> for database impact.
>
> As you know BCT reduces the reads on the db by RMAN, so I am a
> little confused by this statement.
>
>
> That is true, but the goal is not reducing the reads; the goal is
> to restore the database in the shortest time possible. De-duplication
> also speeds things up, because everything is read but not
> everything is written. You get far fewer disk writes and, if the
> storage medium is connected over the network, far less network traffic.
> Given that a full backup must be restored, the difference is
> whether additional incremental backups will then need to be
> applied.
>
>
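
To make the restore-time trade-off concrete, here is a rough sketch of RMAN's incrementally updated backup strategy, which moves the incremental apply into the backup window instead of the restore window. The tag name is just an example:

    # RMAN: roll the previous level 1 into the image copy, then take a new level 1.
    # At restore time there is a single up-to-date copy to switch to, rather than
    # a level 0 plus a chain of incrementals to apply.
    RUN {
      RECOVER COPY OF DATABASE WITH TAG 'incr_upd';
      BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_upd' DATABASE;
    }

On the first run there is nothing to recover and the BACKUP command creates the initial image copy; once a level 1 exists, each subsequent RECOVER COPY merges it into the copy, so recovery time stays close to that of a full-copy restore.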

-- 
Mladen Gogala
Oracle DBA
http://mgogala.freehostia.com


--
http://www.freelists.org/webpage/oracle-l
