Re: block chg tracking

From: Ram Raman <veeeraman_at_gmail.com>
Date: Sun, 15 Feb 2015 14:14:04 -0600
Message-ID: <CAHSa0M0fQoCdQoJiqWLnQ2BupVy=74JOkxctoGmNyuw8qd2KPQ_at_mail.gmail.com>



Mladen, why would you say the 'goal is not reducing the reads' in this context? I would prefer fewer reads (less CPU and I/O) while doing backups. BCT is very handy not just for prod but also for several non-prod DBs where we do not pay for standby solutions like DG, etc. I will take BCT over CommVault for the DBs we maintain, most of them under a TB.

What is the size of the DB for which CommVault's compression beats BCT significantly in recovery time?
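For anyone following the thread who has not used it: block change tracking is enabled with a single ALTER DATABASE statement, after which level-1 incremental backups read only the changed blocks recorded in the tracking file instead of scanning every datafile. A minimal sketch (the tracking-file path is illustrative, not from the original thread):

```sql
-- Enable BCT; Oracle maintains a bitmap of changed blocks in this file
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/bct.f';

-- Verify it is active
SELECT status, filename FROM v$block_change_tracking;

-- Subsequent level-1 incrementals (run from RMAN) can then skip
-- unchanged blocks:
--   RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
```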

On Sun, Feb 15, 2015 at 11:24 AM, Mladen Gogala <dmarc-noreply_at_freelists.org> wrote:

> On 02/15/2015 11:47 AM, Jared Still wrote:
>
>> Hi Mladen,
>>
>> Though dedup reduces the size on the backend, it does nothing for
>> database impact.
>>
>> As you know BCT reduces the reads on the db by RMAN, so I am a little
>> confused by this statement.
>>
>
> That is true, but the goal is not reducing the reads; the goal is to
> restore the database in the shortest time possible. De-duplication also
> speeds things up, because everything is read but not everything is
> written. You get far fewer disk writes and, if the storage medium is
> connected over a network, far less network traffic. Given that a full
> backup must be restored either way, the difference is whether additional
> incremental backups will need to be applied.
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Sun Feb 15 2015 - 21:14:04 CET