Re: Minimize recovery time

From: Lok P <loknath.73_at_gmail.com>
Date: Wed, 27 Apr 2022 16:04:50 +0530
Message-ID: <CAKna9VY8v3wgMD_m=_VjT7YKzmVsBkx4h1Oimbv8Db0FHThPSQ_at_mail.gmail.com>



Yes, we have a Data Guard setup, but this agreement is in place for the case where both the primary and the Data Guard standby are lost, e.g. due to a disaster or corruption.
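
For that full-restore-from-backup case, one thing we are checking with the backup team is whether the restore can simply be driven harder in parallel, since the quoted ~1 hour per ~2TB rate is for the current infrastructure settings. A rough sketch of what I mean (the parallelism, section size and channel names below are placeholders, not our actual configuration):

    # Multisection backup sets let the restore of a single large
    # datafile be split across channels (sizes here are placeholders).
    CONFIGURE DEVICE TYPE DISK PARALLELISM 8 BACKUP TYPE TO BACKUPSET;
    BACKUP SECTION SIZE 64G DATABASE;

    # Time a validation-only restore to see what read throughput the
    # current infrastructure actually sustains, without writing datafiles.
    RUN {
      ALLOCATE CHANNEL d1 DEVICE TYPE DISK;
      ALLOCATE CHANNEL d2 DEVICE TYPE DISK;
      ALLOCATE CHANNEL d3 DEVICE TYPE DISK;
      ALLOCATE CHANNEL d4 DEVICE TYPE DISK;
      RESTORE DATABASE VALIDATE;
    }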

On Wed, 27 Apr 2022, 3:30 pm Andy Sayer, <andysayer_at_gmail.com> wrote:

> Have you considered Data Guard? You’d have a secondary database always
> ready to fail over to.
>
> Thanks,
> Andy
>
>
> On Wed, 27 Apr 2022 at 10:50, Lok P <loknath.73_at_gmail.com> wrote:
>
>> Hello Listers, we have an Oracle Exadata (X7) database on 12.1.0.2.0
>> that has now grown to about 12TB. Per the client agreement and the
>> criticality of this application, the RTO (recovery time objective) has
>> to be within ~4 hours. The team looking after backup and recovery has
>> quoted a recovery rate of ~1 hour per ~2TB of data with the current
>> infrastructure. Going by that, the current database size implies an
>> RTO of ~6 hours, which exceeds the ~4-hour client agreement.
>>
>> Going through the top space consumers, we see they are table/index
>> sub-partitions and non-partitioned indexes. Should we look into
>> table/index compression here? I think there is also a downside to
>> that for DML performance, though.
>>
>> We wanted to understand whether there is any other option (apart from
>> exploring a possible data purge) to bring the RTO down to or under the
>> service agreement. How should we approach this?
>>
>>
>> Regards
>>
>> Lok
>>
>
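
For reference on the compression question in my original mail quoted above, this is the sort of change we would trial first on a single cold sub-partition and one large non-partitioned index (the object names below are placeholders, and Advanced Compression is a separately licensed option):

    -- Placeholder names; 12.1 online partition/subpartition move
    -- keeps DML running while the segment is rebuilt compressed.
    ALTER TABLE big_tab MOVE SUBPARTITION sp_2021q1
      ROW STORE COMPRESS ADVANCED
      UPDATE INDEXES ONLINE;

    -- 12.1.0.2 advanced index compression for the big non-partitioned indexes.
    ALTER INDEX big_tab_cust_ix REBUILD COMPRESS ADVANCED LOW ONLINE;

On Exadata, HCC (COMPRESS FOR QUERY/ARCHIVE) would shrink cold sub-partitions much further, but that is where the DML downside really shows up, since updated rows get migrated out of the compression units and the ratio degrades over time.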

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Apr 27 2022 - 12:34:50 CEST
