Re: RTO Challenges

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Tue, 27 Mar 2018 10:13:53 +0000
Message-ID: <LO1P123MB09776E10CD9B888BFA6B4670A5AC0_at_LO1P123MB0977.GBRP123.PROD.OUTLOOK.COM>


I think the answer comes in two parts.

There are companies that haven't done a proper analysis of RTO for different disaster scenarios and haven't done the simple arithmetic relating the volume to recover to the time it takes to restore it.
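The "simple time calculation" is just volume divided by throughput, plus whatever media recovery (redo apply) is needed afterwards. A minimal sketch, with hypothetical figures:

```python
# Back-of-envelope restore-time estimate (all figures hypothetical).
def restore_hours(db_size_tb, restore_rate_tb_per_hour, recovery_hours=0.0):
    """Hours to restore datafiles, plus any redo-apply (media recovery) time."""
    return db_size_tb / restore_rate_tb_per_hour + recovery_hours

# e.g. a 10 TB database restored at 2 TB/hour with 1 hour of redo apply:
print(restore_hours(10, 2, 1))  # 6.0 hours - well outside a 2-hour RTO
```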

Companies who have done the analysis don't expect to recover "sufficiently large" databases to the original machine, but instead use a standby strategy that minimises file shipping and recovery.

Regards
Jonathan Lewis



From: oracle-l-bounce_at_freelists.org <oracle-l-bounce_at_freelists.org> on behalf of Dominic Brooks <dombrooks_at_hotmail.com>
Sent: 27 March 2018 10:51:47
To: ORACLE-L
Subject: RTO Challenges

I'm not a DBA as such and I've always skipped over most of the chapters on RMAN etc., so I'm very grateful for expert opinions on this, please.

  1. We have multi-TB DBs, as everyone does.
  2. The message from DBA policies is that we can only restore at 2 TB per hour.
  3. We have an RTO of 2 hours.

As a result, a firm-wide initiative has been pushed down onto application teams: since restore runs at 2 TB per hour and the RTO is 2 hours, there is an implied 4 TB limit on any critical application's database, in the event that we run into one of those scenarios where we need to restore from backup.
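The implied cap follows directly from multiplying the restore rate by the RTO. A trivial sketch using the figures from this thread:

```python
# Implied maximum database size = restore rate x RTO (figures from the post).
restore_rate_tb_per_hour = 2   # observed restore throughput
rto_hours = 2                  # recovery time objective

max_db_size_tb = restore_rate_tb_per_hour * rto_hours
print(max_db_size_tb)  # 4 -> the 4 TB firm-wide cap
```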

The infrastructure-provided solution was ZDLRA. Our firm's implementation initially promised a 4 TB per hour restore rate, but in practice is delivering the 2 TB per hour rate above, and that is the figure given to the DBAs as a key input into this generic firm-wide policy.

My thoughts are that this is still an infrastructure issue and there are probably plenty of alternative infrastructure solutions to this problem. But now it is being presented as an application issue. Of course applications should all have hygienic practice in place around archiving and purging, whilst also considering regulatory requirements around data retention, etc, etc.

But it seems bizarre to me to have this effective database-size limit in practice, and I'm not aware of the approach above being common practice. 4 TB is nothing by today's standards.

Am I wrong?
What different approaches / solutions could be used?

Thanks

Regards,
Dominic

--
http://www.freelists.org/webpage/oracle-l
Received on Tue Mar 27 2018 - 12:13:53 CEST
