RE: backing up a big DB

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Mon, 28 Mar 2022 13:43:04 -0400
Message-ID: <027601d842cb$478bacc0$d6a30640$_at_rsiz.com>



IF you can double your storage with the same class of storage your current database is on (in place of the acreage on NFS), then in theory the backup itself should be quicker and recovery is just a matter of pointing at the other storage. Notice that at duplex you are destroying your old backup as you make the new one, so unless you can afford triplex storage to alternate backup destinations (and storage has indeed become quite cheap), I recommend you do your full backup one tablespace at a time. (Properly done, you can recover a hybrid vintage if you have the redo logs to apply. You’re going to want to practice that so you have all your ducks in a row and don’t miss a piece.)
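
For concreteness, a rough sketch of the tablespace-at-a-time approach done as a user-managed hot backup (the tablespace names, datafile paths and destination below are placeholders I made up; build the real per-tablespace file list from DBA_DATA_FILES):

#!/bin/sh
# sketch only: user-managed hot backup, one tablespace at a time;
# tablespace names, datafile paths and DEST are placeholders
DEST=/backup_dest/$(date +%Y%m%d)
mkdir -p "$DEST"
for TS in USERS APP_DATA APP_INDEX
do
  sqlplus -s "/ as sysdba" <<EOF
alter tablespace $TS begin backup;
EOF
  # copy only the datafiles that belong to $TS
  cp /u01/oradata/PROD/${TS}_*.dbf "$DEST"/
  sqlplus -s "/ as sysdba" <<EOF
alter tablespace $TS end backup;
EOF
done

Done this way, only one tablespace's datafiles are in backup mode at any moment, which keeps the extra redo generated during the copy bounded.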

By going “bad” I think you could mean either that some block or media corruption happened or that a load was logically incorrect. This method does require extra storage but not the software costs of BCV or Data Guard, and it does give you the lag back to a known backup to which you then apply the new data loads. If it was media corruption and you’re running triplex, this gives you time to figure out what is broken on your media farm while still having room for non-alternating backups (since the other copy is now live and the previously live storage is undergoing repair).

Good luck. This is the budget solution. And of course I’ve waxed on as if you had already done the test to non-NFS storage and found that latency was the problem. By the way, if NFS storage latency or bandwidth is the problem, then before you buy the hardware I suggested, you should also get someone whizbang at the exact best NFS mount options for your flavor of everything you have.
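
For reference, the mount options Oracle support tends to recommend for database files over NFS look something like the line below; the exact values (and whether dNFS changes the picture) depend on your OS and NFS version, so treat this purely as a starting point to check against MOS for your combination. Server name, export and mount point are placeholders:

# illustrative Linux NFSv3 mount for an Oracle backup destination
mount -t nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 \
  nfs-server:/export/orabackup /backup_dest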

Oh, another oops: I didn’t directly ask how your current backup is executed. Since you mentioned not being able to do Data Guard, I was presuming “begin backup”, copy the files at the OS level, “end backup”, etc., with all the correct pieces, rather than Data Guard.
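
In case it is useful to anyone following the thread, the “correct pieces” after the datafile copies are roughly these (a sketch; paths are placeholders):

# after the last END BACKUP: capture the pieces that make the copies recoverable
sqlplus -s "/ as sysdba" <<EOF
alter system archive log current;
alter database backup controlfile to '/backup_dest/control_backup.ctl' reuse;
alter database backup controlfile to trace as '/backup_dest/control_backup.sql' reuse;
EOF
# then copy the archived redo logs generated during the backup window to the
# same destination so the datafile copies can be rolled forward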

From: Orlando L [mailto:oralrnr_at_gmail.com]
Sent: Monday, March 28, 2022 1:07 PM
To: Mark W. Farnham
Cc: oracle-l_at_freelists.org
Subject: Re: backing up a big DB  

Excellent point, Mark. I should have told you that before.
1) Purpose: recover the whole database if the production DB goes 'bad'.
2) Not that urgent: users may be willing to wait a couple of days (or 3).
3) If I understand your question correctly, we need to be able to recover the data at least back to the previous backup; this is not an online system, data goes in via nightly/weekly loads.

Thanks,

OL  

On Mon, Mar 28, 2022 at 12:00 PM Mark W. Farnham <mwf_at_rsiz.com> wrote:

The first thing we need to know to make sensible recommendations is the purpose of the full backup.  

The second thing we need to know is your recovery urgency. This may vary by application, which in turn may mean that the database should be separated into a small number of databases if the data required for the database whose recovery is most urgent is small. In theory the pieces can be PDBs if you don’t mind recovering at urgency into a prepared container for the urgent PDBs. That would mean some practice. I recommend frequent failovers be built into your procedures so that you avoid the situation of being rusty at your recovery procedure should you ever have to do it for real.

The third thing we need to know is the order of coherent sets from which you intend to do recovery.  

But since you are currently asking about something being backed up to NFS, and you protest that you can’t do anything like coherently split plexes (aka mirrors), I suppose the first diagnostic is to pick some individual tablespace and compare the backup speed to NFS versus to a spare direct-attached disk on your server.
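
Something along these lines would do for that test (a sketch; the datafile name and the two mount points are placeholders):

# compare raw copy throughput of one large datafile to the two destinations;
# oflag=direct bypasses the page cache, drop it if your platform objects
time dd if=/u01/oradata/PROD/users01.dbf of=/backup_nfs/users01.dbf bs=1M oflag=direct
time dd if=/u01/oradata/PROD/users01.dbf of=/backup_local/users01.dbf bs=1M oflag=direct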

That will tell you whether latency and bandwidth to your NFS destination is the problem (or not).  

You could also spin up a database on the NFS storage and run Kevin’s SLOB on the NFS storage.  

Good luck,  

mwf  

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Orlando L
Sent: Monday, March 28, 2022 12:06 PM
To: oracle-l_at_freelists.org
Subject: backing up a big DB  

Hi  

We have a 23TB Oracle database and the full backup times are a problem. We are currently backing it up to an NFS share on weekends. I am trying to see what options there are for cutting down the time. I am looking into incrementally updated backups, which I think may cut the backup time drastically. I am concerned about the long run, though. Since only the changed data is copied over, I am wondering what will happen if some not-frequently-accessed block in the backup copy goes corrupt. I am thinking that it may be a problem when it is time to do a restore. Am I warranted in this kind of thinking? I am also wondering about the VALIDATE command used on a backup of a DB this size. Does anyone use VALIDATE on such big backups? How long does it take? All ideas welcome. 19c.
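
For reference, the kind of RMAN recipe I mean is the standard roll-forward image copy, plus a periodic read-back of the backup to catch exactly the stale-corrupt-block case above (a sketch; the tag is a placeholder and channel/destination settings are omitted):

rman target / <<'EOF'
run {
  # apply the previous level 1 incremental to the image copy, then take a new
  # level 1 for the next run (the very first run just creates the level 0 copy)
  recover copy of database with tag 'weekly_img';
  backup incremental level 1 for recover of copy with tag 'weekly_img' database;
}
# make RMAN read the backups it would use for a restore, without restoring
restore database validate;
EOF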

PS. No money for BCV or a parallel dataguard server to offload backups.  

Orlando.  

--
http://www.freelists.org/webpage/oracle-l
