RE: Data Guard Rebuild

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Thu, 12 Jun 2014 18:05:39 -0400
Message-ID: <044b01cf868a$733b2ae0$59b180a0$_at_rsiz.com>



I guess I'd copy the archived redo logs to a removable device before your primary backup deletes them, and ship the device.

Of course more information would be required about your situation to know whether that is appropriate.

It does seem unreasonable to tie up your WAN for multiple days shipping the files; it may be easier to ship those physically as well.

The bandwidth of media on a bus or plane can be incredible. As long as you're okay with the latency, it is quite reliable.

(paraphrase of Gorman and me, separately, circa 1990)  

Now, if physical media transport does not fit your world and the WAN is your only route:  

Your workaround seems to ship all the archived redo logs anyway. If you're doing that, why not just use sftp (or some equivalent) and tell the recovery process where they live?

They will compete with the datafile transport for bandwidth, but you just need to stay ahead of your delete window, and the same seems to be true of Oracle's transport mechanism, so I don't see a problem.
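
Something along these lines, roughly (untested, and the path and file name below are just placeholders): once a log has been pushed across with sftp or rsync, register it on the standby so managed recovery knows where it lives and can apply it.

  -- on the standby, after the file lands in the staging directory
  ALTER DATABASE REGISTER LOGFILE '/u01/arch_stage/1_12345_987654321.arc';

  -- managed recovery then applies registered logs as usual
  ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;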

mwf  

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Kenny Payton
Sent: Thursday, June 12, 2014 12:14 PM
To: Jeremy Schneider
Cc: ORACLE-L
Subject: Re: Data Guard Rebuild  

It works. I came up with a couple of tricks for this. One was to automatically generate the register statements and execute them on the standby once the rebuild completes. Another was to reuse the same archive destination on the primary: when the rebuild is complete, the destination is deferred, updated to the new target, and then re-enabled. That minimizes duplicate shipping of logs.
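
For anyone curious, the moving parts look roughly like this (the destination number, service name and predicate are placeholders, so treat it as a sketch rather than copy/paste):

  -- spool REGISTER statements from the primary's record of what has shipped,
  -- then run the output on the standby (assumes the same path on both sides)
  SELECT 'ALTER DATABASE REGISTER LOGFILE ''' || name || ''';'
    FROM v$archived_log
   WHERE dest_id = 2
     AND status = 'A';

  -- reuse the same destination on the primary: defer, repoint, re-enable
  ALTER SYSTEM SET log_archive_dest_state_2 = DEFER;
  ALTER SYSTEM SET log_archive_dest_2 =
    'SERVICE=new_stby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=new_stby';
  ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE;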

I still wish I could get the logs to ship while the rebuild is running.  

Kenny      

On Jun 12, 2014, at 12:08 PM, Jeremy Schneider <jeremy.schneider_at_ardentperf.com> wrote:

That's a pretty cool workaround, actually. I don't have a good solution; I usually somehow find space to temporarily keep a lot of archivelogs online until I get the standby set up, and I watch it closely in the meantime. Or else I do archivelog restores on the primary after shipping is set up and keep resolving gaps until it's caught up. I might try your idea.
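
For the gap-resolution route, the mechanics are roughly these (thread, sequence numbers and the restore range are placeholders; the RMAN step runs on the primary):

  -- on the standby: see what is missing
  SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;

  -- on the primary, restore that range from backup so transport can re-ship it,
  -- e.g. in RMAN: RESTORE ARCHIVELOG FROM SEQUENCE 12000 UNTIL SEQUENCE 12345 THREAD 1;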

-J

--

http://about.me/jeremy_schneider

On Thu, Jun 12, 2014 at 9:44 AM, Kenny Payton <k3nnyp_at_gmail.com> wrote:

I create/rebuild standby databases from time to time, and when they are large and going across our WAN they can take more than a day to complete, sometimes multiple days. These databases also have a high churn rate and generate a large amount of redo. Our normal backup processes delete the archived logs on the primary prior to completion of the standby, which requires restoring them on the primary after managed recovery begins so that they can ship to the standby. We do not reserve enough space to keep multiple days of archived logs around.

I'm curious whether anyone has another workaround for this.

I have one that requires creating a shell of the database by restoring the control files, offline dropping all data files, and configuring transport to ship logs to it. Once managed recovery starts, I just register the log files that have shipped so far and switch the transport service on the primary. This is a little clunky and fairly manual at this point, and I would love an easier approach.
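
To give a feel for the mechanical part, the shell gets put together with statements along these lines (paths and names below are placeholders, and the statements are generated rather than typed by hand):

  -- against the mounted shell (restored standby control file):
  -- spool an OFFLINE DROP per datafile and run the output
  SELECT 'ALTER DATABASE DATAFILE ''' || name || ''' OFFLINE DROP;'
    FROM v$datafile;

  -- once managed recovery is running, register whatever shipped in the meantime
  ALTER DATABASE REGISTER LOGFILE '/u01/arch_stage/1_12345_987654321.arc';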

Thanks,
Kenny

--

http://www.freelists.org/webpage/oracle-l