Re: Purging duplexed archived redo logs with RMAN/RAC

From: Steve Baldwin <stbaldwin_at_multiservice.com>
Date: Wed, 29 Apr 2009 06:59:45 +1000
Message-ID: <35499305-E861-434F-942C-A48C6B18C040_at_multiservice.com>



Vamshi,

Thanks for the quick reply. Since I'm only using two archive log destinations (FRA and local), how is setting LOG_ARCHIVE_DEST_n any different to setting LOG_ARCHIVE_DUPLEX_DEST? I see that the latter is deprecated if you are using Enterprise Edition, but we're not, so I assume it is still valid.

Anyway, I'm pretty sure there is nothing I can set that will enable my backup job running on node 1 to purge logs on node 2. So are you saying that by switching to LOG_ARCHIVE_DEST_n, if I run an RMAN job on node 2 and issue a "delete archivelog all", it will find and purge them?
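In other words, something as simple as this run from node 2 (just a sketch of what I have in mind; the crosscheck is my guess at what might be needed so RMAN notices the local copies):

rman target /
RMAN> crosscheck archivelog all;
RMAN> delete noprompt archivelog all;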

Cheers,

Steve

On 29/04/2009, at 6:25 AM, Vamshi Damidi wrote:

>
> Steve,
>
> Usually RMAN checks the consistency of the log files to be backed up.
>
> Use the log_archive_dest_n parameters instead of the duplex destination,
> and log_archive_dest_state_n to enable each one.
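>
> For example, something along these lines (the destination number and
> path here are just an illustration, adjust for your environment):
>
> alter system set log_archive_dest_2='LOCATION=/var/oracle/archive-duplex' scope=both sid='*';
> alter system set log_archive_dest_state_2='ENABLE' scope=both sid='*';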
>
> Now this should all work fine.
>
> Thanks,
> Vamshi .D
>
> -----Original Message-----
> From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org]
> On Behalf Of Steve Baldwin
> Sent: Tuesday, April 28, 2009 4:20 PM
> To: oracle-l_at_freelists.org
> Subject: Purging duplexed archived redo logs with RMAN/RAC
>
> I have a 2-node 11g cluster using ASM as the storage manager. I
> started off with archived redo logs being written to the FRA (which is,
> of course, on shared storage under ASM control).
>
> I have configured RMAN with:
>
> configure archivelog deletion policy to backed up 5 times to device
> type disk;
>
> My daily backup job (which runs from one of the cluster nodes) issues
> the statement:
>
> delete archivelog all;
>
> This has been working fine and my archived redo logs are being purged
> as expected.
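>
> For what it's worth, the relevant part of that job boils down to
> roughly this (the database backup itself omitted); with the deletion
> policy above, the delete only removes logs that have already been
> backed up 5 times to disk:
>
> backup archivelog all;
> delete archivelog all;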
>
> I'm now attempting to build our own Dataguard-like mechanism to
> maintain a standby database. We only have Standard Edition, so the
> actual Dataguard functionality is not available to us. What I have done
> (a rough sketch of the settings follows the list) is:
>
> # Set up a LOG_ARCHIVE_DUPLEX_DEST to point to local storage on each
> cluster node
> # Set the ARCHIVE_LAG_TARGET to the acceptable data loss interval (say
> 30 minutes)
> # Set up a cron job on each node that runs every 10 minutes to rsync
> files from the LOG_ARCHIVE_DUPLEX_DEST directory to the standby server
> # Apply the logs to the standby server (haven't done this yet)
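>
> For reference, the settings and the cron entry look roughly like this
> (the standby host name and its target directory are placeholders, and
> 1800 seconds is my 30-minute target):
>
> alter system set log_archive_duplex_dest='/var/oracle/archive-duplex' scope=both sid='*';
> alter system set archive_lag_target=1800 scope=both sid='*';
>
> # crontab entry on each node
> */10 * * * * rsync -a /var/oracle/archive-duplex/ oracle@standby:/var/oracle/archive-incoming/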
>
> Now my RMAN job outputs a bunch of messages like this:
>
> archived log /var/oracle/archive-duplex/2_398_683508574.dbf not found
> or out of sync with catalog
> trying alternate file for archived log of thread 2 with sequence 398
>
> These are obviously redo logs from node 2 (my RMAN job runs from node
> 1). I figure these are harmless because I still have the redo logs in
> the FRA. The problem I have is that archived redo logs in
> LOG_ARCHIVE_DUPLEX_DEST on node 2 (local storage) are not being
> purged. This doesn't surprise me because there is no way for an RMAN
> job running on one cluster node to purge files located on another
> cluster node. So, I thought I could run a simple RMAN job on node 2 to
> issue:
>
> delete archivelog all
>
> However, while this returned no errors, it didn't seem to even
> consider the archived redo logs in local storage, and nothing was
> purged. So, obviously I'm approaching this the wrong way. What is the
> right way to do it?
>
> Given that the backup job running on node 1 has already backed up the
> archived redo logs from the FRA, I don't need to *back up* any more
> archived redo logs. The logs that live in LOG_ARCHIVE_DUPLEX_DEST are
> copies of those in the FRA. I'm assuming RMAN is smart enough to only
> back up archived redo logs from one place. Please tell me that by
> defining duplexed archived redo logs I'm not increasing the size of my
> backup.
>
> Am I going about my Dataguard-like solution the wrong way? It just
> seems that adding a duplexed copy of the archived redo logs (which I
> thought was reasonably common practice even without the roll-your-own-
> Dataguard requirement) introduces some complexity in a RAC situation.
> Should I just be forgetting about RMAN on node 2 and using a
> 'find'-based purge?
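>
> Something like this from cron on node 2, say (the three-day retention
> here is just a number plucked out of the air):
>
> # purge duplexed archived logs older than 3 days from the local dest
> find /var/oracle/archive-duplex -name '*.dbf' -mtime +3 -exec rm {} \;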
>
> Thanks for your help,
>
> Steve
>


--
http://www.freelists.org/webpage/oracle-l
Received on Tue Apr 28 2009 - 15:59:45 CDT
