Peoplesoft Batch run and excessive redo generation

From: Karl Arao <karlarao_at_gmail.com>
Date: Wed, 20 Oct 2010 13:50:13 +0800
Message-ID: <AANLkTikhj2J41Ex_Fpr0f3PSq7SRro8a10sCfwnCKh4=_at_mail.gmail.com>



We have a client with a 30-50GB PeopleSoft database, and amazingly, when it runs batch processing it produces archived logs that, taken together, are far bigger than the database itself: around 150GB worth of archivelogs.

The issue here is Disaster Recovery. If we leave Data Guard log transport enabled during the batch run, the network bandwidth gets consumed easily, causing a long queue of archive gaps. It is no better if we turn off log transport, do the batch run, and then re-enable it: the gap resolution process still eats up the entire network bandwidth.
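For reference, the on/off toggling above is just a matter of deferring the standby archive destination. A minimal sketch, assuming LOG_ARCHIVE_DEST_2 is the destination pointing at the standby (that's site-specific; check your init parameters first):

```sql
-- Assumption: LOG_ARCHIVE_DEST_2 is the Data Guard standby destination here
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;

-- ... run the batch ...

ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;
```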

I have read the doc *"Batch Processing in Disaster Recovery Configurations - Best Practices for Oracle Data Guard"* https://docs.google.com/viewer?url=http://www.hitachi.co.jp/Prod/comp/soft1/oracle/pdf/OBtecinfo-08-008.pdf but this covers 11g. On 10g, what I can do is SSH compression http://goo.gl/jc6n, but even with compression enabled the redo volume may still exceed the network capacity.
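Before betting on SSH-level compression, it can be worth measuring how well an archived log actually compresses, since that roughly bounds what `ssh -C` or `rsync -z` can save in transit. A rough sketch; the path is illustrative, and the stand-in file only exists so the script runs somewhere without a real archived log:

```shell
#!/bin/bash
set -e
# Point ARC at a real archived log, e.g. /u01/arch/1_123_456.arc (illustrative path).
ARC=${ARC:-/tmp/sample_arc.bin}
# Stand-in file so the sketch is self-contained; redo usually compresses far better.
[ -f "$ARC" ] || head -c 262144 /dev/urandom > "$ARC"

orig=$(wc -c < "$ARC")
gzip -c "$ARC" > "$ARC.gz"        # gzip ratio approximates in-transit compression
comp=$(wc -c < "$ARC.gz")
echo "original=$orig compressed=$comp"

# One hypothetical way to ship logs manually over a compressed channel:
#   rsync -az --partial /u01/arch/ oracle@standby:/u01/arch/
```

If the ratio is poor, no amount of transport-level compression will fit the redo into the link.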

The RMAN incremental backups (differential & cumulative) do not help here, because the same amount of redo still gets generated and re-applied.

Another thing I'm exploring is to turn off log transport before the batch run, and afterwards rebuild the standby with a fresh RMAN duplicate (not an incremental update).
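For that approach, a minimal RMAN sketch of the fresh standby duplicate, assuming the standby auxiliary instance is started NOMOUNT and the needed backups plus archived logs are visible from the standby host (the connect strings are placeholders):

```
$ rman target sys@primary auxiliary sys@standby
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY DORECOVER;
```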

Do you have other ideas around this? If you have encountered this, how did you address it? Is there a related PeopleSoft bug? Any input will be very much appreciated.

--

Karl Arao
karlarao.wordpress.com
karlarao.tiddlyspot.com

--

http://www.freelists.org/webpage/oracle-l Received on Wed Oct 20 2010 - 00:50:13 CDT
