
RE: Terabytes of archived redo logs

From: <JApplewhite_at_austinisd.org>
Date: Thu, 3 May 2007 09:11:21 -0500
Message-ID: <OFA99FC573.A5A0ED45-ON862572D0.004D7D83-862572D0.004DF403@austinisd.org>


Here's a perhaps wild thought: could you establish Physical Standby databases on another server (or servers)? Then you could let your Prod databases automatically ship the archived redo logs to them and periodically remove the logs from the Prod environment once you see they've been transferred to the Standbys. You could also gzip them on the Standby side to further save space. Gzip is such a CPU hog that I wouldn't want it running on the Prod server.
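Something along these lines, run from cron on the Standby, could handle the compression step; the archive directory, file suffix, and the one-day age cutoff are just placeholders for whatever your environment really uses:

#!/bin/sh
# Compress archived redo logs on the standby once they are old enough
# to have been applied.  ARCH_DIR and the one-day cutoff are assumptions;
# adjust to the real archive destination and your apply lag.
ARCH_DIR=/oraarch/standby
find "$ARCH_DIR" -name '*.arc' ! -name '*.gz' -mtime +1 -exec gzip {} \;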

You'd also get disaster recovery databases in the process. Just a thought.

Jack C. Applewhite - Database Administrator
Austin (Texas) Independent School District
512.414.9715 (wk) / 512.935.5929 (pager)


"Mercadante, Thomas F \(LABOR\)" <Thomas.Mercadante_at_labor.state.ny.us> Sent by: oracle-l-bounce_at_freelists.org
05/03/2007 07:36 AM
Please respond to
Thomas.Mercadante_at_labor.state.ny.us

To
<avramil_at_concentric.net>, <oracle-l_at_freelists.org> cc

Subject
RE: Terabytes of archived redo logs

Lou,

Although this is a challenge, the problem is really no different from that of any other production database. It's just a matter of scale.

You need enough free disk space to hold, let's say, two days' worth of archivelog files. And you also need a fast enough tape backup system so that you can run backups, say, every two hours to keep the archivelog files moving off the system.

That's the theory: enough free disk to ride out any problems with backups, and scheduled backups frequent enough to keep the disk clear.
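At 2 terabytes of archived redo a day, a two-hour cycle works out to roughly 170 gig per backup run (about 24 MB/sec sustained), so the tape system has to keep up with that. A rough sketch of the kind of cron'd RMAN job I mean; the SID and the sbt channel are placeholders for whatever your environment really uses:

#!/bin/sh
# Two-hourly archivelog sweep to tape via RMAN, meant to run from cron.
# ORACLE_SID and the sbt channel are assumptions -- substitute your own.
export ORACLE_SID=PROD1
rman target / <<EOF
run {
  allocate channel t1 device type sbt;
  backup archivelog all delete input;
  release channel t1;
}
EOF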

That's what I would do.

Tom

-----Original Message-----

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Lou Avrami
Sent: Thursday, May 03, 2007 1:09 AM
To: oracle-l_at_freelists.org
Subject: Terabytes of archived redo logs

Hi folks,

I'm about to inherit an interesting project - a group of five 9.2.0.6 databases that produce approximately 2 terabytes (!) of archived redo log files per day.

Apparently the vendor configured the HP ServiceGuard clusters in such a way that it takes over an hour to shut down all of the packages in order to shut down the database. This amount of downtime supposedly can't be supported, so they decided to go with online backups and no downtime.

Does anyone out there have any suggestions on handling 400 gig of archived redo log files per day? I was thinking of either a near-continuous RMAN job or a cron'd shell script that would write the logs to either tape or a storage server. Actually, I think our tape library might also be overwhelmed by the constant write activity. My thinking right now is a storage server, using a dedicated fast network connection to push the logs over. Storage, though, might be an issue.
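For the shell-script-to-storage-server idea, I'm picturing something as simple as this; the paths, host name, file suffix, and one-day retention are just guesses at this point:

#!/bin/sh
# Push archived redo logs to the storage server over the dedicated link,
# then, only if the transfer succeeded, remove local copies older than a day.
# LOG_DIR, DEST, and the one-day cutoff are assumptions.
LOG_DIR=/oraarch/PROD1
DEST=storage-server:/archive/PROD1

rsync -a "$LOG_DIR"/ "$DEST"/ || exit 1
find "$LOG_DIR" -name '*.arc' -mtime +1 -exec rm -f {} \;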

If anyone has any thoughts or suggestions, they would be appreciated. BTW, I already had the bright idea of NOARCHIVELOG mode and cold backups. :)

Thanks,
Lou Avrami

--
http://www.freelists.org/webpage/oracle-l
Received on Thu May 03 2007 - 09:11:21 CDT

