
RE: Terabytes of archived redo logs

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Thu, 3 May 2007 10:02:45 -0400
Message-ID: <041e01c78d8b$ba6e7480$1100a8c0@rsiz.com>


Okay, let's do a little math...

First, the ground rules. Since I have to follow Farnham's Rule of DBA job security (never trust your career to a single piece of spinning or ribbon rust), and since there is no real point to backups if a simple fire in your data center can put you out of business, I think you need to make two copies.

One cheap way to do this is removable disk drives. You can price this up yourself easily and then let the disk drive vendor compete against that price for bells, whistles, and ease of use.

Now the good news is that, as backup targets, you don't really need many spindles unless you also need really fast recovery.

You can get a 1 TB USB/FireWire portable drive for about $410 USD retail that will easily sustain 50 MB/second. So let's say you need 10 of those to have a nice rotation, with time to fetch a pair of replacements if one goes bad.

So that's $4100 (again, retail, you can do better than that).

And you probably don't want to make the copies on your host, so throw in another $900 for a high-bus-speed PC with at least a couple of independent USB 2.0 controllers, or FireWire/FireWire 800.

So twice a day you dump 1 TB of archived redo onto one of these puppies. That's going to take about 20,000 seconds, or less than 6 hours at the very conservative 50 MB/sec. You might pipe the files through a checksum if you want to know you copied what you thought you copied.
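For illustration, here's a minimal sketch of that copy-plus-checksum step in Python. The source directory (/u01/arch), the drive mount point (/mnt/usb0), and the *.arc file pattern are placeholder assumptions, not details from the original post; the manifest it writes rides along on the drive so the second copy can be verified against it later.

    #!/usr/bin/env python3
    # Copy archived redo logs to a removable drive while computing a
    # SHA-256 checksum of every file, and write a manifest alongside
    # the copies. Paths and the *.arc pattern are hypothetical.
    import hashlib
    from pathlib import Path

    SRC = Path("/u01/arch")       # where the database writes archived redo (assumed)
    DST = Path("/mnt/usb0/arch")  # removable 1 TB drive mount point (assumed)
    CHUNK = 8 * 1024 * 1024       # 8 MB reads keep memory use flat

    def copy_with_checksum(src: Path, dst: Path) -> str:
        """Copy src to dst and return the SHA-256 of the bytes written."""
        digest = hashlib.sha256()
        with src.open("rb") as fin, dst.open("wb") as fout:
            while block := fin.read(CHUNK):
                digest.update(block)
                fout.write(block)
        return digest.hexdigest()

    def main() -> None:
        DST.mkdir(parents=True, exist_ok=True)
        with (DST / "MANIFEST.sha256").open("w") as manifest:
            for log in sorted(SRC.glob("*.arc")):
                checksum = copy_with_checksum(log, DST / log.name)
                manifest.write(f"{checksum}  {log.name}\n")

    if __name__ == "__main__":
        main()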

When the copy finishes, you plug that drive and another into the PC and copy for another 6 hours. If you're doing checksums maybe you invested in a pair of chips for that PC. Now you can take one of those off site and delete that 1 TB from your host and you have another 12 hours to deal with your other 1 TB.
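And a matching sketch of the verification pass on that PC, assuming the second drive has already been filled by the same copy routine and is mounted at /mnt/usb1 (again, the mount points are assumptions):

    #!/usr/bin/env python3
    # Re-hash every file on the second drive and compare it against the
    # manifest written during the first copy. Mount points are hypothetical.
    import hashlib
    from pathlib import Path

    FIRST = Path("/mnt/usb0/arch")   # drive carrying MANIFEST.sha256 (assumed)
    SECOND = Path("/mnt/usb1/arch")  # drive holding the second copy (assumed)
    CHUNK = 8 * 1024 * 1024

    def sha256_of(path: Path) -> str:
        digest = hashlib.sha256()
        with path.open("rb") as f:
            while block := f.read(CHUNK):
                digest.update(block)
        return digest.hexdigest()

    bad = 0
    for line in (FIRST / "MANIFEST.sha256").read_text().splitlines():
        expected, name = line.split("  ", 1)
        if sha256_of(SECOND / name) != expected:
            print(f"MISMATCH: {name}")
            bad += 1
    print("all files verified" if bad == 0 else f"{bad} files failed verification")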

I'm sure you can do better than that, and I suppose if your folks insist on ribbon rust you might have to copy to tapes as well, but then I hate tape drives and have an unfair bias against them. The only disk drives I ever hated were the ones with "flocculent stiction" that would pretty reliably fail to restart if you spun them down after 40 days of running, and you had about a 50-50 chance of "fixing" them by following the instructions in the field diagram that looked like a guy throwing a discus, except for the part about not letting go... but that's another story - though I'll bet at least 2 people on this list other than me actually had to do that.

Anyway, I'm sure you can put a reasonable solution in place for under $10,000 USD. Oh - keep a little chart on the MTBF for those drives and cut it in half since you're shipping them a lot.
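For that MTBF bookkeeping, a trivial back-of-the-envelope sketch (the vendor MTBF figure below is a placeholder, not a quoted spec):

    # Rough failure-rate bookkeeping for the drive pool; the vendor MTBF
    # figure is a placeholder assumption, halved per the advice above.
    VENDOR_MTBF_HOURS = 500_000
    DERATED_MTBF_HOURS = VENDOR_MTBF_HOURS / 2   # drives that get shipped around
    DRIVES_IN_ROTATION = 10
    HOURS_PER_YEAR = 24 * 365

    expected_failures_per_year = DRIVES_IN_ROTATION * HOURS_PER_YEAR / DERATED_MTBF_HOURS
    print(f"expected drive failures per year: {expected_failures_per_year:.2f}")  # ~0.35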

Regards,

mwf

-----Original Message-----

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Lou Avrami
Sent: Thursday, May 03, 2007 1:09 AM
To: oracle-l_at_freelists.org
Subject: Terabytes of archived redo logs

Hi folks,

I'm about to inherit an interesting project - a group of five 9.2.0.6 databases that produce approximately 2 terabytes (!) of archived redo log files per day.

Apparently the vendor configured the HP ServiceGuard clusters in such a way that it takes over an hour to shut down all of the packages in order to shut down the database. This amount of downtime supposedly can't be supported, so they decided to go with online backups and no downtime.

Does anyone out there have any suggestions on handling 400 gig of archived redo log files per day? I was thinking of either a near-continuous RMAN job or a cron-driven shell script that would write the logs to either tape or a storage server. Actually, I think our tape library might also be overwhelmed by the constant write activity. My thinking right now is a storage server, using a dedicated fast network connection to push the logs over. Storage, though, might be an issue.
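A minimal sketch of that kind of near-continuous RMAN sweep, meant to be fired from cron every few minutes; the disk destination (/backup/arch, e.g. an NFS mount on a storage server) and the OS-authenticated "target /" connection are assumptions, not details from the question:

    #!/usr/bin/env python3
    # Sweep archived redo logs into backup sets on disk and delete the
    # originals once they are backed up. Destination path and the
    # OS-authenticated "target /" connection are assumptions.
    import subprocess

    RMAN_SCRIPT = """
    RUN {
      ALLOCATE CHANNEL d1 DEVICE TYPE DISK FORMAT '/backup/arch/%d_%T_%s_%p.bkp';
      BACKUP ARCHIVELOG ALL DELETE INPUT;
      RELEASE CHANNEL d1;
    }
    """

    proc = subprocess.run(["rman", "target", "/"],
                          input=RMAN_SCRIPT, text=True, capture_output=True)
    if proc.returncode != 0:
        # RMAN reports errors on stdout; surface them so cron mails them out.
        raise SystemExit(f"RMAN sweep failed:\n{proc.stdout}")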

If anyone has any thoughts or suggestions, they would be appreciated. BTW, I already had the bright idea of NOARCHIVELOG mode and cold backups. :)

Thanks,
Lou Avrami

--

http://www.freelists.org/webpage/oracle-l
