RE: performance impact of archivelog

From: Mark W. Farnham <>
Date: Tue, 18 Nov 2008 12:43:04 -0500
Message-ID: <>


Have you considered using transportable tablespaces to plug in the large data changes, so that block deltas on the precious whole can be archived and therefore remain useful for all manner of block repair? As the size of your database grows, the likelihood of all varieties of seemingly low-probability block corruption increases. Depending on the texture of your ETL, this process *may* allow for horizontal scaling, secure recovery at every appropriate level, and quite possibly the widest window for analysis queries, with less contention for UNDO.
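The transportable-tablespace flow above can be sketched roughly as follows. This is a minimal sketch, not a tested procedure: the tablespace name SALES_2008Q4, the directory object DP_DIR, and the file paths are placeholders, and the exact Data Pump options should be checked against your Oracle version.

```sql
-- On the staging (ETL) database: make the tablespace read only
-- so its datafiles are consistent for transport.
ALTER TABLESPACE sales_2008q4 READ ONLY;

-- Export only the tablespace metadata (Data Pump, run from the OS shell):
--   expdp system DIRECTORY=dp_dir DUMPFILE=sales_2008q4.dmp \
--         TRANSPORT_TABLESPACES=sales_2008q4 TRANSPORT_FULL_CHECK=y

-- Copy the datafile(s) and dump file to the target host, then plug in:
--   impdp system DIRECTORY=dp_dir DUMPFILE=sales_2008q4.dmp \
--         TRANSPORT_DATAFILES='/u02/oradata/PROD/sales_2008q4_01.dbf'

-- On the target (archivelog-mode) database, make it writable if needed:
ALTER TABLESPACE sales_2008q4 READ WRITE;
```

Once the tablespace is plugged in, all subsequent block changes on the target happen under its archived redo stream, which is the point of the suggestion.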


I have seen a rock-solid case for noarchivelog: Clay Jackson of USWNV (a member of the 1990's Oracle VLDB group) had data that was persistent on a switch for something like two days. With archiving on it was touch and go whether he could keep up; with archiving off, he could load it something like 3 or 4 times a day. So he used a data capture database that was most certainly part of the production flow, but which was also only a way-station. If a failure meant reloading it to get it through the "pipe" to more permanent and secure storage, he did just that. I guess the quibble might be whether that was a "production database," whether or not the contents that flowed through it were essential.

3) Never is a high standard. But I tend to agree with Greg, and therefore I'd have to characterize the things and situations where I consider noarchivelog the best solution as either not being databases or not being production...



-----Original Message-----
From: [] On Behalf Of David Ballester
Sent: Tuesday, November 18, 2008 4:58 AM
To: Greg Rahn
Cc:;;;
Subject: Re: performance impact of archivelog

On Mon, 17 Nov 2008 at 14:44 -0800, Greg Rahn wrote:

> And I personally would never run a production database in noarchive
> log mode. Never.

Hi Greg:

I can understand your opinion about archivelog mode on production databases, but in very special situations it is very difficult to maintain a backup in the standard Oracle way (hot backup with RMAN, archivelog mode on). For example: a data warehouse with approx. 20 TB of data, renewing a lot of data each hour, running 24x7. There is no window to back up all the data, and the nologging inserts create a lot of unrecoverable points... we are talking about tablespaces of 360 GB; who can back those up at a reasonable speed? I think that in very special cases (another example that comes to mind is an instance used as an application cache, or holding very volatile data) the database could be in noarchivelog mode, but only after a careful study, of course. I'm with you, but I would say 'For 99.9% of production databases I would never run in noarchivelog mode' :)
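A side note on the nologging inserts mentioned above: even when they are unavoidable, the unrecoverable points they leave behind can at least be tracked, so that a backup of the affected datafiles can follow each load. A hedged sketch, using standard V$DATAFILE columns:

```sql
-- Datafiles touched by nologging operations since their last backup
-- need a fresh backup to be recoverable past that point.
SELECT file#,
       unrecoverable_change#,
       unrecoverable_time
FROM   v$datafile
WHERE  unrecoverable_time IS NOT NULL
ORDER BY unrecoverable_time;
```

RMAN's REPORT UNRECOVERABLE command gives a similar list, driven by the repository's knowledge of your backups.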

Best regards



Received on Tue Nov 18 2008 - 11:43:04 CST
