mylesv_at_gmail.com wrote:
> DA Morgan wrote:
>> My point was that Oracle gives you the power to make decisions not
>> possible in most other products. For example one gets maximum
>> performance by not having any log file switches (at the cost of
>> increasing the risk of data loss), which would indicate very large
>> files. But the need to minimize possible loss due to a catastrophic
>> hardware failure dictates more frequent log switches. As an Oracle
>> DBA it is your job to achieve a balance between speed and safety.
>>
>> In the other products you have worked with, Sybase and SQL Server,
>> log files work in a completely different manner and must be sized to
>> the amount of redo created by a transaction and the possibility that
>> it will need to be rolled back. Not the case in Oracle where we can
>> perform an infinitely large transaction in finite log space.
>>
>> Your original question led me to believe you were trying to size your
>> log files based on the amount of redo ... rather than a calculation
>> of risk (data loss) vs. reward (speed).
>
> Daniel,
>
> I see your point. I could perform my update with two 1 MB redo logs,
> but it wouldn't be practical. I'm seeking a balance between
> performance and safety. It seems larger redo logs and
> ARCHIVE_LAG_TARGET should work for me.
Unless your hardware is substantially different from everyone else's,
the chance of catastrophic hardware failure is minimal. So I'd tend to
err in the direction of performance with larger log files.
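
A minimal sketch of that approach, assuming a standard Oracle instance
running on an spfile (the 1800-second value is just an illustration, not
a recommendation):

```sql
-- Cap exposure by forcing a log switch (and archive, if in
-- ARCHIVELOG mode) at least every 30 minutes, regardless of how
-- slowly the large logs fill. A value of 0 (the default) disables it.
ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 1800 SCOPE = BOTH;

-- Then watch how often switches actually happen, to confirm the
-- large logs aren't switching more than the target implies:
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY hour;
```

If the history shows switches far more frequent than the lag target,
the logs are still too small for the redo rate and performance is
paying for it; if switches only ever happen at the target interval,
the logs are comfortably sized and the parameter alone bounds the loss
window.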
--
Daniel A. Morgan
University of Washington
damorgan_at_x.washington.edu
(replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
Received on Tue Oct 17 2006 - 10:24:36 CDT