Re: most expeditious way to update a large number of rows
mylesv_at_gmail.com wrote:
> DA Morgan wrote:
>> Unless my guess about your background is incorrect, you didn't respond
>> to that part ... the question is not relevant because the size of redo
>> logs does not relate to the size of transactions.
My point was that Oracle gives you the power to make decisions not possible in most other products. For example, maximum performance comes from avoiding log file switches entirely (at the cost of a greater risk of data loss), which would argue for very large log files. But the need to minimize possible loss from a catastrophic hardware failure dictates more frequent log switches, and thus smaller files. As an Oracle DBA it is your job to strike a balance between speed and safety.
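To make the trade-off concrete, here is a back-of-envelope sketch of that sizing decision. The redo rate and target intervals below are made-up figures for illustration; in practice you would measure your actual redo generation rate (for example, from the timestamps in Oracle's V$LOG_HISTORY view or an AWR report) before choosing a size.

```python
# Back-of-envelope redo log sizing: pick a target time between log
# switches, then size the log files to hold that much redo.
# All numbers below are hypothetical, not measurements.

def log_size_mb(redo_mb_per_min: float, target_switch_min: float) -> float:
    """Log file size (MB) so that a switch occurs roughly every
    target_switch_min minutes at the given redo generation rate."""
    return redo_mb_per_min * target_switch_min

# Hypothetical workload generating 50 MB of redo per minute.
rate = 50.0

# Safety-leaning choice: a switch (and archive) every 5 minutes
# bounds data loss after a catastrophic failure to ~5 minutes of work.
safe = log_size_mb(rate, 5)    # 250 MB log files

# Speed-leaning choice: a switch only once an hour minimizes the
# overhead of switching, at the price of a much larger loss window.
fast = log_size_mb(rate, 60)   # 3000 MB log files

print(safe, fast)
```

The point of the arithmetic is that the same workload justifies wildly different log sizes depending on which side of the speed/safety balance you weight.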
In the other products you have worked with, Sybase and SQL Server, log files work in a completely different manner and must be sized to the amount of redo created by a transaction and the possibility that it will need to be rolled back. Not the case in Oracle, where undo is kept separately and the online redo logs are reused cyclically once archived, so we can perform an infinitely large transaction in finite log space.
Your original question led me to believe you were trying to size your log files based on the amount of redo ... rather than a calculation of risk (data loss) vs. reward (speed).
--
Daniel A. Morgan
University of Washington
damorgan_at_x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org

Received on Tue Oct 17 2006 - 00:29:08 CDT