Re: unable to produce sequential writes to redo log

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Wed, 6 Oct 1999 08:34:35 +0100
Message-ID: <939195327.25874.2.nnrp-02.9e984b29@news.demon.co.uk>


It is an operating system thing.
Oracle writes redo in multiples of what it considers to be the device driver block size (which is often 1KB, sometimes 512 bytes).

The file system tends to work on larger block sizes - often 4KB or 8KB - and the operating system can also introduce a side effect due to memory page sizing (often 8KB).

Consequently a 1KB redo write can require the file system to first read a 4KB or 8KB block, into which the 1KB change is then merged before being written back.
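
To make the read-modify-write concrete, here is a minimal C sketch (not from the original post - the file name, offset and sizes are illustrative) of a 1KB buffered write landing inside a larger file system block; watching iostat while something like this runs against a UFS-style file system should show the extra read traffic Paul observed:

    /* Illustrative only: a 1KB buffered (page-cache) write into the middle
     * of a file whose file system block size is larger, e.g. 4KB.  If the
     * surrounding block is not already cached, the kernel has to read it
     * from disk before merging the 1KB change - the read traffic discussed
     * above. */
    #define _XOPEN_SOURCE 700
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[1024];
        memset(buf, 'x', sizeof buf);

        int fd = open("redo_test.dat", O_WRONLY);   /* ordinary buffered open */
        if (fd < 0) { perror("open"); return 1; }

        /* 1KB write at an offset inside a 4KB block: neither block-aligned
         * nor block-sized, so the file system reads the block first. */
        if (pwrite(fd, buf, sizeof buf, 4096 + 1024) != (ssize_t)sizeof buf) {
            perror("pwrite"); return 1;
        }
        fsync(fd);   /* force the dirty page out, much as LGWR does */
        close(fd);
        return 0;
    }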

Raw devices, Veritas with the direct I/O option, and NTFS with direct I/O do not have this problem.
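
By way of contrast, a hedged sketch of the direct-I/O path - this assumes a Linux-style O_DIRECT flag (the Solaris UFS of the original thread would instead use the directio() call or the forcedirectio mount option, and Veritas/NTFS have their own switches); again the file name and block size are illustrative:

    /* Illustrative only: with direct I/O the page cache is bypassed, so a
     * block-aligned, block-sized write goes straight to the device with no
     * preparatory read.  O_DIRECT requires the buffer, offset and length
     * to be aligned to the device block size. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t blk = 1024;            /* assumed device block size */
        void *buf;

        if (posix_memalign(&buf, blk, blk) != 0) {
            fprintf(stderr, "posix_memalign failed\n"); return 1;
        }
        memset(buf, 'x', blk);

        int fd = open("redo_test.dat", O_WRONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        /* aligned offset and length: a pure write, no read-modify-write */
        if (pwrite(fd, buf, blk, 2 * blk) != (ssize_t)blk) {
            perror("pwrite"); return 1;
        }
        close(fd);
        free(buf);
        return 0;
    }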

(There is a paper (PowerPoint presentation) on my web site about raw devices vs. file systems which includes comments on this effect.)

--

Jonathan Lewis
Yet another Oracle-related web site: http://www.jlcomp.demon.co.uk

Paul M. Aoki wrote in message <7teimr$li2$1_at_agate-ether.berkeley.edu>...
>a little more fiddling reveals (no, this is not the final answer):
>
>putting the log on a raw device eliminates the read traffic
>(resulting in nice multiblock writes and much lower utilization
>numbers from iostat).
>
>so my *guess* is that some interaction between oracle and ufs is
>causing ufs to perform read-ahead. i can't imagine what that
>would be, since oracle has no reason to do any reads on the redo
>log and writes should not trigger read-ahead.
>
Received on Wed Oct 06 1999 - 02:34:35 CDT
