
Re: Log file I/O throughput

From: Noons <wizofoz2k_at_yahoo.com.au>
Date: Wed, 13 Aug 2003 23:30:06 +1000
Message-ID: <3f3a3d80$0$10360$afc38c87@news.optusnet.com.au>


"Brian Peasland" <dba_at_remove_spam.peasland.com> wrote in message news:3F3A35C4.CCE7C28A_at_remove_spam.peasland.com...
> copies, block by block. In multiplexing you have two distinctly
> different files each of which is *supposed* to contain identical
> information. If you get a corrupted block in a redo log, then if that
> redo log is mirrored, the corrupted block is most likely propagated to
> that mirror copy. If the redo logs are multiplexed, then it is much
> harder for that block corruption to be in both copies. I always

I'm a bit reluctant to accept that modern hardware is in any way capable of such "selective" corruption.

Certainly it was possible in the bad old days of native controllers and such, but not with a modern SAN or some of the fancy disk farms out there.

In fact, statistically speaking, the probability of corruption from a multiplexed I/O is higher than from a mirrored I/O: multiplexing pushes each write through more hardware elements along the chain, any of which can suffer a random failure.

This was discussed here a while ago and someone provided a very good argument for multiplexing: Oracle's redo log handling is not above the occasional software "glitch" (or "feature", if you prefer the term). Because each multiplexed member is written separately, such a bad write is usually not propagated to the multiplexed copy, but it sure is to a mirrored one: the mirror faithfully duplicates whatever the software wrote.

So really, the reason is more software than hardware nowadays.
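For the record, multiplexing is nothing more than a second member per redo group on a separate device, so Oracle issues two independent writes rather than relying on the storage layer to copy one. Something along these lines (paths and group numbers are made up, adjust to your own layout):

  -- hypothetical example: add a second member to each existing redo group,
  -- placed on a different device from the first member
  ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/ORCL/redo01b.log' TO GROUP 1;
  ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/ORCL/redo02b.log' TO GROUP 2;
  ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/ORCL/redo03b.log' TO GROUP 3;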

--
Cheers
Nuno Souto
wizofoz2k_at_yahoo.com.au.nospam
Received on Wed Aug 13 2003 - 08:30:06 CDT
