Re: commit_write= BATCH, NOWAIT; ... how long do I wait?
Date: Fri, 10 Jul 2009 21:02:26 +0200
On 10.07.2009 08:18, Jonathan Lewis wrote:
> "Pat" <pat.casey_at_service-now.com> wrote in message
> news:b21e89a6-7fb0-4206-a45f-43b242ebd361_at_y19g2000yqy.googlegroups.com...
>> We're running Oracle 10.2.0.4 (64 bit) on top of red-hat linux.
>> Recently, we've started configuring the database servers with:
>> alter system set commit_write=BATCH, NOWAIT;
>> In the event of a database/server crash we can afford the loss of a
>> few transactions. Frankly, we can probably afford the loss of 5
>> minutes or so worth of transactions, so we're not deeply concerned by
>> the durability loss associated with running in asynchronous commit
>> However, I at least, am more than a little curious as to how much data
>> we're putting at risk here.
>> Does anybody know how long Oracle will buffer redo in memory before it
>> commits when running in this mode? I'm operating under a theory that it
>> probably commits every time either A) its redo buffer is full or B)
>> the oldest redo entry is older than x, but that's pure speculation on
>> my part.
>> Is there any guarantee *at all* here that data older than "x" is on
>> disk? I've worked with other databases (mysql/innodb) where there's a
>> guarantee that it'll flush the redo within 1 second of your commit if
>> you run in "weakly" durable mode, but I can't seem to find any Oracle
>> doc that specifies if there is such a commitment.
>
> I don't think there's any documentation that states explicitly anything
> like:
> "if you set commit_write to 'batch, nowait' then your change will remain
> unsecured for N seconds".
>
> However, there is documentation that states that the log writer (LGWR)
> wakes up approximately every three seconds, or when there is 1MB of
> log in the buffer, even if nothing else kicks it: so you can probably
> assume that the most you can lose is about 1MB or 3 seconds, whichever
> is larger.
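For reference, a minimal sketch of how this behaviour can be scoped in 10gR2. The system-level form is the one the original poster quoted; the session- and statement-level forms are the standard Oracle variants of the same mechanism, shown here so the trade-off can be made per job rather than instance-wide:

```sql
-- Instance-wide default (the setting discussed in this thread):
ALTER SYSTEM SET commit_write = BATCH, NOWAIT;

-- Per-session override, e.g. to restore full durability for one critical job:
ALTER SESSION SET commit_write = IMMEDIATE, WAIT;

-- Per-statement: request a synchronous, waited commit for just this commit:
COMMIT WRITE IMMEDIATE WAIT;
```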
What happens if the volume is huge? I mean, if there is significantly more redo to be written than the IO system can absorb, now that the client no longer has to wait?
Personally I find this setting a bit obscure and would be wary of using it. Not getting feedback on whether a transaction succeeded or failed is something I would only accept in extreme cases, and only after I had examined other options.
In this case I'd probably rather think about changing the import to either combine several of the small transactions into larger ones, or to make more use of concurrency, if that is possible.
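A rough sketch of the batching idea in PL/SQL (the table names and the batch size of 1000 are illustrative assumptions, not from the thread): instead of one durable commit per row, the import commits once per batch, which reduces the number of log-file syncs without giving up durability entirely:

```sql
DECLARE
  l_count PLS_INTEGER := 0;
BEGIN
  -- staging_rows / target_rows are hypothetical names for this sketch
  FOR r IN (SELECT id, payload FROM staging_rows) LOOP
    INSERT INTO target_rows (id, payload) VALUES (r.id, r.payload);
    l_count := l_count + 1;
    IF MOD(l_count, 1000) = 0 THEN
      COMMIT;   -- one durable commit per 1000 rows instead of per row
    END IF;
  END LOOP;
  COMMIT;       -- commit the final partial batch
END;
/
```

With batches this size, a crash loses at most the current uncommitted batch, which is a bounded and predictable exposure, unlike the open-ended question above.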
My 0.02 EUR
--
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/

Received on Fri Jul 10 2009 - 14:02:26 CDT