Re: commit_write= BATCH, NOWAIT; ... how long do I wait?
Date: Fri, 10 Jul 2009 12:56:03 -0700 (PDT)
Message-ID: <4344b9e5-7be9-4189-a4a9-0b291385103f_at_b15g2000yqd.googlegroups.com>
On Jul 10, 12:02 pm, Robert Klemme <shortcut..._at_googlemail.com> wrote:
> On 10.07.2009 08:18, Jonathan Lewis wrote:
>
>
>
> > "Pat" <pat.ca..._at_service-now.com> wrote in message
> >news:b21e89a6-7fb0-4206-a45f-43b242ebd361_at_y19g2000yqy.googlegroups.com...
> >> We're running Oracle 10.2.0.4 (64 bit) on top of red-hat linux.
>
> >> Recently, we've started configuring the database servers with:
>
> >> alter system set commit_write=BATCH, NOWAIT;
>
> >> In the event of a database/server crash we can afford the loss of a
> >> few transactions. Frankly, we can probably afford the loss of 5
> >> minutes or so worth of transactions, so we're not deeply concerned by
> >> the durability loss associated with running in asynchronous commit
> >> mode.
>
> >> However, I at least, am more than a little curious as to how much data
> >> we're putting at risk here.
>
> >> Does anybody know how long Oracle will buffer redo in memory before it
> >> commits when running in this mode? I'm operating under a theory that it
> >> probably commits every time either A) its redo buffer is full or B)
> >> the oldest redo entry is older than x, but that's pure speculation on
> >> my part.
>
> >> Is there any guarantee *at all* here that data older than "x" is on
> >> disk? I've worked with other databases (mysql/innodb) where there's a
> >> guarantee that it'll flush the redo within 1 second of your commit if
> >> you run in "weakly" durable mode, but I can't seem to find any Oracle
> >> doc that specifies if there is such a commitment.
>
> > I don't think there's any documentation that states explicitly anything
> > like:
> > "if you set commit_write to 'batch, nowait' then your change will remain
> > unsecured for N seconds".
>
> > However, there is documentation that states that the log writer (LGWR)
> > wakes up approximately every three seconds or when there is 1MB of
> > log in the buffer even if nothing else kicks it, so you can probably
> > assume that the most you can lose is about 1MB or 3 seconds' worth of
> > redo, whichever is larger.
>
> What happens if the volume is huge? I mean if there is significantly more
> data to be written than the IO system can deliver when the client does
> not have to wait?
>
> Personally I find this setting a bit obscure and would be wary to use
> it. Not getting feedback on successful or failed transactions is
> something I would only want to do in extreme cases and only after I
> examined other options.
>
> In this case I'd probably rather think about changing the import to
> either combine several of the small transactions or make more use of
> concurrency if that is possible.
>
> My 0.02 EUR
>
> Kind regards
>
> robert
>
> --
> remember.guy do |as, often| as.you_can - without end
> http://blog.rubybestpractices.com/
My understanding of the setting is that it's most useful for applications with a large number of small commits. In that case you're bound by the disk's seek time, because you're constantly fsyncing very small amounts of data (each commit is an fsync of the redo log). Batching the commits helps dramatically there, because LGWR and the IO scheduler can coalesce the writes into a much smaller number of larger writes, and you get the same amount of data spooled to disk with fewer IOPS.
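To make that concrete, here's a rough sketch of the pattern I mean, with the parameter set per session rather than system-wide (the table name and row count are invented, and the PL/SQL loop is only there for brevity):

  -- scratch table purely for the illustration
  create table t_commit_demo (id number, payload varchar2(100));

  -- ask for asynchronous, batched commits in this session only
  alter session set commit_write = 'BATCH,NOWAIT';

  -- a stream of tiny transactions: each COMMIT now just queues redo for
  -- LGWR instead of forcing, and waiting on, a redo-log sync write
  begin
    for i in 1 .. 100000 loop
      insert into t_commit_demo values (i, rpad('x', 100, 'x'));
      commit;
    end loop;
  end;
  /

If you actually want to measure the difference, drive the loop from the client (JDBC, OCI, whatever the application really uses) rather than from PL/SQL, since PL/SQL already defers the commit wait until the block returns to the caller.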
For an application with a manageable number of very large commits, on the other hand, you're right that you wouldn't get much of an improvement (if any) out of this setting, since you're bounded by the IO subsystem's throughput rather than its IOPS limit.
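It's also worth noting, since the durability worry keeps coming up, that you don't have to apply the relaxed behaviour to everything: even with the system default at BATCH,NOWAIT, 10.2 lets an individual transaction opt back into a synchronous commit. Something along these lines (reusing the scratch table from the sketch above):

  -- a transaction you really can't afford to lose
  insert into t_commit_demo values (0, 'must survive a crash');
  commit write wait;   -- WAIT overrides the NOWAIT default for this commit
                       -- only: control returns after LGWR has the redo on disk

That way the chatty, low-value transactions get the cheap commits and the few critical ones keep the old guarantee.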
Most of the apps I work with fall into the first class (too many commits) rather than the second (too much data), so it's been a very useful setting in my experience.
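If anyone wants to check which class their own application falls into before touching the parameter, the basic redo statistics give a rough picture; this is just a back-of-the-envelope query, not a proper analysis:

  -- how much redo per commit, and how much time goes on log syncs
  select name, value
  from   v$sysstat
  where  name in ('user commits', 'redo size',
                  'redo synch writes', 'redo synch time');

A small 'redo size' / 'user commits' ratio together with a lot of 'log file sync' waits is the many-tiny-commits profile where BATCH,NOWAIT (or simply committing less often) actually buys you something.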