Re: commit_write= BATCH, NOWAIT; ... how long do I wait?

From: Pat <>
Date: Fri, 10 Jul 2009 05:32:44 -0700 (PDT)
Message-ID: <>

On Jul 9, 11:18 pm, "Jonathan Lewis" <> wrote:
> "Pat" <> wrote in message
> > We're running Oracle (64 bit) on top of red-hat linux.
> > Recently, we've started configuring the database servers with:
> > alter system set commit_write=BATCH, NOWAIT;
> > In the event of a database/server crash we can afford the loss of a
> > few transactions. Frankly, we can probably afford the loss of 5
> > minutes or so worth of transactions, so we're not deeply concerned by
> > the durability loss associated with running in asynchronous commit
> > mode.
> > However, I at least, am more than a little curious as to how much data
> > we're putting at risk here.
> > Does anybody know how long Oracle will buffer redo in memory before it
> > commits when running in this mode? I'm operating under a theory that it
> > probably commits every time either A) its redo buffer is full or B)
> > the oldest redo entry is older than x, but that's pure speculation on
> > my part.
> > Is there any guarantee *at all* here that data older than "x" is on
> > disk? I've worked with other databases (mysql/innodb) where there's a
> > guarantee that it'll flush the redo within 1 second of your commit if
> > you run in "weakly" durable mode, but I can't seem to find any Oracle
> > doc that specifies if there is such a commitment.
> I don't think there's any documentation that states explicitly anything
> like:
> "if you set commit_write to 'batch, nowait' then your change will remain
> unsecured for N seconds".
> However, there is documentation that states that the log writer (LGWR)
> wakes up approximately every three seconds or when there is 1MB of
> log in the buffer even if nothing else kicks it: so you can probably assume
> that the most you can lose is about 1MB or 3 seconds' worth of redo,
> whichever is larger.
> --
> Regards
> Jonathan Lewis
> Author: Cost Based Oracle: Fundamentals
> The Co-operative Oracle Users' FAQ

Thanks Jonathan, that's exactly what I was looking for. I can live with 3 seconds or a meg's worth of redo.
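Putting those numbers together, here's a back-of-envelope sketch of the worst-case exposure. The 3-second and 1MB figures come from Jonathan's reply; the function itself is just illustrative arithmetic, not anything Oracle exposes:

```python
def worst_case_redo_loss(redo_bytes_per_sec: float) -> float:
    """Conservative upper bound on unsynced redo with COMMIT_WRITE=BATCH,NOWAIT.

    LGWR wakes up roughly every 3 seconds, and also writes once about 1MB
    of redo accumulates, so the exposure is bounded by whichever figure is
    larger at your redo generation rate.
    """
    idle_wakeup_sec = 3.0
    buffer_threshold = 1 << 20  # ~1MB
    return max(redo_bytes_per_sec * idle_wakeup_sec, buffer_threshold)

# A system generating 200 KB/s of redo: the 1MB bound dominates.
print(worst_case_redo_loss(200 * 1024))     # 1048576
# A busy system at 2 MB/s: the 3-second window dominates.
print(worst_case_redo_loss(2 * (1 << 20)))  # 6291456.0
```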

In case anybody is having commit contention issues and has relaxed durability constraints, I'd recommend looking into this setting pretty hard.
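For anyone who wants to try it out before changing the whole instance, the setting can also be applied at session or statement scope; a sketch (10gR2-era syntax, from memory, so check your version's docs):

```sql
-- Instance-wide, as we did (persists across restarts with an SPFILE):
ALTER SYSTEM SET COMMIT_WRITE = 'BATCH,NOWAIT' SCOPE = BOTH;

-- Per-session, handy for benchmarking a single workload first:
ALTER SESSION SET COMMIT_WRITE = 'BATCH,NOWAIT';

-- Or per-statement, leaving the default untouched everywhere else:
COMMIT WRITE BATCH NOWAIT;
```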

In our workload at least it's led to much better throughput (probably 30% better average transaction time across all transactions), and a much lower incidence of "outlier" transactions.

Received on Fri Jul 10 2009 - 07:32:44 CDT
