Lazy Write-Back??

From: Michael Stowe <Michael.Stowe_at_f573.n115.z1.fidonet.org>
Date: Wed, 02 Nov 94 10:31:03 -0600
Message-ID: <783772263_at_f573.n115.z1.ftn>


To eliminate the effects of caching, you (ideally) should shut down Oracle in between each test. Otherwise, caching will give you results that aren't all that useful.
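For instance, bouncing the instance between runs might look like this (a sketch, assuming a SQL*Plus session with SYSDBA access; on the Oracle 7 releases current in 1994, the same commands were issued from Server Manager instead):

```sql
-- Empty the buffer cache by bouncing the instance before each timed run.
CONNECT / AS SYSDBA
SHUTDOWN IMMEDIATE   -- rolls back open transactions, checkpoints, closes files
STARTUP              -- reopens the database with an empty buffer cache
-- now run the query you want to time
```

Note that even after a restart, the operating system's filesystem cache may still hold the blocks; raw devices or a machine reboot avoid that effect as well.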

> We ran 3 queries. The first was on a new 1100 row table, the second was
> on the same table after increasing the size to 23,000 rows, and the third
> was on the same table after deleting rows back down to 1100. As expected
> the CPU and hence the elapsed time for the third query was higher than the
> first but lower than the second (it takes fewer CPU cycles to search empty
> datablocks), but we were surprised to find MORE logical and physical reads
> on the third query than on the second.

There are several possible explanations; the most likely is simply that the blocks were cached by your earlier inserts, and therefore did not require physical reads for the second query. Note also that deleting rows does not lower a table's high-water mark, so a full scan after the delete still visits every block the 23,000-row table occupied. And depending on how you ran the "benchmark," the third query may have had to consult the rollback segments in order to retrieve consistent results, which shows up as additional logical reads.

> It seemed that Oracle did more reading of datablocks
> after we deleted some rows than it did before we deleted them.
> I think we may have been caught in the "lazy write-back" algorithm that
> Oracle uses. I have heard of this but do not fully understand it. Can
> someone explain how it works and if it is why we got more logical/physical
> reads for our third query?

The bottom line is that your "benchmark" is measuring all kinds of secondary and tertiary effects that have little to do with your original question; nor, for that matter, does the "lazy write-back" algorithm (Oracle's practice of letting DBWR defer writing dirty buffers back to the datafiles) mean all that much here.

To proceed: shut down and restart the database before running each query. The results you receive will then be at least meaningful.

Received on Wed Nov 02 1994 - 17:31:03 CET
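To see how many logical and physical reads each run actually performed, you can snapshot the session's own counters before and after the query and subtract (a sketch using the standard v$ dynamic performance views; the statistic names are as they appear in current Oracle releases):

```sql
-- Snapshot this session's read counters; run before and after the test
-- query and take the difference to get per-query figures.
SELECT n.name, s.value
  FROM v$mystat s, v$statname n
 WHERE n.statistic# = s.statistic#
   AND n.name IN ('consistent gets',   -- logical reads, read-consistent mode
                  'db block gets',     -- logical reads, current mode
                  'physical reads');   -- blocks actually read from disk
```

Comparing these figures across the three runs would show directly whether the extra reads in the third query came from consistent-read work against the rollback segments.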
