

Re: Enable 32K Block in 8K Block DB

From: Richard Foote <>
Date: Tue, 30 Mar 2004 10:16:25 GMT
Message-ID: <ZNbac.131497$>

"Jonathan Lewis" <> wrote in message news:c4bbir$icb$
> >
> Good reference, worth reading.
> A significant fraction of it is about a product, but
> it includes sensible technical material, and doesn't
> attempt to distort or be selective about technical
> details in order to sell the product.
> >
> > Note the test that proves it.
> >
> But everyone who reads the paper can also note the following
> preamble to the test:
> <<quote>>
> For example, large (16-32K) blocksize data
> caches can be used to contain data from indexes or
> tables that are the object of repeated large scans or
> tables that would always suffer from row migration
> or chaining.
> <<end quote>>
> The test case is not fully described, but let's assume
> the cleanest example that would fit the details - the table
> has an id which is sequence based, the query is an
> "index-only" query which does a range scan rather than
> a fast full scan, and the index was built clean with the
> default of pctfree 10, then the query has to be something like:
> select /*+ index(t1) */
> count(*)
> from t1
> where id between 1 and 380000
> (I recorded 836 CR gets, but who's going to fuss about
> a small error - especially when there are a few recursive
> calls in there).
> That's a very special case type of query to get a
> reduction of 400 logical I/Os.

Hi Jonathan, Mike and all,

A point that no one appears to have made yet is that Robin's "test" succeeds in proving the obvious, but doesn't prove the more important point: that such a reduction in logical I/Os is actually beneficial.

What do I mean?

Well, by doubling the block size, you effectively halve the number of corresponding blocks. Therefore, (obviously) when you read the same amount of data, Oracle is going to issue half the number of block reads, thus yes, halving the number of consistent gets (and, all things being equal, halving the number of corresponding physical block read requests).
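To make the arithmetic concrete, here's a back-of-the-envelope sketch. The row size and pctfree are assumed figures, not taken from Robin's test; the 380,000 rows are borrowed from Jonathan's example query above.

```python
# Illustrative arithmetic only: doubling the block size roughly halves
# the number of blocks (and hence consistent gets) for the same data.
ROW_BYTES = 100          # assumed average row size (hypothetical)
TOTAL_ROWS = 380_000     # row count borrowed from the example query

def blocks_needed(block_size, pctfree=0.10):
    """Blocks required to hold TOTAL_ROWS, ignoring block overhead."""
    usable = block_size * (1 - pctfree)
    rows_per_block = int(usable // ROW_BYTES)
    return -(-TOTAL_ROWS // rows_per_block)  # ceiling division

blocks_8k = blocks_needed(8 * 1024)    # 5206 blocks
blocks_16k = blocks_needed(16 * 1024)  # 2586 blocks
print(blocks_8k, blocks_16k)           # roughly a 2:1 ratio
```

Half the blocks, so half the gets, but note the *data volume* held in those blocks is exactly the same.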

*But* what hasn't been taken into consideration is that the *volume* of data read is identical. Also, as Howard has pointed out, if the O/S uses a buffered file system, the *number of O/S get calls* could also be identical. Therefore, although from Oracle's perspective the stats look twice as good and imply things are likewise twice as good, in reality the net effect on actual *response times* would be negligible, as under the covers the same amount of work is being performed regardless.
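A small sketch of that point, assuming (hypothetically) a buffered file system that reads fixed 4K pages: the O/S page count depends only on the volume of data, not on the Oracle block size chosen.

```python
# With a buffered file system, the O/S reads fixed-size pages (4K is an
# assumed figure here), so the number of O/S reads depends only on the
# *volume* of data, not on the Oracle block size.
OS_PAGE = 4 * 1024

def os_page_reads(oracle_block_size, blocks_read):
    """O/S page reads needed to service the given Oracle block reads."""
    volume = oracle_block_size * blocks_read
    return volume // OS_PAGE

# The same 8MB of data, read as 1024 x 8K blocks or 512 x 16K blocks:
print(os_page_reads(8 * 1024, 1024))   # 2048 page reads
print(os_page_reads(16 * 1024, 512))   # 2048 page reads - identical
```

Oracle's consistent-get counter halves; the work done at the O/S level does not.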

The way I illustrate this is: someone wants to fill a swimming pool with the water from another pool. The owner of the empty pool says, "here, use this really big bucket, you won't need to do as much work". However, if the owner of the full pool insists you use a smaller bucket, what happens? You perform (say) 4 trips to fill the one big bucket. You then empty the contents of the big bucket into the empty pool and the owner counts *1*. You repeat the process, 4 more little buckets to fill the big bucket, and the count becomes *2*.

But what have you actually saved? How much faster is the pool being filled compared to both owners using the same sized buckets? Not a lot.

And with buffered I/O, the O/S is insisting on you using a smaller bucket...

What would have been interesting is not the difference in Oracle's count of consistent reads, but the difference in actual *response times*.

People have already mentioned various bad side effects of larger block sizes, but another one to add to the list, again with buffered I/O, is the example of a simple unique scan. By insisting on larger block sizes, the actual amount of work to perform a unique scan can *increase* rather than decrease, even with a reduction of 1 in the index height, because of the extra amount of data that must be accessed as a minimum, due to the multiple O/S get() calls per Oracle block.
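A sketch of that unique-scan argument, with assumed figures (4K O/S pages, hypothetical index heights): a unique scan visits one block per index level, and each Oracle block costs one O/S get() per page under buffered I/O.

```python
# Minimum O/S work for a unique index scan, assuming buffered I/O with
# one get() per 4K O/S page for each Oracle block visited.
OS_PAGE = 4 * 1024

def pages_touched(index_height, block_size):
    """O/S page gets for one unique scan: one block read per index level."""
    pages_per_block = block_size // OS_PAGE
    return index_height * pages_per_block

# Hypothetical: height 3 with 8K blocks vs height 2 with 32K blocks.
print(pages_touched(3, 8 * 1024))    # 6 O/S page gets
print(pages_touched(2, 32 * 1024))   # 16 O/S page gets - more work
```

So even though the larger block size saved one level of index height, the minimum amount of data dragged through the O/S more than doubled.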

People who simplistically claim that larger block sizes for indexes are always preferable, and who use a reduction in Oracle consistent reads as "proof", are unfortunately not taking the full picture into consideration.


Richard

Received on Tue Mar 30 2004 - 04:16:25 CST
