Re: ORACLE query optimiser question

From: Jonathan Lewis <Jonathan_at_jlcomp.demon.co.uk>
Date: Tue, 28 Feb 1995 20:48:01 +0000
Message-ID: <794004481snz_at_jlcomp.demon.co.uk>


In article <793745292snz_at_granby.demon.co.uk>
           bh_at_granby.demon.co.uk "Bruce Horrocks" writes:

:
: Oracle uses statistics gathered on the data to estimate the number of
: rows in each table that would be affected by the query. The values are
: then used to determine the optimum join order.
:

I understand that the number of likely disk accesses is the critical measure used by the optimiser, so Oracle has to consider not only the number of rows and the average number of hits per index key, but also the packing factor of the index and how well clustered the index hits are within the table.
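
For what it's worth, the figures I have in mind are the ones that ANALYZE puts into the data dictionary. Something along these lines would show them (the table and index names are only invented examples):

    analyze table emp compute statistics;
    analyze index emp_dept_idx compute statistics;

    select num_rows, blocks, avg_row_len
    from   user_tables
    where  table_name = 'EMP';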

e.g. 1: All other things being equal, an index packed to 50% efficiency would need more (single block) disk accesses to fetch all the relevant rowids than the same index packed to 90% efficiency.

e.g. 2: A table where the typical 200 rows for a given key value are likely to be packed into about 10 blocks would be more 'desirable' than a table where the 200 rows are scattered one per block.
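
The dictionary columns that (I assume) lie behind examples 1 and 2 can be seen with something like:

    select blevel,
           leaf_blocks,
           avg_leaf_blocks_per_key,   -- index packing side (e.g. 1)
           avg_data_blocks_per_key,   -- row scattering side (e.g. 2)
           clustering_factor
    from   user_indexes
    where  index_name = 'EMP_DEPT_IDX';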

I believe that cases like the second example can then result in table scans taking place 'unexpectedly', since my understanding is that Oracle costs a single block read (e.g. by rowid) the same as one large multi-block read. So with the highly scattered table in example 2, an indexed access to the table would be costed at 200 disk accesses (plus a few for the index), whereas a tablescan of the same cost (i.e. 200 multi-block reads) could cover 6,400 blocks if db_file_multiblock_read_count were set to 32.
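
To put rough numbers on the comparison (taking db_file_multiblock_read_count = 32, and invented names again), a back-of-envelope query like this puts the two costs side by side. The indexed-access formula is only my approximation, not necessarily the one Oracle actually uses:

    select ceil(t.blocks / 32)                   est_scan_reads,
           i.blevel + i.avg_leaf_blocks_per_key
                    + i.avg_data_blocks_per_key  est_indexed_reads
    from   user_tables  t, user_indexes i
    where  t.table_name = 'BIG_TABLE'
    and    i.table_name = t.table_name
    and    i.index_name = 'BIG_TABLE_IDX';

If my reading of the costing is right, the scan wins whenever est_scan_reads comes out lower, which is what produces the 'unexpected' tablescans above.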

Any corrections or confirmation welcomed

-- 
Jonathan Lewis
