Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Newsgroup: comp.databases.oracle.server

Re: Optimizer and block size changes = big problem.

From: Nuno Souto <nsouto_at_nsw.bigpond.net.au.nospam>
Date: Thu, 10 May 2001 13:29:04 GMT
Message-ID: <3afa82a8.2140734@news-server>

On Thu, 10 May 2001 11:49:24 +0100, "Jonathan Lewis" <jonathan_at_jlcomp.demon.co.uk> wrote:

>However, by halving the block size, you will
>have affected the index stats to the effect that
>typically:
> average data blocks per key will have doubled
> average index blocks per key will have doubled
> clustering factor may have increased

Thanks a lot for that, Jonathan. Basically, you got it. Think logical disk reads rather than data blocks and you're spot on, as far as my reasoning based on Anjo's info goes.
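To make Jonathan's point concrete, here's a back-of-envelope sketch with made-up numbers (rows per key, rows per block are all hypothetical): halve the block size and each block holds half the rows, so the per-key block counts in the index statistics double.

```python
import math

# Hypothetical figures, purely for illustration.
rows_per_key = 400           # rows matching one index key value
rows_per_block_16k = 100     # rows that fit in a 16K block
rows_per_block_8k = rows_per_block_16k // 2  # half the block, half the rows

# Average data blocks per key, before and after halving the block size.
blocks_per_key_16k = math.ceil(rows_per_key / rows_per_block_16k)  # 4
blocks_per_key_8k = math.ceil(rows_per_key / rows_per_block_8k)    # 8

print(blocks_per_key_16k, blocks_per_key_8k)  # 4 8
```

Same rows, same keys, but the stats the CBO feeds on have doubled.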

>
>If tablescans are then costed according to
>Oracle's perception of the O/S max i/o size
>this might be enough to push Oracle to infer
>that index paths are now twice as costly as
>they used to be.

Very much so. My mid-size tables suddenly became candidates for hash joins and full scans. The really large ones went through the roof. Physical reads didn't go up too much, thanks to multiblock reads (db_file_multiblock_read_count). Confirmed by comparing V$SQLAREA figures between this database and a copy still at 16K: logical reads through the roof, disk reads nowhere near as much.
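That split between logical and physical reads can be sketched with assumed sizes (table size and O/S max I/O size are hypothetical): tablescan I/O is bounded by the O/S max I/O size, so each physical read simply picks up twice as many of the smaller blocks.

```python
# Hypothetical sizes, for illustration only.
table_mb = 800      # assumed table size
max_io_kb = 128     # assumed O/S max I/O size per multiblock read

def scan_counts(block_kb):
    logical = table_mb * 1024 // block_kb    # every block touched is a logical read
    blocks_per_io = max_io_kb // block_kb    # blocks fetched per multiblock read
    physical = logical // blocks_per_io      # physical I/Os for a full scan
    return logical, physical

logical_16k, physical_16k = scan_counts(16)  # 51200 logical, 6400 physical
logical_8k, physical_8k = scan_counts(8)     # 102400 logical, 6400 physical
```

Logical reads double; physical reads don't move, which is exactly the V$SQLAREA pattern above.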

What I'd like is a *predictable* way of setting the threshold that bumps the optimizer into hash-join territory. Right now it's too hazy; too many variables affect it. Something like a ratio of physical to logical reads would do it, I guess? Let's see what 9i has in store.

If it hadn't been for those two precious OPTIMIZER_INDEX_ parameters (COST_ADJ and CACHING), most likely I'd still be in trouble. Even with them it took me a whole night to figure out a sensible approach, rather than pot-luck testing all 100 x 100 combinations!
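For anyone following along, OPTIMIZER_INDEX_COST_ADJ is a percentage (default 100) that rescales the CBO's index-path cost. A sketch with hypothetical costs shows how dialling it down can pull a plan back from the full scan the doubled stats pushed it into:

```python
# OPTIMIZER_INDEX_COST_ADJ is a percentage; 100 leaves the cost alone,
# 50 tells the optimizer index access is half as expensive as it computed.
def adjusted_index_cost(base_cost, cost_adj=100):
    return base_cost * cost_adj / 100

# Hypothetical costs, for illustration only.
full_scan_cost = 800
index_cost_after_halving = 1000  # doubled per-key stats pushed it past the scan

# At the default, the full scan wins...
print(adjusted_index_cost(index_cost_after_halving, 100) > full_scan_cost)  # True
# ...at 50, the index path wins again.
print(adjusted_index_cost(index_cost_after_halving, 50) < full_scan_cost)   # True
```

Which is why finding a sensible pair of values beats brute-forcing the whole grid.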

>
>I wonder how much impact this will have on the
> 'what is the best size of a data block'
>argument ?
>

Nice one, eh? Might cause a stir, although it's IMHO a somewhat isolated case. I still think this is partly due to the particular combination of tables/indexes used by Peoplesoft. They have a vast number of tables with in excess of 8 indexes per table. Many very large ones with 10 and 12, mostly on repeated column sets. Kind of:

index01(col_a,col_b,col_c,col_d)
index02(col_d,col_a,col_b,col_c)
and a few more permutations and duplications, that sort of stuff...

The larger and more volatile the table, the more indexes they stick on it. Doesn't take long for an optimizer to go "duh!" over this. I'm not an optimizer and even I do!

And let's not talk about the %TMP tables with 0 rows when I take the stats, suddenly growing to 4M rows with multiple updaters during the day. Buffer busy waits get a thrashing if I don't set FREELISTS carefully. And does the Peoplesoft doco talk about this? Noooooo......

Do you see why I keep insisting I need multiple tunable buffer caches in 9i? One buffer cache per tablespace, a la DB2? Much easier to control this kind of DB design with those. I'll gladly trade LMTs (locally managed tablespaces) for one cache per tablespace! ;-)

KEEP, RECYCLE and DEFAULT are not enough, although I've managed to put them to good use with Peoplesoft. My KEEP and DEFAULT hum along at over 99% hit rate, while RECYCLE zips along around 75%. Fine by me, although you don't wanna know how I'm now using them and their LRU latches! Talk about walking a tightrope...
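For the record, those percentages are the classic buffer-cache hit ratio, computed per pool; the gets/reads figures below are hypothetical:

```python
# Classic buffer-cache hit ratio: the fraction of block gets that were
# satisfied from the cache rather than by a physical read.
def hit_ratio(logical_gets, physical_reads):
    return 1 - physical_reads / logical_gets

# Hypothetical per-pool numbers.
print(round(hit_ratio(100_000, 800) * 100, 1))     # KEEP/DEFAULT territory: 99.2
print(round(hit_ratio(100_000, 25_000) * 100, 1))  # RECYCLE territory: 75.0
```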

On a normally designed database, I don't expect this kind of indexing to happen at all. And by the looks of it neither did whoever came up with the current algorithm for the optimizer.

Save me from 3rd party "portable database" designers/developers. Worse than users! :-D <just kidding, PS is actually kinda nice>

Cheers
Nuno Souto
nsouto_at_bigpond.net.au.nospam
http://www.users.bigpond.net.au/the_Den/index.html

Received on Thu May 10 2001 - 08:29:04 CDT

