Re: EXPLAIN PLAN madness!

From: Nuno Souto <nsouto_at_optushome.com.au.nospam>
Date: Sat, 17 Nov 2001 12:35:18 GMT
Message-ID: <3bf6562f.905039@news>


In a sudden spasm of awareness,
Ed Stevens doodled thusly:

>Well, sometimes my awareness comes in spasms. Other times it simply oozes in at
>an excruciatingly slow pace ... ;-)

:-D Don't worry, that's in there for the benefit of others, not this NG!

>
>Differences in sort_area parms seemed to be the difference (see my reply to Mark
>Townsend for more detail.) I'm curious about your desire to avoid hash joins.
>What is the problem with them? Below are the two plans (before I got the two
>databases to perform the same). As you can see, the one with the hash joins is
>far cheaper than the ones without . . . .
>

A cheaper plan is not necessarily an indication that it will execute faster. There is another concurrent thread to this one where you'll find an example of exactly that. The cost "advertised" by EXPLAIN PLAN is a relative estimate, not an absolute cost.
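If you want to see it for yourself, run the statement through EXPLAIN PLAN and look at the COST column in PLAN_TABLE. Rough sketch only (statement id and table names invented):

EXPLAIN PLAN SET STATEMENT_ID = 'hash_test' FOR
  SELECT a.order_id, b.customer_name
  FROM   orders a, customers b
  WHERE  a.customer_id = b.customer_id;

SELECT LPAD(' ', 2 * level) || operation || ' ' || options AS step,
       object_name, cost
FROM   plan_table
WHERE  statement_id = 'hash_test'
START WITH id = 0 AND statement_id = 'hash_test'
CONNECT BY PRIOR id = parent_id AND statement_id = 'hash_test';

The COST you get back is only the optimizer's relative estimate; it says nothing reliable about elapsed time.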

The problem with hash joins is that the optimizer will just as gladly hash the access to a 10,000-row table as to a 1,000,000-row table. IOW, it works, and works very well, for small lookup tables, but it is a complete disaster for very large tables (VLTs) unless special care is taken.
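Most of that "special care" is making sure the session actually has the memory to hold the build side. At session level, something like this (sizes picked out of thin air, not a recommendation):

ALTER SESSION SET sort_area_size = 1048576;  -- 1 MB for sorts
ALTER SESSION SET hash_area_size = 4194304;  -- 4 MB; left unset it defaults to 2 * sort_area_size

If hash_area_size is too small for the table being hashed, the build spills to TEMP and the "cheap" hash join turns into a pile of extra I/O.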

I've had plenty of examples of joining two tables, one with 600K rows and another with 250K rows, where the optimizer simply insists on hashing the second table, no matter what. Unless I have the memory available to hash a key set that size, this usually spells disaster at runtime, even though the hash join's cost comes out much, much smaller than the nested loops cost.
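About the only lever I've found when it gets that stubborn is to spell the join out with hints. Something along these lines, with table and index names invented just to show the shape:

SELECT /*+ ORDERED USE_NL(b) INDEX(b big_b_pk) */
       a.col1, b.col2
FROM   big_a a,
       big_b b
WHERE  a.id = b.id;

ORDERED makes the FROM clause dictate the join order, and USE_NL(b) forces the second table to be probed through nested loops (via its index) instead of being hashed.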

Again, this seems to change from point version to point version. 8.0.5 behaves completely differently from 8.0.6 on the same database. The same happens between 8.1.5 and 8.1.7. Just too unreliable and unpredictable for my taste. And there is next to no doco whatsoever on what these changes are and what affects behaviour from point release to point release. Even metaclick has conflicting or simply incorrect info.

That's why I'm now turning the darn thing off unless there is a very specific (and proven) case to use it. A smaller cost on its own is not enough.
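In practice that means something like this around a batch: hash joins off by default, switched back on only around the statement that has actually proven itself (sketch only):

ALTER SESSION SET hash_join_enabled = FALSE;

-- ... the bulk of the work runs on nested loops / sort-merge ...

ALTER SESSION SET hash_join_enabled = TRUE;
-- the one proven hash-join statement goes here
ALTER SESSION SET hash_join_enabled = FALSE;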

I'm told that 9i solves this problem by using a different algorithm to calculate access paths, giving more weight to I/O than to buffer access and weighing in _all_ I/O, not just the input table. It should catch hash joins that end up causing more I/O than they save. We'll see...

Cheers
Nuno Souto
nsouto_at_optushome.com.au.nospam

Received on Sat Nov 17 2001 - 06:35:18 CST
