Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


RE: Cost/Time Anomaly

From: Steve Ollig <sollig_at_lifetouch.com>
Date: Fri, 29 Dec 2006 14:30:17 -0600
Message-ID: <76CE541B04343E41AFB23F126A46FE1630148A@exchlt1.LIFETOUCH.NET>


don't get hung up on comparing the "costs" of 2 different queries. cost is used by the CBO to give it a relative comparison of different plans for the SAME query. just think of it that way instead.  

the point is, if you want to outer join all 20M rows from B1 to 2 other ~20M row tables, then hash joins are very likely to be the fastest and least costly way to get the job done. now if you want to see some of the rows sooner, there might be something you could do to accomplish that. but if you want all the rows returned as fast as possible, you may already have it.  
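To illustrate the "see some of the rows sooner" option: one common approach is the FIRST_ROWS hint, which biases the CBO toward plans that return initial rows quickly (often nested loops instead of hash joins). This is only a sketch using the table names from the query below; whether it helps must be verified against the actual plan, and total run time for all ~20M rows will usually be worse than the hash-join plan.

```sql
-- Sketch: ask the optimizer to optimize for the first 100 rows rather
-- than total throughput. This typically trades a longer overall run
-- time for a much shorter time-to-first-row.
SELECT /*+ FIRST_ROWS(100) */ *
FROM   bt1, bt2
WHERE  bt1.id = bt2.id(+);
```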

just curious - what are you going to do with all those rows?

-----Original Message-----
From: blr_dba [mailto:deepak_blr_at_youtele.com]
Sent: Friday, December 29, 2006 2:04 PM
To: Steve Ollig; ORACLE-L
Subject: Re: Cost/Time Anomaly

Thanks for your reply...  

There is no filter in this query.  

I want to know why, in the second case, the cost is lower but the time taken to execute the query is higher.

Any suggestion on improving the performance of the query? I have two CPUs and the big tables are hash partitioned on ID column into two partitions.
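Since the big tables are hash partitioned on ID across two partitions and two CPUs are available, a parallel (potentially partition-wise) hash join is worth testing. The hints and degree below are a sketch, not a recommendation; validate the resulting plan before relying on it.

```sql
-- Sketch: request parallel degree 2 so each slave can work on one hash
-- partition. With the big tables equipartitioned on ID, the CBO can use
-- a full partition-wise join, avoiding redistribution between slaves.
SELECT /*+ PARALLEL(bt1, 2) PARALLEL(bt2, 2) PARALLEL(bt3, 2) */ *
FROM   bt1, bt2, bt3
WHERE  bt1.id = bt2.id(+)
AND    bt1.id = bt3.id(+);
```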

And your resultset will contain at least as many rows as B1 (~20M as you said), perhaps many more - correct? Given that, and the fact that B2 and B3 also contain ~20M rows, the CBO seems to be making a perfectly logical choice.  

Is it possible you don't want such a large resultset returned? Is there a filter you left off that would limit the # of rows?

-----Original Message-----
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of blr_dba
Sent: Friday, December 29, 2006 1:03 PM
To: 'ORACLE-L'
Subject: Cost/Time Anomaly

Hi Gurus,  

I am stuck on a tuning problem and need your expertise to resolve the issue.

I have 3 huge tables (~20M rows each) and many small lookup tables joined in a query as follows...

Assume:
Big tables : BT1, BT2, BT3
Small tables: ST1, ST2, ST3, ST4, ST5, ST6

select * from BT1, BT2, BT3, ST1, ST2, ST3, ST4, ST5, ST6 where

BT1.id=BT2.id(+) and 
BT1.id=BT3.id(+) and 
BT1.id=ST1.id(+) and 
BT1.id=ST2.id(+) and 
BT1.id=ST3.id(+) and 
BT1.id=ST4.id(+) and 
BT1.id=ST5.id(+) and 
BT1.id=ST6.id(+); 

The CBO is using hash joins, the cost is very high (400K), and we are seeing a lot of "direct path write" waits.
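The direct path write waits suggest the hash join workareas are spilling to TEMP, so checking PGA/workarea sizing is a reasonable first step. A sketch, assuming automatic PGA memory management (9i/10g); the views and parameter names should be confirmed on your version.

```sql
-- Sketch: direct path writes during hash joins usually mean workareas
-- spilled to TEMP. Check how many workarea executions ran one-pass or
-- multi-pass instead of optimal (fully in memory) ...
SELECT name, value
FROM   v$pgastat
WHERE  name LIKE 'workarea executions%';

-- ... and review the current PGA configuration.
SELECT name, value
FROM   v$parameter
WHERE  name IN ('pga_aggregate_target', 'workarea_size_policy');
```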

I tried to remove the outer joins to the small lookup tables by using scalar subqueries. The cost dropped drastically (to 10K), but the overall execution time increased.
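For reference, a scalar-subquery rewrite of the lookup joins might look like the sketch below (the "val" column is a hypothetical placeholder for whatever each lookup table returns). Each scalar subquery is evaluated per driving row, which is one plausible reason the execution time grew even though the reported cost fell.

```sql
-- Sketch: each lookup-table outer join becomes a correlated scalar
-- subquery in the SELECT list. A scalar subquery must return at most
-- one row per BT1 row, and runs once per driving row (subject to
-- subquery caching), so ~20M driving rows can make this slower than
-- a single hash join despite a lower optimizer cost.
SELECT bt1.*, bt2.*, bt3.*,
       (SELECT st1.val FROM st1 WHERE st1.id = bt1.id) AS st1_val,
       (SELECT st2.val FROM st2 WHERE st2.id = bt1.id) AS st2_val
FROM   bt1, bt2, bt3
WHERE  bt1.id = bt2.id(+)
AND    bt1.id = bt3.id(+);
```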

Badly need your expertise to get rid of this issue.

Also, I would like to know: if the CBO cost is lower in the second case, why is the overall execution time higher? Isn't the cost proportional to the time taken to execute the query?



This email has been scanned by the MessageLabs Email Security System. For more information please visit http://www.messagelabs.com/email


--
http://www.freelists.org/webpage/oracle-l
Received on Fri Dec 29 2006 - 14:30:17 CST

