Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Home -> Community -> Usenet -> c.d.o.server -> Re: optimize this!

Re: optimize this!

From: Brian Dick <bdick_at_cox.net>
Date: Wed, 6 Oct 2004 12:42:34 -0400
Message-ID: <1qwnhcmginupg.voodgd7dxvvb$.dlg@40tude.net>


On Sun, 03 Oct 2004 21:46:36 -0700, Daniel Morgan wrote:

> FC wrote:
> 

>> Hi,
>> I am trying to optimize the following sql statement (oracle 10g) but I can't
>> get anything better than an index range scan.
>>
>> select distinct to_char(shopping_date,'MONTH/YYYY') d,
>> trunc(shopping_date,'MM') r
>> from bills
>> where owner_id = 25
>> order by 2 desc
>>
>> 0   SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=39 Bytes=390)
>> 1 0   SORT (ORDER BY) (Cost=3 Card=39 Bytes=390)
>> 2 1     SORT (UNIQUE) (Cost=2 Card=39 Bytes=390)
>> 3 2       INDEX (RANGE SCAN) OF BILLS_IDX2 (INDEX) (Cost=1 Card=45 Bytes=450)
>>
>>
>> Table bills comes with the following indexes (table and indexes have been
>> analyzed beforehand)
>>
>> 1) a primary key index on column id which is not part of this query.
>> 2) a btree index on columns owner_id and id (used by other queries)
>> 3) a btree index on columns owner_id and shopping_date (this one is actually
>> picked by the optimizer)
>> 4) a function index on owner_id, to_char(shopping_date,'MONTH/YYYY'),
>> trunc(shopping_date,'MM') which is ignored due to low selectivity I presume.
>>
>> So, my question is, given the fact that the query is expected to select
>> around 50 rows from a table containing some 10,000 rows, is the index range
>> scan on index number 3 the best I can get?
>>
>> Does it make any sense to define function indexes with functions like trunc
>> that "flatten" values and therefore reduce selectivity?
>>
>> Of course in my case it does not make sense to keep this function index as
>> it is not going to be used ever.
>>
>> Thanks,
>> Flavio
> 
> You want better than a cost of 3 ... why? Don't you have a job to do?
> 
> Personally I'd expect that a full table scan wouldn't hurt with such a
> small table.

BILLS_IDX2 is a covering non-unique index. You can't get much faster than an index-only range scan. The sorts are more of a concern. You might be able to eliminate the outer sort with a descending index, but the distinct sort is going to be the one doing the heavy lifting. They are short rows, so at least you get the benefit of having many rows per block.
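
[A sketch of the descending-index idea, assuming BILLS_IDX2 covers (owner_id, shopping_date); the index name here is hypothetical. Oracle treats a DESC key column as a function-based index internally, and it lets the ORDER BY ... DESC be satisfied by index order instead of a sort:]

```sql
-- Hypothetical descending index so rows come back in
-- (owner_id, shopping_date DESC) order, avoiding the outer sort.
CREATE INDEX bills_idx2_desc ON bills (owner_id, shopping_date DESC);
```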

When you have 10M rows in the table, how many rows do you expect in the intermediate result before the distinct sort?
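
[One way to shrink that intermediate result, sketched by the editor rather than taken from the thread: push the DISTINCT down onto the truncated date before applying the formatting, so the unique sort sees at most one row per month per owner instead of one per bill:]

```sql
-- Deduplicate on the month first; format afterwards.
-- The inner query can still use the (owner_id, shopping_date) index.
SELECT TO_CHAR(r, 'MONTH/YYYY') d, r
FROM (SELECT DISTINCT TRUNC(shopping_date, 'MM') r
        FROM bills
       WHERE owner_id = 25)
ORDER BY r DESC;
```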

