Re: where clause filter from an array or use temp table?

From: Stefan Koehler <contact@soocs.de>
Date: Thu, 18 Dec 2014 20:05:16 +0100 (CET)
Message-ID: <1504761909.465619.1418929516329.open-xchange_at_app01.ox.hosteurope.de>



Hi Jeff,
unfortunately you have not provided any details about your code or Oracle version, so I will be pretty generic here.

>> I think I am running into bind peeking issues where the first time it gets called they may pass in only 1 value and the optimizer will decide to
>> use an index, then the next time they pass in 300 where a full table scan might be better but because it is a bind variable it uses the first plan.

If this snippet is your real problematic part (= wrong cardinality based on the table function), then it seems like your issue is based on the "cardinality heuristics for pipelined table functions". Up to Oracle 11g Release 1, Oracle applies a fixed cardinality heuristic to pipelined and table functions (roughly 8168 rows with a default 8 KB block size). If you are using Oracle 11.1.0.7 or later (as far as I can remember) you can also use dynamic sampling for pipelined and table functions. You may also want to use the extensible optimizer interface to provide the correct cardinality ( http://docs.oracle.com/cd/E11882_01/appdev.112/e10765/ext_optimizer.htm#ADDCI2110 ). It all depends :-))
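To illustrate the dynamic sampling variant, here is a minimal sketch (f_numbers is a hypothetical pipelined function that parses your CSV string into the num_tab collection; emp / emp_id are taken from your post):

  -- default: the optimizer guesses a fixed cardinality for the table
  -- function (roughly 8168 rows with a default 8 KB block size)
  select e.*
  from   emp e
  where  e.emp_id in (select column_value
                      from   table(f_numbers(:p_values)));

  -- from 11.1.0.7 onwards (as far as I remember) dynamic sampling can be
  -- requested for the table function, so the optimizer samples the real
  -- row count at hard parse time
  select /*+ dynamic_sampling(t 2) */
         e.*
  from   emp e,
         table(f_numbers(:p_values)) t
  where  e.emp_id = t.column_value;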

>> Now I will have a bunch of redo being generated for all the deleting and inserting into the global temp table.

The redo is generated because of the undo for the GTT (and possibly its indexes). You may want to use Tom Kyte's trick ( https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:4135403700346803387 ).
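For what it is worth, a minimal sketch of the GTT variant (the object names are mine, and I am assuming the linked trick boils down to a direct-path insert; on 12c the new temporary undo feature (TEMP_UNDO_ENABLED) can reduce the redo even further):

  create global temporary table gtt_emp_ids (
    emp_id number
  ) on commit preserve rows;

  -- a direct-path (APPEND) insert generates only minimal undo for the GTT
  -- segment and therefore less redo; note that after a direct-path insert
  -- the table cannot be re-read in the same transaction (ORA-12838)
  insert /*+ append */ into gtt_emp_ids (emp_id)
  select column_value
  from   table(cast(:t_numbers as num_tab));
  commit;

  select e.*
  from   emp e
  join   gtt_emp_ids g
  on     g.emp_id = e.emp_id;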

>> What do you think about this approach?

I personally would go for the extensible optimizer interface, if possible and if the cardinality estimate of the table function is your real issue. A coding example can be found here - part "the extensible optimiser, table functions and variable in-lists" ( http://www.oracle-developer.net/display.php?id=427 ).
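To give you an idea, a condensed sketch along the lines of that article (all names are hypothetical; f_numbers is again the assumed pipelined function with the CSV string as its only parameter):

  create or replace type num_tab_stats as object (
    dummy number,
    static function ODCIGetInterfaces(p_interfaces out sys.ODCIObjectList)
      return number,
    static function ODCIStatsTableFunction(p_function in  sys.ODCIFuncInfo,
                                           p_stats    out sys.ODCITabFuncStats,
                                           p_args     in  sys.ODCIArgDescList,
                                           p_csv      in  varchar2)
      return number
  );
  /

  create or replace type body num_tab_stats as
    static function ODCIGetInterfaces(p_interfaces out sys.ODCIObjectList)
      return number is
    begin
      p_interfaces := sys.ODCIObjectList(sys.ODCIObject('SYS', 'ODCISTATS2'));
      return ODCIConst.success;
    end;

    static function ODCIStatsTableFunction(p_function in  sys.ODCIFuncInfo,
                                           p_stats    out sys.ODCITabFuncStats,
                                           p_args     in  sys.ODCIArgDescList,
                                           p_csv      in  varchar2)
      return number is
    begin
      -- estimate the cardinality as the number of commas in the CSV + 1
      p_stats := sys.ODCITabFuncStats(
                   nvl(length(p_csv), 0) - nvl(length(replace(p_csv, ',')), 0) + 1
                 );
      return ODCIConst.success;
    end;
  end;
  /

  associate statistics with functions f_numbers using num_tab_stats;

With this in place the optimizer calls ODCIStatsTableFunction at hard parse time and gets a cardinality that actually reflects the list you pass in, so 1 value and 300 values can lead to different plans.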

However, as previously mentioned, this is a pretty generic reply as a lot of important information is missing, sorry.

Best Regards
Stefan Koehler

Oracle performance consultant and researcher
Homepage: http://www.soocs.de
Twitter: @OracleSK

> Jeff C <backseatdba@gmail.com> wrote on 18 December 2014 at 18:59:
>
> So I have a procedure set up with a string parameter that is intended to be a comma-separated list of numbers and returns a cursor. Sometimes they
> pass in 1 value and sometimes 300 values. I then take this string and put it into a table-of-numbers array.
>
> Then the query uses the array like so in the where clause:
> where emp_id in (select column_value from table(cast(t_numbers as num_tab)))
>
> I think I am running into bind peeking issues where the first time it gets called they may pass in only 1 value and the optimizer will decide to
> use an index, then the next time they pass in 300 where a full table scan might be better but because it is a bind variable it uses the first plan.
>
> Now I was thinking of changing this process to load the values into a global temporary table (preserving rows on commit) and then I can join to the
> temp table instead.
>
> I have tested this method and it proves to be a lot faster. But I am wondering if this is a good idea. Now I will have a bunch of redo being
> generated for all the deleting and inserting into the global temp table. I have a lot of procedures that use this method with the array. Not all
> of them have problems but I would probably eventually change them all over.
>
> What do you think about this approach?
> We also have some procedures where, if they pass in a NULL for that parameter, it means do not filter on that column. So the where clause
> looks something like this:
> where (p_values is null or (emp_id in (select column_value from table(cast(t_numbers as num_tab)))))
>
>
> Thanks for any input.
>
