RE: where clause filter from an array or use temp table?

From: Iotzov, Iordan <IIotzov_at_newsamerica.com>
Date: Thu, 18 Dec 2014 15:48:06 -0500
Message-ID: <9287D00721407A4CBDC9925C789DBB6703179A86B8_at_nam-wil-exc-l03.newsamerica.com>



Hi Jeff,

As Stefan mentioned, one way to deal with the data volatility (0 to 300 records) is to rely on dynamic sampling (DS). DS should help the optimizer generate a plan that suits the specific data volume/distribution seen at hard parse time.
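For example, DS can be encouraged on the collection with a hint along these lines (a sketch only; the EMP table, the alias t, and the sampling level are illustrative choices, not taken from your code):

    select e.*
      from emp e
     where e.emp_id in (
             -- ask the optimizer to sample the collection at hard parse
             select /*+ dynamic_sampling(t 2) */ t.column_value
               from table(cast(t_numbers as num_tab)) t
           );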

Please note that dynamic sampling fires only the first time a query is issued (during the hard parse). Subsequent executions reuse that plan, even if the data volume and/or distribution in the tables has changed significantly. A solution is to force a hard parse every time. That can be done either by changing the SQL text slightly or by using the VPD-based method described at http://oracle-randolf.blogspot.com/2009/02/how-to-force-hard-parse.html.
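For reference, the gist of that method is roughly this (a sketch; schema, table, and function names are placeholders, see the linked post for the real thing): a dynamic VPD policy function returns a new, always-true predicate on every call, so the cursor can never be shared and every execution hard parses.

    -- Sketch of the VPD forced-hard-parse trick (illustrative names):
    create or replace function force_hard_parse (
      in_schema varchar2,
      in_object varchar2
    ) return varchar2
    is
      n pls_integer;
    begin
      n := round(dbms_random.value(1, 1000000));
      return n || ' = ' || n;   -- e.g. "427 = 427", always true, never shareable
    end;
    /

    begin
      dbms_rls.add_policy(
        object_schema   => 'SCOTT',            -- placeholder: target schema
        object_name     => 'EMP',              -- placeholder: target table
        policy_name     => 'FORCE_HARD_PARSE',
        function_schema => 'SCOTT',
        policy_function => 'FORCE_HARD_PARSE',
        policy_type     => dbms_rls.dynamic    -- re-evaluated on every execution
      );
    end;
    /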

HTH,
Iordan Iotzov

Iordan Iotzov | Lead Database Administrator, Information Services | News America Marketing 20 Westport Road, 1st floor, Wilton CT 06897 | P 203.563.6472 | C 203.423.9269 iiotzov_at_newsamerica.com<mailto:iiotzov_at_newsamerica.com>| newsamerica.com<http://www.newsamerica.com/> | smartsource.com<http://www.smartsource.com/>


From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Jeff C
Sent: Thursday, December 18, 2014 12:59 PM
To: oracle-l_at_freelists.org
Subject: where clause filter from an array or use temp table?

So I have a procedure set up with a string parameter that is intended to be a comma-separated list of numbers, and it returns a cursor. Sometimes they pass in 1 value and sometimes 300. I then take this string and put it into a table-of-numbers collection.
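Roughly like this (a sketch; the type name num_tab matches what I use below, but the splitter function to_num_tab is just illustrative):

    create or replace type num_tab as table of number;
    /

    -- Sketch of the string-to-collection step (helper name is made up):
    create or replace function to_num_tab (p_csv in varchar2)
      return num_tab
    is
      l_result num_tab := num_tab();
    begin
      -- walk the comma-separated values one token at a time
      for i in 1 .. regexp_count(p_csv, '[^,]+') loop
        l_result.extend;
        l_result(l_result.count) :=
          to_number(trim(regexp_substr(p_csv, '[^,]+', 1, i)));
      end loop;
      return l_result;
    end;
    /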

Then the query uses the array like so in the where clause: where emp_id in (select column_value from table(cast(t_numbers as num_tab)))

I think I am running into bind peeking issues: the first time the query is called they may pass in only 1 value, so the optimizer decides to use an index; the next time they pass in 300 values, where a full table scan might be better, but because it is a bind variable it reuses the first plan.

Now I was thinking of changing this process to load the values into a global temporary table (on commit preserve rows) and then joining to the temp table instead.
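Roughly what I have in mind (a sketch; the GTT name and the base table are illustrative):

    create global temporary table emp_filter_gtt (
      emp_id number
    ) on commit preserve rows;

    -- inside the procedure, per call:
    delete from emp_filter_gtt;                      -- clear the previous list

    insert into emp_filter_gtt (emp_id)
    select column_value from table(cast(t_numbers as num_tab));

    select e.*
      from emp e
      join emp_filter_gtt g on g.emp_id = e.emp_id;  -- join instead of IN-list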

I have tested this method and it proves to be a lot faster. But I am wondering whether it is a good idea: now I will have a bunch of redo generated for all the deleting and inserting into the global temp table. I have a lot of procedures that use the array method. Not all of them have problems, but I would probably eventually change them all over.

What do you think about this approach?
We also have some procedures where passing in a NULL for that parameter means do not filter on that column at all. So the where clause looks something like this: where (p_values is null or emp_id in (select column_value from table(cast(t_numbers as num_tab))))

Thanks for any input.







Received on Thu Dec 18 2014 - 21:48:06 CET
