Re: Long Parse Times, Huge In Lists

From: Tim Gorman <tim_at_evdbt.com>
Date: Fri, 23 Oct 2009 17:37:57 +0000
Message-ID: <W1868615437200551256319477_at_webmail34>



Larry,

Even though they may not have disabled the auto stats gathering job itself in DBMS_SCHEDULER, they may have locked stats on some tables, disabled table monitoring, or otherwise prevented that job from gathering stats. Can you check how many application tables have no stats? Then check the setting of "optimizer_dynamic_sampling" (the default is 2 in 10g). The levels are defined in the Oracle Database "Reference" manual (section 14.5.6.4, "Dynamic Sampling Levels") as follows, when the dynamic sampling level comes from a cursor hint or from the OPTIMIZER_DYNAMIC_SAMPLING initialization parameter:

Level 0: Do not use dynamic sampling.
Level 1: Sample all tables that have not been analyzed if the following criteria are met: (1) there is at least 1 unanalyzed table in the query; (2) this unanalyzed table is joined to another table or appears in a subquery or non-mergeable view; (3) this unanalyzed table has no indexes; (4) this unanalyzed table has more blocks than the number of blocks that would be used for dynamic sampling of this table. The number of blocks sampled is the default number of dynamic sampling blocks (32).
Level 2: Apply dynamic sampling to all unanalyzed tables. The number of blocks sampled is two times the default number of dynamic sampling blocks.
Level 3: Apply dynamic sampling to all tables that meet Level 2 criteria, plus all tables for which standard selectivity estimation used a guess for some predicate that is a potential dynamic sampling predicate. The number of blocks sampled is the default number of dynamic sampling blocks. For unanalyzed tables, the number of blocks sampled is two times the default number of dynamic sampling blocks.
Level 4: Apply dynamic sampling to all tables that meet Level 3 criteria, plus all tables that have single-table predicates that reference 2 or more columns. The number of blocks sampled is the default number of dynamic sampling blocks. For unanalyzed tables, the number of blocks sampled is two times the default number of dynamic sampling blocks.
Levels 5, 6, 7, 8, and 9: Apply dynamic sampling to all tables that meet the previous level's criteria using 2, 4, 8, 32, or 128 times the default number of dynamic sampling blocks, respectively.
Level 10: Apply dynamic sampling to all tables that meet the Level 9 criteria using all blocks in the table.
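To put those first two checks into concrete terms, something like the following sketch should show how many application tables lack statistics, whether any stats are locked, and what the sampling level is set to (the "APP" schema name is a placeholder to adapt to your environment):

```sql
-- Count application tables with no optimizer statistics
-- ('APP' is a placeholder for the actual application schema).
SELECT COUNT(*)
  FROM dba_tables
 WHERE owner = 'APP'
   AND last_analyzed IS NULL;

-- Also worth checking: tables whose stats have been locked.
SELECT table_name, stattype_locked
  FROM dba_tab_statistics
 WHERE owner = 'APP'
   AND stattype_locked IS NOT NULL;

-- Current dynamic sampling level (default is 2 in 10g).
SHOW PARAMETER optimizer_dynamic_sampling
```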

The upshot: if the long parse times are not shared pool contention of some kind (i.e. "latch ..." or "library cache ..." waits), then perhaps the time is being spent in dynamic sampling. In that case, the resolution would be to enable stats gathering so that dynamic sampling is no longer triggered.
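If missing stats do turn out to be the culprit, a minimal sketch of that resolution is to unlock stats (a no-op if they were never locked) and gather them on the affected schema; again, "APP" is a placeholder schema name:

```sql
-- Unlock stats in case they were locked (harmless otherwise).
EXEC DBMS_STATS.UNLOCK_SCHEMA_STATS('APP');

-- Gather schema stats so the optimizer no longer needs to sample
-- at parse time.
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(
       ownname => 'APP',
       options => 'GATHER AUTO');
```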

Another way to detect whether dynamic sampling is being used is to run "select * from table(dbms_xplan.display_awr('sql-id-value'))" for a SQL_ID exhibiting these long parse times you describe, and then check whether the "Note" section at the bottom of the DBMS_XPLAN output indicates that dynamic sampling took place.
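For example (keeping the 'sql-id-value' placeholder; the exact note wording can vary by version, but in 10g it reads along the lines shown in the comment):

```sql
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('sql-id-value'));

-- Look for a trailing section like:
--   Note
--   -----
--      - dynamic sampling used for this statement
```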

Hope this helps...

-Tim

-----Original Message-----

From: Larry G. Elkins [mailto:elkinsl_at_flash.net]
Sent: Friday, October 23, 2009 10:58 AM
To: oracle-l_at_freelists.org
Subject: Long Parse Times, Huge In Lists

Listers,

10.2.0.4 EE on AIX 5.3. Asked to look at a new COTS application in a development environment. Huge in-lists, 1000's of literal values ("column_name in (1,2,3..1000) or column_name in (1001,1002,...2000) or ..."). Surrogate non-meaningful values in the list, most probably building the statement within a cursor (hey, databases can join tables!!!).

Anyway, tracing shows typical parse times of 20 to 200 seconds. Execute and fetch < 0.5 seconds. So, considering we cannot change the code, and it is unlikely we can get the vendor to change the code, any strategies for speeding up the parse times?

I'm not familiar with this database, but it looks like a pretty generic out-of-the-box setup, complete with the automatic stats gathering job. Tested removing histograms to minimize evaluation paths; no real improvement. Cursor sharing tried; it sometimes gets a hit and avoids the hard parse, but it is very hit or miss in this case.

ASMM is used, and in some cases a resize operation of 5+ minutes gets triggered during the parse, so then the bulk of the time in the trace is on "latch: shared pool" while the resize is occurring: parse time still shows 20-200 seconds, and then 5+ minutes on the shared pool latch. Considering turning off ASMM, or at least working the min sizes on various components, probably the former. But that only addresses the occasional case where a long resize operation is triggered, not the typical 20-200 second parse cases.

So, anything that can be done to reduce the parse times? When it is in-house code, we have all kinds of options to eliminate the huge in-list altogether. But being a COTS application, that's off the table, for now anyways.

Larry G. Elkins
elkinsl_at_flash.net

--
http://www.freelists.org/webpage/oracle-l
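[Editor's sketch along the lines of the cursor-sharing experiment Larry describes: trying literal replacement at the session level before touching the instance. Whether FORCE actually helps depends on the statements differing only in their literal values.]

```sql
-- Try literal replacement for this session only (a sketch; FORCE
-- helps only when statements differ solely in their literals).
ALTER SESSION SET cursor_sharing = FORCE;

-- After re-running the statement, check session parse time:
SELECT n.name, s.value
  FROM v$sesstat s
  JOIN v$statname n ON s.statistic# = n.statistic#
 WHERE n.name = 'parse time elapsed'
   AND s.sid = SYS_CONTEXT('USERENV', 'SID');
```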

Received on Fri Oct 23 2009 - 12:37:57 CDT