Re: Result caching

From: Job Miller <>
Date: Fri, 23 Jan 2015 15:37:50 +0000
Message-ID: <>

>  thus added code to disable cardinality feedback at the session level on the top of the script and my results became consistent.
Is this query meaningful, or just a sample query used to test out your test harness?

If cardinality feedback is indeed working as advertised, fixing a problem caused by the limitations of stats collection or a data-specific issue, do you want to lock your users into bad performance for the sake of consistent testing, or do you want to fix the underlying problem and get good production results the first time?


From: stephen van linge <>
To: "" <>; "" <>; "" <>
Sent: Thursday, January 22, 2015 2:17 PM
Subject: Re: Result caching

I worked offline with Iggy and was able to deduce the cause of the issue; the following is a summary of our efforts and the result.

I ran a trace and tkprof on both the first and second runs of the script, and we noticed that significant physical I/O occurred on the first run but not on the second. On closer examination, each run had a different plan hash value, so Iggy directed me to look at v$sql_shared_cursor. We noticed that the USE_FEEDBACK_STATS flag was the only difference between the child cursors, and on further examination Iggy found that the cause of the divergent plans was cardinality feedback in 11gR2.

Since this query is likely to be run "here and there" rather than in rapid succession, I judged that the more production-accurate result for us would be to benchmark the query with cardinality feedback disabled, so I added code at the top of the script to disable cardinality feedback at the session level, and my results became consistent.

Stephen
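For reference, the diagnosis and workaround described above can be sketched in SQL. The v$sql_shared_cursor query is standard; the "_optimizer_use_feedback" parameter is the usual (undocumented, hidden) switch for cardinality feedback on 11gR2, so treat that part as an assumption and verify it on your own version before relying on it:

```sql
-- Compare child cursors for the benchmarked statement; a 'Y' in
-- USE_FEEDBACK_STATS marks a child created due to cardinality feedback.
SELECT sql_id, child_number, use_feedback_stats
FROM   v$sql_shared_cursor
WHERE  sql_id = '&sql_id';

-- Disable cardinality feedback for the current session.
-- Hidden parameter, assumed name for 11gR2 -- check before use.
ALTER SESSION SET "_optimizer_use_feedback" = FALSE;
```

Placing the ALTER SESSION at the top of the benchmarking script, as described, keeps all timed runs on the same plan.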

From: stephen van linge <>
To: "" <>; "" <>; "" <>
Sent: Tuesday, January 20, 2015 9:17 AM
Subject: Re: Result caching

@Iggy: Ok, I'll look at the trace files; this'll be a good excuse to get to know trace files and tkprof better.

@Rajendra: The first run of the benchmarking script (which runs the query itself 5 times) is just the first time I hit "execute" on it. The re-run is the second time the benchmarking script is run with the same query (another 5 runs). It certainly could be the instrumentation, but as the query in question is a SELECT and is being run exactly as-is over and over again, I believe the possibility of an error is minute.

Stephen
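A minimal way to capture the kind of trace discussed above is standard 10046 event tracing from the session running the benchmark (the tracefile identifier is purely illustrative):

```sql
-- Tag the tracefile so it is easy to find in the trace directory
ALTER SESSION SET tracefile_identifier = 'benchmark_run1';

-- Enable extended SQL trace; level 8 includes wait events
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- ... run the benchmarked query here ...

-- Turn tracing off
ALTER SESSION SET events '10046 trace name context off';
```

The resulting trace file can then be formatted with tkprof, e.g. `tkprof <tracefile>.trc report.txt sys=no sort=prsela,exeela,fchela`, which makes the physical-read difference between the two runs easy to see.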

From: "" <>
To: ;
Sent: Tuesday, January 20, 2015 9:10 AM
Subject: RE: Result caching

How do you tell the difference between a re-run (average .6 seconds) and a second run (average 70 seconds)? My thought is there is some issue with the instrumentation. Another question is how do you validate the results. Is it possible that there is some error in the re-run that is not caught?

Regards

From: [] On Behalf Of stephen van linge
Sent: Tuesday, January 20, 2015 11:56 AM
To:
Subject: Result caching

Hi,

I'm trying to put together a benchmarking wrapper that we can place scripts inside. The script is simple in design; it does the following:

1) Loop over the query being benchmarked x number of times (configurable).
2) Throw out the first 2 runs and average the rest of the runs.
3) Return the average duration in milliseconds of the runs from (2).

So far this has worked great; however, I'm having some weird results. A query I'm benchmarking as an unoptimized case takes ~70 seconds to run on average as reported by the benchmarking wrapper (with 5 runs), which is all fine and dandy, but if I run the wrapper again (with 5 more runs), it completes in 0.6 seconds on average.

I had three thoughts:

1) Maybe the execution plan is being cached. But this doesn't explain why it consistently ran so slowly for 5 runs.
2) Maybe the results are being cached. I reproduced the issue on our DR server and verified that the result cache didn't change in size, so this is not the issue.
3) Maybe it has something to do with the fact that the query is always benchmarked without bind variables (hardcoded bind values from a slow case, to keep the wrapper simple).

We are on single-instance Oracle 11gR2, and I'm running all of this through PL/SQL Developer. I can include the benchmarking wrapper script if necessary. This is more academic at this point; when we run the unoptimized query through the application, it consistently runs at around 70 seconds a run regardless of the number of times it is run.

Thank you for your time,
Stephen Van Linge
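The wrapper logic described above could be sketched in PL/SQL roughly as follows (the run counts, variable names, and the placeholder query are illustrative, not the poster's actual script; DBMS_UTILITY.GET_TIME ticks in hundredths of a second):

```sql
DECLARE
  c_runs    CONSTANT PLS_INTEGER := 5;   -- configurable run count
  c_discard CONSTANT PLS_INTEGER := 2;   -- warm-up runs to throw out
  l_start   PLS_INTEGER;
  l_elapsed NUMBER;                       -- elapsed time in milliseconds
  l_total   NUMBER := 0;
  l_dummy   PLS_INTEGER;
BEGIN
  FOR i IN 1 .. c_runs LOOP
    l_start := DBMS_UTILITY.GET_TIME;

    -- placeholder for the query being benchmarked
    SELECT COUNT(*) INTO l_dummy FROM dual;

    -- GET_TIME ticks are centiseconds; convert to milliseconds
    l_elapsed := (DBMS_UTILITY.GET_TIME - l_start) * 10;

    -- only runs after the warm-up count toward the average
    IF i > c_discard THEN
      l_total := l_total + l_elapsed;
    END IF;
  END LOOP;

  DBMS_OUTPUT.PUT_LINE('Average ms: ' || l_total / (c_runs - c_discard));
END;
/
```

Note that because each iteration re-executes the identical SELECT, anything the database caches or adapts between executions (buffer cache, cursor reuse, adaptive optimizer behavior) will show up in the averaged timings, which is consistent with the symptoms reported in this thread.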

Received on Fri Jan 23 2015 - 16:37:50 CET
