Re: Comparing Top 5 Timed Events and Cache Advisory
Date: Thu, 16 Jun 2011 09:29:52 -0700 (PDT)
Message-ID: <20d92b8c-17e3-48d0-bb69-16e1826d2257_at_q14g2000prh.googlegroups.com>
On Jun 15, 6:38 pm, vsevolod afanassiev
<vsevolod.afanass..._at_gmail.com> wrote:
> There seems to be an inconsistency between the data in Top 5 Timed Events
> and the Cache Advisory.
>
> Top 5 Events shows that 90% of all time is spent doing reads:
>
> Top 5 Timed Events                                   Avg %Total
> ~~~~~~~~~~~~~~~~~~                                  wait   Call
> Event                            Waits   Time (s)   (ms)   Time
> -------------------------- ------------ ---------- ------ ------
> db file sequential read      18,010,465    103,171      6   87.3
> CPU time                                     8,135           6.9
> db file scattered read        2,772,236      4,252      2    3.6
> read by other session           535,946        949      2     .8
> db file parallel read            65,775        856     13     .7
>
> However, the cache advisory shows that "Est % dbtime for Rds" is only
> 57.3% for the current cache size:
>
>                                       Est
>                                      Phys      Estimated                Est
>     Size for  Size     Buffers      Read     Phys Reads     Est Phys % dbtime
> P    Est (M) Factr (thousands)     Factr    (thousands)    Read Time  for Rds
> --- -------- ----- ----------- --------- -------------- ------------ --------
> D      5,008    .1         619       2.0     58,806,702  158,278,272    221.9
> D     10,016    .2       1,239       1.7     49,510,240  120,698,597    169.2
> D     15,024    .3       1,858       1.5     44,595,719  100,832,333    141.3
> D     20,032    .4       2,478       1.4     41,273,666   87,403,398    122.5
> D     25,040    .5       3,097       1.3     38,542,248   76,362,022    107.0
> D     30,048    .6       3,717       1.2     35,967,561   65,954,188     92.5
> D     35,056    .7       4,336       1.1     33,628,296   56,498,044     79.2
> D     40,064    .8       4,955       1.1     31,907,259   49,540,991     69.4
> D     45,072    .9       5,575       1.0     30,671,422   44,545,292     62.4
> D     50,080   1.0       6,194       1.0     29,759,277   40,858,074     57.3  <<<<<<<--------
> D     50,176   1.0       6,206       1.0     29,744,366   40,797,800     57.2
> D     55,088   1.1       6,814       1.0     29,095,614   38,175,312     53.5
> D     60,096   1.2       7,433       1.0     28,598,287   36,164,941     50.7
> D     65,104   1.3       8,053       0.9     28,183,870   34,489,710     48.3
> D     70,112   1.4       8,672       0.9     27,821,913   33,026,557     46.3
> D     75,120   1.5       9,291       0.9     27,496,055   31,709,318     44.4
> D     80,128   1.6       9,911       0.9     27,204,516   30,530,810     42.8
> D     85,136   1.7      10,530       0.9     26,927,887   29,412,574     41.2
> D     90,144   1.8      11,150       0.9     26,651,485   28,295,264     39.7
> D     95,152   1.9      11,769       0.9     26,413,195   27,332,006     38.3
> D    100,160   2.0      12,389       0.8     24,202,729   18,396,504     25.8
>
> Shouldn't these two values be the same?
>
> This database is running batch processing. We are trying to justify
> purchasing more memory.
> The question is: how much improvement in batch run time will we get if
> we double the memory size? 25%? 50%?
An inherent assumption of the advisory is that your system is doing about the same work all the time. You, however, know that the batch job doesn't make the computer do the same things as normal operations, so all the cache advisory is going to do is tell you that you need more and more cache. (Yes, that is unfairly negative; the advisory does have some usefulness. According to the docs, issuing alter system set db_cache_advice=on when it is already on will reset the stats, and that may make it match the Top 5. I haven't tried it myself.)
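A sketch of that reset plus a follow-up check, assuming the docs are right about the reset behavior (untested by me; the DEFAULT pool filter corresponds to the D rows in your report):

    -- Re-enabling the advisory while it is already ON is documented to
    -- reset its statistics; run this just before the batch window so
    -- the advisory reflects batch work only.
    ALTER SYSTEM SET db_cache_advice = ON;

    -- After the batch, read the advisory for the default pool directly:
    SELECT size_for_estimate, size_factor,
           estd_physical_read_factor, estd_physical_reads
      FROM v$db_cache_advice
     WHERE name = 'DEFAULT'
     ORDER BY size_for_estimate;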
What you need to do is two-fold: be sure the batch SQL is doing what it needs to do as efficiently as possible, then see what resources the batch is actually using.
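One way to start on both halves, sketched against v$sql (the disk_reads ordering and the 20-row cutoff are arbitrary choices of mine, and you would want to sample while the batch is running):

    -- Which statements are driving the physical reads? These are the
    -- tuning candidates before any memory gets bought.
    SELECT *
      FROM (SELECT sql_id, executions, disk_reads, buffer_gets,
                   ROUND(elapsed_time / 1000000) AS elapsed_secs
              FROM v$sql
             ORDER BY disk_reads DESC)
     WHERE ROWNUM <= 20;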
What the advisory cannot tell you is how your SQL deals with data distribution. You presumably have (from the db file sequential read) a lot of indexed access; if the data you need is not scattered across many blocks, it is more likely the blocks holding it are already in memory. It is possible that more memory will keep more blocks cached over the life of the batch, letting indexed access go faster. It is also possible that this will totally take over your cpu, where before the time taken to grab blocks off disk allowed the cpu to get some actual work done (like those freeway onramp meters, if you've seen them).

It is also possible that switching to multiblock reads could load the cache faster than single-block index probes, excess data and all. Then there's the small-table issue Jonathan alluded to, and Oracle's auto-tuning, which is appropriate to varying degrees. Too many unknown variables, really, which is why you can't just throw memory at a problem, or even predict with queuing theory, without doing some empirical testing. Even if you know the current bottleneck, you can't know what other bottleneck will appear once you remove it.
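On the data-distribution point, the usual quick check is the index clustering factor; a sketch (the table name is a placeholder, not anything from your system):

    -- Clustering factor near the table's block count: rows with adjacent
    -- key values share blocks, so cached blocks get reused by the batch.
    -- Clustering factor near num_rows: every index probe may land on a
    -- different block, and extra cache helps far less.
    SELECT i.index_name, i.clustering_factor, t.blocks, t.num_rows
      FROM user_indexes i
      JOIN user_tables  t ON t.table_name = i.table_name
     WHERE i.table_name = 'YOUR_BATCH_TABLE';  -- placeholder name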
What percent improvement if you double the memory size? Somewhere between -1000% and +200%, says this rabbit in my hat.
jg
-- _at_home.com is bogus. http://www.signonsandiego.com/news/2011/jun/15/media-celebrity-arianna-huffington-self-expression/