Re: different physical access method because of disabling Automated Memory Management?

From: Martin Berger <martin.a.berger_at_gmail.com>
Date: Mon, 5 Jan 2009 11:55:47 +0100
Message-ID: <cd8f74560901050255l6369a92fka70022ac45916604_at_mail.gmail.com>



Hi Mark,

thank you for your reply!

I followed your suggestion and created a pfile from the manual spfile:

  • without any changes => same result
  • after setting all regular parameters to the values of the *__* (double underbar) parameters, I got the desired effect ('*db file sequential read*')!
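
For reference, a minimal sketch of what I did (the pfile path is a placeholder; the parameter values are the ones from my instance):

  SQL> -- dump the current spfile into an editable pfile
  SQL> CREATE PFILE='/tmp/manual_init.ora' FROM SPFILE;

  -- in the pfile, copy each __ value onto its regular parameter, e.g.
  --   __db_cache_size=197132288      becomes   db_cache_size=197132288
  --   __shared_pool_size=130023424   becomes   shared_pool_size=130023424

  SQL> -- then start the instance from that pfile
  SQL> STARTUP PFILE='/tmp/manual_init.ora';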

This brought me a big step forward.
Now the result is correct, but I still did not know which parameter causes the effect, so I started manipulating only one parameter at a time:

  1. *_small_table_threshold*:
    I changed this value from 489 down to 100, 90, 89 and 10 - without any effect! (Note that the table was created with STORAGE (BUFFER_POOL KEEP), so I'm testing the KEEP buffer! How I read and set the hidden parameter is sketched after this list.)
    => Now I'm pretty sure _small_table_threshold has only limited effect on
    the KEEP cache (*see later findings*)
  2. *all other parameters - without effect*:
    streams_pool_size = 0 / unset => no change
    pga_aggregate_target = 188743680 / unset => no change
    large_pool_size = 4194304 / unset => no change
    java_pool_size = 4194304 / unset => no change
    shared_pool_size = 130023424 (__) / 536870912 (manual) => no change
  3. *db_cache_size*:
    197132288 (__) => '*db file sequential read*'
    12582912 (manual) => '*direct path read*'
    => so the size of *db_cache_size* has a direct effect on segments going
    into the KEEP buffer pool!
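
How I read and set the hidden parameter, roughly (a sketch: the x$ query needs a SYSDBA connection, and the test_keep table name is made up for illustration):

  SQL> -- current value of the hidden parameter
  SQL> SELECT a.ksppinm parameter_name, b.ksppstvl parameter_value
         FROM x$ksppi a, x$ksppcv b
        WHERE a.indx = b.indx
          AND a.ksppinm = '_small_table_threshold';

  SQL> -- change it for the current session only
  SQL> ALTER SESSION SET "_small_table_threshold" = 100;

  SQL> -- the test table lives in the KEEP pool
  SQL> CREATE TABLE test_keep STORAGE (BUFFER_POOL KEEP)
       AS SELECT * FROM all_objects;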

Some more tests & findings:



I did some tests and calculations (*without* an explicit *_small_table_threshold* set in the pfile):
db_keep_cache_size = 12582912
db_cache_size = 197132288
=> _small_table_threshold = 489

db_keep_cache_size = 197132288
db_cache_size = 12582912
=> _small_table_threshold = 498

If I sum both caches, I get 209715200 bytes; with a block size of 8k that is 25600 blocks, and 2% of that is 512, which is close enough to 489/498 for me, assuming there is some overhead for memory structures (x$bh etc.).
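
The arithmetic, double-checked from SQL*Plus:

  SQL> SELECT (12582912 + 197132288) / 8192          total_blocks,
              (12582912 + 197132288) / 8192 * 0.02   two_percent
         FROM dual;

  TOTAL_BLOCKS TWO_PERCENT
  ------------ -----------
         25600         512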

Last testcase for now:


db_keep_cache_size = 12582912
db_cache_size = 12582912
=> _small_table_threshold = 59

==> expected: '*direct path read*'.
But when I set alter session set "*_small_table_threshold*"=700; (just a value big enough), I get '*db file sequential read*'. (This is more or less as expected.)
*But the other way round:*
db_cache_size = 197132288
db_keep_cache_size = 197132288
and an explicitly set *_small_table_threshold* = *10* also gives me '*db file sequential read*' !?!?
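
For anyone reproducing this: I tell the two access paths apart by the wait events in an extended SQL trace (a sketch; test_keep is the made-up table name from above):

  SQL> -- extended SQL trace, level 8 includes wait events
  SQL> ALTER SESSION SET events '10046 trace name context forever, level 8';
  SQL> SELECT /*+ FULL(t) */ COUNT(*) FROM test_keep t;
  SQL> ALTER SESSION SET events '10046 trace name context off';

  -- then look for the WAIT lines in the trace file:
  --   WAIT #1: nam='db file sequential read' ...   (buffered read)
  --   WAIT #1: nam='direct path read' ...          (read bypassing the buffer cache)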

*Summary*:


  • with multiple caches, *_small_table_threshold* is calculated from the SUM of them (at least for *default* and *keep*; not tested for *recycle* and the blocksize-specific caches).
  • *_small_table_threshold* does not always influence the physical access as expected.

I'm not sure if all questions are answered, but at least some points are clearer now. I will continue with some more tests; maybe I can draw a more consistent picture in the future.

cheers
 Martin

>
> If I understand it correctly, the __  (double underbar) parameters at the
> very beginning are for "auto" to remember where it was last time it shut
> down, while "manual" will ignore these values (but also apparently not
> change them).
>
>
>
> So to eliminate all differences between "auto" and "manual" that you can, I
> would create a pfile from your existing spfile and use the values from the
> __ values to set all the corresponding "regular" init parameters.
>
>
>
> Then, using the pfile for startup, repeat your manual test. If that has the
> desired effect, then presumably the rest of the differentiating tests can be
> done using pfiles so the earth cannot move out from underneath you with
> varying __ values possible each shutdown. We'll also know that pfile versus
> spfile for startup is not making the difference, so we can focus on
> parameter values for isolating it to which parameter(s) are key to the
> change. By the way, I **predict** this should generate derived parameters
> for the "manual" case that match the "auto" case, presumably including the
> difference in *_small_table_threshold* shown in your tests following up on
> Tanel's suggestion. So avoid patching up the normally derived parameters and
> let them be derived for this test. If it all comes out as it seems would
> make sense, then you **could** winnow it down to which parameter(s) affect
> the plan choice and where the breakpoints are in values. If it turns out to
> be a single parameter you can use a binary halving pattern on the parameter
> value to discover the breakpoint as narrowly as you desire. Sometimes a
> useful number pops out from such a test, such as (made up example, not real
> case) use direct read unless the existing cache has at least 2 times the
> available space free as the size of the object to be scanned. If the result
> is related to multiple different parameters, the multivariate search for
> breakpoints becomes increasingly tedious.
>
>
>
> If using the "starting values" does not have the desired effect with a
> pfile, preserve your existing spfile (because we might want to go back to it
> as a fixed point startup value) as another name and create an spfile from
> the modified pfile and test again starting up from the new spfile. If that
> has the desired effect, then you can proceed as in the first case, only
> using a new spfile for each test.
>
>
>
> If the derived values in either test are still different for 'auto' than
> for 'manual' using the "starting values" from the __ parameters, then I
> think we have ruled out reported changes. That would mean either that the
> effect of some _ (single underbar) parameter that is not reported in traces is
> dominating the result or that simply being auto versus manual affects the
> plan algorithm. That would indeed be most unfortunate and something worth
> asking Oracle if it is possible to change so the behavior matches.
>
>
>
> Good luck. And thanks for the completeness of the information in your
> posting.
>
>
>
> mwf
>

--
http://www.freelists.org/webpage/oracle-l
Received on Mon Jan 05 2009 - 04:55:47 CST
