Re: different physical access method because of disabling Automated Memory Management?
Date: Wed, 7 Jan 2009 08:42:57 +0100
> I'm pretty sure db_cache_size is ignored unless you're "auto". So the
> relevant amount is the db_keep_cache_size.
My current tests (in 'manual' mode) show that both are relevant. I cannot provide consistent data, as those were only ad hoc tests.
I will create a proper test-environment for this particular case, run it and provide my results.
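For reference, this is roughly the kind of setup I have in mind - a minimal sketch only, with placeholder object names and sizes:

```sql
-- Hypothetical test: a table assigned to the KEEP pool, small enough to fit.
ALTER SYSTEM SET db_keep_cache_size = 64M;

CREATE TABLE t_keep_test
  STORAGE (BUFFER_POOL KEEP)
  AS SELECT * FROM all_objects;

-- Force a full table scan, then check which blocks ended up in the cache:
SELECT /*+ FULL(t_keep_test) */ COUNT(*) FROM t_keep_test;

SELECT COUNT(*) AS blocks_cached
FROM   v$bh bh, dba_objects o
WHERE  bh.objd = o.data_object_id
AND    o.object_name = 'T_KEEP_TEST';
```

Repeating the scan and the v$bh query with different table sizes relative to db_keep_cache_size should show where the caching behaviour changes.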
> Theory: If the table blocks are less than x% of free space in buffer cache
> target, then the CBO respects your cache instruction even for a full table
> scan, since that is within the limit it can cache, and you've demonstrated
> that if this theory is true then x is at least 2. It seems that
> _small_table_threshold is ignored (at least for the keep target) if the
> table's fraction of free cache size is less than some percentage.
Haven't thought about this yet, but it will be easy to test as well (best with a table bigger than the cache, I guess?).
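For checking the current setting before and during such a test, the usual query against the undocumented x$ views can be used (requires SYS access; the x$ structures are unsupported, so take this as an assumption for my environment):

```sql
-- Read the current value of the hidden parameter _small_table_threshold:
SELECT i.ksppinm AS name, v.ksppstvl AS value
FROM   x$ksppi  i,
       x$ksppcv v
WHERE  i.indx = v.indx
AND    i.ksppinm = '_small_table_threshold';
```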
> It is an open question whether this is from total size or free size when the
> plan is generated (I don't know whether the Oracle CBO considers the
> potential cost of cleaning out space from actual value at parse time).
Currently I'd say 'total size'.
> I can't recall whether the old, before direct read for routine queries and
> before multiple buffers (keep, recycle, default) limit was 5% or 10% of
> cache size, but there was some number near that amount where the fts
> algorithm would use a limited moving window to cache as a percentage of the
> buffer size, regardless of how full the buffer currently was. I **suspect**
> from your results that limit is somewhere in the CBO algorithm now.
2% of db_block_buffers, according to http://www.ixora.com.au/q+a/cache.htm
> On the flip side, I think _small_table_threshold rules out special
> handling (direct) if it is set large enough, but won't change behavior by
> setting it smaller if it is already smaller than your table (including some
> overhead estimate). So some number about 505 or so and larger should get you
> cached reads for this table if I'm correct. For an underbar parameter that
> makes some sense, so a developer could test performance of cached reads
> using different code segments without having to worry about the CBO reaching
> in and using direct reads instead. That way a large test sample could be
> used so the measurements would be related to the read rates more than to the
> setup costs and it would be easier to see which code was faster.
> Before I ran a test myself I would:
> a) check Steve Adams' site, because that is the sort of thing he
> often does comprehensive tests on (making sure to repeat his tests on your
> release and environment).
> b) try to trick someone like you into testing and reporting results
> that I would then analyze.
> c) be on a paying gig where the legitimate interests of the client
> justified me doing the tests.
> d) be so impatiently curious that my problem solving jones got the
> better of me and I did the tests like a video game junkie until I got a
> satisfying result.
> e) read the manuals to see if they contain an answer (to test).
> Okay, that's partly a joke. I might search the manuals a bit sooner, and
> there are some other personal sites I'd search before I did much work
> myself. I mention Steve because my impression is that this sort of thing is
> the sweetest spot of his research. If I add others, like, say, JL, Cary,
> Wolfgang, TK, then I'll embark on a slippery slope of "where do you stop?"
> (looking for specific results before you break down and do it yourself.)
I did a) and e); b) is not the way I normally treat anybody. Unfortunately c) is not the case, so I have to do the major tests in my spare time. Lastly, I'm definitely d) (which is bad for the gaming-software industry).
As my initial question is solved, I think it's proper to discuss the details offline; of course I will post the results here ;-)