Re: DSS System -- db_cache_size and pga_aggregate_target
Date: Sun, 20 Jul 2008 15:26:05 +0800
I find it difficult to agree that this should be a blanket rule for DSS.
A DSS system also has small tables, and it still does indexed reads and
nested loop joins in places.
As for FTSs not being cached: the small_table_threshold scales with db_cache_size, so some tables can very well switch from "not cached" to "cached" even when they are read by full table scans.
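For anyone who wants to check this on their own instance: the threshold is a hidden parameter, so (assuming SYS access) the usual x$ksppi/x$ksppcv query shows the current value. The often-quoted default of roughly 2% of the buffer cache varies by version, so treat the number you get as instance-specific:

```sql
-- As SYS: show the current _small_table_threshold (in blocks).
-- Hidden parameters are visible via x$ksppi / x$ksppcv, not v$parameter.
SELECT a.ksppinm  AS parameter_name,
       b.ksppstvl AS current_value
FROM   x$ksppi  a,
       x$ksppcv b
WHERE  a.indx = b.indx
AND    a.ksppinm = '_small_table_threshold';
```

If db_cache_size grows, re-running this after the change should show whether the threshold moved with it.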
As far as PGA is concerned, if we have only one or two jobs running at any time and no parallel query, does raising pga_aggregate_target from, say, 2GB to 4GB help much? (I am not talking about changing _pga_max_size or _smm_max_size.)
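A back-of-the-envelope illustration of why it may not: under the commonly cited heuristic (which varies by version), a single serial work area is capped at roughly the lesser of 5% of pga_aggregate_target and _smm_max_size (historically around 100MB). The numbers below assume that heuristic purely for illustration:

```sql
-- Hypothetical arithmetic only: assumes a serial work-area cap of
-- LEAST(5% of pga_aggregate_target, 100MB); actual limits vary by version.
SELECT LEAST(0.05 * 2048, 100) AS cap_mb_at_2gb,   -- 2GB target
       LEAST(0.05 * 4096, 100) AS cap_mb_at_4gb    -- 4GB target
FROM   dual;
```

Under those assumptions both columns come out at 100 (MB), i.e. doubling the target buys nothing for any one serial sort; the extra memory only helps when many work areas are active concurrently.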
At 01:42 PM Sunday, Yechiel Adar wrote:
>I do not know any "white papers" but I think this is common sense.
>In OLTP you make a lot of random access to blocks.
>If you have big cache size, the odds increase that the next time you
>need to access the customer record, the block will be in memory.
>Since you work with small amounts of data, you do not need a lot of
>memory in the PGA.
>In DSS systems, you do a lot of FTSs; the blocks go to the cold end
>of the LRU and are not kept in memory, since the space will be
>needed for the next big FTS. The user gets a lot of data to process,
>aggregate and sort, so it needs a lot of memory in the PGA.
>Just my opinion.
>Hemant K Chitale wrote:
>>I have been informed that advice to increase the db_cache_size
>>makes sense for an OLTP system but not a DSS system.
>>That for a DSS system, the db_cache_size should not be large but
>>the pga_aggregate_target should be large.
>>Are there any "best practices" , "test cases", "white papers",
>>"benchmarks" which indicate such ?
>>Hemant K Chitale
Hemant K Chitale