Re: DSS System -- db_cache_size and pga_aggregate_target

From: Greg Rahn <greg_at_structureddata.org>
Date: Sun, 20 Jul 2008 15:37:08 -0700
Message-ID: <a9c093440807201537g54cb7bc8ldeb886feed0d200c@mail.gmail.com>


If you are not using PQ, then it will be nearly impossible to take advantage of a large (multiple gigabytes) pga_aggregate_target (that is, unless you run some crazy high number of concurrent serial queries, and even then there is a limit). The pga_aggregate_target memory is put to good use for sorts/joins with PQ because the more processes that are involved, the more memory can be leveraged. Even then, past a given amount, the work areas will spill to temp. Generally the db_cache_size should be large enough to keep your frequently used dimension tables (is this a star schema?) and segment information in cache. About the best advice one can give at this point is: make effective use of the memory. You will have to determine how best to allocate it from the workload, the execution plans, etc.
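
Something like the following rough sketch may help with that determination. It queries the standard work-area and advisory views (assuming statistics_level is TYPICAL or ALL so the advisories are populated, and that you can read the V$ views); the arithmetic is only there to report sizes in MB:

-- How often sorts/hash joins ran entirely in memory (optimal) versus
-- spilling to temp (one-pass / multi-pass), grouped by work area size.
SELECT low_optimal_size/1024/1024  AS low_mb,
       high_optimal_size/1024/1024 AS high_mb,
       optimal_executions,
       onepass_executions,
       multipasses_executions
FROM   v$sql_workarea_histogram
WHERE  total_executions > 0
ORDER  BY low_optimal_size;

-- PGA advisory: estimated cache hit percentage at other target sizes.
SELECT pga_target_for_estimate/1024/1024 AS target_mb,
       pga_target_factor,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
FROM   v$pga_target_advice
ORDER  BY pga_target_factor;

-- Buffer cache advisory: estimated physical reads at other cache sizes.
SELECT size_for_estimate AS size_mb,
       size_factor,
       estd_physical_read_factor,
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
ORDER  BY size_for_estimate;

A large number of one-pass or multi-pass executions in the first query suggests the work areas (and hence pga_aggregate_target) are undersized for the workload; a flat estd_physical_read_factor curve in the last one suggests additional buffer cache would buy little.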

On Sun, Jul 20, 2008 at 12:22 AM, Hemant K Chitale <hkchital_at_singnet.com.sg> wrote:
>
> No, the vendor hasn't put in any parallelism explicitly (i.e. no DEGREE at
> the table level and no use of hints). Partitioning is used, and I am given to
> understand that only the last partition (i.e. the one for the current month)
> would be actively used in each month's run.
>
> However, from the AWR report given to me I do see 'db file sequential read'
> waits (although fewer of them than 'db file scattered read' waits).
>
>
> At 02:22 PM Sunday, Greg Rahn wrote:
>>
>> This depends more on the details of what you mean by DSS in terms of
>> workload. Are you using Parallel Query? Does the workload consist of
>> partition scans and hash joins?
>>
>>
>> http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/usingpe.htm#sthref2504
>>
>> "With the exception of parallel update and delete, parallel operations
>> do not generally benefit from larger buffer cache sizes. Other
>> parallel operations can benefit only if you increase the size of the
>> buffer pool and thereby accommodate the inner table or index for a
>> nested loop join."
>>
>> On Sat, Jul 19, 2008 at 9:34 PM, Hemant K Chitale
>> <hkchital_at_singnet.com.sg> wrote:
>> >
>> > I have been informed that advice to increase the db_cache_size makes
>> > sense for an OLTP system but not for a DSS system, and that for a DSS
>> > system the db_cache_size should not be large but the
>> > pga_aggregate_target should be.
>> >
>> > Are there any "best practices", "test cases", "white papers",
>> > "benchmarks" which indicate such?
>>
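
On the 'db file sequential read' versus 'db file scattered read' point above, and on whether PQ is in play at all, a quick cross-check outside AWR might look like the following (a sketch against V$SYSTEM_EVENT and V$PQ_SYSSTAT; the statistic names assume 10g, and AVERAGE_WAIT is reported in centiseconds):

-- Instance-wide totals for the single-block vs. multi-block read waits.
SELECT event,
       total_waits,
       time_waited_micro/1000000 AS seconds_waited,
       average_wait*10           AS avg_wait_ms
FROM   v$system_event
WHERE  event IN ('db file sequential read', 'db file scattered read');

-- Has parallel query/DML actually been used since instance startup?
SELECT statistic, value
FROM   v$pq_sysstat
WHERE  statistic IN ('Queries Initiated', 'DML Initiated', 'Servers Busy');

If 'Queries Initiated' stays at zero, the statements are running serially and, as noted above, a multi-gigabyte pga_aggregate_target will be hard to use.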

-- 
Regards,
Greg Rahn
http://structureddata.org